Eating Disorders
Eating disorders are very common and take many different forms. Below are some of the most common eating disorders, along with signs that may help you recognize whether you or a family member has one.
What is Anorexia Nervosa?
Anorexia Nervosa is a psychological disorder characterized by one’s complete aversion to food. It is caused by an impaired perception of body image, which results in an intense fear of gaining weight.
Anorexia nervosa is physically manifested by a critically low body weight. Because of poor self-image, patients suffering from this disorder intentionally restrict food intake. In addition, they may resort to other potentially harmful methods to continue losing weight like misusing laxatives and diuretics or vomiting after meals.
What is Bulimia Nervosa?
Bulimia nervosa is an eating disorder that causes a person to binge eat large quantities of food in a short space of time, followed by obsessive purging to get rid of the calories. This harmful cycle can lead to a range of physical and mental health issues and can affect anyone at any age.
Left untreated, eating disorders can result in severe medical and psychological conditions, affecting not only the person with the disorder, but their relationships with family and friends. Intense focus, distress or concern about body weight or shape may also characterize an eating disorder. Eating disorders generally appear during the teenage years and young adulthood, though they may also develop during childhood or later in life. Common eating disorders include Anorexia, Bulimia, Binge Eating Disorder, and EDNOS.
What is Anorexia?
Anorexia is a condition commonly defined as self-induced starvation. This definition can be misleading because a person with anorexia is often hungry but will refuse to eat, denying their own hunger and need for food as a result of an intense and distorted fear of becoming fat. Other symptoms include excessive calorie and fat restriction, weight loss, obsessive thoughts about food and food preparation, and extreme worry about body shape and size. Read more about our Anorexia treatment approach here.
Atypical Anorexia
With atypical anorexia nervosa, the individual falls within or above the normal weight or body fat range for their height and age. The atypical anorexic will present with the intense fear of gaining weight or becoming fat, accompanied by a distorted self-image, and may engage in restrictive intake, fasting, or excessive exercise.
What is Avoidant/Restrictive Food Intake Disorder (ARFID)?
ARFID is very similar to anorexia but without the drive for thinness, and it is not accompanied by a distorted body image. Classic signs of ARFID include significant weight loss, nutritional deficiency, and a marked interference with psychosocial functioning. Avoidance of food or eating, a lack of interest in food, and concern about eating food are all signs of ARFID. The condition can also be accompanied by a negative response to food intake, including choking or vomiting.
What is Bulimia?
Bulimia is quite often a secretive cycle of binge eating followed by engaging in behaviors such as purging or using laxatives to prevent weight gain.
A binge consists of eating an amount of food that is definitely larger than most individuals would eat under similar circumstances. People struggling with Bulimia compensate for the binge eating in two ways: purging and non-purging. Purging behaviors include self-induced vomiting and using laxatives and diuretics. Non-purging behaviors involve excessive exercise and alternating periods of strict dieting or fasting. Read more about our Bulimia treatment approach here.
What is a Binge Eating Disorder?
Like those with Anorexia or Bulimia, people struggling with Binge Eating Disorder also suffer from the consequences of a serious eating disorder, though until recently that was not always recognized to be the case. Until May 2013, there was no official diagnosis for Binge Eating Disorder. Read more about our Binge Eating Disorder treatment approach here.
EDNOS (Eating Disorder Not Otherwise Specified)
Many people who have eating disorders do not meet all the diagnostic criteria for either Anorexia or Bulimia. For example, if someone is 14 percent below their ideal body weight and the diagnostic criteria indicate that they have to be 15 percent below to have Anorexia Nervosa, then they do not officially meet the diagnosis. The same holds true for someone who has been binging and purging numerous times a day, but for less than six months, the duration necessary for a diagnosis of Bulimia. However, the eating disorder is just as real and serious as when all diagnostic criteria are met; the official diagnosis will simply be stated as EDNOS. The tendency to minimize the magnitude and intrusiveness of the eating disorder behaviors and to believe that there is "no real problem" is a position that is frequently, and mistakenly, held.
Co-Existing Conditions
Diagnoses treated at Canopy Cove include Anorexia Nervosa, Bulimia Nervosa, Binge Eating Disorder, EDNOS, and co-existing conditions such as diabetes.
Compulsive Overeating
Compulsive overeating is an eating disorder that is not concerned with weight loss or weight gain prevention. A person who suffers from compulsive overeating eats beyond the feeling of fullness, may participate in night eating, eating out of the garbage, and hiding food to eat in excess. Compulsive overeating is often a coping mechanism for underlying emotional stressors and results in severe weight-related medical problems and body image issues that may trigger other forms of eating disorders.
Diabetes and Eating Disorders
For starters, both can be life-threatening, but the risk of death is even greater when they co-exist. There is a strong focus on "control" in the management of diabetes, and it becomes routine to make control a priority in its day-to-day care. Controlling blood glucose, controlling food intake, controlling urges, controlling the timing of testing and injections, and adding exercise to the daily routine are recommended for most who have this disease. Likewise, control is also a primary issue for those who struggle with Anorexia, Bulimia, and Binge Eating Disorder.
For many, the belief that food and weight are the one thing they can control in a chaotic world seems to bring a sense of stability. It therefore becomes an easy transition for many women and men who have developed diabetes to fall into the trap of an eating disorder.
What is Dia-bulimia?
Dia-bulimia is an eating disorder that is exclusive to those who suffer from type 1 diabetes. It occurs when people who are insulin-dependent omit or delay the administration of insulin, or intentionally under-dose themselves, to maintain a hyperglycemic state after eating or binging in an attempt to induce weight loss or prevent weight gain. The misuse of insulin is a form of purging: maintaining a hyperglycemic state essentially forces the body to spill glucose and ketones into the urine rather than storing those calories as fat.
Celiac Disease and Eating Disorders
Although an eating disorder is about more than just food, we cannot neglect the damage it does to the body, specifically the malabsorption of nutrients and damage to many organ systems. It is for this reason that an eating disorder can often be masked by a medical diagnosis addressing a physical abnormality that presents with similar signs and symptoms. A person suffering from untreated Celiac Disease often experiences rapid weight loss, nausea, vomiting, bloating, and abdominal discomfort. These same symptoms could also be observed in an individual struggling with an eating disorder. Although very little research has been conducted on the correlation between Celiac Disease and eating disorders, we believe that there are elements of both disorders that could provide a potential link between the two diagnoses, and recovery therefore requires an individualized therapeutic and nutritional approach. The treatment team is equipped to provide the nutritional support necessary for the resolution of Celiac symptoms, thereby allowing the client to focus on the underlying issues instead of food.
What is Orthorexia?
Orthorexia is not yet recognized as a formal diagnosis in the Diagnostic and Statistical Manual (DSM), but it is still considered a serious eating disorder. A person who suffers from orthorexia becomes so fixated on eating only what they consider healthy that the fixation reaches a compulsive or excessive state and becomes damaging. Dieting, clean eating, and other heavily modified nutrition regimens are not considered a disorder unless the fixation becomes damaging to the individual's well-being.
Signs of orthorexia include:
• An inability to eat anything but a narrowed group of “healthy” or “pure” foods.
• Compulsive checking of ingredient lists and inability to eat foods that cannot be checked.
• Cutting out an increasing number of food groups (sugar, carbohydrates, gluten, dairy, all meat, etc.).
• High levels of distress when these foods are not available and avoidance of events to prevent the distress.
What is a Purging Disorder?
Purging disorder is similar to bulimia only in that it involves purging, or attempting to rid the body of food, either by intentional vomiting or by the use of laxatives. A person who suffers from purging disorder does not binge eat and may purge after eating any amount of food, or even without having eaten, in an attempt to lose weight or change the shape of their body.
Vegetarianism and Eating Disorders
In today’s society, choosing to be vegetarian is often viewed as a healthy lifestyle change complete with many nutritional benefits. However, vegetarianism can also be used as a socially acceptable way to mask food fears. Because the nature of this nutritional approach is the avoidance of certain foods, individuals with eating disorders are often drawn to this lifestyle as a means to continue more restrictive eating behaviors without attracting suspicion from others.
What is Binge Eating Disorder?
Binge eating is also commonly known as ‘compulsive eating disorder.’ Unlike anorexia and bulimia, this eating disorder triggers a person to vastly overeat in a short space of time. This behavior can lead to unhealthy body image and obesity, which can have lasting effects on mental and physical health.
What is an Avoidant/Restrictive Food Intake Disorder?
This is a condition in which a person does not consume enough food to meet their body's nutritional and energy requirements.
What is Orthorexia Nervosa?
Orthorexia nervosa is an eating disorder that involves an unhealthy obsession with healthy eating. Unlike other eating disorders, orthorexia mostly revolves around food quality, not quantity. Unlike people with anorexia or bulimia, people with orthorexia are rarely focused on losing weight.
You can turn your life around today.
|
The font-style CSS property sets whether a font should be styled with a normal, italic, or oblique face from its font-family.
Italic font faces are generally cursive in nature, usually using less horizontal space than their unstyled counterparts, while oblique faces are usually just sloped versions of the regular face. When the specified style is not available, both italic and oblique faces are simulated by artificially sloping the glyphs of the regular face (use font-synthesis to control this behavior).
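For instance, here is a minimal sketch of how font-synthesis can be used to switch that simulation off; the class name is just an illustration, not something defined on this page.

/* Hypothetical class: stop the browser from fabricating slanted (or bold) faces
   when the font family has no real italic or oblique face. */
.no-synthetic-faces {
  font-synthesis: none;
}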
font-style: normal;
font-style: italic;
font-style: oblique;
font-style: oblique 10deg;
/* Global values */
font-style: inherit;
font-style: initial;
font-style: revert;
font-style: unset;
The font-style property is specified as a single keyword chosen from the list of values below, which can optionally include an angle if the keyword is oblique.
normal
Selects a font that is classified as normal within a font-family.
italic
Selects a font that is classified as italic. If no italic version of the face is available, one classified as oblique is used instead. If neither is available, the style is artificially simulated.
oblique
Selects a font that is classified as oblique. If no oblique version of the face is available, one classified as italic is used instead. If neither is available, the style is artificially simulated.
oblique <angle>
Selects a font classified as oblique, and additionally specifies an angle for the slant of the text. If one or more oblique faces are available in the chosen font family, the one that most closely matches the specified angle is chosen. If no oblique faces are available, the browser will synthesize an oblique version of the font by slanting a normal face by the specified amount. Valid values are degree values of -90deg to 90deg inclusive. If an angle is not specified, an angle of 14 degrees is used. Positive values are slanted to the end of the line, while negative values are slanted towards the beginning.
In general, for a requested angle of 14 degrees or greater, larger angles are preferred; otherwise, smaller angles are preferred (see the spec's font matching section for the precise algorithm).
Variable fonts
Variable fonts can offer fine control over the degree to which an oblique face is slanted. You can select this using the <angle> modifier for the oblique keyword.
For TrueType or OpenType variable fonts, the "slnt" variation is used to implement varying slant angles for oblique, and the "ital" variation with a value of 1 is used to implement italic values. See font-variation-settings.
Note: For the example below to work, you'll need a browser that supports the CSS Fonts Level 4 syntax in which font-style: oblique can accept an <angle>. The demo loads with font-style: oblique 23deg;. Change the value to see the slant of the text change.
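A minimal sketch of such a demo is given below. The font file URL and family name are placeholders assumed for illustration; they are not assets referenced by this page.

/* Hypothetical variable font that exposes a slant ("slnt") axis */
@font-face {
  font-family: "MySlantedVariable";
  src: url("my-variable-font.woff2") format("woff2");
  font-style: oblique -90deg 90deg; /* declare the slant range the font supports */
}

p.sample {
  font-family: "MySlantedVariable", sans-serif;
  font-style: oblique 23deg; /* change this angle to see the slant of the text change */
}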
Accessibility concerns
Large sections of text set with a font-style value of italic may be difficult for people with cognitive concerns such as Dyslexia to read.
Formal definition
Initial value: normal
Computed value: as specified
Animation type: discrete
Formal syntax
normal | italic | oblique <angle>?
Font styles
.normal {
  font-style: normal;
}

.italic {
  font-style: italic;
}

.oblique {
  font-style: oblique;
}
Specification: CSS Fonts Module Level 4 (CSS Fonts 4), #font-style-prop
|
Stress Management
We all experience stress and sometimes it seems like there’s nothing we can do about it. Stress is unavoidable, and to some extent, it is an essential part of life. Although everyone experiences stress, the cause can be different from one person to another. For example, being busy may overwhelm a person, while another might actually love having many things to do. Being stuck in a crazy traffic jam can make a person angry, while another might think it’s a mild inconvenience. Although stress is inescapable, there are things you can do to manage it once you realize that you actually have a lot more control of your life.
You need to manage your stress because it can have a serious impact on your overall wellbeing. When you let stress get the best of you, you put yourself at risk of developing a variety of diseases. When you’re stressed, your brain experiences chemical and physical changes, affecting its overall function. From headaches to premature death, stress is closely related to numerous health problems. The effects of stress are also strongly linked to mental health conditions, as it can result in depression, anxiety, post-traumatic stress disorder (PTSD), psychosis, and
|
Cue, as defined in Merriam-Webster:
Definition of cue
a : a signal (such as a word, phrase, or bit of stage business) to a performer to begin a specific speech or action That last line is your cue to exit the stage.
b : something serving a comparable purpose : [HINT] I'll take that yawn as my cue to leave.
I feel CUE has a similar meaning to HINT in this case. Can we say:
• That last line is your hint to exit the stage.
• I'll take that yawn as my hint to leave.
I am not sure whether these two words overlap in meaning, or whether they are sometimes interchangeable in certain circumstances.
• No, "hint" is just something aiding you in understanding, thus is similar to give a clue. Jan 15 '18 at 4:17
They have some overlap in meaning but they're not the same.
A hint is a thing used to help discover unknown or forgotten information. A cue is more about coordinating timing. The common idiom "I'm waiting for my cue" means that I know what to do but am waiting for the signal to do it.
Don't confuse "cue" with "clue".
In your examples:
The actor knows that they need to leave the stage. The cue lets them know the right time. It would only be a hint if the actor didn't know what to do.
The yawn is more of a hint because it carries the message that the person is sleepy and it is a good idea to leave. It is a cue in the sense that the person is intending to leave and the yawn signals that now is the right time.
• This is a very good explanation! I learnt it. Thanks!
– dan
Jan 16 '18 at 7:43
|
Open Journal of Ecology
Vol. 4, No. 9 (2014), Article ID: 47216, 47 pages. DOI: 10.4236/oje.2014.49047
Game Ranching: A Sustainable Land Use Option and Economic Incentive for Biodiversity Conservation in Zambia
Chansa Chomba1*, Chimbola Obias2, Vincent Nyirenda3
1School of Agriculture and Natural Resources, Disaster Management Training Centre, Mulungushi University, Kabwe, Zambia
2Department of Mathematics and Statistics, Mulungushi University, Kabwe, Zambia
3School of Natural Resources, Copperbelt University, Kitwe, Zambia
Copyright © 2014 by authors and Scientific Research Publishing Inc.
Received 23 April 2014; revised 23 May 2014; accepted 2 June 2014
The ten provinces of Zambia were surveyed to determine the number and size of game ranches established up to the end of 2012/early 2013. Three classes of game ranches were defined: 1) ≥500 hectares as game ranch proper, 2) ≥50 - <500 hectares as game farm, and 3) <50 hectares as ornamental. A total of 200 game ranches keeping large mammals from the size of common duiker to eland were recorded, with a growth rate of 6 per year for the period 1980-2012. The largest class was ornamental with 98 (49%); large game ranches numbered 75 (38%) and game farms were the fewest at 27 (14%). Thirty seven species of large mammals were recorded, of which 15 were the most abundant, with impala topping the list at 21,000 individuals (34%). Three of the ten provinces, Luapula, Western and Northern, had no game ranches at all, despite being largely rural and, except for Luapula, having low population densities. The province with the largest number was Lusaka with 71 (36%), followed by Southern 59 (30%), Central 31 (16%), Copperbelt 19 (10%), Eastern and Northwestern 9 (4.5% each), and Muchinga with the fewest at 2 (1%). The rapid increase in the ornamental category is mainly attributed to the rise in the development of tourist accommodation facilities and high cost residential properties. This growth provides an opportunity to convert abandoned farmlands, which are no longer useful for agriculture due to loss of fertility and other forms of land degradation, into game ranching schemes. Similarly, parcels of land with natural ecological limitations should also be considered for such schemes. Rehabilitation of degraded land through ranching could also enhance carbon sequestration, a factor critical in minimizing emissions of carbon dioxide and other greenhouse gases.
Keywords: Game Ranch, Province, Number, Species, Increase, Carbon Emissions
1. Introduction
Game ranching in Zambia (Figure 1) has emerged as a popular use of wildlife by the private sector. This is discerned from the rapid increase in the number of private wildlife estates from one in early 1980 to 200 by end of 2012, representing a mean establishment rate of six each year. The first private wildlife estate was established in Lusaka province, but growth of the sector has now covered seven of the ten provinces. At the time the first game ranch was established, there was no policy or legislative framework to guide and facilitate its growth [1] [2] . The only provision made available was the Statutory Instrument on Licences and Fees, which provided a discount of 50% for all live wild animals sold to individuals stocking game ranching schemes. This provision was initially intended to encourage indigenous Zambians to establish and manage game ranches as an alternative land use option to conventional agriculture and livestock keeping. Over the years, government realized the need to regulate the sector through Policy and Legislative frameworks. The policy for National Parks and Wildlife in Zambia of 1998 and the Zambia Wildlife Act of 1998 provided the required legislative frameworks to support the growth of the game ranching sector.
After the establishment of the first game ranch in Lusaka Province, the Southern Province of Zambia adopted the practice and recorded a faster rate of increase in the establishment and growth of private wildlife estates. This is assumed to be attributable to the cattle keeping tradition of the local tribes and to the large scale cattle ranching schemes of white farmers of mainly British and South African descent, which also account for the animal husbandry skills inherent in the people of the province. These factors, coupled with occasional outbreaks of cattle diseases such as Foot and Mouth Disease (FMD) and Contagious Bovine Pleuro-Pneumonia (CBPP), are assumed to have inspired many livestock keepers in the province to switch to game ranching as a complement to livestock keeping. However, most of the game ranch owners are mixed race and white settlers who had acquired large tracts of land for cattle keeping and crop cultivation before and shortly after independence [2]. The second phase of rapid growth in the number of private wildlife estates was recorded with the increased inflow of white migrants from South Africa after 1994 and from Zimbabwe during that country's land ownership disputes. These latter groups brought with them new skills in game ranching, and their experience in the sector encouraged some local farmers to switch to game ranching.
In the east and southern African sub-regions, particularly in the Southern African Development Community (SADC), game ranching has been increasing more rapidly than in any other part of Africa and has emerged as a desirable alternative to traditional ways of using land. In South Africa, for instance, there were more than 10,000 game ranches in 2012 [2] [3], up from 5000 in 2003 as recorded by ABSA [4], occupying over 20.5 million hectares of land, or above 17% of the country's available land, under private conservation, which was more than double the 7.5 million hectares of the national and provincial reserves combined [2] [3] [5]. Game ranches generated over US$400 million annually, mainly through live auctions and trophy hunting, and there were more game animals of some species in South Africa than in the previous century, save for the rhino, which has recently been persecuted due to the sudden emergence of a new market for white and black rhino (Ceratotherium simum and Diceros bicornis) horn in Asia. For instance, there were about 30,000 African buffalo (Syncerus caffer), of which 90% were disease free, on 1918 game ranches [3] [5].
Experience of the profitability of game ranching compared with livestock, particularly in marginal areas, obtained from countries such as Zimbabwe, Namibia and South Africa inspired Zambia to enhance its policies and legislation on ranching in order to support growth of the sector. Although the sector is described as relatively new, it has now gained momentum such that it includes leopard tortoise (Stigmochelys pardalis), Bell's hinged tortoise (Kinixys belliana) and pancake tortoise (Malacochersus tornieri), as well as the keeping of birds, snakes, large mammals and Nile crocodile (Crocodylus niloticus) on a commercial scale. Tortoise farming in particular increased after the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) imposed an open-ended moratorium on live exports of tortoises from Zambia, as it was assumed that the live pancake tortoises being exported from Zambia were smuggled specimens from Tanzania. Government was left with no option but to carry out field research to establish the existence of the pancake tortoise in Zambia [6], to dispel the anecdotal reports of smuggling between Zambia and Tanzania. After the species was discovered in north-eastern Zambia [6], government encouraged captive breeding of the three species of tortoises with permission from CITES. Trade eventually resumed and a number of people ventured into this new enterprise. Since tortoises can be bred in limited space in back yards and require minimal capital investment, many small scale farmers took up the challenge [7]-[9].
Figure 1. (a) Location of Zambia and its nine provinces, in relation to other Southern Africa countries before the subdivision in 2011; (b) After the subdivision in 2011 which created the tenth province, Muchinga.
This paper focused on game ranches where large mammals are kept for trophy hunting and photographic tourism, as well as for enhancing aesthetic beauty for purely ornamental purposes, such as on hotel premises and high cost residential properties. It provides an analysis of the characteristics of private wildlife estates (PWEs) and their patterns of distribution throughout the 10 provinces of Zambia, which potential farmers can use in deciding on a suitable location, size and species of interest, and in overcoming ecological and socioeconomic obstacles to the establishment of PWEs in provinces such as Northwestern, Luapula and Northern, which seem to be lagging behind. It could also be a useful tool in lobbying traditional authorities (chiefs) to release customary land, especially for game ranching, which requires more land than monoculture. It is hoped that the conclusions from this study will also stimulate local academic institutions such as the University of Zambia, Copperbelt University and Mulungushi University to offer courses that would provide the needed technical expertise in the sector. The current land policy imposes a severe limitation by restricting the extent of land that can be alienated for agriculture to 250 hectares, which is not adequate for game ranching; the provision of technical information on land size requirements would help remove this barrier, which is caused by a paucity of data.
2. Methods and Materials
Data on the number of private wildlife estates covering all the provinces of Zambia (Figure 1) were obtained by retrieving information stored in the directorate of research and the licensing office of the Zambia Wildlife Authority (ZAWA). Duplicate copies of the Certificate of Ownership (CO) and the Permit to Keep Wild Animals in Captivity, which are stored at ZAWA, were accessed. Both the CO and the Permit are issued to all private wildlife estates and are renewed annually, which guaranteed up-to-date data on the performance of the sector. The forms also contain all the species and numbers kept by each private property, the area of the property in hectares, the name of the property and the owner's name.
Forms described above, provide details on species name, sex, and numbers, category of private wildlife estate (e.g. game ranch or crocodile farm), size in hectares, location of the property, and year established. Such data were entered on data sheets.
Basic statistical analyses using Microsoft Excel 2007 and Minitab Software Version 14 were applied to process and present the results. Game ranches were classified by size as follows: ≥500 hectares as game ranch proper, ≥50 but <500 hectares as game farm, and <50 hectares as ornamental.
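As a rough illustration of the classification step described above (this sketch is not part of the study; the records and values shown are invented placeholders), the size rule can be expressed as follows:

# Illustrative sketch only: applies the size classification described in the text
# to a hypothetical list of (province, hectares) records.
from collections import Counter

def classify(hectares: float) -> str:
    """Classify a private wildlife estate by size, using the thresholds in the text."""
    if hectares >= 500:
        return "game ranch proper"
    elif hectares >= 50:
        return "game farm"
    else:
        return "ornamental"

records = [
    ("Lusaka", 12.0),      # small ornamental property around a lodge
    ("Southern", 820.0),   # large ranch
    ("Central", 140.0),    # intermediate game farm
]

counts = Counter((province, classify(ha)) for province, ha in records)
for (province, category), n in sorted(counts.items()):
    print(f"{province}: {n} {category}(s)")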
3. Results
3.1. Total Number of Game Ranches Based on Size in Hectares
By the end of 2012, there were 200 game ranches of different sizes in Zambia, representing a moderate annual growth rate of 6 (3%) per year over the last 32 years. The national pattern was such that ornamental properties were the most abundant at 98 (49%); these were less than 50 hectares in extent and mainly located around residential properties, hotels and other tourist facilities. Game ranches of ≥500 hectares ranked second with 75 (38%), and the least numerous were game farms of intermediate size (50 - 499 hectares), which recorded 27 (14% of the national total) (Figure 2).
3.2. Number of Game Ranches in Each Province
3.2.1. Comparison of Size of Province, Human Population Density and Number of Private Wildlife Estates
The number and size of game ranches were not determined by the size of the province or by human population density (P < 0.05). Lusaka Province, which is the smallest (21,896 km2) and which also had the highest human density (100.4/km2), had the largest number of game ranches at the national level: 71 (36% of the national total). Of this total, the largest number were ornamental properties at 46 (65%), followed by game ranches at 15 (21%), with game farms the fewest at 10 (14%). The second largest number were in Southern Province, with a combined total of 59 (30% of the national total), of which ornamental properties were 29 (49%), game ranch proper 21 (36%) and game farms the fewest at 9 (15%).
Figure 2. Total number of Game Ranches based on size by December 2012, Zambia.
Central Province ranked third with a total of 31 (16% of the national total). Of this total, 22 (71%) were game ranch proper, 6 (19%) were ornamental and the fewest were game farms at 3 (10%). The province also had the largest number of game ranch proper (≥500 hectares) at 22 (29% of the national total) (Figure 3(a), Figure 3(b)). Copperbelt Province had the fourth largest number of game ranches in the country at 19 (10% of the national total). Of this total, 9 (47%) were ornamental properties, while the other two categories had an equal number of 5 each (26.5%).
Eastern and Northwestern Provinces had the fewest at 9 each (4.5% of the national total each) and shared the same size pattern: 5 (55%) were game ranches of ≥500 hectares and 4 (45%) were ornamental. Muchinga Province had a total of 2 (1% of the national total), both larger than 500 hectares.
Luapula, Western and Northern Provinces, despite being largely rural and not densely populated did not have any game ranch (Table 1, Figure 3(a), Figure 3(b)).
3.2.2. Species of Large Mammals Popular on Game Ranches
A total of 37 species of large mammals (≥5 kg) were recorded, with a total of 61,934 individuals. Of the 37, two were exotic species: spotted deer or Axis deer (Axis axis), donated by the Government of India to the Government of Zambia in the early 1980s, and lowland nyala (Tragelaphus angasii), imported from the Republic of South Africa; the rest (35) were indigenous species.
Of these 37 species, 15 were the most abundant, with numbers exceeding 1000 individuals per species. In order of abundance these were: impala (Aepyceros melampus), warthog (Phacochoerus africanus), bushbuck (Tragelaphus scriptus), common duiker (Sylvicapra grimmia), puku (Kobus vardoni), greater kudu (Tragelaphus strepsiceros), bush pig (Potamochoerus larvatus), sable antelope (Hippotragus niger), zebra (Equus quagga), reedbuck (Redunca arundinum), Kafue lechwe (Kobus leche kafuensis), eland (Taurotragus oryx), Lichtenstein's hartebeest (Alcelaphus lichtensteini), defassa waterbuck (Kobus defassa), and common waterbuck (Kobus ellipsiprymnus) (Figure 4(a), Figure 4(b)).
4. Discussion
4.1. Size of Game Ranches
4.1.1. Ornamental Properties
Most ornamental properties were pre-existing fenced hotel and lodge premises, as well as high cost residential properties. Therefore, no effort was required to secure land, fence it or provide water; these requirements were already in place, and since the properties were already held on title, all that was required was simply to introduce game to add aesthetic value. In certain instances only minor modifications were needed, such as improving the shape and configuration of drinking troughs and increasing the height of the enclosure where animals such as impala or spotted deer, which can jump over a 2.5 metre fence, were concerned. Species diversity and animal numbers per property were also low, which lowered capital inputs and avoided the lengthy and strenuous process required to establish a game ranch on virgin land. Since the properties were relatively small by ranching standards, only a few animals, mainly of small to medium size, were required; these cost less and could be transported in a single truckload, which made the process of establishment easier and more affordable. The overall cost of commissioning such a property was therefore generally low. Additionally, increased competition among tourist accommodation facilities, mainly around Lusaka and Livingstone, to enhance their aesthetic appeal and attract more visitors has compelled many property owners to seek a competitive edge by introducing game around the property. The latter reason appears to have applied to high cost residential owners as well, who would like to outcompete their neighbours, which eventually created a domino effect, particularly in areas such as the new Kasama high cost residential area of Lusaka. As neighbours learned from one another, it became relatively easy for the idea to spread quickly from one property owner to the other. This explains the almost exponential growth rates in the ornamental category (see Figure 2) experienced in the last few years.
Figure 3. (a) Distribution of Game Ranches in each Province by end of year 2012; (b) Comparison of size of province, human population density and number of game ranches, Zambia.
Table 1. Provinces, area in square kilometers, human population size and density compared with total number of established game ranches in each province, 2013, Zambia.
Notes: Data used in this table are for the period up to December 31, 2013; **Muchinga Province was separated from Northern Province in 2011 and since then there has been no census to determine its population size; in this report it has been treated as part of Northern Province.
The reason for the relatively slow growth in game farms cannot be readily explained, since the current Lands Act permits land acquisition of up to 250 hectares for agricultural purposes, implying that it should have been easy to convert unproductive agricultural land to game farming; yet this was not the case. Perhaps the only explanation is a lack of knowledge among the majority of middle class Zambian citizens, most of whom engage in maize production and other crops. Apart from farming, the popular private enterprises for most Zambians seem to be those associated with law firms, construction, accommodation and transport.
4.2. Distribution of Game Ranches in the Ten Provinces
Lusaka, Southern and Central Provinces were the leading provinces in the number and size of game ranching schemes, maintaining a skewed distribution across the ten provinces over the last 32 years. Most of the game ranches were still concentrated along the old line of rail. The skewed distribution of game ranches in favour of Lusaka and Southern Provinces in particular could be attributed to the increase in the number of hotels and lodges, as well as residential properties, where certain species of game are bred as a way of enhancing the aesthetic beauty of their surroundings.
The rapid increase in the number of ornamental properties in the last few years is also attributed to the liberalization of the economy in 1991, after which the country experienced an increase in the establishment of tourist accommodation facilities and the emergence of a middle class. These tourist accommodation facilities, as mentioned earlier, usually keep game for ornamental purposes as one way of enhancing the attraction profile of their premises. Additionally, the growing number of middle income earners and the rich increased the number of spacious high cost premises. In certain areas such as New Kasama in Lusaka, most property owners stocked game around their premises, which were then classified as ornamental.
On private properties and tourist accommodation facilities, only game considered non-dangerous to people was introduced, and this explains why impala was the most abundant species. Large game is often considered dangerous and is found only on medium to large game ranches, not on residential properties or hotels. During the breeding season, for instance, males of most species of wild animals in rut fight over females, and some species also become aggressive towards both humans and property. For this reason, large game is not usually kept on residential properties.
Figure 4. (a) Distribution of Game Ranches in each Province by end of year 2012, (b) Comparison of size of province, human population density and number of game ranches, Zambia.
4.3. Game Ranching Schemes in Rural and High Rainfall Provinces
The absence of game ranching schemes in Northwestern, Luapula and Northern Provinces, and the small number of ranches in other rural provinces, could be attributed to a number of factors, including long distances from major cities and key tourist attractions such as the Mosi oa Tunya Falls, a poor road network and a lack of air transport facilities. The lack of such facilities ultimately increases transportation costs and is a disincentive to the establishment of game ranching schemes. For example, it is difficult for foreign clients to reach these remote areas. Even if it were possible, the trip may not be rewarding, as there are no other developed tourist attractions in the vicinity that would enrich the visitor's trip and justify the long and strenuous journey. In terms of trophy hunting, game ranches usually offer only soft-skin game hunting, implying that a client needs to go to another area for cats and big game; as such, most clients prefer to operate in an area where they can complete a full hunting package within a short distance, which saves time and money. It is hoped that through the current government programme "Link Zambia 8000" many remote areas will be opened up to game ranching.
The other reason is that most provinces in agro ecological zones III and II have a common ecological limitation of dystrophic soils and hence poor pasture with coarse grasses such as Hyparrhenia spp dominating the range. Such grass species are of low nutritional value as they are dominated by sclerenchyma tissue with thick secondary walls containing lignin and hence unpalatable. In these areas which also have highly leached soils, stocking rates would not be high, except for bulky grazers. Eutrophic soils are normally found in drier ecological zones II and I and this may explain why most of the large game ranches are located in Southern Province. Similar observations were made in South Africa [10] [11] where the sweet veld carried a higher diversity and densities than sour velds.
4.4. Popular Species for Game Ranching
Impala was the most abundant of the top 15 species on game ranches. It is also the most abundant species in the wild, outnumbering other bovids in Eastern and Southern Africa; it is gregarious and easy to capture, reproduces fairly quickly with a gestation period of six months, has a smooth coat and good colour and is hence attractive to look at, and tames relatively easily, save for the males, which may be vociferous during the rut when testosterone levels surge and they puff and grunt. At this time they may come into unintentional collision with humans, as was reported in Kafue National Park, Zambia, where they were responsible for most of the vehicle collisions [12]. Otherwise, impala is a successful species, and this explains its abundance on ranches and in the wild. The absence of predators such as leopard (Panthera pardus) and wild dog (Lycaon pictus) on many game ranches implies that mortality is virtually absent, so the population may increase exponentially even with minimum management effort.
Buffalo (Syncerus caffer) would have been expected to be one of the top 15 species, as it is very popular in both hunting and photographic tourism. Every rancher, save for owners of ornamental properties, would like to have buffalo as a premium species on the property. The main challenges, however, are the stringent veterinary legislative and policy frameworks which prohibit the capture and translocation of buffalo unless the specimens are Foot and Mouth Disease (FMD) free. FMD is a disease of concern because it is a highly contagious viral disease affecting practically all cloven-footed domesticated mammals, including cattle, sheep, goats, and pigs. It spreads rapidly and negatively impacts animal productivity, and as such it is considered to be the most economically devastating livestock disease in the world. To prevent the spread of FMD, the Department of Veterinary Services bans the movement or import of animals and animal products from known or suspected infected areas. Since buffalo is known to be a carrier of the virus, its movement is prohibited unless tests are carried out to prove that the specimen is FMD free. It is the huge cost involved in raising FMD-free calves that makes buffalo prohibitive and prevents most farmers from purchasing them. This could be the reason why buffalo was not one of the top 15 species.
Roan antelope numbers were low, perhaps because it is not easy to obtain breeding stock. Numbers are low everywhere, including in National Parks and Game Management Areas, so obtaining a large founder population may not have been practicable. Breeding of this species also appears to be slower than that of its relative the sable antelope, for reasons that are not very clear at the moment, though some associate this with habitat quality. For a few years to come, roan antelope numbers on each property may continue to be low. Warthog and other members of the family Suidae have large litters and their populations increase quickly; on many ranches the species is said to be stable to increasing, and it is expected to increase on every property where it exists.
Cats are virtually absent on all properties, due to legislative restrictions and because many farmers feel that their presence on the ranch increases calf mortality of antelope species. On the few properties on which they exist, they are confined to enclosures and are fed artificially which increases costs. Captive breeding and cat hunting are not allowed in Zambia, so the incentive to encourage ranchers to breed them is logically curtailed.
The absence of rhino can be explained by the current legislative and policy restrictions and high security costs. Under the current policy, rhino can only be kept on a private property under custodianship arrangements. Many property owners also feel that it would be risky to keep rhinos on private property, particularly from 2008 onwards, when poaching increased to unprecedented levels to the extent that even South Africa, which has the highest investment in security on the African continent, started to lose more than 600 animals each year (2008-2013). The poaching scourge increased due to a sudden upsurge in the price of rhino horn on the black market, particularly in Vietnam, where it was estimated to cost no less than US$ 65,000 per kg (March 2014) or in extreme circumstances up to USD 1400 as was once recorded in Vietnam in 2013. This has made rhino security a risk and probably discouraged new property owners in Zambia from keeping rhino even under custodianship arrangements, as the cost of doing so may far exceed the benefits.
The game ranching sector in Zambia has the potential to grow, as human population density is still low (17/km2). To achieve good growth rates in each province, technical information and services should be provided. This would enable new entrants to manage the range in a professional manner so as to increase stocking rates. The following suggestions are made to support growth of the sector:
1) The growing of lucerne/alfalfa (Medicago sativa), a nutritious fodder crop rich in proteins, minerals and vitamins, should be encouraged. Lucerne can withstand extremes of drought, which makes it remarkably adaptable to various climatic conditions; its widespread use in Zambia could revolutionize game ranching as well as livestock raising and should be considered a priority programme to be implemented side by side with game ranching. To speed up this process, local academic institutions such as the University of Zambia, Copperbelt University and Mulungushi University should be willing and ready to offer technical training and outreach programmes to communities.
2) In areas along the Tanzania-Zambia Railways (TAZARA) corridor, it would be advisable to encourage ranching, as the area is already accessible by train. Additionally, ranching schemes in the high rainfall ecological zones should consider managing such properties as integrated production systems, which would also include bee keeping and, in some instances, aquaculture on the same property to increase profit margins and compensate for lower stocking rates.
3) In Western Province, where the land tenure system is deeply rooted in traditional systems, it would be advisable for government to engage the traditional authorities there to release land for ranching. In Luapula and Northern Provinces, the major factor could be inadequate air and road support infrastructure, coupled with the absence of a cattle keeping tradition. In Luapula Province, for instance, fishing in natural water bodies, which are in abundance, is a major occupation, and ranching would be considered alien. In these areas the establishment of game ranching schemes should be preceded by massive awareness campaigns.
We wish to thank the staff of the licensing and research departments of the Zambia Wildlife Authority, in particular Mr. Daniel Mwizabi, a research assistant at ZAWA, for collecting data from the field; Mr. Ngubu, a student on attachment from the University of Zambia, for entering data into Excel spreadsheets; Mr. Ignatius Mulembi of the Licensing Office for providing documents containing duplicate copies of the Certificates of Ownership; and Mr. Musonda for his skillful and careful driving during routine visits to game ranches. Many anonymous readers contributed to and critiqued the initial draft, summarizing it from several pages to the present succinct account. We thank them all for their effort.
1. Chansa, W., Kampamba, G., Siamudaala, V. and Changwe, K. (2005) Management Guidelines for Private Wildlife Estates in Zambia. Zambia Wildlife Authority, Chilanga, 1-9.
2. Chansa, W., Kampamba, G. and Changwe, K. (2005) General Guidelines for Conducting Ecological Assessments for the Establishment and Management of Game Ranching Operations in Zambia. Zambia Wildlife Authority, Chilanga, 1-51.
3. Anon (2012) Game Time, the Changing Face of South African Farming. Financial Mail, 27-54.
4. Absa Group of Economic Research (2003) Game Ranch Profitability in Southern Africa. Monty Print cc, Rivonia, 1-83.
5. Mwenya, A.N. (2009) Game Ranching. Zambian Economist, 49-50.
6. Chansa, W. and Wagner, P. (2006) On the Status of Malachochersus tornieri (Siebenrock, 1903) in Zambia. Salamandra (Journal of Herpetology), 42, 187-190.
7. Kingdon, J. (2008) The Kingdon Field Guide to African Mammals. A&C Black, London.
8. Anon (2002) Game Ranching Policy for Botswana. Ministry of Trade, Industry and Tourism, Botswana Government Printer, Gaborone.
9. Chansa, W., Siamudaala, V., Kampamba, G. and Changwe, K. (2005) The National Crocodile Conservation Plan. Zambia Wildlife Authority, Chilanga, 1-37.
10. Siamudaala, V. (2000) Draft Policy on Private Wildlife Estates—Game Ranching and Other Novel Uses. Zambia Wildlife Authority, Chilanga, 1-27.
11. Smit, N. (2005) Calculating Your Land’s Game Carrying Capacity. Farmers Weekly, 44-46.
12. Mkanda, F.X. and Chansa, W. (2011) Changes in the Temporal and Spatial Pattern of Road Kills along the Lusaka-Mongu (M9) Highway, Kafue National Park, Zambia. South African Journal of Wildlife Research, 41, 68-78.
*Corresponding author.
|
How do you conjugate future tense in French?
What are the conjugations for future tense?
Regular verbs in the future tense are conjugated by adding the following endings to the infinitive form of the verb (dropping the final -e of -re verbs): -ai, -as, -a, -ons, -ez, -ont. A number of common verbs are irregular in the future tense: their endings are regular, but their stems change (for example, être becomes ser-, avoir becomes aur-, aller becomes ir-).
What are the future tenses in French?
The simple future tense
Subject Future ending Example
il/elle/on -a il/elle/on regardera
nous -ons nous jouerons
vous -ez vous parlerez
ils/elles -ont ils/elles partiront
What is an example of future tense in French?
The French future tense (le futur simple) is used in a similar way to the English ‘will (+ main verb)’: to describe upcoming actions. L’année prochaine, j’apprendrai le chinois (apprendre, futur): Next year, I will learn Chinese.
What is saber in the future tense?
We use saber in the future tense to talk about things that will be known. Saber in the future tense can also mean ‘to find out‘. In the future tense, saber is an irregular verb.
What is future simple tense with examples?
Simple Future Tense Examples
They will play football in that field. April will prefer coffee to tea. Bob will go to the library tomorrow. We will go shopping in that market this Monday. We will watch a movie in this Cineplex next Friday.
How do you form immediate future in French?
The immediate future tense is also used to talk about what is going to happen in the future. It is easy to formulate.
How to form the immediate future.
Subject pronoun Aller = to go English
je vais I’m going
tu vas You’re going (informal)
il/elle/on va He is going/She is going/We are going
nous allons We are going
|
Was Spain more powerful than France?
Was Spain once the most powerful country?
During the sixteenth century, Spain became the most powerful country in both Europe and the Americas. … Spain rose to a position of power in the sixteenth century due to the consolidation of the two largest Spanish kingdoms, Aragon and Castile, in 1492, along with the conquest of Granada that same year.
Is Spain more developed than France?
However, France is a richer country than Spain, and not just in terms of per capita GDP (France’s €31,100 is 37% higher than Spain’s €22,700). … According to the European Commission, Spain will export a higher share of GDP than France both in 2013 (33.7% of GDP compared to 28.2%) and in 2014 (35.2% and 29.2%).
Why was Spain so powerful?
The Spanish exploited resources and labor from their newly colonized territories. Southern America was rich in both timber and precious metals, and harvesting the gold and silver in the area made the empire very rich. … Spain had colonies on the other half of the world, too, including Africa and other parts of Europe.
Was Spain a powerful country?
Spain was the wealthiest and most powerful nation in the world in the late 1500s.
Is France or UK more powerful?
France surpassed the US and Britain as the world’s top soft power, according to an annual survey examining how much non-military global influence an individual country wields. Britain headed the list two years ago, but was edged off top spot by the US last year.
Is Spain still powerful?
Most Powerful Countries 2021: Spain ranks 18th in power, with a GDP of $1.39 trillion, a GDP per capita of $29,565, and a 2021 population of 46,745,216.
Is Spain a first world country?
The term “First World” grew out of the three-worlds framework introduced by French demographer Alfred Sauvy in 1952 and was used frequently throughout the Cold War.
First World Countries 2021: Spain ranks 25th, with a Human Development Index of 0.904 and a 2021 population of 46,745,216.
Did France go to war with Spain?
The Franco-Spanish War (1635–1659) was a military conflict fought by France and Spain, with other powers participating at different points.
Franco-Spanish War (1635–1659)
Date: 19 May 1635 – 7 November 1659 (24 years, 5 months, 2 weeks and 5 days)
Territorial changes: Artois, Roussillon and Perpignan annexed by France
When did Spain lose its power?
The war ended with the signing of the Treaty of Paris on December 10, 1898. As a result, Spain lost control over the remains of its overseas empire: Cuba, Puerto Rico, the Philippine Islands, Guam, and other islands.
|
伊坂プレス トラベルマガジン
Is there anyone else who wants to know the roots of Japan’s representative folk song “Sado Okesa”?
On Sado Island in Niigata Prefecture there is a famous Japanese folk song called “Sado Okesa” that has been handed down since the Edo period; however, it only came to be called “Sado Okesa” at the end of the Taisho era.
“Sado Okesa” is based on “Ushibuka Haiya Bushi” from Kumamoto Prefecture. In the Edo period, “Haiya Bushi” was sung at the drinking parties of the sailors of the Kitamae-bune, cargo ships that carried goods from Osaka to Hokkaido while calling at ports along the Sea of Japan, and in this way it was also introduced to Sado Island.
Ogi Port, opened as a shipping port for the gold and silver mined at the Sado Gold Mine, was one of the ports of call of the Kitamae-bune. The Haiya Bushi sung by the sailors became “Okesa Bushi”. Geisha in Ogi choreographed “Okesa Bushi”, and it became a performance at dinner parties held in tatami rooms with geisha. Later it was also transmitted to the Aikawa Gold Mine (commonly known as the Sado Gold Mine) and became a song sung by the miners, acquiring the melancholic melody it has today.
“Sado Okesa” was formerly called “Aikawa Okesa”, but in the Taisho era a folk song preservation society on Sado Island renamed it “Sado Okesa” in order to spread it throughout Japan. The society gave many performances around the country as well as overseas, in places such as southern Karafuto (Sakhalin), the Korean Peninsula, Taiwan and Manchuria. As a result, “Sado Okesa” is now known as one of Japan’s representative folk songs.
There are many “Okesa legends” related to “Sado Okesa” handed down on Sado Island. According to the best-known of these, an old couple who ran a local soba shop in Ogi kept a cat with great care. When the soba shop lost popularity because of an increase in competitors, the cat turned into a beautiful girl called “Okesa”, attracted customers with song and dance, and the shop became very popular again. The song and dance performed by this beautiful girl later came to be called “Okesa”, and this is said to be the origin of “Okesa Bushi”.
|
Hog wild about dot maps
Reader Chris P. sent me this chart.
Dot maps are very limited. Think before you use them.
Beauty is in the eyes of the fishes
Reader Patrick S. sent in this old gem from Germany.
He said:
It displays the change in numbers of visitors to public pools in the German city of Hanover. The invisible y-axis seems to be, um, nonlinear, but at least it's monotonic, in contrast to the invisible x-axis.
There's a nice touch, though: The eyes of the fish are pie charts. Black: outdoor pools, white: indoor pools (as explained in the bottom left corner).
It's taken from a 1960 publication of the city of Hanover called *Hannover: Die Stadt in der wir leben*.
This is the kind of chart that Ed Tufte made (in)famous. The visual elements do not serve the data at all, except for the eyeballs. The design becomes a mere vessel for the data table. The reader who wants to know the growth rate of swimmers has to do a tank of work.
The eyeballs though.
I like the fact that these pie charts do not come with data labels. This part of the chart passes the self-sufficiency test. In fact, the eyeballs contain the most interesting story in this chart. In those four years, the visitors to public pools switched from mostly indoor pools to mostly outdoor pools. These eyeballs show that pie charts can be effective in specific situations.
Now, Hanover fishes are quite lucky to have free admission to the public pools!
Playfulness in data visualization
Lines, gridlines, reference lines, regression lines, the works
This post is part 2 of an appreciation of the chart project by Google Newslab, advised by Alberto Cairo, on the gender and racial diversity of the newsroom. Part 1 can be read here.
In the previous discussion, I left out the following scatter bubble plot.
This plot is available in two versions, one for gender and one for race. The key question being asked is whether the leadership in the newsroom is more or less diverse than the rest of the staff.
The story appears to be a happy one: in many newsrooms, the leadership roughly reflects the staff in terms of gender distribution (even though both parts of the whole compare unfavorably to the gender ratio in the neighborhoods, as we saw in the previous post.)
Unfortunately, there are a few execution problems with this scatter plot.
First, take a look at the vertical axis labels on the right side. The labels inform the leadership axis. The mid-point showing 50-50 (parity) is emphasized with the gray band. Around the mid-point, the labels seem out of place. Typically, when the chart contains gridlines, we expect the labels to sit right around each gridline, either on top or just below the line. Here the labels occupy the middle of the space between successive gridlines. On closer inspection, the labels are correctly affixed, and the gridlines drawn where they are supposed to be. The designer chose to show irregularly spaced labels: from the midpoint, it's a 15% jump on either side, then a 10% jump.
I find this decision confounding. It also seems as if two people have worked on these labels, as there exist two patterns: the first is "X% Leaders are Women", and the second is "Y% Female." (Actually, the top and bottom labels are also inconsistent, one using "women" and the other "female".)
The horizontal axis? They left out the labels. Without labels, it is not possible to interpret the chart. Inspecting several conveniently placed data points, I figured that the labels on the six vertical gridlines should be 25%, 35%, ..., 65%, 75%, in essence the same scale as the vertical axis.
Here is the same chart with improved axis labels:
Re-labeling serves up a new issue. The key reference line on this chart isn't the horizontal parity line: it is the 45-degree line, showing that the leadership has the same proportion of females as the rest of the staff. In the following plot (right side), I added in the 45-degree line. Note that it is positioned awkwardly on top of the grid system. The culprit is the incompatible gridlines.
The solution, as shown below, is to shift the vertical gridlines by 5% so that the 45-degree line bisects every grid cell it touches.
Now that we dealt with the purely visual issues, let me get to a statistical issue that's been troubling me. It's about that yellow line. It's supposed to be a regression line that runs through the points.
Does it appear biased downwards to you? It just seems that there are too many dots above and not enough below. The distance of the furthest points above also appears to be larger than that of the distant points below.
How do we know the line is not correct? Notice that the green 45-degree line goes through the point labeled "AVERAGE." That is the "average" newsroom with the average proportion of female staff and the average proportion of leadership staff. Interestingly, the average falls right on the 45-degree line.
In general, the average does not need to hit the 45-degree line. The average, however, does need to hit the regression line! (For a mathematical explanation, see here.)
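Since the argument hinges on this property, here is a quick numerical check in Python. The data are simulated stand-ins (not the newsroom figures), and numpy's polyfit stands in for whatever fitter the designers actually used.

```python
import numpy as np

# Simulated stand-in data: x = share of female staff, y = share of female leaders.
rng = np.random.default_rng(0)
x = rng.uniform(0.25, 0.75, size=50)
y = x + rng.normal(0.0, 0.08, size=50)

# Ordinary least-squares fit y = a + b*x; polyfit returns [slope, intercept].
b, a = np.polyfit(x, y, 1)

x_bar, y_bar = x.mean(), y.mean()
print("point of averages: ", (round(x_bar, 4), round(y_bar, 4)))
print("fit line at x_bar: ", round(a + b * x_bar, 4))
# The two y-values agree (up to floating point): OLS chooses the intercept
# as a = y_bar - b * x_bar, so the fitted line always passes through
# (x_bar, y_bar), whatever the data look like.
```

So if the plotted yellow line misses the average point, it is not an ordinary least-squares fit of the plotted points, or it was fitted to a different (perhaps weighted) version of the data.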
Note the corresponding chart for racial diversity has it right. The yellow line does pass through the average point here:
In practice, how do problems seep into dataviz projects? You don't get to the final chart through a clean, streamlined process; you pass through cycles of explore-retrench-synthesize, frequently bouncing ideas among several people, and it is challenging to keep everything consistent!
And let me repeat my original comment about this project - the key learning here is how they took a complex dataset with many variables, broke it down into multiple parts addressing specific problems, and applied the layering principle to make each part of the project digestible.
Well-structured, interactive graphic about newsrooms
Discoloring the chart to re-discover its plot
Today's chart comes from Pew Research Center, and the big question is why the colors?
The data show the age distributions of adherents of different religions. It's a stacked bar chart, in which the ages have been grouped into the young (under 15), the old (60 plus) and everyone else. Five religions are afforded their own bars, while "folk" religions are grouped as one, as are "other" religions. There is even a bar for the unaffiliated. "World" presumably is the aggregate of all the other bars, weighted by the popularity of each religion group.
So far so good. But what is it that demands 9 colors, and 27 total shades? In other words, one shade for every data point on this chart.
Here is a more restrained view:
Let's follow the designer's various decisions. The choice of those age groups indicates that the story is really happening at the "margins": Muslims and Hindus have higher proportions of younger followers while Jews and Buddhists have higher concentrations of older followers.
Therein lies the problem. Because of the lengths, their central locations, and the tints, the middle section of each bar is the most eye-catching: the reader is glancing at the wrong part of the chart.
So, let me fix this by re-ordering the three panels:
Is there really a need to draw those gray bars? The middle age group (grab-all) only exists to assure readers that everyone who's supposed to be included has been included. Why plot it?
The above chart says "trust me, what isn't drawn here constitutes the remaining population, and the whole adds to 100%."
Another issue with these charts, exacerbated by inflexible software defaults, is the forced choice of granting one variable a privileged status above the others. In the Pew chart, the rows are ordered by decreasing proportion of the young age group, except for the "everyone" group, which is pinned as the bottom row. As a result, the green bars (the old age group) are not in any particular order, and their pattern is much harder to comprehend.
In the final version, I break the need to keep bars of the same religion on the same row:
Five colors are used. Three of them are used to cluster similar religions: Muslims and Hindus (in blue) have higher proportions of the young compared to the world average (gray) while the religions painted in green have higher proportions of the old. Christians (in orange) are unusual in that the proportions are higher than average in both young and old age groups. Everyone and unaffiliated are given separate colors.
The colors here serve two purposes: connecting the two panels, and revealing the cluster structure.
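For readers who want to play with the same idea, here is a minimal matplotlib sketch of the final design: one panel per age group, bars sorted within each panel, and a single color per cluster reused across panels. The numbers are made-up placeholders, not the Pew estimates, and the cluster assignments simply follow the description above.

```python
import matplotlib.pyplot as plt

# Illustrative values only -- not the Pew estimates.
religions = ["Muslims", "Hindus", "World", "Christians", "Unaffiliated", "Buddhists", "Jews"]
young = [34, 30, 26, 27, 19, 16, 14]   # % under 15
old   = [ 7,  8, 12, 14, 13, 21, 26]   # % 60 and over

# One color per cluster, reused in both panels to connect them.
cluster_color = {"Muslims": "steelblue", "Hindus": "steelblue",
                 "Christians": "darkorange", "World": "gray",
                 "Unaffiliated": "purple", "Buddhists": "seagreen", "Jews": "seagreen"}

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
for ax, values, title in [(ax1, young, "% under 15"), (ax2, old, "% 60 and over")]:
    # Sort each panel independently; the shared colors carry the grouping.
    order = sorted(range(len(religions)), key=lambda i: values[i])
    ax.barh([religions[i] for i in order], [values[i] for i in order],
            color=[cluster_color[religions[i]] for i in order])
    ax.set_title(title)
fig.tight_layout()
plt.show()
```

Because the color encodes the cluster rather than the row, the reader can find, say, the blue religions in both panels without relying on row order.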
Quick Answer: Why did Alexander go to India?
When and why did Alexander invade India?
Alexander Invasion of India
Why did the Greeks go to India?
In ancient times, trade between the Indian subcontinent and Greece flourished with silk, spices and gold being traded. The Greeks invaded South Asia several times, starting with the conquest of Alexander the Great and later with the Indo-Greek Kingdom.
Why is Alexander called Sikander?
He is known as Sikandar in Urdu and Hindi, a term also used as a synonym for “expert” or “extremely skilled”. Sikandar is the Persian rendition of the name Alexander: when Alexander the Great conquered Persia, the Persians called him Sikandar, meaning “defender” or “warrior”.
Why didn’t Genghis Khan conquer India?
To summarize, Genghis Khan refused to invade India for the following four reasons: … He did not face any provocation from the Mamluk dynasty which was ruling northern India. He did not want to pursue a man who had lost everything and was no longer a threat. He was not motivated by wealth.
Did Alexander the Great invade India?
The invasion of India began in the summer of 327 B.C. Alexander proceeded as he had in his Persian conquest, vanquishing city by city. Many cities surrendered without a fight; those that did not were usually massacred without mercy. Alexander soon gained the support of Ambhi, the ruler of Attock.
Why did Dionysus go to India?
But he is most famous as the last great writer to celebrate the gods of ancient Greece. … The story of the Dionysiaca begins with Zeus, leader of the Greek gods, ordering Dionysus to travel to India, whose inhabitants refuse to worship him. The Indians stubbornly prefer their ancestral gods of fire and water.
How did Greeks come to India?
When the Greeks and Macedonians in Alexander’s army reached India in 326 BCE, they entered a new and strange world. They knew a few legends and travelers’ tales, but their categories of thought were inadequate to encompass what they witnessed.
Did Aristotle visit India?
Peripateticism. Aristotle’s knowledge of India came essentially from Scylax and Ctesias. … The Peripatetic philosopher Clearchus of Soli traveled to the east to study Indian religions. The Peripatetic philosopher Theophrastus’s book on the history of plants contains an excursus on Indian species.
Diet Tips For Diabetic Patients
Diabetes, Diet And Fatty Liver Disease
Diabetes is a major risk factor for Non-Alcoholic Fatty Liver Disease, which, if left untreated, may progress to liver scarring (fibrosis).
Dr. Girish Parmar | Jan 05, 2021
Suffering from diabetes? Give extra attention to your liver. Diabetes is a major risk factor for Non-Alcoholic Fatty Liver Disease (NAFLD)—a range of liver conditions, affecting individuals despite little or no alcohol consumption.
NAFLD causes fat to build up in the liver, resulting in enlargement, inflammation or varying degrees of scarring. If left untreated, the condition may progress to liver cirrhosis, which can require a liver transplant. Notably, almost half of people with type 2 diabetes develop fatty liver disease.
Symptoms of Fatty Liver Disease
Fatty liver usually does not cause any noticeable symptoms. You may feel tired or notice mild discomfort or pain in the upper right side of your abdomen.
If left untreated, fatty liver may progress to liver scarring or liver fibrosis. Severe liver fibrosis leads to cirrhosis. Symptoms of liver cirrhosis include loss of appetite, fatigue, weakness, weight loss, nose bleeds, yellowish discolouration of skin and eyes, abdominal pain and swelling, swelling of your legs, confusion, and itchy skin.
Causes of Fatty Liver Disease
Fatty liver disease develops when the body produces excess fat or becomes less efficient at metabolizing fat. The excess fat is stored in the liver cells, leading to fatty liver disease.
The following factors play a major role in the development of fatty liver -
• Obesity
• High blood sugar
• Raised cholesterol and triglyceride levels in your blood
• Insulin resistance
• Certain genes and other types of infections like hepatitis C
Fatty Liver Disease & Diabetes
Diabetes is not the only cause of fatty liver disease, but if you have diabetes you are at a much higher risk of developing fatty liver disease as well. The two diseases tend to occur together in some people owing to obesity and insulin resistance. Hence, if you are prediabetic or have diabetes, it is absolutely necessary to keep your blood sugar levels under control. This will help prevent the complications of fatty liver disease and maintain your overall health.
Treatment of Fatty Liver Disease
There is no medication that can be used to treat fatty liver disease. Lifestyle changes and dietary modifications can help reverse the condition.
Follow these supportive measures to help reduce your risk of fatty liver disease.
1. Limit or avoid alcohol completely
Alcohol is directly linked to increases in blood sugar and cholesterol levels. Avoiding it will help regulate your blood sugar levels, thereby reducing your risk of fatty liver disease.
2. Lose weight
Obesity is a major risk factor for fatty liver disease. Lose weight to maintain a healthy life and to lower your chances of developing the condition. Exercise regularly by going for daily walks; some form of basic yoga or physical activity will help you stay fit and healthy. Try to achieve a minimum of 10,000 to 15,000 steps a day to boost your metabolism.
3. Dietary changes
Making changes to your daily diet will help you lose weight and remain fit. This will help you lead a healthy lifestyle which is absolutely necessary to keep fatty liver disease away. Avoid foods and drinks high in fructose like artificially sweetened sodas, juices, pastries, desserts. Include plenty of vegetables, fruits, and fresh foods in your diet. Limit your fat intake to nuts and healthy oils like olive oil. Limit your intake of refined carbohydrates like white rice, sweets, white bread, processed grains, and refined grain products. Avoid trans fats and reduce your consumption of saturated fats.
4. Control blood sugar levels
If you are diabetic, you need to keep your blood sugar levels under check in order to avoid fatty liver disease and its complications. Good control over your blood sugar levels can be achieved by dietary modifications, daily exercise and a healthy lifestyle.
5. Regulate blood cholesterol levels
Keep your low-density lipoprotein (LDL) cholesterol and triglycerides within normal levels to keep fatty liver disease at bay. These blood fats can settle in the liver when present in excess. To prevent your blood cholesterol from rising, avoid fried and fatty foods, exercise regularly and eat healthily.
Prevention of Fatty Liver
Fatty liver disease and diabetes go hand-in-hand. In order to conquer one disease, you need to tackle the other as well. Obesity and high sugar levels paired with insulin resistance can increase your risk of developing fatty liver disease. Hence your target should be to maintain a healthy weight by losing weight if you are obese or overweight. Exercise regularly and control your blood sugar levels and triglyceride levels to protect yourself against the dreadful combination of diabetes and fatty liver disease.
(The author is a Senior Consultant – Endocrinology, Nanavati Super Speciality Hospital)
Diagnostic approach to peripheral neuropathy
Ann Indian Acad Neurol. 2008 Apr;11(2):89-97. doi: 10.4103/0972-2327.41875.
Peripheral neuropathy refers to disorders of the peripheral nervous system. They have numerous causes and diverse presentations; hence, a systematic and logical approach is needed for cost-effective diagnosis, especially of treatable neuropathies. A detailed history of symptoms, together with family and occupational history, should be obtained. General and systemic examinations provide valuable clues. Neurological examinations investigating sensory, motor and autonomic signs help to define the topography and nature of the neuropathy. Large fiber neuropathy manifests with loss of joint position and vibration sense and sensory ataxia, whereas small fiber neuropathy manifests with impairment of pain, temperature and autonomic functions. Electrodiagnostic (EDx) tests include sensory and motor nerve conduction, F response, H reflex and needle electromyography (EMG). EDx helps in documenting the extent of sensorimotor deficits and in categorizing the neuropathy as demyelinating (prolonged terminal latency, slowing of nerve conduction velocity, dispersion and conduction block) or axonal (marginal slowing of nerve conduction, small compound muscle or sensory nerve action potentials, and denervation on EMG). Uniform demyelinating features are suggestive of hereditary demyelination, whereas differences between nerves and between segments of the same nerve favor acquired demyelination. Finally, neuropathy is classified into mononeuropathy, commonly due to entrapment or trauma; mononeuropathy multiplex, commonly due to leprosy and vasculitis; and polyneuropathy, due to systemic, metabolic or toxic etiology. Laboratory investigations are carried out as indicated, and specialized tests such as biochemical, immunological and genetic studies, cerebrospinal fluid (CSF) examination and nerve biopsy are performed in selected patients. Approximately 20% of patients with neuropathy remain undiagnosed, but their prognosis is generally not poor.
Keywords: Axonal demyelination; diagnosis; nerve conduction; peripheral neuropathy.
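Purely to illustrate the categorization logic summarized in the abstract (not a clinical tool), the sketch below encodes the electrodiagnostic criteria as boolean findings in Python. The field names and the simple rule ordering are illustrative assumptions, not a validated algorithm.

```python
from dataclasses import dataclass

@dataclass
class EdxFindings:
    """Simplified electrodiagnostic findings; field names are illustrative, not a clinical standard."""
    prolonged_terminal_latency: bool
    marked_conduction_slowing: bool
    dispersion_or_conduction_block: bool
    small_cmap_or_snap: bool
    denervation_on_emg: bool
    uniform_across_nerves: bool

def categorize(f: EdxFindings) -> str:
    # Demyelinating pattern: prolonged terminal latency, marked slowing of
    # conduction velocity, dispersion or conduction block (per the abstract).
    if f.prolonged_terminal_latency or f.marked_conduction_slowing or f.dispersion_or_conduction_block:
        # Uniform features suggest hereditary demyelination; nerve-to-nerve or
        # segment-to-segment differences favor acquired demyelination.
        subtype = "hereditary" if f.uniform_across_nerves else "acquired"
        return f"demyelinating ({subtype})"
    # Axonal pattern: marginal slowing with small CMAP/SNAP and denervation on EMG.
    if f.small_cmap_or_snap or f.denervation_on_emg:
        return "axonal"
    return "indeterminate"

print(categorize(EdxFindings(True, True, False, False, False, True)))   # demyelinating (hereditary)
print(categorize(EdxFindings(False, False, False, True, True, False)))  # axonal
```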
Frequent question: Are the Scottish Roman Catholic?
Are Scots Catholic or Protestant?
Are Scottish Highlanders Catholic?
In the 162 Highland parishes there were 295,566 people. There were 282,735 Protestants, and 12,831 Roman Catholics. That means that 95.66% of the Highlanders were Protestant, and 4.34% were Catholic. Of every 10,000 Highlanders, 9566 were Protestant.
What religion are Scots?
Is Scotland mostly Protestant?
Are Scots mostly Catholic?
Are Jacobites Catholic?
Jacobites weren’t all Roman Catholics
Which Scottish clans were Protestant?
Protestant clans: Clan Campbell, Clan Murray, Clan Stewart, Clan Forbes, Clan Macgillivray, Clan Maclean, Clan Grant, Clan MacNeil, Chattan Confederation – Clan Mackintosh.
Is Scotland an Islamic country?
Muslims constitute 1.45% of the population in Scotland – there are 76,737 Muslims, 41,241 of them men, and 35,496 women. Scotland’s Muslims make up 2.8% of all Muslims in the UK. The Muslim population of Scotland is larger than the total population of all the other non-Christian faith groups in Scotland.
What is the Celtic religion beliefs?
The Celtic religion was closely tied to the natural world and they worshipped gods in sacred places like lakes, rivers, cliffs and bushes. The moon, the sun and the stars were especially important – the Celts thought that there were supernatural forces in every aspect of the natural world.
Identical Twins Aren’t Always Genetically Identical After All
Identical twins, more accurately known as monozygotic twins, are the result of a single fertilized egg splitting into two embryos. It is therefore assumed that their genetics are the same and that any differences between a pair must be environmental. A new study found that this assumption is less accurate than thought: some twins carry a number of mutations their co-twin does not share. Considering that the "nature vs. nurture" debate has been informed by thousands of twin studies, this could be more than just a scientific curiosity.
Mutations arise during cell division as we develop from a single fertilized cell, so naturally some occur in twins, many of them after the embryo has split. Most comparisons between monozygotic and dizygotic (fraternal) twins, however, take little account of these mutations.
Dr. Hákon Jónsson of deCODE Genetics, Iceland, investigated mutation rates by sequencing the genomes of members of 387 pairs of monozygotic twins, as well as those of their parents, children, and spouses. Writing in Nature Genetics, Jónsson and co-authors report that, on average, each pair differs by 5.2 mutations that arose early in development and occur in one twin but not the other, as well as a larger number appearing later in life. Many of these mutations lie in parts of the genome where their effects are difficult to detect. Others do not, and may be responsible for some of the differences previously attributed to the environment. The paper cites autism as a condition sometimes seen in only one monozygotic twin that may be driven by mutation rather than environment.
More interestingly, in about 15 percent of pairs the authors found differences occurring far more frequently: in this subset the number of twin-specific mutations appearing in just one twin exceeded 100, much higher than in the rest, increasing the probability that some could produce noticeable differences. These results also shed light on the timing of the mutations, since both twins share any change that occurred before the embryo split.
Many monozygotic pairs follow strikingly similar life paths, such as newly elected U.S. Senator Mark Kelly and his brother Scott Kelly, both of whom became astronauts, giving NASA a rare opportunity to study the effects of a long spaceflight on one twin while the other remained on Earth. Yet some twins are surprised by their differences in personality or talents. These are usually blamed on the environment, perhaps including a deliberate desire to distinguish oneself from a sibling. The new research by Jónsson and co-authors, however, suggests that mutations arising during early development also contribute.
If so, some research may require re-analysis. Twin studies measure how much monozygotic twins differ on traits such as intelligence or sexuality and compare this to the differences between dizygotic twins. If the monozygotic gap is entirely environmental, the argument goes, then the extent to which dizygotic twins differ more reflects the contribution of genetics. However, if the monozygotic difference includes a mutation-driven genetic component, it complicates this seemingly straightforward test.
The role of Victorian detective fiction in the evolution of the crime literature genre is sometimes underappreciated these days, in part perhaps knocked into the shade by the bright, startling and oft lurid dust-jacket artwork that began to dominate in the 1920s (of which I confess to being a huge fan), and the global recognition of the big names from the Golden Age of detective fiction. However, the foundation stones for most of the tropes, plots, clichés and even forensic methods that we have come to expect from detective fiction today can be found in the stories of the Victorian and Edwardian eras.
Although there had been other dalliances with the genre before, detective fiction really came of age with Edgar Allan Poe's Tales, published in book-form in New York in 1845. This volume combined three stories featuring Poe's amateur sleuth and master of 'ratiocination', C. Auguste Dupin. The first of these, and the most famous, 'The Murders in the Rue Morgue', is widely considered the first modern detective fiction title: 'Here was no trifling advance over the blunderings of the past, no mere pioneering or experimental effort. Here was the detective story, stepping boldly out of its eggshell, "fully grown and armed to the teeth"' ('Ellery Queen', Queen's Quorum, p.10).
"Here was the detective story, stepping boldly out of its eggshell, 'fully grown and armed to the teeth'" - Ellery Queen
Waters, Recollections of a Detective Police-Officer by “Waters”, 1856
An early example of 'recollections'.
Despite Poe's sterling efforts (he wrote three more stories subsequently), the emerging genre took some time to ignite interest and gain traction, especially in America. Early works from this period are decidedly uncommon, especially in original condition. In Britain things took off a little quicker, aided in part by none other than that stalwart of Victorian literature, Charles Dickens, writing about friends' experiences as plainclothes detectives in a series of articles in 1850. This is where Victorian detective fiction really begins. These tales sparked a series of similar works, with diminishing foundation in fact but increasing investment in imagination, often featuring the words 'recollections', 'reminiscences' or 'experiences' in the titles. Within these lurk what are sometimes known as the 'Cities Mysteries', tales specific to a particular metropolis, i.e. Paris, Berlin, New York, London - a sub-sub-genre of Victorian detective fiction, if you will.
Such titles proved immensely popular, exciting the public imagination. Some are definitely better than others, but they were, appropriately, often somewhat procedural in their approach. The next cornerstone of modern detective fiction to be born from the Victorian era would have to be Wilkie Collins' The Moonstone (1868), an epistolary tale revolving around a purloined jewel, considered by G.K. Chesterton as 'probably the best detective tale in the world', and 'probably the very finest detective story ever written' by Dorothy L. Sayers.
"probably the very finest detective story ever written" - Dorothy L Sayers, re The Moonstone
Graham Greene, Dorothy Glover, Victorian Detective Fiction, Catalogue, 1966
Greene's Victorian Detective Fiction catalogue
Collins had already had success, and fun, with the crime fiction genre, introducing a female detective in 1856 ('Diary of Anne Rodway'), and even introducing comedy into his 1858 story 'The Biter Bit'. With The Moonstone, he put in place elements that can be seen threading throughout the subsequent development of detective fiction: country house robbery, incompetent coppers, red herrings and plot twists, for example.
Graham Greene was so inspired by The Moonstone that he would go on to publish his own catalogue, Victorian Detective Fiction: A Catalogue of the Collection... (1966), a useful guide to the range of works published before and after Collins.
Female detectives come further to the fore from the 1860s onwards, though mostly (as far as we know anyway, given the application of anonymity or pseudonyms) written by male authors initially.
Early examples of such works include The Experiences of a Lady Detective (aka The Revelations of a Lady Detective), which first appeared we think in 1864, featuring the perspicacious Mrs. Paschal, one of the very first female detectives. Andrew Forrester’s Miss Gladden (or 'G') also appeared in print around this time.
George Sims, Dorcas Dene, Detective. Her Adventures, 2 vols, first editions
Victorian female detectives in action!
William Stephens Hayward, The Experiences of a Lady Detective, 1884
Revelations, Experiences, or a ghost story?
Somewhat later, but still considered important within the evolution of Victorian detective fiction, came Dorcas Dene, created by George R. Sims.
Sims was intrigued by the psychology of crime; Dorcas Dene, and her ‘Council of Four’ (comprising her mother, her blind artist husband, their dog Toddlekins[!] and herself) solved numerous crimes and mysteries, much to the delight of the burgeoning crime fiction buying public.
Grant Allen, famously author of the at-the-time scandalous The Woman Who Did (1895), created two female detectives for the popular market, Hilda Wade and Miss Cayley, but in the context of detective fiction he is best known for An African Millionaire. Episodes in the Life of the illustrious Colonel Clay, a classic of 'rogue fiction', popularising elements such as the use of disguises and international jet-setting.
Grant Allen, An African Millionaire, first edition, 1897
Grant Allen's African Millionaire
Anna Katharine Green, Filigree Ball, first edition, 1903
One of Anna Katharine Green's later titles
To offset this literary cross-dressing to some degree, the first detective fiction novel written by a woman (an American, to boot) was The Leavenworth Case (1878), by Anna Katharine Green. This author, sometimes known as 'the mother of the detective story', wrote well-plotted, legally accurate stories which distinguished her writing from that of her contemporaries, and on many levels defined the shape of detective fiction to come, notably influencing Arthur Conan Doyle, Agatha Christie and even perhaps 'Carolyn Keene'.
Arthur Conan Doyle is obviously not a name that can be mentioned solely in passing here... Sherlock Holmes without question ranks as one of the defining characters of the Victorian period, numbered among the most beloved characters of English literature, and for many represents the preeminent detective of criminality and mystery. Appearing in serialised form in The Strand Magazine, beginning with 'A Scandal in Bohemia' in 1891, the adventures of Sherlock Holmes would go on to become a global phenomenon.
The Adventures of Sherlock Holmes [With] The Memoirs of Sherlock Holmes.
The Adventures & Memoirs, first separate editions
All the first editions of Arthur Conan Doyle's books from the 19th century are highly collectable, but to find first editions in book form of the earliest Sherlock Holmes works in anything approximating original condition is challenging; Holmes' first appearance, A Study in Scarlet (1887), either in book form or in Beeton's Christmas Annual for the same year, is notoriously scarce.
Hound of the Baskervilles, first edition
Hound of the Baskervilles, first edition
The wonderful first edition of The Hound of the Baskervilles (1902), with its excellent decorative binding and plates by Sidney Paget, continues to go up in value with each year it seems. What one would have to pay now for a copy in the original dust-jacket almost defies imagination...a six figure objet d'art...
Fergus W. Hume, The Mystery of a Hansom Cab, second edition, 1887
Mystery of a Hansom Cab, 1887
Victorian detective fiction is also great for the emergence of the 'sensation novel', popular works often cheaply produced for mass consumption, sometimes as 'Penny Dreadfuls' or 'Yellowbacks', often with eye-catching wrappers or covers. This is a popular area of collecting in its own right. One of the most famous of these was Australian author Fergus Hume's Mystery of a Hansom Cab, originally published in Australia in 1886, then in the UK and USA in 1887. Hume's work is seen by many as a bridge between sensational fiction and detective fiction, and played an important role in globally establishing the latter genre more firmly.
The distinction between Victorian and Edwardian detective fiction is blurred, not least as authors such as Conan Doyle had works published in both eras. New authors brought new approaches to the genre to an ever-ready and blossoming readership; R. Austin Freeman's forensic Dr Thorndyke for example, or Chesterton's empathic Father Brown. New sub-genres also sprouted, such as 'railway murder', and other new experiments with the locked room paradigm.
Collecting Victorian Detective Fiction
With works ranging from the 1840s to early 1900s, there is a usefully broad array of entry-points for those looking to collect Victorian and Edwardian detective fiction, allowing for a range of price-points. Many works have excellent original covers, from the more lurid pulps through to the fine pictorial cloth bindings that grace first editions of authors such as M.M. Bodkin and Louis Tracy. Very early works can be difficult to date correctly, and some books have tricky issue points that may require speaking to a specialist about.
Check out the British Library for more information on crime and crime fiction.
You asked: Do we need Unicef?
Why is UNICEF necessary?
UNICEF is the driving force that helps build a world where the rights of every child are realized. … UNICEF was created with this purpose in mind – to work with others to overcome the obstacles that poverty, violence, disease and discrimination place in a child’s path.
What does UNICEF do to help?
What is UNICEF? … One of the world’s largest providers of vaccines, UNICEF supports child health and nutrition, safe water and sanitation, quality education and skill building, HIV prevention and treatment for mothers and babies, and the protection of children and adolescents from violence and exploitation.
Why is UNICEF education important?
It should be free and fair, with equal access for girls and boys. Of the 58 million primary-aged children who are not in school, half are from countries facing war and conflict. Across the globe, UNICEF is committed to free access to education for every child, every girl and boy. …
Why is UNICEF so successful?
UNICEF ensures more of the world’s children are vaccinated, educated and protected than any other organisation. We have done more to influence laws and policies to help protect children than anyone else. For over 75 years we have been there for children in danger.
UNICEF is supported entirely by the voluntary contributions of governments, non-governmental organizations (NGOs), foundations, corporations and private individuals. UNICEF receives no funding from the assessed dues of the United Nations. … Of all the National Committees, UNICEF USA has been around the longest.
What makes UNICEF unique?
UNICEF’s Unique Role
As the only UN agency working on the ground for children and women, only UNICEF has the influence to work at the global level with all governments in order to determine the future priorities in support of the world’s children. UNICEF is supported entirely by voluntary donations.
Do UNICEF workers get paid?
UNICEF offers an attractive remuneration package with competitive pay and benefits, in accordance with United Nations-wide salary scales, policies and practices. … National Officer (NO) staff are paid according to a local salary scale.
How trustworthy is UNICEF?
How much money does UNICEF raise each year?
Most heartening of all, since our founding in 1947, our supporters’ generosity has enabled us to raise a cumulative total of $8.2 billion in donations and gifts-in-kind, including $568 million in Fiscal Year 2019.
Why is education so important?
Education shows us the importance of hard work and, at the same time, helps us grow and develop. … Learning languages through educational processes helps us interact with different people in order to exchange ideas, knowledge and good practices. It teaches us to live in harmony.
What countries have bad education?
• Burma.
• Central African Republic.
• Dominican Republic.
• Equatorial Guinea.
• Georgia.
• Liberia.
• Libya.
• Monaco.
A CREDIBLE COMMITMENT: REDUCING DEFORESTATION IN THE BRAZILIAN AMAZON, 2003-2012

SYNOPSIS

In the early 2000s, deforestation increased sharply in the Brazilian Amazon, jeopardizing the tropical rain forest's critical role in mitigating global climate change. In 2003, under the administration of President Luiz Inácio Lula da Silva and his minister of the environment, Marina Silva, the federal government decided to address the problem. More than a dozen ministries worked together to draft the Action Plan for Prevention and Control of Deforestation in the Legal Amazon. Implementation, which began the following year under coordination by the Office of the Chief of Staff of the President, expanded Brazil's system of protected areas, improved remote monitoring of the Amazon, and increased enforcement of existing forestry laws. By 2007, the deforestation rate was less than half of 2004 levels. In response to an uptick in deforestation in late 2007 and early 2008, however, the Ministry of the Environment shifted tactics. Silva and her team at the ministry published a list of municipalities that bore the greatest responsibility for deforestation. The blacklisted municipalities were targets of increased enforcement operations and sanctions. The federal government also restricted landholders' access to credit by requiring environmental compliance to qualify for government-subsidized agricultural credit. Brazil's decade-long effort reduced the deforestation rate in the Amazon region by nearly 75% from the 1996-2005 average annual rate.

Rachel Jackson drafted this case study based on interviews conducted in Brazil, in September and October 2014. This case was funded by the Norwegian Agency for Development Cooperation in collaboration with the Science, Technology, and Environmental Policy program at the Woodrow Wilson School of Public and International Affairs. Case published January 2015.

INTRODUCTION

In January 2003, Brazil's newly elected president, Workers' Party candidate Luiz Inácio Lula da Silva (popularly known as "Lula"), appointed Marina Silva to head the Ministry of the Environment. Silva's appointment represented a victory for environmental interests. As a child, she had worked alongside her parents as a rubber tapper in the Amazon rain forest. In her twenties, she marched alongside environmental activist Chico Mendes, leading protests against Amazon deforestation. Elected to the national senate as representative of the Amazonian state of Acre in 1994, Silva continued her fight to protect the Amazon region and built a reputation as a dedicated environmentalist.

During her first few months as environment minister, Silva had to set goals for the ministry to achieve during Lula's four-year presidential term. Silva and her new team, drawn largely from civil society, wanted the ministry's top priority to be reducing the rate of deforestation in the Amazon region. However, many veteran members of ministry staff worried that achieving the objective would require unprecedented cooperation from other federal ministries as well as state and municipal governments and that the Ministry of the Environment would take the blame if it failed. "Half the ministry, especially the people who had been there before we took office, were against setting it as our goal for the next four years," recalled Tasso Azevedo, who served as director of the National Forest Program under Silva.
"They asked, "How could we assume a goal like that, which depended on things outside our control?'" Silva and her team eventually persuaded doubters that the ministry should commit to tackling Amazon deforestation, even though the task was daunting. Azevedo said the team successfully argued that "if we performed very well in all other areas but deforestation in the Amazon was not controlled, our work would be seen as useless. But if we actually controlled deforestation, even if other things went wrong, the perception of progress would be there." Silva's dedication to protecting the Amazon rain forest was important not just for Brazil but also for the world. The Amazon River basin was home to the largest tropical rain forest on the globe, and 60% of it lay within Brazil's borders. Occasionally referred to as "the lungs of the world," the biome helped regulate global climate, ejecting more than 20 billion tons of water vapor into the atmosphere each day. The Amazon basin was also the source of 20% of the world's fresh water and stored an estimated 90 billion to 140 billion metric tons of carbon. Preserving the forest presented immense challenges. Brazil's Legal Amazon region comprised more than half of the country's total territory, covering more than 5 million square kilometers (larger than the combined area of the European Union countries), though by 2003, only 3.5 million square kilometers of forest cover remained. Much of the region was inaccessible by road. When Silva came into office, Brazil's deforestation rate had risen during the previous six years. Past policies to reduce cutting of trees had failed to make any long-term reduction in forest clearance rates. At the 1992 United Nations Conference on Environment and Development, held in Rio de Janeiro, Brazil's second-largest city, Brazil's government had come under strong domestic and international pressure to develop a sustainable development strategy and slow the loss of trees. However, the government had failed to implement any concrete policy framework for achieving that goal. In the decade that followed the Rio conference, logging, cattle ranching, and agriculture (mostly soy cultivation) accounted for most Amazon forest clearing. Those industries often worked in sequence over a period of several years. After loggers opened roads to remove trees from a tract of land for timber, ranchers would finish clearing the land to graze their livestock. In some areas, farmers would then move in to cultivate soy and other crops. Mining, government-funded hydroelectric dams (which required flooding certain parts of the forest during construction), urban expansion, and road construction-including the creation of federal highways-contributed to the problem. Only a small percentage of the accelerating deforestation was legal. For example, in 1999, licensed tree removal accounted for just 14.2% of the total hectares logged that year. In 2000, federal licenses covered only 8.7% of the area actually cut.1 Nearly three-quarters (70%) of Amazon deforestation took place in an arc of deforestation that stretched across seven of the nine Brazilian Amazon region states and spread northwest into the heart of the Amazon rain forest.2 Silva had no trouble persuading Lula and the rest of the cabinet that reduction of Amazon deforestation should be a formal priority for the Ministry of the Environment. It became one of the stated goals of Lula's term, along with other ministries' targets. 
"It was more controversial inside the Ministry" of the Environment than in the administration, Azevedo said. "It was an old fight of the Workers' Party. In the same way that we wanted to fight poverty, we wanted to fight deforestation. It was part of the package of doing something different." The promise of change was a hallmark of Lula's new administration, Brazil's first left-wing government since the end of military dictatorship and return to democracy in the 1980s. "Because the Workers' Party for the past 20 years had been fighting to take over and assume the presidency, [when they won] there was this climate for change, and that allowed for innovation," said Mauro Oliveira Pires, director of deforestation policy at the Ministry of the Environment. Political support for preserving the Amazon had gradually strengthened. The 2002 presidential race was the first time all of the major candidates agreed on the need to reduce deforestation in the Amazon region.3 In previous presidential elections, at least one prominent contender had supported increased deforestation in the name of regional economic development. Still, Azevedo said, it remained unclear whether Brazil's politicians or the nation's citizenry fully appreciated Silva's vision for transforming environmental governance. "I don't think people really understood the meaning and the commitment," Azevedo said of the early cabinet meetings. "I think the people in the other ministries-or even society as a whole-looked at [reducing deforestation in the Amazon] as an intention and a good thing to say, but it was not taken seriously." That would soon change. THE CHALLENGE Forest preservation represented a complex challenge. To control deforestation, the federal government had to change the long-standing behaviors of loggers, farmers, and ranchers. Stopping destruction of the forest also required nimble coordination among ministries to (1) eliminate existing federal policy incentives for cutting trees, (2) monitor forest cover, (3) enforce penalties against violators, and (4) offer producers incentives to protect trees. President Lula and Environment Minister Silva had to work with nine state governments and hundreds of municipalities in order to achieve their goals. The history of Amazonian land settlement made deforestation especially hard to address. In the 1970s, Brazil's military government had encouraged citizens to expand into the Amazon. Partly to alleviate land conflicts in more-densely-populated coastal areas, the federal government offered settlers acreage if they cleared at least 50% of the property. The land cleared under that policy officially belonged to the federal or state governments, which were supposed to issue official titles to settlers. However, many settlers never received land titles because of uneven policy implementation. The result was a patchwork of unspecified and overlapping claims that accumulated during the following decades. Clearing land for timber or pasture just to assert ownership became common practice. Brazilians used the term grileiros to describe the people who grabbed land and submitted counterfeit titles or deeds to support their claims. The word grileiro, derived from the Portuguese word for cricket, referred to the practice of putting a forged document in a drawer full of the insects to artificially age and yellow the paper. The grileiros could then resell the land with falsified documents to third parties. 
By the time the federal or state governments investigated and took steps to prosecute, the grileiros typically had resold the land and moved on. Unraveling the tangle of falsified documents was a massive and complex undertaking. Even after the federal government committed to reducing deforestation and enacted new laws to protect the Amazon in the mid-1990s, federal subsidies continued to finance agricultural development in the Amazon region and fueled further land clearing. During the 2001-02 harvest year, the federal government granted R$14.7 billion (US$5.8 billion4) of rural agricultural credit to ranchers and farmers throughout the country under guidelines set by the Ministry of Agriculture.5 Such subsidies more than doubled during the following year, to about R$31 billion (US$9.4 billion).6 The Ministry of Agriculture estimated that rural agricultural credit covered about 30% of producers' total annual costs.7 Much of the credit financed agricultural expansion on illegally cleared land. The forest laws sharply restricted deforestation even on privately held land. From 1991 to 1997, the federal government passed a series of changes to its Forest Code that contained some of the strongest forest protections in the world-at least on paper. Private landholders in the Amazon with properties larger than 100 hectares (1 square kilometer) had to preserve 80% of their land in legal reserves. The landholders also had to set aside land within 50 to 300 meters of springs and rivers, depending on the size of the body of water. The new laws carried steep fines for violations, and landowners who deforested their legal reserves could be forced to replant. However, because of weak enforcement, the new restrictions had no measurable long-term impact on deforestation rates. From 1997 to 2002, deforestation rates continued to rise (figure 1). Environmental protection was largely the domain of the Brazilian Institute of Environment and Renewable Natural Resources (Instituto Brasileiro do Meio Ambiente e dos Recursos Naturais Renováveis, or IBAMA), the enforcement arm of the federal Ministry of the Environment, and the state environmental ministries. Those entities struggled to monitor illegal activity. Because much of the expansive region was inaccessible by road, physical inspection was challenging. Weak monitoring capacity made it difficult to identify and penalize lawbreakers. Since 1988, the National Institute for Space Research had used satellite imagery to track deforestation in the Amazon. However, the satellite system had two significant weaknesses: it produced deforestation data only once a year, and the imagery was not precise enough to identify individual offenders. In addition to lack of access to timely and accurate information, IBAMA suffered from serious institutional challenges. The agency had few enforcement officers, and many were poorly trained. Corruption was another problem, because some officers accepted bribes in exchange for lower fines or permits to clear protected federal forest. And the problem extended to state-level environmental ministries, which shared some of the enforcement responsibilities. Ranchers and farmers, known collectively as producers, had powerful incentives to undermine enforcement of the strict environmental laws governing private property. Landowners who had deforested beyond their allowed limits could be fined and ordered to replant trees on acreage they used to graze cattle or grow crops. 
The ranching and farming industries-together with rural landowners-wielded major clout in Brazil's economy and legislature, where they were known as the ruralistas.

FRAMING A RESPONSE

In June 2003, Silva convened a meeting of scientists and civil society representatives to examine deforestation in the Amazon, including the policy landscape and the factors driving land clearance. "This understanding was very important because the majority of people in government used to say agriculture didn't have anything to do with deforestation because [those people in government] recognized only logging as the activity destroying the forest," said Adriana Ramos of the Instituto Socioambiental, or Socioenvironmental Institute, a Brazilian nongovernmental organization focused on Amazon preservation and indigenous rights. Under Silva, the Ministry of the Environment understood that "it was not the case of choosing one driver and focusing on that driver but, rather, on the dynamic between different economic sectors."

That same month, preliminary data from the Brazilian space agency indicated that deforestation rates were rising rapidly in the Amazon. "Marina brought [that number] to the president, saying, 'Look, we have to announce this number together. We said fighting deforestation was a priority of the government, so now the announcement needs to come from the government, not just from the Ministry of the Environment,'" recalled Azevedo, Silva's former chief of staff. Silva announced the growing problem alongside the minister of agriculture and the president's chief of staff.

In July, Lula threw his political will behind the effort by issuing a presidential decree that created a permanent interministerial working group for the development of a coordinated plan to combat deforestation in the Amazon. The insistence on coordination among ministries was a break from previous federal government deforestation policies, which had been the sole responsibility of the Ministry of the Environment. In his decree, Lula laid out six policy instruments the group should focus on: (1) land planning in the municipalities that made up the arc of deforestation, (2) tax and credit incentives aimed at increasing the economic efficiency and sustainability of already deforested areas, (3) procedures for implementing works of environmentally sustainable infrastructure, (4) generation of employment and income in the restoration of degraded areas, (5) incorporation of open and abandoned areas into the production process and management of forest areas, and (6) integration of operations of the federal agencies responsible for the monitoring and surveillance of illegal activities in the arc of deforestation.

Many of the proposals had been inspired by Silva and her team's recent experience in tackling the illegal mahogany trade, as well as their decades of experience in civil society organizations. In late 2002, signatory nations had increased the levels of protection of the mahogany tree under the Convention on International Trade in Endangered Species of Wild Fauna and Flora. Implementing the new requirements was one of Silva's first tasks as minister. As a party to the treaty, Brazil would export mahogany only if the federal government could certify that the wood had been harvested legally and sustainably. Previously, environmental enforcement agents had seized illegally harvested wood but did not necessarily stop the export of unsustainably harvested mahogany.
"The strategy we used in the mahogany cases was based on the idea that if we can cut the links in the market of the money flows, that could be an answer to actually stop the illegal trade of mahogany," Azevedo said. "The whole business of mahogany was actually paid for up front by the buyers, who gave monetary advances to the people operating the logging business in Brazil." The team realized that if the federal government made enough large seizures quickly, the mahogany exporters would be unable to repay their advances and continue funding the illegal logging. "The mahogany business suddenly was virtually eliminated," Azevedo said. After Lula issued his July 2003 decree, the Office of the Chief of Staff of the President, or Casa Civil, convened 12 ministries8 to produce a coordinated plan that would reduce deforestation based on the six policy instruments. The creation of the interministerial committee marked the first time that so many different government ministries had come together on the issue of deforestation. "The context was highly favorable to the cause, because at that time we had one of the highest rates of deforestation in history, so there was this need for action or we would reach the highest deforestation rates since 1994," said Johannes Eck, assistant deputy chief of staff for analysis and monitoring of government policies at the Casa Civil, who helped coordinate the meetings. The committee included both ministers and a separate technical committee composed of subject experts from each ministry. They divided into subgroups in four priority areas: (1) territorial planning and land tenure, which focused on land policy covering conservation areas and sustainable local development; (2) monitoring and enforcement, which worked on instruments to monitor, license, and audit legal and illegal deforestation; (3) fostering of sustainable production activities, which examined rural credit and fiscal incentives, technical assistance, and scientific research; and (4) infrastructure, which looked into the transportation and energy sectors.9 The subgroups divided their proposed policy responses into different time frames. "The most important thing about the committee was that it presented short-, medium-, and long-term solutions to the problem of deforestation," said Juliana Simoes, a project manager at the Department of Deforestation Policy in the Ministry of the Environment who served on the technical committee. Based on those projections, the committee developed an implementation schedule. In March 2004, the interministerial committee unveiled the Action Plan for Prevention and Control of Deforestation in the Legal Amazon (Plano de Ação para Prevenção e Controle do Desmatamento na Amazônia Legal). In the short term, the plan emphasized expansion of the number of protected areas and command-and-control policies that aimed to improve monitoring and enforcement. In the medium term, it focused on tightening cooperation between federal agencies and state and local governments and on dealing with existing economic incentives that encouraged deforestation, such as the rural credit system for the producers. For the long term, the action plan included efforts to build more-sustainable development and production chains and to encourage agricultural intensification rather than expansion, although the mechanisms to achieve those goals remained undetermined. 
Even though the action plan set forth more than a hundred separate actions and goals among the participating ministries and agencies, the document lacked clear metrics for determining overall success. The immediate focus on illegal logging made the plan more politically palatable in the short term. "The argument was very simple: we just wanted to combat illegality," Azevedo said. "It was very easy to sell that at that moment, because all the actions we were doing were tied to that. At that point, in 2003-05, I don't think people really realized what the impact of those actions would be. I don't think they really believed we would be able to do the things in the second phase."

The Casa Civil continued to play a coordinating role during implementation of the plan by meeting regularly to review ministry reports on progress in each target area. "The most successful element of this strategy was that the strategy was coordinated by the highest government institution in the country," Simoes said. "Deforestation was no longer a problem attributed only to the Ministry of the Environment. It was a problem for the federal government. Every area of the government had to take deforestation into account in its policies."

GETTING DOWN TO WORK

The 2004 action plan had a multistage timeline. The first phase, from 2004 to 2008, took aim at the short-term goals of monitoring deforestation and enforcing existing laws. The second, 2009 to 2011, focused on economic incentives and working with state and local governments. The third, 2012-15, dealt with longer-term problems of sustainable economic development.

Regulating land and building a green wall

One of the first strategic objectives of the action plan involved better territorial management and land-use planning for public land in the Amazon region. The committee divided that objective into two parts: creating more protected areas and clarifying land tenure. Silva and her team at the Ministry of the Environment took responsibility for coordinating a massive expansion of the national system of protected areas, which were classified into indigenous territories (land reserved for exclusive occupation and use by the indigenous population) and conservation units (parks, biological and wildlife reserves, and areas designated for sustainable use).

Early on, Silva, Azevedo, and the rest of the team realized that management of the conservation units designated for sustainable use would require a better legal framework. The team also needed stronger laws to regulate public forests outside the conservation units. In 2004, Azevedo began working with the National Commission of Forests alongside congressional representatives, business, civil society, scientists, and indigenous community representatives to draft the Law on the Management of Public Forests for Sustainable Use (Lei de gestão das florestas públicas para a produção sustentável). After a period of public consultation, the commission sent the bill to Congress in February 2005, where the bill passed in January 2006. The law created the Brazilian Forest Service, which would have responsibility for (1) managing sustainable production within public forests, (2) the Registry of Forests, (3) plans for community forest management, and (4) the national system of forest concessions.
The law tightened the rules governing the bidding process for forest concessions across public lands and, in a break from previous bidding processes, allowed the Brazilian Forest Service to take into account the potential environmental impact of bids and those bids' social impacts in addition to the usual financial considerations.10 "We wanted to run people out of the illegal logging business and the utilization of logs that came from illegal deforestation-in favor of sustainable management plans," Azevedo said. "And to have sustainable management plans, we needed to have rules on how to operate on public lands. So, this whole process led to the law for using the public lands, which implemented the concessions program and the plan to promote sustainable management of community lands." From 2003 to 2010, the ministry designated more than 500,000 square kilometers of conservation units on previously undesignated federal and state land-split between areas under full conservation (such as national parks and wildlife refuges) and those that allowed sustainable, licensed extraction (such as national forests and sustainable development reserves). The office of the presidency also designated 100,000 square kilometers of indigenous territory for protection during the same period. Establishing new protected areas was a slow process. "Creating these areas required several months of debate between the federal government, the state government, the local government, and the population," recalled Pires, Silva's director of deforestation policy. "Brazilian law requires that we consult with the population when we create new protected areas, and some of the municipalities we had to consult with were very remote." To reach populations spread throughout the Amazon region, ministry officials developed a trickle-down process. They would first explain the proposed policy to a small group of local representatives, who would then spread out across the municipality and hold their own consultation sessions. Silva's team built on work started at the ministry under former president Fernando Henrique Cardoso and used an existing World Bank project to help fund the expansion. The project, which had begun in August 2002, committed US$81.5 million over a five-year period, with the aim of increasing by 10% the amount of protected land in the region. During the first phase of the action plan, the National Institute for Colonization and Agrarian Reform (Instituto Nacional de Colonização e Reforma Agrária), issued an administrative rule that required holders of properties larger than 100 hectares (1 square kilometer) to reregister their properties within 120 days. Landholders who failed to submit the proper documentation by the deadline had their property registrations frozen until they did so, meaning that they were not permitted to sell the land or access rural credit for the property. During the first few years of the program, the federal government froze more than 70,000 rural property registrations.11 Most of the protected areas were within the arc of deforestation and were meant to build a green-wall buffer zone against any northward expansion of the arc into the better-preserved areas of the Amazon. Designating an area of land as a conservation unit or an indigenous territory helped simplify the legal process of dealing with illegal occupation of public land by placing a higher burden of proof on occupiers claiming tenure. Law enforcement also prioritized protected areas-ahead of other land. 
Linking monitoring to enforcement

Protecting thousands of square kilometers of forest required new tools. "One of the first gaps we identified as a priority was monitoring of the forest," Simoes said. "We wanted to go beyond producing an annual rate of deforestation. We needed a type of monitoring that would give us information more quickly so the police and the government could take faster action." In May 2004, just two months after completion of the action plan, the federal space agency debuted a new satellite monitoring system that enabled the government to identify Amazon forest clearing far more quickly. The Real-Time System for Detection of Deforestation (Detecção de Desmatamento em Tempo Real, or DETER) provided deforestation updates every 15 days rather than the yearly data produced by the agency's previous satellite imagery operation. The timeliness of the information more than made up for the new system's less-precise imagery. (It could sense deforestation only in areas greater than 25 hectares, compared with 6.25 hectares in the old system.) The DETER system went live in time to detect the second-highest recorded annual level of deforestation (27,000 square kilometers in 2004), which underscored the need to respond quickly to the problem. To manage the flow of new satellite information, IBAMA established a monitoring center where analysts received alerts from the new system, evaluated the urgency of each instance of apparent deforestation, and referred cases to IBAMA offices in the affected areas. The new system, which used satellite data from the US National Aeronautics and Space Administration's multinational Earth Observing System, represented a crucial early step in the implementation of the action plan. IBAMA and other law enforcement agencies no longer had to rely on potentially unreliable human reporting. They could respond to deforestation in progress in even the remotest areas of the Amazon region. The ability to catch violators in the act made for stronger legal cases under Brazil's Forest Code. To react quickly to the new intelligence and deal with physical opposition from land grabbers and illegal loggers, IBAMA had to build greater enforcement capacity. "The resistance was enormous," Simoes recalled. "The local people armed themselves against the government, they set fire to IBAMA offices, they would close down roads, and they would block bridges so the IBAMA teams couldn't reach certain areas. It wasn't easy." Those developments underscored the need to upgrade the skills and training of enforcement officers. Luciano Meneses Evaristo, director of environmental protection at IBAMA, recalled that "many of the IBAMA officers were semi-illiterate. They were not physically conditioned to face the war zones we had in the rural areas, and they did not have the skills to collect evidence to build a report that we could file to fine the offender." In addition to deficiencies in training, IBAMA and other federal and state environmental protection agencies had to confront the long-standing corruption problem. "At that time in these institutions, there was no system to fight corruption," Evaristo said. "Without technology, [agents] could do anything they wanted. With a paper system, they could give the offenders fines that were very high and then negotiate the high fines down to get bribes." Silva had recognized the scope of the problem early on. "In 2003, when we got to the ministry, there were a lot of letters and calls saying there was corruption in the process," Azevedo said.
Concerned that an internal investigation would lack independence, "Marina decided that anything we received related to an accusation of corruption or wrongdoing, we would send directly to the federal police," he said. For two years, Silva's team continued to forward the reports to the federal police, and nothing happened. "We were wondering what they were doing with it, because we didn't hear about anything," Azevedo said. Then, in 2005, the federal police and prosecutors launched a sweep to clean up environmental enforcement, with the cooperation of the Ministry of the Environment and IBAMA's comptroller. The operation, called Curupira after a mythological creature of Brazilian folklore, targeted a corruption ring within IBAMA's offices in the state of Mato Grosso and the state's environmental secretariat. The ring had sold timber-transport permits on the black market, facilitating the sale of illegally harvested timber. Among those arrested were the head of the office and the state's secretary of the environment, as well as numerous businesspeople who had allegedly paid the bribes. "It was the largest environmental operation ever done by the federal police," Azevedo said. "They explained to us, 'That's why you didn't hear from us for two years. We were preparing that case.' But if we hadn't decided two years earlier that we needed that independence to check for corruption, it never would have happened." Similar anticorruption operations followed. From 2004 to 2008, the federal government arrested more than 600 civil servants who had committed environmental crimes.12 "We started this new cycle to fight corruption and those who were corrupting these people," said Evaristo, who dealt with internal affairs for IBAMA during that period. "After Mato Grosso, we went to Rondônia, Pará, and other states. We were able to minimize that rotten part of the institution." Transparency and technology helped alleviate some opportunities for corruption. In 2003, during the drafting of the action plan, the government for the first time had released to the public satellite images of deforestation. Beginning in December 2004, the federal space agency began to release data monthly from its new system, and IBAMA, too, began to release reports of completed enforcement operations. The publication of that kind of information enabled other parts of the federal government, civil society, and other interested parties to monitor the effectiveness of responses to developing problems and to question any failure to do so. The involvement of the federal police and the Ministry of Justice in the enforcement process allowed for easier coordination with IBAMA. In the short term, due in part to capacity restraints and in part to anticipated violent resistance, IBAMA relied heavily on the federal police, the federal highway police, and the army to help shut down the largest illegal logging operations. The IBAMA enforcement team identified the nine worst hot spots in the arc of deforestation and set up bases of operation in those areas. "We used to come into an area for 21 days, and the illegal deforesters would hide their equipment and wait for us to leave," Evaristo said. "But this time, they eventually realized we wouldn't leave anymore." These steps had an immediate impact on the effectiveness of enforcement actions. 
From 2000 to 2003, IBAMA issued an average of approximately R$500 million (US$206 million) a year in fines for illegal deforestation; in 2004, it issued approximately R$750 million (US$257 million) in fines; and in 2005, the total was about R$1.75 billion (US$722 million). IBAMA began to hire more environmental enforcement agents to staff the nine hot spots its leadership had identified. Recruits had to pass the federal government's civil service exam before they could qualify for training. As the agency required agents to learn higher-level technology and more-rigorous law enforcement techniques, many less-qualified veteran officers decided to retire rather than go through the additional training needed to raise their skill levels. "With the arrival of new environmental analysts, the inspection process was improved and the space for the semi-illiterate and those with low levels of education was dramatically reduced," Evaristo said. "Law enforcement was based on the new technology available through satellite imaging, and those officers with low levels of education were not able to use this technology."

Tackling the supply chain

In the early years of implementing the action plan, Lula's administration focused more on creating additional protected areas and tackling illegal logging than on addressing deforestation problems related to cattle ranching, soy cultivation, and the economic chain behind the pressure to clear land. Civil society environmental movements, however, had other ideas. In April 2006, Greenpeace released a report called Eating Up the Amazon, which linked deforestation in the Amazon caused by the Brazilian soy industry to Cargill, a US-based agribusiness giant, and McDonald's, the US fast-food chain, and called for them to end purchases of Brazilian soy. "We went to McDonald's, which was the final consumer in the supply chain before individual people, and we said two things," said Marcio Astrini, Greenpeace's campaigner for the Amazon. "First, 'Your money is financing deforestation in the Amazon,' and second, 'There is a way to produce soy in the Amazon without deforestation. If you help us make Cargill follow this new model, we'll have an agreement. Otherwise, you are an accomplice to this problem, and we will share that with your consumers.'" Both companies pledged to stop buying soy from the Amazon region unless they could be certain it was not linked to illegal deforestation. Soy industry trade associations quickly reached out to the Ministry of the Environment and Greenpeace to work out a solution. In July 2006, the Brazilian Association of Vegetable Oil Industries and the National Association of Cereal Exporters agreed that they would not buy soy from farmers who deforested in the Amazon region after that date. The two trade associations also worked out a separate agreement with the Ministry of the Environment and a number of civil society groups that stipulated they would work with local producers' unions to help soy producers comply with the Forest Code. In turn, the civil society groups would provide technical advice. The Ministry of the Environment, for its part, would help state environmental agencies implement a state rural environmental registry (Cadastro Ambiental Rural, or CAR), the tool that helped soy exporters determine whether their suppliers met the requirements.
CAR required producers to submit documentation to state environmental agencies showing the boundaries of their properties, the legal reserves they had maintained, and restoration plans for any areas that had been deforested illegally. "Even though soy was not as big a driver as cattle ranching, the potential was there to cause deforestation," Astrini said. "We decided to start with the soy area, because it was the preparation for our becoming able to deal with cattle ranching later on." Three years later, Greenpeace applied the tactic to the cattle industry. In 2009, the group published a report called Slaughtering the Amazon and with a consortium of other nongovernmental organizations demanded a similar moratorium by slaughterhouses and beef exporters on cattle raised on illegally deforested pasture. "If you go to a slaughterhouse, you won't find a single chain saw there, but they buy from a thousand farms in the Amazon," Astrini said. "The decision to work with the slaughterhouses was strategic, and we tried to use economic power there to influence those who deforest the areas. We put pressure on the slaughterhouses to make it difficult for them to sell products that come from deforested areas." Unlike the soy industry, the cattle industry faced legal as well as social pressure to reach a solution. That same year, federal prosecutors and IBAMA agents in the state of Pará filed charges against slaughterhouses that they alleged had bought cattle from suppliers that had deforested illegally after 2008. They also reached out to the slaughterhouses' customers, advising them to avoid the slaughterhouses' products or risk charges themselves. The slaughterhouses, federal prosecutors, and the Pará state government began negotiations. Brazil's four largest slaughterhouses agreed to a moratorium on cattle raised on illegally deforested land and to implementation of a tracking system that would enable them to determine the origins of their suppliers' cattle. Greenpeace would monitor the implementation of the moratorium and tracking system. The slaughterhouses agreed to buy cattle only from properties that had completed CAR documentation. The state government agreed to computerize those registries and take other measures to speed implementation.

Shifting strategy

By 2007, deforestation rates had dropped 59% from 2004 levels due in large part to the new protected areas and the stronger monitoring and enforcement of illegal deforestation in those areas. That year, however, the National Institute for Space Research issued a warning that its monitoring system had detected an increase in the rate of deforestation, coinciding with rising prices for beef and soy. The 2004 action plan was scheduled to enter its second phase in 2009, so in early 2008, Silva and her team at the Ministry of the Environment began to assess their progress and plan for the following phase. Silva and her team realized that the nature of deforestation was changing. "In the beginning, when we started the plan, 80% of the deforestation was happening in large areas, concentrated in a certain number of municipalities," Azevedo said. "By 2007, the share of small-scale deforestation, less than 100 to 200 hectares, was growing, and we were starting to see this spreading out to more municipalities." In December 2007, Lula signed a decree that authorized Silva and her team to publish a list of municipalities that were the worst offenders with regard to deforestation.
Those municipalities would receive priority attention from IBAMA and other law enforcement agencies. They could not receive permits for legal logging, and the National Institute for Colonization and Agrarian Reform would not allow any reregistration of rural properties while a municipality remained on the list. In each municipality, properties with illegal deforestation were placed under an embargo that cut off the owners from agricultural subsidies and outlawed the sale of those properties or products produced there. For instance, federal prosecutors warned that enforcement officers had the authority to confiscate cattle that slaughterhouses bought from such landowners. In January 2008, Silva published a list of 36 municipalities that were responsible for more than 50% of total deforestation, though they accounted for only 6% of private land in the Amazon region.13 Silva and her team targeted municipalities based on those municipalities' total areas of cleared forest, the amounts of forest they had cleared in the previous three years, and whether their rates of deforestation had increased in at least three of the previous five years. To get removed from the list by 2009, for example, a municipality had to register 80% of its privately held land under the CAR system, bring its deforestation rate in 2008 to 40 square kilometers or less, and reduce average deforestation in 2007-08 to less than 60% of the average deforestation during 2005-06. "We made the municipal governments part of the deforestation policy because they were closer to areas [of illegal deforestation]," Pires said. "They knew the lay of the land and the farmers best." After publication of the blacklist, IBAMA agents made those municipalities priority targets for law enforcement efforts, effectively ending any large-scale illegal timber operations in those areas. The following month, Brazil's central bank tightened access to rural agricultural credit by requiring landholders in the Amazon region to complete CAR documentation and submit it to their state environmental secretariats in order to qualify for government-subsidized rural agricultural credit from banks or credit unions. The measure cut off a major source of funding for both legal and illegal agriculture- and ranching-linked deforestation across the region, at least temporarily, as landholders worked to complete registrations. The central bank resolution also helped federal and state environmental agencies build a database that linked deforested land to individuals based on the declarations required by the CAR process. In a country where many landholders lacked official documentation of land tenure, tying responsibility for deforestation to individuals was difficult. To receive CAR registration, property holders had to submit an account of existing deforestation on their properties, including whether they had maintained their legal reserves. If they had not maintained those reserves, they had to submit plans to reforest. Focusing on CAR enabled the Ministry of the Environment to sidestep the problem of ambiguous landownership in many parts of the Amazon region. Azevedo said, "If you were using the land, you could assume environmental responsibility over the land to be able to manage it-without any guarantee that because you are assuming environmental responsibility, you will get the land title . . . We had tons of people declaring they were responsible for land. If there was an overlap, they had to sort it out among themselves." 
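The exit rules for blacklisted municipalities described above amount to a small set of threshold tests. The sketch below encodes them purely as an illustration of that logic; the function and parameter names are hypothetical and do not come from any official government system.

```python
def meets_2009_removal_criteria(
    car_registered_share: float,           # share of privately held land registered under CAR
    deforestation_2008_km2: float,         # forest cleared in 2008, in square kilometers
    avg_deforestation_2007_08_km2: float,  # average annual clearing over 2007-08
    avg_deforestation_2005_06_km2: float,  # average annual clearing over 2005-06
) -> bool:
    """True if a blacklisted municipality meets all three exit conditions described
    in the text: 80% CAR coverage, 2008 clearing of 40 km2 or less, and a 2007-08
    average below 60% of the 2005-06 average."""
    return (
        car_registered_share >= 0.80
        and deforestation_2008_km2 <= 40.0
        and avg_deforestation_2007_08_km2 < 0.60 * avg_deforestation_2005_06_km2
    )

# Illustrative figures only
print(meets_2009_removal_criteria(0.85, 32.0, 45.0, 90.0))  # True: all three tests pass
print(meets_2009_removal_criteria(0.85, 32.0, 60.0, 90.0))  # False: 60 km2 is not below 60% of 90 km2
```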
In 2008, it also became clear that issuing more fines would not serve as a long-term solution to Amazon deforestation. Those caught clearing land could appeal the fines through a process that could take years, and during that time, the offenders often continued to cut down trees. In some areas, particularly the state of Pará, enforcement agents met violent resistance when they attempted to shut down illegal timber operations. IBAMA's leadership continued to upgrade the skills of field agents by tightening job requirements and providing more-rigorous training on law enforcement tactics and intelligence techniques. In addition, a presidential decree in July 2008 handed IBAMA agents a number of new tools. The decree sped up IBAMA's authority to penalize deforesters by clarifying administrative procedures, and it allowed the agency to (1) publicly identify the owners of properties that had been deforested illegally and (2) disclose their names to the federal agencies that controlled access to credit. The same decree reaffirmed the right of enforcement agents to seize, disable, or destroy tractors, chain saws, and other equipment found in use for illegal deforestation. Disabling or destroying the equipment avoided the complex legal hurdles required for the seizure of property.

Enlisting state and local governments

In May 2008, Silva resigned from the Ministry of the Environment, and Lula appointed a well-known environmentalist, Carlos Minc, to succeed her. A founding member of the Green Party, Minc was a senator representing Rio de Janeiro. Minc became minister as the action plan transitioned to its second phase in late 2008 and early 2009. The new phase called for Minc and his team at the Ministry of the Environment to bring state and municipal governments into the fight against deforestation. That task was not easy. Though the original federal action plan called for state governments to develop their own plans, as of early 2008 no state in the Amazon region had done so. Some Amazon region states resisted because their economies rested on industries that relied on deforestation. In Mato Grosso, for example, Governor Blairo Maggi was nicknamed "King of Soy" because his family owned Brazil's largest soy production company. After he came to office in 2003, Maggi encouraged land clearing across the state as a way to expand the state's agricultural economy. At the municipal level, many local mayors were involved in the timber industry or politically connected to those who were. Even in municipalities where government leaders did not have direct economic ties to businesses that relied on deforestation, cooperating with the requirements of the federal action plan was often politically difficult. "In the beginning, we didn't expect resistance from the state and local governments, which we thought were our partners," Leiza Dubugras of the Casa Civil said. However, she added, "when we shut down the timber businesses, we would have a lot of people who were unemployed, and those people would go to the city halls and complain. The municipal governments didn't have money to deal with this issue, so they would go to the state, and the state couldn't deal with it either, so they'd go to the federal government. The impact of this was that the people we thought were our partners, the local governments and the state governments, were not in fact partners 100%, because of these issues." The federal government was able to offset some of the negative side effects.
During his two terms in office, Lula massively expanded social welfare and antipoverty programs to provide some relief for the unemployed. The federal government also created short-term jobs on infrastructure projects and in other public works. To make the final push to reduce illegal deforestation, Minc and his team had to bring state governments on board. "We encouraged the state governments to create their own plans to fight deforestation," Pires said. "We had to invest even more in this coordination effort to deal with the conflicts and to deal with the politics." Over time, the Casa Civil pressured state governments to implement their own antideforestation plans, Dubugras said. Restrictions on the sale of soy and cattle linked to illegal deforestation provided economic motivation. In Pará, the agreement the federal prosecutors mediated between slaughterhouses and the state government following the Greenpeace report included the state's commitment to support the federal government's action plan and to develop its own plan. Some help came in 2009 through Brazil's new national plan on climate change. As part of a broader commitment to reduce greenhouse gas emissions by at least 36% by 2020, Brazil formally pledged to reduce its yearly deforestation rate by 80% to 3,920 square kilometers, down from the average of 19,600 square kilometers from 1996 to 2005. "This event was a major landmark for us because we started setting targets in our work for the first time," said the Casa Civil's Eck. At a 2009 international climate conference in Denmark, the Brazilian government made that commitment on the international stage as part of the Copenhagen Accord. At the time of the pledge, Brazil was already more than halfway to achieving its goal. In exchange for that commitment and other emissions reduction targets, the government of Norway agreed to conditionally grant Brazil US$1 billion to implement related projects. The money was to go to the Brazilian National Development Bank's Amazon Fund, but only if Brazil's deforestation rate continued to decline. The fund, first designed in 2007 and established in August 2008, provided financial support for federal, state, and municipal governments; civil society; and private companies for projects to prevent, monitor, and combat deforestation. The Amazon Fund, boosted by Norway's cash infusion, provided a powerful incentive for state governments to set up their own plans. "One of the rules of the fund was that every institution could present projects to get funds from the Amazon Fund, but the state governments could present projects themselves only if they had a state plan to combat deforestation," Azevedo said. "[States] also have a seat on the board of the fund, but they can vote only if they have a plan to combat deforestation. So in a matter of nine months, all the states presented their state plans."

OVERCOMING OBSTACLES

As minister of the environment, Silva had ignited a firestorm when she shifted strategy, with the support of President Lula's decree and the central bank resolution, to restrict credit and target landowners in municipalities responsible for most of Brazil's deforestation. Opposition from cattle ranching, agricultural interests, and landowners in the Amazon had been growing behind the scenes, but Silva's new strategy brought her opponents into the open.
The resolution was controversial-particularly with the Ministry of Agriculture, where officials worried that the credit restrictions were unfair to producers in blacklisted municipalities that had obeyed the law. Ministry officials were also concerned that the resolution could expand beyond the Amazon to the country as a whole. "The Ministry of Finance helped on this because, they said, we should do this for all of Brazil and because we should not accept that illegal operations receive money from us," Azevedo recalled. "When the Ministry of Finance started to say that it made sense for the whole country, the Ministry of Agriculture accepted that we could do it just for the Amazon." Although the central bank resolution went through, political opposition to the strategy continued to build. "We had very strong opposition from the governor of Mato Grosso [Blairo Maggi] and the Minister of Agriculture at that point," Azevedo said. "They were pushing the president to step back on the decree." The president was more susceptible to that pressure than he had been in the past. The 2006 election had left Lula on shakier political footing in his second term, because congressional losses by his Workers' Party forced him to build broader coalitions. "In the 2006 elections, the Congress changed immensely in its composition, and the ruralistas got a lot of seats in Congress; so, in 2007, in the first year of the second term, they were very strong and they were building momentum," Azevedo said. Led by Senator Kátia Abreu, head of the Brazilian Confederation of Agriculture and Livestock, the ruralistas raised concerns among environmentalists by proposing revisions to the Forest Code. "Marina had started to receive signs that the president might actually step back on some of his decisions for political reasons," Azevedo said. The possibility that the president could reverse his decree creating the credit restrictions put Silva in a difficult position. According to Azevedo, Silva was concerned that if Lula reversed the decree while she was in office, her presence would legitimize the move. But she strategized that if she left office, he would have to take sole responsibility for weakening environmental protection. "If [Lula] stepped back, people would say the commitment to ending deforestation was there only because Marina was there, not because the president made a commitment," Azevedo said. "If he reversed the decree with Marina outside the government, then all the credit for the deforestation plan would go to Marina." In May 2008, Silva resigned as minister of the environment, surprising many both within and outside the government. Her announcement raised widespread concern about Brazil's political will to continue the war against deforestation. In her resignation letter, Silva outlined her team's achievements and noted increased opposition to her efforts but did not identify those she considered opponents. "The measures we adopted show a clear and irreversible path to make the social and environmental policy and the economy into one single agenda," she wrote. She cited "growing resistance to our team in important sectors of government and society" and announced her intention to return to the legislature so she could seek "crucial political support to consolidate all that we have achieved and to advance the implementation of the environmental policy."14 Lula helped defuse some of the public's concerns by appointing Minc to succeed Silva. 
As a founder of the Green Party, Minc had the legitimacy to carry on Silva's work. "When Marina resigned, the president had to put in someone who was also significant in the environmental movement," Azevedo said. "Marina was an icon." Minc's appointment also underscored the president's commitment to implementing the action plan. "Minc actually put as a condition to assume the position as minister that the president maintain the decree," Azevedo said. "He would not accept that on the first day he's minister, the president would step back on the most important tools he would have to fight deforestation." Minc quickly demonstrated his commitment to fighting deforestation. He accelerated plans for Operation Boi Pirata (Pirate Ox). Federal agents followed through on earlier threats and seized cattle raised on illegally deforested land. In June 2008, the month after Silva's departure, IBAMA agents seized 3,500 head of cattle in Pará and promised more seizures across the region.

The Forest Code under fire

Silva's resignation reinforced political commitment to the action plan through the end of Lula's second term in 2010 but did not guarantee that that commitment would remain in place through a new administration. At the end of 2010, Brazilians elected Lula's chief of staff, Dilma Rousseff, the Workers' Party candidate, as president of Brazil. Rousseff's governing coalition included many ruralistas, who won a majority in the Congress. Minc had left the Ministry of the Environment during the campaign period to run again for state office in Rio de Janeiro, and the ministry's executive secretary, Izabella Teixeira, took his place as minister. In her new position, Teixeira had the task of leading the Ministry of the Environment's implementation of the final phase of the action plan from 2012 to 2015. In contrast to Silva and Minc, who had built political careers on environmental issues, Teixeira was a technocrat who had worked her way up through IBAMA and the Ministry of the Environment. In a move that raised concerns among environmentalists, Rousseff took coordination of the action plan out of the domain of the Casa Civil, assigning the responsibility to the Ministry of the Environment. Because the environment ministry had little formal control over other ministries, Rousseff's decision led to worries that Teixeira would lose buy-in from those ministries and endanger the implementation process. Teixeira's challenge broadened and deepened quickly as the ruralista majority in Congress sought to revise the Forest Code. In earlier years, landowners in the Amazon region and their ruralista representatives in Congress had pushed for revisions to the Forest Code but failed to garner sufficient support. Following the 2010 election, that changed. "There was pressure to review the decisions that made many of the agricultural producers illegal," Teixeira said. "There was an effort to try to end the Forest Code in Brazil, forgive everyone, and eliminate instruments such as the permanent protection areas and the legal reserves." In April 2012, both houses of Congress passed Forest Code revisions that environmentalists said would cripple forest protections and usher in a new era of deforestation in Brazil.
Abreu, the senator who headed one of the country's largest agricultural lobbying organizations, argued that the revisions would end "environmental dictatorship."15 Though the revisions left intact the requirement for an 80% legal reserve, they provided amnesty from fines for any illegal deforestation prior to July 2008 and allowed continued cultivation on land deforested prior to that date. The revisions also called for less land to be set aside for permanent preservation around riverbanks and other erosion-vulnerable areas, thereby reducing buffers from 100 meters to 15 meters in some cases. On the positive side, the revisions also proposed mandatory CAR registration for all rural properties throughout the country. Once completed, the database of CAR registrations would lead to more-efficient monitoring and clearer liability for deforestation and therefore pave the way for a better national system of forest management. As environmental organizations staged protests and called for Rousseff to immediately veto the bill, the president and Teixeira went to the negotiating table with Congress. Rousseff's administration would need the ruralistas' support for other areas of its agenda, making a compromise vital. "We took on the negotiations in very unfavorable conditions compared with 10 years ago," Teixeira said. Rousseff ultimately vetoed portions of the changes but allowed others to go through. She struck language that granted amnesty from fines to those who had illegally deforested prior to July 2008, but for smaller farms, she approved exemptions from the stricter obligations of larger landowners to recover land illegally deforested prior to that date. Those larger landholders who met their recovery obligations could also have their fines forgiven. Opponents argued that the changes unfairly penalized landowners who had obeyed the law prior to 2008, because they would be required to maintain their full reserves, whereas those who had previously overcleared land could use a greater proportion of their properties. They also argued that the 2008 amnesty provision created expectations for landowners that future deforestation might be forgiven later on, which could lead to increased land clearing. "You had a politically weak government with a very strong ruralista sector in the Congress, and that was a moment," Azevedo said. "They found the momentum where they could push for a Forest Code that in other circumstances they would never have received. We lost that one, and we lost badly."

ASSESSING RESULTS

At its most fundamental level, Brazil's 2004 action plan succeeded in reducing deforestation in the Amazon. In 2004, the nation saw its second-highest deforestation rate since it began collecting data in 1988. From 2004 to 2014, the federal government reduced annual deforestation by 75% from the yearly average of 19,600 square kilometers from 1996 to 2005 (figure 2). To reach the 80% target reduction set by the national climate change plan and in the Copenhagen Accord, the country needed to bring annual deforestation down to 3,920 square kilometers by 2020. Scholars attribute most of Brazil's reduction in Amazon deforestation directly to the action plan that began in 2004. Some observers argued that the shifting prices for commodities such as beef and soy may have contributed to much of the improvement in deforestation rates from 2004 to 2009.
In an empirical analysis that controlled for the prices of agricultural outputs, however, Juliano Assunçao of the Climate Policy Initiative and his team found that under the action plan, conservation policies-rather than fluctuations in commodity prices-were responsible for the bulk of the reduction in deforestation.16 In a related analysis, the Climate Policy Initiative team analyzed whether the restriction of rural credit affected deforestation rates from 2009 to 2011. The researchers estimated that without the restriction, more than 2,700 square kilometers of additional forest would have been cleared during that period.17 From 2004 to 2011, the Brazilian government nearly doubled the area of protected land, increasing the total by 250,000 square kilometers to cover 520,000 square kilometers, with the goal of reaching 600,000 square kilometers by 2018. During the same period, state-protected areas expanded by 250,000 square kilometers.18 The federal government also awarded concessions for legal logging on approximately 490 square kilometers of public forest under the Sustainable Forest Management program.19 Under Rousseff's administration, however, some of those protected areas became redesignated, drawing criticism from environmental groups. "Recently, we've had some setbacks because of pressure from other sectors and the view of the current government that protection should be more balanced with development initiatives," said the Instituto Socioambiental's Ramos. "We're living in a very challenging moment." On the enforcement side, from 2004 to 2011, IBAMA carried out 649 operations that resulted in fines of R$7.2 billion (US$3.6 billion), and seized 864,000 cubic meters of timber. IBAMA officers had arrested more than 600 individuals who committed environmental and "public-order" crimes, including some of their own.20 Another quantitative analysis by the Climate Policy Initiative team concluded that law enforcement prioritization of blacklisted municipalities had a far greater impact on reducing deforestation in those municipalities than did either the CAR requirements or restrictions on the sale of products from illegally cleared land.21 As the IBAMA agents adapted, however, so did the deforesters: loggers and land grabbers realized that the DETER satellite-monitoring system would quickly uncover clear-cutting, so they began to cut down trees on smaller, disparate pieces of land, leaving the tallest trees to shield themselves from overhead view. IBAMA and the National Institute for Space Research began using higher-definition images to identify that type of multipoint deforestation. Not all facets of the CAR program could be marked as successful. As of late 2014, it was too early to tell whether CAR-mandated reforestation plans would lead to the massive forest recovery the action plan called for. Under the national system, landholders had 20 years to meet the requirements. In 2009, before the new Forest Code created the nationwide CAR system, the federal space agency found that at least 20% of deforested land in the Amazon was regrowing. The space agency was unable to determine, however, what portion of that land was deliberately under reforestation and what portion had been cleared for timber and then simply abandoned. Some observers predicted that many of the landholders who were required to reforest under their individual CARs would wait to see whether the federal government would enforce penalties in the future. 
The 2012 changes to the Forest Code contributed to the problem by absolving some landowners of reforestation responsibilities for land deforested prior to 2008. "Some people are saying, 'We didn't have to pay before, so we can go on and do it again because there will be new legislation in five years,'" Ramos said. Full supply-chain monitoring also was incomplete. Though Brazil's largest meat and soy processors implemented the agreed-upon tracking systems, some smaller companies had not done so. IBAMA's system for tracking legally harvested timber also had serious flaws, opening the door for covert sales of illegal timber.

Struggling for a model of sustainable development

As of late 2014, in the third phase of the action plan, the Ministry of the Environment was still in the early stages of implementing policies to promote more-sustainable agricultural and other economic activities, despite a 2015 deadline. In the long term, maintaining low deforestation rates meant the federal government had to provide viable alternatives to the economic activities that had fueled the problem in the first place. The Ministry of Agriculture was just beginning to implement a program of credit specifically for low-carbon agriculture, and results were not yet available. Also by late 2014, the federal government had not yet succeeded in resolving the problem of uncertain land tenure in the Amazon region. Under the action plan, the Ministry of Agriculture and the National Institute for Colonization and Agrarian Reform had mapped 25,618 rural properties for registration, well short of the goal of 300,000. Nor had the National Institute for Colonization and Agrarian Reform met the action plan's goal of promoting more-sustainable ranching and agricultural activities across the region.

REFLECTIONS

During the decade after the Action Plan for Prevention and Control of Deforestation in the Legal Amazon was first implemented, the Brazilian government successfully reduced deforestation in the Amazon rain forest. Those involved in the project attributed their success to political commitment from President Luiz Inácio Lula da Silva and his cabinet and to the leadership of Minister of the Environment Marina Silva. "Marina's charisma and the respect she commanded contributed to the creation of this environment [for change] and allowed the policies to come to fruition, even after she left the ministry," said Mauro Oliveira Pires, former director of deforestation policy at the Ministry of the Environment under Silva, Carlos Minc, and Izabella Teixeira. He added that continuity under subsequent ministers also was important. "Minc maintained the policy and expanded it to other biomes." Of the action plan's myriad changes to Brazil's framework for environmental governance, the key contributions to reduced deforestation were (1) expansion of protected areas, (2) a nearly real-time monitoring system, (3) more-effective environmental law enforcement, and (4) elimination of federal agricultural subsidies for production on illegally deforested land. Transparency of the data on the places deforestation was occurring also reduced opportunities for corruption and raised public awareness about the seriousness of the problem.
Silva and her staff at the Ministry of the Environment played a key role in both designing and implementing the action plan, but the interministerial nature of the plan enabled the federal government to tackle the problem of Amazon deforestation in a more comprehensive and coordinated way than it could in previous efforts. After Silva resigned from the Ministry of the Environment in 2008, government leaders sustained political will and momentum by reaffirming in the 2009 Copenhagen Accord Brazil's commitment to curb deforestation as part of a national climate policy and on the world stage. Though the action plan had nearly achieved the national target for reduction in the rate of Amazon deforestation, implementers still faced challenges in 2014. The federal government had largely failed to address the insecurity of land tenure that had fueled illegal land grabbing and land clearing in federal and state forests outside the protected areas. In addition, antideforestation efforts had failed to develop either (1) systematic strategies to encourage agricultural intensification on legally cleared land or (2) other sustainable economic alternatives for rural communities that had previously relied on logging for income. The failure to provide alternative sources of income threatened the long-term sustainability of Brazil's lowered rate of deforestation. "We were able to prove that it is possible to grow economically and reduce deforestation at the same time," said Johannes Eck, deputy chief of staff at the Casa Civil, which coordinated implementation of the action plan. In the short term, Eck noted, the federal government offset the loss of jobs in illegal industries through infrastructure projects in the Amazon region. But he added, "Now that the economy is stagnant, we are concerned that the lack of growth is going to stimulate illegal economic activities again, and we're going to increase deforestation because we're not going to have as many legal jobs." Teixeira said that most of the new land clearance was coming from illegal logging, which she linked to broader economic issues in the region. Designing a more sustainable economic model for the Amazon region was her main concern for the future. "We have to protect the environment, but we cannot forget that to achieve that result, we must be engaged in social and economic policy in such a way that we can provoke a new economic base for regional development," Teixeira said. "We cannot look at the forest and forget that we have around 22 million people that live in that region."
References

1. "Plano de Ação para a Prevenção e Controle do Desmatamento na Amazônia Legal," Presidência da República, Brasília, Brazil, March 2004.
2. Ibid.
3. Gary S. Becker, "Brazil: If Lula Wins, Free Markets Will Survive," Bloomberg Businessweek, October 20, 2002; http://www.businessweek.com/stories/2002-10-20/brazil-if-lula-wins-free-markets-will-survive.
4. All currency conversions are based on historical average exchange rates for the period in question.
5. Juliano Assunção, Clarissa Gandour, Romero Rocha, and Rudi Rocha, "Does Credit Affect Deforestation? Evidence from a Rural Credit Policy in the Brazilian Amazon," Climate Policy Initiative, January 2013; http://climatepolicyinitiative.org/wp-content/uploads/2013/01/Does-Credit-Affect-Deforestation-Evidence-from-a-Rural-Credit-Policy-in-the-Brazilian-Amazon-Technical-Paper-English.pdf.
6. "Economic Survey of Brazil 2005," Organization for Economic Co-operation and Development, 2005.
7. Juliano Assunção, Clarissa Gandour, Romero Rocha, and Rudi Rocha, "Does Credit Affect Deforestation? Evidence from a Rural Credit Policy in the Brazilian Amazon," Climate Policy Initiative, January 2013, op. cit.
8. Ministry of the Environment; Ministry of Agriculture, Livestock, and Supply; Ministry of Science and Technology; Ministry of Defense; Ministry of Agrarian Development; Ministry of Development, Industry, and Foreign Trade; Ministry of National Integration; Ministry of Justice; Ministry of Mines and Energy; Ministry of Labor and Employment; and Ministry of Transport. The Ministry of Planning, Budget, and Management and the Ministry of Foreign Affairs were added in March 2004.
9. "Plano de Ação para a Prevenção e Controle do Desmatamento na Amazônia Legal," Presidência da República, op. cit.
10. Ivan Tomaselli and Alastair Sarre, "Brazil Gets New Forest Law," ITTO Tropical Forest Update, vol. 15, no. 4, 2005.
11. Ministério do Meio Ambiente, "The Brazilian REDD Strategy," Brazil, 2009; http://www.mma.gov.br/estruturas/168/_publicacao/168_publicacao19012010035219.pdf.
12. "Plano de Ação para a Prevenção e Controle do Desmatamento na Amazônia Legal: 2a Fase (2009-2011), Rumo ao desmatamento ilegal zero," Presidência da República, Brasília, November 2009; http://www.mma.gov.br/estruturas/168/_publicacao/168_publicacao02052011030251.pdf.
13. Ane Alencar, Daniel Nepstad, David McGrath, Paulo Moutinho, Pablo Pacheco, Maria Del Carmen Vera Diaz, and Britaldo Soares Filho, "Desmatamento na Amazônia: Indo Além da 'Emergência Crônica,'" Instituto de Pesquisa Ambiental da Amazônia, Belém, 2004; http://www.ipam.org.br/biblioteca/livro/Desmatamento-na-Amazonia-Indo-Alem-da-Emergencia-Cronica-/319.
14. As translated by the World Wildlife Fund for Nature; http://assets.wwf.org.br/downloads/resignitation_letter_marina_silva_eng.pdf.
15. "Kátia Abreu saúda texto do novo Código Florestal e critica 'ONGs inimigas do Brasil,'" Senado Federal, 12 July 2011; http://www12.senado.gov.br/codigoflorestal/news/katia-abreu-sauda-texto-do-novo-codigo-florestal-e-critica-ongs-inimigas-do-brasil.
16. Juliano Assunção, Clarissa Gandour, and Rudi Rocha, "Deforestation Slowdown in the Legal Amazon: Prices or Policies?" Climate Policy Initiative, 6 February 2012; http://climatepolicyinitiative.org/wp-content/uploads/2012/03/Deforestation-Prices-or-Policies-Working-Paper.pdf.
17. Juliano Assunção, Clarissa Gandour, Romero Rocha, and Rudi Rocha, "Does Credit Affect Deforestation? Evidence from a Rural Credit Policy in the Brazilian Amazon," Climate Policy Initiative, January 2013, op. cit.
18. "Plano de Ação para a Prevenção e Controle do Desmatamento na Amazônia Legal: 3a Fase (2012-2015), Pelo uso sustentável e conservação da floresta," Presidência da República, Brasília, June 2013; http://www.mma.gov.br/images/arquivo/80120/PPCDAm/_FINAL_PPCDAM.PDF.
19. Ibid.
20. Ibid.
21. Juliano Assunção and Romero Rocha, "Getting Greener by Going Black: The Priority Municipalities in Brazil," Climate Policy Initiative, August 2014; http://climatepolicyinitiative.org/wp-content/uploads/2014/08/Getting-Greener-by-Going-Black-Technical-Paper.pdf.
UC Nursery and Floriculture Alliance, University of California
Scouting for diseases and environmental monitoring
by Steve Tjosvold
Regular scouting for the earliest occurrence of disease is essential. Check plants at least weekly for symptoms of diseases and record disease occurrence, severity and environmental or other conditions that favored the disease. Learn what diseases are known to be associated with the crops you are growing. Focus attention on diseases that historically are found in your nursery or are supported by the current environmental conditions. Look for off-color or irregular growth, wilting, soft or necrotic areas on leaves or stems. Remove root balls from containers and look for root disease symptoms: necrotic lesions, galls and rots on roots or near the root crowns. A 10x hand lens can be helpful to take a close look at unhealthy plants in the field. Sometimes characteristic spores or other signs of the pathogen might be visible and aid in field identification. Often pathogens are not readily identified in the field because identification and diagnosis may require more analysis and specialized techniques that are only available in a plant pathology laboratory.
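Because scouting records drive later decisions about sampling and preventive treatment, it helps to capture every observation in a consistent format. The following is a minimal sketch of such a record in Python; the field names and the 0-5 severity scale are assumptions for illustration, not a standard from this article.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class ScoutingRecord:
    """One scouting observation for a crop block (illustrative structure only)."""
    scout_date: date
    crop: str
    location: str
    symptoms: List[str] = field(default_factory=list)  # e.g., "wilting", "root lesions"
    severity: int = 0                                   # assumed scale: 0 = none ... 5 = severe
    conditions: str = ""                                # weather, irrigation, other notes
    suspected_pathogen: str = ""                        # left blank until confirmed by a lab

weekly_log: List[ScoutingRecord] = [
    ScoutingRecord(date(2016, 4, 6), "rose", "house 3, bench 2",
                   ["gray mold on petals"], 2,
                   "cool night, overhead irrigation", "Botrytis cinerea (unconfirmed)")
]
```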
Note where unhealthy plants, or unhealthy portions of a plant, are found, and look for patterns. Collect plant samples that represent all apparent developmental stages of the disease. For example, cyclamen infected with Fusarium oxysporum f.sp. cyclaminis might first be noticed when some plants become less vigorous than their healthy neighbors. Corms may show some internal discoloration. With more time, wilting might occur, at first just on one side of the plant. In the final stages of the disease, the leaves become necrotic and the plant collapses. If present, all of these stages should be sampled. Also collect some healthy plants or plant parts that can be used for comparison against the unhealthy material.
Samples may need to be examined more closely by a plant pathologist, with laboratory analysis performed to positively identify any pathogens present in affected plant parts. Commercially available detection kits can be used to detect some pathogens at the nursery, with tests taking as little as 10 minutes. ImmunoStrip® tests (AgDia Inc., Elkhart, IN) use ELISA-based technology in which antibodies recognize proteins unique to specific pathogens such as Phytophthora species, bacteria and viruses. A variant of the ELISA method is a portable lateral flow device known as Pocket Diagnostic® (Abingdon Health Products, York, UK).
Diseases are mostly managed by preventative measures and chemical controls. Fungicides and bactericides usually must be applied before infection occurs. (The exception is powdery mildew diseases, which can potentially be eradicated by some fungicides.) Early detection of disease definitely helps, so protective sprays can be applied to unaffected plants. Yet it is often impossible to see what is actually infected, since overt symptoms may not have developed yet.
Since diseases must be managed preventatively, monitoring conditions that promote diseases and unhealthy plants is very important. Crops should be inspected for issues that might cause plant problems such as under- or over-watering, fertilizer injector problems, pesticide mixing problems, and thermostat inaccuracy or heater breakdowns.
Environmental monitoring, particularly humidity and leaf wetness, is especially important. The majority of fungi, aerial nematodes and bacteria that cause plant diseases require liquid “free” water on the plant surfaces before they can infect the plant. Free water could be in the form of rain, fog, dew, sprinkler irrigation water, syringing water, or even pesticide spray. Moreover, many fungi need high humidity to produce spores. Dew formation is triggered when the surface temperature of a leaf canopy drops below the dew point temperature of the surrounding air (fig 1).
Fig. 1. Dew formed at night on leaves and other plant surfaces creates favorable conditions for pathogen infection. Photo: S. Tjosvold.
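For growers who already log air temperature and relative humidity, the dew-point condition described above can be estimated directly. The following is a minimal sketch in Python using the common Magnus-Tetens approximation; the constants are a standard approximation and the example readings are hypothetical, so treat it as an illustration rather than a calibrated tool.

```python
import math

# Magnus-Tetens constants (a common approximation; values vary slightly by source)
A, B = 17.27, 237.7  # B is in degrees C

def dew_point_c(air_temp_c: float, rel_humidity_pct: float) -> float:
    """Estimate dew point (deg C) from air temperature and relative humidity."""
    gamma = math.log(rel_humidity_pct / 100.0) + (A * air_temp_c) / (B + air_temp_c)
    return (B * gamma) / (A - gamma)

def dew_likely(canopy_temp_c: float, air_temp_c: float, rel_humidity_pct: float) -> bool:
    """Dew is expected when the canopy surface cools to or below the dew point."""
    return canopy_temp_c <= dew_point_c(air_temp_c, rel_humidity_pct)

if __name__ == "__main__":
    # Hypothetical hourly readings: (hour, canopy temp C, air temp C, relative humidity %)
    readings = [(22, 12.0, 14.0, 88), (23, 10.5, 13.0, 93), (0, 9.8, 12.5, 96)]
    for hour, canopy, air, rh in readings:
        print(f"{hour:02d}:00  dew point {dew_point_c(air, rh):5.1f} C  "
              f"dew likely: {dew_likely(canopy, air, rh)}")
```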
Dew formation typically occurs at night in greenhouses that are not ventilated and heated properly, or outdoors on calm, clear nights. The length of time that free water remains on the plant can dramatically affect disease severity by enhancing conditions that favor infection. In Table 1 you can see that the severity of gray mold (Botrytis cinerea) increases with longer wet periods. Other major leaf and stem pathogens are also favored by wet periods of four hours or more of continuous leaf wetness.
Table 1. The effect of wet period on Botrytis cinerea infection of ‘Volare’ and ‘Magic Carousel’ roses.
                 'Volare'                          'Magic Carousel'
Wet period (hours)   Disease index1      Wet period (hours)   Disease index1
        4                1.67                    4                0.33
        5                2.22                    8                0.89
        6                3.56                   12                1.22
       24                4.00                   24                3.67
1Disease index (0 to 5): 0 = no infections, 5 = all or nearly all petal tissue necrotic. Numbers in the same column followed by the same letters are not statistically different. Source: D. Coyier, 1986.
Commercial disease prediction models exist for apple scab, cedar apple rust, potato late blight, tomato early blight, strawberry anthracnose, botrytis fruit rot, citrus brown spot, lettuce downy mildew, and grape powdery mildew, among others. Sensors quantify and record leaf wetness duration, and models use this information to predict disease risk (fig 2).
Fig. 2. A wetness sensor quantifies the leaf wetness conditions that favor pathogen infection. This information can be used in existing models or used empirically to predict periods of higher disease risk. Photo: S. Tjosvold.
These systems can reduce the number of sprays needed for disease control. It has recently been suggested that disease models instead use leaf wetness estimated from a simple empirical model based on relative humidity, since relative humidity sensors can be standardized and calibrated more easily than leaf wetness sensors.
Many current greenhouse control systems can help collect and organize data from leaf wetness, relative humidity and temperature sensors. Alternatively, a simple environmental monitoring system can be pieced together for an outdoor nursery or greenhouse using readily available sensors and dataloggers from various companies (e.g., Campbell Scientific Inc. (Logan, UT), Onset (Bourne, MA), and Spectrum Technologies Inc. (Aurora, IL)). Most disease risk models have not been tested in ornamental crops, but there is no reason why they cannot be wholly or partly used for disease risk monitoring in ornamentals. Botrytis models have been studied intensively in other crops and should be among the first to try in ornamental crops. Empirical evaluation of these models in the field is the first step to confirm their usefulness. Models that predict high disease risk could improve scouting efficiency by targeting more intensive scouting during those periods, help reduce fungicide applications by predicting the optimal timing of fungicides before infection occurs, and identify periods when dehumidification cycles are needed in greenhouses.
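Most growers will rely on commercial systems or datalogger software for this, but as a starting point the minimal sketch below treats any hour with relative humidity at or above a threshold as a "wet" hour, totals consecutive wet hours, and flags elevated Botrytis risk once the wet period reaches the four-hour mark suggested by Table 1. The 90% humidity threshold and the example data are assumptions and would need local, empirical calibration.

```python
from typing import List, Tuple

RH_WET_THRESHOLD = 90.0   # assumed proxy for leaf wetness; calibrate locally
RISK_WET_HOURS = 4        # Table 1: infection increases at >= 4 h continuous wetness

def longest_wet_period(hourly_rh: List[float]) -> int:
    """Return the longest run of consecutive 'wet' hours (RH >= threshold)."""
    longest = current = 0
    for rh in hourly_rh:
        current = current + 1 if rh >= RH_WET_THRESHOLD else 0
        longest = max(longest, current)
    return longest

def botrytis_risk(hourly_rh: List[float]) -> Tuple[int, str]:
    """Classify risk from the longest continuous wet period in the record."""
    wet_hours = longest_wet_period(hourly_rh)
    level = "elevated" if wet_hours >= RISK_WET_HOURS else "low"
    return wet_hours, level

if __name__ == "__main__":
    # Hypothetical overnight relative-humidity log (one reading per hour)
    rh_log = [78, 84, 91, 94, 96, 95, 92, 88, 75]
    hours, level = botrytis_risk(rh_log)
    print(f"Longest wet period: {hours} h -> Botrytis risk: {level}")
```

A flagged night could trigger more intensive scouting the next morning or a protective spray before the next expected wet period.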
Steve Tjosvold is Environmental Horticulture Farm Advisor, UC Cooperative Extension, Santa Cruz and Monterey Counties.
|
ACS > ACS POSTS > Artificial intelligence in higher education
Artificial intelligence in higher education
By Asomi College of Sciences
Artificial intelligence is one of the most significant recent technological developments and is gaining importance in higher education. Some higher education institutes already use artificial intelligence in several areas, such as supporting students in learning, supporting staff in administrative practices, and answering student questions. ASOMI College of Sciences wants to give its students the best possible services for improving the student experience and is interested in issues regarding technology, digitalisation, and AI. This article, therefore, explores the improvements that AI can bring to higher education.
AI and its multi-faceted improvements
Artificial intelligence provides higher education institutions with a long list of opportunities, for instance the ability to anticipate enrolment trends, to raise academic performance, and to optimise recruitment processes. For example, algorithms based on previous student data and behaviour can anticipate which kinds of students will enrol. An algorithm might even predict the origin and the number of students who are about to register. This, of course, helps higher education institutions tailor their course programmes to their future students’ needs and facilitate their college or university experience.
Moreover, artificial intelligence can support retention plans by intervening in time and preventing, rather than curing, students’ problems. For instance, drop-out rates at some universities or colleges might currently be high, but AI can help by using data to point out the moments at which a student might run into difficulties. These difficulties may be of various kinds; for instance, some students may need financial help, and universities and colleges usually give them micro-loans or advances.
In this context, higher education institutions could use AI to prevent this situation by warning the school and the student. To do that, this sort of AI warning system should be based on academic, non-academic, and operational data.
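As an illustration of what such an early-warning system might look like, the sketch below combines a few academic, non-academic, and operational signals into a single risk score and flags students above a threshold. The signal names, weights, and threshold are all assumptions chosen for the example; a real system would be trained and validated on an institution’s own historical data.

```python
from dataclasses import dataclass

@dataclass
class StudentRecord:
    # Hypothetical signals an institution might already collect
    gpa: float                 # academic (0.0-4.0)
    attendance_rate: float     # operational (0.0-1.0)
    tuition_days_overdue: int  # non-academic / financial
    lms_logins_last_30d: int   # operational engagement

def dropout_risk_score(s: StudentRecord) -> float:
    """Weighted 0-1 score; higher means higher risk. Weights are illustrative only."""
    score = 0.0
    score += 0.35 * max(0.0, (2.5 - s.gpa) / 2.5)               # low GPA
    score += 0.25 * (1.0 - s.attendance_rate)                   # missed classes
    score += 0.25 * min(1.0, s.tuition_days_overdue / 60)       # financial strain
    score += 0.15 * max(0.0, 1.0 - s.lms_logins_last_30d / 20)  # disengagement
    return round(score, 2)

def flag_at_risk(students: dict, threshold: float = 0.5) -> list:
    """Return the IDs of students whose score crosses the alert threshold."""
    return [sid for sid, rec in students.items() if dropout_risk_score(rec) >= threshold]

if __name__ == "__main__":
    cohort = {
        "S001": StudentRecord(gpa=3.4, attendance_rate=0.95, tuition_days_overdue=0, lms_logins_last_30d=18),
        "S002": StudentRecord(gpa=1.9, attendance_rate=0.60, tuition_days_overdue=45, lms_logins_last_30d=3),
    }
    print("At-risk students:", flag_at_risk(cohort))
```

In practice the alert would go to both an advisor and the student, so support such as a micro-loan or tutoring can be offered before the student drops out.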
Student support and AI
Secondly, the recruitment process would become much easier with the use of artificial intelligence. Many new or international students have a series of questions that AI bots could answer well. Of course, should the answers not satisfy the students, they should still be given the option to contact an administration employee for more complicated issues.
In any case, this kind of Q&A system is helpful because it saves the administrative staff a lot of time, which they can spend resolving essential issues instead of answering simple yes/no questions. Moreover, the admissions process would improve by becoming faster and easier. For instance, AI could help higher education institutes create retention plans for their future students.
Additionally, higher education institutions could employ AI to help students plan their course load or even recommend suitable courses based on data from previous students with similar profiles. AI can also build personalised learning pathways by helping students fill the gaps between their studies and career goals.
Support to teaching and learning methods
Of course, last but not least is learning support. Artificial intelligence and bots can reply to students’ questions about what was covered during lessons and can store and provide the necessary course material.
Students can also ask a bot simple questions about the course, though an educator should still answer complicated questions. If lessons are recorded, whether online or in person, a bot could also search the video transcripts and deliver them to students. By putting, for instance, QR codes on sheets or other course material, the entire class can become virtual and personalised for every student, thanks to AI.
Intelligent reading systems using AI are also a good idea. Readers of this kind already exist, offering language detection and text-to-speech options. They could also be used for teaching or analysing texts in foreign languages.
In other words
Whatever the use, artificial intelligence has a lot of improvements to offer higher education institutions. It helps improve the student experience, facilities, and learning methods. It also helps colleges and universities predict enrolment trends and tailor their courses to the needs of future and current students.
|
The nilgai is the largest antelope in Asia. It stands 1–1.5 metres (3.3–4.9 ft) at the shoulder; the head-and-body length is typically between 1.7–2.1 metres (5.6–6.9 ft). Males weigh 109–288 kilograms (240–635 lb); the maximum weight recorded is 308 kilograms (679 lb). Females are lighter, weighing 100–213 kilograms (220–470 lb). Sexual dimorphism is prominent; the males are larger than females and differ in colouration.
A sturdy thin-legged antelope, the nilgai is characterised by a sloping back, a deep neck with a white patch on the throat, a short mane of hair behind and along the back ending behind the shoulder, and around two white spots each on its face, ears, cheeks, lips and chin. The ears, tipped with black, are 15–18 centimetres (5.9–7.1 in) long. A column of coarse hair, known as the "pendant" and around 13 centimetres (5.1 in) long in males, can be observed along the dewlap ridge below the white throat patch. The tufted tail, up to 54 centimetres (21 in), has a few white spots and is tipped with black. The forelegs are generally longer, and the legs are often marked with white "socks".
While females and juveniles are orange to tawny, males are much darker – their coat is typically bluish grey. The ventral parts, the insides of the thighs and the tail are all white. A white stripe extends from the underbelly and broadens as it approaches the rump, forming a patch lined with dark hair. Almost white, though not albino, individuals have been observed in the Sariska National Park (Rajasthan, India) while individuals with white patches have been recorded at zoos. The hairs, typically 23–28 centimetres (9.1–11.0 in) long, are fragile and brittle. Males have thicker skin on their head and neck that protect them in fights. The coat is not well-insulated with fat during winter, and consequently severe cold might be fatal for the nilgai.
Only males possess horns, though a few females may be horned as well. The horns are 15–24 centimetres (5.9–9.4 in) long but generally shorter than 30 centimetres (12 in). Smooth and straight, these may point backward or forward. The horns of the nilgai and the four-horned antelope lack the ringed structure typical of those of other bovids.
*information courtesy of https://en.wikipedia.org/wiki/Nilgai
|
English-Hindi > cognitive miser
cognitive miser meaning in Hindi
cognitive miser sentence in Hindi
• संज्ञानात्मक कृपण
cognitive ज्ञानात्मक
miser कंजूस कृपण बेचारा
1. This second effect helped to lay the foundation for Fiske and Taylor's cognitive miser.
2. The term "cognitive miser" was first introduced by Susan Fiske and Shelley Taylor in 1984.
3. Nobel Prize-winning psychologist Herbert A. Simon won the Nobel prize for his theory that people are cognitive misers.
4. First, people may not allocate all of the available resources to a task at hand because they are cognitive misers.
5. First, the human tendency to choose the least cognitive approach to decision-making, which is called the cognitive miser hypothesis.
7. According to Susan T. Fiske and Shelley E. Taylor, human beings are by nature "cognitive misers", meaning they prefer to do as little thinking as possible.
How to say cognitive miser in Hindi and what is the meaning of cognitive miser in Hindi? cognitive miser Hindi meaning, translation, pronunciation, synonyms and example sentences are provided by Hindlish.com.
|
About Coyote Creek
Coyote Creek is actually a river! It is over 64 miles long and is the largest watershed in Santa Clara County. Its headwaters are in Henry Coe Park, and it runs through Morgan Hill, Gilroy, and downtown San Jose and into San Francisco Bay. This diverse ecosystem includes creeks, dams, urban areas, percolation ponds, saline environments, and over 20 miles of scenic biking and hiking trails.
The Fisheries and Aquatic Habitat Collaborative Effort (FAHCE) agreement of 2003 established a framework for long term resolution of the fishery restoration issues and the achievement of the restoration of this valuable resource. This project was a multi-agency project convened by the Santa Clara Valley Water District and the Department of Fish and Game to develop an interim fisheries and aquatic habitat management plan.
The goals of FAHCE are:
1. Identify the contribution of SCVWD facilities and operations to existing fishery habitat conditions within the context of the variety of factors impacting salmonid populations.
2. Identify reasonable flow and non-flow measures that will improve habitat conditions for such fish populations within the context of competing water and land use demands.
From the report, specific issues and actions were recommended for the affected watersheds (Guadalupe River, Stevens Creek, and Coyote Creek). One main issue is that the Coyote Creek watershed is an important steelhead restoration opportunity. The main locations for fish spawning are downstream of Anderson Dam and parts of Upper Penitencia Creek.
|
Widespread poverty, crowded living conditions, and poor sanitation contribute to many health problems, such as malnutrition, tuberculosis, malaria, and infections from parasites. Due to a lack of funds, modern equipment, and staff, the country is unable to meet the needs of the people. Even the most basic supplies can sometimes be unaffordable. Few doctors are available, especially in rural communities. Even with the small number of rural area clinics, help is still not accessible to everyone. With an unreliable ambulance and fire rescue system, sick people must be able to afford the trip to treatment, in addition to the care itself. Minimal or absent medical care leads to short average lifespans: 47 years for males and 51 for females.
MDH is opening a medical clinic in Bercy, which neighbors Traveau. This clinic has 25 staff members, including 4 Haitian doctors and 11 Haitian nurses. MDH also supports the clinic in Turbe, which is led by Dr. Ocean and his team.
|
How to Use an Elimination Diet to Determine Your Food Allergies or Sensitivities
What would you do without food? Not only does it nourish your body, it helps you celebrate life’s accomplishments, comforts you when you’re down, and brings family and friends together. So, what do you do when your body rejects certain foods?
The first step is identifying exactly which foods cause an intolerance or trigger your immune system to mount an attack, or allergic reaction. Dr. Farah Khan and our team here at Millennium Park Medical Associates in Greenwood Village, Colorado, can help you nail down the culprit or culprits with a medically supervised elimination diet. Here’s how it works.
Understanding the elimination diet
Simply put, the elimination diet cuts out specific foods from your diet to help you identify which ones cause a problem. Elimination diets are especially helpful for identifying food intolerances or sensitivities, which cause symptoms but don’t involve the immune system.
While many food allergies can be determined by a blood or skin test, an elimination diet can confirm the diagnosis and/or help you pinpoint the exact culprit within a food group that’s triggering a response.
The elimination phase
The first part of the elimination diet is the elimination phase, where you avoid certain foods you suspect may be causing a problem. For instance, if you always have digestive problems after eating lasagna, you may be allergic to or sensitive to dairy products, wheat, tomatoes, certain spices, or eggs.
Dr. Khan helps you narrow down the items to avoid, and explains how to keep a food diary that provides valuable information about your reaction to certain foods.
It’s important to read labels carefully, as many foods contain ingredients you may not suspect are harmful to you or that trigger a reaction. Some food additives have been known to cause sensitivities or allergic reactions, too, so pay attention to artificial sweeteners, preservatives, colorings, and flavor enhancers.
Be sure to monitor your diet closely during this phase, not only for possible problematic foods, but also to make sure you maintain a nutritious diet despite the ingredients you’ve eliminated.
The reintroduction phase
If the elimination phase has provided relief from your symptoms, you can logically deduce that something on the list of eliminated items is to blame for your reactions. During the reintroduction phase, you’ll add those items back one at a time and write down your physical reactions in your food diary.
If you have only a mild reaction to some foods, you may be merely sensitive to that particular food rather than clinically allergic to it. It’s still valuable information, as you can avoid that food when possible and be prepared for a response when you eat it.
If you have a severe reaction to a particular food you’ve reintroduced, such as an instant rash or hives, a swollen or closed throat, or trouble breathing, seek emergency medical help immediately.
Types of elimination diets
Depending on the severity of your allergies and the suspected allergens, Dr. Khan may recommend one of several different types of elimination diets.
Dr. Khan supervises your elimination diet and monitors your responses to help identify food sensitivities, food allergies, and severe reactions, including anaphylaxis, which can lead to life-threatening problems, such as a constricted airway, shock, a dangerous drop in blood pressure, and loss of consciousness.
If you suspect a food allergy or sensitivity but can’t figure out what's causing it, schedule an appointment with Dr. Khan and get to the bottom of your food-related itchy skin, oral swelling, digestive issues, respiratory problems, or worse — anaphylaxis. Call today or book online using our handy scheduling tool.
|
What is Myopia?
Myopia is more commonly known as nearsightedness, in which distant objects appear blurry. Myopic eyes are longer than non-myopic eyes, and the longer the eye, the worse the vision. The likelihood of a child developing myopia is about 1 in 2 when both parents are myopic, 1 in 3 when one parent is myopic, and 1 in 4 when neither parent is myopic. Research shows that increased time indoors reading and using devices such as tablets and smartphones may also influence the development of myopia. Myopia is becoming more widespread: today more than 40% of Americans have myopia, compared to 25% in the early 1970s. That number continues to increase, especially among school-age children. A study done during the pandemic (and safer-at-home orders) also confirmed this.
What is Myopia Management?
Myopia Management is the process by which eye care providers work to limit the amount of myopia (nearsightedness) that develops in young eyes. This can be accomplished using several different methods, which we will discuss in greater detail below.
Why should we consider Myopia Management?
Myopia, or nearsightedness, occurs when light entering the eye focuses in front of the retina. This happens because the eye grows too long (axial length). The eye can continue to lengthen until around age twenty. If a child starts being moderately nearsighted when young, and continues to progress as they age and the eye grows, the potential for significant myopia to develop is great. That can lead to increased risk of retinal detachment, myopic maculopathy, glaucoma, and cataracts.
Treatment is intended to reduce the progression of myopia both so that the child doesn't develop a very strong prescription, but also to reduce the risk of eye diseases later in life.
How do I know if my child is a candidate for Myopia Management?
Your doctor will be able to help assess whether Myopia Management could benefit your child. General guidelines state that a child 6 or under who is slightly hyperopic (farsighted) is low risk, someone less hyperopic or starting to be myopic at age 6 is medium risk, and a child who has been diagnosed with myopia between ages 8 and 12 is high risk for progression.
Therapeutic Treatment Options for Myopia Management
The first option we will discuss is specially designed multifocal contact lenses. Our office is one of the few in the area fitting MiSight lenses, the first FDA approved contact lens for treating myopia progression. This is a daily disposable lens, which is especially great for kids. The advantages of a daily disposable are that they are a healthier replacement modality (fresh lenses every day!), and as parents you don't have to worry about how well they are cleaning and caring for the lenses. The multifocal contact lens uses correction zones and treatment zones in the lens to slow the growth of the eyeball (length). This means they allow the child to have vision correction now while helping the eye resist growing longer to preserve vision for the future. Studies have shown these lenses can reduce the progression of myopia by up to 59%!
A second treatment option is atropine drops. These drops dilate the eyes, and a specially formulated low dose is used nightly before bed. The child would still need to wear their glasses during the day, but this has been shown to decrease the progression of myopia. Potential side effects include mild near blur.
How can I learn more or get started with Myopia Management?
Please contact our office at 615-758-2344 to schedule a Myopia Management consultation, or email our office.
It is exciting to have new science and technology at our fingertips to impact both how our children see, and the health of their eyes, later in life!
To read more about the FDA approval of MiSight, click here
|
Number words and number recognition
Students will match numbers to their corresponding number words. The activities will include flashcards, a concentration game, a matching game, and a word search.
Mrs. Simmons
This activity was created by a Quia Web subscriber.
|
Healthcare data sharing during Covid
The COVID-19 pandemic has led to more institutions sharing healthcare data. There are still barriers to be overcome to ensure data sharing is more commonplace however.
Data sharing across sectors has been technically feasible for some time now. It is held back not by technical issues but by other obstacles, such as ethical, social, and legal concerns.
Healthcare data sharing needs an infrastructure to support it.
That infrastructure should provide functionality, transparency, and data security. All the stakeholders must commit to making it work.
One of the greatest obstacles to sharing data can be a lack of trust. Other barriers include the cost of the data and how it ends up being used.
During the pandemic some providers have started sharing their data on COVID-19. The HCA healthcare system in Nashville, Tennessee has collected data on its COVID patients since March 1, 2020.
They were approached by the Agency for Healthcare Research and Quality (AHRQ) and asked if they wanted to collaborate on a long-term basis. The aim was to provide information that could lead to a greater understanding of the nature of COVID-19.
HCA was able to offer access to its data to help build on AHRQ's existing knowledge. Due to the urgency of the pandemic, trust is being built more quickly between HCA and external research organizations such as this. The potential for other health systems to contribute their data is also being actively explored.
Another large health data sharing project is the N3C. Sponsored by the National Center for Advancing Translational Sciences, the goal of this group is to gather data on people who have had COVID. Looking at their medical information dating back to 2018 should help researchers get a better understanding of the virus.
Currently 197 organizations are using the data.
It includes 1.2 million COVID patients’ electronic health records, in addition to records from control patients who haven’t had COVID. The data covers a wide range of the population, from cities to more rural areas, as well as different ethnicities and races. Around 2,000 people have contributed to the database. The pandemic has been terrible, but it has also shown how people and organisations can work together for a common good.
However, the database raises issues with regard to who can use that data. The database includes healthcare data from a wide range of sources. If an organisation is looking at the data for a specific purpose, can it look at other parts of the data that aren’t necessarily relevant to its needs?
Patient privacy is another issue which almost always raises its head when it comes to discussions like these.
Many organisations have already put out guidelines and codes of conduct for using data and the implications and procedures are being looked at.
Whilst there are still problems to iron out, the US has come a long way with health data in a relatively short time. Five or six years ago, less than 50% of the country had functional electronic health records; these days that number is significantly higher. There is still a long way to go, with the main challenge being the fragmentation of the industry.
|
Part Four of the Discourse reads as a very brief summary of the first three Meditations (though the geometrical proof of God's existence is in the Fifth Meditation). A more detailed commentary on all these matters can be found in the SparkNote on the Meditations. This commentary will simply be a brief overview.
At the beginning of his investigation, Descartes undertakes to consider as false everything that he can possibly doubt. Such doubt effectively demolishes the whole enterprise of Aristotelian philosophy, which bases its claims on sensory experience and demonstrative reasoning. His goal is to sweep away the philosophical prejudices of the previous two thousand years and to start afresh. In doing so, he also manages to set the tone for the nearly four hundred years of philosophy that follow him. The questions of how we can know that there are objects external to our minds, that there are minds other than our own, and so on, have been hotly contested in the light of Descartes's new standard for what counts as certainty.
Perhaps Descartes's most significant contribution to philosophy is his revolutionary conception of what the human mind is. According to Aristotelian philosophy, only reason and understanding are distinctly mental properties. Sensing, imagination, and willing are not simply mental properties, since they connect the mind with objects in the world. Descartes overturns this conception, suggesting that our sense experience, imagination, and will are all a part of the mind alone, and are not linked to the world. In suggesting that we may be dreaming or otherwise deceived, Descartes argues that sensory experience is not necessarily a faithful report of what is actually in the world. Effectively, Descartes re-conceives the mind as a thing—the source of all the thoughts, sensations, imaginings, and so on that constitute our world—trapped inside our body. How our mind can connect with a world outside this body has been a pressing problem for all modern humans since Hamlet.
"I am thinking, therefore I exist" is Descartes's proposed way out. This famous phrase is less precisely translated as "I think, therefore I am." The fact that I am thinking right now, and not that I am capable of thought, is what confirms that I exist right now, and not that "I am" in general. Descartes cannot doubt that he exists, and so he claims to have certain knowledge of this fact. It is quite tricky, however, to determine the nature of this knowledge. Descartes has doubted the certainty of demonstrative reasoning, so it can't follow from a logical argument. Descartes's answer is that it is a "clear and distinct perception": it is not something he has to argue for; it is something that it is simply impossible to doubt.
Descartes seems to argue in a circle later in his discussion, when he claims that God confirms the truth of clear and distinct perceptions. This implies that without God, clear and distinct perceptions would not be true. But he has only managed to "prove" that God exists by appealing to a clear and distinct perception to that effect. What, then, is the foundation upon which Descartes builds? If God is the source of all truth, including the truth of clear and distinct perceptions, how can Descartes prove that God exists? And if clear and distinct perceptions are the source of all truth, then what role does God play in all this?
We should note that Descartes's "proofs" of God are neither original nor very satisfying. Unlike his revolutionary ideas about the nature of the mind and of certainty, his proofs of God are borrowed from the medieval scholastic tradition. The first proof claims that the idea of God, as an idea of perfection, must be caused by something as perfect as the idea itself. This proof relies on notions of causation that are questionable to say the least. The second proof claims that existence is a property of God just as geometrical figures have certain properties. Kant was the first to point out that "exists" is not a property in the way that "angles add up to 180 degrees" is. Having angles that add up to 180 degrees is a property of a triangle: it says something about the triangle. Existing, however, is not a property of God's so much as it is a property of the world: it is saying that the world is such that God exists in (or above) it.
|
Good Friday
Why is it called Good Friday? Jesus was tortured and executed on this day - but we call it "good". Some say it may have once been called "God's Friday" as it was sacred. Others say good because it led to resurrection and salvation for all. Don't say how much He suffered, but instead say, look how much He loved. I found this website, which also gives a good explanation (https://www.avemariapress.com/engagingfaith/2008/03/why-do-they-call-it-good-friday/).
God so loved the world that He gave His only begotten son. – John 3:16
|
Securing Another Tier of Protection: Knowing More About VPN Functions
Personal information is of the utmost importance. Hackers recognize it, and corporations know it. That is why both parties go to extreme measures over it, although only one of the two does so in a proper and legal way.
Sadly, as the practices on technology and data collection progress, so do the methods that cybercriminals follow to steal sensitive information.
If you are a business owner, you have a critical responsibility to protect your customers’ data and be transparent about your practices.
1. Defining Cybersecurity
Cybersecurity refers to the process and consistent practice of protecting computers, networks, and data from misuse by external threats such as cyber-attacks and other modern dangers.
Secured data usually includes passwords, contact information, bank account info, credit card numbers, medical records, driver’s license and passport numbers, social security numbers, and any additional non-public data.
As we increasingly use the Internet for transactions, there is a real chance that hackers could access your network, which is the last thing anyone wants. You can reduce the risk of such an event by connecting through a VPN.
2. Knowing VPN and Its Purpose
The term Virtual Private Network describes a virtual network built on top of another physical network. Virtual Private Networks are used to give users secure access to information on a private system.
Users can do this by linking to that particular network using a public network.
These particular networks don’t provide an additional tier of security, but people can also use the companies to obtain a safeguarded network from any specific Internet connection remotely.
a. Tunneling
Virtual Private Networks operate by sending data through tunneling protocols, which are meant to add a layer of encryption and information security.
Tunneling protocols carry data in one network protocol inside another, supplying a second layer of protection.
Tunneling is similar to mailing an addressed box inside another, bigger box: the person who receives the outer box at the first address then forwards the smaller box on to its own address.
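To make the box-within-a-box idea concrete, the toy sketch below encrypts an inner "packet" addressed to a private host and wraps it in an outer envelope addressed to a VPN gateway; only a party holding the key can open the inner box and forward it. This is a conceptual illustration only, not a real VPN implementation: it uses the Python cryptography package, and the hostnames are made up.

```python
import json
from cryptography.fernet import Fernet

# Shared key established between the client and the VPN gateway (out of band)
key = Fernet.generate_key()
tunnel = Fernet(key)

def wrap(inner_dest: str, payload: str) -> dict:
    """Encrypt the inner packet and address the outer envelope to the gateway."""
    inner_packet = json.dumps({"dest": inner_dest, "data": payload}).encode()
    return {"dest": "vpn-gateway.example.com",        # outer address (hypothetical)
            "ciphertext": tunnel.encrypt(inner_packet)}

def unwrap(envelope: dict) -> dict:
    """At the gateway: open the outer box, decrypt, and recover the inner packet."""
    return json.loads(tunnel.decrypt(envelope["ciphertext"]))

if __name__ == "__main__":
    envelope = wrap("intranet.example.internal", "GET /private-board")
    print("Outer destination:", envelope["dest"])       # all an eavesdropper sees
    inner = unwrap(envelope)
    print("Gateway forwards to:", inner["dest"], "->", inner["data"])
```

An eavesdropper on the public network only ever sees the outer address and ciphertext, which is the essential point of tunneling.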
b. Intranet Sites
Users often rely on Virtual Private Networks to access internal web pages and services.
For instance, a company or group may employ a VPN to guard a privately hosted email and message board service.
The privately hosted service is not directly exposed to the Internet; the only way to reach it is through the VPN.
This differs from hosting a web page on the public Internet and controlling access with a password.
Anyone can reach a publicly hosted web page, even if a password keeps them from logging in. A page hosted behind a VPN, on the other hand, can’t even be reached unless the user is authorized to connect to the network.
c. Network Range and Protection
Businesses often use Virtual Private Networks to create a private wide-area network that authorized users can access whether they are inside or outside the immediate geographical area.
A corporation may be dealing with confidential data that must not cross the Internet in a form hackers could intercept and make off with, so the Virtual Private Network serves as another tier of protection.
Furthermore, setting up a VPN allows users to join the network through the Internet as if they were on the corresponding local network.
d. Remote Factors
The most obvious way to keep hackers from getting into a computer and stealing data is not to connect that computer to the Internet at all.
You can do this by configuring the servers and databases so that only computers connected to the local network have permission to access them.
A business can employ a Virtual Private Network to enable remote access to a guarded system via a three-computer structure consisting of the remote user, the computer bridge, and the guarded server.
The guarded server doesn’t connect to the Internet directly; instead, it links to a computer bridge, which is the machine with the Internet connection.
A remote user may connect to the computer bridge through the Internet and then reach the protected system via the computer bridge.
Engineers often use this method to fix internal network problems without needing to be in the same building or room as the computer encountering the problem.
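A common way to implement this three-computer pattern in practice is an SSH "jump" through the bridge machine. The sketch below uses the Python paramiko library to connect to a hypothetical bridge host and then open a channel through it to a protected server that has no direct Internet exposure; all hostnames, usernames, and key paths are placeholders, so treat it as one possible shape of the setup rather than a prescribed configuration.

```python
import paramiko

BRIDGE_HOST = "bridge.example.com"   # Internet-facing computer bridge (hypothetical)
PROTECTED_HOST = "10.0.10.5"         # guarded server, reachable only from the bridge
USER, KEY = "ops", "/home/ops/.ssh/id_ed25519"

# 1) Connect to the bridge over the Internet
bridge = paramiko.SSHClient()
bridge.set_missing_host_key_policy(paramiko.AutoAddPolicy())
bridge.connect(BRIDGE_HOST, username=USER, key_filename=KEY)

# 2) Open a channel through the bridge toward the protected server's SSH port
channel = bridge.get_transport().open_channel(
    "direct-tcpip", dest_addr=(PROTECTED_HOST, 22), src_addr=("127.0.0.1", 0)
)

# 3) Ride that channel to reach the protected server as if we were on the local network
target = paramiko.SSHClient()
target.set_missing_host_key_policy(paramiko.AutoAddPolicy())
target.connect(PROTECTED_HOST, username=USER, key_filename=KEY, sock=channel)

_, stdout, _ = target.exec_command("hostname && uptime")
print(stdout.read().decode())

target.close()
bridge.close()
```

Only the bridge ever accepts connections from the Internet, so the guarded server stays invisible to outside scanners while remaining reachable for authorized remote maintenance.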
3. What about smartwatches?
Since many people today wear smartwatches rather than traditional watches such as the Omega Speedmaster, remember that modern smartwatches are also at risk. So how do you protect your watch? Here are a couple of ways.
a. Update Software Updates When Available
It is crucial to update the software on your device frequently, and that is why manufacturers release software updates regularly.
Aside from enhancing the performance of your device, software updates identify flaws in the security system and patch them, which is a critical protection measure.
Although an update may require some time and a reboot, it is worth it, because hackers can easily exploit flawed applications.
And having your confidential information stolen and misused is the last thing you would want to happen.
b. Protect Fitness Data
Cybercriminals and hackers are not the only ones interested in your sensitive information; you can now include insurance companies among them.
A smartwatch is full of sensitive data about its users, and several institutions count on this information to know more about current needs or even health conditions.
Before you begin using any fitness applications, always manage your privacy settings first to determine what information you would allow for public sharing.
Moreover, it is highly recommended to use an alias and not your real name to protect your personal information further.
4. To Conclude
Cyber-attacks have become more sophisticated, and businesses need to keep pace with the hackers behind them.
To guarantee another tier of data protection, companies make great use of Virtual Private Networks to further secure not only their sensitive information but their employees’ connections as well.
|
What Speed Does a Whitetail Deer Run?
How fast can a whitetail deer run?
A fully-grown white-tailed deer can run up to 35 miles per hour. This speed comes from their long limbs and powerful shoulder and hip muscles.
Who is faster tiger or deer?
Despite their huge body weight, they can reach a maximum speed of up to 65 km/h, which is about 40 mph. A tiger can run as fast as 35 mph (56 km/h), but only for short distances. Even though deer can run extremely fast, some felines can exceed that speed and catch them very quickly.
What is faster a deer or horse?
If you’ve ever won a few bucks at the racetrack, you know horses are speedy animals. Though they have a different need for speed, deer are pretty swift, but in the end, the horse probably will win the race.
What animal runs 20 mph?
Even when lacking a front foot, a coyote can still run at around 32 km/h (20 mph).
How fast can humans run?
A cheetah can outrun a horse; it’s one of the fastest animals on the planet. This beautiful wild cat can run up to 70 mph.
Can a deer run faster than a wolf?
Wolves can also hit 35 mph when chasing prey over short distances, so they can — and do — catch up to moose. Coyotes are speedier at 43 mph while in pursuit, but the whitetail deer can only reach 35 mph, so a deer on the run is easy pickings for a pack of coyotes giving chase.
How high can a deer jump?
White-tailed deer can jump almost eight feet high, so effective upright fences against them should be this high. Deer may be able to jump high, but not both high and over a distance. So a fence may not be as high, perhaps six feet, but slanted outward. The deer will try walking under the fence and meet resistance.
Who is faster tiger or lion?
The lion (Panthera leo) is one of the four big cats in the genus Panthera and a member of the family Felidae. With some males exceeding 250 kg (550 lb) in weight, it is the largest cat species apart from the tiger. Adult tigers can run as fast as 30-40 miles per hour in short bursts.
What animal can reach the highest speeds?
The mighty cheetah has been clocked at 75 mph — the speediest runner on the planet. Perhaps you know that the fastest animal in the sea, the sailfish, cruises through the water at 68 mph. In the sky, the peregrine falcon reigns supreme.
Who runs faster cheetah or deer?
The fastest animal that we know of is the peregrine falcon, which can reach 349 kilometres per hour as it dives for the kill. We have also been told that the fastest mammal is the cheetah at 112 kph, and – get this – the fastest insect is the Deer Botfly at 1,287 kph – which is faster than the speed of sound!
|
more from Immanuel Kant
Single Idea 5573
[catalogued under 11. Knowledge Aims / A. Knowledge / 2. Understanding]
Full Idea
In the first part of our transcendental logic we defined the understanding as the faculty of rules; here we will distinguish reason from understanding by calling reason the faculty of principles.
Gist of Idea
Reason is distinct from understanding, and is the faculty of rules or principles
Immanuel Kant (Critique of Pure Reason [1781], B356/A299)
Book Reference
A Reaction
If we narrow the concept of rationality down to a concern with rules or principles, the concept of 'understanding' has to widen out to cover inferences from experience. Personally I think we can be rational about particulars as well as principles.
|
Archive / Social Skills
Spotlight on Schoolhouse Talk: Using Marble Runs in Speech Therapy!
Research Tuesday: What happened to my sweet child?: Behavioral Milestones of a 4 year old!
My son tends to show most growth (meaning most changes in cognitive, motor, social skills and yes behavior) around his birthday and half birthday each year. This is fairly standard development when researching typical development in young children. As I just finished re-researching behavioral milestones in a four year old I thought I’d share that […]
Pretend Play: Keeping Track of Progress!
Data Collection is probably the most difficult thing to master when participating in pretend play with our clients/students/children. How can we PROVE that progress has been made? Here are a few things we need to think about first: 1. Baseline skills: the most important thing to do prior to determining appropriate […]
Pretend Play: Choosing Materials to Target Speech & Language Goals!
Pretend play is crucial for a child’s cognitive, communication and social development. But how can we as SLPs and parents use this activity to target specific language goals? There are a few rules to follow when using pretend play to target specific goals: 1. Choose interesting materials: If your child is not interested in the materials you […]
Pretend Play: Why it’s Important!
I was at the Dollar Tree in early September and look at what I found?!! Some great Halloween/ dress up goodies for…well you guessed it, $1 a piece! Look at the wonderful pretend play outfits I found for a total of $6 (plus tax)! Pretend play is sometimes overlooked in the Speech Pathology world because we are trained to “make the […]
|
1945-1991: Cold War world Wiki
From the longer English Wikipedia page [1] which has a list of his writings.
The Hungarian Revolution of 1956 began shortly after his release from prison. He was weak and ill recovering from surgery, but escaped from the hospital to accept appointment as commander-in-chief of the military guard and military commander of Budapest.
He recognized his forces had no hope of victory over the Soviet army, but resented then Soviet ambassador Yuri Andropov's chicanery in concealing the imminent invasion. After the Soviet military intervention in Hungary, he fled to Austria and later the United States to avoid yet another death sentence, one unlikely to be commuted. He was, in fact, sentenced to death in absentia.
He earned graduate degrees at Columbia University. From 1964 he taught Military History at Brooklyn College, and became chairman of the history department. He retired as Professor Emeritus in 1982.
From the Hungarian Wikipedia page [2].
Participation in the 1956 War of Independence: On 7 September 1956 he was temporarily released, and on 14 October the Minister of Defense, István Bata, requested his reinstatement. He underwent minor surgery on October 21 and was treated in a hospital until October 28, during the Revolution; he was rehabilitated on October 31. He became president of the newly founded Revolutionary Army Commission and the Revolutionary Defense Commission, and Imre Nagy appointed him military commander of Budapest. He was commissioned to organize the new National Guard.
The Revolutionary Army Committee was formed on October 31 by representatives of the insurgents, the army, the police and the workers, with a view to creating a consolidated situation in preparation for the formation of a new coalition government. The committee made policy decisions that were to be implemented by an armed forces command. It was also considered desirable to organize the freedom-fighter units under the same rules. To carry out these tasks, a National Guard based on the traditions of 1848 was organized.
In the aftermath of the fighting, he clashed with the attackers in the Nagykovácsi region: “On the night of November 10th, units of the 97th mechanized regiment were assigned to destroy the group in the forest near Nagykovácsi. The captured militiamen informed us that Béla Király had left for the Austrian border. At the command of Marshal Konyev, a special group set off, headed by an experienced reconnaissance officer, Colonel II Scriptko, to arrest Béla Király. However, it failed to find or arrest him.” He left Hungary near Ják.
Béla Király recalled a perceived, suspected atomic explosion, a mushroom cloud [source?], at Nagykovácsi during the Soviet invasion, which he cited to justify his quick departure. In another interview he said he had to consider that if he stayed in the country he would be hanged, so he decided to save his life by leaving for Austria.
Views of '56[]
His ten points are as follows (for more details, see Documents on Emigration, 260):
1. In 1956, sober patriots did not want a revolution but urged fundamental reforms.
2. The aims of the revolution were formulated most accurately in the 16 points of the university youth, which did not demand the abolition of the communist regime. “But it is also true that the unanimous demand was for a general, equal, secret vote. The Parliament thus formed would have been competent to decide the final form of government and society. It is possible, even likely, that if the country had not been deprived of its independence by aggression in 1957, what happened in 1990 would have happened then.”
3. The revolution won. Imre Nagy established a multi-party government that quickly consolidated the situation. Revolution is an internal affair, and armed aggression is an international affair. Although Hungarian society succumbed to the latter, that does not change the fact that the revolution won.
4. The victory was won by the Hungarian youth.
5. The winning young people chose central management to ensure political consolidation. Following the example of '48, the fighting units were to be organized in the National Guard under a unified command.
6. The USSR began an armed intervention against our country from October 30 to October 31.
7. The declaration of neutrality on November 1 was a consequence of Soviet intervention, not a cause.
8. The entire Soviet bloc has moral responsibility for what happened.
9. The Soviet intervention was a war without a declaration of war. It was a war in its purpose, since it sought to overthrow the legitimate Hungarian government, and a war in its scale, since some 100,000 Soviet soldiers and about 2,000 tanks took part in Operation “Whirlwind.” This war was a war between socialist countries, as the revolution’s program did not include the abolition of the socialist system.
10. The West and the United Nations recognized its truths after the revolution. Raymond Aron concluded his essay “The Hungarian Revolution”: “... a victory in defeat, it will forever remain one of those rare events that restore man’s faith in himself and remind him ... of the meaning of fate, of truth.” (What is not a verb, 264)
From January 1957 he was a vice president of the Hungarian Revolutionary Council in Strasbourg. With Anna Kéthly and József Kővágó, he testified about the revolutionary events before the United Nations committee in New York. Between 1957 and 1966 he was a member of the Hungarian Committee under the presidency of Béla Varga.
From the Russian Wikipedia page [3]
In October 1956 he became the military leader of the rebel forces in Budapest who opposed the pro-Soviet regime. From October 30 he was co-chairman (along with Colonel Pál Maléter) of the Revolutionary Armed Forces Committee, which led the rebel formations. The committee’s activities were largely hampered by the rivalry between Maléter and Király: each had units subordinate to him, but many servicemen refused to obey either. While Maléter sought to restrain the militants (in particular, he ordered 12 militants killed at the Corvin cinema), Király, by contrast, supported radical actions against supporters of the former regime (in effect, their lynching).
From October 31 he was a member (and de facto leader) of the Revolutionary Defense Committee and military commander of Budapest. On November 1 he formed a rehabilitation commission, which reinstated army officers who had been dismissed for political reasons.
From November 3 he was Commander-in-Chief of the National Guard, which was to become the core of the new Hungarian army. He led armed resistance to the Soviet troops that entered Budapest on 4 November. His headquarters was originally located on the outskirts of Budapest and then relocated to the town of Nagykovácsi, where it remained on November 7-8. However, the superiority of the Soviet troops was obvious, and the majority of Hungarian soldiers did not support Király, who was forced to flee to Austria on 9-10 November. In June 1958 he was sentenced to death in absentia in the closed trial of the “case of Imre Nagy and his accomplices”. He was deprived of Hungarian citizenship.
|
Question: Who weakened the Catholic Church?
When did the Catholic Church weaken?
From 1378 until 1417, the Great Schism divided the Church. During this time, both popes claimed power over all Christians. Christians became confused about which pope had power and authority. The split greatly weakened the Church.
Who first challenged the Catholic Church?
Why did the power of the Catholic Church begin to weaken?
Is Catholic Church the first church in the world?
The Roman Catholic Church
What factors weakened the Catholic Church?
The Weakening of the Catholic Church: By the Late Middle Ages, the Catholic Church was weakened by corruption, political struggles, and humanist ideas. Many Catholics were dismayed by worldliness and immorality in the Church, including the sale of indulgences and the practice of simony.
What is the difference between a Catholic and a Protestant?
How was the Catholic Church corrupt during the Renaissance?
Leaders of the Catholic Church during the Renaissance era certainly engaged in corrupt behaviors and acts. High ranking leaders of the church lived lavish lifestyles while they preached the holiness of a humble and modest life. Affairs, adultery, and pedophilic behaviors by church leaders were all too common.
What were the main complaints against the Catholic Church?
People felt that the clergy and the pope had become too political. The way the church raised money was also considered unfair. The sale of pardons or indulgences was unpopular. An indulgence provided a relaxation of penalties for sins people had committed.
How did Renaissance humanists contribute to the weakening of the Roman Catholic Church?
How did Renaissance humanists contribute to the weakening of the Roman Catholic Church? They believed in free thought and questioned many accepted beliefs. … Many Catholics were deeply disturbed because it was not their way of beliefs. They were buying sins.
Does the Catholic Church support war?
The Church says “just wars” are allowed as long as certain conditions are met. Those conditions include ensuring that all other peaceful means have been exhausted and that the force is appropriate and will not lead to worse violence. … The war must be for a just cause.
What was the worst punishment for being named a heretic by the Catholic Church?
Eternal God
|
Intro: With the rise of mental disorders and anxiety, more people are trying to find ways to cope with their symptoms. One way is through emotional support animals (ESAs). ESAs can be dogs, cats, rodents, or any other animal that provides comfort to an individual. But what if you have a pet? Does it qualify as an emotional support animal (ESA)? The answer may surprise you! Read on for more information about what qualifies as an ESA.
Emotional Support Animals: How They Help
What is an Emotional Support Animal?
An emotional support animal (ESA) is a companion animal that a medical professional has determined benefits an individual with a disability, including depression, anxiety or PTSD. This determination must be made by a licensed mental health professional, such as a psychiatrist or psychologist. You can also register an emotional support animal.
Do Your Pets Qualify as Emotional Support Animals?
Although the idea of a pet is charming, it might not be feasible for everyone. For those considering bringing home a new pet but who aren’t sure if it’s possible, understanding the difference between emotional support animals and pets can provide some guidance.
• Mental Support Animals – Are They Legally Recognized? Mental support animals (also known as psychiatric service animals and emotional support animals) are often thought of as just another pet, but they bring a lot more to the table than unconditional love. The Americans with Disabilities Act (ADA) gives psychiatric service animals and emotional support animals certain protections and allows handlers to bring their ESA into public areas where pets usually are not allowed, including restaurants and hotels. ESAs have also been allowed to accompany owners on airplanes and other forms of public transit.
• Mental Support Animal Rights – Vindication: However, not all animals make good emotional support animals, and not just any animal can become a service animal, which carries a lot of weight. Typically an individual will seek a recommendation from a doctor or mental health professional, which is then used to obtain an emotional support animal letter.
What’s the difference between ESA and a service animal?
An emotional support animal is NOT the same as a service dog. While they are both animals that provide comfort to their owners, emotional support dogs are not trained for specific tasks. Their primary job is to boost their owner’s morale!! When comparing emotional support animal vs service animal, consider the following:
• An emotional support animal differs from a service dog — which is specially trained to perform specific tasks that its handler cannot do for themselves — in that there are no task-specific training requirements. Although some dogs can naturally sense and respond to an individual's needs, no advanced training or significant effort is required from the individual with a disability.
• An emotional support animal also differs from a therapy dog. Both provide comfort and non-judgmental companionship to the individuals under their care, and neither requires task-specific training to perform that function. Therapy dogs are brought to facilities such as hospitals and hospices for therapy sessions to help comfort someone who is depressed or upset.
• Emotional support animals, on the other hand, provide their handlers with companionship and unconditional love, without the task-trained assistance a service dog provides. Psychiatric service animals perform tasks that their handlers cannot do for themselves; an example would be a seeing-eye dog guiding a blind handler.
• Medical documentation is not always required for an animal to serve as an emotional support animal, although an ESA letter from a licensed professional is usually needed for housing or travel accommodations. For service animals, rigorous training in the specific tasks the animal will perform is what establishes legitimacy.
What about your pet?
If your pet has helped you cope with depression, anxiety, or another condition you have, you may qualify for a reasonable accommodation from your landlord. That means your pet may count as an emotional support animal and be allowed to live with you, no matter what species it is, even in housing that normally bans pets.
Are you a dog lover? Examples of service dogs (a category often confused with emotional support animals):
• Guide dogs for the blind
• Alerting services for hearing impaired
• Psychiatric service animal for medication reminders and reducing anxiety
• Autism service dogs that help autistic children and adults with self-calming techniques and physical support
You don't have to train your pet at all – you just need to answer a questionnaire and document your mental health diagnosis. The ESA Letter Professionals can provide you with an ESA letter to qualify your animal. If you own a dog, it may be considered an ESA if it possesses one or more of the following qualities:
• Gives you assistance in coping with stress or helps calm your panic attacks
• Relieves a symptom of an existing medical condition (e.g., stomach ache, headache)
• Alerts you when they need to go to the bathroom
• Helps you get up from a wheelchair or bed
• Alerts you when panic or anxiety is coming on strong and helps you calm down
• Acts as a calming force during a flashback triggered by an event that worsens your mental health condition; for example, if you have Post Traumatic Stress Disorder (PTSD), a dog can remind you that you are safe
• Opens doors, picks up items from the floor, and helps with balance when standing or walking
• Acts as a "social lubricant," helping you feel more comfortable around other people and get out of the house to participate in social functions, such as going to the store or attending events
• Helps regulate sleep patterns and reduces reliance on medications to fall asleep
• Helps an individual with a sensory processing disorder be less distracted by surrounding sounds and focus better on tasks at hand, which can be especially helpful for children and adults with autism
• Provides the motivation needed to complete a task, such as taking medication or completing physical therapy
• Helps ward off feelings of depression and anxiety by encouraging socializing and confidence in public places; an emotional support animal can be a calming presence for those who have PTSD (post-traumatic stress disorder)
• Can help lower blood pressure and ease feelings of anxiety and depression
• Helps keep those who experience panic attacks calm, especially when exposed to specific anxiety triggers or in social settings such as parties
Emotional Support Animals do not have the extensive training that Service Dogs have, but they can still help someone suffering from emotional distress. An Emotional Support Animal provides affection, support and comfort just by being with its owner.
|
Large-scale social movements
Throughout history, large-scale social movements have been important in bringing lasting social change. This speaks to the importance of this cause, but tractability (or rather individual contribution) seems difficult to determine.
Michael Huemer calls activism "utopian"[1]:
This is a utopian solution. It is utopian because it requires changes in human nature without proposing a realistic mechanism to bring about those changes. The democratic failures that I have described are not a mysterious accident, nor are they the product of a few bad actors. They result from the operation of normal human selfishness within the incentive structure of a democratic state. It is not in individual citizens’ interests to keep tabs on their elected representatives. The behavior of citizens and elected representatives will not change unless either the incentive structure changes or people become much less selfish than they are.
While he acknowledges that activism has led to improvements in quality of life, he maintains the “utopian” notion of activism. Specifically, he does not see activism as “the solution to the constant, everyday malfeasance of government”.
1. The Problem of Political Authority. §9.4.4.
|
Asked 7 Months ago Answers: 5 Viewed 87 times
Note that this will only work on UNIX.
from functools import wraps
import errno
import os
import signal

class TimeoutError(Exception):
    pass

def timeout(seconds=10, error_message=os.strerror(errno.ETIME)):
    def decorator(func):
        def _handle_timeout(signum, frame):
            raise TimeoutError(error_message)

        def wrapper(*args, **kwargs):
            signal.signal(signal.SIGALRM, _handle_timeout)
            signal.alarm(seconds)            # start the countdown
            try:
                result = func(*args, **kwargs)
            finally:
                signal.alarm(0)              # cancel the alarm once the call finishes
            return result

        return wraps(func)(wrapper)

    return decorator
import errno, os
from timeout import timeout

@timeout()                                   # timeout after 10 seconds (the default)
def long_running_function1():
    pass
@timeout(5)                                  # timeout after 5 seconds
def long_running_function2():
    pass
@timeout(30, os.strerror(errno.ETIMEDOUT))   # timeout after 30 seconds, custom message
def long_running_function3():
    pass
Tuesday, June 1, 2021
answered 7 Months ago
python3 is not Python syntax, it is the Python binary itself, the thing you run to get to the interactive interpreter.
You are confusing the command line with the Python prompt. Open a console (Windows) or terminal (Linux, Mac), the same place you'd use dir or ls to explore your filesystem from the command line.
If you are typing at a >>> or In [number]: prompt you are in the wrong place, that's the Python interpreter itself and it only takes Python syntax. If you started the Python prompt from a command line, exit at this point and go back to the command line. If you started the interpreter from IDLE or in an IDE, then you need to open a terminal or console as a separate program.
Other programs that people often confuse for Python syntax; each of these is actually a program to run in your command prompt:
• python, python2.7, python3.5, etc.
• pip or pip3
• virtualenv
• ipython
• easy_install
• django-admin
• conda
• flask
• scrapy
• -- this is a script you need to run with python [...].
• Any of the above together with sudo.
with many more variations possible depending on what tools and libraries you have installed and what you are trying to do.
If you type one of these commands with arguments at the Python prompt, you'll get a SyntaxError exception, but the underlying cause is the same:
>>> pip install foobar
File "<stdin>", line 1
pip install foobar
SyntaxError: invalid syntax
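If you genuinely need to install a package from inside a running Python program rather than from the shell, a common workaround (not part of the original answer, and the package name foobar is just the example used above) is to shell out to pip through the current interpreter:

import subprocess
import sys

# Run pip as a subprocess of this interpreter instead of typing
# "pip install foobar" at the >>> prompt (which is what causes the SyntaxError).
subprocess.check_call([sys.executable, "-m", "pip", "install", "foobar"])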
Tuesday, June 1, 2021
answered 7 Months ago
use the following to convert to a timestamp in python 2
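The code that originally followed this answer did not survive extraction. As a hedged placeholder, here is one common way to turn a datetime into a Unix timestamp that works on Python 2, assuming that was what the missing snippet showed:

import time
from datetime import datetime

dt = datetime(2021, 8, 22, 12, 0, 0)        # example value, purely illustrative
timestamp = time.mktime(dt.timetuple())     # seconds since the epoch, local time
print(timestamp)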
Sunday, August 22, 2021
answered 4 Months ago
The ajax function takes a timeout parameter and you can check the status in case of error.
var call = function() {
    $.ajax({
        url: '<?php bloginfo('template_directory'); ?>/ajax/product.php',
        type: 'get',
        timeout: 400,
        error: function(x, textStatus, m) {
            if (textStatus == "timeout") {
                call();   // the request timed out -- retry (see the caveat below)
            }
        }
    });
};
You might want to make something a little smarter to avoid permanent calls...
From the documentation :
Set a timeout (in milliseconds) for the request. This will override any global timeout set with $.ajaxSetup(). The timeout period starts at the point the $.ajax call is made; if several other requests are in progress and the browser has no connections available, it is possible for a request to time out before it can be sent. In jQuery 1.4.x and below, the XMLHttpRequest object will be in an invalid state if the request times out; accessing any object members may throw an exception. In Firefox 3.0+ only, script and JSONP requests cannot be cancelled by a timeout; the script will run even if it arrives after the timeout period.
Tuesday, August 24, 2021
Remy Lebeau
answered 4 Months ago
To reload the webpage in case the loading process is taking too long, you can configure pageLoadTimeout. pageLoadTimeout sets the amount of time to wait for a page load to complete before throwing an error. If the timeout is negative, page loads can be indefinite.
An example (using Selenium v3.141.59 and GeckoDriver v0.24.0):
• Code Block:
import java.util.concurrent.TimeUnit;
public class pageLoadTimeout {
    public static void main(String[] args) {
        System.setProperty("webdriver.gecko.driver", "C:\\Utility\\BrowserDrivers\\geckodriver.exe");
        WebDriver driver = new FirefoxDriver();
        driver.manage().timeouts().pageLoadTimeout(2, TimeUnit.SECONDS); // throw if a page takes longer than 2 seconds
        try {
            driver.get("https://www.example.com"); // do your other work here
        } catch (WebDriverException e) {
            System.out.println("WebDriverException occured");
        }
        driver.quit();
    }
}
• Console Output:
1565680787633 mozrunner::runner INFO Running command: "C:\Program Files\Mozilla Firefox\firefox.exe" "-marionette" "-foreground" "-no-remote" "-profile" "C:\Users\Debanjan.B\AppData\Local\Temp\rust_mozprofile.3jw3aiyfNAiQ"
1565680826515 Marionette INFO Listening on port 56499
1565680827329 Marionette WARN TLS certificate errors will be ignored for this session
Aug 13, 2019 12:50:28 PM org.openqa.selenium.remote.ProtocolHandshake createSession
INFO: Detected dialect: W3C
Aug 13, 2019 12:50:31 PM org.openqa.selenium.remote.ErrorCodes toStatus
WebDriverException occured
• You can find a relevant discussion in pageLoadTimeout in Selenium not working
You can find a detailed discussion in Do we have any generic function to check if page has completely loaded in Selenium
Tuesday, August 31, 2021
answered 3 Months ago
|
The Impacts of Corona Virus on the Psychological Well-Being of the Population in Pakistan
Anisa shah is a graduate in clinical psychology. She is interested in human psychology, politics, and critical analysis.
A well-connected society is considered a sign of healthy social life by South Asian standards, which are rooted in a collectivist culture. In such a culture, the norms of connectivity and social bonds are hallmarks of healthy family life, and everyone takes care of one another through material, financial and emotional support. Any disruption of these norms creates an anomaly in the overall life cycle. The current wave of the pandemic has disrupted normal life routines around the world by creating a new normal of social distancing, avoiding social interaction and confining oneself to home. This new normal, intended to protect physical health, is taking a huge toll on the mental health of the populace, particularly in collectivist cultures. Pakistan is therefore especially prone to mental health issues during the current wave of the pandemic.
The outbreak of the novel coronavirus is wreaking havoc around the world in the socio-economic and personal domains. In Pakistan, the health sector is especially vulnerable because of the excessive burden of catering to the needs of patients infected with the virus. While the virus damages the physical health of patients, it is also exposing the vulnerability of the mental health of the general population within the country. The negative impact is most drastic on women, who are prone to more trauma and anxiety because of additional household responsibilities. For instance, Naila (names are changed), a housewife, has been feeling tired, nervous, restless and tense for the past few days. These anxiety symptoms are affecting her responsibilities and her relationships with her children, spouse and in-laws. In addition, the pandemic is the source of another mental health problem, obsessive-compulsive disorder (OCD), a disorder comprising unwanted repetitive thoughts followed by repetitive compulsions (behaviours). Saad, a university student, complains of recurrent thoughts about being infected with the virus, which have a negative influence on his mental well-being. To divert his attention from such worrying thoughts, he constantly washes his hands and sanitizes his surroundings, consuming time he is supposed to spend on his studies and other productive work.
Pakistan is a third-world country with fragile economic conditions and a fragile growth rate. The pandemic, which is reversing the growth of the world's major powers, is pushing Pakistan toward economic stagnation. Moody's Investors Service, one of the top three global rating agencies, has anticipated that Pakistan's economy will slow, with sluggish gross domestic product growth of around two percent. This leads to a drop in investment opportunities and the downsizing of employees. Farhan, an employee of an Islamabad-based high-tech company, has been given two options: resign or work for a minimal salary. He has a family to support and expenses to look after. The resulting stress has a direct influence on his mood; he complains of anger management issues, sleep-deprived nights and physical pain in his body.
In addition, mental health is a neglected domain of the health sector, despite the famous saying that 'there is no health without mental health'. People are complaining of psychological issues including anxiety disorders, panic attacks, stress, and sleep disorders. In Pakistan, the absence of mental health care facilities creates further problems. Stress caused by economic and financial difficulties is producing extreme anger in people, as can be observed in two recently reported incidents from Peshawar: in one, a man killed his niece because she was making noise in the house; in the other, a husband murdered his wife because the food was not warmed properly. These are glimpses of a society that is not taking mental health seriously. Because of the pandemic, many disorders are emerging in the population, imposing a huge burden on healthy functioning, and various psychologists and counsellors are sharing coping strategies to deal with these grim problems.
Their first and foremost piece of advice is to eat a healthy, nutritious diet, which is necessary to keep neurotransmitters firing; one of the crucial neurotransmitters is serotonin, which helps regulate mood, sleep and the inhibition of pain. Secondly, exercise at home; free online exercise platforms are available, and exercise is a crucial mechanism for boosting mood by enhancing physical and mental health. Thirdly, a sleep routine is most important in this stressful time: many people are complaining of disturbances in their sleep cycle, which is a source of severe headaches and migraines, and sound sleep helps relieve pain and manage stress. Fourthly, setting a daily routine indirectly supports the sleep routine and helps maintain a purposeful life at the moment. Lastly, deep breathing exercises and meditation are helpful for easing anxiety and stress during this difficult period.
Aneesa shah
Interested in human behavior.
Honors In psychology with majors in counselling and clinical psychology.
© 2020 Ansa
|
how to use visualizing and verbalizing a research-based strategy for reading comprehension and autism
academic strategies to teach children on the autism spectrum
how to use graphic organizers
what is the difference between asperger's and autism
behavior strategies for autism spectrum disorder
picture schedules
my child is having problems in school
using timers for adhd
|
The Psychology Behind Long-Term Marriages (Essay Sample)
The present paper is about the psychological background of healthy and long-term marriages. Because no other person has as much influence on our health and well-being as our spouse, love and marriage are undoubtedly the most researched themes in psychology. Some persons who have been married for a long time admit that the intensity of romantic love in long-term partnerships lessens when compared to couples who are initially in love. Despite the fact that many marriages are failing, there are many happy and contented families out there. The purpose of this study is to discover what characteristics make a marriage last and to explain the behavior patterns that lead to long-lasting and happy relationships. Its goal is to answer the question of how pleasant and long-lasting relationships are formed and maintained.
Scientific paper
Meri Arsenyan
HDP 281
Professors: Khachatur Gasparyan, Tatevik Arakelyan
April 26, 2020
Long-lasting Marriages and the Psychology Behind Them
Love and marriage are probably the most studied topics of psychology, because no other person influences our health and well-being as much as the spouse does. Some people in long-term marriages confess that in lasting relationships the intensity of romantic love decreases compared to couples newly in love. Despite the fact that a lot of marriages are breaking down, there are also many cases of happy and contented families. The present paper is intended to find out what factors make a marriage last and explain the behavior patterns behind long-lasting and rewarding marriages. The question it aims to answer is the following: how happy and enduring relationships are built and maintained.
A couple in love and looking forward to their wedding can never imagine a day when they will not be happy together. According to Maslow's hierarchy of needs, social intimacy is one of the basic human needs and contributes to physical and mental well-being (Leeuw, 2015). Still, not everyone who has found his or her love also gets the chance to keep it throughout a lifetime. It may seem that if you love a person, nothing will change no matter how many years pass; however, sharing one's daily routine with another person for a long time can be challenging. Nearly half of marriages in the US end in divorce, and among those who stay together, marital unhappiness is widespread (Gregoire, 2014). In long-term relationships, love commonly turns into companionship and friendship. The good news is that, according to scientific research, love can endure for many years after marriage. Long-lasting and happy relationships are not just a matter of luck; they are the result of two-way effort and commitment (The Keys to a Successful Marriage, 2020). Psychologists observing marriage and love have compiled a list of factors and theories contributing to marital happiness and satisfaction.
It is scientifically established that we have a chance of keeping romance alive for life. Oxytocin, labelled the "cuddle hormone", boosts happiness in our bodies when we hug our partner and makes us feel closer; our brain wants more of that hormone, which encourages us to stay with our partners (Miller, 2020). In a 2012 study involving couples married for a decade, 40 percent of participants reported being intensely in love. Furthermore, among couples that had been together for more than 30 years, 35 percent of men and 40 percent of women stated they were strongly in love (Gregoire, 2014). Surveys are not the only source of evidence of long-term love: research in neuroscience has revealed that deep romantic love can remain even after many years of marriage. A study conducted in 2011 examined the brain regions responsible for love and affection in people who were in long-term marriages (with an average of 20 years) and compared them with people who had recently fallen in love. The brain activity was similar in both groups. According to psychologist Adoree Durayappah, "Our brains view long-term passionate love as a goal-directed behavior to attain rewards" (Gregoire, 2014), which includes a decline in anxiety and stress levels and an increased feeling of safety and calmness.
Quality communication between partners is a fundamental component of building a lasting relationship. Talking with your spouse is one of the best ways to keep your marriage healthy. Having intimate conversations with each other helps partners feel emotional closeness and understand what they can expect from one another. The problem for many couples is that, years after marriage, the list of conversation topics narrows down to the kids' problems and the bills. Psychologist John Gottman suggests that couples build 'love maps': in other words, to know each other better, it is essential to communicate by asking deep questions (Miller, 2020). According to marital therapist Edward Waring, "intimacy is the dimension which most determines satisfaction with relationships which endure over time" (Leeuw, 2015). He suggests cognitive self-disclosure as a way of increasing intimacy. This approach involves open conversations about one's beliefs and attitudes, needs and ideas, so it is about knowing each other better while also increasing self-awareness (Leeuw, 2015).
Finding the balance between togetherness and independence is another key aspect of harmonious relationships. Happy couples spend a lot of time together, while at the same time each partner maintains his or her autonomy. Boredom is a major problem in long-term relationships, so finding ways to keep things interesting strengthens the bond between partners. According to research by the Pew Research Center, 64% of Americans said that having shared interests helps them stay married (Miller, 2020). Learning new skills, for example snowboarding, or going on a trip together is a wonderful idea, but doing daily activities like cooking or exercising together is also very beneficial. The same research found that 56% of the U.S. population does housework together to maintain the relationship (Miller, 2020). No matter the activity, it is the sense of togetherness and shared memories that strengthens the couple.
While intimacy is extremely important for a healthy relationship, having some "me time" is also essential. According to O'Leary's study, people who are generally pleased with their life bring that happiness into the relationship (Whitbourne, 2012). In other words, it may sound selfish to enhance one's well-being through time at the gym or a walk alone, but the happiness of each partner improves the quality of the relationship. Being happy outside the relationship brings happiness into the relationship.
Gratefulness is like a nurturing pill for a relationship. When people live together for many years, they sometimes stop noticing the things they used to value and take them for granted. Showing gratitude and thinking positively about one's partner is one of the central pillars of a happy marriage. Appreciating the little pleasant things about one's partner every day increases the intensity of love (Parker, 2002). Saying 'thank you' for simple things like preparing dinner, washing the dishes or looking after the children improves the quality of the relationship. Even in times of disagreement and crisis, lifelong couples focus on the favorable traits of the partner; by appreciating all the good in each other, it is easier to resolve a problem (Leeuw, 2015). Moreover, according to Gottman, criticism and resistance to discussing problems put severe pressure on a marriage. There are misunderstandings in every family, and having disagreements is natural; however, it is important to separate fair arguments from subjective and disrespectful remarks. Couples who know how to argue constructively and compromise tend to stay together longer (Miller, 2020).
Decades of psychologists' efforts have provided us with a list of recommendations on how to live a happy marital life. One of the most extensive theories is the "Triangular Theory of Love" developed by Sternberg. This theory is like a guide to a happy marriage because it explains the three components on which relationships mostly depend, and all of the advice described in this paper falls under one or more of these elements. They are commitment, intimacy and passion. Commitment is the decision to love and to keep loving even in times of difficulty. Passion is responsible for the "butterflies in the stomach" and physical desire. The third element of the triangle, intimacy, stands for emotional attachment (Leeuw, 2015).
In order to be harmonious, a relation
Other Topics:
• Freud's Concept of Narcissism and Mourning
Description: The term narcissism is defined as the value of adoration that people accord themselves with being tools of sexual desire. Freud, a Greek mythologist, postulated narcissism as a factor that causes people to have affection towards a specific object. The word narcissism emanated from Greek mythology, where...
2 pages/≈550 words| 4 Sources | APA | Psychology | Essay |
• Evaluating Various Human Development Theories
Description: Social development theories were established to explain how cognitive development can be achieved through social interactions. The interactions can be between friends, families, or colleagues that share common interests. Some of the popular ones include the maturationist, psychoanalytic, behaviorism...
• Human Services: Model of Service Delivery
Description: The service delivery model comprises public, medical, and human service models. Depending on the nature of the problem and the intensity, the three models are practical to humanity. Human services revolve between environment and individual, trying to balance both of them. For instance, the behavior...
2 pages/≈550 words| 3 Sources | APA | Psychology | Essay |
|
Besides the Amharic symbols, I will also write about other things like basic words and some grammar.
In today’s post I’m going to talk about numbers, which are one of the first things we learn when studying a new language, at least how to count from 1 to 10. Numbers in Amharic have their own characters, and even though Arabic numerals are widely used in Ethiopia today, the traditional notation is still in use. To start, I’m going to list the numbers from 1 to 10 with their signs and transliteration.
#    Sign   English   Transliteration
1    ፩      one       and
2    ፪      two       hulätt
3    ፫      three     sost
4    ፬      four      aratt
5    ፭      five      ammïst
6    ፮      six       sïddïst
7    ፯      seven     säbatt
8    ፰      eight     sïmmïint
9    ፱      nine      zät’äñ
10   ፲      ten       assïir
These are the signs for the following numbers:
#      Sign    English    Transliteration
20     ፳       twenty     haya
30     ፴       thirty     sälasa
40     ፵       forty      arba
50     ፶       fifty      amsa / hamsa
60     ፷       sixty      sïlsa / sïdsa
70     ፸       seventy    säba
80     ፹       eighty     sämanya
90     ፺       ninety     zät’äna
100    ፻       hundred    mäto
1.000  (none)  thousand   ših / ši
To say 24, for example, we say "20-4", or "haya aratt"; 47 would be "40-7", or "arba säbatt", and so on. For the numbers from 11 to 19, instead of saying "assïir" for ten, you say "asra" followed by the corresponding number from 1 to 9, for example "10-3", "asra sost", for 13.
For the hundreds the process is similar; for example, to say 536 we say "5-100-30-6", or "ammïst mäto sälasa sïddïst".
Finally, to say 1985, we say "10-9-100-80-5", or "asra zät’äñ mäto sämanya ammïst".
Neither 1.000 nor 1.000.000 has a symbol of its own, so to write the number 8.593 we say "8-thousand-5-100-90-3", or "sïmmïint ši ammïst mäto zät’äna sost".
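The composition rule described above is mechanical enough to sketch in code. Below is a minimal Python sketch covering 1 to 999 only; the spellings are copied from the transliteration tables above, and the function itself is just an illustration, not part of the original post.

UNITS = {1: "and", 2: "hulätt", 3: "sost", 4: "aratt", 5: "ammïst",
         6: "sïddïst", 7: "säbatt", 8: "sïmmïint", 9: "zät’äñ"}
TENS = {10: "assïir", 20: "haya", 30: "sälasa", 40: "arba", 50: "hamsa",
        60: "sïlsa", 70: "säba", 80: "sämanya", 90: "zät’äna"}

def amharic(n):
    """Compose 1-999 the way the post describes: hundreds, then tens, then units."""
    parts = []
    if n >= 100:
        hundreds, n = divmod(n, 100)
        if hundreds > 1:                 # 100 itself is just "mäto"
            parts.append(UNITS[hundreds])
        parts.append("mäto")
    if 10 < n < 20:                      # 11-19 use "asra" plus the unit
        parts += ["asra", UNITS[n - 10]]
    else:
        if n >= 10:
            tens, n = divmod(n, 10)
            parts.append(TENS[tens * 10])
        if n:
            parts.append(UNITS[n])
    return " ".join(parts)

print(amharic(24))    # haya aratt
print(amharic(536))   # ammïst mäto sälasa sïddïst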
|
The Big Five Personality Traits ~ CANOE system
Five Factor Model of Personality
Personality is a unique aspect of every human being. Statistically, no two people have the same personality. This means that there are different types of personality traits for different people.
According to both traditional and more recent studies, these traits can be classified into five simple yet distinct personality dimensions. The history behind this is interesting: early studies catalogued roughly 4,000 different personality traits, later work reduced them to 16, and further research narrowed those down to the five most important and widely accepted factors.
These five personality traits are explained by the CANOE system that includes Conscientiousness, Agreeableness, Neuroticism, Openness and Extraversion. Let us know about all these traits in detail.
Big Five Personality Traits / Five Factor Model
Conscientiousness (C)
This personality trait involves high levels of thoughtfulness, good impulse control, and goal-directed behaviour. People with this trait tend to take an organised, structured approach to life. They are often found in science, scientific work, and financial services, where their detail orientation and organisational skills help them excel. A highly conscientious person generally plans ahead and considers how others will be affected by their actions. They are commonly seen in project management and human resource departments and teams, because they can balance core responsibilities with the development of the entire team. They manage to balance work and family, and they keep their focus in both places.
Agreeableness (A)
A person with this trait shows the virtues of trust, love, kindness and altruism. They have strong philanthropic tendencies and love spending their time helping others. Sharing problems, comforting others, and cooperating with them in hard times are hallmarks of Agreeableness; you could also call it empathy to some extent. People with this trait often build careers in service and charitable organisations, working as charity workers, psychologists, and medical professionals. Even people who lend a helping hand serving soup in kitchens and shelters often fall under this category. They work as social workers and love dedicating their time to others.
Neuroticism (N)
This personality trait is characterised by moodiness, sadness and emotional instability. Neuroticism does not mean antisocial behaviour or a severe psychological problem; it is an emotional and physical response to things like stress, threats, and bad moments in a person’s life. People high in Neuroticism experience uneven, unexpected mood swings, irrational beliefs, and anxiety. An individual whose mood shifts from day to day depending on the situation tends to score high in neuroticism. These people carry a lot of stress in their career as well as their personal life, and anxiety plays a major role in their daily functioning. The trait reflects how well an individual copes with daily stress and perceived risks; highly neurotic people tend to overthink and then feel suffocated within their own space.
Openness (O)
Openness is the trait of the Five Factor Model of Personality that covers imagination and insight. Open people are eager to see the world, experience new things and learn about various aspects of life. They have a wide range of interests and love to be on the run, chasing thrills and adventures, which leads them to make interesting decisions in their lives. They also tend to be very creative and enjoy stepping out of their comfort zone, drawing on lateral and abstract thinking when making decisions. They are the type of people who will order the most exotic food in the restaurant, travel the world, and do things that many others might never think of in their entire lives.
Extraversion (E)
This is one of the most easily identified and widely known traits. You can describe someone as an ‘extravert’ when the person draws a lot of energy from the company of other people; it is the opposite of Introversion. Characteristics of this trait include talkativeness, assertiveness, and a high degree of emotional expression. These qualities make extraverts easy to recognise through their social interactions. They enjoy meeting new people and building large friend groups, and they are often found working in sales and marketing, politics, and education.
The Big Five Personality Tests / Five Factor Model
These five personality traits (the five factor model of personality) can be measured using various tools and procedures. The tests pick up both major and minor differences across the five traits. A set of questions is presented, and the person replies with objective, scaled responses. Scores have been found to be reliable across repeated administrations, which makes this one of the most scientific, valid, and reliable methods of testing personality.
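As an illustration of what "objective responses" means in practice, here is a minimal Python sketch of how such questionnaires are typically scored; the items, trait keys, and 1-5 scale are illustrative assumptions rather than any specific published inventory.

LIKERT_MAX = 5  # responses run from 1 (strongly disagree) to 5 (strongly agree)

# Each item belongs to one trait; reverse-keyed items are scored as 6 - response.
ITEMS = [
    ("I pay attention to details.",         "Conscientiousness", False),
    ("I sympathize with others' feelings.", "Agreeableness",     False),
    ("I get stressed out easily.",          "Neuroticism",       False),
    ("I have a vivid imagination.",         "Openness",          False),
    ("I am the life of the party.",         "Extraversion",      False),
    ("I keep in the background.",           "Extraversion",      True),
]

def score(responses):
    """Average the (possibly reverse-keyed) answers for each of the five traits."""
    totals, counts = {}, {}
    for (text, trait, reverse), answer in zip(ITEMS, responses):
        value = (LIKERT_MAX + 1 - answer) if reverse else answer
        totals[trait] = totals.get(trait, 0) + value
        counts[trait] = counts.get(trait, 0) + 1
    return {trait: totals[trait] / counts[trait] for trait in totals}

print(score([4, 5, 2, 4, 3, 2]))   # Extraversion averages 3.5 after reverse-keying the last item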
Researchers have also compared the personality traits of men and women to see how much they differ. Recent studies suggest that men and women are broadly similar in their personalities and ways of thinking, although women tend to score somewhat higher on Extraversion, Neuroticism and Agreeableness than men. These patterns tend to show up consistently over time in both genders.
Researchers say that if you carry all five of the traits explained above, you may be an empath.
The five factor model of personality also describes a person’s behaviour. You may also find people with mixed personalities: one trait may stand out strongly enough to place them under one personality type, while the rest of their traits lean toward other categories.
Every personality trait is unique in its own way, which makes people differ from one another in their thoughts and points of view. Learn to embrace yourself and your thoughts. Remember, you are a unique individual with a beautiful personality embedded within you.
|
What does the phrase "knot it" imply in Trifles? Is there any difference between quilting and knotting?
What does quilt it or knot it mean in Trifles?
The quilt and Minnie’s decision to finish it in one of two styles—quilting or knotting—is developed as a metaphor for her innocence or her guilt. The act of knotting a quilt is linked to the act of killing a man with a rope around his neck.
What meaning does the phrase knot it possess besides quilting?
What if any meaning does the phrase knot it possess besides quilting? … A: The phrase means marriage, referring to the phrase "tying the knot." C: The phrase describes Mrs. Wright’s current emotional state, as if she is tied up in knots.
What does the phrase knot it mean in a jury of her peers?
In this scene, Martha Hale’s statement that she was planning to “knot it” symbolizes the act of killing by tying a rope around another’s neck.
Why is the quilt significant in Trifles?
The quilt represents her mental instability. Since she was always home alone, she spent most of her time making quilts. In the play, Mrs. Hale points out that the one she was just working on was so nice and even, and then the pattern suddenly went all over the place.
What is ironic about the ending of Trifles?
Written in the early 1900s, “Trifles” deals with the rights of, expectations for and assumptions about women in society at the time. In an ironic twist, the audience knows that the women have solved the murder mystery while the men remain oblivious of the truth because of their assumptions.
What does the dirty towel symbolize in Trifles?
The Dirty Towel Symbol Analysis
This is one of many out-of-place objects in Minnie’s kitchen that cause George Henderson to accuse her of being a poor housekeeper. … In addition, the mess in the kitchen symbolizes the ways in which the men in this play expect women to fulfill certain gender roles.
What does knotting a quilt mean?
When a person knots a quilt, he or she uses fewer stitches, sewing together all three layers at once and then securing the stitching with a knot at the end. The men repeatedly make fun of the women for wondering whether Mrs. Wright was going to sew or knot her quilt.
What is the symbolic significance of Mr Wright killing Mrs Wright’s bird quizlet?
- Mr. Wright killed his wife’s desire to have children. - The bird represents peace; Mr. Wright destroyed the peace of the household by constantly fighting with his wife.
What is the symbolism in trifles?
The most important symbol in Trifles is Mrs. Wright’s dead bird. This symbolizes the appalling treatment that Minnie Wright’s husband, John, had meted out to her and that led to her killing him. The canary represents Minnie’s true character and how she used to be before her marriage to John.
What is the theme of a jury of her peers?
The main theme for A Jury of Her Peers is Gender Roles; much of the tension in the story results from what the women understand and what the men are blind to.
Why is the quilt important in a jury of her peers?
It carries a double meaning in that the men who enter the house to explore the crime scene are already biased against Minnie because the house was untidy. To add to this, a badly sewn quilt is significant in that women often quilt to relax and to feel serene in the comfort of their own home.
Who does the bird symbolize in trifles?
Well, given your choices, I would answer that the bird in Trifles represents Mrs. Wright. Minnie is a beautiful caged bird in her marriage to the dark and unforgiving Mr. Wright.
What do the broken jars symbolize in Trifles?
The canning jars are broken as Minnie feared, and this symbolizes the inevitability of her conviction. The women’s decision to lie to Minnie is also the first clear example of these women’s connection with another woman-in-need to the point of working against the concerns or preferences of their husbands.
Why does Mrs Hale think Mrs Wright is innocent?
Hale thinks that Mrs. Wright’s worries about her preserves indicate her innocence because a woman who had murdered her husband would not be concerned over such trivial matters.
Why is Mrs. Peters’s statement that Mrs. Wright was going to knot the quilt ironic?
Peters’s statement, and later Mrs. Hale’s, that Mrs. Wright was going to knot the quilt is ironic because the audience knows that knotting the quilt refers to the way that Minnie Wright killed her husband.
|
Modeling behavior: how children copy their parents
Nearly all children try to copy what their parents are doing. Children mostly learn by watching and trying to imitate what an adult does. Sometimes the person they copy doesn’t have to be someone around them — models can be friends, teachers, or even movie characters.
What is modeling of behavior?
Behavior modeling is a natural learning process whereby someone observes the behavior of another and then imitates it. It is sometimes called observational learning or social learning. This form of learning needs no direct instruction, and most of the time, the model doesn’t even know that someone is learning from them.
There are four steps in modeling of behavior:
1. Attention: Your child is observing what you are doing
2. Retention: The child understands and remembers your behavior
3. Reproduction: Your child tries to replicate what they remember
4. Motivation: This is what prompts your child to imitate the action — the more your child looks up to the model, the stronger the motivation (that’s why babies copy toddlers, and teenagers copy pop stars)
Which behavior do children model?
Children are more likely to model a behavior where there is some form of reinforcement for it. For instance, if your child sees another child jump down the stairs and get praised for it, they are more likely to copy this behavior. But if the child was scolded or ignored for jumping down, they are less likely to copy the behavior.
While they mostly learn from you and other kids around, children can also learn from YouTube videos and movie characters. We, therefore, should be mindful of what a child watches. On TV, bad and aggressive behaviors are often reinforced, which means your child may tend to imitate them.
Tips for using modeling to teach positive behaviors:
|
Group B Strep and Pregnancy: Explained by a Labor and Delivery Nurse
Alright, Mama. Today we’re talking all about Group B Strep and pregnancy. Group B Strep is a type of bacteria that’s naturally present in as many as 1 in 4 women!
It’s not harmful to you, but it can cause serious issues for your baby if it infects them during birth. For this reason, Group B Strep screenings are a standard part of prenatal care here in the US.
And with an occurrence rate of up to 25%, I want to make sure you all understand exactly what Group B Strep is all about.
Here we’re going to learn the details about this infection, how GBS disease can impact your baby, and how your provider will screen for GBS. Lastly, we’ll take a look at what it means if you’re GBS positive.
If you do have Group B Strep, your birth will be a little bit different, because antibiotic treatment is highly recommended. Luckily, it’s not as big of a deal as it sounds.
If you haven’t already, please feel free to join over 350k new moms and follow me here on Instagram for awesome pregnancy + birth tips!
What is Group B Strep (GBS)?
Group B Strep is a strain of bacteria that naturally occurs in as many as 1 in 4 women. This bacteria lives in the vagina and rectum, but rarely (if ever) has any negative effects on the women who have it.
In fact, most women are totally unaware they’re “colonized” until their Group B Strep screening comes back positive.
Is it contagious?
According to March of Dimes, Group B Strep is something that’s just kind of naturally present in your body. It isn’t contagious in adults, and it’s not something that you can contract from sex, water, food, or things you touch.
Then why is Group B Strep bad?
The problem with Group B strep is that the bacteria can pass from mama to baby during labor and delivery. This is actually pretty rare, only happening 1-2% of the time.
But, antibiotic treatment is highly recommended to prevent GBS infection to baby because the outcomes for baby can be dire.
How does GBS affect baby at birth?
Okay, so we know that if you have GBS it’s important that it’s not passed to baby, but why is this so important?
When GBS is passed to baby during birth, they are then said to have a GBS infection or GBS disease.
Symptom onset of a GBS infection to baby can occur within the first 7 days of life (early-onset GBS), or later when they are between 7 days and 3 months old (late-onset GBS).
Early Onset GBS in Newborns
With early-onset GBS, symptoms usually present within 12 to 48 hours after birth. This type of infection passes to baby during birth. Antibiotic treatment during labor significantly reduces baby’s risk of contracting a GBS infection during birth, but it’s important to know the warning signs just in case.
Symptoms include fever, drowsiness, and trouble breathing. These symptoms can quickly develop into life-threatening infections including sepsis (blood infection), pneumonia (lung infection), or meningitis (infection of the fluid lining the brain).
Boston Children’s Hospital states that nearly 75% of all GBS infections in babies are considered early onset.
Late Onset GBS in Newborns
Late-onset GBS occurs in babies that are older than 7 days, usually up to 12 weeks old. This type of GBS infection is not passed to baby during birth. Antibiotic use during labor and delivery doesn’t prevent the infection either.
In these cases, it is often unknown how baby became infected with GBS. The U.S. CDC explains that scientists and experts don’t fully understand GBS transmission aside from during birth. But what we do know is that the immature immune systems of very young babies seem to put them at risk.
Symptoms of late onset GBS in newborns include coughing, congestion, fever, drowsiness, and neurological symptoms like seizures. Like early-onset, this infection can cause pneumonia, sepsis, or meningitis.
What if my baby gets a GBS infection?
On the rare chance that your baby does contract early-onset GBS after birth, modern medicine does a really great job of treating these complications.
With treatment, your baby has a 95% survival rate even if they do contract GBS disease. The risk of death is higher among premature babies.
It’s important to be aware of the long term development issues that GBS infection can cause for your baby, even with survival.
In particular, meningitis (an infection of the fluid around the brain) can lead to Cerebral Palsy, hearing problems, learning problems and delays, and seizures.
Does Group B Strep pose a risk to the birthing mama?
It is relatively rare for healthy women to experience any complications from GBS. Remember, this is considered a naturally occurring bacteria after all. However, during pregnancy and birth, it does occasionally cause complications.
GBS can lead to a uterine infection during or after pregnancy, increased UTIs during pregnancy, and/or a serious infection of the membranes surrounding baby (known as chorioamnionitis).
If you have any signs of infection during pregnancy or during the days and weeks after birth, it’s important to contact your provider immediately! These symptoms include fever, pain in your abdomen, increased heart rate, changes in blood pressure, or anything else feeling off.
When will I be tested for Group B Strep during pregnancy?
Okay, so now that you understand the risks involved with Group B Strep, it makes sense why GBS screening is a routine part of prenatal care, am I right? Antibiotics are highly effective at reducing baby’s risk of infection, so it’s important to know what we’re dealing with.
Luckily, it’s super easy to screen for GBS. The CDC recommends that all pregnant women be screened for GBS between weeks 35 and 37 of pregnancy. Remember, as high as one out of every four women are colonized, so it’s important to check.
Why is the screening at 35 to 37 weeks?
Providers screen for GBS when you are right before (or at) full term because it gives the most accurate picture of whether you are colonized or not. Interestingly, the presence of GBS bacteria can kind of come and go in adult women. Another way of saying this is that GBS colonization can change.
In fact, being GBS positive during one pregnancy, doesn’t necessarily mean you’ll be positive at the next (although this does seem to be the case most of the time). What’s more, you can develop it later in pregnancy, which is why it’s recommended to test so close to your birth.
Group B Strep test: What to expect
The Group B Strep test is seriously no big deal. It’s quick, painless and best of all – highly accurate!
Basically, you or your provider will use a giant swab (picture a giant Q-tip!) to swipe from front to back across your vagina and rectum. It’s then placed in a tube or bag and sent off to the lab to check for the presence of Group B Strep.
It’s that simple!
In some practices your provider or nurse will swab you on the table, but in many others, you’ll be sent into the bathroom with a swab and some simple directions.
What happens if I’m GBS positive?
If you test positive for GBS, you’ll need IV antibiotics during labor to help rid the bacteria from your body and protect baby. The standard safeguard is that you’ll need IV antibiotics at least 4 hours before you deliver. This gives the IV medication enough time to get to baby and provide protection, if baby is exposed to the GBS bacteria during labor and delivery.
According to the CDC, if you receive these antibiotics during labor within 4 hours of delivery, baby’s chances of contracting GBS infection is about 1 in 4000 (as opposed to 1 in 200 if antibiotics aren’t administered!).
A lot of mamas wonder why they can’t just take antibiotics leading up to birth…why does it have to be an IV?
Unfortunately, GBS grows back quickly. The only way to knock it out for the entire labor and delivery is with antibiotics administered via IV.
Group B strep positive when to go to hospital
So, if you’re GBS positive, you know that we ideally want to get you a dose of antibiotics within 4 hours of delivery. As you probably know, it’s impossible to predict how long any woman’s labor is actually going to be!
Usually the guidance is that if your water breaks and you are GBS positive, you need to come right in! This is because after your water breaks, labor can ramp up really quickly (but it’s not always the case).
Otherwise, if this is your first birth, the guidance isn’t too different. Most providers say it’s safe to wait to come in until you are in strong, active labor.
If this is a subsequent birth, you may want to err on the side of caution and come a bit sooner than the standard 4-1-1 (contractions 4 minutes apart, lasting for a full minute, for at least one hour) just in case things progress rapidly.
As with most things like this, you’ll need to ask YOUR provider the right plan for you.
What happens if I don’t make it to the hospital in time to administer antibiotics?
If for some reason you do not make it to the hospital to get your IV antibiotics in a timely manner, it’s OKAY! Yes, your baby is technically at an elevated risk for infection.
Luckily, modern medicine is highly effective, and you will be in the best hands for observation should symptoms arise!
In most hospitals, what will happen is they will want to watch baby in the hospital for any symptoms for at least 48 hours. It’s possible that this can lead to a slightly longer-than-average hospital stay.
They also MAY want to draw labs to see if baby’s blood counts indicate any signs of infection.
What’s birth like if you have Group B Strep?
There is a common misconception that if you’re GBS positive you have to be hooked up to an IV for your entire birth. Well, I’m here to tell you that’s 100% not true!
IV infusions are only 30 mins, so you can get the antibiotics, and then if you wish to be disconnected from the tubing, that’s totally fine! However, if any other IV medications are needed (like IV fluid, Pitocin, or Magnesium Sulfate) you may have to remain connected to the IV pump.
This means you can still totally utilize any and all labor positions and pain coping strategies you want to have your best birth! This will be especially important if you are planning for an epidural-free birth.
Related: 25 Tips for A Natural Birth
Tips for you and baby if you’re GBS positive
Having GBS at birth is certainly not the end of the world. In fact, we’re very fortunate that it’s so easy to identify and that effective treatments exist. Anything that can reduce the risk of complications in our sweet newborns is a seriously amazing thing!
However, I totally get that some mamas are bummed out about the idea of antibiotic use during birth – even if it doesn’t really impede the actual event too much.
Are there negative impacts of antibiotic use during birth?
You see, some research does indicate that antibiotic use during labor can delay the immediate production of healthy gut bacteria in babies. But the same research shows that by 12 weeks, their gut bacteria is just as developed as that of babies who weren’t exposed to antibiotics.
There is also some very new (and interesting) research occurring about the early development of a healthy microbiome and its effect on metabolic and immunologic processes later in life (source).
This may indicate a link between microbiome development delays and an increased risk for diseases such as obesity, allergies, inflammatory bowel disease, and even some cancers. I bring this up with a huge disclaimer that this research is very new and contains a LOT of unknowns.
And above all else – the use of antibiotics for the prevention of GBS infection FAR outweigh the negative impacts of antibiotics, so I am in no way suggesting you skip them if you are GBS positive!
What can I do to mitigate the effects of antibiotic use during birth?
One of the best things you can do is practice LOTS of skin-to-skin care and breastfeed, if possible.
There’s a lot of fascinating and positive research that shows how skin-to-skin care and breastfeeding can support the development of a healthy microbiome in C-section babies. And the same school of thought supports that it can help babies born under antibiotic use, too!
What’s more, YOU can probably benefit from taking a probiotic after birth, too. Like baby, your gut bacteria may be kinda wiped.
Some women also may be at an increased risk to develop thrush after heavy antibiotic use or have digestive issues. A probiotic can help with this!
Group B Strep unpacked
Well, Mama, there you have it! Whether your GBS test is at your next appointment, or you’ve been deemed Group B Strep positive you’ve now got the whole lowdown on why this screening is so important.
As you now know, antibiotics are a wonderful line of defense against infection. A positive screening means that your provider can make sure you and baby receive the best possible care to keep you both safe during delivery!
Were you GBS positive? How did it affect your birth experience? I’d love to hear about it in the comments below!
Happy Birth!
Liesel Teen, BSN-RN
Founder, Mommy Labor Nurse
Meet Liesel Teen
Hi there. I’m Liesel!
|
Bacteria may hold key for energy storage, biofuels
The answer may come in a small package: a bacterium called Shewanella oneidensis. The microbe takes electrons into its metabolism and uses the energy to make essential precursors for ‘fixing’ carbon, which occurs when plants or other organisms take carbon from CO2 and add it to an organic molecule, usually a sugar. Barstow is working toward engineering a new bacterium that goes a step further by using those precursor molecules to make organic molecules, such as biofuels.
A new study, “Identification of a Pathway for Electron Uptake in Shewanella oneidensis,” published Aug. 11 in Communications Biology, describes for the first time a mechanism in Shewanella that allows the microbe to take energy into its system for use in its metabolism.
“There are only a very small number of microbes that can really store renewable electricity,” said Barstow, assistant professor of biological and environmental engineering in the College of Agriculture and Life Sciences and the paper’s senior author. He added that even fewer microbes can fix CO2.
“We want to make one,” Barstow said. “And in order to do that we need to know the genes that are involved in getting the electrons into the cell.”
In the study, the researchers used a technique called ‘knockout sudoku,’ which Barstow and colleagues invented to allow them to inactivate genes one by one, in order to tell their functions.
“We found a lot of genes that we already knew about for getting electrons out of the cell are also involved in getting electrons in,” Barstow said. “Then we also found this totally new set of genes that nobody’s ever seen before that are needed to get electrons into the cell.”
First author Annette Rowe, Ph.D. ‘11, an assistant professor of microbiology at the University of Cincinnati, identified the pathway these genes facilitate that moves electrons into Shewanella’s metabolism.
“When we build a microbe that can eat electrons, which we are doing now, it will incorporate those genes,” Barstow said. He plans to start by adding the genes to Escherichia coli, a bacterium that is highly studied and easy to work with. Engineered bacteria powered by electrons open the door to using renewable energy for making biofuels, food, and chemicals, and for carbon sequestration.
Co-authors include Farshid Salimijazi, a doctoral student in Barstow’s lab; Leah Trutschel, a doctoral student in Rowe’s lab; and Michael Baym, assistant professor of biomedical informatics at Harvard.
The study was funded by the Burroughs Wellcome Fund, the U.S. Department of Energy and the U.S. Air Force Office of Scientific Research.
|
Descriptive and narrative essay
What are the differences between narrative and descriptive
Assignment 1. The Descriptive Narrative Essay. The requirements of this essay are as follows: 1. The essay should be around 3 pages, but at least two (2) full pages. 2. After reading Roald Dahl's short story, "Lamb to the Slaughter," think about a time in your life when you overreacted to something someone else told you, or when someone overreacted to something you said.
Discovering Essay Types: Narrative, Descriptive
Throughout your descriptive essay, you are asked to describe a certain situation, place, thing, or individual. You are supposed to describe each specification that your instructor has asked for. Each of the four styles of writing has a specific purpose, and they all involve different writing skills. You have probably seen them referred to as discourse styles or rhetorical styles in an academic context. Difference between descriptive …
How to write a descriptive and narrative essay? - Tutoropedia
Mar 23, 2015 · Narrative and descriptive essays are two different types of essay writing, where a clear difference between them can be highlighted in terms of the writer's objective in compiling the essay. A narrative is usually where a person tells his or her experiences to the reader. This highlights that a narrative …
Compare & Contrast, Descriptive, and Persuasive Essays
Description is a type of writing, but it is also a writing technique used in narrative writing. Description uses language that allows readers to see, feel, hear, taste, and/or smell the events within the story. When writing a story (narrative writing), description can enhance the readers' interest in your story.
Narrative Essay Outline - Format, Worksheet, and Examples
Oct 24, 2019 · To conclude, a narrative essay is all about narrating a story, while the idea of the descriptive essay is to describe something in a way that the reader can perfectly perceive it for himself. Essayists find it challenging to keep a narrow focus in a narrative essay, while a descriptive essay is difficult in terms of organization.
Difference Between Narrative and Descriptive Essay
Selecting great descriptive essay topics | EssaysLeader
It is like a descriptive essay or an expository essay, where the writer informs the reader about a particular event. It offers writers a chance to think and write about themselves. Elements of a Narrative Essay. Narrative essays rely on personal experiences and are written in the form of a story. When the writer uses this technique, he uses the …
How to Write a Winning Narrative Essay Outline
Apr 10, 2021 · The essay should contain an attention-getting introduction with a thesis, a body which contains the narrative story, descriptive sensory details, transitional words and phrases, and a conclusion which restates the thesis and provides insight into …
What is descriptive and narrative essay?
27+ Descriptive Essay Examples & Samples in PDF | DOC
Free Narrative Essay Examples - Samples & Format
Descriptive Essays - Welcome to CK-12 Foundation
A lengthy description is a key feature of a descriptive text. Different styles are used: a narrative text features concrete words and phrases, while a descriptive one uses abstract notions and concepts. Time and place matter only in narrative texts, while in descriptive ones …
Narrative And Descriptive Writing Ppt - SlideShare
In nature, a descriptive essay is direct. It simplifies complicated scenarios for better understanding. A proficient writer will avoid unnecessary exaggerations and will stick to the rubric. Autobiographical Narrative Essay: This form of narrative essay involves the narration of some memorable event that took place in your life.
11.2 NARRATIVE ESSAY.pptx - Narrative Essay Unit 11.2
Apr 26, 2020 · The main difference between descriptive and narrative essays lies in the structure and purpose of the essay. A descriptive essay is used to describe a subject to present a clear picture of it. As such, it only requires you to describe the item in a logical fashion. A narrative essay's purpose is to tell a …
Thesis and Essay: Narrative descriptive essay vacation and
Descriptive and narrative papers have the same structure. They include an introduction, body section, and conclusion. Analyze clear personal narrative essay examples to see how your piece of writing should be constructed. Note that a thesis statement should be …
Important differences between descriptive - College Essay
Descriptive Narrative Descriptive Short Story. 1001 Words | 5 Pages. The sounds of the city at night mix with the laughter of my friends. Taxis honking, subways rushing under your feet, and buses rumbling, all carrying their cargo of dead-tired, empty-minded passengers, following the …
Descriptive and Narrative -
Descriptive writing paints pictures with words or recreates a scene or experience for the reader. Narrative writing, on the other hand, relates a series of events, either real or imaginary, chronologically arranged and from a particular point of view. In short, descriptive writing describes, while narrative writing tells a story.
How to Write a Narrative Essay | Example & Tips
Narrative vs. Descriptive, Sample of Essays
Review, also, the elements of the Personal Essay, as the personal essay and the narrative essay have much in common. Descriptive Elements. The ability to describe something convincingly will serve a writer well in any kind of essay situation. The most important thing to remember is that your job as writer is to show, not tell.
How to Write a Descriptive Narrative Essay?
Jan 18, 2014 · Descriptive; Persuasive; Reflective; Expository; Narrative; Compare and Contrast Essays. One of the essay types that will help you hone your analytical, observational and critical thinking skills is called a compare and contrast essay. You can find a lot of free compare and contrast essay samples online! This type of essay shows
Uni Writing: Examples of descriptive and narrative essays
65 Narrative Essay Topics To Impress Teacher in 2021
Descriptive Narrative Essay - Expert Custom Writing
Sep 08, 2019 · Descriptive And Narrative Essay Same Thing. Work with the best specialists based on the subject, log in to connect directly with your writer and upload the files you consider necessary, download a document made on the delivery date, and get your jobs done by professionals! +1 (602) 730-1701. Hire.
Descriptive Narrative Descriptive Short Story - 1001 Words
A descriptive essay describes a person, place, or thing for a reason. For example, you might describe a place such as your bedroom and analyze what the items, colors, and mood of your bedroom say about you. A narrative tells a story to make a point.
What is the difference between a descriptive, narrative
Generally speaking, there are four types of essays: argumentative essays, descriptive essays, expository essays, and narrative essays. Narrative essays tell a vivid story, usually from one person's viewpoint. A narrative essay uses all the story elements — a beginning, middle and ending, as well as plot, characters, setting and climax — bringing them together to complete the story. The focus of a narrative essay is …
Descriptive Essay: Topics, Outline and Writing Tips
Narrative dictionary definition | narrative defined
Difference Between Narrative and Descriptive Essay
View 11.2 NARRATIVE ESSAY.pptx from ENGLISH 101 at Oxford High School, Oxford. Narrative Essay Unit 11.2 DEFINITION A narrative essay tells a story. It uses descriptive language to tell
Composition Patterns: Narrative and Descriptive
Guide to Different Kinds of Essays – Gallaudet University
Examples of How to Write a Good Descriptive Paragraph
Good Descriptive Essay Examples for All Students
Narrative vs. Descriptive Essay Example. The imagery of the more descriptive sentence allows your reader to feel that sense of brightness on his/her own face while reading the story. There is a handful of questions you can ask yourself while writing a descriptive …
Descriptive vs Narrative - YouTube
Friendly and knowledgeable support teams are dedicated to making your custom writing experience the best you'll find anywhere. We're always available via text message, email, or online chat to ensure on-time delivery.
Narrative & Descriptive Essay - YouTube
May 01, 2021 · Writing a descriptive narrative essay – 6 essential tips. Try to remember an important occurrence in your life or the life of someone else (a friend, a relative). Write down everything you associate with the story. Now start your draft. Describe the situation in short.
What are Different Types of Essays - PEDIAA
|
Hip, Knee & Leg EBook - US
Step 1: Improve Your Stride
Do you sit for long periods at work or at home? Do you find yourself slightly limping when you first start walking? Prolonged sitting actually changes the way that you walk. Areas of your hips, pelvis and spine lose their strength and flexibility. This changes the way your hips and spine move when you walk, increasing the load on your knees. Furthermore, with sitting, your quadriceps muscles weaken, leading to poor tracking of the patella (kneecap). Finally, your knee and hip cartilage health relies on a squeezing pressure from walking. With less walking in the day, the joint fluid does not circulate properly, leading to a decrease in lubrication of the joints.
How to walk better
Walking is often the best medicine. Learning to walk properly is a big part of the rehabilitation of knee and hip pain. Here are some simple tips to help you improve your gait (the way you walk) and decrease the strain on your joints:
1. When walking, try to maintain a tall posture. This brings your spine and center of gravity into a natural position that allows your joints to move more freely. Imagine a string gently pulling you upwards through the top of your head.
2. When walking, focus on taking a larger, but natural, stride length. Imagine hitting the ground more with your heel.
3. Avoid sitting for long periods. Get up and move around during the day at work or at home. Try to move out of a seated position at least every 30 minutes.
4. Be aware of how your hips are moving when you walk. Do you notice if one or both of your hips are slightly dipping? This is a sign of hip weakness.
5. Wear supportive shoes that provide adequate cushion to the instep. Avoid wearing shoes without ankle support, such as sandals or high heels.
|
Format decimal in mvc view
If you store your number in a variable of decimal type, you can convert it into a string with two decimal places (including .00) this way: Dim dAmount As Decimal : Dim strFormattedAmount As String : strFormattedAmount = dAmount.ToString("N2"). Hi, could you please show me how to get 1.9 to show with two decimal places (1.90)? You can use Math.Round(1 ...
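For readers working in C# rather than VB, here is a minimal console sketch of the same idea. It is only an illustration; the variable names and the use of the invariant culture are my own choices, not part of the original answer.

using System;
using System.Globalization;

class FormatTwoDecimals
{
    static void Main()
    {
        decimal amount = 1.9m;

        // "N2" gives two decimal places with group separators, e.g. "1.90"
        string withSeparators = amount.ToString("N2", CultureInfo.InvariantCulture);

        // "F2" gives two decimal places without group separators, e.g. "1.90"
        string fixedPoint = amount.ToString("F2", CultureInfo.InvariantCulture);

        Console.WriteLine(withSeparators); // 1.90
        Console.WriteLine(fixedPoint);     // 1.90
    }
}

Passing an explicit culture also sidesteps the comma-versus-period confusion raised in the DataGridView question below.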
If all you want to do is display the date with a specific format, just call: @String.Format(myFormat, Model.MyDateTime). Using @Html.DisplayFor(...) is just extra work unless you are specifying a template, or need to use something that is built on templates, like iterating an IEnumerable<T>. Creating a template is simple enough, and can provide a lot of flexibility too.
My DataGridView shows decimal values from the database with a comma as the decimal symbol - I would like it to show a period (.). My regional settings are set to a period; however, the DGV shows a comma. I have tried to change the DGV column format to N2, but no change. What am I missing? Regards
c# - Formatting a Nullable Decimal in a Razor View in ASP.NET - I have an ASP.NET MVC app. The app uses Razor in its views. I am trying to display a decimal?. The twist is that I do not want to show the decimals. In other words, if the nullable decimal value is 567.89, I want to display 567. Currently, I have a plain old:
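One way to get that behavior, sketched here with hypothetical names since the original post is truncated, is a small helper that truncates before formatting, so 567.89 renders as 567 rather than rounding up to 568:

using System;

static class DisplayHelpers
{
    // Renders a nullable decimal without its fractional part:
    // 567.89m -> "567", null -> "".
    public static string WholeNumberOrEmpty(decimal? value)
    {
        if (!value.HasValue)
            return string.Empty;

        // Truncate rather than round so 567.89 becomes 567, not 568.
        return decimal.Truncate(value.Value).ToString("0");
    }
}

In a Razor view this could be called as @DisplayHelpers.WholeNumberOrEmpty(Model.Amount), where Amount stands in for whatever the model property is actually named.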
How do I format a number in an MVC view? I tried the following, with no result at all: [DisplayFormat(DataFormatString = "{0:00 0000 0000 0000 0000 0000 0000}", ApplyFormatInEditMode = true)]. Standard numeric format strings are supported by: Some overloads of the ToString method of all numeric types.
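One likely explanation for "no result at all", offered here as an assumption since the thread is cut off, is that [DisplayFormat] is only honored by the templated helpers such as Html.DisplayFor and Html.EditorFor; writing the model value directly in the view bypasses it. A minimal sketch of the attribute on a view model, with invented class and property names:

using System.ComponentModel.DataAnnotations;

public class InvoiceViewModel
{
    // Picked up by Html.DisplayFor(m => m.Total) and Html.EditorFor(m => m.Total);
    // rendering @Model.Total directly ignores the attribute.
    [DisplayFormat(DataFormatString = "{0:N2}", ApplyFormatInEditMode = true)]
    public decimal Total { get; set; }
}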
Stef. Telerik team. answered on 12 Jul 2016, 04:26 AM. Hi Frank, Please test using the built-in Format text function in an expression like: =Format (' {0:N'+ Parameters.Parameter1.Value+'}',1233311.001234567) where Parameter1 is an integer value in the example. The idea is to build a dynamic format string. I hope this information is helpful.
Jan 03, 2011 · I want the same functionality as per the above quote (format the number in the text field). This works fine except that (e.g. if I type 15) whenever I type the first number it gets formatted as 1.00, and the next number is placed after it as 1.005. Maybe this is because we are formatting it in the change event. I want to apply the same functionality on blur of the text field.
For SQL Server, we have to decide the precision for the decimal type. For example, decimal(10, 3) means 7 integer places and 3 decimal places. However, the precision for the C# code mapping to SQL…
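The snippet above is truncated, but if the mapping is done with Entity Framework Core (an assumption on my part; the entity and property names below are invented), the column type can be stated explicitly so the CLR decimal and the SQL Server decimal(10, 3) stay in sync:

using Microsoft.EntityFrameworkCore;

public class Order
{
    public int Id { get; set; }
    public decimal Weight { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // decimal(10, 3): 10 significant digits in total,
        // 3 of them after the decimal point, leaving 7 integer digits.
        modelBuilder.Entity<Order>()
            .Property(o => o.Weight)
            .HasColumnType("decimal(10,3)");
    }
}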
Dec 27, 2015 · Hi, I am getting the data from the back-end server using OData and it is displaying correctly in my XML view. However, I am not able to format the quantity field with a thousands separator in the second column in the below code. I tried to use "{path:'/number', type:'sap.ui.model.type.Float'}", but how do I add this at line 29 in the below code?
Format pattern with conditional decimals. The image above shows my current display in a chart, currently set up as Expression Default in the Chart Properties (Number tab). I would like to only show up to two decimal places when the decimal part is not 0. Ideally I would like to see: If I try to force the format pattern "Fixed to 2 decimals", it always ...
In this article, we will see how data annotations work in the MVC framework. We will see how annotations go beyond just validation. For keeping this article simple and easily understandable, I am dividing data annotation validation into two parts: predefined data annotation validation in MVC, and custom-defined data annotation validation in MVC.
In the Format sidebar, click the Cell tab, then click the Data Format pop-up menu and choose Currency. Do any of the following: Set the number of decimal places: In the Decimals field, type the number of decimal places you want to display. Numbers rounds the display value instead of truncating the display value.
Convert decimal hours/minutes to time format with formulas. Here are some simple formulas which can help you quickly convert decimal hours or decimal minutes to hh:mm:ss. 1. Select a blank cell where you want to output the converted result, and enter this formula: =A2/24. Then drag the fill handle over the cells you need.
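If the same conversion is needed in C# code rather than in a spreadsheet (this sketch is my own addition, not part of the quoted tip), TimeSpan does the work directly:

using System;

class DecimalHoursToTime
{
    static void Main()
    {
        double decimalHours = 1.75; // 1 hour and 45 minutes

        // Excel's =A2/24 trick works because Excel stores times as fractions
        // of a day; TimeSpan.FromHours expresses the same idea explicitly.
        TimeSpan span = TimeSpan.FromHours(decimalHours);

        Console.WriteLine(span.ToString(@"hh\:mm\:ss")); // 01:45:00
    }
}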
Display and Editor Templates - ASP.NET MVC Demystified. When dealing with objects in an MVC app, we often want a way to specify how that object should be displayed on any given page. If that object is only displayed on one page, we simply write HTML and CSS to lay out and style how that object should be shown to the user.
The Microsoft Access Format function takes a numeric expression and returns it as a formatted string. Syntax. The syntax for the Format function in MS Access is: Format ( expression, [ format ] ). Parameters or Arguments: expression is the value to format; format is optional and is the format to apply to the expression. You can either define your own ...
Just set the Format property for the text box to the date format you want. Open the form or report Layout View or Design View. Position the pointer in the text box with the number or currency. Press F4 to display the Property Sheet. Set the Format property to one of the predefined date formats. In a query. Open the query in Design View.
The Telerik UI Grid HtmlHelper for ASP.NET MVC is a server-side wrapper for the Kendo UI Grid widget. The Grid is a powerful control for displaying data in a tabular format. It provides options for executing data operations, such as paging, sorting, filtering, grouping, and editing, which determine the way the data is presented and manipulated.
|
Deep-seabed mining lastingly disrupts the seafloor food web
Plow tracks are still clearly visible on the seafloor of the DISCOL area 26 years after the disturbance. Credit: ROV-Team/GEOMAR
The deep sea is far away and hard to envision. If imagined, it seems like a cold and hostile place. However, this remote habitat is directly connected to our lives, as it forms an important part of the global carbon cycle. Also, the deep seafloor is, in many places, covered with polymetallic nodules and crusts that arouse economic interest. There is a lack of clear standards to regulate their mining and set binding thresholds for the impact on the organisms living in affected areas.
Mining can reduce microbial carbon cycling, while animals are less affected
An international team of scientists around Tanja Stratmann from the Max Planck Institute for Marine Microbiology in Bremen, Germany, and Utrecht University, the Netherlands, and Daniëlle de Jonge from Heriot-Watt University in Edinburgh, Scotland, has investigated the food web of the deep seafloor to see how it is affected by disturbances such as those caused by mining activities.
For this, the scientists traveled to the so-called DISCOL area in the tropical East Pacific, about 3000 kilometers off the coast of Peru. Back in 1989, German researchers had simulated mining-related disturbances in this manganese nodule field, 4000 meters under the surface of the ocean, by plowing a 3.5 km wide area of seabed with a plow-harrow. "Even 26 years after the disturbance, the plow tracks are still there", Stratmann described the site. Previous studies had shown that microbial abundance and density had undergone lasting changes in this area. "Now we wanted to find out what that meant for carbon cycling and the food web of this deep ocean habitat."
Sampling in the DISCOL area. Some larger animals recover faster than microbes. However, especially organisms living attached to manganese nodules, such as this stalked sponge, might be very vulnerable. Credit: ROV-Team/GEOMAR
"We looked at all different ecosystem components and on all levels, trying to find out how they work together as a team", de Jonge explained who carried out the project as part of her Master's Thesis at the NIOZ Royal Netherlands Institute for Sea Research and the University of Groningen, The Netherlands. The scientists quantified carbon fluxes between living and non-living compartments of the ecosystem and summed them up as a measure of the "ecological size" of the system.
They found significant long-term effects of the 1989 mining simulation experiment. The total throughput of carbon in the ecosystem was significantly reduced. "Especially the microbial part of the food web was heavily affected, much more than we expected", said Stratmann. "Microbes are known for their fast growth rates, so you'd expect them to recover quickly. However, we found that carbon cycling in the so-called microbial loop was reduced by more than one third."
The impact of the simulated mining activity on higher organisms was more variable. "Some animals seemed to do fine, others were still recovering from the disturbance. The diversity of the system was thus reduced", said de Jonge. "Overall, carbon flow in this part of the food web was similar to or even higher than in unaffected areas."
Tanja Stratmann (left) and Danielle de Jonge (right) are shared first authors of the study now published in Progress in Oceanography. Credit: Sara Billerbeck (left) / Danielle de Jonge (right)
A mined seafloor might be more vulnerable to climate change
The simulated mining resulted in a shift in food sources for animals. Usually, small fauna feed on detritus and bacteria in the seafloor. However, in the disturbed areas, where bacterial densities were reduced, the fauna ate more detritus. The possible consequences of this will be part of de Jonge's Ph.D. Thesis, which she just started. "Future climate scenarios predict a decrease in the amount and quality of detritus reaching the seafloor. Thus this shift in diet will be especially interesting to investigate in view of climate change," she said, looking forward to the upcoming work.
"You also have to consider that the disturbance caused by real deep-seabed mining will be much heavier than the one we're looking at here", she added. "Depending on the technology, it will probably remove the uppermost 15 centimeters of the sediment over a much larger area, thus multiplying the effect and substantially increasing recovery times."
More info
Polymetallic nodules and crusts cover many thousands of square kilometers of the world's deep-sea floor. They contain mainly manganese and iron, but also the valuable metals nickel, cobalt and copper as well as some of the high-tech metals of the rare earths. Since these resources could become scarce on land in the future—for example, due to future needs for batteries, electromobility and digital technologies—marine deposits are economically very interesting. To date, there is no market-ready technology for mining. However, it is already clear that interventions in the seabed have a massive and lasting impact on the affected areas. Studies have shown that many sessile inhabitants of the surface of the seafloor depend on the nodules as a substrate, and are still absent decades after a disturbance in the ecosystem. Also, effects on animals living in the seabed have been proven.
Explore further
Simulated deep-sea mining affects ecosystem functions at the seafloor
More information: Daniëlle S.W. de Jonge et al, Abyssal food-web model indicates faunal carbon flow recovery and impaired microbial loop 26 years after a sediment disturbance experiment, Progress in Oceanography (2020). DOI: 10.1016/j.pocean.2020.102446
Provided by Max Planck Society
Citation: Deep-seabed mining lastingly disrupts the seafloor food web (2020, October 8) retrieved 2 December 2021 from https://phys.org/news/2020-10-deep-seabed-lastingly-disrupts-seafloor-food.html
|
Components of a Compact Case
The CASE Journal
Compact Cases are intended to be no more than 1,000 words in length (about two single-spaced pages). Keep this in mind as you write by watching the word count as you write. In Microsoft Word, the word count can be viewed in the bottom left corner of the Word window.
1. Hook: Cases begin with a short description that is intended to “grab” or “hook” the reader and generate interest in reading further. It should not be a synopsis but should stimulate the reader’s curiosity. The hook has two other important functions—providing enough information about the company that the reader understands what the case is about (this can be done in a single sentence or in a short phrase) and establishing the timeframe of the case (this can be done by starting a sentence with “In 2015….”).
2. Industry: This is a short description of the industry that includes some indication of the size of the industry, identification of the primary factors for producing profit within the industry, description of important industry structural factors—barriers to entry, competition, etc., and any other information that is deemed necessary to understanding the context of the business. Compact Cases are easier to write if the industry is less complex or well-known by the reader.
3. Company Story: This is a short description of the organizational history of the company (not too deep) and the most critical aspects of the business at this time. It provides background information unique to the firm that provides context for understanding the issues that are addressed in the case. Please note that sometimes authors choose to put this section before the industry section.
4. Manager(s): If the case focuses on a specific decision that must be made, this section provides a description of the decision-maker(s). If possible, use quotes from the managers to give the reader a better feel for his/her personality. Quotes must have appropriate citations of sources—you can use quotes from periodicals and newspaper interviews or from videos or audio speeches, etc.
5. Problem/Case Focus: Using a storytelling format, describe the problem/issue that is the focus of the case. This should be done from the manager or firm’s perspective. Try to describe the problem/issue without injecting your personal feelings or biases. Here we are interested in a factual account of the elements of the problem/issue. It is sometimes beneficial to show different perspectives of the problem/issue—for example, perhaps the manager sees things one way, but the customers have a completely different take on the issue. Quotes from the manager and Facebook posts from customers can illustrate this difference in opinions.
Take care not to write the case so that everything leads to a single clear solution. Cases are intended to prompt discussion, and there will be little to discuss if the correct course of action is abundantly clear by the end of the case!
6. Closing Hook: This ends the case and reminds the reader of what the case was supposed to be about. It sometimes returns to the opening scenario and restates the problems/issues. While it is common to end with a few questions, these should be written as internal questions that the manager or decision maker is considering rather than assignment questions for students.
7. Exhibits: In case writing, we call anything (table, chart, diagrams, photo, etc.) that is included in the case an Exhibit (easier than deciding which to label as Tables, which to call Figures, etc.). These are then labeled consecutively as Exhibit 1, Exhibit 2….
Information contained in the Exhibits does not “count” as part of the word limit for Compact Cases. Think creatively to develop ways to include important information as an exhibit. Exhibits should be included only if they contain essential information for the reader. Appropriate exhibits for Compact Cases include:
a. Graphs— Especially useful to show important trends—rather than tell the reader that the company had experienced phenomenal growth, it is better to show it with a graph. Make sure to label the graph axes and to give the graph a title. Pie graphs showing market share, product mix, etc. would also be appropriate.
b. Financial statements— Most strategy cases provide a balance sheet and income statement (3-5 years for each). In a Compact Case, we don't usually want to offer full statements but may provide selected highlights from these statements. Sometimes it is useful to provide competitive comparisons like the McDonald's vs. Chipotle exhibit from the Chipotle case example.
c. Photos—Photos should only be used to illustrate something that is difficult for the reader to appreciate from text descriptions. Examples might include photos of the product if it is not well known, photos of store displays if this is an issue in the case. We do not need pictures of the managers unless the race/gender/nationality of the manager is an essential aspect of the case issue or problem. Sources must be cited for all photos included.
d. Maps—Maps are often included if the case focus involves geographic considerations. For example, a supply chain case might provide a map so that the reader can readily appreciate some of the challenges of getting products from the port to major distribution centers. Sources must be cited for the maps used.
e. Diagrams/Charts—Sometimes it is useful to provide an organization chart to show the relationship of different managers to one another. Likewise, charts can be used to show steps in the manufacturing process, etc. It is assumed that if no citation is provided for a diagram/chart that it was created by the author(s) of the case.
Case Writing Conventions
1. PAST TENSE—Cases are always written in the past tense even though it may seem awkward. This is because the events you are writing about have already happened (even if it was just yesterday). The only exception is when direct quotations are used. If the manager says “I am worried about what might happen,” do not change “am” to “was.” If you are writing a descriptive statement (not a quote) write “Jeff Smith was worried about what might happen.”
2. Personal Pronouns When Referencing Firms—DO NOT use “they,” “their” or “them” when referencing companies. Use “it,” “the firm,” “the company,” the name of the company or its stock ticker. Often you will find that you can just delete the “they” without any problems.
3. NO CONTRACTIONS/SLANG—Unless it is in a direct quote, do not use contractions (didn’t instead of did not) or slang in the case narrative.
4. NO ANALYSIS—The case is supposed to tell the story about the problem/issue. Do not provide analysis (like SWOT, Porter’s Five Forces or PESTEL) in the case. Describe the issues that would allow the reader to do a SWOT etc.—do not provide an Exhibit that is a SWOT table. This type of analysis belongs in the Instructor’s Manual.
5. Appropriate Headers—Use bolded headers to help readers quickly locate specific information (note—we generally do not use any header for the opening hook nor do we use the header "Closing Hook" for the last section of the case). Each page of the case should have at least one or two headers to indicate the various parts of the narrative.
|
Readers ask: Where Is Maranatha In The New Testament?
What does Anathema Maranatha means in the Bible?
an expression commonly considered as a highly intensified form of anathema. Maran atha is now considered as a separate sentence, meaning, ” Our Lord cometh.” – 1 Cor. xvi.
What is the Aramaic word for Lord?
“Marya” (מריא) is the Aramaic title meaning “the Master/Lord,” which is the functional equivalent of the Hebrew title “haAdon” (הָאָד֣וֹן) meaning “the Master/Lord.” Note that the Hebrew Scriptures sometimes use the truncated/shortened version “Yah” (יָ֔הּ) for the full name of the Almighty Creator, which is (YHVH י-ה-
Where is Jesus called God in the New Testament?
What is a real name of Jesus?
Is Shalom in the Bible?
Biblically, shalom is seen in reference to the well-being of others (Genesis 43:27, Exodus 4:18), to treaties (I Kings 5:12), and in prayer for the wellbeing of cities or nations (Psalm 122:6, Jeremiah 29:7).
Is Maranatha in the Bible?
Maranatha (Aramaic: מרנאתא; Koinē Greek: Μαρανα θα, romanized: marana-tha, lit. ‘; Latin: Maran-Atha) is an Aramaic phrase. It occurs once in the New Testament (1 Corinthians 16:22). It also appears in Didache 10:14, which is part of the Apostolic Fathers’ collection.
Who started Maranatha?
Dr. Cedarholm started Maranatha’s Graduate School of Theology back in 1970, and three years later he founded the Maranatha Baptist Academy. In the fall of 1983, Dr. Cedarholm turned over the presidency to present leader, Dr.
What denomination is Every Nation Church?
Doctrine. Victory, as a member of Every Nation, adheres to the statement of faith of the World Evangelical Alliance, of which Every Nation is a member.
Who is Elohim?
What is God called in Hebrew?
What was the last name of Jesus?
Who is the son of Jesus?
Jacobovici and Pellegrino argue that Aramaic inscriptions reading ” Judah, son of Jesus”, “Jesus, son of Joseph”, and “Mariamne”, a name they associate with Mary Magdalene, together preserve the record of a family group consisting of Jesus, his wife Mary Magdalene and son Judah.
Who is the father of Jesus?
What religion believes in God but not Jesus?
|
One-Child Policy
What Was China's One-Child Policy?
The one-child policy was a rule implemented by the Chinese government mandating that the vast majority of couples in the country could only have one child. This was intended to alleviate the social, economic, and environmental problems associated with the country's rapidly growing population. The rule was introduced in 1979 and phased out in 2015.
Key Takeaways
• The one-child policy was a Chinese government policy to control population growth. According to estimates, it prevented between 200 to 400 million births in the country.
• It was introduced in 1979 and discontinued in 2015, and enforced through a mix of incentives and sanctions.
• The one-child policy has had three important consequences for China's demographics: it reduced the fertility rate considerably, it skewed China's gender ratio because people preferred to abort or abandon their female babies, and it contributed to a labor shortage, with more seniors relying on fewer children to take care of them.
Understanding China's One-Child Policy
The one-child policy was introduced in 1979 in response to explosive population growth. China has a long history of encouraging birth control and family planning. By the 1950s, population growth started to outpace the food supply, and the government started promoting birth control. Following Mao Zedong’s Great Leap Forward in 1958, a plan to rapidly modernize China’s economy, a catastrophic famine ensued, which resulted in the deaths of tens of millions of Chinese.
However, by the late 70s, China's population was quickly approaching the 1 billion mark, and the Chinese government was forced to give serious consideration to curbing the population growth rate. This effort began in 1979 with mixed results but was implemented more seriously and uniformly in 1980, as the government standardized the practice nationwide.
There were, however, certain exceptions, for ethnic minorities, for those whose firstborn was handicapped, and for rural families in which the firstborn was not a boy. The policy was most effective in urban areas, where it was generally well-received by nuclear families, more willing to comply with the policy; the policy was resisted to some extent in agrarian communities in China.
Initially, the one-child policy was meant to be a temporary measure and is estimated to have prevented up to 400 million births since it was instituted. Ultimately, China ended its one-child policy realizing that too many Chinese were heading into retirement, and the nation's population had too few young people entering the labor force to provide for the older population's retirement, healthcare, and continued economic growth.
The government-mandated policy was formally ended with little fanfare on Oct. 29, 2015, after its rules had been slowly relaxed to allow more couples fitting certain criteria to have a second child. Now, all couples are allowed to have two children.
There were various methods of enforcement, both through incentives and sanctions. For those who complied there were financial incentives, as well as preferential employment opportunities. For those who violated the policy, there were sanctions, economic and otherwise. At times, the government employed more draconian measures, including forced abortions and sterilizations.
The one-child policy was officially discontinued in 2015 and the government attempted to replace it with a two-child policy. The efficacy of the policy itself, though, has been challenged, as population growth generally tapers off naturally as societies get wealthier. In China's case, as the birth rate declined, the death rate declined, too, and life expectancy increased.
One-Child Policy Implications
The one-child policy had serious implications for China's demographic and economic future. In 2017, China's fertility rate was 1.6, among the lowest in the world.
China now has a considerable gender skew—there are roughly 3-4% more males than females in the country. With the implementation of the one-child policy and the preference for male children, China saw a rise in female fetus abortions, increases in the number of baby girls left in orphanages, and even increases in infanticide of baby girls. There were 33 million more men than women in China, with 115 boys born for every 100 girls.
This will have an impact on marriage in the country, and a number of factors surrounding marriage, for years to come. Lower numbers of females also mean that there were fewer women of child-bearing age in China.
The drop in birth rates meant fewer children, which occurred as death rates dropped and longevity rates rose. It is estimated that a third of China's population will be over the age of 60 by 2050. That means more elderly people relying on their children to support them, and fewer children to do so. So, China is facing a labor shortage and will have trouble supporting this aging population through its state services.
And finally, the one-child policy has led to the proliferation of undocumented, non-first-born children. Their status as undocumented makes it impossible to leave China legally, as they cannot register for a passport. They have no access to public education. Oftentimes, their parents were fined or removed from their jobs.
One-Child Policy FAQs
Does China Still Have the One-Child Policy?
No. China replaced it with a two-child policy after the one-child policy ended in 2015; restrictions had already been gradually loosened over time.
What Caused China’s One-Child Policy?
China's one-child policy was implemented to curb overpopulation that strained the country's food supply and natural and economic resources following its industrialization in the 1950s.
What Are the Effects of China's One-Child Policy?
Gender imbalance, an aging population, and a shrinking workforce are all effects of China's 1979 policy. To this day, China has the most skewed sex ratio at birth in the world, due to a cultural preference for male offspring.
Who Ended the One-Child Policy?
The Chinese government, led by the Chinese Communist Party's Xi Jinping, ended the controversial one-child policy in 2015.
What Happened If You Broke the One-Child Policy?
Violators of China's one-child policy were fined, forced to have abortions or sterilizations, and lost their jobs.
Article Sources
1. U.S. National Library of Medicine, National Institutes of Health. "China's one child policy." Accessed July 7, 2021.
2. Britannica. "Great Leap Forward." Accessed July 7, 2021.
3. Congressional-Executive Commission on China. "One Year Later, Initial Impact of China’s Population Planning Policy Adjustment Smaller Than Expected." Accessed July 7, 2021.
4. Lancet. "The effects of China’s universal two-child policy." Accessed July 7, 2021.
5. The China Journal. "Challenging Myths About China’s One-Child Policy." Accessed July 7, 2021.
6. Journal of Biosocial Science. "Changes in sex ratio at birth in China: a decomposition by birth order." Accessed July 2021.
7. Statista. "China: gender ratio by age group." Accessed July 7, 2021.
8. Handbook of Families in Chinese Societies. "The One Child Policy and Its Impact on Chinese Families." Accessed July 7, 2021.
|
Understanding Commodities
Commodities are basic materials that are used in commerce, which can be interchangeable with a product of the same type. Examples include grains, gold, meat, oil and natural gas.
Commodities are typically used as inputs in the production process of other goods or products. Precious metals, such as gold, are used as store-of-value investments, helping investors to avoid inflation.
The quality of commodities can differ slightly, though they have to meet a certain standard to be traded on an exchange. Investors can buy and sell commodities directly or by using derivatives.
Technological advances have led to the development of new commodities. It is now possible to trade things like internet bandwidth or smart phone minutes across commodity markets.
|
Spaving: What Is It and How to Avoid It?
Spaving might not be a household term, but it's a concept almost everyone can relate to—and something you've likely done before. It refers to when people spend more money in an attempt to save money, such as when you add another $20 worth of filler items into your cart to avoid a $10 shipping charge.
Is Spaving Bad?
Spending more money now to save money, in the long run, can be a good idea. Perhaps you're able to lock in savings by purchasing an annual subscription for a service rather than paying month-by-month. Or, you invest in high-quality products that last decades rather than replacing cheaper, lower-quality products every few years.
However, spaving usually has a negative connotation because it refers to impulse purchases rather than thought-out decisions. You might feel like getting 90% off a $100 item is saving $90. But if you weren't going to buy that product before, you're spending $10 more than planned. In the end, you're $10 poorer, even if you feel good about getting a deal.
Look Out for Marketing Tricks That Encourage Spaving
For decades, researchers have explored the field of behavioral finance to uncover how our biases can influence financial decisions. In turn, companies use these insights to inform their marketing material and try to get shoppers to spend more money.
If you know what to look for, you can spot the ways that companies encourage us to spave:
• Setting a high anchor price: Anchoring is a fairly well-known concept where your mind uses the first piece of information it receives as an anchor for making other decisions. When you're shopping, a product's original price acts as an anchor that can signal value and relative savings.
Consider a shirt that was originally $120 and is now on sale for $50. The $120 price tag tells you (even if only subconsciously) that it's a luxury product, and the sale price means it's more than half off. Anchoring also plays into relative costs, such as when someone buys a car and is offered expensive upgrades that seem inexpensive compared to the car's total price.
• Pushing you toward the "middle" product: If a company wants you to purchase product Y for $125, it may show you how it compares to product X ($75) and product Z ($300). But the high-priced product Z isn't intended to be a realistic alternative. It's there to make you feel like product Y is the practical, middle-ground option. You may feel like you're saving money by avoiding Z but wind up spending more than you would if you only compared X and Y.
• Wording sales in different ways: In terms of cost, there's no difference between getting two products for 50% off and a "buy one, get one free" sale. But there can be a very real difference in how our brains react to getting something for free. Keep this in mind, particularly when you might wind up buying a perishable good that will go bad before you get a chance to use it.
• Highlighting scarcity: Companies can use scarcity in different ways to make you feel compelled to act quickly and make a purchase. For example, you might see that an item is on sale, but there are only 15 left. Alternatively, you may be offered a limited-time coupon or see a large clock counting down until a sale ends. In either case, the message is the same—better buy now before it's too late.
• Offering social proof: People tend to use others' behavior and actions as an indicator of what to do and buy. Social proof can take different forms, such as hiring an influencer to promote a product, showing customers' reviews, or displaying how many people have recently viewed or bought a product. Each of these can make it easier for someone to justify a purchase.
Often, these tactics aren't used in isolation. For example, you might be presented with three similar products and their prices (pushing you to choose the middle one and anchoring), each has hundreds of positive reviews (social proof), and there's a limited-time sale (scarcity).
With all this in mind—and other strategies as well—it's easy to see how companies can urge people to spend money while simultaneously feeling like they're saving money.
Two Ways Our Brains Can Confuse Saving and Spaving
The behavioral finance-backed marketing tactics rely on human biases and, as the name implies, behavior. But in addition to marketing, there are a couple of ways that people may trick themselves into spending more than they planned:
• We compare costs to our current financial situation: The first relates to anchoring, but our current financial situation is the anchor rather than the price of a product.
For example, if it's early in the month, your budget may be filled, and you could be more susceptible to making impulse purchases. But as the month goes on, each dollar represents a larger portion of your remaining budget, and you become more conscious of how much you're spending.
One way people use this to their advantage is to shop with a specific amount of cash on hand. As a result, they'll naturally compare prices to the cash in their wallet rather than their monthly budget, how much they have in a checking account, or their credit card's credit limit.
• We don't fully consider opportunity costs: The second is how we often narrowly frame our options.
Perhaps it's the end of the month, and you have $50 left in your grocery budget and everything in your cart only adds up to $40. Then you see a favorite snack is on sale—two for $10. You've mentally allocated the money for food, and it may feel like the decision is to buy it now, while there's a sale, or buy it later at full price.
You may be saving a few dollars compared to buying the snack later, but you're also spaving $10. The narrow framing means you fail to consider all the potential ways to use the money to buy something else, save, or invest.
It may not seem consequential when it's a snack and a few dollars, but the same principle comes into play when people make more substantial purchases, such as cars and homes.
Spave With a Purpose
Even with an understanding of our biases, overcoming these tendencies and desires can be difficult. It's worth noting that spending a little more than you planned because there's a sale or limited-time opportunity could be a good choice. However, you don't want to fool yourself into thinking you're saving money when you're not.
Louis DeNicola
Louis DeNicola is a finance writer based in Oakland, California. He specializes in consumer credit, personal finance, and small business finance, and loves helping people find ways to save money. In addition to FICO, Louis works with a variety of financial services firms, credit bureaus, and educational websites, including LendingTree, Credit Karma, and Experian.
|
The basic principles of cross-connection control are very simple, yet there are thousands of variables that enter the formula in designing and engineering backflow prevention into any potable water system.
Issue: 1/05
Editor's Note: "Back to Basics"
The hydraulic conditions of backpressure and backsiphonage can only cause a problem if there is a passageway from the unwanted material and the drinking water. This passageway is called a cross-connection. There are two types of cross-connections that can be created: either an actual (direct) or potential (indirect) connection. An example of an actual connection would be the feed line from the potable water supply connected to the boiler feed. An example of a potential connection would be a janitorial sink faucet with a hose thread outlet. This has the potential of connecting an open-end hose into the sink of soapy water or dangerous chemicals.
Once we know the hydraulic condition we are trying to prevent and the type of connection involved, we must evaluate the degree of hazard of the unwanted material we are connected to. There are two degrees of hazard to evaluate: low hazard (pollutant) or high hazard (contaminant). A low hazard would be an objectionable substance that could affect the aesthetic qualities of the potable water. The black stagnant water in a wet-charged (nonchemical) fire system would be an example of a low hazard substance.
A high hazard (contamination) substance is one that can present a health hazard. This high hazard substance can take many forms, such as sewage, toxic, biological or chemical. It breaks down to this: will the substance cause a health problem if a backflow condition occurs and the substance is transferred to the potable water through a cross-connection? If yes, it is a high hazard.
Now that we have identified the problem, we must look at how we can prevent this unwanted condition from affecting our potable water. This can be achieved by the proper application of various types of backflow preventers. I contend there are five basic means of backflow protection.
I know, I know, I can hear you screaming. You're right, there are many methods, devices and assemblies to use for the purpose of backflow prevention, but if you break them down, they will all fall under one of the five basic means of prevention. These five basic means of protection are: air gap, dual check valve, vacuum breaker, double check valve assembly and reduced pressure principle backflow prevention assembly. For those that will insist, there is the barometric loop. The barometric loop is a continuous section of the supply line that rises a minimum of 35' at sea level, above the highest point of the water used downstream, and returns back to the original level. The barometric loop works on the principle that a vacuum can only draw a head of water so far up a water column, and if the column is tall enough, a backsiphonage cannot draw the unwanted material back into the potable water.
Air gap is the highest level of protection, as it creates a physical separation from the potable water and the unwanted substance. An air gap can protect against both high and low hazards, as well as backsiphonage and backpressure. The air gap is an unobstructed vertical separation between the discharge end of a potable outlet and the flood level rim of a non-pressurized receiving receptacle. An air gap separation must be a minimum of two times the diameter of the outlet but never less than 1". If the air gap is installed next to an adjacent wall, the minimum separation is three times the diameter, and if adjacent to two adjoining walls, the minimum separation is four times the diameter.
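To make the sizing rule concrete, here is a small sketch that computes the minimum separation. Applying the 1" floor to every case is my reading of the rule rather than a statement of any particular plumbing code, so always defer to the code that governs your jurisdiction:

using System;

static class AirGap
{
    // Minimum vertical air-gap separation in inches, per the rule of thumb above:
    // 2x the effective outlet diameter, 3x when next to one adjacent wall,
    // 4x when next to two adjoining walls, and never less than 1 inch.
    public static double MinimumSeparationInches(double outletDiameterInches, int adjacentWalls)
    {
        double multiplier = adjacentWalls <= 0 ? 2.0
                          : adjacentWalls == 1 ? 3.0
                          : 4.0;

        return Math.Max(1.0, multiplier * outletDiameterInches);
    }
}

// Example: a 3/4" outlet with no nearby walls needs at least
// AirGap.MinimumSeparationInches(0.75, 0) = 1.5 inches of air gap.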
A dual check valve is a device with two independent operating spring-loaded checks enclosed in the same body. A vented dual check is a variety of a dual check that has an atmospheric vent separating the checks. They are used for low hazard only, and protect against both backsiphonage and backpressure.
Vacuum breakers come in all shapes and sizes. There are pipe atmospheric, pressure, hose bibb, laboratory, spill-resistant and more. You will need to study the exact application requirements and installation criteria for each type. All have these common traits: they protect against low and high hazards, and are used to protect against backsiphonage only. Some vacuum breakers are assemblies and are testable, such as the pressure vacuum breaker.
The double check valve assembly is an assembly consisting of two independently operating, internally loaded check valves with tightly closing shut-off valves on the inlet and outlet, as well as test cocks properly located for testing each of the check valves. This assembly protects against low hazards only, and against both backpressure and backsiphonage.
A reduced pressure principle backflow prevention assembly offers the highest level of mechanical backflow prevention. The assembly consists of two internally loaded, independent check valves, with a hydraulically operated, independent pressure differential relief valve located between the two check valves. The assembly has properly located test cocks to facilitate testing of the check and relief valves. Resilient-seated shut-off valves are on the inlet and outlet of the assembly. The reduced pressure assembly protects against all levels of hazards, and against both backpressure and backsiphonage.
It is at this point that effective backflow prevention design and engineering becomes a bit more of a challenge. To implement the items we just discussed, we need to be aware of some local restrictions on use and application of backflow prevention equipment. These local restrictions can come from federal, state and local rules and regulations covering plumbing, fire and health codes. Product standards, approvals and listings may restrict your choices. Other jobsite environmental conditions and hydraulic concerns may also need to be analyzed, such as the effects of chemical and metrological reactions and biological environments, stagnant water and thermal expansion, just to mention a few.
To help in the selection and designing of backflow protection, ask yourself these 12 questions:
1. Is the application a low or high hazard?
2. Is the application subject to backpressure and/or backsiphonage?
3. Is there a danger of water damage?
4. Is the application subject to freezing?
5. Does the application require continuous service?
6. Will there be control valves downstream of the device?
7. Is there electrical equipment located near the installation?
8. Is there sufficient drainage for the application?
9. What will the pressure loss be through the device?
10. Is the device to be used at the point of use?
11. Is there acceptable room for maintenance and testing?
12. Can a remote location be utilized?
These are the fundamentals of good design and engineering criteria and are based on my experience in the design, engineering and providing of backflow protection in potable water systems. There is no "Backflow 101"
|
How Many Syllables are in Brecht | Divide Brecht into Syllables
How many syllables are in brecht? 1 syllable
Divide brecht into syllables: brecht
Synonyms and Words Related to Brecht
bertolt brecht (3 Syllables)
What do you think of our answer to how many syllables are in brecht? Are the syllable count, pronunciation, words that rhyme, and syllable divisions for brecht correct? There are numerous syllabic anomalies found within the U.S. English language. Can brecht be pronounced differently? Did we divide the syllables correctly? Do regional variations in the pronunciation of brecht effect the syllable count? Has language changed? Provide your comments or thoughts on the syllable count for brecht below.
A comprehensive resource for finding syllables in brecht: how many syllables are in brecht, words that rhyme with brecht, how to divide brecht into syllables, and how to pronounce brecht in US and British English.
|
Conflict & Harmony // Self & Community (jc)
[Image: graffiti garages]
What is more important: self or community? (English Language Arts)
Stage 1 – Desired Results:
Established Goals:
SL.9-10.5: Make strategic use of digital media (e.g. textual, graphical, audio, visual, and interactive elements) in presentations to enhance understandings of findings, reasoning, and evidence and to add interest.
Anticipated Language Demands:
Vocabulary: various words from Romeo & Juliet, character, choice, agency.
Function: summarize, interpret, synthesize, justify.
Discourse: Plays, novels, visual symbols in neighborhood
Students will understand that…
• People/characters are motivated by desires.
• Our identity affects our community and vice-versa.
• People/characters have agency.
• There can be conflict between the self and the community.
• Different cultures value ideas of self and community in different ways.
• Certain community members can control/attempt to control individuals’ expression of identity.
Unit/Essential Questions:
• What is more important: self or community?
• How does the community affect our self-identity?
• How do individuals affect our community?
• How is self-expression limited by those in power in our neighborhood?
• How do multiple cultures and backgrounds add to the complexity of our neighborhood?
Learning Targets:
• Students will be able to identify key characters in Romeo & Juliet.
• Students will be able to identify a key theme in all our readings: self and the community.
• Students will be able to analyze the motivations of the main characters in multiple works.
• Students will be able to compare the characters and their motivations across two works of literature.
• Students will be able to analyze the theme “self and community.”
• Students will be able to choose, and explain their reasoning for, a visual and written expression of their identity and their community.
• Students will be able to create and present a movie that synthesizes our central text, themselves, and the community.
Stage 2 – Assessments:
Formative assessments will include classroom discussion, notes, and journal entries.
Main assessments will be a comparative essay, scripts and storyboards for a video, and the video itself. Each project will have formative assessment check-points woven in, although the final project will be summative.
Word Processing:
1. Directed/Guided Reading; Drafting; Revising; Editing; Writing nonfiction
2. Students could use word processing to create reading guides for the text. By both coming up with questions and answering them, students are anticipating what will come next and using a tool to guide their reading and understanding of text. They could also use them to draft and write their essays and scripts.
3. I would need to investigate how to create good reading guides so I could give clear and simple directions. I would need to create a rubric or list of what goes into a good reading guide so students can monitor their progress.
Wikis:
1. Reading Discussion; Writing other forms of text
2. Students could use wikis to add to their knowledge and insights into the classroom reading and the literature circle readings. They could also use a separate wiki to store and share their growing knowledge of the community.
3. I need to learn how to set up a wiki. I would need to survey the knowledge of my students on using them and provide supports or teach students how to use them.
Storyboard Mapping:
1. Sequencing/Outline/Storyboarding
2. Students could use this technology to make storyboards for the books they are reading. They could also use them to outline their movie.
3. I have no idea how to use storyboard mapping so I would need to research that.
Concept Mapping Software
1. Higher-order Webbing/Clustering
2. Students could use the software to organize their ideas for their essays.
3. I have no idea how to use or find concept mapping software so I would have a lot to learn.
Digital & Audio Video Recording
1. Performing/Performances
2. Students would record aspects of their community and their audio performances of their scripts for the final product/assessment.
3. The biggest issue with this one is finding/getting the right equipment and then making sure I am familiar with it. I would also survey the students to determine how much I would need to teach them about using it.
Video Editing Software
1. Performing/Performance
2. Students would use the video editing software to create a movie from their audio and visual recordings. This would be the final synthesis of their knowledge on our main text, their community, and themselves.
3. Again, the difficulties would be finding the resources to do this. I would want to survey the students to determine what I would need to teach. I might want to put together a tutorial or make a sample video.
|
Causes of Breast Cancer
Causes and risk factors: The causes of breast cancer aren't fully understood, making it difficult to say why one woman may develop breast cancer and another may not. However, there are risk factors that are known to affect your likelihood of developing it…
Symptoms of Breast Cancer
The first symptom of breast cancer most women notice is a lump or an area of thickened tissue in their breast. Most breast lumps (90%) aren't cancerous, but it's always best to have them checked by your doctor. You should see your doctor if you…
Breast cancer screening
About one in eight women in the UK are diagnosed with breast cancer during their lifetime. There's a good chance of recovery if it's detected in its early stages. Breast screening aims to find breast cancers early. It uses an X-ray test called a mammogram…
Causes of Breast Lumps
Most breast lumps are caused by benign (non-cancerous) conditions, although occasionally a breast lump can be a symptom of breast cancer. It's important to see your GP as soon as possible if you notice a lump in your breast so they can refer you for…
Diagnosing A Breast Lump
It is important to be aware of how your breasts usually look and feel so you can quickly pick up on any changes that may occur. See your doctor if you notice a lump in your breast or any change in its appearance, feel or shape. Your doctor may ask a…
Breast Lumps
Breast lumps are common and have a number of different causes. Although most lumps aren't breast cancer, any unusual changes to the breasts should be checked by a doctor as soon as possible. If your doctor finds a lump on examination, they will refer…
Treating A Breast Lump
How a breast lump is treated will largely depend on the underlying cause and any other symptoms you have. Benign breast lumps often do not need to be treated unless they are particularly large or painful, or are getting bigger. Some types of benign…
Diagnosing Haemorrhoids
Your GP can diagnose haemorrhoids (piles) by examining your back passage to check for swollen blood vessels. Some people with haemorrhoids are reluctant to see their GP. However, there's no need to be embarrassed – all GPs are used to diagnosing…
Haemorrhoids
Haemorrhoids, also known as piles, are swellings containing enlarged blood vessels that are found inside or around the bottom (the rectum and anus). In many cases, haemorrhoids don't cause symptoms, and some people don't even realise they have them…
Treating Haemorrhoids
Surgery for Haemorrhoids
Surgery may be recommended if other treatments for haemorrhoids (piles) haven't worked, or if you have haemorrhoids that aren't suitable for non-surgical treatment. There are many different surgical procedures for piles. The main types of operation…
Bowel Cancer
Bowel cancer is a general term for cancer that begins in the large bowel. Depending on where the cancer starts, bowel cancer is sometimes called colon or rectal cancer. Cancer can sometimes start in the small bowel (small intestine), but small bowel…
Symptoms of Bowel Cancer
Signs and symptoms of bowel cancer: The three main symptoms of bowel cancer are blood in the stools (faeces), a change in bowel habit, such as more frequent, looser stools, and abdominal (tummy) pain. However, these symptoms are very common. Blood…
Causes of bowel cancer
Cancer occurs when the cells in a certain area of your body divide and multiply too rapidly. This produces a lump of tissue known as a tumour. Most cases of bowel cancer first develop inside clumps of cells on the inner lining of the bowel. These clumps…
Diagnosing Bowel Cancer
When you first see your doctor, they will ask about your symptoms and whether you have a family history of bowel cancer. They will then usually carry out a simple examination of your abdomen (tummy) and your bottom, known as a digital rectal examination…
Treating Bowel Cancer
Surgery is usually the main treatment for bowel cancer, and may be combined with chemotherapy, radiotherapy or biological treatments, depending on your particular case. The treatments recommended for you will depend on which part of your bowel is affected…
Preventing Bowel Cancer
There are some things that increase your risk of bowel cancer that you can't change, such as your family history or your age. However, there are several ways you can lower your chances of developing the condition. Diet: Research suggests making…
Living with Bowel Cancer
Bowel cancer can affect your daily life in different ways, depending on what stage it is at and what treatment you are having. How people cope with their diagnosis and treatment varies from person to person. There are several forms of support available…
Bowel Cancer Screening
Bowel cancer is the fourth most common cancer in the UK. If it's detected at an early stage, before symptoms appear, it's easier to treat and there's a better chance of surviving it. To detect cases of bowel cancer sooner, the NHS offers two types…
Bone Cancer
Primary bone cancer is a rare type of cancer that begins in the bones. Around 550 new cases are diagnosed each year in the UK. This is a separate condition from secondary bone cancer, which is cancer that spreads to the bones after developing in another part of the body…
Causes of Bone Cancer
Cancer occurs when the cells in a certain area of your body divide and multiply too rapidly. This produces a lump of tissue known as a tumour. The exact reason why this happens is often not known, but certain things can increase your chance of developing…
Symptoms of Bone Cancer
Bone pain is the most common symptom of bone cancer. Some people experience other symptoms as well. Bone pain: Pain caused by bone cancer usually begins with a feeling of tenderness in the affected bone. This gradually progresses to a persistent ache…
Diagnosing Bone Cancer
If you're experiencing bone pain, your doctor will ask about your symptoms and examine the affected area, before deciding whether you need to have any further tests. They will look for any swelling or lumps, and ask if you have problems moving the…
Treating Bone Cancer
Treatment for bone cancer depends on the type of bone cancer you have, how far it has spread and your general health. The main treatments are surgery, chemotherapy and radiotherapy. Your treatment plan: Your treatment should be managed by a specialist…
Symptoms of allergies
Symptoms of an allergic reaction usually develop within a few minutes of being exposed to something you're allergic to, although occasionally they can develop gradually over a few hours. Although allergic reactions can be a nuisance and hamper…
|
Should I Add Water To My Compost Bin?
Should I add water to my compost bin? Water is a key parameter in making compost. Microorganisms responsible for breaking down organic matter in your compost pile need water for the same reason all living things do. A steady supply of water helps the organisms to thrive, thus achieving rapid composting.
How often should I add water to my compost?
You want to keep your compost moist, not soggy but not dry. It is the living organisms that break down the compost, and they will die if the pile is allowed to dry out. So water it as often as needed to keep it moist.
Is compost better wet or dry?
Most expert composters suggest a moisture content of 40% to 60%. A quick, hands-on visual check should tell you if the pile is too dry: it will lack heat and there'll be little evidence of organic material breaking down. If your compost is too wet, it's probably slimy and smells bad.
Can you add hot water to compost?
Hot water bottles add warmth, and compost duvets maintain it, helping to keep the optimum temperature for bacteria and fungi to break down organic materials.
Does compost need sun?
You can put your compost pile in the sun or in the shade, but putting it in the sun will hasten the composting process. Sun helps increase the temperature, so the bacteria and fungi work faster. If you do place your pile in full sun, just remember to keep it moist as it heats up.
Related FAQs for Should I Add Water To My Compost Bin?
How can you tell when compost is ready?
Generally compost is ready to be harvested when the finished product is a rich dark brown color, smells like earth, and crumbles in your hand. Some signs that it may not be ready include: Recognizable food content still visible. The pile is still warm.
How long until compost is usable?
Depending on the factors above your compost could take anywhere from four weeks to 12 months to fully decompose. If you're using a tumbler, you'll have ready-to-use compost in three weeks to three months.
Should egg shells be composted?
Does urine speed up composting?
Urine, too, is a great compost stimulator. Obviously, the stiff shot of nitrogen and a bit of moisture both help, and the uric acid (urea) is also very beneficial. Uric acid levels are said to be the highest in the morning, so that's the best time to rain down on the compost pile.
Should I cover my compost?
In most cases, a compost pile does not need a cover. A cover can limit airflow and water, interfering with the composting process. You should definitely cover finished compost. Otherwise, if it's exposed to the elements, the compost will break down further and lose nutrients as they leach into the surrounding soil.
How often should you turn your compost?
|
Joseph Nollekens
Photo credit: National Portrait Gallery, London
The sculptor Nollekens built his reputation on the production of portrait busts. While studying the antique and practising in Rome between 1762 and 1770, Nollekens established a network of aristocratic British patrons. After returning to Britain in 1770, he was elected a Royal Academician and quickly became London's most fashionable sculptor. In the 1770s and 1780s he produced several neoclassical marbles and developed a brisk trade in church monuments. In the last decades of the eighteenth century menswear was becoming more understated in both colour and materials. For everyday coats and breeches, fine wool began to replace silk and muted colours became the norm. A patterned waistcoat was often the new focus of attention. Striped, double-breasted waistcoats like that worn by the sculptor Joseph Nollekens in this portrait were particularly popular in the 1780s and 1790s.
National Portrait Gallery, London
Medium: oil on canvas
Dimensions: H 77.1 x W 63.5 cm
Acquisition method: Given by Henry Labouchere, 1858
Normally on display at: National Portrait Gallery, London, St Martin's Place, London, Greater London WC2H 0HE, England
|
Is the Bible metaphorical?
What is a biblical allusion?
Allusion is a device that activates and vitalizes ideas, associations, and information in the reader's mind through words and references. Its effect depends on how the reader interprets the allusion. In this article, biblical allusions and references are taken from the Holy Bible.
Did God write the Bible?
Is the Bible the Word of God?
What religion takes the Bible literally?
Protestants (including those who identify themselves as “Christian” but not Catholic or Mormon) are the most likely religious group to believe the Bible is literally true. Forty-one percent of Protestants hold this view, while a slightly larger 46% take the Bible to be the inspired word of God.
What is the true origin of the Bible?
|
East Asian cultural sphere
The East Asian cultural sphere, also known as the Sinosphere, the Sinic world, the Sinitic world, the Chinese cultural sphere or the Chinese character sphere, encompasses countries in East and Southeast Asia that were historically influenced by Chinese culture. According to academic consensus, the East Asian cultural sphere is made up of four entities: Greater China (including China, Hong Kong, Macau, and Taiwan), Japan, Korea (both North Korea and South Korea), and Vietnam. Other definitions sometimes include other countries such as Mongolia[1][2][3] and Singapore, because of limited historical Chinese influences or increasing modern-day Chinese diaspora.[4] The East Asian cultural sphere is not to be confused with Greater China or the Sinophone, which includes countries where the Chinese-speaking population is dominant.[5]
Names for the East Asian cultural sphere:
• Traditional Chinese: 東亞文化圈; Simplified Chinese: 东亚文化圈
• Vietnamese: Vùng văn hóa Đông Á, Vùng văn hóa chữ Hán, Đông Á văn hóa quyển, Hán tự văn hóa quyển
• "Chinese-character culture sphere": 漢字文化圈 / 汉字文化圈 (Chinese), 한자 문화권 (Korean), Vòng văn hóa chữ Hán (Vietnamese), 漢字文化圏 (Japanese)
[Image caption: East Asian dragons are legendary creatures in East Asian mythology and culture.]
Imperial China was a regional power and exerted influence on tributary and neighbouring states, among which were Japan, Korea, and Vietnam.[n 1] These interactions brought ideological and cultural influences rooted in Confucianism, Buddhism, and Taoism. During classical history, the four cultures shared a common imperial system under respective emperors. Chinese inventions influenced, and were in turn influenced by, innovations of the other cultures in governance, philosophy, science, and the arts.[8][9][10] Written classical Chinese became the regional lingua franca for literary exchange, and Chinese characters (Hanzi) became locally adapted in Japan as Kanji, Korea as Hanja, and Vietnam as Chữ Hán.
In late classical history, the literary importance of classical Chinese diminished as Japan, Korea, and Vietnam each developed their own means of literary writing. Japan developed the Katakana and Hiragana scripts, Korea developed Hangul, and Vietnam developed Chữ Nôm (which is now obsolete; the modern Vietnamese alphabet is based on the Latin alphabet).[11][12] Classical literature written in Chinese characters nonetheless remains an important legacy of Japanese, Korean, and Vietnamese cultures. In the 21st century, ideological and cultural influences of Confucianism and Buddhism remain visible in high culture and social doctrines.
China has been regarded as one of the centers of civilization, with the emergent cultures that arose from the migration of the original Han settlers from the Yellow River generally seen as the starting point of the East Asian world. Today, China's population is approximately 1.43 billion.[citation needed]
Japanese historian Nishijima Sadao [ja] (1919–1998), professor emeritus at the University of Tokyo, originally coined the term Tōa bunka-ken (東亜文化圏, 'East Asian Cultural Area'), conceiving of a Chinese or East-Asian cultural sphere distinct from the cultures of the west. According to Nishijima, this cultural sphere—which includes China, Japan, Korea, and Vietnam, stretching from areas between Mongolia and the Himalayas—shared the philosophy of Confucianism, the religion of Buddhism, and similar political and social structures.[13]
Sometimes used as a synonym for the East-Asian cultural sphere, the term Sinosphere derives from Sino- ('China, Chinese') and -sphere, in the sense of a sphere of influence (i.e., an area influenced by a country). (cf. Sinophone.)[citation needed]
The "CJKV" languages—Chinese, Japanese, Korean, and Vietnamese—render the English term sphere with cognate terms: 圈 quān (Chinese), 圏 ken (Japanese), 권 gwon (Korean), and quyển (Vietnamese).
Victor H. Mair discussed the origins of these "culture sphere" terms.[14] The Chinese wénhuà quān (文化圈) dates back to a 1941 translation for the German term Kulturkreis, ('culture circle, field'), which the Austrian ethnologists Fritz Graebner and Wilhelm Schmidt proposed. Japanese historian Nishijima Sadao [ja] coined the expressions Kanji bunka ken (漢字文化圏, "Chinese-character culture sphere") and Chuka bunka ken (中華文化圏, "Chinese culture sphere"), which China later re-borrowed as loanwords. Nishijima devised these Sinitic "cultural spheres" within his Theory of an East Asian World (東アジア世界論, Higashi Ajia sekai-ron).[citation needed]
Chinese–English dictionaries provide similar translations of this keyword wénhuà quān (文化圈) as "the intellectual or literary circles" (Liang Shiqiu 1975) and "literary, educational circles" (Lin Yutang 1972).[citation needed]
The Sinosphere may be taken to be synonymous with Ancient China and its descendant civilizations, as well as the "Far Eastern civilizations" (the Mainland and the Japanese ones). In the 1930s, in A Study of History, the Sinosphere, along with the Western, Islamic, Eastern Orthodox, Indic, and other civilizations, is presented as among the major "units of study."[15]
Comparisons with the West
British historian Arnold J. Toynbee listed the Far Eastern civilization as one of the main civilizations outlined in his book, A Study of History. He included Japan and Korea in his definition of "Far Eastern civilization" and proposed that they grew out of the "Sinic civilization" that originated in the Yellow River basin.[16] Toynbee compared the relationship between the Sinic and Far Eastern civilization with that of the Hellenic and Western civilizations, which had an "apparentation-affiliation."[17]
American Sinologist and historian Edwin O. Reischauer also grouped China, Japan, Korea, and Vietnam into a cultural sphere that he called the Sinic world, a group of centralized states that share a Confucian ethical philosophy. Reischauer states that this culture originated in Northern China, comparing the relationship between Northern China and East Asia to that of Greco-Roman civilization and Europe. The elites of East Asia were tied together through a common written language based on Chinese characters, much in the way that Latin had functioned in Europe.[18]
American political scientist Samuel P. Huntington considered the Sinic world as one of many civilizations in his book The Clash of Civilizations. He notes that "all scholars recognize the existence of either a single distinct Chinese civilization dating back to at least 1500 B.C. and perhaps a thousand years earlier, or of two Chinese civilizations one succeeding the other in the early centuries of the Christian epoch."[19] Huntington's Sinic civilization includes China, North Korea, South Korea, Mongolia, Vietnam and Chinese communities in Southeast Asia.[20] Of the many civilizations that Huntington discusses, the Sinic world is the only one that is based on a cultural, rather than religious, identity.[21] Huntington's theory was that in a post-Cold War world, humanity "[identifies] with cultural groups: tribes, ethnic groups, religious communities [and] at the broadest level, civilizations."[22][23] Yet, Huntington considered Japan as a distinct civilization.
[Image caption: Imperial City, Hue, Vietnam. Chinese architecture has had a major influence on the East Asian architectural styles of Vietnam, Korea, and Japan.]
The cuisine of East Asia shares many of the same ingredients and techniques. Chopsticks are used as an eating utensil in all of the core East Asian countries.[25] The use of soy sauce, which is made from fermenting soybeans, is also widespread in the region.[citation needed]
Rice is a main staple food in all of East Asia and is a major focus of food security.[26] Moreover, in East Asian countries, the word for 'cooked rice' can embody the meaning of food in general.[25]
Popular terms associated with East Asian cuisine include boba, kimchi, sushi, hot pot, tea, dim sum, and ramen, as well as phở, sashimi, and udon, among others.[27]
East-Asian literary culture is based on the use of Literary Chinese,[citation needed] which became the medium of scholarship and government across the region. Although each of these countries developed vernacular writing systems and used them for popular literature, they continued to use Chinese for all formal writing until it was swept away by rising nationalism around the end of the 19th century.[29]
Throughout East Asia, Literary Chinese was the language of administration and scholarship. Although Vietnam, Korea, and Japan each developed writing systems for their languages, these were limited to popular literature. Chinese remained the medium of formal writing until it was displaced by vernacular writing in the late 19th and early 20th centuries.[30] Though they did not use Chinese for spoken communication, each country had its tradition of reading texts aloud, the so-called Sino-Xenic pronunciations, which provide clues to the pronunciation of Middle Chinese. Chinese words with these pronunciations were also borrowed extensively into the local vernaculars, and today comprise over half their vocabularies.[31]
Books in Literary Chinese were widely distributed. By the 7th century, and possibly earlier, woodblock printing had been developed in China. At first it was used only to copy the Buddhist scriptures, but later secular works were also printed. By the 13th century, metal movable type was used by government printers in Korea, but it seems not to have been extensively used in China, Vietnam, or Japan. At the same time, manuscript reproduction remained important until the late 19th century.[32]
Japan's textual scholarship had Chinese origins, which made Japan one of the birthplaces of modern Sinology.[33]
Philosophy and religion
The Art of War, Tao Te Ching, and Analects are classic Chinese texts that have been influential in East Asian history.[citation needed]
China, Korea, Vietnam and Japan have all been influenced by Taoism. The religion developed in China from the teachings of Lao Tse and follows the search for the Tao, a concept equivalent to a path or course, representing the cosmic force that creates the universe and all things.
According to this belief, the wisdom of the Tao is the only source of the universe and marks out a natural path of life events that everyone should follow. Thus, the adherents of Taoism pursue the search for the Tao, which means "path" and represents the strength of the universe.
The most important text in Taoism, the Tao Te Ching (Book of the Way and Virtue, c. 300 BC), declares that the Tao is the "source" of the universe, considered a creative principle but not a deity. Nature manifests itself spontaneously, without a higher intention; it is up to human beings to integrate themselves, through "non-action" (wuwei) and spontaneity (ziran), into its flow and rhythms, in order to achieve happiness and a long life.
Taoism is a combination of teachings from various sources, manifesting itself as a system that can be philosophical, religious or ethical. This tradition can also be presented as a worldview and a way of life.
[Image caption: Mahayana Buddhism, particular to East Asian religion.]
The countries of China, Japan, Korea, and Vietnam share a history of Mahayana Buddhism. It spread from India via the Silk Road through north-west India and modern-day Pakistan and Xinjiang, eastward through Southeast Asia and Vietnam, then north through Guangzhou and Fujian. From China, it proliferated to Korea and Japan, especially during the Six Dynasties. It could have also re-spread from China south to Vietnam. East Asia is now home to the largest Buddhist population in the world, at around 200-400 million, with the top five countries including China, Thailand, Myanmar, Japan, and Vietnam—three of which fall within the East Asian cultural sphere.[citation needed]
Buddhist philosophy is guided by the teachings of the Buddha, which lead the individual to full happiness through meditative practices, mind control and self-analysis of their daily actions.
Buddhists believe that physical and spiritual awareness leads to enlightenment and upliftment, called nirvana.
Nirvana is the highest state of meditation. According to Buddha, it is when the individual finds peace and tranquility, stopping the oscillations of thoughts and emotions, getting rid of the suffering of the physical world.
Confucianism plays a crucial part in East Asian culture.
[Image caption: Temple of Literature, Hanoi. Confucian education and imperial examinations played a huge role in creating scholars and mandarins (bureaucrats) for East Asian dynasties.]
The countries of China, Japan, Korea, and Vietnam share a Confucian philosophical worldview.[18] Confucianism is a humanistic[34] philosophy that believes that human beings are teachable, improvable and perfectible through personal and communal endeavor especially including self-cultivation and self-creation. Confucianism focuses on the cultivation of virtue and maintenance of ethics, the most basic of which are:[35]
• rén (仁): an obligation of altruism and humaneness for other individuals;
• yì (義/义): the upholding of righteousness and the moral disposition to do good; and
• lǐ (禮/礼): a system of norms and propriety that determines how a person should properly act in everyday life.
Mid-Imperial Chinese philosophy is primarily defined by the development of Neo-Confucianism. During the Tang dynasty, Buddhism from Nepal also became a prominent philosophical and religious discipline. Neo-Confucianism has its origins in the Tang dynasty; the Confucianist scholar Han Yu is seen as a forebear of the Neo-Confucianists of the Song dynasty.[36] The Song dynasty philosopher Zhou Dunyi is seen as the first true "pioneer" of Neo-Confucianism, using Daoist metaphysics as a framework for his ethical philosophy.[37]
Elsewhere in East Asia, Japanese philosophy began to develop as indigenous Shinto beliefs fused with Buddhism, Confucianism and other schools of Chinese philosophy. Similarly, in Korean philosophy elements of Shamanism were integrated into the Neo-Confucianism imported from China. In Vietnam, Neo-Confucianism was likewise developed into Vietnam's own Tam giáo, along with Vietnamese folk religion and Mahayana Buddhism.[citation needed]
Other religions
Though not commonly identified with East Asia, the following religions have been influential in its history:[citation needed]
1. Hinduism, see Hinduism in Vietnam, Hinduism in China[citation needed]
2. Islam, see Xinjiang, Islam in China, Islam in Hong Kong, Islam in Japan, Islam in Korea, Islam in Vietnam.[citation needed]
3. Christianity, one of the most popular religions in the region after Buddhism. Significant Christian communities are also found in China, Hong Kong, Japan, Macau, South Korea, Taiwan and Vietnam.[38]
Historical linguistics
Various languages are thought to have originated in East Asia and have various degrees of influence on each other.[citation needed] These include:
1. Sino-Tibetan: Spoken mainly in China, Singapore, Myanmar, Christmas Island, Bhutan, Northeast India, Kashmir and parts of Nepal. Major Sino-Tibetan languages include the varieties of Chinese, the Tibetic languages and Burmese. They are thought to have originated around the Yellow River north of the Yangzi.[39][40]
2. Austronesian: Spoken mainly in what is today Taiwan, Brunei, East Timor, Indonesia, Singapore, the Philippines, Malaysia, the Cocos (Keeling) Islands, Christmas Island, Madagascar and most of Oceania. Major Austronesian languages include the Formosan languages, Malay, Filipino, Malagasy and Māori.[41][42]
3. Turkic: Spoken mainly in China, Russia, Turkmenistan, Kyrgyzstan, Uzbekistan, Kazakhstan, Azerbaijan, Iran, Cyprus and Turkey. Major Turkic languages include Turkish, Azerbaijani, Kazakh, Kyrgyz and Uyghur.[43][44][45]
4. Austroasiatic: Spoken mainly in Vietnam and Cambodia. Major Austroasiatic languages include Vietnamese and Khmer.[citation needed]
5. Kra-Dai: Spoken mainly in Thailand, Laos, and parts of Southern China. Major Kra-Dai languages include Zhuang, Thai, and Lao.[citation needed]
6. Mongolic: Spoken mainly in Mongolia, China and Russia. Major Mongolian languages include Oirat, Mongolian, Monguor, Dongxiang and Buryat.[citation needed]
7. Tungusic: Spoken mainly in China and Russia. Major Tungusic languages include Evenki, Manchu, and Xibe.[citation needed]
8. Koreanic: Spoken mainly in Korea. Major Korean languages include Korean and Jeju.[citation needed]
9. Japonic: Spoken mainly in Japan. Major Japonic languages include Japanese, Ryukyuan and Hachijo.[citation needed]
10. Ainu: Spoken mainly in Japan. The only surviving Ainu language is Hokkaido Ainu.[citation needed]
The core languages of the East Asian cultural sphere generally include the varieties of Chinese, Japanese, Korean, and Vietnamese. All of these languages have a well-documented history of having used Chinese characters, with Japanese, Korean, and Vietnamese each having roughly 60% of their vocabulary stemming from Chinese.[46][47][48] There is a small set of minor languages, such as Zhuang and Hmong-Mien, that are comparable to the core East Asian languages. They are often overlooked, since neither has its own country or heavily exports its culture, but Zhuang has been written in Hanzi-inspired characters called Sawndip for over 1,000 years. Hmong, though it supposedly lacked a writing system until modern history, is also suggested to have a similar percentage of Chinese loans to the core CJKV languages.[49]
While other languages, such as Thai with its numeral system and Mongolian with its historical use of Hanzi, have been influenced by the Sinosphere, the amount of Chinese vocabulary in these languages is not nearly as expansive as in the core CJKV languages, or even in Zhuang and Hmong.[citation needed]
Various hypotheses attempt to unify various subsets of the above languages, including the Sino-Austronesian, Altaic and Austric language groupings. An overview of these various language groups is discussed in Jared Diamond's Guns, Germs, and Steel, among other places.[citation needed]
Writing systems
[Image caption: Writing systems around the world]
East Asia is quite diverse in writing systems, from the Brahmic-inspired abugidas of Southeast Asia, to the logographic Hanzi of China, the syllabaries of Japan, and the various alphabets and abjads used in Korea (Hangul), Mongolia (Cyrillic), Vietnam (Latin), etc.[citation needed]
Writing systems of the Far East
Writing system: Regions
• Logograms (Hanzi and its variants): China, Japan, Korea, Malaysia, Singapore, Vietnam*, Taiwan
• Logograms (Dongba symbols): China (used by the Naxi ethnic minority in China)
• Syllabary (Kana): Japan
• Syllabary (Yi script): China (used by the Yi ethnic minority in China)
• Alphabet (Latin): Vietnam; China (used by some ethnic minorities, such as the Miao people); Taiwan (Tâi-lô Latin script for the Taiwanese Hokkien language)
• Alphabet (Hangul): Korea; China (used by the Choson ethnic minority in Northeastern China)
• Alphabet (Cyrillic): Mongolia (though there is a movement to switch back to the Mongolian script)[50]
• Alphabet (Mongolian): Mongolia*; China (Inner Mongolia)
• Alphabet (Vietnamese): Vietnam*; China (Dongxing, Guangxi), still used by the Gin people today
• Abugida (Brahmic scripts of Indian origin): China (Tibet, Xishuangbanna Dai Autonomous Prefecture)
• Abugida (Pollard script): China (used by the Hmong ethnic minority in China)
• Abjad (Uyghur Arabic alphabet): China (Xinjiang)
* Official usage historically. Currently used unofficially.
Character influences
[Image caption: Development of kana from Chinese characters]
[Map legend: Countries and regions using Chinese characters as a writing system. Green: Simplified Chinese used officially, but the traditional form is also used in publishing (Singapore, Malaysia).[51] Light green: Simplified Chinese used officially; the traditional form is uncommon in daily use (China, Kokang and Wa State of Myanmar).]
Hanzi (漢字 or 汉字) are considered a common cultural thread that unifies the languages and cultures of many East Asian nations. Historically, Japan, Korea, and Vietnam have used Chinese characters. Today, they are mainly used in China, Japan, and South Korea, albeit in different forms.[citation needed]
Mainland China, Malaysia and Singapore use simplified characters, whereas Taiwan, Hong Kong, and Macau use Traditional Chinese.
Japan still uses kanji but has also invented kana, believed to be inspired by the Brahmic scripts of southern Asia.[citation needed]
Korea used to write in hanja but has invented an alphabetic system called hangul (also inspired by Chinese and phags-pa during the Mongol Empire) that is nowadays the majority script. However, hanja is a required subject in South Korea. Most names are also written in hanja. Hanja is also studied and used in academia, newspapers, and law; areas where a lot of scholarly terms and Sino-Korean loanwords are used and necessary to distinguish between otherwise ambiguous homonyms.[citation needed]
Vietnam used to write in chữ Hán, or Classical Chinese. From about the 8th century, the Vietnamese began inventing many characters of their own, the chữ Nôm. Since French colonization, they have switched to using a modified version of the Latin alphabet called chữ Quốc ngữ. However, Chinese characters still hold a special place in the culture, as Vietnamese history and literature have been greatly influenced by them. In Vietnam (and North Korea), chữ Hán can be seen in temples, cemeteries, and monuments today, as well as serving as decorative motifs in art and design, and there are movements to restore Hán Nôm in Vietnam. (Also see History of writing in Vietnam.)[citation needed]
The Zhuang people are similar to the Vietnamese in that they used to write in Sawgun (Chinese characters) and invented many characters of their own, called Sawndip ("immature" or native characters). Sawndip is still used informally and in traditional settings, but in 1957 the People's Republic of China introduced an alphabetical script for the language, which is what it officially promotes.[52]
Economy and trade
Before European imperialism, East Asia was consistently one of the largest economies in the world, its output mostly driven by China and the Silk Road.[citation needed] During the Industrial Revolution, East Asia modernized and became an area of economic power, starting with the Meiji Restoration in the late 19th century, when Japan rapidly transformed itself into the only industrial power outside the North Atlantic area.[53] Japan's early industrial economy reached its height in World War II (1939-1945), when it expanded its empire and became a major world power.[citation needed]
The business cultures within the Sinosphere in some ways are heavily influenced by Chinese culture. Important in China is the social concept of guanxi (關係), which has influenced the societies of Korea, Vietnam and Japan as well.[citation needed] Japan often features hierarchically-organized companies, and Japanese work environments place a high value on interpersonal relationships.[54] Korean businesses, adhering to Confucian values, are structured around a patriarchal family governed by filial piety (孝順) between management and a company's employees.[55]
Post-WW2 (Tiger economies)
Following Japan's defeat, the economic collapse after the war, and US military occupation, Japan's economy recovered in the 1950s with the post-war economic miracle, in which rapid growth propelled the country to become the world's second-largest economy by the 1980s.[citation needed]
After the Korean War, and likewise under US military occupation, South Korea experienced its own postwar economic miracle, called the Miracle on the Han River, with the rise of global tech industry leaders like Samsung and LG. As of 2019, its economy is the 4th largest in Asia and the 11th largest in the world.[citation needed]
Hong Kong became one of the Four Asian Tiger economies, developing strong textile and manufacturing economies.[56] South Korea followed a similar route, developing the textile industry.[56] Following in the footsteps of Hong Kong and Korea, Taiwan and Singapore quickly industrialized through government policies. By 1997, all four of the Asian Tiger economies had joined Japan as economically developed nations.[citation needed]
As of 2019, South Korean and Japanese growth have stagnated (see also Lost Decade), and present growth in East Asia has now shifted to China and Vietnam.[57][58][59][60]
Modern era
Since the Chinese economic reform, China has become the second-largest economy in the world by nominal GDP and the largest by GDP (PPP). The Pearl River Delta is one of the top startup regions in East Asia (comparable with Beijing and Shanghai), featuring some of the world's top drone companies, such as DJI.[citation needed]
Up until the early 2010s, Vietnamese trade was heavily dependent on China, and many Chinese-Vietnamese speak both Cantonese and Vietnamese, which share many linguistic similarities. Vietnam, one of the Next Eleven countries as of 2005, is regarded as a rising economic power in Southeast Asia.[61]
East Asia also participates in numerous global economic organizations.[citation needed]
Notes
1. ^ Vietnam and Korea remained tributary states of China for much of their histories, while Japan only submitted to Chinese regional hegemony during 1404–1549.[6][7]
References
1. ^ Billé, Franck; Urbansky, Sören (2018). Yellow Perils: China Narratives in the Contemporary World. p. 173. ISBN 9780824876012.
2. ^ Christian, David (2018). A History of Russia, Central Asia and Mongolia, Volume II: Inner Eurasia from the Mongol Empire to Today, 1260–2000. p. 181. ISBN 9780631210382.
3. ^ Grimshaw-Aagaard, Mark; Walther-Hansen, Mads; Knakkergaard, Martin (2019). The Oxford Handbook of Sound and Imagination: Volume 1. p. 423. ISBN 9780190460167.
4. ^ Gold, Thomas B. (1993). "Go with Your Feelings: Hong Kong and Taiwan Popular Culture in Greater China". The China Quarterly. 136 (136): 907–925. doi:10.1017/S0305741000032380. ISSN 0305-7410. JSTOR 655596.
5. ^ Hee, Wai-Siam (2019). Remapping the Sinophone: The Cultural Production of Chinese-Language Cinema in Singapore and Malaya before and during the Cold War (1 ed.). Hong Kong University Press. ISBN 978-988-8528-03-5. JSTOR j.ctvx1hwmg.
6. ^ Kang, David C. (David Chan-oong) (2012). East Asia before the West: five centuries of trade and tribute (Paperback ed.). New York: Columbia University Press. ISBN 978-0-231-15319-5. OCLC 794366373.
8. ^ Nanxiu Qian et al., eds. (2020). Rethinking the Sinosphere: Poetics, Aesthetics, and Identity Formation. Cambria Press. ISBN 978-1604979909.
9. ^ Nanxiu Qian et al., eds. (2020). Reexamining the Sinosphere: Cultural Transmissions and Transformations in East Asia. Cambria Press. ISBN 978-1604979879.
10. ^ Jeffrey L. Richey (2013). Confucius in East Asia: Confucianism's History in China, Korea, Japan, and Vietnam. Association for Asian Studies. ISBN 978-0924304736; Rutgers University, ed. (2010). East Asian Confucianism: Interactions and Innovations. Rutgers University. ISBN 978-0615389325; Chun-chieh Huang, ed. (2015). East Asian Confucianisms: Texts in Contexts. National Taiwan University Press and Vandenhoeck & Ruprecht. ISBN 9783847104087.
11. ^ Benjamin A. Elman, ed. (2014). Rethinking East Asian Languages, Vernaculars, and Literacies, 1000–1919. Brill. ISBN 978-9004279278.
12. ^ Pelly, Patricia (2018). "Vietnamese Historical Writing". The Oxford History of Historical Writing: Volume 5: Historical Writing Since 1945. Oxford University Press. doi:10.1093/oso/9780199225996.003.0028. ISBN 978-0-19-922599-6.
15. ^ See the "family tree" of Toynbee's "civilizations" in any edition of Toynbee's work, or e.g. as Fig.1 on p.16 of: The Rhythms of History: A Universal Theory of Civilizations, By Stephen Blaha. Pingree-Hill Publishing, 2002. ISBN 0-9720795-7-2.
24. ^ McCannon, John (February 2002). How to Prepare for the AP World History. ISBN 9780764118166.
27. ^ Kim, Kwang-Ok (1 February 2015). Re-Orienting Cuisine : East Asian Foodways in the Twenty-First Century. Berghahn Books, Incorporated. p. 14. ISBN 9781782385639.
28. ^ "Tradition: Okinawa Lunar New Year Celebration". Travelthruhistory. 20 January 2010. Retrieved 1 July 2021.
29. ^ Kornicki, P.F. (2011), "A transnational approach to East Asian book history", in Chakravorty, Swapan; Gupta, Abhijit (eds.), New Word Order: Transnational Themes in Book History, Worldview Publications, pp. 65–79, ISBN 978-81-920651-1-3.Kornicki 2011, pp. 75–77
30. ^ Kornicki (2011), pp. 66–67.
31. ^ Miyake (2004), pp. 98–99.
32. ^ Kornicki (2011), p. 68.
33. ^ "Given Japan’s strong tradition of Chinese textual scholarship, encouraged further by visits by eminent Chinese scholars since the early twentieth century, Japan has been one of the birthplaces of modern sinology outside China" Early China - A Social and Cultural History, page 11. Cambridge University Press.
39. ^ Jin, Li; Wuyun Pan; Yan, Shi; Zhang, Menghan (24 April 2019). "Phylogenetic evidence for Sino-Tibetan origin in northern China in the Late Neolithic". Nature. 569 (7754): 112–115. Bibcode:2019Natur.569..112Z. doi:10.1038/s41586-019-1153-z. ISSN 1476-4687. PMID 31019300. S2CID 129946000.
40. ^ Sagart, Laurent; Jacques, Guillaume; Lai, Yunfan; Ryder, Robin J.; Thouzeau, Valentin; Greenhill, Simon J.; List, Johann-Mattis (2019). "Dated language phylogenies shed light on the ancestry of Sino-Tibetan". Proceedings of the National Academy of Sciences. 116 (21): 10317–10322. doi:10.1073/pnas.1817972116. PMC 6534992. PMID 31061123.
41. ^ Fox, James (19–20 August 2004). Current Developments in Comparative Austronesian Studies. Symposium Austronesia, Pascasarjana Linguististik dan Kajian Budaya Universitas Udayana. ANU Research Publications. Bali. OCLC 677432806.
42. ^ Trejaut, Jean A; Kivisild, Toomas; Loo, Jun Hun; et al. (2005). "Traces of Archaic Mitochondrial Lineages Persist in Austronesian-Speaking Formosan Populations". PLOS Biology. 3 (8): e247. doi:10.1371/journal.pbio.0030247. PMC 1166350. PMID 15984912.
43. ^ Yunusbayev, Bayazit; Metspalu, Mait; Metspalu, Ene; et al. (21 April 2015). "The Genetic Legacy of the Expansion of Turkic-Speaking Nomads across Eurasia". PLOS Genetics. 11 (4): e1005068. doi:10.1371/journal.pgen.1005068. ISSN 1553-7390. PMC 4405460. PMID 25898006. Thus, our study provides the first genetic evidence supporting one of the previously hypothesized IAHs to be near Mongolia and South Siberia.
44. ^ Blench, Roger; Spriggs, Matthew (2003). Archaeology and Language II: Archaeological Data and Linguistic Hypotheses. Routledge. p. 203. ISBN 9781134828692.
45. ^ "Transeurasian theory: A case of farming/language dispersal". ResearchGate. Retrieved 13 March 2019.
46. ^ DeFrancis, John (1977). Colonialism and language policy in Viet Nam. The Hague: Mouton. ISBN 9027976430. OCLC 4230408.
47. ^ Sohn, Ho-min. (1999). The Korean language. Cambridge, UK: Cambridge University Press. ISBN 0521361230. OCLC 40200082.
48. ^ Shibatani, Masayoshi. (1990). The languages of Japan. 柴谷, 方良, 1944- (Reprint 1994 ed.). Cambridge [England]: Cambridge University Press. ISBN 0521360706. OCLC 19456186.
49. ^ Ratliff, Martha Susan. (2010). Hmong-Mien language history. Pacific Linguistics. ISBN 9780858836150. OCLC 741956124.
50. ^ "Why reading their own language gives Mongolians a headache". SoraNews24. 26 September 2013. Retrieved 27 April 2019.
51. ^ 林友順 (June 2009). "大馬華社遊走於簡繁之間" (in Chinese). Yazhou Zhoukan. Retrieved 30 March 2021.
52. ^ Zhou, Minglang (2012). Multilingualism in China: the politics of writing reforms for minority languages, 1949-2002. Berlin. ISBN 9783110924596. OCLC 868954061.
56. ^ a b Compare: J. James W. Harrington; Barney Warf (1995). Industrial Location: Principles, Practice, and Policy. Routledge. p. 199. ISBN 978-0-415-10479-1. As the textile industry began to abandon places with high labor costs in the western industrialized world, it began to sprout up in a variety of Third World locations, in particular the famous 'Four Tiger' nations of East Asia: South Korea, Taiwan, Hong Kong, and Singapore. Textiles were particularly important in the early industrialization of South Korea, while garment production was more significant to Hong Kong.
57. ^ "Why South Korea risks following Japan into economic stagnation". Australian Financial Review. 21 August 2018. Retrieved 27 April 2019.
58. ^ Abe, Naoki (12 February 2010). "Japan's Shrinking Economy". Brookings. Retrieved 27 April 2019.
59. ^ "The rise and demise of Asia's four little dragons". South China Morning Post. 28 February 2017. Retrieved 27 April 2019.
60. ^ "YPs' Guide To: Southeast Asia—How Tiger Cubs Are Becoming Rising Tigers". spe.org. Retrieved 27 April 2019.
61. ^ "The story behind Viet Nam's miracle growth". World Economic Forum. Retrieved 27 April 2019.
• Ankerl, Guy (2000). Coexisting contemporary civilizations : Arabo-Muslim, Bharati, Chinese, and Western. Global communication without universal civilization. 1. Geneva, Switzerland: INU Press. ISBN 978-2-88155-004-1.
• Elman, Benjamin A (2014). Rethinking East Asian Languages, Vernaculars, and Literacies, 1000–1919. Leiden: Brill. ISBN 978-9004279278.
• Fogel, Joshua A. (2009). Articulating the Sinosphere : Sino-Japanese relations in space and time. Edwin O. Reischauer Lectures ([Online-Ausg.] ed.). Cambridge, Mass.: Harvard University Press. ISBN 978-0-674-03259-0.
• Huang, Chun-chieh (2015). East Asian Confucianisms: Texts in Contexts. Taipei and Göttingen, Germany: National Taiwan University Press and Vandenhoeck & Ruprecht. ISBN 9783847104087.
• Qian, Nanxiu (2020). Reexamining the Sinosphere: Cultural Transmissions and Transformations in East Asia. Amherst, NY: Cambria Press. ISBN 978-1604979879.
• —— (2020). Rethinking the Sinosphere: Poetics, Aesthetics, and Identity Formation. Amherst, NY: Cambria Press. ISBN 978-1604979909.
• Richey, Jeffrey L. (2013). Confucius in East Asia: Confucianism's History in China, Korea, Japan, and Vietnam. Ann Arbor: Association for Asian Studies. ISBN 978-0924304736.
• Rutgers University, Confucius Institute (2010). East Asian Confucianism: Interactions and Innovations. New Brunswick, NJ: Rutgers University. ISBN 978-0615389325.
|
First Britons vaccinated against Covid-19
On Tuesday, the UK began vaccinating citizens against coronavirus. 90-year-old Margaret Keenan was the first to be vaccinated with a vaccine made by BioNTech and Pfizer. Keenan called on her countrymen to participate in the country’s largest-ever vaccination program. European media give their assessment of the British initiative.
Pioneers — and Guinea pigs
Hospodarske news welcomes the fact that it is the British who have become pioneers in such an important matter:
“To some extent, the British in the eyes of the West have now become a kind of guinea pig. Their example will decide whether this largest scientific, industrial and logistical operation ever undertaken by humanity in the name of its salvation will end in success or failure. Over the next few weeks, we will expect reports from UK hospitals and nursing homes, with a detailed analysis of the problems caused by vaccination. Fortunately, Great Britain has been a pioneer many times before: after all, it was the British who first convened Parliament, invented the steam engine, and became viewers of the first television program in the history of mankind.”
For the benefit of the country’s image
The start of vaccination will benefit the shaken self-esteem of the British, says La Repubblica:
“Now the UK will have to go its own way in this world, more or less devastated by Covid. It will have to go outside the borders of the European Union, which will no longer be able to serve as a refuge for it. … Given that since the beginning of the pandemic the UK has made all possible (including fatal) mistakes, it needs more than other European countries to become a center of innovation, efficiency, and initiative again. Britain is driven by a desire to refute the long-standing reproach that the country is withdrawing into a cocoon and turning into a closed island. Against this background, the country should not behave like a nation with the ambitions of an imperial superpower. It would be much better to appear in the eyes of the world community as a European state that has something to offer the world — and its own citizens.”
|
Your question: What can damage a baby’s hearing?
Can you damage a baby's hearing?
How do I know if I damaged my baby’s hearing?
Signs of hearing loss in your baby can include: Not being startled by loud sounds. Not turning toward a sound after he’s 6 months old. Not saying single words like “mama” or “dada” by the time he’s 1 year old.
What can affect a child’s hearing?
Causes of hearing problems in babies and children
• infections that develop in the womb or at birth, such as rubella (German measles) or cytomegalovirus, which can cause progressive hearing loss
• inherited conditions which stop the ears or nerves from working properly
Do babies with hearing loss cry?
Even if your baby does have a mild hearing loss, they will still be able to hear most or all of the sounds in their own voice when they cry or babble.
Can yelling hurt my newborn hearing?
When you see a baby startle, it might be because of a noise that, for them, is in the loud-painful zone. Do not attend live music or go near construction unless your baby is wearing ear protection. Recognise that noise from older siblings or shouting can damage your baby's hearing over the long term.
Can yelling hurt baby ears in womb?
Is Baby deaf in Baby Driver?
Jones is deaf in real life; Ansel Elgort had to learn sign language to communicate with him. In an introduction from Edgar Wright, he revealed that there was little to no CGI or green screen used to film the car chase sequences. The driving is all practically done.
Can baby hearing improve?
Changes in hearing thresholds of NICU infants
One infant with normal hearing progressed to severe hearing loss. Five infants who had SNHL in the initial hearing tests showed a hearing threshold improvement of more than 20 dB (mean difference of threshold, 35 dB), and four of them recovered to normal hearing.
What causes profound deafness in babies?
This type of hearing loss can be caused by: Exposure to certain toxic chemicals or medicines while in the womb or after birth. Genetic disorders. Infections the mother passes to her baby in the womb (such as toxoplasmosis, measles, or herpes)
|
Quick Answer: Is bug spray harmful to babies?
Why can’t babies use bug spray?
Generally, repellent with DEET should not be applied more than once a day. DEET can be put on exposed skin, as well as clothing, socks, and shoes. But don't use it on your child's face, under clothing, on cuts or irritated skin, or on the hands of young children.
Are pesticides safe around babies?
Pesticides are more dangerous for babies and children than adults because their bodies are still developing. Some research shows that exposure to pesticides as a baby may be linked to childhood cancer and development or behavior problems.
How can I protect my baby from mosquitoes?
Preventing mosquito bites on your baby
2. Apply insect repellent. …
3. Use mosquito netting. …
4. Keep the windows closed.
Are pesticides harmful to children?
Pesticide poisoning is especially harmful to children since their brains and nervous systems are at early, critical stages of development. Because their bodies are still growing, children have fewer natural defenses and can develop serious health effects if overexposed to pesticides.
Is pest control harmful for kids?
Generally, exposure occurs not only through the skin; inhalation can also cause health problems such as asthma. A lack of knowledge about pesticides, and even about natural pest-control methods, can leave your kids facing high health risks. Babies put things in their mouths while toddling, and this poses a great risk.
Is it safe to sleep in a room after spraying Raid?
Can You Sleep in a Room After Spraying Raid In It? As we have established, the odor is the best indicator of how safe a room is after a Raid application. So if you can’t smell the insecticide, it should be safe to sleep in the room — provided that you have aired it out properly.
|
Your fruit trees don't know it's winter
I don't know about you, but it is mid July and my fruit trees are clueless about what season it is. My dwarf black mulberry tree is actually fruiting (it is supposed to be fruiting in October/November) and the pomegranates are joyously pumping out new leaves. The figs, silvanberry and plums are still in leaf. Folks, this is NOT normal for this time of year! So what can we do?
Simply strip the leaves off your deciduous trees to force/trick them into dormancy. What this does is encourage the hormone gibberellin - the initiator of flower bud formation - which then leads to fruit production. The colder it is during dormancy, the more gibberellins.
If you live in coastal Perth, or far from the Hills area where nights rarely drop below 5°C or bring frosts, choose low-chill fruit varieties when buying deciduous trees. Even if it is really cold at night, warm daytime temperatures will often cancel out that precious chill factor required to set flowers and fruit on plums, apricots, peaches, apples, etc. Another hint is to locate your fruit trees away from heat-retaining walls, paving and fences.
Happy growing
|
Measure the Speed of Light with Chocolate and a Microwave!
Did you know that you can measure the speed of light with a chocolate bar and a microwave?
It's quantum mechanics, in your very own kitchen.
Here’s a really cool intersection of food and science that deserves attention: did you know that you can measure the speed of light using chocolate, a microwave, and a ruler?
All these humble instruments can come together to help you conduct “kitchen quantum mechanics,” as one researcher put it.
In order to measure the speed of light in your microwave, you’ll need to stop the plate inside from moving, so remove the wheels. Next, lay your chocolate flat in the microwave and set on high for 15 seconds.
By measuring the distance between the melted and soft portions of chocolate on the bar you’ll find the wavelength, which you’ll multiply by the frequency (waves per second) of your microwave. Yes, it is a little more complicated than that, but I’ll leave the explanations to the team at the Bristol Science Center.
Watch the video below:
There’s an Easy (And Tasty) Way to Measure the Speed of Light at Home
The first successful measurement of the speed of light took place in 1676. Danish astronomer Ole Rømer was trying to measure the orbit of Io, Jupiter's third largest moon, by watching how long it took to pass around the planet. Watching Io over many years, Rømer made a surprising discovery, says the American Museum of Natural History:
The time interval between successive eclipses became steadily shorter as the Earth in its orbit moved toward Jupiter and became steadily longer as the Earth moved away from Jupiter. These differences accumulated. From his data, Roemer estimated that when the Earth was nearest to Jupiter, eclipses of Io would occur about eleven minutes earlier than predicted based on the average orbital period over many years. And 6.5 months later, when the Earth was farthest from Jupiter, the eclipses would occur about eleven minutes later than predicted.
Roemer knew that the true orbital period of Io could have nothing to do with the relative positions of the Earth and Jupiter. In a brilliant insight, he realized that the time difference must be due to the finite speed of light.
Before Rømer, scientists were unsure if light had a limited speed or if its speedometer was permanently stuck at “infinite.”
A few hundred years later, techniques for measuring the speed of light have grown astoundingly more precise and, in some cases, more complex. But in the video above, the folks at the At Bristol science center show off a relatively simple way to calculate the speed of light that doesn't involve years of looking through a telescope eyepiece. In fact, their approach uses nothing but simple kitchen equipment—and chocolate.
In the video, hosts Ross Exton and Nerys Shah use little more than a microwave oven and a chocolate bar to show how to calculate the speed of light. The video doesn't make it perfectly clear how measuring the melted bits on a chocolate bar relates to the speed of light. But breaking it down a little more just requires taking a look at some of the units used in their measurements.
Hertz is the physics stand-in for “cycles per second.” The microwave used in the video produced light waves with a frequency of 2,450,000,000 Hertz, or that many cycles per second. Going from peak to peak in a wave—in this case the distance between the first and third melted bit of chocolate—is one cycle. Exton and Shah measured that distance as 0.12 meters, or 0.12 meters per cycle. Multiplying something measured in “meters per cycle” by something in “cycles per second” will give a measurement in “meters per second.” That's the wave's velocity—the speed of light.
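To make that unit arithmetic concrete, here is a minimal Python sketch using the figures quoted above (2,450,000,000 Hz and 0.12 m between the first and third melted spots); the variable names are ours, added for illustration.

```python
# Speed-of-light estimate from the figures quoted above:
# 2,450,000,000 Hz microwave frequency, 0.12 m per cycle.

frequency_hz = 2_450_000_000   # cycles per second, from the oven's label
wavelength_m = 0.12            # metres per cycle (first to third melted spot)

speed = wavelength_m * frequency_hz   # (m/cycle) * (cycles/s) = m/s
print(f"Estimated speed of light: {speed:.3e} m/s")   # ~2.94e8 m/s

true_c = 299_792_458                                  # defined value, m/s
print(f"Relative error: {abs(speed - true_c) / true_c:.1%}")   # about 2%
```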
The trick that makes the At Bristol team's approach work is that we in the modern era already know a few important things about light: that it has a finite speed, and that that speed is largely constant. We also have the benefit of physicists having already teased out the relationship between wavelength, frequency and velocity.
When Ole Rømer looked to Jupiter and first deduced the speed of light he came up with 214,000,000 meters per second. “This measurement, considering its antiquity, method of measurement, and 17th century uncertainty in exactly how far Jupiter was from the Earth, is surprisingly close to the modern value of 299,792,458 meters per second,” says Dave Kornreich for Cornell.
Using a microwave and a chocolate bar Exton and Shah got 294,000,000 meters per second—not bad for a little bit of kitchen science.
Use Your Microwave to Measure the Speed of Light
Can your microwave oven really measure the speed of light? Yes, it can be done. And since many of the suggested experiments also involve chocolate, it will be done. Oh yes, it will be done.
First, a brief summary of the facts:
Microwaves are part of the electromagnetic spectrum. The electromagnetic spectrum includes radio waves, infrared waves, visible light, and ultraviolet, and can best be described as a bunch of things that behave the way visible light does, even though we can't see them, which is a shame, since that would eliminate the need for recreational drugs. Microwaves move at the same speed that light does.
Microwave ovens produce microwaves in a special configuration, called a standing wave. A standing wave is a wave that so perfectly fits its container that it looks like it's standing still. Most people have created standing waves as children playing with jump ropes. If you lift and push at just the right times, the jump rope will have one place that moves into peaks and valleys, while staying still at the two ends. If you put a little more effort into it, you can make the jump rope have two places that form peaks and valleys, and three points where it seems to be holding still.
This s-like curve is one wave, and the length of it is one wavelength. (Yes, I know that that's obvious. Just bring that up whenever people complain that physics is hard.)
Inside the microwave, the peaks and valleys of a standing wave translate to big time oscillation, and that oscillation cooks the food. The nodes, or places where the jump rope seems to stand still, translate to no oscillation.
That's why the microwave tray rotates. It has to move the food in order to make sure that every part of your frozen dinner is exposed to the places of highest oscillation. If it just stayed still, the peas would be roughly at the temperature of the center of the sun, and be little green time bombs waiting to nuke your tongue, while the tater tots would be frozen, ready to break your teeth when you bite into them. Because frozen foods hate us as much as we hate them. It's inarguable. That's why I put it in the ‘facts' section.
The number of waves that blow by a certain point per second is said to be the frequency of the waves. The frequency, the wavelength, and the speed of waves have been established as having a set relationship with one another.
(Frequency) x (Wavelength) = Speed
This makes sense both logically and experimentally. For example, if you were sitting on the side of a one mile loop trail, and a runner ran past you once every ten minutes, you could determine their speed like this:
(6 loops per hour) X (1 mile per loop) = A speed of six miles per hour.
If six full waves cycled past you in one hour, the speed would be the same.
And so, we are armed with all the theoretical knowledge we need. Into the fray!
Every site I've been to agrees that you'll need a metric ruler and a microwave with the product label still attached, but the rotating tray brutally ripped out. They disagree, however, on the proper experimental material to nuke. Some sites say you'll need whipped egg whites on a plate. Others favor marshmallows in a dish. I'm going to recommend you go with the ones that recommend either wide chocolate bars or a layer of chocolate chips over a tray. Unless you can find chocolate marshmallows.
Speed of Light in a Microwave (with marshmallows!)
In your senior science studies, you may have learned about Hertz and his experiments with what we now recognise as radio waves. Through a series of experiments, he was able to demonstrate that the mystery radiation he was creating with the sparks from an induction coil behaved not only as a wave, by showing the wave behaviours of reflection, diffraction, refraction and interference, but also as a transverse wave, demonstrated by the fact that it could be polarised, just like Maxwell's predicted electromagnetic radiation.
Many of Hertz’s experiments relied on his being able to use the reflection and interference properties of the mystery waves to create standing waves.
Standing waves are formed when a wave is reflected back and forth between surfaces n/2 wavelengths apart, where n is a positive whole number. The wave interferes with itself, creating static nodes, or areas where the amplitude is always zero, and antinodes, or areas where the amplitude varies between the absolute maximum and minimum values for the wave. For a sinusoidal wave, the spacing between any node and its nearest neighbour node, or any antinode and its nearest neighbour antinode, is one half-wavelength.
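As a quick numerical check on that node-spacing rule, here is a small Python sketch (not from the article); it assumes the roughly 12.2 cm wavelength of a 2.45 GHz microwave and confirms that the zero crossings of a sinusoidal standing wave sit half a wavelength apart.

```python
import numpy as np

# Assumed wavelength of a 2.45 GHz microwave (roughly c / f); illustrative only.
wavelength = 0.122                        # metres
x = np.linspace(0, 0.5, 100_001)          # positions along a 0.5 m cavity
amplitude = np.sin(2 * np.pi * x / wavelength)

# Nodes are where the amplitude changes sign (crosses zero).
crossings = x[:-1][np.diff(np.sign(amplitude)) != 0]
spacings = np.diff(crossings)
print(f"Mean node spacing: {spacings.mean():.4f} m "
      f"(half-wavelength: {wavelength / 2:.4f} m)")
```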
Microwave ovens rely on the same principle. If you look inside your microwave, you will notice that the entire inside is made of metal, either solid pieces, or pieces perforated with small holes like on the door. (There’s usually also a rectangle that doesn’t look like it’s properly attached to the wall — that’s where you’ll find the antenna that produces the microwaves.) These are both very effective microwave mirrors. This not only shields the outside world from the microwaves generated inside the microwave oven, but also maximises the cooking efficiency by containing the energy in standing waves inside the microwave oven, and then rotating the food you are trying to heat so it passes alternately through areas of high and low intensity. Because of this, you can treat your microwave oven as a scaled down model of Hertz’s lab. The space is scaled down, and so is the wavelength of the radiation.
Maxwell not only predicted the existence and nature of electromagnetic waves, he was even able to predict their speed. The relationship between the wavelength, frequency and speed of a wave is a simple one: v = f•λ. Hertz was able to measure the wavelength and frequency of his mystery waves, thanks to being able to make standing waves, and thus he could easily calculate their speed. This speed was found to agree with Maxwell’s prediction, and also fell within the experimental error range of other scientists’ measurements of the speed of light .
There is a straightforward experiment you can do using your microwave oven to determine the speed of light using exactly the same principles Hertz did.
• A microwave oven with a removable rotating plate
• A large, flat, microwave-safe plate or board
• Mini marshmallows. Please note that the resolution given by full-sized marshmallows is inadequate. Alternatively, you could use:
• Shaved cooking chocolate. (Note: please use cooking chocolate. Other forms don’t melt adequately, or burn. The author has tried this experiment with Flake bars, which not only burn but produce large amounts of surprising green smoke.)
• Cheese, of any sort. It will melt, sweat, or in the case of wrapped cheese slices, desiccate or burn quite nicely.
• Thermal paper, e.g. fax paper roll. Not recommended, because it is not delicious.
Step 1: Remove the rotating plate from the microwave. If the T- or X-shaped piece that drives the rotation is removable, remove this too.
Step 2: Spread your marshmallows (or alternative) evenly across your plate or board. Place the plate or board into the microwave, taking care to ensure it is level, and that it will not rotate.
Step 3: Run the microwave at full power for 30 seconds. If this has been inadequate to cause regions of heating and/or melting, without moving the plate, you can run for additional 10 second bursts until the desired effect is achieved. If you are using marshmallows, they will inflate during heating, but do deflate again fairly quickly once they are allowed to cool slightly. This is okay: once deflated, you will usually find they have shrunk and melted slightly, so it is still possible to tell the “hot spots” from the rest.
Note that the longer heating takes/the more times you need to reheat, the more sideways heat transfer there will be, and therefore the wider the “hot spots”. In the above photograph, some extra re-heating was performed to maximise the puffiness of the marshmallows for the photo, and you can see how wide the “hot spots” have become.
Step 4: Measure the distance between two nearest-neighbour “hot spots”. This is your λ/2 value. But how do you find the frequency? It might seem a little like cheating, but because you can’t measure it directly without pulling your microwave apart and putting yourself in danger, you need to take advantage of the sticker on the back of the microwave oven that tells you about its operating parameters. Included on this sticker is the microwave frequency your oven uses. As you can see, the microwave oven used for this demonstration uses a frequency of 2450 MHz.
Step 5: Eat the melty marshmallowy mess.
In this demonstration, λ/2 was found to be 7 ± 1 cm. f was given as 2450 MHz.
Therefore we can calculate: c = f × λ = 2450 × 10^6 Hz × 2 × 0.07 m ≈ 3.4 × 10^8 m/s.
This is about right, but a little off. There was some uncertainty in my measurement of the half-wavelength, though, which I can now include in my answer. I recorded an uncertainty of 1 cm. I can convert this to a percentage of 7 cm, and then back to an uncertainty value in my final calculated speed.
Uncertainty in c = (0.14×3.4)×10^8 = 0.5 × 10^8 m / s,
meaning my final answer should be expressed as c= (3.4 ± 0.5) × 10^8 m / s
The true value of the speed of light in air, 3.0 × 10^8 m / s, falls within this range.
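The same calculation and uncertainty estimate can be written out as a short Python sketch, using the values measured in this demonstration (7 ± 1 cm hot-spot spacing, 2450 MHz); the variable names are our own.

```python
# Measured values from this demonstration.
frequency_hz = 2450e6        # from the sticker on the back of the oven
half_wavelength_m = 0.07     # 7 cm between neighbouring hot spots
uncertainty_m = 0.01         # +/- 1 cm reading uncertainty

c_estimate = frequency_hz * 2 * half_wavelength_m          # v = f * lambda
relative_uncertainty = uncertainty_m / half_wavelength_m   # about 14%
c_uncertainty = c_estimate * relative_uncertainty

print(f"c = ({c_estimate / 1e8:.1f} +/- {c_uncertainty / 1e8:.1f}) x 10^8 m/s")
# -> c = (3.4 +/- 0.5) x 10^8 m/s, consistent with 3.0 x 10^8 m/s in air
```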
Leftover Valentine's Chocolate? Use It to Measure the Speed of Light
If you're a long-time reader, you may remember the great leftover Easter Peeps microwave experiment. Well, today we're going to be nuking leftover Valentine's Day chocolate to demonstrate one of the constants of physics, the speed of light. Chocolate makes a very appropriate medium, because the heating property of microwaves was first discovered by a scientist whose candy bar melted in his pocket when he got too close to a microwave device being tested for use in radar.
WARNING: This experiment may take several tries to get right. We are not responsible for any weight gained. To avoid familial strife, be sure to only do this experiment with your own chocolates or with candy which you have been authorized to access. You can probably find some leftover boxes on sale this week.
The demonstration works because microwave ovens produce standing waves -- waves that move "up" and "down" in place, instead of rolling forward like waves in the ocean. Microwave radiation falls into the radio section of the electromagnetic spectrum. Most ovens produce waves with a frequency of 2,450 megahertz (millions of cycles per second). The oven is designed to be just the right size to cause the microwaves to reflect off the walls so that the peaks and valleys line up perfectly, creating "hot spots" (actually, lines of heat).
What you do with the candy is to find the hot spots and measure the distance between them. From that information, you can determine the wavelength. And when you multiply the wavelength by the frequency, you get the speed! Here's what you do:
1. Make sure the candy is in a microwave-proof box. Better yet, take the chocolate out and put in a microwave safe dish.
2. Remove the turntable in your oven. (You want the candy to stay still while you heat it.) Put an upside-down plate over the turning-thingy, and place your dish of candy on top.
3. Heat on high about 20 seconds.
4. Take the chocolate out and look for hot spots. Depending on the candy you use, you may have to feel the candy to see where it has softened. With the cherry cordials we used, we saw several shiny spots and one place where the chocolate shell melted through, releasing the sweet syrup inside.
5. Measure the distance between two adjacent spots. This should be the distance between the peak and the valley (crest and trough) of the wave. Since the wavelength is the distance between two crests, multiply by 2. Finally, multiply that result by the frequency expressed in hertz, or 2,450,000,000 (2.45 × 10^9 for my son who is just learning scientific notation).
In our trial, we measured a distance of roughly 6 centimeters. 6 x 2 x 2,450,000,000 = 29,400,000,000 centimeters per second, or 294,000,000 meters per second. This is awfully close to 299,792,458 meters per second, which is the speed of light. Not bad for some leftover chocolate and a kitchen appliance!
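For readers who want to check the arithmetic, here is the same trial worked through in a few lines of Python; only the 6 cm spot spacing and the 2.45 GHz frequency come from the text above, the rest is our own framing.

```python
# Cherry-cordial trial: 6 cm between adjacent hot spots, 2.45 GHz oven.
spot_spacing_cm = 6                          # half a wavelength
frequency_hz = 2.45e9

wavelength_m = 2 * spot_spacing_cm / 100     # double it, then convert cm -> m
c_estimate = wavelength_m * frequency_hz
true_c = 299_792_458

print(f"Estimate: {c_estimate:.3e} m/s")                   # 2.940e+08 m/s
print(f"Off by {abs(c_estimate - true_c) / true_c:.1%}")   # about 2%
```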
I discovered this experiment at Null Hypothesis, although it can be found all over the Internet, including many versions with fancy charts and animations. By the way, melted chocolate bars are perfect as ice cream topping. Just saying.
Using a microwave oven and chocolate to measure the speed of light
I've been teaching science/physics for quite a while, and written lots of stuff along the way. Much of what I've written is for Nelson Thornes, OUP and SamLearning, but here are some things that are properly mine and I can publish here. Hope you find them useful. At you'll find some more items, and minisites about gcse radioactivity, energy resources and the electromagnetic spectrum which can occupy a class for a whole lesson and more.
Remove the turntable from the microwave oven, place a large bar of chocolate in there (maybe raise it a bit on a plastic plate).
Run the microwave oven, with luck and skill you can get melted chocolate spots at the antinodes.
Measure the distance between the spots, double it and you have the wavelength. Look up the frequency on the back of the microwave oven, use the wave equation and calculate c. Then eat the chocolate.
Remove the turntable from the microwave oven — including the wheels if your microwave has them. Place a microwaveable tray upside down over the center rotator in the oven and turn it on for 5 s. (If the tray rotates, you might need to use a larger platform, like a paper plate.) You do not want your chocolate bar to move in the microwave. Put the chocolate bar upside down on the back of the tray (see Image 2).
No safety issues are associated with this activity. When the activity is finished, just consume any and all of the chocolate you want.
Materials Used in this Experiment:
• Long chocolate bar – $2.00, $3.50 in NYC
• Microwave Oven: $100
• Ruler: $.11 if you buy by the thousands
• Personal, intimate knowledge of the speed of light: Priceless
Practical Applications:
Measuring the speed of light melting cheese in a microwave oven
It's a good way to show the nature of waves (no matter their frequency), but inside the microwave you don't have a “pure” wave, because the waves are reflecting, so in fact you don't have a pure wavelength equivalent to those 2450 MHz. Anyway, well done, very instructive. In Spain we are not used to seeing “practical” demonstrations of the theory. Congrats.
Watch the video: How to measure the speed of light - with CHOCOLATE! Do Try This At Home. We The Curious
|
Hazardous Waste
What is hazardous waste?
Hazardous waste is waste that causes environmental degradation and harms people's health.
The Hazardous waste list
1. Batteries
2. Torch Cells
3. Tube Lights / Fluorescent Lamps
4. Chemical Waste
5. Automobile Waste
6. Paint Waste
7. Medical Waste
8. Sanitary Napkins
Safe disposal of hazardous waste is the responsibility of your local government. Either they do it themselves or they do it through a waste-management company on a contractual basis.
You have the responsibility of separating your household hazardous waste and handing it over to the collecting agency. As a nature lover and environmentalist (yes, by practising Home Exnora Yoga, you are now an environmentalist), it is your duty to ensure that the hazardous waste goes to a secured landfill and nowhere else. If there is no secured landfill, you must organize with other citizens to have the government create one.
|
The Kidderminster West Team have attempted to answer some intriguing questions put forward by our young people.
Session 1: Creation
How did God create the world? Do some Christians believe in the big bang theory? Why are there no dinosaurs in the Bible?
Session 2: Miracles
How could Jesus do things like control the sea when they aren’t possible to do? Who gave Jesus those powers?
Session 3: The Trinity
If God created Jesus, who created God? If Jesus is also God why do we say God is his father?
Session 4: The Church
Why do we have church? How do you feel when you are at church? How does the music at church make you feel? Is it exciting to be at church? Do Christians invite friends over for dinner?
Session 5: The problem of evil
Why do bad things still happen if God is in charge?
Session 6: Atonement - Coming Soon!
Why did Jesus die? How could Jesus dying be for us?
|
Quick Answer: How do you extend Extrude in Solidworks?
How do you 3D Extrude in Solidworks?
Extruding Surfaces from a 2D or 3D Face
1. Click Insert > Surface > Extrude.
2. Select a face: …
4. Select the end condition.
How do you add a boss Extrude in Solidworks?
Extrude PropertyManager
1. Create a sketch.
2. Click one of the extrude tools: Extruded Boss/Base. on the Features toolbar, or click Insert > Boss/Base > Extrude. Extruded Cut. on the Features toolbar, or click Insert > Cut > Extrude. Extruded Surface. on the Surfaces toolbar, or click Insert > Surface > Extrude.
Can we extrude the sketch in both directions, yes or no?
To extrude in both directions from the sketch plane in the PropertyManager, under Direction 1, select Through All – Both Directions. To extrude as a thin feature, set the PropertyManager options in Thin Feature.
How do you Extrude a 3D sketch cut?
You can also go to the toolbar and click Insert -> Cut -> Extrude to pull up the Cut-Extrude Feature menu as well. Once the Extrude menu appears on the left-hand side of your screen, it will prompt you to select a plane or sketch that will be used to create the 3D model feature.
How do I sketch in Solidworks 3D?
Beginning a 3D Sketch
What is extrude boss in Solidworks?
The Extrude tool is used to extend a sketched profile in one or two directions as either a thin feature or a solid feature. An extrude operation can either add material to a part (in a base or boss) or remove material from a part.
Where is the extrude button in Solidworks?
Why can’t I see dimensions in Solidworks?
In your FeatureManager Tree, right-click on the Annotations folder and select Show Feature Dimensions. … If dimensions for a certain feature do not appear, you can right-click on the feature from either the graphics area or the FeatureManager Tree and select Show All Dimensions.
How do you hide a formula in Solidworks?
To disable equations, in Equations, Global Variables, and Dimensions dialog box, in any view, right-click an equation and click Disable Equation. The equation disappears from the view.
How do I see the size of a part in Solidworks?
|
Ben Tarnoff: Covid-19 and the Cloud
As of writing, roughly half of the world’s population is living under lockdown.
Not everyone can remain indoors, of course: millions of working-class people put their lives at risk every day to be the nurses, grocery store clerks, and other essential workers on whom everyone else’s survival depends. But globally, a substantial share of humanity is staying home.
One consequence is a sharp increase in internet usage. Trapped inside, people are spending more time online. The New York Times reports that in January, as China locked down Hubei province — home to Wuhan, the original epicenter of Covid-19 — mobile broadband speeds dropped by more than half because of network congestion. Internet speeds have suffered similar drops across Europe and the United States, as stay-at-home orders have led to spikes in traffic. In Italy, which has one of the highest coronavirus death tolls in the world, home internet use has increased 90 percent.
The internet is already deeply integrated into the daily rhythms of life in much of the world. Under the pressures of the pandemic, however, it has become something more: the place where, for many, life is mostly lived. It is where one spends time with family and friends, goes to class, attends concerts and religious services, buys meals and groceries. It is a source of sustenance, culture, and social interaction; for those who can work from home, it is also a source of income. Quarantine is an ancient practice. Connected quarantine is a paradox produced by a networked age.
Anything that can help people endure long periods of isolation is useful for containing the virus. In this respect, the internet is a blessing — if an unevenly distributed one. Indeed, the pandemic is highlighting the inequalities both within and across countries when it comes to connectivity, and underlining why internet access should be considered a basic human right.
But the new reality of connected quarantine also brings certain risks. The first is social: the greater reliance on online services will place more power in the hands of telecoms and platforms. Our undemocratic digital sphere will only become more so, as the firms that own the physical and virtual infrastructures of the internet come to mediate, and to mold, even more of our existence. The second danger is ecological. The internet already makes very large demands of the earth’s natural systems. As usage increases, those demands will grow.
In our efforts to mitigate the current crisis, then, we may end up making other crises worse. A world in which the internet as it is currently organized becomes more central to our lives will be one in which tech companies exercise more influence over our lives. It may also be one in which life of all kinds becomes harder to sustain, as the environmental impact of a precipitously growing internet accelerates the ongoing collapse of the biosphere — above all, by making the planet hotter.
Machines Heat the Planet
To understand how the internet makes the planet hotter, it helps to begin with a simplified model of what the internet is. The internet is, more or less, a collection of machines that talk to one another. These machines can be big or small — servers or smartphones, say. Every year they become more ubiquitous; in a couple of years, there will be thirty billion of them.
These machines heat the planet in three ways. First, they are made from metals and minerals that are extracted and refined with large inputs of energy, and this energy is generated from burning fossil fuels. Second, their assembly and manufacture is similarly energy-intensive, and thus similarly carbon-intensive. Finally, after the machines are made, there is the matter of keeping them running, which also consumes energy and emits carbon.
Given the breadth and complexity of this picture, it would take a considerable amount of time to map the entire carbon footprint of the internet precisely. So let’s zero in on a single slice: the cloud. If the internet is a collection of machines that talk to one another, the cloud is the subset of machines that do most of the talking. More concretely, the cloud is millions of climate-controlled buildings — ”data centers” — filled with servers. These servers supply the storage and perform the computation for the software running on the internet — the software behind Zoom seders, Twitch concerts, Instacart deliveries, drone strikes, financial trades, and countless other algorithmically organized activities.
The amount of energy consumed by these activities is immense, and much of it comes from coal and natural gas. Data centers currently require 200 terawatt hours per year, roughly the same amount as South Africa. Anders Andrae, a researcher at Huawei, predicts that number will grow 4 to 5 times by 2030. This would put the cloud on par with Japan, the fourth-biggest energy consumer on the planet. Andrae made these predictions before the pandemic, however. All indications suggest that the crisis will supercharge the growth of the cloud, as people spend more time online. This means we could be looking at a cloud even bigger than Japan by 2030 — perhaps even the size of India, the world’s third-biggest energy consumer.
Machine Learning is a Fossil Fuel Industry
What can be done to avert the climate damage of such a development? One approach is to make the cloud run on renewable energy. This doesn’t entirely decarbonize data centers, given the carbon costs associated with the construction of the servers inside of them, but it does reduce their impact. Greenpeace has been waging a campaign along these lines for years, with some success. The use of renewables by data centers has grown, although progress is uneven: according to a recent Greenpeace report, Chinese data centers are still primarily powered by coal. It also remains difficult to accurately gauge how much progress has been made, since corporate commitments to lower carbon emissions are often little more than greenwashing PR. “Greening” one’s data centers can mean any number of things, given a general lack of transparency and reporting standards. A company might buy some carbon offsets, put out a celebratory press release, and call it a day.
Another approach is to increase the energy efficiency of data centers. This is an easier sell for companies, because they have a strong financial incentive to lower their electricity costs: powering and cooling data centers can be extraordinarily expensive. In recent years, they have come up with a number of ways to improve efficiency. The emergence of “hyperscale” data centers, first developed by Facebook, has been especially important. These are vast, automated, streamlined facilities that represent the rationalization of the cloud: they are the digital equivalent of the Fordist assembly line, displacing the more artisanal arrangements of an earlier era. Their economies of scale and obsessive optimizations make them highly energy-efficient, which has in turn moderated the cloud’s power consumption in recent years.
This trend won’t last forever, however. The hyperscalers will max out their efficiency, while the cloud will continue to grow. Even the more conscientious companies will have trouble procuring enough renewables to keep pace with demand. This is why we may also have to contemplate another possibility: not just greening the cloud, or making it more efficient, but constraining its growth.
To consider how we might do that, let’s first consider why the cloud is growing so fast. One of the most important factors is the rise of machine learning (ML). ML is the field behind the current “AI boom.” A powerful tool for pattern recognition, ML can be put to many purposes, from analyzing faces to predicting consumer preferences. To recognize a pattern, though, an ML system must first “learn” the pattern. The way that ML learns patterns is by training on large quantities of data, which is a computationally demanding process. Streaming Netflix doesn’t place much strain on the servers inside a data center; training the ML model that Netflix uses for its recommendation engine probably does.
Because ML hogs processing power, it also carries a large carbon footprint. In a paper that made waves in the ML community, a team at the University of Massachusetts, Amherst found that training a model for natural-language processing — the field that helps “virtual assistants” like Alexa understand what you’re saying — can emit as much as 626,155 pounds of carbon dioxide. That’s about the same amount produced by flying roundtrip between New York and Beijing 125 times.
Training models isn’t the only way that ML contributes to climate change. It has also stimulated a hunger for data that is probably the single biggest driver of the digitization of everything. Corporations and governments now have an incentive to acquire as much data as possible, because that data, with the help of ML, might yield valuable patterns. It might tell them who to fire, who to arrest, when to perform maintenance on a machine, or how to promote a new product. It might even help them build new kinds of services, like facial recognition software or customer-service chatbots. One of the best ways to make more data is to put small connected computers everywhere—in homes and stores and offices and factories and hospitals and cars. Aside from the energy required to manufacture and maintain those devices, the data they produce will live in the carbon-intensive cloud.
The good news is that awareness of ML’s climate impacts is growing, as is the interest among practitioners and activists in mitigating them. Towards that end, one group of researchers is calling for new reporting standards under the banner of “Green AI.” They propose adding a carbon “price tag” to each ML model, which would reflect the costs of building, training, and running it, and which could drive the development of more efficient models.
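To illustrate the kind of carbon "price tag" the Green AI researchers propose, here is a minimal, hedged Python sketch; the formula (GPU-hours × power draw × facility overhead × grid carbon intensity) follows the general approach of ML emissions estimators, and every number in it is an illustrative assumption rather than a figure from this essay.

```python
# Rough sketch of a training-run carbon "price tag". All defaults are
# illustrative assumptions, not figures from the article.

def training_emissions_kg(gpu_hours: float,
                          gpu_power_kw: float = 0.3,        # assumed average draw per GPU
                          pue: float = 1.5,                 # assumed data-center overhead factor
                          grid_kgco2_per_kwh: float = 0.4   # assumed grid carbon intensity
                          ) -> float:
    """Estimate CO2 in kg: GPU energy, scaled by facility overhead (PUE),
    converted via the grid's carbon intensity."""
    energy_kwh = gpu_hours * gpu_power_kw * pue
    return energy_kwh * grid_kgco2_per_kwh

# Example: a hypothetical training run that consumes 10,000 GPU-hours.
print(f"{training_emissions_kg(10_000):,.0f} kg CO2")   # 1,800 kg with these assumptions
```

A real price tag would of course use measured energy draw and region-specific grid intensities rather than defaults like these.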
This is important work, but it needs a qualitative dimension as well as a quantitative one. We shouldn’t just be asking how much carbon an ML application produces. We should also be asking what those applications do.
Do they enable people to lead freer and more self-determined lives? Do they cultivate community and solidarity? Do they encourage more equitable and more cooperative forms of living? Or do they extend corporate and state surveillance and control? Do they give advertisers, employers, and security agencies new ways to monitor and manipulate us? Do they strengthen capitalist class power, and intensify racism, sexism, and other oppressions?
Resistance with Transformation
A good place to start when we contemplate curbing the growth of the cloud, then, is asking whether the activities that are driving its growth contribute to the creation of a democratic society. This question will acquire new urgency in the pandemic, as our societies become more enmeshed in the internet. It is a question that cannot be resolved on a technical basis, however — it is not an optimization problem, like trying to maximize energy efficiency in a data center. That’s because it involves choices about values, and choices about values are necessarily political. Therefore, we need political mechanisms for making these choices collectively.
Politics is necessarily a conflictual affair, and there will be plenty of conflicts that arise in the course of trying to both decarbonize and democratize the internet. For one, there are obvious tensions between the moral imperative of improving and expanding access and the ecological imperative of keeping the associated energy inputs within a sustainable range. But there will also be many cases where restricting and even eliminating certain uses of the internet will serve both social and environmental ends simultaneously.
Consider the fight against facial recognition software that has erupted across the world, from protesters in Hong Kong using lasers to disrupt police cameras to organizers in the United States pushing for municipal bans. Such software is incompatible with basic democratic values; it also helps heat the planet by relying on computationally intensive ML models. Its abolition would thus serve both the people and the planet.
But we need more than abolition. We also need to envision and construct an alternative. A substantive project to decarbonize and democratize the internet must combine resistance with transformation; namely, it must transform how the internet is owned and organized. So long as the internet is held by private firms and run for profit, it will destabilize natural systems and preclude the possibility of democratic control. The supreme law of capitalism is accumulation for accumulation’s sake. Under such a regime, the earth is a set of resources to be extracted, not a set of systems to be repaired, stewarded, and protected. Moreover, there is little room for people to freely choose the course of their lives, because everyone’s choices — even those of capitalists — are constrained by the imperative of infinite accumulation.
Dissolving this law, and formulating a new one, will of course involve a much broader array of struggles than those aimed at building a better internet. But the internet, as its size and significance grows through the pandemic, may very well become a central point of struggle. In the past, the internet has been a difficult issue to inspire mass mobilization around; its current highly privatized form, in fact, is partly due to the absence of popular pressure. The new life patterns of connected quarantine might reverse this trend, as online services become, for many, both a window to the world and a substitute for it, a lifeline and a habitat. Perhaps then the internet will be a place worth struggling to transform, as well as a tool for those struggling to transform everything else.
|
1. What are the benefits of getting the vaccine?
2. How do I sign up to get the vaccine?
3. Can children be vaccinated for COVID-19?
4. How many doses are needed?
5. What is a mRNA vaccine?
6. What are the side effects of the vaccine?
7. Is the vaccine safe?
8. Do people who get the vaccine still have to wear masks?
The following recommendations apply to non-healthcare settings. For related information for healthcare settings, visit Updated Healthcare Infection Prevention and Control Recommendations in Response to COVID-19 Vaccination.
Fully vaccinated people can:
1. Participate in many of the activities that they did before the pandemic; for some of these activities, they may choose to wear a mask.
2. Resume domestic travel and refrain from testing before or after travel and from self-quarantine after travel.
4. Refrain from routine screening testing if feasible.
Infections happen in only a small proportion of people who are fully vaccinated, even with the Delta variant. However, preliminary evidence suggests that fully vaccinated people who do become infected with the Delta variant can spread the virus to others. To reduce their risk of becoming infected with the Delta variant and potentially spreading it to others, the CDC recommends that fully vaccinated people:
Wear a mask in public indoor settings if they are in an area of substantial or high transmission. (To find out the current level of transmission in Riley County, please visit the CDC COVID Data Tracker.)
Get tested if experiencing COVID-19 symptoms.
KDHE Advice About Masks
|
3 ways to achieve sustainable development!
The key to achieving sustainable governance in the new, full-world context is an integrated approach across disciplines, stakeholder groups, and generations, based on the paradigm of “adaptive management”, whereby policy-making is an iterative experiment that acknowledges uncertainty.
The principles
Within this paradigm, six core principles (the Lisbon principles) embody the essential criteria for sustainable governance and the use of common natural and social capital assets:
Principle 1: Responsibility
Principle 2: Scale-matching
Principle 3: Precaution
Principle 4: Adaptive Management
Principle 5: Full cost allocation
Principle 6: Participation
3 ways to achieve sustainable development!
1. Respecting ecological limits
Once society has accepted the world-view that the economic system is sustained and contained by our finite global ecosystem, it is obvious that we must respect ecological limits. This requires us to understand precisely what these limits entail, and where economic activity currently stands in relation to them.
However, our limited understanding of ecosystem structure and function, and the dynamic nature of ecological and economic systems, means that this precise point may be difficult to determine.
2. Protecting capabilities for flourishing
Reduced working hours can increase flourishing by improving the work-life balance, and there is evidence that fewer working hours can reduce consumption-related environmental impacts. A sense of community, which is necessary for democracy, is hard to maintain across vast income differences.
The main justification for such differences has been that they stimulate growth, which will one day make everyone rich. However, in our full world, with its steady-state or contracting economy, this is unrealistic. Without aggregate growth, poverty reduction requires redistribution.
3. Building a sustainable macro-economy
The central focus of macro-economic policy is to maximize economic growth; lesser goals include price stabilization and ensuring full employment. If society adopts the central economic goal of sustainable human well-being, macro-economic policy will change drastically. A key leverage point is the current monetary system.
However, there are several serious problems with this monetary system: it is highly destabilizing; it systematically transfers resources to the financial sector; the banking system will only create money to finance market activities that can generate the revenue required to repay the debt plus interest; and the system is ecologically unsustainable.
|
Crimean-Congo: The 'Asian Ebola' Virus
Ebola is the most famous of the hemorrhagic fever viruses, but it’s not the only one.
What is commonly called "Ebola" is more specifically the species Zaire ebolavirus, which belongs to the genus Ebolavirus. This group also contains nasty species called Bundibugyo, Sudan, and Taï Forest ebolavirus. Marburgvirus, a separate genus, contains the human pathogenic viruses called Marburg and Ravn. These diseases are largely limited to Africa.
Not so for Crimean-Congo hemorrhagic fever (CCHF). This killer, which is a member of the order Bunyavirales, is completely unrelated to the aforementioned viruses. CCHF has caused outbreaks throughout the Middle East and Asia, infecting more than 1,000 people every year since 2002. (See figure: the countries shown in orange are endemic for CCHF, and "CFR" refers to the case-fatality rate, which ranges from roughly 4% to 20%.)
Let us hope that future CCHF outbreaks go as smoothly.
Source: Kizito S, Okello PE, Kwesiga B, et al. "Notes from the Field: Crimean-Congo Hemorrhagic Fever Outbreak — Central Uganda, August–September 2017." MMWR 67 (22): 646-647. DOI: 10.15585/mmwr.mm6722a6.
|
Peru: A Journey in Time, a historic exhibition at the British Museum, has been a decade in the planning and allows the museum to highlight pieces from its own collections alongside Peruvian treasures shown for the first time in the UK. It opens on the 200th anniversary of Peru’s declaration of independence from Spain, with the United Kingdom being one of the first nations to acknowledge the new nation’s sovereignty.
However, for a western audience, the neatness of this chronology is nearly the only recognizable part of a program that continuously questions the most fundamental conceptions of how the world works and how it can, and should, be lived in. The idea of time is one of the most difficult to overcome.
The exhibition’s subtitle is both a practical description of a 3,500-year chronological analysis of many distinct civilizations and an introduction to how Andean time was perceived. “We tend to believe that we are in the present, that the past is behind us, and that the future is ahead of us,” says co-curator Jago Cooper.
“In Andean communities, the past, present, and future are all parallel lines that happen at the same time. So the past isn’t dead; it’s occurring right now, and it has the power to affect the present. And the greatest way to prepare for the future is to acknowledge the link between the past and present.”
Other differences in ancient Peru (pre-Columbus) include the absence of a script-based writing tradition and a monetary transaction system.
“There’s also the extreme diversity of the environment,” says Cecilia Pardo, Cooper’s colleague curator. “Deeply sophisticated and sustainable innovation and technologies were required to negotiate life on the Pacific coast, in arid deserts, the high Andes, or the rainforest, all of which prompted unique ways of societies succeeding.”
A broad variety of spectacular objects on exhibit attest to this success: from exceptionally well preserved fabrics, some of which are over 2,000 years old, to wooden carvings that shed fresh light on ritual deaths, huge ceramic collections, and complex applications of precious metals.
Because civilizations’ economic foundations were not based on arbitrary currency valuations, reciprocal obligation systems generally drove advancement and output. According to Cooper, “people had an obligation to maintain and sustain each other and the world around them.”
“This had far-reaching implications for how resources were managed as well as how things were manufactured.” Large constructions, like textiles and other minor products, were created communally and willingly, rather than by slave labor as in other regions of the globe.”
In the absence of a written culture, artifacts took on greater significance as transmitters of cultural information, ideas, and beliefs. Because they were funeral gifts stored in sealed tombs, many of the artifacts in the display have survived and provide information on belief systems and customs.
However, as Cooper points out, the show’s core message is that this is a civilization in which the past is alive and only formed in the present.
“It is likely that fewer than 10% of potential sites have been excavated in Peru,” Pardo says. Many additional excavations are underway, with Peruvian and international archaeologists examining various facets of this lengthy narrative.
The curators are both humble and excited about what the future may hold in this exhibition, which provides a wonderful snapshot of what has been discovered and what we know now. These civilizations’ extraordinary history is continuously being written.”
Four ancient Peruvian artifacts from the past
Red mantle of a large size
The Nasca people, who typically buried their deceased in a sitting posture, clothed in layers of fabric, utilized this funeral blanket, which is one of the earliest relics in the show. The recurring figure on the fabric, wearing panther masks and bearing human heads, is most likely a symbol of an ancestor who would look after the departed in the afterlife.
The burial would have taken place in southern Peru’s dry deserts, where the absence of humidity has enabled the cloth to survive for over 2,000 years.
A prisoner who is confined
A wooden sculpture discovered amid layers of guano on an island off the coast of Peru. It portrays a powerful person restrained by a rope before being ritually executed. In ancient Peru, there was comparatively less violence and more ritualised encounters, resulting in significantly less carnage than in European battles of the time.
These would culminate in the capture of prisoners and the public execution of members of the vanquished side. These fatalities, which, ironically, exalted the value of human life above mass killing on the battlefield, were often commemorated in public murals.
A man and a woman copulating in a vessel
This ceramic stirrup pot portraying a couple having intercourse – her tattoos suggest it was manufactured around the end of the Nasca period, which ended around AD 800 – would have been inefficient for transporting fluids and was not meant to be used on a regular basis. Instead, it was a funerary gift, of a kind often discovered in fragments inside the tomb.
The corpse and the stirrup spout were in opposite places, suggesting that the stirrup spout had been purposefully damaged during a ritual before the burial was sealed. Women’s representation on pottery did not begin until about AD400, and was mainly associated with fecundity.
A beautiful plate dating back over 1,000 years and measuring more than 10cm in length. A big spool would be threaded through the ear lobe, and the piece would be worn as shown.
Wood, metal, mother of pearl, and other valuable shells are used to create it. The inlay work’s intricacy, as well as the variety of materials utilized, indicate it to be a product of a smart and rich culture.
The red substance, spondylus, a spiny bivalve known as the thorny oyster, was especially prized; it was crushed into powder and spread over the ground to form something like a crimson carpet for dignitaries, in addition to being used for ornamental purposes.
|
Macular Degeneration
Macular degeneration is most common in people over the age of 65, but there have been some cases affecting people as young as their 40s and 50s. Symptoms include blurry or fuzzy vision, a reduction in color vision, straight lines like telephone poles and sides of buildings appear wavy, and a dark or empty area may appear in the center of vision.
What is the macula?
The macula is the small, central portion of the retina, the light sensitive lining at the back of the eye. Light rays from objects that we are looking at come to a focus on the retina and are converted into electrical impulses that are then sent to the brain. The macula is responsible for sharp straight-ahead vision necessary for functions such as reading, driving a car, and recognizing faces.
The effect of this disease can range from mild vision loss to central blindness. That is, blindness "straight ahead" but with normal peripheral vision from the non-macular part of the retina which is undamaged by the disease.
Two types of macular degeneration
Ninety percent of age-related macular degeneration (ARMD) is of the "atrophic" or "dry" variety. It is characterized by a thinning of the macular tissue and the development of small deposits on the retina called drusen. Dry ARMD develops slowly and usually causes mild visual loss. The main symptom is often a dimming of vision when reading.
The second form of ARMD is called "exudative" or "wet" because of the abnormal growth of new blood vessels under the macula where they leak and eventually create a large blind spot in the central vision. This form of the disease is of much greater threat to vision than the more common dry type.
What are the causes of ARMD?
Unfortunately, the cause of this eye condition is not fully understood, but it is associated with the aging process. As we age, we become more susceptible to numerous degenerative processes like arthritis, heart conditions, cancer, cataracts, and macular degeneration. These conditions may be caused by the body's overproduction of free radicals.
During the metabolic process, oxygen atoms with an extra electron are released. These extra electrons are quite destructive and cause cellular damage, alter DNA, and are thought to be at least partially responsible for many of the degenerative diseases mentioned above. The production of these free radicals is normal during metabolism, but the body produces its own "anti-oxidants" to neutralize them.
There is some evidence to suggest that ARMD has a genetic basis, as the condition tends to run in families. However, the exact nature of this familial tendency has not been clarified. It has been suggested from twin studies that there is a defect in the genes responsible for the integrity and health of the retina.
How is it treated?
The latest treatment includes anti-VEGF (vascular endothelial growth factor) drugs which are delivered by a shot into the eye. These drugs block the trouble-causing VEGF, reducing the growth of abnormal blood vessels and slowing their leakage.
Several new treatments are under development and scientific evaluation as well. These procedures may preserve more sight overall, though they are not cures that restore vision to normal.
Are vitamins and nutrition useful?
No treatment exists for the dry form, but a combination of specific vitamins and minerals has been demonstrated to help slow the progression of the disease. Anti-oxidant vitamins may help to neutralize the free radicals that are associated with this degenerative process. Zinc, one of the most common trace minerals in our body, is highly concentrated in the retina and surrounding tissues and is required for chemical reactions in the retina. Fat-soluble anti-oxidant vitamins like vitamin A and vitamin E are stored in the body and can increase to toxic levels if overused, and zinc may interfere with other trace minerals like copper. Caution should therefore be exercised in the use of vitamins and minerals. Ask our optometrists which vitamins are right for you.
A low-fat diet, rich in dark green leafy vegetables, including spinach, some types of leaf lettuce and broccoli, can slow vision loss due to macular degeneration as well.
Our optometrists also fit low vision devices for those patients that have already lost some vision due to ARMD. These telescopic and microscopic lenses, magnifying glasses, illuminated magnifiers, and closed circuit television systems can be prescribed to help make the most effective use of remaining vision and restore function.
|
What Is LSD: Introduction
This is a book about psychedelic experience and about babies. The material in this book developed out of the distribution of approximately twelve thousand, 250-microgram doses of LSD over a period of ten years. This distribution was worldwide and included the following cultures:
1. Judeo-Christian: upper and middle classes, peons, dropouts, prison and jail inmates, and mental patients;
2. Moslem: middle and lower classes;
3. Hindu-Buddhist: middle and lower classes, yogis, and monks; and
4. Animist: no class structure.
Members of the community that produced this book have altogether ingested LSD on approximately four thousand occasions in every life situation imaginable. This amounts to a depth and variety of LSD experimentation that no other research venture has approximated. The conclusion of this experimentation is that the LSD experience reactivates the space-time reality and sense-perception awareness of childhood, infancy, and interuterine existence. Moreover, the degree to which an LSD user's experience is traumatic is the degree to which the user experienced trauma while in the womb, during birth, and in early childhood.
The principal focus for the structuring of the LSD experimentation has been motherhood and child development. The working hypothesis was that the foundation of society emerges from the relationship between mother and child, and that the encounter between mother and child is THE dimension in which microcosm and macrocosm intersect.
The invisible foundations of social consciousness are structured in infancy as contracts evolve between mother and child over such subjects as food, play, sleep, cleanliness, manners, relationships with siblings, father, and others. Later, these foundations are visibly reinforced by educational institutions and the advertising media.
The basis of Western culture is the nuclear family—a family unit consisting of father, mother, and child or children. Family structures condition the children born into them. A nuclear foundation manifests a different structure of body, mind, and environment than a social foundation that has a communal or extended family structure.
In the nuclear family, the infant and small child have one primary source of The Energy of Life—Mother. Mother is a source to which the infant consciousness must of necessity learn to accommodate itself; that is, the child must imprint the values of The Source. The inevitable effect of a One-Source imprint is a competitive nature and an expectation of partiality. Since no one else is permitted access to The One Source, the child's energy is primarily directed to securing and maintaining The Source and any subsequent replacements.
In extended or communal families, the child can respond to many sources out of natural affinity rather than compulsion. The child imprints varying attitudes and values, which automatically makes it* less anxious and better qualified to adjust to broad life experiences.
Presently, humans' dominant relationship with the environment has been determined by a culture based on the nuclear family, a social system of competition that has its foundation in infancy. This system is wasteful, polluting, and sensually brutalizing. The only way, short of an apocalypse, to change this system is by establishing communities of people who share everything, including the nursing of infants. Only in this way can a group consciousness rather than an egocentric one be imprinted from birth.
This work reflects the dialogues and meditations of such a community as the members have come together and exorcised the nuclear, egocentric imprints of their childhoods, and in so doing, eliminated the psychic barriers to their rebirths into egoless, collective consciousness. When such barriers as testing and competition have been eliminated and a place of apolarity is experienced, it is possible to communicate with and receive nonverbal transmissions from babies, children, plants, and animals. The children born to this community manifest a collective consciousness and do not grow up as egocentric competitors for power. Rather, they develop with collective power and have constant access to the collective, universal psyche.
*Fetuses, babies, and children are the principal subjects of this work. Because there are male and female babies, the current cultural issue arises of which pronoun to use when the prose calls for one. Shivalila culture is nonsexist, so there the issue is moot. In the children's reality they are "its"; so that is the pronoun used in this transmission of theirs.
The primary focus of the adult members of this community is to maintain the cultural and environmental conditions necessary to foster this collective consciousness. This collective consciousness was first experienced in the West during the '60s LSD/hippie phenomenon. However, since there is no Western cultural reflection of this phenomenon, the experience became romanticized and commercialized, thereby losing its validity and vitality. People of Shivalila, however, traveled to other cultures where collective unity is a living reality. Thus, we have associated with or lived in communities of Sufis in Afghanistan, Kashmir, and India, and of Tibetans in Kashmir and Kulu Valley in Northern India; in rural villages in Bali (Indonesia); and in communities of animist Native Americans. In addition, members of Shivalila have studied with both Vajrayana (Tibetan Buddhist) and Shaivite (Hindu) tantric adepts and were initiated to certain tantric dynamics that have not been previously revealed/transmitted to Westerners.
During these associations, Shivalila members used LSD and other psychedelic substances, principally cannabis, in order to facilitate detaching from the imprint of Western culture and re-imprinting from the cultures and people being studied, none of which are based on nuclear family dynamics. As a consequence of this process, people of the Shivalila community are no longer possessed by egocentric imagery, since all have expanded their consciousness to include the imagery and lifestyles of cultures that were not part of the conditioning influences of their childhoods. This imagery cum lifestyle has been integrated into the community consciousness to such an extent that the symbolic foundation of the consciousness of its members has been altered. Consciousness thus expanded manifests universality and is the repository of the final truths of the species and the earth.
Shivalila initiates know how to conceive infants consciously and to give birth naturally, without tension. At the other end of the spectrum, they do not die unconsciously. They know how to maintain a continuity of awareness through the death transition and into the next incarnation.
Part I of this work transmits Shivalila's symbolic foundation (dharma); Part II is the application of this order in the dynamic life of the community (sangha).
Lao-tzu did not say, "Those who know do not speak, and those who speak do not know."
What Lao-tzu said was, "Those who know do not offer proof, and those who offer proof do not know."
Because, who knows tao will BE tao and that being will express in sound, obviously.
|
The state Division of Marine Fisheries and Gloucester Marine Genomics Institute are working together to develop new genomic tools to detect norovirus in shellfish.
The two-year, collaborative research program, funded in the first year with a $200,000 line item in the newly approved $43.3 billion state budget, is not in response to increased human contraction of norovirus via shellfish. Instead, it's an attempt to marshal scientific resources in advance of a problem.
"One hundred years ago, the concern with shellfish sanitation was completely bacterial," said Jeff Kennedy, DMF's shellfish regional supervisor. "Now it's much more viral. We want to get ahead of any problem and hopefully protect our citizens and shellfish fisheries before it becomes a problem."
Norovirus is an extremely contagious virus that can cause vomiting and diarrhea in humans who contract it either from contact with an infected person or surface, or by consuming contaminated food or water. Most prominently, norovirus has been blamed for many viral outbreaks aboard cruise ships.
Kennedy said much of the research will be focused on studying something called a Male Specific Coliphage that shows promise as a possible indicator of the presence of norovirus. The MSC, he said, has been used extensively in the European Union to trace sewage pollution in marine environments.
"It's being viewed here as a newer, potential indicator for sanitation in shellfish," Kennedy said. "What we want to do is try to determine if there is a relationship in Massachusetts between the indicator and the pathogen norovirus."
Kennedy said DMF will provide the biological infrastructure for identifying and sampling shellfish — particularly those in proximity to wastewater treatment plants — and GMGI will apply its genomic sequencing to help determine if the MSC is a dependable indicator of the presence of norovirus.
He said the specific testing sites have not been determined yet, a task made more complex by the varying degrees of water quality, location of the state's sewer and water treatment plants and water temperatures that differ by location.
"We've had a number of strategy sessions," said Tim Sullivan, a researcher at GMGI. "Now that we have the money, we can begin developing the tools we need to conduct the actual research to understand the molecular markers we'll need to study the relationship between the coliphage and norovirus."
GMGI, based on the Gloucester waterfront, also received $25,000 from another line item in the 2020 fiscal year budget to begin studying the development of a regional broadband infrastructure to support big-data science and data from other commercial and research disciplines on and off Cape Ann.
"Our two sequencers create a tremendous amount of data that we process in house but share with many of collaborators and partners," said Andrea Bodnar, GMGI's science director.
The volume of data, she said, often creates a data bottleneck in and out of Cape Ann on GMGI's current broadband network.
"The $25,000 will fund a feasibility study to answer questions on construction, cabling, hardware and other details, as well as the cost and the benefits to the region as a whole," Bodnar said. "This really lends itself to the portion of our mission dedicated to economic development."
|
Inside The World's Toughest Prisons: The Truth About Greenland's Nuuk Prison
Despite its inclusion on Netflix's Inside The World's Toughest Prisons, Ny Anstalt, the "open prison" in the capital city of Nuuk, Greenland, has been called "the world's most humane prison" by Great Britain's Channel 4 News (posted on YouTube). Until its construction, Greenland had just six prisons, none of which were equipped to accommodate high security prisoners, per CNN. People who had to serve high security prison terms had to endure what the architectural company FRISS & MOLTKE called "'a double sentence' — both the actual prison sentence, and ... being sent far away from their home country, family, and culture to prisons in Denmark."
Greenland is a former Danish colony and its population is just 56,000 people, making it the least densely populated country in the world. Greenland runs an "open prison" system, allowing inmates to work, study, and even hunt during the day before returning to the penitentiary at night, an arrangement that was deemed unfeasible for high-security inmates. Greenland's proportion of citizens sentenced to prison terms is three times higher than in the rest of the Nordic region. This can be traced to post-World War II "reforms" — the "dismantling of a traditional hunting and fishing society provoked social problems," reported CNN. About 88 percent of Greenland's residents are indigenous Inuit people. CNN interviewed Naaja Nathanielsen, director of the Greenland Prison and Probation Service, who noted, "Greenlandic people suffer from similar difficulties to other indigenous groups who have experienced colonization."
Do humane prisons have better outcomes?
After a competition run by Denmark's Prison and Probation Service, which administers Greenland's prison system, FRIIS & MOLTKE and Denmark's Schmidt Hammer Lassen architectural firms won the contract to build the structure. Project manager Jette Birkeskov Mogensen told CNN: "Learning about the prisoners and thinking about what conditions brought them to this situation has affected me deeply." The architectural team designed the prison to operate similarly to a "small village" with "residential blocks, workplaces, education and sports facilities, a library, a health center and a church."
They didn't put bars on the windows of the cells, and designed them to face a tall mountain, Sermitsiaq, making sure sweeping natural views were available to allow inmates the opportunity to "escape in [their] mind[s]." Local artists were brought in to decorate common areas. Guards don't carry weapons, which allows them to develop better relationships with the prisoners.
The very idea of building a prison with humanity and rehabilitation at the forefront of its structure is shocking to those who are used to the more punitive ideology of the American and British prison systems. It seems to be effective, however. Professor of Criminology Yvonne Jewkes told CNN, "A number of studies indicate that reoffending rates are relatively low in Scandinavian countries — often less than 30 percent." In the United States, a study by the U.S. Bureau of Justice put the number at 76.6 percent, while the United Kingdom found that 44 percent of released prisoners were reconvicted within a year.
|
Medial circumflex femoral artery
Medial circumflex femoral artery (Arteria circumflexa medialis femoris)
The medial circumflex femoral artery (MFCA) is a posteromedial branch of the deep femoral artery, arising in the medial thigh compartment. The artery courses medially and posteriorly around the neck of the femur, emerging in the gluteal region where it divides into its terminal branches.
The medial femoral circumflex artery is the primary source of blood supply to the femoral head. A traumatic injury of this vessel may lead to avascular necrosis of the femoral head (AVN).
Additionally, this artery supplies the adductors of the hip, obturator externus muscle, hamstrings muscles and sciatic nerve.
Key facts about the medial circumflex femoral artery
Origin: Deep femoral artery
Branches: Ascending, descending, transverse, superficial, deep and acetabular branches
Supply: Thigh adductors, gracilis muscle, obturator externus muscle, hamstring muscles, sciatic nerve, neck and head of femur
This article will discuss the anatomy and function of the medial circumflex femoral artery.
1. Course
2. Branches and supply
3. Anatomical variations
4. Sources
Course
The medial femoral circumflex branch runs medially and posteriorly, initially passing between the tendon of iliopsoas muscle and the pectineus muscle. It then continues its course around the neck of the femur while passing between the obturator externus and adductor brevis muscles. Upon reaching the superior margin of the adductor magnus muscle, the medial circumflex femoral artery terminates by dividing into two branches.
Branches and supply
The branches of the medial circumflex femoral artery vary greatly across the literature in terms of nomenclature, number and supply. According to some authors, the medial circumflex femoral artery gives off two terminal branches: an ascending and a descending branch. However, most authors describe an additional three to four branches that can arise either from the medial circumflex femoral artery directly, or via its descending branch.
• The ascending branch runs superiorly across the tendon of the obturator externus muscle. It terminates in the trochanteric fossa by anastomosing with the inferior gluteal artery and lateral circumflex femoral artery.
• The descending branch courses between the quadratus femoris and adductor magnus muscles. It provides small muscular branches that supply the muscles of the posterior compartment of the thigh.
• The transverse branch contributes to the formation of the cruciate anastomosis.
• The acetabular branch enters the hip joint along with the acetabular branch of the obturator artery to supply the femoral head and other structures at the hip joint. It gives off the foveolar artery (medial epiphyseal artery).
• The superficial branch courses between the pectineus and adductor longus muscles.
• The deep branch runs superiorly towards the intertrochanteric crest to the head of the femur.
To learn more about the nerves and vessels of the hip and thigh check out our other articles, videos, quizzes and labeled diagrams.
Anatomical variations
The medial circumflex femoral artery typically arises as an independent branch either from the deep femoral artery, or directly from the femoral artery itself. Occasionally, it can arise as a common trunk with the superficial, deep femoral and lateral circumflex femoral arteries.
|
Zygomatic bone
Zygomatic bone (Os zygomaticum)
The zygomatic bone (zygoma) is an irregularly shaped bone of the skull. It is often referred to as the cheekbone, and it comprises the prominence just below the lateral side of the orbit.
The zygomatic bone is nearly quadrangular in shape and it features three surfaces, five borders and two processes. Besides forming the prominence of the cheek, the zygomatic bone also contributes to the formation of the zygomatic arch, the walls of the temporal and infratemporal fossae, and the floor and lateral wall of the bony orbit.
This article will discuss the anatomy and function of the zygomatic bone.
Key facts about the zygomatic bone
Definition: A quadrangular bone of the skull that participates in the formation of the skeletal framework of the orbit and cheeks
Surfaces: Lateral, posteromedial, orbital
Borders: Anterosuperior, anteroinferior, posterosuperior, posteroinferior, posteromedial
Processes: Frontal process, temporal process, maxillary process
Foramina: Zygomaticotemporal foramen, zygomatico-orbital foramen, zygomaticofacial foramen
Joints: Zygomaticomaxillary suture, zygomaticofrontal suture, sphenozygomatic suture
1. Surfaces
2. Borders
3. Processes
1. Temporal process of zygomatic bone
2. Frontal process of zygomatic bone
3. Maxillary process of zygomatic bone
4. Fractures
5. Sources
Surfaces
The zygomatic bone has three surfaces: lateral, posteromedial and orbital.
• The lateral (facial) surface faces towards the outside. It is smooth and convex, and it features a small opening called the zygomaticofacial foramen. This foramen transmits the zygomaticofacial nerve, artery and vein between the orbit and the face. The lateral surface also serves as the attachment area of the zygomaticus major muscle on its anterior half, and the zygomaticus minor muscle on its posterior half.
• The posteromedial (temporal) surface faces towards the temporal and infratemporal fossae. Its anteriormost portion is rough and serves for the articulation with the zygomatic (malar) process of maxilla via the zygomaticomaxillary suture. The posteromedial surface spreads over the medial side of the temporal process, comprising a part of the lateral wall of the infratemporal fossa. Near the base of the frontal process, the posteromedial surface features the zygomaticotemporal foramen which transmits the zygomaticotemporal nerve from the orbit to the temporal fossa.
• The orbital surface is smooth and concave. It faces towards the orbit and forms the anterolateral part of its floor and the anterior part of its lateral wall. It features the zygomatico-orbital foramen, which is a gateway to the bony canal found within the zygomatic bone. This canal branches into the zygomaticofacial and zygomaticotemporal canals, which open on the corresponding surfaces of the zygomatic bone (explained above). The former transmits the zygomaticofacial nerve and vessels, while the latter is traversed by the zygomaticotemporal nerve and vessels.
Zygomatic bone anatomy (diagram)
Borders
The zygomatic bone has five borders:
• The anterosuperior (orbital) border is concave and smooth. It is the border between the lateral and orbital surfaces of the zygomatic bone.
• The anteroinferior (maxillary) border is the articular surface for the zygomaticomaxillary suture. It also serves as an attachment site for the levator labii superioris muscle.
• The posterosuperior (temporal) border is continuous with the superior border of zygomatic arch and the posterior border of the frontal process. It serves as an attachment point for the temporal fascia.
• The posteroinferior border is rough and serves as the attachment site for the masseter muscle.
• The posteromedial border is serrated and articulates with the greater wing of sphenoid bone superiorly via the sphenozygomatic suture, and with the orbital surface of maxilla inferiorly. Between the articular surfaces, there is a small free surface of the posteromedial margin that comprises the lateral border of the inferior orbital fissure.
Processes
Temporal process of zygomatic bone
The temporal process originates from the lower half of the zygomatic bone. It is oriented posteriorly and slightly superiorly towards the temporal bone. The terminal tip of the temporal process is oblique and jagged and it articulates with the zygomatic process of temporal bone with which it comprises the zygomatic arch.
Frontal process of zygomatic bone
The frontal process originates from the upper margin of the zygomatic bone. It is oriented superiorly, comprising the lateral outline of the orbit. It articulates with the zygomatic process of frontal bone superiorly via the zygomaticofrontal suture, and with the greater wing of sphenoid bone posteriorly via the sphenozygomatic suture.
The frontal process features a bony tubercle on its orbital surface called the Whitnall’s tubercle, which serves as an attachment site for the lateral palpebral ligament, suspensory ligament of the eye, and the aponeurosis of levator palpebrae superioris muscle.
Maxillary process of zygomatic bone
The maxillary process arises from the anterosuperior angle of the zygomatic bone. It extends anteriorly, comprising the inferolateral margin of the orbit. The inferior margin of this process participates in the joint with the maxilla. Posteriorly, it is continuous with the orbital surface of the bone.
|
How to analyze network security?
How to analyze network security - Related Questions
What is a network security analysis?
Network security must be a priority for organizations that are dedicated to protecting sensitive data and securing their networks. Network security analysis is therefore the ongoing process of observing, detecting, and eliminating potential vulnerabilities as they appear.
How do you analyze security?
Security analysis is the evaluation of the value of securities, including stocks and other instruments, and it helps investors make informed decisions about businesses. An analysis of a security's value can be performed in three ways: fundamentally, technically, and quantitatively.
How do you explain network security?
What are the ways to analyze network?
1. Formulate your questions first and use networks later.
2. Use the right categorization to help sort your network data.
3. Use network analysis software that is specifically designed for the task.
4. Keep in mind that network visualization is useful, but it can also be misunderstood.
What are the four types of network security?
• Access control
• Antivirus and anti-malware software
• Application security assessments
• Behavioral analytics
• Data loss prevention
• Distributed denial-of-service (DDoS) prevention
• Email security
• Firewalls
How do you perform a network security assessment?
1. Do a resource assessment.
2. Establish the value of the information.
3. Assess the vulnerability of your IT infrastructure.
4. Test whether your defenses are up to par.
5. Produce a report detailing the findings of the security assessment.
6. Implement security controls to improve cybersecurity.
What is cybersecurity security analysis?
Cybersecurity analytics uses data analysis to identify proactive security measures. For example, monitoring network traffic may detect indicators of compromise before an attack occurs.
What is network security?
Network security is the activity of safeguarding the usability and integrity of your network and data. It encompasses both hardware and software technologies, and it targets a wide variety of threats, stopping them from entering or spreading on your network. Effective network security therefore controls access to the network.
What is security analysis?
Security analysis is the evaluation of tradable financial instruments, with the aim of determining the true value of individual securities (for example, those traded on the stock market and the bond market). Such securities can be categorized as debt securities, equity securities, or some combination of the two; commodities and futures contracts are generally not themselves considered securities.
What are types of security analysis?
Security analysis falls into three broad types: fundamental analysis, technical analysis, and quantitative analysis.
Why analysis of security is necessary?
Security analysis is necessary because it allows an investment analyst to establish the expected return and risk for a stock and to evaluate its desirability logically and rationally. If the stock price is below its intrinsic value, it may be a good time to buy; if it is above, it may be a good time to sell.
What is network security explain?
Using network security, a company can protect its infrastructure and its users by preventing a large number of potentially harmful threats from entering or spreading within its networks.
What is network security explain its types?
Network security consists of the measures put in place to prevent the network and the data on it from being compromised or stolen. Hardware, software, and cloud services all contribute to network security.
What is network security with example?
Network security refers to protecting computers, files, and directories on a network from hacking, misuse, and unauthorized access. One example of a network security measure is installing anti-virus software.
Why is network security?
The idea of network security is to take action to prevent malicious use of the private data of the network, its users, and its devices. A network is considered secure as long as it runs smoothly, legitimate users are protected, and unauthorized users are kept from accessing it.
What are the types of network analysis?
• Point-to-point (routing) analysis, which often solves routing problems
• Coverage (service area) analysis
• Fleet routing and optimization
• Site selection
• Origin-destination cost matrix analysis
What does network analysis include?
Network analysis studies systems based on the connections (or edges) among nodes in a network [71]. A node may be a person, an animal, a plant, or a patch of a landscape [72], whereas an edge represents the interaction or connection between two nodes.
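As a rough illustration of this node-and-edge view, here is a minimal sketch using Python's networkx library (an assumed dependency); the nodes and connections are invented for the example.

```python
import networkx as nx

# Build a small undirected network: nodes are objects (people, devices, habitat patches),
# edges are interactions or connections between them.
g = nx.Graph()
g.add_edges_from([
    ("router", "server"),
    ("router", "laptop"),
    ("router", "printer"),
    ("laptop", "server"),
])

# Basic structural measures often used in network analysis.
print("Nodes:", g.number_of_nodes(), "Edges:", g.number_of_edges())
print("Degree centrality:", nx.degree_centrality(g))                      # how connected each node is
print("Shortest path laptop->printer:", nx.shortest_path(g, "laptop", "printer"))
```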
Which is the tool to analyze network?
I recommend SolarWinds Network Performance Monitor (NPM) as the best network analysis solution. It offers several useful network management features, such as network traffic analysis and Wi-Fi optimization, and it can be used by beginners as well as experienced IT professionals.
|
Excerpted From: Emily Hudson, The Constitutionality of the Indian Child Welfare Act, 47 Ohio Northern University Law Review 359 (2021) (415 Footnote) (Full Document)
The Indian Child Welfare Act (ICWA) was passed in 1978 as a response to the disproportionate removal of Indian children from their homes compared to non-Indian children. It was found that this disproportionality arose in part because judges and child welfare workers did not understand Indian culture, which led to prejudicial attitudes and higher rates of removal. Congress enacted the ICWA through its plenary power over Indian tribes.
The constitutionality of the ICWA is currently being decided by the Fifth Circuit Court of Appeals. In a remarkable decision, the District Court for the Northern District of Texas, Fort Worth Division, found the act to be unconstitutional for violating Equal Protection, anti-Commandeering, and the non-delegation doctrine. Despite the recent developments related to the ICWA, the act's constitutionality has been questioned since its enactment.
This comment analyzes several constitutional arguments made against the ICWA. First, the relationship between congressional power over tribes and tribal sovereignty will be described, as this foundational information is necessary to understanding the constitutional arguments that may be made against the act. Next, this comment will discuss the act itself, with a focus on the history and relevant sections of the act. This comment will then move into a discussion and analysis of some of the constitutional arguments that may be made against the act. The discussion will then move into recent developments related to the act, including discussion of Brackeen v. Bernhardt (formerly Zinke). Lastly, potential consequences if the act is found to be unconstitutional will be discussed.
[. . .]
Based purely on precedent and the deference given to Congress in relation to power over the tribes, the act is likely constitutional based on the three potential claims that have been discussed. However, if the issue is brought before the Supreme Court, there is a chance it could be struck down based on equal protection--if the Court does not avoid the constitutional question. The district court's finding that the act was unconstitutional as a violation of equal protection was unexpected. Based on comments made in dicta, the Supreme Court seems to have noticed there is the potential for an equal protection claim.
It is unlikely the act would be found unconstitutional under the Commerce Clause. The Supreme Court has long held that Congress gets its plenary power from the clause. However, the Court's treatment of Indian law could be described as “whimsical” and as such “the Court could conceivable abolish plenary power, [although] to do so would be a dramatic departure from centuries-old jurisprudence.”
The non-delegation argument is an interesting one as well. This argument seems to have the most potential of the two mentioned above. That is because Mazurie can easily be read to apply only to Indians legislating on the reservation. It would be a very straightforward interpretation.
In the end, all eyes are anxiously awaiting the decision from the Fifth Circuit and then the potential petition for certiorari that is expected to come after the decision. If this case gets to the United States Supreme Court, the decision could have an impact not just on the constitutionality of the ICWA, but potentially all Indian legislation past and future.
Licensed Ohio Attorney; Ohio Northern University, J.D.
|
Stock# 69390
Seattle's First Zoning Ordinance
Scarce Zoning Guide for the City of Seattle, published in 1923. Printed in Seattle by Lowman & Hanford.
This is the first comprehensive zoning ordinance for Seattle and, from a publication standpoint, is pre-dated only by the proposed ordinance, published in 1921 (OCLC locates one example).
Includes 14 pages and 39 maps showing Height and Zoning restrictions for the city.
Seattle Zoning Ordinance History
Prior to 1923, zoning in Seattle was adopted by ordinance, generally referred to as the Building Ordinance. Each building ordinance would be passed and then later amended in its various sections by subsequent ordinances until it was eventually replaced and repealed. These included Ordinance 2833 in 1893; Ordinance 7040 in 1901; Ordinance 17240 in 1907; and Ordinance 31578 in 1913.
Compilations of the Building code were published every two to five years beginning in 1909. These early building ordinances are strongly focused on fire prevention, and divide buildings into "Classes" based on the construction materials and techniques used. They also divide the city into "building districts," but only in order to specify which classes of buildings can be built in which districts.
In 1920, Ordinance 40407 established the Seattle Zoning Commission, which was to "make a survey of the City of Seattle with a view of dividing the same into zones or districts, and report to the City Council a zoning or districting ordinance which shall specify the uses to which property in each district may be devoted...."
The eventual result of the Zoning Commission's work was Ordinance 45382, adopted in 1923, described as "An ordinance regulating and restricting the location of trades and industries; regulating and limiting the use of buildings and premises and the heights and size of buildings; providing for yards, courts or other open spaces; establishing districts for the said purposes; defining offenses; prescribing penalties and repealing all ordinances or parts of ordinances in conflict therewith."
Commonly known as the Zoning Ordinance, it was amended many times by subsequent ordinances, generally a section or a few sections at a time, until it was repealed and replaced in 1957.
This is among the earliest surviving zoning guides for Seattle.
|
What Is Access Control?
When running a business, there are dozens of things that keep you busy. You need to worry about which marketing method to use, how to better reach your target market, how to get your cost of goods sold down, and so much more.
However, one of the most important (if not the most important) is the security of your company. Despite security being a huge concern and source of investment for all businesses, more data breaches and hacks are taking place now than ever before. These can cost companies millions of dollars and the trust of their many users, and are generally not a fun thing to experience.
There are a variety of ways to help your business be as secure as possible, and this blog post is going to look at one in particular: access control.
What is it?
It is when a company decides to selectively restrict the number of people who have access to a place or resource. It is a concept in security that looks to minimize risk to the business or organization. While hacks and cybercrimes do happen and can compromise data, it is actually human error that leads to many of the breaches and other security incidents that companies experience. As a result, you need to be selective about who you authorize to view, change, or work with certain information and log data.
It comes in two varieties, physical or logical. Physical access control seeks to limit access to certain buildings, rooms, or campuses. For example, if you work for a law enforcement agency, there is a good chance you will require a key card or a special password to get into the building or certain rooms in the building.
Logical control is when a company will restrict access to a certain network or only allow certain people to access certain files and data. For example, only trained and qualified people might be allowed to look at private customer information.
This access can be controlled by the use of a PIN code being required to enter, fingerprint scanning, key cards or passwords. There are a variety of different options that companies can use, but they all seek to verify the user and confirm they are allowed access. So while there are two main varieties of access control, for the purpose of this blog post, we will be looking at logical access control.
The Different Types of Access Control:
So now that you have some basic information about what access control is, it’s time to learn about some of the different types of access control models.
These include (but are not limited to):
Discretionary –
The owner of the data (or an administrator in charge of it) will set policies surrounding who is allowed to access the data. So it is essentially at their discretion.
Users don’t have a lot of say in who has access to their data or files, as it is usually a centralized authority in charge. This type of access control is often used in military and government.
Attribute-based –
Access will be managed by evaluating a set of certain rules, conditions, and attributes.
Role-based –
This method restricts access to computer resources based on groups with a certain business description. So instead of allowing or disallowing access on a person-to-person basis, it does so based on job title. So maybe an Engineer 1 won't have access, but an Engineer 2 or 3 will. A minimal code sketch of this kind of check is shown below.
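As a purely hypothetical sketch of the kind of role/attribute check described above (the roles, resources, and policy below are invented for illustration and are not taken from any particular product):

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    role: str          # e.g. "engineer_1", "engineer_2"
    department: str

# A tiny role-based policy: each resource lists the roles allowed to read it.
POLICY = {
    "design_docs": {"engineer_2", "engineer_3"},
    "hr_records":  {"hr_manager"},
}

def can_read(user: User, resource: str) -> bool:
    """Return True if the user's role is permitted to read the resource."""
    allowed_roles = POLICY.get(resource, set())   # deny by default if resource is unknown
    return user.role in allowed_roles

alice = User("Alice", "engineer_1", "engineering")
bob = User("Bob", "engineer_2", "engineering")
print(can_read(alice, "design_docs"))  # False: Engineer 1 is not in the allowed set
print(can_read(bob, "design_docs"))    # True
```

A real system would layer on authentication, auditing, and attribute rules (department, time of day, and so on), but the deny-by-default lookup above is the core idea.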
Why is it Beneficial?
Armed with the knowledge of what access control is, why is it so important and why should you care? Well, first of all, it can help beef up the overall security of your operation. The fewer people that have access to sensitive information, the lower the chance that someone mishandles it or accidentally leaks or forgets their credentials.
However, an increase in your security isn’t the only reason that access control is beneficial or a good idea to have. Utilizing access control can also help you remain compliant. Certain industries have rules surrounding compliance and controlling who has access to certain private and sensitive information is normally a big part of that compliance.
In conclusion, hopefully this article has helped you understand what access control is and why it is important for your company. Nearly every company both connects to the internet and deals with some semblance of sensitive information and data, so no matter what industry your company is in, access control is something you should take very seriously.
|
Glyn's Web Site
The EU is NOT a Single Country, so Please Stop Suggesting it Acts Like One.
The EU was not set up to be a single organisation like the USA. It was set up so that a group of countries could remain independent and yet still work together.
Some people would like it to pull closer together; some people would like to pull it apart. However, the fact remains that it is not, and it never has been, a single country. It is a group of countries working together, and its institutions reflect that.
The Primary Institutions of The EU
The Primary Institutions of the EU are;
1. The EU Council
2. The EU Commission
3. The EU Parliament
The EU Council
The EU Council is the Primary House of the EU. That means it's like the UK House of Commons. However, because the EU is a group of countries, and not a single country, it works slightly differently to the House of Commons.
For a new law to pass in the UK House of Commons, a majority of MPs must support it. For a new law to pass in the EU Council, every single member has to agree. That means that no group of countries can ever force EU laws onto any other country.
The EU Council is comprised of the heads of state of each of the members of the EU. So the current member for the UK is Boris Johnson. If we have new elections, and we elect a new Prime Minister, then they automatically become the UK member of the EU Council. The same is true for all the other members.
As such, each country elects their member, in the same way that each constituency elects their representative (MP) in the House of Commons. The EU Council is therefore directly elected, and very democratic.
The EU Council is basically the most important chamber, and NOTHING happens as far as new EU rules or laws goes without everyone in the EU Council agreeing to it. Every country can just say "nope", right up to the point where a new law is passed, if they don't like it.
The EU Commission
However, heads of state are pretty busy people, so they don't tend to do all the work themselves. As such, every head of state can appoint an EU Commissioner to represent them and their country.
The EU Commission is where all of the busy work gets done. It's where legislation is drafted. It's very much like the UK Civil Service. It's not directly elected, but EU Commissioners are directly appointed by heads of state.
No one ever complains that the UK Civil Service isn't directly elected, and why would they? It's down to the government to appoint them. The same goes for the EU Commission, and every country has an EU Commissioner.
The EU Parliament
The EU Parliament is directly elected and has been since 1979. It's where MEPs sit. It is there to conduct an additional level of democratic oversight over the whole process.
This next bit is important, so if you've been skim reading to this point, slow down, and make sure you take this bit in.
The EU Parliament cannot propose new legislation. It can only vote on whether or not to allow legislation which has been proposed by the EU Council, and EU Commission, and here's why;
If the directly elected EU Parliament could propose legislation, then the EU would be acting like a single country, and not like a group of countries.
It would make it possible for the EU Parliament to become more powerful than the EU Council, where the heads of the individual countries sit.
As such, only the EU Council, or the EU Commission at the request of the EU Councillors, can propose new legislation.
This is to stop the EU acting like a single federal United States of Europe, and to ensure that it always acts like a group of countries working together. Power resides with the EU Council, and not the EU Parliament, to ensure this is the case.
The EU IS Democratic, It's Just Not a Single Country, so its Institutions Reflect That.
So the next time you find yourself arguing that the EU is not democratic, remember that its primary institutions are just as democratic as the UK's, if not more so, given that both its primary and secondary chambers are directly elected. In the UK only the House of Commons is elected; there are good reasons why the House of Lords is not elected, but that's a totally different story.
What's more, in EU elections, proportional representation is used, which means that far more people actually get a say in choosing their representative. In the UK elections of 2015, UKIP received nearly four million votes, which under proportional representation would have given them more than sixty MPs, but under the UK First Past The Post electoral system UKIP got just one MP. Whatever your opinion of UKIP (and I really don't think that much of them myself), I think you have to accept that is something that UKIP voters have got very good reason to be upset about.
So unless you actually want the EU to act like a single country, maybe appreciate why it is set up the way it is: to be a group of countries working together, where none can force the others to act in ways they do not wish to, and where its Parliament doesn't have the power to change that.
Unless you want it to become a single country of course.
|
6 illustrations to teach children about coronavirus
Experts are still learning about COVID-19. There are far fewer cases of the virus reported in children. Most of them caught the infection from someone they lived with or a family member. The virus seems to usually cause a milder infection in children than in adults or older people.
|
Chonburi (Thai: ชลบุรี ) is a province in Thailand. Neighboring provinces are (from north clockwise) Chachoengsao, Chanthaburi and Rayong. To the west is the Gulf of Thailand. The eastern seaboard is heavily industrialized and underpinned by shipping, transportation, tourism, and manufacturing industries, and is second only to Bangkok in economic output.
From historical evidence, Mueang Chonburi has been settled since the Ayutthaya period. Originally, it comprised many small towns such as Mueang Bang Sai, Mueang Bang Pla Soi, and Mueang Bang Phra. Later, King Rama V combined these towns together into Chonburi Province.
|
Wescott Design Services
Systems - Embedded Software - Circuits
Measuring Frequency Response
by Tim Wescott, Wescott Design Services
(note: For more information on measuring and interpreting a system's frequency response, and other practical uses of control theory, see the book Applied Control Theory for Embedded Systems.)
1 Overview
When you design a control system using any of the frequency response methods (Bode plot, Nyquist plot or Nichols chart), it is not necessary to refer to the z-domain transfer function of the plant or controller except to find the system gain and phase response over the frequency range of interest. It is possible, therefore, to use a set of frequency response data without having an exact z-domain transfer function in hand. Furthermore, the frequency response data doesn't need to come from a mathematical model of the system; it can just as well come from a set of measured frequency responses. This means that you can do a very good job of control system design without ever having a known model of your plant; it is sufficient to have a set of measured frequency responses complete enough for your design.
Knowing this, one is immediately motivated to ask "so how do I measure the frequency response?" There are measuring instruments that have been developed to do this, but they are expensive and in a digital control system they are difficult to integrate. For little more than the effort required to integrate a control systems analyzer into your system, you can build the necessary interfaces into your software to allow you to collect and analyze the necessary data on your PC, with the added bonus that the resulting data will be in a form that you can immediately use for designing your control system.
There are a number of facets to measuring frequency response that must be addressed. You must excite the system with the correct sinusoidal waveform and you must collect the resulting system behavior and extract the relevant frequency response data from it. This must be done in a manner that is relatively immune to the noise that inevitably permeates a real-world control system, and that is practical, easy to use, and doesn't add significantly to the cost of your system.
2 Measuring In Isolation
The most direct way to measure a system's response at a given frequency is accomplished by driving the system under test with a sine wave at the desired frequency while monitoring the relevant system output(s). Then find the amplitude and phase of the response, and compare this amplitude and phase to the amplitude and phase of the injected sine wave. To find a complete system frequency response over some span, you excite the system with a sine wave, taking a set of measurements at a frequency, stepping the frequency up (or down) and repeating as necessary until the desired data set has been collected.
Take the setup shown in Figure 1, where the goal is to find the frequency response of the plant whose transfer function is H(z). For any given frequency f (in cycles per sample) we set the input signal to
u(k) = AT sin(2π f k),   (1)
where k is the sample index and AT is chosen to keep from overloading the plant. Assuming that the plant is stable and linear, the output of the plant will be of the form
y(k) = A(f) AT sin(2π f k + φ(f)) + yT(k),   (2)
where A(f) and φ(f) are the gain and phase of the frequency response at f and yT(k) is a transient signal which will go to zero as k goes to infinity.
To actually find the values of A(f) and φ(f) you can measure the response over some finite number of samples and compute the first term of the Fourier series for y(k):
where the arc tangent in (5) is the "four quadrant" arc tangent. The equations in (3) only work well if the samples are taken over an integer number of cycles of the input, i.e. if N is an integer multiple of 1/f, and they work best if the transient signal is reduced as much as possible; it is usually helpful to start from zero with a sine wave and to have a delay before the data is collected, or to follow data collection at one frequency with data collection at a close-by frequency.
If you recall Euler's identity, e^(jθ) = cos(θ) + j sin(θ), then (3) and (4) become
X(f) = (2 / (N AT)) Σ_{k=0}^{N-1} y(k) e^(-j 2π f k),   A(f) = |X(f)|,   φ(f) = ∠X(f) + π/2,   (6)
where the π/2 term on the right-hand side compensates for the fact that the excitation is a sine wave rather than a cosine wave. This is a handy relationship to remember if you are using a math package to do your computation, because (6) reduces to a single vector multiply, which such packages compute very efficiently.
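As an illustration, here is a minimal NumPy sketch of this demodulation (my own reconstruction, not code from the article): it drives a made-up first-order plant, standing in for H(z), with a sine wave at a frequency f given in cycles per sample, keeps the last N samples (an integer number of cycles, after the start-up transient), and recovers gain and phase with a single vector multiply plus the π/2 correction for the sine excitation.

```python
import numpy as np

f = 1.0 / 500.0          # test frequency, cycles per sample
A_T = 1.0                # drive amplitude, chosen to avoid overloading the plant
cycles = 4               # collect an integer number of cycles
N = int(round(cycles / f))

k = np.arange(2 * N)                       # run two blocks: settle, then measure
u = A_T * np.sin(2.0 * np.pi * f * k)

# A made-up stable first-order plant, y[k] = 0.95*y[k-1] + 0.05*u[k], standing in for H(z).
y = np.zeros_like(u)
for n in range(1, len(u)):
    y[n] = 0.95 * y[n - 1] + 0.05 * u[n]

# Demodulate the last N samples (an integer number of cycles, after the transient).
km = k[-N:]
X = (2.0 / (N * A_T)) * np.sum(y[-N:] * np.exp(-2j * np.pi * f * km))
gain = np.abs(X)
phase = np.angle(X) + np.pi / 2.0          # compensate for sine (not cosine) excitation

print(f"measured gain {gain:.4f}, phase {np.degrees(phase):.2f} deg")
```

For this particular made-up plant the printout should land near a gain of 0.97 and a phase of about -13 degrees at f = 1/500, which you can check against the plant's analytic response.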
For example Figure 2 shows the response of a 2nd-order system at a frequency of f = 1/500. There is an initial turn-on transient, but the system is already settling out by the end of two cycles. If we apply (3) to the entire data sample in Figure 2 we see an error of about 20% from the actual transfer function, but applying it to the last 500 samples gives us an error under 3%.
Normally you are going to be interested in making measurements at a number of frequencies that will be fairly closely spaced. In this case the best way to reduce the startup transient is just to ensure that the input sinusoid changes smoothly; as long as it does not stop or exhibit phase jumps at the frequency boundaries, the transients will be negligible for most systems.
3 In-Loop Measurement
The setup in Figure 1 and equation (3) are sufficient for taking measurements of a well-behaved system in isolation. If, however, you are interested in more than just the behavior of one system (or subsystem) taken in isolation, if you want to know how your system behaves in closed loop, or if you are dealing with an unruly or unstable plant or other subsystem, it falls short.
What you need in such a case is a setup and analysis method that lets you measure the frequency response of a portion of a working system without significantly rearranging the system structure, specifically without opening up any working control loops.
What we need is a setup that will allow us to measure the frequency response of a portion of the system without opening the control loop. To do this we note that the transfer function of a block is the ratio of its output to its input. In the discussion so far we've excited a block with a sine wave and extracted its output, but in the process of extracting the output parameters in (9) we divided by the first Fourier term of the sine wave excitation, i.e. j AT. If we choose our block, ensure that it has sinusoidal input and output at our desired frequency, then measure both its input and its output and divide the first Fourier term of the output by that of the input, we have the block's gain and phase at the frequency in question.
Taken from a purely signal-flow perspective the setup to measure frequency response is straightforward: place a summing junction in your system at a convenient location, then monitor the signals that sit at the input and output of the section of the system whose response you want to measure. The summing junction allows us to inject a sine wave which will then pervade the entire system, while the two outputs will let us get the pair of numbers that we need to form the ratio of a block's output to its input.
Figure 3 shows the signal paths for measuring the response of the plant (note that any drivers, DACs, ADCs and sampling is happening within the plant model; we're only interested in the plant behavior as seen by software). A signal, uA is injected into the system right before the plant; the drive to the plant is picked off at the reference, uR, and the system error is picked off at y. Assuming that the system command is held at zero this setup will measure just the plant response.
Figure 4 shows the signal paths for measuring the open-loop response of the system. Here the signal is injected in the same place, and the reference pickoff is still immediately before the drive signal, but the output pickoff is measured after the signal has passed through both plant and controller.
The way to extract the frequency response of the subsystem under test is the same for Figure 3 and Figure 4: inject a swept-frequency sine wave at uA at each frequency point, apply (3) to uR and y, and divide the two resulting numbers; the result is the frequency response at that point.
This is done with the following set of equations. First find the first Fourier coefficient for the two variables:
Then divide the resulting complex numbers to find the value of the frequency response:
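Here is a sketch of that ratio computation in NumPy (the variable names and helper functions are mine, not the article's); it assumes you have already logged the block's input u_ref and output y_out over an integer number of cycles of the test frequency while the swept sine was injected at uA. Because both signals are demodulated identically, the drive amplitude and the sine-versus-cosine phase offset cancel in the ratio.

```python
import numpy as np

def first_fourier_term(x, f):
    """First Fourier coefficient of x at frequency f (cycles per sample).
    Assumes x covers an integer number of cycles of the test frequency."""
    k = np.arange(len(x))
    return (2.0 / len(x)) * np.sum(x * np.exp(-2j * np.pi * f * k))

def block_response(u_ref, y_out, f):
    """Gain/phase of the block between the two pickoff points at frequency f:
    the ratio of the output's first Fourier term to the input's."""
    return first_fourier_term(y_out, f) / first_fourier_term(u_ref, f)

# Example with hypothetical logged arrays u_ref (block input) and y_out (block output):
# H_f = block_response(u_ref, y_out, f)
# gain_db = 20.0 * np.log10(abs(H_f))
# phase_deg = np.degrees(np.angle(H_f))
```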
For example, say we wish to measure the plant response, open-loop response and closed-loop response of a system such as the one shown in Figure 3 and Figure 4, where we're sampling at 1kHz. We don't know it, but the plant transfer function is
We do know the controller transfer function; it is
with kd = 100, kp = 2 and ki = 0.01.
With a frequency sweep from 1/10th Hz to 500Hz, we get the plant input and output, which are shown in Figure 5 and Figure 6. You can see that as the magnitude of the response gets small the signal gets noisy, as seen on the right in Figure 5 and on the left in Figure 6.
Taking these two sets of data and performing complex division on each frequency point, then plotting the results, yields the response shown in Figure 7. This measured plant response can then be used with a controller model (which should certainly be exact!) to tune the system.
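As a sketch of that last step (combining the measured plant response with an exact controller model), the snippet below multiplies a measured complex plant response by an assumed parallel PID controller evaluated on the unit circle. The discrete PID form shown is a guess for illustration only, since the article's controller equation is not reproduced here; only the gain values come from the article.

```python
import numpy as np

fs = 1000.0                      # sample rate, Hz (the article's example samples at 1 kHz)
kd, kp, ki = 100.0, 2.0, 0.01    # controller gains from the article

def controller_response(f_hz):
    """Frequency response of an assumed parallel PID,
    D(z) = kp + ki/(1 - z^-1) + kd*(1 - z^-1), evaluated at z = exp(j*2*pi*f/fs).
    This discrete form is an assumption for illustration; avoid f_hz = 0 (integrator pole)."""
    z = np.exp(1j * 2.0 * np.pi * np.asarray(f_hz) / fs)
    return kp + ki / (1.0 - 1.0 / z) + kd * (1.0 - 1.0 / z)

def open_loop(f_hz, h_meas):
    """Open-loop response from measured plant data and the controller model."""
    return controller_response(f_hz) * h_meas

# Gain and phase margins can then be read off where |L| crosses 1 (phase margin) and
# where the unwrapped phase crosses -180 degrees (gain margin), e.g. by interpolating
# 20*np.log10(np.abs(L)) and np.degrees(np.unwrap(np.angle(L))) over f_hz.
```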
Rearranging the data collection to that shown in Figure 4, we measure the output of the controller and the input to the plant. The input to the plant hasn't changed, and the measured controller output is shown in Figure 8 (which is, incidentally, the closed-loop response multiplied by 1). Dividing the controller output by the plant input gives us Figure 9.
On inspecting Figure 9 it can be immediately seen that the gain-crossing frequency occurs at about 16Hz with a phase margin of about 68 degrees, and that there are phase-crossing frequencies at 1.6Hz and 160Hz, with gain margins in excess of 20dB.
4 Real-World Issues
4.1 Noise
Noise is an inevitable part of any measurement, and in control system design it is often useful to get frequency response data over several decades of the plant response. When doing these measurements noisy data is inevitable, but as long as you understand the effects of the noise on your measurement you can mitigate them.
In Figure 7 we saw the effect of measurement noise at both ends of the spectrum. The measurement noise is constant across the frequency spectrum, but the quantities being measured vary: at the low-frequency end of the spectrum the loop gain is very high and the closed-loop gain is unity. This causes the signal at the input to the summing junction to be very small, so it can be corrupted by noise. At the high-frequency end of the spectrum the loop gain is small, so the signal level at the output of the summing junction is small and once again the measurement noise can dominate.
To mitigate the effects of noise in the measurements you must increase the signal to noise ratio in the final numbers arrived at using (3). This can be done in one of two ways: first, increase the signal by increasing the amplitude of the sine wave that you insert into your system; and second, increase the amount of time that you collect data at any particular frequency.
Increasing the signal amplitude gives the obvious advantage of more signal. You must take care, however, that you are not increasing the drive signal to the point where your system goes nonlinear through overflow or other effect (see the next section).
Increasing the amount of time over which you collect data gives you better signal to noise ratio because the operation of (3) gives a result that is coherent (i.e. synchronized) with the input sine wave, but the noise signal is incoherent. As a result the average signal amplitude rises in proportion to the length of time you collect data, but the expected amplitude of the noise only rises as the square root of this time � so increasing the collection time at each frequency by a factor of 4 will double the eventual signal to noise ratio.
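The square-root behavior is easy to check numerically. The toy Monte Carlo below (my own, with an arbitrary noise level) demodulates a noisy unit-amplitude sine over 4 cycles and over 16 cycles; quadrupling the collection time should roughly halve the scatter of the gain estimate, i.e. double the measurement's signal to noise ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
f, A_T, noise_rms = 0.01, 1.0, 0.5      # arbitrary test frequency and noise level

def estimate_gain(cycles):
    """Demodulate a noisy sine over an integer number of cycles and return |X|."""
    N = int(round(cycles / f))
    k = np.arange(N)
    y = A_T * np.sin(2.0 * np.pi * f * k) + noise_rms * rng.standard_normal(N)
    X = (2.0 / (N * A_T)) * np.sum(y * np.exp(-2j * np.pi * f * k))
    return np.abs(X)

for cycles in (4, 16):
    trials = [estimate_gain(cycles) for _ in range(200)]
    print(f"{cycles:2d} cycles: mean {np.mean(trials):.3f}, std {np.std(trials):.3f}")
# Quadrupling the collection time should roughly halve the standard deviation of the
# estimate, i.e. double the signal-to-noise ratio of the measurement.
```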
4.2 Nonlinearities
Nonlinearities are an inevitable part of designing a control system, because ultimately there are no physical systems (including real software running on real processors) that don't exhibit nonlinearities. Often the nonlinearities that we must design around are slight compared to our control system requirements; in such cases it is not a bad idea to design our system using describing function analysis, and swept-sine frequency response data is exactly what is necessary for such design.
In our original analysis we justified using (3) to demodulate our data because of the form observed in (2). But (2) only follows from (1) if you assume that the system being tested is linear, or is being operated in a linear region. With a system that is not operating in a linear region (2) is not, in general, valid. In fact, the whole notion of using frequency response analysis in not generally valid for nonlinear systems.
What to do? Is this entire methodology one that only applies to that small subset of systems that is acting exactly like a linear system for the particular input that we expect? Fortunately the answer to this question is no. In the case when we are testing a stable nonlinear system the response to a sinusoidal input is still periodic, but in addition to containing energy at the fundamental frequency f it will also contain energy at DC (f = 0), 2f, 3f, etc., so (2) becomes
and A1 (the fundamental Fourier term) is the same as A in (3). If the value of A1 is significantly larger than any of the higher-order responses (A2 and above) then describing function analysis says that we can conveniently ignore the nonlinearities and just use the system as described by A1(f) and φ1(f).
Fortunately, the demodulation described in (9) or (3) will only respond to the fundamental energy in y(k), so the DC component (A0) and all of the higher-order terms in the summation evaluate to zero. What comes out of these demodulation operations are the amplitude and phase information for the fundamental frequency only, which is usually exactly what you want in describing function analysis.
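As a quick sanity check of that claim, the short program below (reusing the hypothetical demodulate() from the earlier sketch) builds a signal with a DC offset and a third harmonic and confirms that only the fundamental amplitude is reported. The sample rate and amplitudes are arbitrary assumptions made for illustration.

#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

int main()
{
    const double pi = 3.14159265358979323846;
    const double fs = 1000.0;        // sample rate in Hz (assumed)
    const double f  = 10.0;          // test frequency in Hz
    const std::size_t n = 10000;     // 10 seconds: an integer number of cycles

    std::vector<double> y(n);
    for (std::size_t k = 0; k < n; ++k)
    {
        const double t = k / fs;
        // DC offset, fundamental of amplitude 1.0, third harmonic of amplitude 0.3
        y[k] = 0.5 + std::sin(2.0 * pi * f * t) + 0.3 * std::sin(2.0 * pi * 3.0 * f * t);
    }

    // The DC term and the harmonic average out; only the fundamental survives.
    std::printf("A1 = %.3f\n", std::abs(demodulate(y, f, fs)));  // prints approximately 1.000
    return 0;
}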
But how can you tell if the system you are measuring is operating in its nonlinear region, and how can you quantify "how nonlinear" it is? The three methods that you can use are: one, look at your system's behavior to see if an output, input, or intermediate value is hitting a maximum limit during operation; two, repeat sweeps with different input amplitudes and compare their results; and three, use measures of data correlation.
Observing your system behavior is probably the best way to ensure that the system is operating in its linear region if you have a good idea of how your system works. Examples of things to look for are: the system drive, to see flat-topping on its output; the mechanisms themselves, to see if they are hitting any stops; the system sensors, to see if they are saturating even when the thing they are measuring isn't; and finally making sure that no signals are so small that they are falling into the cracks of gear-train backlash or ADC/DAC precision.
To tell if a system is in a linear region you should repeat your sweeps with several different input amplitudes and compare the results. If the results are substantially the same then chances are high that you are operating in a linear region. If, however, the results are different then you have a pretty good indication that you have left the linear region of operation for your system.
Repeating sweeps with different drive values will not only tell you whether you are taking data on a system that is in a nonlinear domain, it will also give you some guidance as to whether your system will be stable with that given input magnitude.
To use correlation to gauge system linearity you need to observe that the only term measured in (12) that is used is the fundamental term. If you can estimate the extent of the higher-order terms then you can get an idea of how much your nice pretty sine wave is being mangled as it passes through the system. So you could express your correlation with the equation
where ρ = 1 indicates a perfectly linear system, and lesser values of ρ indicate a "less linear" system.
But how do you extract this result from measured data? This can be done by observing that if you remove the DC component from your signal, then square the result and average, you get
Now recall that from the results of (3) you can extract the value for just the fundamental:
So you can find your correlation from your measured results
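Since equations (12) through the final correlation expression are not reproduced in this text, the following C++ sketch records one plausible reading of the procedure just described: the mean square of the DC-removed record estimates the total AC power, the fundamental found by (3) contributes half its amplitude squared of that power, and ρ is the square root of their ratio. The function name and scaling are assumptions.

#include <cmath>
#include <complex>
#include <numeric>
#include <vector>

// Returns a correlation close to 1 for a pure sine at freqHz, and smaller as
// more of the record's AC power leaks into harmonics or noise.
double linearityCorrelation(const std::vector<double>& samples,
                            double freqHz, double sampleRateHz)
{
    const double mean = std::accumulate(samples.begin(), samples.end(), 0.0)
                        / static_cast<double>(samples.size());

    double acPower = 0.0;                        // mean square with DC removed
    for (double s : samples)
        acPower += (s - mean) * (s - mean);
    acPower /= static_cast<double>(samples.size());

    // Power carried by the fundamental alone, using the demodulate() sketch above.
    const double a1 = std::abs(demodulate(samples, freqHz, sampleRateHz));
    const double fundamentalPower = 0.5 * a1 * a1;

    return std::sqrt(fundamentalPower / acPower);
}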
5 Software
So how do you integrate this all into your system? The answer to that question depends a lot on how your particular system is architected. Specifically, how your sampling rate relates to your external communications bandwidth and real-time capabilities, and how much extra processor bandwidth you have at your disposal, determine how much functionality you put where.
I will give an example set of software that you might use if your control system sample rate is low enough that you can use a communications link to send text commands and receive text feedback.
The entire system is shown in Figure 10. The block marked "analyzer" is implemented on the host, and communicates to the embedded processor via the communications link. The dotted boxes marked "tap" enclose the summing junctions and switches that connect one tap or another to the analyzer. The block marked "controller" is the controller being developed, and the block marked "plant" is the system being controlled.
5.1 Code
Assuming that there is a communications link, the additional software consists of three parts: the host-based stimulus generator, the embedded summing junctions, and the host-based results parser.
5.1.1 Embedded Code
Listing 1 shows the generic portion of the analyzer code that must be embedded in the system. This code depends on having three functions from the data interface: analyzerDataAvailable must return true if there is data to be read by analyzerGetInput; analyzerGetInput must return the latest stimulus data point sent by the host; and analyzerOutput must send the given output values to the host.
The function analyzerTap implements the taps shown in Figure 10. Depending on the value of the "type" input it will set the analyzerOut variables and inject the analyzerIn signal into the summing junction. The function analyzerUpdate implements the interface between the host and the embedded system. It ensures that there is data waiting, receives it, and sends the most recent collected analyzer data to the host.
Listing 2 shows how the analyzer code is used. The normal operation of this code is to get the ADC input, update the controller, and send the controller's drive command to the DAC. To implement the analyzer we insert the two calls to analyzerTap and the call to analyzerUpdate.
The values of the analyzer type inputs must be managed to ensure that each analyzer signal is only connected to one tap. How the values of inputTap and outputTap are set determines the measurement that will be taken. For instance, to implement the plant transfer function measurement shown in Figure 3 you would set inputTap = OUT1 and outputTap = IN | OUT2; to implement the loop transfer function measurement shown in Figure 4 you would set inputTap = NONE and outputTap = IN | OUT1 | OUT2.
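Listings 1 and 2 are not reproduced in this text, so the fragment below is only a compressed sketch of how the tap and update functions described above might look. The tap bit values, the two analyzerOut variables, and the three data-interface functions follow the names used in the text, but their bodies and signatures are reconstructed assumptions.

enum { NONE = 0, IN = 1, OUT1 = 2, OUT2 = 4 };    // tap selection bits

static float analyzerIn   = 0.0f;   // most recent stimulus value from the host
static float analyzerOut1 = 0.0f;   // signals captured for the host
static float analyzerOut2 = 0.0f;

extern bool  analyzerDataAvailable(void);             // data-interface functions described
extern float analyzerGetInput(void);                  // in the text; their implementations
extern void  analyzerOutput(float out1, float out2);  // are system-specific

// Record the signal and/or add the host's stimulus, depending on 'type'.
float analyzerTap(float signal, int type)
{
    if (type & OUT1) analyzerOut1 = signal;       // capture before injection
    if (type & OUT2) analyzerOut2 = signal;
    if (type & IN)   signal += analyzerIn;        // summing junction
    return signal;
}

// Called once per control sample: pick up new stimulus data if any has arrived
// and ship the captured tap values back to the host.
void analyzerUpdate(void)
{
    if (analyzerDataAvailable())
    {
        analyzerIn = analyzerGetInput();
        analyzerOutput(analyzerOut1, analyzerOut2);
    }
}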
5.1.2 Host-Side Code
The embedded code doesn't have much functionality: it just shoves the signals around. In order to make the system work it is necessary to put most of the "brains" of the system into the host side. The host side must generate the appropriate waveforms at the appropriate frequencies for the appropriate amount of time, and it must be able to receive the data, parse it, and show results.
Listing 3 gives the code that generates the sine wave into standard output. It is presumed that this output will be piped to a file and sent to the embedded system using a terminal program or other means. The code generates a wave that is swept in frequency in a number of distinct segments, with the frequency varying logarithmically. The frequency of each segment and the segment length are calculated so the segment is an exact integer number of cycles long, with segments padded out in length as necessary to get a full cycle.
Listing 4 gives an example of the analysis code. Given the same set of parameters it generates the same frequency sequence as Listing 3, but in this case it reads in a line of system output at each sample step, performs the demodulation described in (3) for each frequency step, and then prints the result as text.
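Listing 3 itself is not shown here, but the segment bookkeeping it is described as doing can be sketched as follows. The parameter values, the rounding rule, and the output format are assumptions made for illustration only.

#include <cmath>
#include <cstdio>

int main()
{
    const double pi        = 3.14159265358979323846;
    const double fs        = 1000.0;   // control-loop sample rate, Hz (assumed)
    const double fStart    = 0.1;      // first test frequency, Hz
    const double fStop     = 500.0;    // last test frequency, Hz
    const int    segments  = 25;       // number of test frequencies
    const double minLenSec = 1.0;      // nominal minimum time per segment
    const double amplitude = 0.1;      // stimulus amplitude, in drive units

    // Logarithmic spacing between successive test frequencies.
    const double ratio = std::pow(fStop / fStart, 1.0 / (segments - 1));

    double f = fStart;
    for (int s = 0; s < segments; ++s, f *= ratio)
    {
        // Pad each segment out to a whole number of cycles of f.
        const double cycles  = std::ceil(minLenSec * f);
        const long   samples = static_cast<long>(std::round(cycles * fs / f));

        for (long k = 0; k < samples; ++k)
            std::printf("%f\n", amplitude * std::sin(2.0 * pi * f * k / fs));
    }
    return 0;
}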
Of course, the listings described so far have a number of shortcomings that would need to be overcome in a real system: the Bode plot isn't actually printed, the file format needed by Listing 4 is anonymous so the parameters must be remembered separately from the file, C++ isn't the best language for scientific computation of this sort, etc. All of these issues can and should be overcome in a real system, but these listings illustrate the point.
6 Other Methods
This subject is a small subset of the overall subject of system identification and control. Other popular methods of designing control systems for plants with unknown characteristics are the Ziegler-Nichols and Astrom-Hagglund methods for tuning PID controllers, ARMA system identification methods, and frequency response methods using random excitation. Each one of these methods has its advantages and adherents, and there are cases in control system design where one of these methods is clearly superior to the swept-frequency response measurement method shown here. I feel, however, that this method is the best overall method for tuning motion control loops in an embedded software environment. It has certainly served me well.
1. E.g. the Agilent HP3563A
|
When a sample of blood is spun in a centrifuge, the cells and cell fragments are separated from the fluid intercellular matrix. Because the formed elements are heavier than the fluid matrix, they are packed in the bottom of the tube by the centrifugal force. The light yellow colored fluid on the top is the plasma, which accounts for about 55 percent of the blood volume; the percentage of the volume occupied by red blood cells is referred to as the hematocrit, or packed cell volume (PCV). The white blood cells and platelets form a thin white layer, called the "buffy coat", between the plasma and red blood cells.
Plasma
The watery fluid portion of blood (90 percent water) in which the formed elements are suspended. It transports nutrients and wastes throughout the body. Various compounds, including proteins, electrolytes, carbohydrates, minerals, and fats, are dissolved in it.
Formed Elements
The formed elements are cells and cell fragments suspended in the plasma. The three classes of formed elements are the erythrocytes (red blood cells), leukocytes (white blood cells), and thrombocytes (platelets).
Erythrocytes (red blood cells)
Erythrocytes, or red blood cells, are the most numerous of the formed elements. Erythrocytes are small biconcave disks, thin in the middle and thicker around the periphery. This shape provides a combination of flexibility for moving through small capillaries with a maximum surface area for the diffusion of gases. The primary function of erythrocytes is to transport oxygen and, to a lesser extent, carbon dioxide.
Leukocytes (white blood cells)
Leukocytes, or white blood cells, are generally larger than erythrocytes, but they are fewer in number. Even though they are considered to be blood cells, leukocytes do most of their work in the tissues. They use the blood as a transport medium. Some are phagocytic, others produce antibodies; some secrete histamine and heparin, and others neutralize histamine. Leukocytes are able to move through the capillary walls into the tissue spaces, a process called diapedesis. In the tissue spaces they provide a defense against organisms that cause disease and either promote or inhibit inflammatory responses.
There are two main groups of leukocytes in the blood. The cells that develop granules in the cytoplasm are called granulocytes and those that do not have granules are called agranulocytes. Neutrophils, eosinophils, and basophils are granulocytes. Monocytes and lymphocytes are agranulocytes.
Neutrophils, the most numerous leukocytes, are phagocytic and have light-colored granules. Eosinophils have granules and help counteract the effects of histamine. Basophils secrete histamine and heparin and have blue granules. In the tissues, they are called mast cells. Lymphocytes are agranulocytes that have a special role in immune processes. Some attack bacteria directly; others produce antibodies.
Thrombocytes (platelets)
Thrombocytes, or platelets, are not complete cells, but are small fragments of very large cells called megakaryocytes. Megakaryocytes develop from hemocytoblasts in the red bone marrow. Thrombocytes become sticky and clump together to form platelet plugs that close breaks and tears in blood vessels. They also initiate the formation of blood clots.
|
Does JavaScript work on notepad?
Which Notepad is best for JavaScript?
6 Best JavaScript Editor Choices
1. Atom. Before diving straight into the features of Atom, let’s first understand what Electron is. …
2. Visual Studio Code. …
3. Eclipse. …
4. Sublime Text. …
5. Brackets. …
6. NetBeans.
Where can I write JavaScript?
You can use the JavaScript Console in Google Chrome. Open Chrome and press the key sequence CTRL+SHIFT+J on Windows or CMD+OPT+J on Mac. You can write JavaScript in any editor, just as you would Ruby, and then paste it into the JS Console.
Can JavaScript save a file?
Use a JavaScript function that fires on the button click event. Create a Blob constructor, pass in the data to be saved, and specify the type of data. And finally, call the saveAs(Blob object, “your-file-name.
Which software is used for JavaScript?
Most likely, you’ll find your JavaScript editor of choice in Sublime Text, Visual Studio Code, or Brackets. But several other tools—Atom, BBEdit, Komodo Edit, Notepad++, Emacs, and Vim—all have something to recommend them. Depending on the task at hand, you might find any one of them handy to have around.
What do I need to code in JavaScript?
To write JavaScript, you need a web browser and either a text editor or an HTML editor. Once you have the software in place, you can begin writing JavaScript code. To add JavaScript code to an HTML file, create or open an HTML file with your text/HTML editor.
What do I need to run JavaScript?
To execute JavaScript in a browser you have two options — either put it inside a script element anywhere inside an HTML document, or put it inside an external JavaScript file (with a .js extension) and then reference that file inside the HTML document using an empty script element with a src attribute.
How do I run JavaScript on Windows?
To try it, complete the following actions.
1. Open the Console. Select Control + Shift + J (Windows, Linux) or Command + Option + J (macOS).
2. Type 2 + 2 . The Console already displays the result 4 on the next line while you type it. The Eager evaluation feature helps you write valid JavaScript.
Does script go in body?
Scripts can be placed in the <body>, or in the <head> section of an HTML page, or in both.
What is the main use of JavaScript?
How can I write my name in JavaScript?
A JavaScript function printing my name (beginner):
1. Write a function called nameString()
2. It should take name as a parameter.
3. The function returns a string equal to “Hi, I am” + ” ” + name.
4. Call nameString() by passing it your name, and use console.log to print the output.
How is JavaScript written?
Chrome’s Javascript engine, V8, is written in C++. From the project page: V8 is written in C++ and is used in Google Chrome, the open source browser from Google.
|
EU and Russia’s Fraught Relationship – A Timeline
Russia and Europe have shared strong historical and cultural ties from time immemorial, but these ties did not translate into amicable trade relations between the two. The Russian Federation and the European Union both came into existence in the early 1990s. With the dissolution of the Soviet Union in 1991, the Russia we all know today came into being. During the same period, the European Union came into existence with the signing of the Maastricht Treaty. When the EU was coming into existence in 1993, Boris Yeltsin, the president of Russia, did not show great eagerness in joining the European Union, as there was much to handle within Russia. Moreover, at that time the economic structure of Russia was very different from that of the EU member countries, and it would not have been in the best interest of the EU to include Russia in it.
To begin with, Russia did not formally enter the EU; nonetheless, a declaration was signed between the two to strengthen their relationship. The Partnership and Cooperation Agreement (PCA) was signed in 1994. With the signing of the PCA, the European Union aimed at developing better bilateral relationships with member countries of the former Soviet Union; on the other hand, Russia was aiming at forging better relations with other western democracies, especially the United States. When it came to the practical implementation of the PCA, Russia was not willing to fully comply with EU standards. Instead of dealing with the EU collectively, Russia preferred to deal with member countries individually. On the part of the EU, the PCA didn't show any flexibility towards Russia; it was expected that either Russia would comply with the entire EU policy or it would have to stay outside the European Union.
In 2000, when Vladimir Putin was elected as the next President of Russia, he initially showed interest in cooperating with the EU and eventually joining it. Putin's Russia harbored the wish of regaining its old glory; in this light, Russia saw the United States and NATO as a threat, and the European Union as an ally. However, with the Iraq invasion by United States-led forces in 2003, there was a paradigm shift in Russian thinking. Now, President Putin wanted Russia to be a powerful country in Eurasia, with its neighbors aligning with it rather than the EU. In 2004, when several more countries (including the three Baltic countries) became members of the European Union, Russia began to see the EU as an expanding power, which might eventually encroach on its sovereignty. Russia accused the European Union of being a participant in the "color revolutions" in Georgia (2003), Ukraine (2004) and Kyrgyzstan (2005).
During the period 2006-2011, the paths of Russia and the EU diverged further. In this period, Russia began to view the EU with the same threat perception as the United States and NATO. Putin was not happy with Russia's position in the post-Cold War scenario, so Russia set out on the path of an "independent foreign policy." Russia's war with Georgia in 2008 further strained the relationship between the two, leading the EU to suspend negotiations with Russia on the new Partnership and Cooperation Agreement. Then in 2009 the EU started a partnership program with six Eastern European countries, namely Armenia, Azerbaijan, Belarus, Georgia, the Republic of Moldova and Ukraine. As a counterweight, the Eurasian Customs Union was established, with Russia, Belarus and Kazakhstan as its members. Thus, both the EU and Russia strove to enhance their regional dominance.
Putin's reelection as President in 2012 was marred by protests, which Russia claimed were backed by Western democracies and the EU. The February 2014 Revolution of Dignity in Ukraine was also seen as being supported by the EU. The situation went from bad to worse with the Crimean crisis: after the Ukrainian Revolution of Dignity, Russia annexed the Crimean Peninsula through military intervention in March 2014, a step towards increasing Russia's regional power. As a result, March 2014 marked the start of a period of restrictive measures against Russia by the European Union. These restrictive measures took many forms. Russia was excluded from the G8. Russia's induction into the OECD and the International Energy Agency was stalled. Bilateral summits between the EU and Russia were also deferred. Furthermore, economic sanctions targeting specific economic sectors were put in place. In retaliation for these sanctions and restrictions imposed by the EU and certain western countries, Russia responded with counter-sanctions on agricultural goods, raw materials and food.
The poisoning of the Russian opposition leader Alexei Navalny in August 2020 further deteriorated relations. As a result, in March 2021 new sanctions were imposed on senior Russian officials by the EU and the US. In a retaliatory action, Russia banned senior EU officials in April 2021.
Presently, EU-Russia relations have reached their lowest level. From here it is going to be a tightrope walk for both Russia and the European Union. The basic conflict arises from the inability of Russia to shrug off its Soviet-era pride: Russia is not ready for a secondary role in the European Union. Moreover, the EU and Russia follow different schools of thought when it comes to international relations; the EU adheres to liberalism and Russia prefers neorealism. The European Union and Russia's relationship is fraught with more downs than ups, but there is no denying the fact that both of them long for a more harmonious relationship with each other. The bright spot in this topsy-turvy EU-Russia relationship is that both the European Union and Russia have a continental vision of Europe. Notwithstanding the fact that at present Russia prefers to be a free agent, that continental vision of Europe is going to act as a catalyst for better EU-Russia relations.
|
Your question: Which type of unemployment affects South Africa the most?
Structural unemployment is clearly the most serious form of unemployment, and poverty arises from this form. Cyclical unemployment can also lead to severe hardship, especially during economic downturns of long duration. Cyclical unemployment can, furthermore, lead to structural unemployment.
What is the most common type of unemployment in South Africa?
The major proportion of unemployment in South Africa is Structural. Structural unemployment is caused by changes in the composition of labour supply and demand. Structural unemployment is part of the nation’s natural rate of unemployment.
What type of unemployment affects society most?
Frictional unemployment.
Frictional unemployment also includes people just entering the labor force, such as freshly graduated college students. It is the most common cause of unemployment, and it is always in effect in an economy.
What is the most dangerous type of unemployment?
Structural unemployment is the most serious kind of unemployment because it points to seismic changes in an economy. It occurs when a person is ready and willing to work, but cannot find employment because none is available or they lack the skills to be hired for the jobs that do exist.
What is the most common type of unemployment?
Structural unemployment is the most common type of unemployment.
Why is unemployment in South Africa so high?
Statistics South Africa (Stats SA) released the Quarterly Labour Force Survey (QLFS) on Tuesday, which showed that the increasing unemployment rate is due to more people joining the labour force, as economists expected. According to Stats SA, one million more people joined the labour force.
What are the evil effects of unemployment?
What are three negative effects of unemployment?
Concerning the satisfaction level with main vocational activity, unemployment tends to have negative psychological consequences, including the loss of identity and self-esteem, increased stress from family and social pressures, along with greater future uncertainty with respect to labour market status.
What are three causes of unemployment?
Main types of unemployment
• Occupational immobilities. …
• Geographical immobilities. …
• Technological change. …
• Structural change in the economy. …
• See: structural unemployment.
Which is worse inflation or unemployment?
Unemployment makes people unhappy, according to economic research. So does inflation. A one percentage point increase in unemployment lowers well-being nearly four times as much as an equivalent rise in inflation, the paper says. …
What type of unemployment is quitting?
If a worker quit voluntarily to pursue better opportunities/aspirations, it is termed as Voluntary unemployment. On the other hand, if a worker was laid off by the employer, it is known as Involuntary unemployment.
Does the type of unemployment matter?
Your type of unemployment can determine what steps you need to take in your job search. Whether structural, cyclical, or frictional unemployment, understanding your unemployment is key to a faster, more effective job search.
What are the two main types of unemployment?
Types of Unemployment
• Demand deficient unemployment. Demand deficient unemployment is the biggest cause of unemployment and typically happens during a recession. …
• Frictional unemployment. …
• Structural unemployment. …
• Voluntary unemployment.
Why is 0% unemployment bad for the economy?
Low unemployment is usually regarded as a positive sign for the economy. A very low rate of unemployment, however, can have negative consequences, such as inflation and reduced productivity.
What are the main reasons for unemployment?
• Legacy of apartheid and poor education and training. …
• Labour demand – supply mismatch. …
• The effects of the 2008/2009 global recession. …
• General lack of interest for entrepreneurship. …
• Slow economic growth.
What are the six types of unemployment?
Types of Unemployment:
• Frictional Unemployment:
• Seasonal Unemployment:
• Cyclical Unemployment:
• Structural Unemployment:
• Technological Unemployment:
• Disguised Unemployment:
|
Does a light bulb work without the glass?
Do light bulbs need to be covered?
Cover your bare bulbs!
Although a lamp shade is usually seen as a decorative element, its main purpose is to diffuse or redirect the light from the bulb for maximum effectiveness and protect your eyes from the bulb’s glare. With no shade at all, a bare bulb’s light goes out equally in all directions.
Why is glass used for light bulbs?
A glass bulb, then, is used to keep oxygen away from the filament. … The presence of an inert gas, such as Argon, actually inhibits this deterioration, allowing higher filament temperatures and brighter light bulbs. So the glass globe can also help enhance a bulb’s capability.
Can you remove the glass from an LED bulb?
Hold one side of the black glass insulator at the bottom of the bulb with your pliers. Twist it up to snap the glass apart. The glass here is thick, so it will take a lot of force to actually break it. Make sure that you hold onto the bulb firmly with your other hand as you work.
Are LED light bulbs glass or plastic?
LED bulbs are made of hard, durable plastic, making them almost indestructible. Something to keep in mind is how much you can save on your power bill: LED bulbs use 90% less electricity than a traditional glass incandescent bulb.
Are light bulbs glass?
Can you still use a broken LED bulb?
Although they contain hazardous materials, such as lead and nickel, LEDs are considered safe because the concentration of these substances is so minimal. Beyond the obvious dangers of shattered glass, broken LEDs have no dangerous implications and can easily be disposed of.
Why are light bulbs air tight?
The filament in a light bulb is housed in a sealed, oxygen-free chamber to prevent combustion. In the first light bulbs, all the air was sucked out of the bulb to create a near vacuum — an area with no matter in it. … In a modern light bulb, inert gases, typically argon, greatly reduce this loss of tungsten.
|
luksfuks · 2 months ago
DDR3 is made to talk to one memory controller. If you're crazy enough, you can create some kind of bridge that talks to multiple MCUs on one side, and to the DDR3 memory on the other side. That would effectively implement what you're asking for.
However, such a hack would be far slower than DDR3 itself, unless you implement everything, including the MCUs, inside an FPGA. Also, the bridge still needs to solve the congestion arbitration problem, so your MCUs must have memory interfaces that allow them to be blocked by the bridge.
Ideally the MCU should not only support blocking wait states, but also request queues and out-of-order delivery. But such features are seldom available on the external memory interface. They may be found between the cache and the memory controller itself. Easy to "rewire" when your MCU is modeled in an FPGA, but impossible when you have a physical chip that you want to solder to a board. The "dumber" the interface, the less efficient it will be.
Anyway, real-world designs don't do this. It is not efficient for all but the most obscure uses. They usually share some memory, not everything.
In embedded, "some" can be as little as a 1K RAM page, or 64K. That's usually called an "mbox RAM" page, because it's used like a mailbox. You write a command followed by some data, and trigger an interrupt. The other side receives the interrupt, reads the command and the data and does something with it. Sometimes it's true dualport/multiport memory, but other times it's a one-way mbox (you write, the other MCU reads). In this latter case, there will be a second mbox for the opposite direction.
It's easier to make shared memory efficient when it's only such a small amount. Also, using an mbox design forces the software to strictly distinguish between local and shared data, so any hardware inefficiencies will only impact the shared data (rather than everything, as implied by your question).
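To give a flavor of the mailbox idea, here is a minimal C++ sketch of a one-way mbox living in a small shared RAM page. The structure layout, the section name, and the doorbell call are all hypothetical; a real design would add memory barriers and match the interconnect and cache behaviour of the actual parts.

#include <cstdint>

struct Mbox
{
    volatile uint32_t command;       // what the receiver should do
    volatile uint32_t length;        // number of valid bytes in payload
    volatile uint8_t  payload[56];   // small data area, sized to taste
    volatile uint32_t ready;         // written last, as a "message waiting" flag
};

// One mailbox per direction, placed in the shared RAM page by the linker.
// The section name is hypothetical.
__attribute__((section(".shared_ram"))) Mbox mboxAtoB;
__attribute__((section(".shared_ram"))) Mbox mboxBtoA;

// Sender side: copy the message in, then raise the flag and ring the doorbell.
void mboxSend(Mbox& box, uint32_t cmd, const uint8_t* data, uint32_t len)
{
    for (uint32_t i = 0; i < len && i < sizeof(box.payload); ++i)
        box.payload[i] = data[i];
    box.length  = len;
    box.command = cmd;
    // A real implementation needs a memory barrier here so the receiver never
    // sees 'ready' before the payload is complete.
    box.ready = 1;
    // triggerInterruptToOtherMcu();   // hypothetical interrupt/doorbell hook
}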
|
What Do You Mean by Chiropractic Care
Chiropractic care focuses on the diagnosis, treatment, and prevention of mechanical ailments of the musculoskeletal system, in particular the spinal column, based on the theory that these disorders influence general health through the nervous system. You can get more information about the chiropractor in Downers Grove at https://www.northstarmedicalcenter.com/chiropractor-clinic-downers-grov.
Chiropractic care is based on the idea that the nervous system coordinates all the operations of the body and that any malady can be traced back to impaired neurological function. Chiropractic care is designed to restore the body's natural neurological function, allowing the body to correct these conditions through its inherent ability to heal.
Manual therapy is the most common chiropractic treatment. This involves adjustments to the spine, joints, soft tissues, and other body parts. It may also include lifestyle and health guidance, as well as exercises. Although doctors of chiropractic do not prescribe drugs or perform surgery, they can refer patients for these treatments if clinically indicated.
Although chiropractic health care professionals typically use hands-on treatment, such as spinal or extremity adjustments, they may also employ other methods of treating patients. Chiropractic care is based on the belief that the relationship between the body's structure and function, primarily that of the spinal column, has an effect on health.
|
Military Wiki
American Revolutionary War
This is a list of the United States state navies in the American Revolutionary War.
Beginning in 1775, after the outbreak of the American Revolutionary War, eleven of the thirteen colonies established state navies or owned one or more armed vessels.[1] Some (like that of Massachusetts) were established prior to the creation of the Continental Navy. They were usually created to provide some measure of coastal defense against the actions of the Royal Navy, Loyalist smugglers, British privateers, and pirates, or to assist in shore defenses. Some navies, like those of New Hampshire and Georgia, were quite small; New Hampshire only commissioned one ship. Delaware and New Jersey were the only states that did not commission and operate any ships.
List of state navies
States without navies
New Jersey never authorized the purchase of any ships, or established admiralty courts. Both matters were proposed to the state assembly in 1776, but were not acted upon.[2]
Delaware never authorized the purchase of armed vessels.[3] Some of its counties apparently commissioned ships for specific purposes; the Farmer was commissioned by the Sussex County committee to sail to St. Eustatius to purchase gunpowder.[4]
See also
1. Paullin 1906, p. 315
2. Paullin 1906, pp. 477-478
3. Paullin 1906, p. 315
4. Shomette, p. 47
|
Stanford Encyclopedia of Philosophy
Indispensability Arguments in the Philosophy of Mathematics
One of the most intriguing features of mathematics is its applicability to empirical science. Every branch of science draws upon large and often diverse portions of mathematics, from the use of Hilbert spaces in quantum mechanics to the use of differential geometry in general relativity. It's not just the physical sciences that avail themselves of the services of mathematics either. Biology, for instance, makes extensive use of difference equations and statistics. The roles mathematics plays in these theories is also varied. Not only does mathematics help with empirical predictions, it allows elegant and economical statement of many theories. Indeed, so important is the language of mathematics to science, that it is hard to imagine how theories such as quantum mechanics and general relativity could even be stated without employing a substantial amount of mathematics.
From the rather remarkable but seemingly uncontroversial fact that mathematics is indispensable to science, some philosophers have drawn serious metaphysical conclusions. In particular, Quine (1976; 1980a; 1980b; 1981a; 1981c) and Putnam (1979a; 1979b) have argued that the indispensability of mathematics to empirical science gives us good reason to believe in the existence of mathematical entities. According to this line of argument, reference to (or quantification over) mathematical entities such as sets, numbers, functions and such is indispensable to our best scientific theories, and so we ought to be committed to the existence of these mathematical entities. To do otherwise is to be guilty of what Putnam has called "intellectual dishonesty" (Putnam 1979b, p. 347). Moreover, mathematical entities are seen to be on an epistemic par with the other theoretical entities of science, since belief in the existence of the former is justified by the same evidence that confirms the theory as a whole (and hence belief in the latter). This argument is known as the Quine-Putnam indispensability argument for mathematical realism. There are other indispensability arguments,[1] but this one is by far the most influential, and so in what follows I'll concentrate on it.
Spelling Out the Quine-Putnam Indispensability Argument
The Quine-Putnam indispensability argument has attracted a great deal of attention, in part because many see it as the best argument for mathematical realism (or platonism). Thus anti-realists about mathematical entities (or nominalists) need to identify where the Quine-Putnam argument goes wrong. Many platonists, on the other hand, rely very heavily on this argument to justify their belief in mathematical entities. The argument places nominalists who wish to be realist about other theoretical entities of science (quarks, electrons, black holes and such) in a particularly difficult position. For typically they accept something quite like the Quine-Putnam argument[2] as justification for realism about quarks and black holes. (This is what Quine (1980b, p. 45) calls holding a "double standard" with regard to ontology.)
For future reference I'll state the Quine-Putnam indispensability argument in the following explicit form:
(P1) We ought to have ontological commitment to all and only the entities that are indispensable to our best scientific theories.
(P2) Mathematical entities are indispensable to our best scientific theories.
(C) We ought to have ontological commitment to mathematical entities.
Thus formulated, the argument is valid. This forces the focus onto the two premises. In particular, a couple of important questions naturally arise. The first concerns how we are to understand the claim that mathematics is indispensable. I address this in the next section. The second question concerns the first premise. It is nowhere near as self-evident as the second and it clearly needs some defense. I'll discuss its defense in the following section. I'll then present some of the more important objections to the argument, before considering the Quine-Putnam argument's role in the larger scheme of things - where it stands in relation to other influential arguments for and against mathematical realism.
What is it to be Indispensable?
The question of how we should understand `indispensability' in the present context is crucial to the Quine-Putnam argument, and yet it has received surprisingly little attention. Quine actually speaks in terms of the entities quantified over in the canonical form of our best scientific theories rather than indispensability. Still, the debate continues in terms of indispensability, so we would be well served to clarify this term.
The first thing to note is that `dispensability' is not the same as `eliminability'. If this were not so, every entity would be dispensable (due to a theorem of Craig).[3] What we require for an entity to be `dispensable' is for it to be eliminable and that the theory resulting from the entity's elimination be an attractive theory. (Perhaps, even stronger, we require that the resulting theory be more attractive than the original.) We will need to spell out what counts as an attractive theory but for this we can appeal to the standard desiderata for good scientific theories: empirical success; unificatory power; simplicity; explanatory power; fertility and so on. Of course there will be debate over what desiderata are appropriate and over their relative weightings, but such issues need to be addressed and resolved independently of issues of indispensability. (See Burgess (1983) and Colyvan (1999) for more on these issues.)
These issues naturally prompt the question of how much mathematics is indispensable (and hence how much mathematics carries ontological commitment). It seems that the indispensability argument only justifies belief in enough mathematics to serve the needs of science. Thus we find Putnam speaking of "the set theoretic `needs' of physics" (Putnam 1979b, p. 346) and Quine claiming that the higher reaches of set theory are "mathematical recreation ... without ontological rights" (Quine 1986, p. 400) since they do not find physical applications. One could take a less restrictive line and claim that the higher reaches of set theory, although without physical applications, do carry ontological commitment by virtue of the fact that they have applications in other parts of mathematics. So long as the chain of applications eventually "bottoms out" in physical science, we could rightfully claim that the whole chain carries ontological commitment. Quine himself justifies some transfinite set theory along these lines (Quine 1984, p. 788), but he sees no reason to go beyond the constructible sets (Quine 1986, p. 400). His reasons for this restriction, however, have little to do with the indispensability argument and so supporters of this argument need not side with Quine on this issue.
Naturalism and Holism
Although both premises of the Quine-Putnam indispensability argument have been questioned, it's the first premise that is most obviously in need of support. This support comes from the doctrines of naturalism and holism.
Following Quine, naturalism is usually taken to be the philosophical doctrine that there is no first philosophy and that the philosophical enterprise is continuous with the scientific enterprise (Quine 1981b). By this Quine means that philosophy is neither prior to nor privileged over science. What is more, science, thus construed (i.e. with philosophy as a continuous part) is taken to be the complete story of the world. This doctrine arises out of a deep respect for scientific methodology and an acknowledgment of the undeniable success of this methodology as a way of answering fundamental questions about all nature of things. As Quine suggests, its source lies in "unregenerate realism, the robust state of mind of the natural scientist who has never felt any qualms beyond the negotiable uncertainties internal to science" (Quine 1981b, p.72). For the metaphysician this means looking to our best scientific theories to determine what exists, or, perhaps more accurately, what we ought to believe to exist. In short, naturalism rules out unscientific ways of determining what exists. For example, naturalism rules out believing in the transmigration of souls for mystical reasons. Naturalism would not, however, rule out the transmigration of souls if our best scientific theories were to require the truth of this doctrine.[4]
Naturalism, then, gives us a reason for believing in the entities in our best scientific theories and no other entities. Depending on exactly how you conceive of naturalism, it may or may not tell you whether to believe in all the entities of your best scientific theories. I take it that naturalism does give us some reason to believe in all such entities, but that this is defeasible. This is where holism comes to the fore: in particular, confirmational holism.
Confirmational holism is the view that theories are confirmed or disconfirmed as wholes (Quine 1980b, p. 41). So, if a theory is confirmed by empirical findings, the whole theory is confirmed. In particular, whatever mathematics is made use of in the theory is also confirmed (Quine 1976, pp. 120-122). Furthermore, as Putnam (1979a) has stressed, it is the same evidence that is appealed to in justifying belief in the mathematical components of the theory that is appealed to in justifying the empirical portion of the theory (if indeed the empirical can be separated from the mathematical at all). Naturalism and holism taken together then justify P1. Roughly, naturalism gives us the "only" and holism gives us the "all" in P1.
It is worth noting that in Quine's writings there are at least two holist themes. The first is the confirmational holism discussed above (often called the Quine-Duhem thesis). The other is semantic holism which is the view that the unit of meaning is not the single sentence, but systems of sentences (and in some extreme cases the whole of language). This latter holism is closely related to Quine's well-known denial of the analytic-synthetic distinction (Quine 1980b) and his equally famous indeterminacy of translation thesis (Quine 1960). Although for Quine, semantic holism and confirmational holism are closely related, there is good reason to distinguish them, since the former is generally thought to be highly controversial while the latter is considered relatively uncontroversial.
Why this is important to the present debate is that Quine explicitly invokes the controversial semantic holism in support of the indispensability argument (Quine 1980b, pp. 45-46). Most commentators, however, are of the view that only confirmational holism is required to make the indispensability argument fly (see, for example, Colyvan (1998); Field (1989, pp. 14-20); Hellman (199?); Resnik (1995a; 1997); Maddy (1992)) and my presentation here follows that accepted wisdom. It should be kept in mind, however, that while the argument, thus construed, is Quinean in flavor it is not, strictly speaking, Quine's argument.
There have been many objections to the indispensability argument, including Charles Parsons' (1980) concern that the obviousness of basic mathematical statements is left unaccounted for by the Quinean picture and Philip Kitcher's (1984, pp. 104-105) worry that the indispensability argument doesn't explain why mathematics is indispensable to science. The objections that have received the most attention, however, are those due to Hartry Field, Penelope Maddy and Elliott Sober. In particular, Field's nominalisation program has dominated recent discussions of the ontology of mathematics.
Field (1980) presents a case for denying the second premise of the Quine-Putnam argument. That is, he suggests that despite appearances mathematics is not indispensable to science. There are two parts to Field's project. The first is to argue that mathematical theories don't have to be true to be useful in applications, they need merely to be conservative. (This is, roughly, that if a mathematical theory is added to a nominalist scientific theory, no nominalist consequences follow that wouldn't follow from the nominalist scientific theory alone.) This explains why mathematics can be used in science but it does not explain why it is used. The latter is due to the fact that mathematics makes calculation and statement of various theories much simpler. Thus, for Field, the utility of mathematics is merely pragmatic - mathematics is not indispensable after all.
The second part of Field's program is to demonstrate that our best scientific theories can be suitably nominalised. That is, he attempts to show that we could do without quantification over mathematical entities and that what we would be left with would be reasonably attractive theories. To this end he is content to nominalise a large fragment of Newtonian gravitational theory. Although this is a far cry from showing that all our current best scientific theories can be nominalised, it is certainly not trivial. The hope is that once one sees how the elimination of reference to mathematical entities can be achieved for a typical physical theory, it will seem plausible that the project could be completed for the rest of science.[5]
There has been a great deal of debate over the likelihood of the success of Field's program but few have doubted its significance. Recently, however, Penelope Maddy, has pointed out that if P1 is false, Field's project may turn out to be irrelevant to the realism/anti-realism debate in mathematics.
Maddy presents some serious objections to the first premise of the indispensability argument (Maddy 1992; 1995; 1997). In particular, she suggests that we ought not have ontological commitment to all the entities indispensable to our best scientific theories. Her objections draw attention to problems of reconciling naturalism with confirmational holism. In particular, she points out how a holistic view of scientific theories has problems explaining the legitimacy of certain aspects of scientific and mathematical practices. Practices which, presumably, ought to be legitimate given the high regard for scientific practice that naturalism recommends. It is important to appreciate that her objections, for the most part, are concerned with methodological consequences of accepting the Quinean doctrines of naturalism and holism - the doctrines used to support the first premise. The first premise is thus called into question by undermining its support.
Maddy's first objection to the indispensability argument is that the actual attitudes of working scientists towards the components of well-confirmed theories vary from belief, through tolerance, to outright rejection (Maddy 1992, p. 280). The point is that naturalism counsels us to respect the methods of working scientists, and yet holism is apparently telling us that working scientists ought not have such differential support to the entities in their theories. Maddy suggests that we should side with naturalism and not holism here. Thus we should endorse the attitudes of working scientists who apparently do not believe in all the entities posited by our best theories. We should thus reject P1.
The next problem follows from the first. Once one rejects the picture of scientific theories as homogeneous units, the question arises whether the mathematical portions of theories fall within the true elements of the confirmed theories or within the idealized elements. Maddy suggests the latter. Her reason for this is that scientists themselves do not seem to take the indispensable application of a mathematical theory to be an indication of the truth of the mathematics in question. For example, the false assumption that water is infinitely deep is often invoked in the analysis of water waves, or the assumption that matter is continuous is commonly made in fluid dynamics (Maddy 1992, pp. 281-282). Such cases indicate that scientists will invoke whatever mathematics is required to get the job done, without regard to the truth of the mathematical theory in question (Maddy 1995, p. 255). Again it seems that confirmational holism is in conflict with actual scientific practice, and hence with naturalism. And again Maddy sides with naturalism. (See also Parsons (1983) for some related worries about Quinean holism.) The point here is that if naturalism counsels us to side with the attitudes of working scientists on such matters, then it seems that we ought not take the indispensability of some mathematical theory in a physical application as an indication of the truth of the mathematical theory. Furthermore, since we have no reason to believe that the mathematical theory in question is true, we have no reason to believe that the entities posited by the (mathematical) theory are real. So once again we ought to reject P1.
Maddy's third objection is that it is hard to make sense of what working mathematicians are doing when they try to settle independent questions. These are questions, that are independent of the standard axioms of set theory - the ZFC axioms.[6] In order to settle some of these questions, new axiom candidates have been proposed to supplement ZFC, and arguments have been advanced in support of these candidates. The problem is that the arguments advanced seem to have nothing to do with applications in physical science: they are typically intra-mathematical arguments. According to indispensability theory, however, the new axioms should be assessed on how well they cohere with our current best scientific theories. That is, set theorists should be assessing the new axiom candidates with one eye on the latest developments in physics. Given that set theorists do not do this, confirmational holism again seems to be advocating a revision of standard mathematical practice, and this too, claims Maddy, is at odds with naturalism (Maddy 1992, pp. 286-289).
Although Maddy does not formulate this objection in a way that directly conflicts with P1 it certainly illustrates a tension between naturalism and confirmational holism.[7] And since both these are required to support P1, the objection indirectly casts doubt on P1. Maddy, however, endorses naturalism and so takes the objection to demonstrate that confirmational holism is false. I'll leave the discussion of the impact the rejection of confirmational holism would have on the indispensability argument until after I outline Sober's objection, because Sober arrives at much the same conclusion.
Elliott Sober's objection is closely related to Maddy's second and third objections. Sober (1993) takes issue with the claim that mathematical theories share the empirical support accrued by our best scientific theories. In essence, he argues that mathematical theories are not being tested in the same way as the clearly empirical theories of science. He points out that hypotheses are confirmed relative to competing hypotheses. Thus if mathematics is confirmed along with our best empirical hypotheses (as indispensability theory claims), there must be mathematics-free competitors. But Sober points out that all scientific theories employ a common mathematical core. Thus, since there are no competing hypotheses, it is a mistake to think that mathematics receives confirmational support from empirical evidence in the way other scientific hypotheses do.
This in itself does not constitute an objection to P1 of the indispensability argument, as Sober is quick to point out (Sober 1993, p. 53), although it does constitute an objection to Quine's overall view that mathematics is part of empirical science. As with Maddy's third objection, it gives us some cause to reject confirmational holism. The impact of these objections on P1 depends on how crucial you think confirmational holism is to that premise. Certainly much of the intuitive appeal of P1 is eroded if confirmational holism is rejected. In any case, to subscribe to the conclusion of the indispensability argument in the face of Sober's or Maddy's objections is to hold the position that it's permissible at least to have ontological commitment to entities that receive no empirical support. This, if not outright untenable, is certainly not in the spirit of the original Quine-Putnam argument.
It is not clear how damaging the above criticisms are to the indispensability argument. Indeed, the debate is very much alive, with many recent articles devoted to the topic. (See bibliography notes below.) Closely related to this debate is the question of whether there are any other decent arguments for platonism. If, as some believe, the indispensability argument is the only argument for platonism worthy of consideration, then if it fails, platonism in the philosophy of mathematics seems bankrupt. Of relevance then is the status of other arguments for and against mathematical realism. In any case, it is worth noting that the indispensability argument is one of a small number of arguments that have dominated discussions of the ontology of mathematics. It is therefore important that this argument not be viewed in isolation.
The two most important arguments against mathematical realism are the epistemological problem for platonism - how do we come by knowledge of causally inert mathematical entities? (Benacerraf 1983b) - and the indeterminacy problem for the reduction of numbers to sets - if numbers are sets, which sets are they (Benacerraf 1983a)? Apart from the indispensability argument, the other major argument for mathematical realism is that it is desirable to provide a uniform semantics for all discourse: mathematical and non-mathematical alike (Benacerraf 1983b). Mathematical realism, of course, meets this challenge easily, since it explains the truth of mathematical statements in exactly the same way as in other domains.[8] It is not so clear, however, how nominalism can provide a uniform semantics.
Finally, it is worth stressing that even if the indispensability argument is the only good argument for platonism, the failure of this argument does not necessarily authorize nominalism, for the latter too may be without support. It does seem fair to say, however, that if the objections to the indispensability argument are sustained then one of the most important arguments for platonism is undermined. This would leave platonism on rather shaky ground.[9]
Although the indispensability argument is to be found in many places in Quine's writings (including 1976; 1980a; 1980b; 1981a; 1981c), the locus classicus is Putnam's short monograph Philosophy of Logic (included as a chapter of the second edition of the third volume of his collected papers (Putnam, 1979b)). See also Putnam (1979a) and the introduction of Field (1989) which has an excellent outline of the argument.
See Chihara (1973), and Field (1980; 1989) for attacks on the second premise and Malament (1982), Resnik (1985) and Shapiro (1983) for criticisms of Field's program. For a fairly comprehensive look at nominalist strategies in the philosophy of mathematics, see Burgess and Rosen (1997), while Feferman (1993) questions the amount of mathematics required for empirical science. See Azzouni (1997), Balaguer (1996b; 1998), Maddy (1992; 1995; 1997), Peressini (1997), Sober (1993) and Vineberg (1996) for attacks on the first premise. Colyvan (1998; 199?), Hellman (199?) and Resnik (1995a; 1997) reply to some of these objections.
For variants of the Quinean indispensability argument see Maddy (1992) and Resnik (1995a).
Related Entries
holism | inference to the best explanation | naturalism | nominalism | platonism | Quine, W. V. | realism
Copyright © 1998, 1999 by
Mark Colyvan
First published: December 21, 1998
Content last modified: February 5, 1999