Water-rich planets outside our solar system common
Water is likely to be a major component of those exoplanets which are between two and four times the size of Earth, suggests new research that may have implications for the search for life in our galaxy. Water has been inferred previously on individual exoplanets, but this work, presented at the Goldschmidt conference in Boston, Massachusetts, concludes that water-rich planets outside our solar system are common.
The new research, based on data from the exoplanet-hunting Kepler Space Telescope and the Gaia mission, indicates that many of the known planets may contain as much as 50 per cent water, which is much more than the Earth's 0.02 per cent (by weight) water content. "It was a huge surprise to realise that there must be so many water-worlds," said lead researcher Li Zeng of Harvard University.
Scientists have found that many of the 4,000 confirmed or candidate exoplanets discovered so far fall into two size categories -- those with the planetary radius averaging around 1.5 times that of the Earth, and those averaging around 2.5 times the radius of the Earth. For this study, the scientists developed a model for internal structures of the exoplanets after analysing the exoplanets with mass measurements and recent radius measurements from the Gaia satellite. "We have looked at how mass relates to radius, and developed a model which might explain the relationship", said Li Zeng.
"The model indicates that those exoplanets which have a radius of around x1.5 Earth radius tend to be rocky planets (of typically x5 the mass of the Earth), while those with a radius of x2.5 Earth radius (with a mass around x10 that of the Earth) are probably water worlds," he added. "Our data indicate that about 35 per cent of all known exoplanets which are bigger than Earth should be water-rich," he said, adding that the surface of these exoplanets may be shrouded in a water-vapour-dominated atmosphere, with a liquid water layer underneath. The researchers believe that these water worlds likely formed in similar ways to the giant planet cores (Jupiter, Saturn, Uranus, Neptune) which we find in our own solar system.
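As a quick plausibility check on those mass-radius figures (a back-of-the-envelope sketch, not part of the study; Earth's mean density is the only real input), the quoted pairs imply very different bulk densities:

```python
EARTH_DENSITY = 5514.0  # kg/m^3, Earth's mean density

def bulk_density(mass_earths, radius_earths):
    """Mean density (kg/m^3) of a planet given mass and radius in Earth units."""
    return EARTH_DENSITY * mass_earths / radius_earths ** 3

rocky = bulk_density(5.0, 1.5)    # ~1.5 R_Earth, ~5 M_Earth rocky planet
watery = bulk_density(10.0, 2.5)  # ~2.5 R_Earth, ~10 M_Earth candidate water world

print(f"rocky: {rocky:.0f} kg/m^3, water world: {watery:.0f} kg/m^3")
```

The larger planets come out around 3,500 kg/m^3 versus roughly 8,200 kg/m^3 for the smaller ones, i.e. far too light for bare rock, which is consistent with a substantial water fraction.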
Source: https://www.thehansindia.com/posts/index/Hans/2018-08-20/Water-rich-planets-outside-our-solar-system-common/406633
With the increased use of computers, and particularly e-learning, it is easy to ask yourself: is handwriting still important? Psychologists and neuroscientists say it is far too soon to declare handwriting no longer valuable. Evidence shows intrinsic links between handwriting and broader educational development. Here is how handwriting trains the brain and improves children's cognitive function more than typing, and why handwriting is still important.
1. More areas of the brain are activated when writing than when typing.
A 2012 study asked children who had not yet learned to read or write to try writing and typing a letter of the alphabet freehand. When writing, the children exhibited increased activity in three areas of the brain that are activated when adults read and write: the left fusiform gyrus, the inferior frontal gyrus and the posterior parietal cortex. By contrast, the children who typed the letter showed no such effect. Dr. James, who ran the study, stated: “When a kid produces a messy letter, that might help them learn it.” Unlike typing, the sequential hand movements required in handwriting activate regions of the brain responsible for thinking, language, and memory.
2. Writing by hand sharpens critical thinking.
A University of Washington study looked at children’s ability to write sentences using both a pen and a keyboard, concluding that the children (aged between 6 and 10) “consistently did better, wrote more and they wrote faster” when they wrote the sentences by hand rather than typing them.
When writing by hand, not only did the children produce more words, but they expressed more ideas. The study’s brain imaging demonstrated that when these children were asked to come up with ideas, the ones with better handwriting exhibited greater neural activation in areas associated with working memory — and increased overall activation in the reading and writing networks.
3. Handwriting increases retention.
Children remember words better when writing them by hand than when typing them on a keyboard. A study at a university in California concluded that students who took handwritten notes were better able to answer questions on the lecture than those who used a laptop. This is because handwriting requires a preliminary process of summarising and comprehension; in contrast, those working on a keyboard do not ‘mentally engage’ with the information and thus cannot retain it as well.
4. Writing by hand improves reading comprehension.
Another study in France monitored children aged three to five, asking half the group to write letters by hand and the other half to type them on a computer. Those who wrote the letters by hand were better at recognising them than those who typed them. This is because the movements involved in writing by hand leave a motor memory, which helps you to recognise letters. Cursive handwriting (joined-up script, rather than printed letters) may even help children with dyslexia remember the order of letters in words better.
5. Handwriting helps develop cognitive motor skills.
Writing and typing require very different cognitive processes. “Handwriting is a complex task which requires various skills – feeling the pen and paper, moving the writing implement, and directing movement by thought,” says Edouard Gentaz, professor of developmental psychology. “Children take several years to master this precise motor exercise: you need to hold the scripting tool firmly while moving it in such a way as to leave a different mark for each letter.”
Operating a keyboard is not the same at all: all you have to do is press the right key. It is easy enough for children to learn very fast, but above all the movement is exactly the same whatever the letter.
When handwriting, you not only see the letter appear on the page, but also feel the movement of the pen or pencil as it moves along the page. This is a far more enriching experience with higher levels of neurosensory activity. The benefits of this become more apparent with younger children, as they learn important distinctions between the shapes of letters and gain a better understanding of language in general.
Source: https://lovewritingco.com/blogs/blog/step-away-from-the-keyboard-how-handwriting-benefits-children-s-brains
After 10 difficult years of work, researchers announced in 1993 that they had located the gene causing Huntington’s disease. Since then, the pace at which other disease-causing genes have been found has greatly accelerated, so that genes are typically located in no more than two years. Now a new tool that can reduce the time needed for a critical step involved in gene identification, from perhaps as much as 10 months to a day, should allow gene discoveries to be made much faster still.
The tool is a map showing the approximate positions of thousands of genes (of still largely unidentified function) along the genome, the complete set of our 3 billion chemical “base pairs” of DNA. The map, which is being developed by an international consortium of 104 scientists, including researchers at the Whitehead/MIT Center for Genome Research (CGR), was made possible by a concerted previous effort that identified the chemical makeup of more than 450,000 short sections of DNA that lead to the manufacture of protein fragments. Researchers recognize that since genes direct protein production, the short sections, known as complementary DNAs (cDNAs), are portions of genes and thus of potentially great value. But no widespread group of scientists has previously determined just what genes these DNA sections belong to.
The consortium workers began by taking copies of the cDNAs and, by comparing them for similarities, have created clusters, with each group representing an individual gene. From each cluster the researchers have then identified a representative cDNA and also looked for its presence in DNA sections whose locations on the genome are known from previous research. Where the cDNA shows up, as determined by a test that makes millions of copies of the representative material, suggests the proper map position for the cDNA and hence the probable location of that gene. To gain confidence in their placement decisions, the researchers have repeated this procedure. Because the task entails an enormous amount of repetitive work, Whitehead workers have relied on robots they earlier developed to compare stretches of DNA.
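The clustering step described above can be sketched in miniature. The fragment strings, k-mer size, and similarity threshold below are all invented for illustration; real pipelines use sequence-alignment tools rather than this toy shared-k-mer test:

```python
# Toy sketch of the clustering step: group short sequence fragments
# (stand-ins for cDNAs) so that each cluster represents one gene.

def kmers(seq, k=4):
    """All length-k substrings of a sequence, as a set."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similar(a, b, k=4, threshold=0.5):
    """True if the Jaccard similarity of the k-mer sets reaches the threshold."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb) >= threshold

def cluster(seqs, **kw):
    """Union-find clustering: fragments that pass the similarity test merge."""
    parent = list(range(len(seqs)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(seqs)):
        for j in range(i + 1, len(seqs)):
            if similar(seqs[i], seqs[j], **kw):
                parent[find(j)] = find(i)
    groups = {}
    for i in range(len(seqs)):
        groups.setdefault(find(i), []).append(seqs[i])
    return list(groups.values())

frags = ["ACGTACGTAC", "CGTACGTACG", "TTTTGGGGCC", "TTGGGGCCAA"]
print(cluster(frags))  # two clusters: the first two fragments together, the last two together
```

From each resulting cluster, a representative fragment would then be chosen for mapping, as the article describes.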
Rush to Publish
So far, the consortium members have positioned on the new “gene map” representative cDNAs corresponding to some 16,000 genes, of a total of perhaps 80,000 human genes, says Thomas J. Hudson, assistant director of CGR and recently also appointed assistant professor of medicine and human genetics at McGill University. These results, published October 25 in the journal Science, mean that the gene map today can help researchers rapidly locate genes of interest maybe one of every five times. But Hudson says he anticipates that the gene map could include perhaps 55,000 gene sites (corresponding to possibly two-thirds of all genes) within two years, with all information being added immediately to a new site on the World Wide Web. He notes that the consortium, which also includes researchers at the National Center for Biotechnology Information at the National Institutes of Health (NIH), Stanford and Oxford universities, the nonprofit institutes Généthon in France and the Sanger Centre in England, decided to publish the results after only one and a half years of work because the information could prove so valuable to genetic research.
A gene hunter can use the new map after first carefully studying families with a certain disease or trait and finding a large region of DNA inherited along with the condition. The map then immediately indicates to the researcher at least some of the genes lying in that region. In the past, finding candidate genes meant searching through amounts ranging from tens of thousands to perhaps a couple of million DNA base pairs (a step that could take months) for series of chemical units indicating various portions of genes. Removing any part of that process reduces the research period. The final step in identifying the correct gene remains the same: the scientist has to figure out which gene mutates to result in the condition of interest.
Consortium members have also included on the Web site some additional information on the mapped genes. By comparing the chemical makeup of those genes with that of other species’ genes whose functions are already known, the researchers have been able to make educated guesses about the function of one-fifth of the genes listed on the site. Such details could make gene hunters’ tasks easier still.
Francis S. Collins, the director of NIH’s National Center for Human Genome Research, points out that the evolving gene map will not only speed up efforts to find single genes associated with certain diseases and traits but will be essential for locating the suites of genes associated with common conditions such as diabetes and obesity in which more than one gene plays a role. The use of “brute force” alone (the technique that had to be employed before the development of the gene map) to genetically identify the causes of such complex diseases would be “extremely difficult,” he says, because many genes are involved. Without the new tool, he points out, researchers would be stuck with “a whale of a lot of DNA” to pick through.
Source: http://www.technologyreview.com/article/400008/putting-a-rush-on-identifying-genes/
Overview of transient liquid phase and partial transient liquid phase bonding
Cite this article as: Cook, G.O. & Sorensen, C.D. J Mater Sci (2011) 46: 5305. doi:10.1007/s10853-011-5561-1
Transient liquid phase (TLP) bonding is a relatively new bonding process that joins materials using an interlayer. On heating, the interlayer melts and the interlayer element (or a constituent of an alloy interlayer) diffuses into the substrate materials, causing isothermal solidification. The result of this process is a bond that has a higher melting point than the bonding temperature. This bonding process has found many applications, most notably the joining and repair of Ni-based superalloy components. This article reviews important aspects of TLP bonding, such as kinetics of the process, experimental details (bonding time, interlayer thickness and format, and optimal bonding temperature), and advantages and disadvantages of the process. A wide range of materials that TLP bonding has been applied to is also presented. Partial transient liquid phase (PTLP) bonding is a variant of TLP bonding that is typically used to join ceramics. PTLP bonding requires an interlayer composed of multiple layers; the most common bond setup consists of a thick refractory core sandwiched by thin, lower-melting layers on each side. This article explains how the experimental details and bonding kinetics of PTLP bonding differ from TLP bonding. Also, a range of materials that have been joined by PTLP bonding is presented.
Transient liquid phase (TLP) bonding
Transient liquid phase (TLP) bonding is a joining process that was developed to improve upon existing bonding technologies. Specifically, this process was patented by Paulonis et al. in 1971 to overcome deficiencies of then current bonding techniques in joining Ni-based superalloys [2, 3, 4, 5, 6]. TLP bonding’s main advantage is that resulting bonds have a higher melting point than the bonding temperature. This bonding process characteristically lies between diffusion bonding and brazing—for this reason, it is commonly called diffusion brazing. The process is also referred to by names such as transient insert liquid metal bonding and is sometimes mistakenly referred to as diffusion bonding (which by definition relies solely on solid-state diffusion). See reference for a detailed history of TLP bonding and its many names.
TLP bonding process
The process consists of four stages: (1) setting up the bond; (2) heating to the specified bonding temperature to produce a liquid in the bond region; (3) holding the assembly at the bonding temperature until the liquid has isothermally solidified due to diffusion; and (4) homogenizing the bond at a suitable heat-treating temperature.
thin foil (rolled sheet) [2, 3, 4, 7, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62]
evaporating an element out of the substrate material to create a “glazed” surface.
Fixturing pressures used during TLP bonding
The bonding process is usually confined in a vacuum [3, 4, 5, 7, 12, 14, 15, 16, 17, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 35, 38, 39, 40, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 61, 62, 63, 65, 66, 68, 69, 70, 71, 72, 76, 77, 78, 79, 80, 81, 82, 83, 86, 87, 88, 93, 94, 95, 96, 97, 98, 99, 100, 103, 104, 105, 108, 110, 112, 113, 114, 115, 117, 119, 122, 123, 124, 125, 126, 127, 130, 132, 133, 134, 135, 136, 137, 138, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157], although an inert atmosphere, such as argon, can be used [6, 11, 14, 32, 33, 43, 45, 60, 67, 74, 75, 90, 111, 121, 134, 158, 159]. On rare occasions, TLP bonding is performed under a different atmosphere, such as nitrogen, hydrogen, nitrogen and hydrogen, or open air. The vacuum pressures used in the experiments referenced above are normally distributed about 0.1 μmHg (millitorr) with minimum and maximum values of 0.00015 and 34 μmHg, respectively.
TLP bonding kinetics
The kinetics of the process comprise four stages: melting of the interlayer, dissolution (melt-back) of the substrate material, isothermal solidification, and homogenization of the bond region.
Concentration profile 1 (CP1) in Fig. 1 shows the TLP bonding setup at room temperature. The interlayer element (i) is sandwiched between two pieces of the substrate material element (s). The thickness of the bond region in Fig. 1 has been exaggerated to display changes in the concentration profile. The interlayer can be composed of a single element, an alloy, or a multi-layer combination of elements and/or alloys.
Interlayer thicknesses for TLP bonding
Thickness range (μm)
Common thickness(es) (μm)
[5, 8, 10, 20, 21, 22, 23, 24, 25, 34, 35, 39, 44, 48, 54, 59, 60, 61, 62, 63, 66, 68, 74, 75, 76, 78, 79, 80, 82, 87, 89, 105, 107, 110, 117, 118, 120, 121, 122, 123, 126, 131, 134, 137, 141, 142, 143, 145, 146, 147, 150, 152, 155, 166, 167, 168, 169, 170, 171]
[2, 5, 20, 26, 27, 32, 33, 36, 37, 40, 42, 46, 51, 52, 55, 56, 57, 58, 59, 60, 64, 69, 70, 71, 77, 78, 81, 83, 84, 87, 88, 90, 101, 115, 119, 124, 125, 126, 127, 128, 135, 136, 145, 148, 149, 153, 155, 157, 166, 167, 172]
As the bond assembly is heated, the interlayer begins to diffuse into the substrate materials (CP2). The amount of diffusion that occurs is dependent upon the interdiffusion coefficient between the substrate and interlayer materials as well as the heating rate.
Upon reaching the interlayer element’s melting point (CP3), the pure portion of the interlayer liquefies (L). Heating of the bond region continues until the bonding temperature has been reached. The bonding temperature is usually well above the interlayer’s melting point to ensure complete melting of the interlayer and to increase the rate of diffusion (see Optimal bonding temperature).
During heating past the melting point, the concentrations of the liquid region follow the solidus (c4,S) and liquidus (c4,L) lines of the phase diagram (CP4). This causes the liquid region to melt back, or dissolve, the substrate material to conserve mass. The movement of the solid–liquid interface continues until the bonding temperature has been reached (CP5a); at this point the liquid has attained its maximum width and has consumed some of the diffused solute. The amount of melt-back is dependent upon the solidus (c5,S) and liquidus (c5,L) compositions for the given material system at the bonding temperature (see Optimal bonding temperature). The two main effects that lower melt-back distance are (1) significant diffusion of the interlayer material into the substrate before melting (see Critical interlayer thickness) and (2) loss of liquid due to wetting of the substrate’s sides or a high bonding pressure that squeezes liquid out [22, 44, 62, 134, 165].
Many materials that are joined by TLP bonding have carefully designed microstructures to achieve certain mechanical properties. Too much melt-back of the substrate by the liquid interlayer can have detrimental effects on the final bond in addition to lengthening the isothermal solidification time (see Critical interlayer thickness). And, in some systems melt-back can reach five to fifteen times the original interlayer thickness [43, 121, 146]. To prevent drastic melt-back that can adversely affect the microstructure, the interlayer should be thin [57, 62], of a eutectic composition, or of a composition similar to the substrate material.
After the liquid interlayer has reached its maximum width, the interlayer material diffuses into the substrates at a rate somewhere between the diffusivity of the liquid and solid [25, 43, 85, 163]. As this diffusion occurs isothermally, the liquid region contracts (CP5b) to conserve mass as the solidus and liquidus concentrations are now fixed. Isothermal solidification occurs until all of the liquid has disappeared (CP5c). At this point, the TLP bonding process can be stopped if desired. The bond already has an elevated remelting temperature (T5) compared to the melting temperature of the interlayer (T3).
In most cases TLP bonding is continued in order to homogenize the bond. This can be an extended time in the same heating apparatus or a post-bond heat treatment applied at some other time. Furthermore, if the substrate material’s microstructure is extremely sensitive, this stage can be conducted at a lower temperature [2, 9]. In either case, the bond undergoes homogenization for some predetermined time which causes smoothing of the solute peak (CP5d) that remained at the end of isothermal solidification (CP5c). The resulting remelting temperature of the bond in this case is TR,P. If the peak concentration (c5,P) is within the room-temperature solid-solubility limit of the binary system, the precipitation of strength-reducing intermetallic compounds upon cooling will be avoided [12, 110, 147, 150, 160].
If the bond is homogenized for a sufficient amount of time, there is no gradient in the concentration profile (CP5e) and the bond’s remelting temperature is even higher (TR,F). However, despite the increases in bond remelting temperature that can be achieved by complete homogenization, an adequate homogenization time is usually determined by a sufficiently high bond strength [2, 24, 26, 42, 83, 103, 122, 123, 146, 150] or economic considerations that limit furnace time [77, 125, 173]. Nonetheless, the bond’s remelting temperature is often hundreds of degrees (°C) above the melting point of the interlayer and can be about 1000 °C higher if refractory metals such as Ir, Mo, Nb, Os, Re, Ta, or W are used as the substrate or if low melting-point metals such as Al, Ga, In, Mg, Pb, Sb, Sn, or Zn are used as the interlayer.
Time frame of TLP bonding
Heating to the bonding temperature, CP1–5a: less than a minute to about an hour; dependent on the method of heating, the heating rate of the heating apparatus, and the substrate material’s thermal properties
Isothermal solidification, CP5a–5c: minutes [7, 25, 36, 37, 56, 74, 77, 95, 96, 105, 121, 125, 153] to hours [7, 11, 33, 36, 37, 43, 57, 58, 105, 146, 157, 175], although it can occur in less than a minute [73, 161] or take more than a day
The general trend is that initial melting of the interlayer occurs an order of magnitude faster than melting back of the substrate, which occurs an order of magnitude faster than isothermal solidification, which occurs an order of magnitude faster than complete homogenization. Isothermal solidification ends up being the limiting, or controlling, time in producing a successful TLP bond [12, 36, 104, 125, 135, 138, 166, 167]. While the homogenization stage takes longer if carried to completion, it rarely is. As previously stated, homogenization can be performed during a subsequent heat treatment or skipped in some cases; it can even occur once the part is in service.
The foregoing explanation of TLP bonding kinetics also applies to eutectic systems when the interlayer is a eutectic composition alloy. The kinetics are slightly different (and more importantly, TLP bonding takes much longer) for a eutectic system when pure elements are used. (See references [19, 44] for particulars of eutectic system kinetics). Tuah-Poku et al. reported a drastic decrease in time to isothermally solidify by changing the interlayer: 200 h when using the pure element as compared to 8 h when using the eutectic composition. This occurs because (1) the interlayer has to undergo a certain amount of solid-state diffusion with the substrate at the bonding temperature before any liquid appears and (2) melt-back of the substrate is then greater.
Critical interlayer thickness
During initial heating, the interlayer element diffuses into the substrates. The magnitude of diffusion depends upon the specific material combination, but all solid-state diffusion rates increase as the temperature rises. Depending on the heating rate and the thickness of the interlayer, the amount of diffusion can significantly decrease the interlayer’s width. In fact, for a combination of high diffusion rate, slow heating, and/or thin interlayer, it is possible to diffuse all of the interlayer material into the substrate before reaching the interlayer melting point [114, 173], although this is a rare occurrence. Because TLP bonding requires the formation of a bulk liquid phase [9, 109, 114] to create a consolidated, void-free bond while also increasing diffusion rates, the interlayer must exceed a minimum, or critical, thickness [105, 114].
In addition to the parameters listed above, the critical interlayer thickness has been shown to depend on other variables such as applied clamping force, solid/liquid surface tension, surface roughness of the substrate, and intermetallic formation [18, 94, 114]. In short, experiments must be conducted for each material combination to empirically reveal its critical interlayer thickness.
On the other hand, analytical models of TLP bonding indicate that the isothermal solidification process time is roughly proportional to the square of the interlayer thickness [9, 18, 19, 25, 36, 44, 86, 104, 110, 114, 124, 138, 161, 173, 175, 176]; experimental data often corroborates this trend [3, 26, 43, 44, 71, 86, 88, 153]. Therefore, to minimize bonding time, an interlayer slightly thicker than the critical thickness is ideal.
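That square-law relationship makes quick estimates easy. A minimal sketch (the reference time and the interlayer widths are made-up numbers, not data from the cited studies):

```python
def scaled_time(t_ref, w_ref, w_new):
    """Estimate isothermal solidification time for a new interlayer width,
    assuming the time scales with the square of interlayer thickness."""
    return t_ref * (w_new / w_ref) ** 2

# If a 25 um interlayer solidifies in 1 h, a 50 um interlayer under the
# same conditions should need about (50/25)^2 = 4 h.
print(scaled_time(1.0, 25.0, 50.0))  # 4.0
```

This is why an interlayer only slightly thicker than the critical thickness minimizes bonding time: halving the width cuts the solidification stage roughly fourfold.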
Optimal bonding temperature
The bonding temperature is sometimes completely limited by the microstructural stability of the substrate material [7, 9, 87, 125]. If, however, the substrate material allows flexibility in selecting an optimized bonding temperature, a minimum isothermal solidification time (and therefore bonding time) can be achieved at a certain temperature.
If phase diagram and diffusion data are available for the material system in question, the isothermal solidification time can be characterized with respect to temperature. However, it is usually the case that experiments are the only way to discover this relationship. In general, the relationship is parabolic, yielding a minimum isothermal solidification time at a given intermediate temperature (between the melting points of the interlayer and substrate materials) [9, 18, 43, 50, 78, 79, 159, 162]. And yet, in some cases the variables of the system yield either (1) a monotonically increasing time, in which case the optimal bonding temperature is just above the interlayer’s melting point, [6, 9, 43, 114, 158, 162] or (2) a monotonically decreasing time, in which case the optimal bonding temperature is as high as the substrate material allows [4, 104, 114, 135, 141].
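The origin of that parabolic relationship can be illustrated numerically: diffusivity rises with temperature (Arrhenius behavior), but melt-back width also grows as the bonding temperature approaches the substrate's melting point, so an estimate of the form t ∝ W²/D can have an interior minimum. Every constant below is hypothetical, chosen only to make the competition visible:

```python
import math

def t_iso(T):
    """Relative isothermal solidification time at bonding temperature T (K),
    using invented material constants for illustration only."""
    Tm_sub = 1750.0                                  # K, assumed substrate melting point
    D = 1e-4 * math.exp(-200e3 / (8.314 * T))        # m^2/s, Arrhenius diffusivity
    W = 5e-6 * (Tm_sub - 1400.0) / (Tm_sub - T)      # m, melt-back widens near Tm
    return W * W / D                                 # t ~ W^2 / D (constant factor dropped)

temps = [1400 + 10 * i for i in range(35)]           # candidate bonding temperatures, K
T_opt = min(temps, key=t_iso)
print(T_opt)  # an intermediate temperature, not either endpoint
```

With these (invented) numbers the minimum falls near the middle of the range; changing the activation energy or the liquidus slope pushes the optimum toward either endpoint, reproducing the monotonic cases described above.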
Systems a and b in Fig. 3 have the same convex-shaped liquidus line and therefore experience the same amount of melt-back (see the top concentration profiles). The same is true for the concave-shaped liquidus line in systems c and d. Systems a and c have the same partition coefficient (0.9). The same is true for systems b and d (0.4).
If the substrate material has a sensitive microstructure that could be damaged by significant melt-back, then phase diagrams such as systems c and d should be avoided. Systems b and d will take much longer to isothermally solidify, raising operating costs.
Systems a and c are quite similar in rate of isothermal solidification. Because the solidus composition of system c is closer to the completely homogenized composition (shown as an x on the gray line), homogenization of the solute peak after isothermal solidification will likely proceed more rapidly than in system a. However, because the solidus line of system a has a convex shape, increases in bond remelting temperature due to homogenization will likely occur faster and be larger in this system.
Modeling of TLP bonding
Analytical models have been developed by many researchers for the four stages of TLP bonding to provide quick estimates or general trends, such as those illustrated in the previous section. Equations for and descriptions of TLP bonding analytical models are included in references [8, 19, 162, 178]. Assumptions made for these models are similar to those made in this article (see TLP bonding kinetics). In some cases these equations provide good results, but for many systems these simplified, binary-system approaches do not supply accurate estimates [5, 8, 38, 78, 79, 86]. This is due in part to the diffusion coefficients being assumed independent of composition.
Some complexities of TLP bonding are quite difficult to model. For example, grain boundaries can cause isothermal solidification to occur at a different rate than that predicted by analytical models using a bulk diffusion coefficient [25, 36, 37]. Indeed, grain boundary diffusion is faster than bulk diffusion in a certain temperature range (based on the alloy’s melting point). Grain boundary diffusion rates also increase as the substrate material’s grain size decreases. Further, grain boundaries can be penetrated by the liquid to cause a non-planar solidification front, thereby increasing the area over which diffusion occurs [5, 11, 18, 138]. See references [180, 181] for more information on the effect of grain boundaries in TLP bonding.
Another interesting deviation is that isothermal solidification can occur in two different “regimes” [38, 50, 138]. The faster solute element of a multi-component interlayer controls the rate of solidification for the first regime. Then, a second solute element controls the rate of solidification during the second regime, resulting in complex concentration–time profiles.
Numerical models can account for some of the complexities of TLP bonding to accurately predict bonding kinetics [7, 8, 161, 162, 168, 173, 176, 179, 182, 183] and can even be extended to multi-component systems [174, 184]. Despite the complexities and extra time required in numerical modeling, especially for multi-component systems, the limiting factor is most often the lack of necessary diffusion data [7, 8, 178]. But, when the necessary data is available, modeling of TLP bonding can drastically reduce the number of experiments required to determine optimal bonding parameters [37, 162, 166].
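As a minimal illustration of the kind of computation such numerical models are built on, the sketch below advances a one-dimensional solute concentration profile by explicit finite differences (dimensionless, invented parameters). Real TLP models additionally track the moving solid-liquid interface and composition-dependent diffusivities:

```python
def diffuse(c, D, dx, dt, steps):
    """Advance concentration profile c by the explicit FTCS scheme.
    Left boundary is held fixed (liquid at the liquidus concentration);
    right boundary is zero-flux. Returns the new profile as a list."""
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit scheme stability limit"
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            new[i] = c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
        new[-1] = new[-2]  # zero-flux far boundary
        c = new
    return c

# Solute source at the left edge diffusing into an initially solute-free substrate
n = 50
c0 = [1.0] + [0.0] * (n - 1)
profile = diffuse(c0, D=1.0, dx=1.0, dt=0.4, steps=200)
```

Such a diffusion step, coupled to an interface mass balance, is what lets numerical models predict isothermal solidification times without closed-form solutions.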
Advantages and disadvantages of TLP bonding
The most distinctive advantage of TLP bonding is that the resulting bond can operate at the bonding temperature or higher temperatures. In other words, materials can be bonded at a temperature equal to or lower than what the assembled part will experience in service. This is especially important for temperature-sensitive materials whose microstructures can be damaged by too much thermal energy input and therefore need to be joined at lower temperatures.
Another advantage is that the resulting TLP bonds often have microstructural, and therefore mechanical, properties similar to the properties of the base materials [7, 12, 13, 14, 24, 32, 42, 49, 61, 62, 75, 77, 81, 95, 105, 109, 115, 119, 123, 141, 149, 153, 166, 175]. In fact, in some cases the bond area becomes indistinguishable from other grain boundaries [18, 35, 37, 68, 108, 109, 130, 185] due to significant diffusion at high temperature. Such bonds are often as strong as the bulk substrate material [14, 164], or stronger, causing the joined assembly to fail in the substrate material rather than in the bond [14, 31, 71, 76].
The process is highly tolerant of the presence of a faying surface oxide layer [2, 6, 7, 11, 13, 21, 42, 47, 48, 56, 67, 80, 93, 136, 147, 148, 149, 186] and therefore requires less joint preparation and no fluxing agents [11, 18, 42, 173, 187]; in a few rare cases, surface oxides are actually beneficial to the process.
For some material systems, TLP bonding makes accessible bond properties and performance capabilities that are difficult or impractical to achieve using conventional joining methods.
See reference for examples of specific difficulties that occur in TLP bonding applications. Although many disadvantages of TLP bonding can be overcome by optimized bonding parameters, the optimization process often requires much experimentation.
Applications of TLP bonding
A spectrum of materials joined by TLP bonding
NB 30, NB 150, BNi-3, MBF-60, MBF-80, DF-3
Ni–B, Ni–Cr–Si–Fe–B, MBF-80
F20, F24, F25, F26, F27, MBF-80
F20, F24, F25, F26, F27, MBF-80
Ni–Ge, Ni–Mn, Ni–Mn–Si, D-15
Ni–15Cr–11.5Al–3 W–0.2Hf–0.1Si–0.1Mn (γ/γ′/β type)
MBF-80, Ni–Cr–B–Ce (various combinations)
Ag, Al–Si, BAg-8
Ni–Cr, 304L SSc, BNi-2
Cu, Fe–B–Si, Ni–Si–B, MBF-20, MBF-30, MBF-35, MBF-50, MBF-80
Ni–B–Cr–Si (various combinations)
Fe–B–Si, BNi-1a, BNi-3
Low carbon steel
ODSc steel (Fe–Cr–W–Y2O3–Ti)
Fe–B–Si, Fe–Ni–Cr–Si–B, BNi-2
Cu–Ti, Ni–Ti, Ti–Cu–Zr
Co alloy (unspecified)
Cu | Sn | Cu
Cu | Sn | Cu
Sn–Bi, Bi–Sn (various combinations)
IC 6 (with and without B)
PWA 1483 (Ni–Cr–Co–Ta–Ti–W–Al–Mo)
Ni (glaze), BNi-3
Ti | Cu, Ti | Ni, Ti | Fe
Ti–45Al–2Nb–2Mn (a) + 0.8 vol.% TiB2
Ti–Cu–Ni, Cu–Ni | Ti | Cu–Ni
γ-TiAl [Ti–47Al–2Cr–2Nb (a)]
Cu, Cu & Ti–Al–Cr–Nb, Cu & TiAl
Gamma Met PX
Cu & Gamma Met
Ag, Cu, Ga, Al–Cu, Al–Si–Cu
Au–Sn, Sn, In, Ti | In
Ag, Sn, Ag–Cu, BiIn, BiIn2, BiSn, InSn, NB 51
Sb, Fe–P, Fe–B
Ti, Zr, V
B, Cu, Hf, BNi-3, BNi-6, MBF-60, MBF-80
Metal matrix compositesd
Ag, Cu, Al–Cu, Cu–Ti
Haynes 230 doped with B
Al, Al & SiO2, B2O3
Mar-M247 (directionally solidified)
Cu–Cr–Zr and Cu (ODSc)
Cu | Sn | Cu
Inconel 738 and 939
BNi-3, Niflex-110, Niflex-115
M963 (Ni–W–Co single crystal)
NiAl-Hf (single crystal)
Cu, NiAl & Cu, Ni3Al & Cu
Low carbon steel
TS7 (Ti alloy)
5VMTs (Nb alloy) with W, Mo, & Zr; and TV10 (Ta alloy) with W
Ti–45Al–2Nb–2Mn (a) + 0.8 vol.% TiB2
Metals to Metal matrix compositesd
Metals to Ceramics
Ni–Si | Mo
ODSc Fe alloy (Fe–Cr–Al–Y2O3)
W18Cr4 V tool steel
Ti(C,N) (50%TiC & 50%TiN)
Metal matrix compositesdto ceramics
Variants of TLP bonding
Wide-gap TLP bonding: gaps of 100–500 μm can be bonded or repaired by the use of a melting and a non-melting constituent (multiple layers or mixed powders) [7, 16, 57, 92, 94, 95, 96, 100, 101, 136, 149, 173, 195]. This technique can also be used in conventional TLP bonding to accelerate isothermal solidification [13, 99, 140]
Active TLP bonding: a ceramic and metal can be joined by a multi-component interlayer; at least one constituent reacts with the ceramic while another diffuses into the metal to cause isothermal solidification [28, 42, 52, 54, 116, 132, 196]
Partial TLP bonding (see next section).
Bonds made using temperature gradient, wide-gap, and active TLP bonding have been included in Table 3.
Partial transient liquid phase (PTLP) bonding
Partial transient liquid phase (PTLP) bonding is a variant of TLP bonding mainly used to join ceramics. PTLP bonding overlaps both wide-gap and active TLP bonding, although articles defining PTLP bonding predate the other two techniques by a few years. Many advantages of conventional TLP bonding carry over to PTLP bonding. The ensuing sections focus on how PTLP bonding differs from TLP bonding.
PTLP bonding process
A spectrum of materials joined by PTLP bonding
Cr | Cu | Ni | Cu | Cr, Cu | Nb | Cu, Cu | Ni | Cu, Cu | Ni–Cr | Cu, Cu | Pt | Cu, In | Ag ABAc | In, In | Cusil ABAc | In, In | Incusil ABAc | In, Ni | Nb | Ni, Ti | Al | Ti
Au | InBi | Au
Al | Ti | Al, Au | Ni–Cr | Au, Cu–Au | Ni | Cu–Au, Co | Nb | Co, Co | Ta | Co, Co | Ti | Co, Co | V | Co, Cu–Au–Ti | Ni | Cu–Au–Ti, Cu–Ti | Pd | Cu–Ti, Ni | Ti | Ni | Ti | Ni, Ni | V | Ni, Ti | Au | Cu | Au | Ni | Au | Cu | Au | Ti, Ti | Cu | Ti, Ti | Cu | Ni | Cu | Ti, Ti | Ni | Ti, Ti | Ni | 304SSc | Ni | Ti, Ti | Ni | Kovarc | Ni | Ti, V | Co | V
C | Si | C, Cu–Au–Ti | Ni | Cu–Au–Ti, Ni–Si | Mo | Ni–Si, Ti | Au | Cu | Au | Ni | Au | Cu | Au | Ti
Zn | Pd | Zn
Al | Ni | Al, Ni | Nb | Ni
Ni | Nb | Ni
Al / SiC
Cu | Ni | Cu
Si3N4 / TiC
Ti | Ni | Ti
Al 6061 / Al2O3
Cu | Ni | Cu
C / C
Ti | Ni | Ti
Metals to ceramics
FA-129 (Fe3Al alloy, Fe–Al–Cr–Nb)
Cu–Ti (ABAc) | Cu | Cu–Ti (ABAc), Cu–Ti | Cu | Ni | Al
Ni | Ti | Ni
Ti | Ni | Ti
Ti | Cu | Sn | Au | Cu
The refractory core tends to be a foil that is 20–30 μm [143, 207, 208, 209, 210, 211, 212, 213, 214] or 100–127 μm thick [116, 188, 189, 201, 207, 211, 212, 213, 215, 216, 217, 218, 219, 220, 221, 222], although it can be in the 200–1000 μm range [202, 203, 204, 205, 208, 223, 224, 225]. The refractory core element is often Ni [144, 189, 196, 201, 202, 203, 204, 207, 208, 209, 210, 218, 224, 226, 227]; other elements (and an alloy) that have been used include Au, Co, Cu, Nb, Ni–Cr, Pd, Pt, Si, Ta, Ti, and V [117, 188, 189, 200, 201, 202, 205, 206, 207, 208, 214, 215, 216, 217, 218, 222, 227, 228, 229]. The thin layers can take most of the formats used for TLP bonding interlayers (see TLP bonding process) and are often in the 1–10 μm thickness range. The ratio of the thin layer thickness to the refractory core thickness is usually 1–5% [143, 188, 189, 201, 202, 203, 204, 207, 208, 211, 212, 213, 215, 216, 217, 218, 220, 222, 223, 224, 225, 230, 231], although it can be 6–20% [207, 210, 211, 212, 213, 219, 221, 223], and some PTLP bond experiments have utilized a ratio of 50% or higher [116, 144, 208, 214, 226].
PTLP bonding kinetics
The second and fourth assumptions highlight the major differences between TLP and PTLP bonding. First, the multi-layer interlayer used during PTLP bonding has been termed “self-contained” because the liquid phases must diffuse into the refractory core (rc), rather than the much larger substrate materials, to induce isothermal solidification. Second, the liquid phases must wet the ceramic substrates to create a strong bond. This tends to be difficult due to the chemical inertness of ceramics [117, 180, 196, 216] and usually requires the use of active elements such as Al, Cr, Hf, Nb, Ni, Sc, Ta, Ti, V, or Zr [65, 117, 187, 189, 198, 200, 201, 202, 206, 215, 216, 218, 222, 228, 232, 233]. Also, when analyzing the critical interlayer thickness of the thin layers, a portion of the liquid that forms from those thin layers will react with the ceramic substrate and add to the critical thickness.
The PTLP bonding setup at room temperature is shown in Figs. 5 and 6 as concentration profiles 1A and 1B, CP1A and CP1B, respectively. Both binary systems exhibit complete solid solubility. As the temperature of the bond is raised to the melting points of each thin layer (T4 for tlA and T3 for tlB), both thin layers diffuse into the rc (see CP2A and CP3A as well as CP2B). Despite the small amount of liquid that initially forms from tlB due to its high diffusivity (CP3B), the liquid drastically melts back the rc on further heating (CP4B) due to the concave shape of the liquidus. This melt-back continues until the assembly is heated to the bonding temperature (T5) shown in CP5aB. On the other hand, the liquid formed from tlA (CP4A) widens slightly to be about the same width as the original thin layer (CP5aA) due to that system’s convex liquidus.
At this point, isothermal solidification occurs on both sides of the multi-layer interlayer. It proceeds much faster for tlB due to its high partition coefficient and diffusivity. In fact, isothermal solidification is complete for tlB (CP5bB) when the other liquid region has only solidified about halfway (CP5bA), despite the considerable melt-back of the rc.
The liquid formed from tlA eventually solidifies isothermally (CP5cA). On the other side of the bond, the solute peak has been smoothed due to homogenization (CP5cB), and the remelting temperature on that side has increased to TR,P.
Further homogenization causes the remaining gradient in the tlB element to disappear (CP5dB), thereby raising the remelting temperature of the bond next to substrate B to its final value, TR,F. A similar melting temperature increase (to TR,P1) simultaneously occurs on the other side of the bond due to smoothing of its solute peak (CP5dA).
Prolonging the homogenization process continues to raise the remelting temperature of the left side of the bond. However, once its remelting temperature has reached TR,P2 (CP5eA), which is higher than TR,F for the right side of the bond, further homogenization will have little effect on raising the bond’s remelting temperature. From an optimization standpoint, homogenization should be stopped at this time. However, real-world considerations usually determine the homogenization time, which can be less than—or greater than—the optimized time due to various factors, such as cost, microstructural considerations, or resulting bond strength.
The time frame of PTLP bonding is very similar to that of TLP bonding. Isothermal solidification and homogenization times for TLP bonding depend on high-diffusivity elements diffusing into “infinite” substrate materials. In PTLP bonding, the elements tend to have lower diffusivities, but the maximum diffusion path is on the order of 100 μm, resulting in similar bonding times.
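The claim above can be checked with the standard diffusion scaling t ~ L²/D. The sketch below compares the two cases using purely illustrative diffusivities and path lengths (none of these numbers come from this article).

```python
# Back-of-envelope check of the time-frame claim: bonding time scales
# as L^2 / D. Both diffusivities and both path lengths are assumptions
# chosen only to illustrate the comparison.
cases = {
    "TLP  (fast diffuser, long path into substrate)": (300e-6, 1e-12),
    "PTLP (slow diffuser, ~100 um refractory core)":  (100e-6, 5e-14),
}
for name, (L_path, D_sol) in cases.items():
    t = L_path**2 / D_sol            # characteristic diffusion time, s
    print(f"{name}: t ~ {t/3600:.0f} h")
```

Despite diffusivities differing by more than an order of magnitude, the shorter diffusion path in PTLP bonding brings the two characteristic times to the same order of magnitude, consistent with the observation of similar bonding times.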
Advantages and disadvantages of PTLP bonding
Because diffusion occurs on a smaller scale (on the order of 100 μm), bonding using slow-diffusing elements occurs in a reasonable amount of time.
Matching the thermal expansion coefficients of the ceramic substrates and metallic interlayer elements is sometimes necessary to prevent thermally induced stresses and cracking [117, 143, 188, 196, 203, 215].
However, most disadvantages of PTLP bonding can be overcome by proper design. In the end, the limiting factor is the wettability of the liquid on the specific ceramic material.
Applications of PTLP bonding
TLP bonding is a relatively new bonding process that results in a bond with a higher melting temperature than that used to join the materials. Specific details of this process, including experimental details, process kinetics, and optimal bonding temperature, have been outlined in this article. Also, the broad range of materials that have been joined by TLP bonding was presented.
PTLP bonding, a more recent variant of TLP bonding used to bond hard-to-join materials, was also outlined. PTLP bonding has been successful in joining a smaller range of materials, most notably, ceramics.
Both TLP and PTLP bonding are specialized joining processes that require more resources to implement compared to typical bonding processes. However, in some cases these bonding processes are the best—or only—way to join materials for specialized applications.
A numerical model was developed to calculate one-dimensional solid-state diffusion in conjunction with a liquid region that expands or contracts, assuming infinite diffusivity for the liquid region. The model also accounts for a heating period and diffusivity data as a function of concentration and temperature. Diffusivity data along with solidus and liquidus profiles for a hypothetical binary system were used to output concentration profiles that were the basis for the concentration profiles in Figs. 1, 3, 5, and 6.
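A model of this kind can be approximated in a few dozen lines. The sketch below is not the authors' code: it is a minimal illustration of the same ingredients — one-dimensional explicit finite-difference diffusion in the solid, a well-mixed liquid (infinite liquid diffusivity), and a Stefan flux balance that contracts the liquid region. It uses a constant diffusivity and hypothetical numbers, whereas the model described above handles heating and concentration- and temperature-dependent diffusivity.

```python
# Minimal 1-D TLP solidification sketch (illustrative only).
# Assumptions: explicit FTCS diffusion in the solid, well-mixed liquid
# at the liquidus composition CL, solid interface pinned at the solidus
# CS, and a Stefan flux balance shrinking the liquid. The grid is held
# fixed, which is acceptable here since the liquid width W << L.

D  = 1e-13                   # solute diffusivity in the solid, m^2/s (assumed)
CL, CS, C0 = 0.20, 0.05, 0.00   # liquidus / solidus / substrate compositions
L, N = 200e-6, 200           # modeled substrate depth (m) and grid points
dx = L / N
dt = 0.4 * dx * dx / D       # time step satisfying explicit stability

W = 10e-6                    # initial liquid half-width, m
C = [C0] * N                 # solute profile in the solid (interface at i = 0)
t = 0.0

while W > 0:
    C[0] = CS                          # solid interface held at the solidus
    new = C[:]                         # FTCS update of interior nodes
    for i in range(1, N - 1):
        new[i] = C[i] + D * dt / dx**2 * (C[i+1] - 2*C[i] + C[i-1])
    new[-1] = new[-2]                  # zero-flux far boundary
    C = new
    flux = -D * (C[1] - C[0]) / dx     # solute flux into the solid (> 0)
    W -= flux * dt / (CL - CS)         # Stefan condition: liquid shrinks
    t += dt

print(f"liquid gone after ~{t/3600:.1f} h of simulated hold time")
```

For these assumed values the liquid closes in a few hours, in reasonable agreement with the closed-form error-function estimate for the same inputs, which is a useful sanity check on any such model.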
This study was funded by the Office of Naval Research under grant number N00014-07-1-0872, Dr. William Mullins, Program Officer.
David's Activities in Jerusalem
(2 Sam 5.11–16)
1King Hiram of Tyre sent a trade mission to David; he provided him with cedar logs and with stonemasons and carpenters to build a palace. 2And so David realized that the Lord had established him as king of Israel and was making his kingdom prosperous for the sake of his people.
3There in Jerusalem, David married more wives and had more sons and daughters. 4The following children were born to him in Jerusalem: Shammua, Shobab, Nathan, Solomon, 5Ibhar, Elishua, Elpelet, 6Nogah, Nepheg, Japhia, 7Elishama, Beeliada (called Eliada in 3.8), and Eliphelet.
Victory over the Philistines
(2 Sam 5.17–25)
8When the Philistines heard that David had now been made king over the whole country of Israel, their army went out to capture him. So David marched out to meet them. 9The Philistines arrived at the Valley of Rephaim and began plundering. 10David asked God, “Shall I attack the Philistines? Will you give me the victory?”
The Lord answered, “Yes, attack! I will give you the victory!”
11So David attacked them at Baal Perazim and defeated them. He said, “God has used me to break through the enemy army like a flood.” So that place is called Baal Perazim (a name which in Hebrew means “Lord of the Break-through”). 12When the Philistines fled, they left their idols behind, and David gave orders for them to be burnt.
13Soon the Philistines returned to the valley and started plundering it again. 14Once more David consulted God, who answered, “Don't attack them from here, but go round and get ready to attack them from the other side, near the balsam trees. 15When you hear the sound of marching in the treetops, then attack, because I will be marching ahead of you to defeat the Philistine army.” 16David did what God had commanded, and so he drove the Philistines back from Gibeon all the way to Gezer. 17David's fame spread everywhere, and the Lord made every nation afraid of him.
All 4 Non-Human Apes – A Detailed Comparison
As our closest relatives, apes are some of the most fascinating creatures on earth. In total, there are 27 species of ape, which include chimpanzees, bonobos, gorillas, orangutans and gibbons. In addition to the obvious differences in physical appearance, there are many elements that make each of these types of animals quite unique and interesting. In this article, we’ll compare the four different types of apes from their behavioural tendencies and reproduction to physical characteristics and location. Let’s dive in!
- Reproduction & Life Span/Cycle
- Taxonomy, Location, Population & Conservation
- Physical Characteristics
- Ecology, Diet & Movement
Reproduction & Life Span/Cycle
Chimpanzees and bonobos are born after a gestation period of 8 months and, sharing a powerful bond with their mother, will cling to her chest for the first 6 months. The baby is then carried on its mother’s back until around 2 years of age but will not become fully weaned until 4 or 5. Independence occurs after 7-9 years but chimpanzees won’t reach sexual maturity until 13 for females and 15 for males. Upon reaching maturity, females will venture off to find another community, preventing inbreeding, whereas males will stay with theirs for life. Chimps live for 40-50 years in the wild and an adult female gives birth once every 5 years.
The reproductive cycle of a gorilla is shorter, producing offspring every four years after a slightly longer gestation period of 8.5 months. Like chimpanzees, gorilla offspring are helpless when born and cling to their mother’s chest until around 4 months old. At this point, a young gorilla will ride on its mother’s back until around 2-3 years of age. Independence is reached after just 3-4 years but sexual maturity takes 10 for females and 15 for males. Gorillas will also live for 40+ years in the wild but with this species, it is usually the males who leave to establish new troops. Some males will stay, however, and queue for dominance of the troop, which some take over from their father.
Lifespan is a similar story for orangutans at 35-40 years but their reproductive cycle is quite different to that of the African apes; at 8 years, they have the longest interbirth interval of any land mammal, meaning females usually have no more than 3 babies during their lifetime. Gestation is around the same as a gorilla at 8.5 months and like the other great apes, their offspring, who have white patches on their face and bodies, will be carried everywhere by their mother for the first 4 months and will remain by her side until 6-8 years of age. Sexual maturity depends on gender and species and can range anywhere from 6 to 11 years for Bornean females all the way up to 15 to 24 years for Sumatran males.
The life cycle of a gibbon is also quite different from other apes. Gibbons produce offspring every 2-3 years normally bearing a single child after a shorter gestation period of around 7 months. Depending on species, weaning is usually complete within 18 to 24 months, around half that of the great apes but independence is still not reached until 6 to 8 years of age when they will leave their family unit to start their own. The lifespan of a gibbon is also shorter than that of the great apes at around 25-30 years.
Taxonomy, Location, Population & Conservation
Gibbons are classed as lesser apes and are found in a separate family to the greater apes, Hylobatidae. It is by far the most diverse group of apes, with 20 of the 27 species being found within this family. Most of their population is found in mainland Southeast Asia but, like orangutans, gibbons are also present on the islands of Borneo and Sumatra. Their range is split geographically between their four genera, with the range of Hylobates overlapping with that of the monotypic genus Symphalangus (the siamang).
Of the 20 species of gibbon, 1 has a conservation status of vulnerable, 14 are endangered and a further 5 are critically endangered. Some of the species most at risk are the newly discovered Skywalker hoolock gibbon, with just 150 individuals and the eastern black crested gibbon, who number no more than 50. Like other apes, poaching and habitat destruction are both to blame for much of their population decline but the latter is even more of an issue for gibbons and orangutans who spend the vast majority of their time in the trees.
Orangutans are the most distant of the great apes from humans and although they are found in the same family as gorillas and chimpanzees, they are contained within their own subfamily, Ponginae, which contains a single genus, Pongo. Bornean and Sumatran orangutans are found on their respective islands with a 3rd species identified in 2017, the Tapanuli orangutan, being found to the south of the Sumatran population.
All three of these species are critically endangered. The Bornean orangutan has the highest numbers, with a population of around 100,000 but this drops off significantly on the other side of the Javan sea with the Sumatran orangutan down to just 7,500 individuals and the Tapanuli orangutan down below 800.
Gorillas are the first of the apes we’ll discuss in the subfamily, Homininae. There are two species found in separate locations on either side of Central Africa, divided by the Congo river and its tributaries. Each of these species is split further, geographically, by their respective subspecies; the Cross River gorilla, the western lowland gorilla, the eastern lowland gorilla and the mountain gorilla.
Habitat destruction and hunting for the bushmeat trade also pose a threat to gorillas. Both species of gorilla are critically endangered; the most at threat is the eastern gorilla, whose combined numbers are estimated to be around 5,000, with just 1,000 mountain gorillas remaining. On the other side of the continent, there are estimated to be around 100,000 western gorillas but the Cross River gorilla is the subspecies most at risk with an estimated 200-300 individuals remaining.
Finally, the genus Pan is found in the same tribe as humans and contains two species; the bonobo is found to the south of the river Congo in the heart of the DRC and the common chimpanzee, which is located on the northern side of the river and, geographically, is also split further by its 4 subspecies found as far west as Senegal and as far east as Tanzania.
While the 150 – 250 thousand chimpanzees found in the wild significantly outnumber the 10 – 50 thousand bonobos, both of these species are classified as endangered, with threats coming from habitat loss and hunting for meat.
The smallest of these animals are gibbons, which thanks to their high number of species, are also the most diverse in appearance. These apes have exceptionally long arms—sometimes up to 2.6 times the length of their bodies—that help them navigate their arboreal habitat with ease.
The Siamang is contained in its own monotypic genus and, physically, is quite different to other gibbons. At up to 26.5 lb / 12 kg it is the largest species and is also fairly easy to distinguish visually with its thick black fur surrounding a nearly hairless, light grey face and a large throat sac they use to produce loud vocalisations. Interestingly, their Latin name Symphalangus syndactylus refers to the fused 2nd and 3rd toes on their feet, similar to those of a koala.
There are many examples of gibbons that exhibit sexual dichromatism, where males display a different colour to their female counterparts. The Lar or white-handed gibbon is one such species and belongs to the Hylobates genus, also referred to as the dwarf gibbons, whose members usually weigh between 8.8 lb – 17.6 lb / 4 to 8 kg. Both males, who are black, and females, who are cream or buff, have a white circle of fur around the face. This genus also contains the silvery gibbon, which exhibits a beautiful silvery-grey pelage.
Other species include the northern white-cheeked gibbon, which don’t look too dissimilar to the Lar gibbon but are from the Nomascus genus. Males are black with white fur around their cheeks and females are buff with a black patch on the top of their heads. Finally, members of the hoolock genus are also quite unique, exhibiting a white brow across the forehead.
Like gibbons, orangutans also have bodies built for the trees; their enormous arm span can measure up to 7ft long, which when combined with their incredibly flexible legs and opposable thumbs on all four extremities, allows them to move gracefully through the trees.
Orangutans display sexual dimorphism; adult males are around twice the size of females and usually weigh around 200 lb / 91 kg but can be found closer to 300 lb / 136 kg. They measure around 4-5 ft (1.2m – 1.5m) in height, which is roughly the same as a chimpanzee but shorter than a gorilla.
Males also develop large cheek pads as they age, which are thought to be associated with increased levels of testosterone and with appeal to the opposite sex: females more often mate with adult males that have the cheek pads than with subadult males who have yet to develop them.
Their bright orange fur is, oddly enough, thought to be for camouflage; in direct sunlight, it is distinct and obvious but in the shadows of the forest canopy their dark tan skin absorbs the light and makes orangutans difficult to see.
Bornean orangutans can be distinguished by the white hairs on their face and longer fur, including their beards, as well as exhibiting a more slender build. Tapanuli orangutans differ from other species by their dental structure and the structure of the skull, including but not limited to a shallower face depth.
Even bigger than orangutans, gorillas are the largest of the apes, with adult males usually topping out around 5.5 ft (1.7 m) tall and 485 lb (220 kg), and the largest specimens recorded at well over 6 ft (1.8 m). They are also sexually dimorphic; females measure just under 5 ft (1.5 m) in height but are half the weight due to the stocky build of the adult males, who are referred to as silverbacks, a reference to the grey or silver hairs they develop on their backs as they age.
Gorillas also have opposable digits on their hands and feet, allowing them to manipulate objects with all four extremities. Although their arms are proportionately shorter than those of orangutans and gibbons, they are still 15-20% longer than their legs.
Eastern gorillas are slightly larger than their western counterparts and have darker, longer fur. Specifically, the western lowland gorilla is often cited as the smallest of the four subspecies and the eastern lowland gorilla as the largest. Mountain gorillas have the longest hair, which keeps them warm at higher altitudes, and, like the Tapanuli orangutan, the Cross River gorilla exhibits noticeable differences in skull and dental structure.
Chimps and bonobos are the smallest of the great apes but at 4 – 5ft (1.2 – 1.5m) and no more than 130 lb (59 kg) are much larger than gibbons. Chimpanzees tend to be slightly larger than bonobos, who were referred to as ‘pygmy chimps’ until they were found to be a separate species, and have a more stocky build, with shorter legs but wider chests.
While chimps and bonobos are sexually dimorphic, this is to a much lesser extent than the other great apes; the size difference between males and females is a ratio of roughly 1.3, which, for the sake of comparison, is still greater than humans, who are closer to 1.15.
Both chimps and bonobos have long black hair that covers their bodies but one of the most interesting differences between species is the appearance of their young. Chimpanzees are born with white skin on their faces and ears, which darkens with age, whereas bonobos are born with this dark skin. As both species age, some of their hair also tends to turn grey and the forehead can also become bald.
Ecology, Diet & Movement
In terms of ecology, bonobos are found only in lowland tropical forests whereas the habitat of the chimpanzee extends into grassland and woodland areas. Both species use arboreal and terrestrial locomotion to move around their territory. Each night, they make a nest in the trees before rising early to find breakfast, which ideally is made up of nutritious fruits. Unlike some of the other apes, chimps will cover long distances on the ground where they use a type of locomotion referred to as knuckle-walking—bearing their weight on the knuckles of their hands—and are also able to move in an upright posture.
The diet of both chimpanzees and bonobos is made up mostly of vegetation such as fruits, leaves, roots and seeds, however, both species are omnivores and, although a lot less frequently, will also consume invertebrates and mammals such as duiker—a small antelope. Chimpanzees eat more meat than bonobos and have been observed pinching eggs and chicks as well as hunting pigs and even other primates such as colobus monkeys.
Both species typically consume their food in the trees and do so at two times during the day; once in the morning followed by a midday rest and then a longer session of eating follows in the afternoon and evening. Tool usage is a particularly interesting trait that was first observed by Jane Goodall in the 1960s at Gombe Stream Game Reserve in Tanzania. These primates use sticks to fish termites out of their mounds and leaf sponges to soak up water.
Gorillas are also found in the tropical forests of Central Africa but the elevation at which they live varies by subspecies. The western lowland gorilla, which commands the largest range of any subspecies, lives at the lowest average altitude, mostly preferring lowland tropical forests whereas the Cross River gorilla lives in the mountainous border region between Cameroon and Nigeria. Likewise, on the other side of the Congo, the eastern lowland gorilla lives in lowland tropical forests but can also be found at higher altitudes, and as you might expect, the mountain gorilla is found at the highest elevations up to 14,100 ft / 4,300 m.
The diet of a gorilla is more strictly vegetarian than that of a chimp or bonobo; they mostly gorge on leaves, stalks and fruit although they will also feast on ants and termites as well as their larvae. The makeup of their diet depends on subspecies, for example, fruit trees and termites are much more common in lowland tropical forests and thus are consumed a lot more by western lowland gorillas than they are by mountain gorillas who eat larger amounts of leaves, stems and other similar vegetation. Like chimps and bonobos, they eat in two intervals per day and a large silverback can consume up to 60 lb (27 kg) in one day.
Although gorillas can and do climb trees, they are mostly terrestrial and, like chimps, will move around using knuckle-walking. Females and young climb more than adult males whose weight is simply not supported by the canopy. As such, gorillas will sleep in nests in the trees or on the ground on a bed of foliage.
Moving to Asia, the ecological preferences of the orangutan are much simpler than that of the gorilla; all three species live in the lush tropical forests of Borneo and Sumatra and none of them ventures above 4,600 ft (1,500 m). Orangutans spend the vast majority of their time in trees where they build their nests to sleep and move about with ease thanks to their long arms and hook-like hands. They can often be seen high up in the forest canopy, which is quite incredible for such a large ape. On the occasion they do spend time on the forest floor, they do so on all fours but usually do not use knuckle-walking like gorillas and chimps.
Like the western lowland gorilla, fruit makes up a large portion of their diet at around 50%, which is supplemented with other types of vegetation in addition to small amounts of insects such as ants and termites. Very occasionally, orangutans have been observed eating meat when fruit is scarce, specifically their fellow primates, slow lorises. Like chimps and gorillas, orangutans have also been observed using sticks as tools to extract insects from trees as well as using leaves as umbrellas when it rains!
Gibbons are also thought of as arboreal; in fact, hylobates, one of the four genera, can be translated to either “forest walker” or “dweller in the trees”. They are perhaps the most efficient ape in the trees and use their long arms and flexible shoulder joints to brachiate through the forest. On the ground, gibbons walk in a fully upright posture and will also use their long arms to balance when walking along a branch.
All species are found in various types of tropical forests, however, the altitude at which they are found varies by species. The northern white-cheeked gibbon is found at low altitudes of between 985 – 1970 ft (300 – 600 m) in Laos, Vietnam and Southern China. The Black-crested gibbon, which is very similar in appearance, is found at higher elevations between 1805 – 8822 ft (550 – 2690 m). Unlike great apes, gibbons choose to sleep on open branches and rarely sleep in the same place more than once to reduce predation by animals such as pythons or birds of prey.
They are also thought of as frugivorous, with ripe fruit often making up a large portion of their diet, which they will supplement with leaves and other vegetation as well as insects and less frequently bird eggs. Of the fruits available, figs are a particularly popular food source; 26% of a pileated gibbon’s diet is made up of this nutritious fruit. Siamangs eat a higher proportion of leaves than any other gibbon, making up 40-50% of their diet.
Chimps and bonobos have by far the largest group structure and territory; they live in communities of between 30-100 but spend most of their time split into smaller groups known as parties with whom they forage for food. Chimpanzee communities are led by a dominant male and a coalition of allied males who are notoriously violent and use aggression to assert their dominance. Their territory ranges from 5 sq km (1.9 sq mi) in the forest to 500 sq km (193 sq mi) on the savanna. Bonobo communities on the other hand are female-dominated and are thought of as more peaceful societies with female members controlling the aggression of males, often with intercourse. The group size of bonobos is roughly the same but the territory is limited to just under 30 sq km (11.6 sq mi) as they are found only in tropical forests.
Intergroup relationships are quite different between chimps and bonobos. Chimpanzees are hostile and aggressive in defending their territory, whereas bonobos have been observed sharing food with other communities as well as spending time with other groups.
Grooming is an important part of life in the genus Pan and is used to bond with other members. Both species use a collection of gestures and facial expressions to communicate face to face and have a repertoire of vocalisations for communicating over both long and short distances.
A group of gorillas is known as a troop; troops are smaller than chimpanzee communities, at between 6 and 30 individuals. A troop consists of a dominant silverback male and several adult females with their offspring. Eastern gorilla troops are usually larger than those of the western species and can accommodate multiple males who are closely related; troops of mountain gorillas in particular have been known to support up to 8 silverbacks. Their territory is also smaller than that of a chimpanzee community, at between 2 and 40 sq km (0.8 – 15.5 sq mi).
Grooming plays less of a role in gorilla societies than in those of chimps and bonobos; nevertheless, it does take place and is likewise used to strengthen bonds between individuals, usually occurring between males and females or between females and their young.
Gorillas are often portrayed as aggressive creatures in popular culture, but they are reserved by nature, and silverbacks will usually display aggression, in the form of roaring and chest beating, only when defending other troop members or maintaining dominance within the troop.
This social group behaviour is not mirrored by the Asian apes. Orangutans are mostly solitary, especially adult males, who roam the forest alone and will usually only spend time with a mating female. Females are more social than males, travelling with their offspring and coming together with other adults in their range to feed when fruit is abundant. The territory of a male orangutan is about the same as that of a gorilla troop, at up to 40 sq km (15.5 sq mi), and will overlap with those of multiple females, whose ranges are smaller at around 9 sq km (3.5 sq mi). Like gorillas, orangutans are placid by nature, and confrontation usually only occurs between adult males competing for females or territory.
Orangutans use both vocal and non-vocal communication. Vocal communication is used more often for long-distance signalling; in particular, a “long call” is used by males both to signal their presence to other males and to attract females. A wide range of gestures is commonly used face to face; however, as orangutans are the most solitary of the great apes, the extent to which these gestures have been studied in the wild is much smaller.
Although gibbons are the apes least closely related to humans, they have the social structure most similar to ours, living in family groups made up of a monogamous mating pair and their immature offspring. Territories are quite small, usually between 0.1 and 0.5 sq km (0.04 – 0.19 sq mi). The Hainan black crested gibbon has the largest territory of any gibbon species, at 2 to 5 sq km (0.8 – 1.9 sq mi).
Unfortunately, gibbons are the least studied of the apes; however, they also have one of the most interesting and heartwarming forms of communication. The male and female sing a duet known as the “great call”, used to mark their territory, which their young often join.
Every renovation project or build relies heavily on the details—what materials you use, your budget, specs—the list goes on. That’s why it’s always important to know exactly what you need for your project and why the products you choose will work best for you.
Although drywall might seem like a simple material to understand, there are a few types of drywall to consider: traditional drywall, fire-resistant drywall, and mold- and moisture-resistant drywall. As with any other material, the different types of drywall were made for different environments and uses, so it’s important to know what the drywall you’re using is made of and why it is the best choice for your project.
What is Drywall Made Of?
Drywall is made from calcium sulfate dihydrate, also known as gypsum. According to the Gypsum Association, gypsum is an inert compound that is 21% by weight chemically combined water, which helps make buildings fire resistant. When gypsum starts to heat up, it releases this water, which helps prevent ignition or crumbling until the water has steamed away.
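The 21 percent figure follows directly from gypsum’s molecular formula, CaSO4·2H2O. As a quick back-of-the-envelope check (using standard atomic masses, not figures from the Gypsum Association):

```python
# Water mass fraction of gypsum (calcium sulfate dihydrate, CaSO4·2H2O).
# Atomic masses are standard values; this is an illustrative check only.
Ca, S, O, H = 40.078, 32.06, 15.999, 1.008

caso4 = Ca + S + 4 * O      # anhydrous calcium sulfate
water = 2 * (2 * H + O)     # two molecules of chemically combined water

fraction = water / (caso4 + water)
print(f"Water content of gypsum: {fraction:.1%}")  # ~20.9%, i.e. about 21%
```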
There are many different types of drywall to choose for your project—from fire-resistant to soundproof, knowing which drywall to use for your project is very important.
Traditional drywall is mostly used in residential construction at a 1/2-inch thickness and a standard panel size of 4 by 8 feet. Traditional drywall lets you finish standard walls with a minimal number of joints. If needed, it is also available in thicker and larger sizes.
There are two main types of fire-resistant drywall: Type X and Type C. The two types were created for different purposes and have certain pros and cons depending on their use, so it’s good to understand how Type C and Type X differ.
Type X drywall is made by adding glass fiber to the gypsum slurry and rolling it into panels with a minimum thickness of 5/8 inch. Thanks to the fibers, the board can last up to an hour in a fire. Many local building codes require Type X drywall around furnaces, boiler rooms, and attached garages.
Type C drywall was invented by one of our vendors, USG. Just like Type X, Type C contains the glass fiber mixture, but with more glass and a form of the mineral vermiculite. Vermiculite expands under high heat at the same rate that gypsum shrinks, which helps the board maintain its integrity. Type C lasts longer than Type X in high-heat situations.
Mold & Moisture Resistant Drywall
Mold and moisture resistant drywall is mostly used in kitchens, bathrooms, laundry rooms, and basements. These rooms tend to have more moisture than other places in a building, so it’s always wise to use materials that can handle a lot of moisture.
A popular type of mold and moisture resistant drywall is greenboard. The green backing of this type of drywall has a petroleum-based coating that helps resist water and mold growth—although it is water resistant, it is not waterproof. The green color helps identify this type of drywall and also helps the drywall installer see where to apply the joint compound.
Find Out What Drywall Works Best for You at Freedom Materials
Choosing the right drywall is an important part of your project, so make sure you fully understand the differences between each type! If you still have questions about which type of drywall is best for you or your project, contact our team today.
At Freedom Materials, our qualified team of drywall experts is ready to help you with whatever questions you might have about your drywall project or any of the drywall products we carry. We are ready to make your project a success!
On the election of the United States' first African-American president
Barack Obama's inauguration opens another chapter in the history of race relations in the United States. He is very fond of quoting Abraham Lincoln, and seems his natural heir. Certainly the victory that Lincoln achieved over the South helped make the long and tortuous path to Obama possible.

It is, however, worth remembering Jefferson's role in Lincoln's bold attempt to remake race relations in the United States. Lincoln's Gettysburg Address used Jefferson's iconic words in the Declaration of Independence, "all men are created equal," to help prepare the nation for the end of slavery and the incorporation of black people as full participants in American society. While many applauded the idea, others, then and now, were critical. "Surely," they said, "Jefferson didn't mean black people. He owned slaves." Over the years, as other relatively powerless people or disfavored groups have used that phrase to demand a share of the American dream, they, too, are often met with the answer, "Jefferson didn't mean you."

While I would be among the last to deny the fun of trying to figure out what Jefferson meant in a given situation, wondering whether he believed "all men are created equal" in the way Lincoln meant it or we mean it, has always struck me as beside the point when considering claims for full inclusion in American life. Although it is the foundational document of America, as it established the break with Great Britain, the words of the Declaration are not law in the way the Constitution is law. They are a statement of universal principles written in soaring language. Like all great works of art, its meaning announces itself to each who reads it. "All men are created equal," therefore, can and will have meaning to people throughout the generations. I suspect Jefferson knew this.
ANNETTE GORDON-REED is Professor of Law at New York Law School, Professor of History at Rutgers University-Newark, and the author of The Hemingses of Monticello: An American Family, for which she received a 2008 National Book Award.
The committee’s examination of breast cancer and the environment required considerations at the intersection of diverse fields, including the biology and epidemiology of breast cancer, the identification of carcinogens and cancer-promoting agents, exposure assessment, toxicity and carcinogenicity testing, and the design and interpretation of research studies. This chapter provides some brief, fundamental background on these topics as a basis for the discussions in subsequent chapters.
The breast begins forming during the prenatal period and undergoes substantial changes during adolescence and adulthood. Breast cancer arises when abnormal cellular growth occurs in certain structures and types of cells within the breast.
Although breast cancer is often spoken of as if it were a single disease, evolving techniques of analysis of the molecular characteristics of tumors are pointing to a variety of types of potentially differing origins. Gaining a better understanding of the nature of the heterogeneity of breast cancer will be critical in helping researchers improve the design and interpretation of studies of possible risk factors, and it may influence approaches to prevention.
Described here are the basics of the anatomy of the breast and breast development, types of breast cancer, and levels and trends in the incidence of the disease, focusing primarily on experience in the United States. The mechanisms that appear to result in female breast cancers and the pathways through which risk factors may operate are discussed further in Chapter 5.
Approximately 1 percent of breast cancer cases occur in men, and less than 1 percent of men’s cancer diagnoses are for breast cancer (ACS, 2011b). Because it is rare, breast cancer in men has been difficult to study. Based on what is known, however, it is considered to resemble breast cancer in postmenopausal women (Korde et al., 2010).
As in women, men’s breasts respond to changes in sex hormone concentrations (both estrogens and androgens), but under normal circumstances they do not undergo the differentiation and lobular development that women’s breasts experience with puberty, pregnancy, and lactation (Johansen Taber et al., 2010). Either an excess of estrogens or deficit of androgens appears to increase risk of breast cancer in men (Korde et al., 2010). Beginning after age 20, rates rise steadily with age. Approximately 92 percent of male breast cancers are estrogen receptor positive, compared with approximately 78 percent of breast cancers in women (Anderson et al., 2010). As is the case for women, inherited mutations in BRCA1 and especially BRCA2, as well as other mutations, are associated with an increased risk of male breast cancer, but the majority of cases are not associated with a family history of the disease (Korde et al., 2010).
The Breast, Breast Development, and Breast Cancer
The development of the human female breast begins during gestation but is not complete at the time of birth. Further development and differentiation of breast tissue occurs over time and especially in response to fluctuating estrogen and other hormonal signals beginning in puberty, continuing through the reproductive years, during pregnancy and lactation, and at menopause. Monthly ovulatory cycles are accompanied by cyclical changes in the form and behavior of cells and structures in the breast, including progressive differentiation. Pregnancy and lactation trigger maximal differentiation of the breast. When pregnancy and lactation end, as well as at menopause, breast tissue regresses to a less differentiated state.
Within the breast are adipose and connective tissues that surround multiple collections of lobules in which milk is produced during lactation. Milk moves to the nipple through ductal structures. The ducts are lined by luminal epithelial cells and have an outer layer of myoepithelial cells. Populations of stem cells that can give rise to either luminal or myoepithelial cells are also found in the ductal tissue. The ducts are anchored to a basement membrane, which contributes to both the structure and the function of the ductal tissue. Connective tissue within and between the lobules, known as the stroma, further contributes to the structure of the breast and plays an important role in regulating both normal and abnormal breast cell growth and function (Arendt et al., 2010). Cell types within the stroma include (but are not limited to) fibroblasts, adipocytes, macrophages, and lymphocytes (Johnson, 2010). These cells and structures in the breast generate and respond to a diverse mix of hormones, especially estrogen, and other regulatory factors.
Certain disruptions in the complex processes that govern the structure and function of breast tissue may set the stage for breast cancer. Some carcinogenic events occur spontaneously in the course of normal biological processes and others are triggered by external factors. Although the body has efficient protective responses, such as DNA repair and immune surveillance, that can reduce the effect of such events, these protective responses are not always successful. The interval between the earliest “event” and the detection of a cancer may span several decades.
Specific mechanisms that may play a role in breast cancer are noted here but discussed further in Chapter 5. The contribution of genetic mutations to cancer is well known. They may be inherited (e.g., germline mutations in the BRCA1 or BRCA2 genes, which normally have a role in DNA repair) or develop in some cells during a person’s lifetime (somatic mutations) as a result of reactive by-products of normal biological processes, or from the effects of external exposures. Other mechanisms include epigenetic changes that can alter gene expression without changes to DNA, promotion of cell growth by estrogen and other hormones or cell-signaling proteins, and evasion of the immune system.
Types of Breast Cancer
Most commonly, breast cancers develop in the ducts, but cancers also develop in the lobules or take other forms. Several systems are used to characterize breast cancers, with the systems developed primarily to provide information on prognosis and treatment decisions. For example, breast tumors may be classified by tumor size, extent of spread beyond the tumor site (localized, regional, distant), the anatomical characteristics of the tumor cells (e.g., ductal or lobular histology), and the molecular features of the tumor cells, such as presence or absence of estrogen and progesterone receptors and human epidermal growth factor receptor 2 (HER2/neu).
The age at which a woman is diagnosed with breast cancer is associated with tumor characteristics, such as the likelihood that the breast cancer is estrogen receptor positive or negative (ER+ or ER–). In addition, age or menopausal status also guides treatment decisions. For example, aromatase inhibitors are part of treatment for postmenopausal women who have ER+ breast cancers, but tamoxifen is used among premenopausal women. Except for reference to menopausal status, breast cancers in men are characterized in similar ways. Differences in patterns of such features as tumor histology, grade, and receptor status may distinguish between a more aggressive form of breast cancer with a generally earlier onset and a more common and less aggressive form that tends to occur at older ages (see Anderson et al., 2006b, 2007; Kravchenko et al., 2011).
Another major distinction is between invasive and noninvasive (or in situ) tumors. As the terms suggest, invasive tumors spread beyond the site at which they arise, while in situ tumors remain within the tissue where they originate, such as the epithelial cells lining the breast ducts. About 20 percent of reported tumors are noninvasive (ACS, 2011a). Ductal carcinoma in situ (DCIS) is the most common form of abnormal but noninvasive growth in the breast. Although DCIS can, in some cases, progress to an invasive cancer, the natural history of these tumors is poorly understood, and it is not yet possible to identify which ones are likely to progress (Allred, 2010). As a result, most women with in situ tumors receive treatment that is similar to the treatment for early-stage invasive tumors.
Estrogen and Progesterone Receptor Status
The molecular and genetic characteristics of breast tumors are used to guide treatment and assess prognosis. A feature for which breast tumors are now commonly evaluated is whether the cells express estrogen or progesterone receptors. Tumors that express these receptors are designated ER+ or PR+, and those that do not as ER– or PR–. In the United States, approximately 75 percent of invasive tumors for which receptor status is reported are ER+ and 65 percent are PR+ (Ries and Eisner, 2007; Kravchenko et al., 2011). ER+ and PR+ tumors have a generally better prognosis than tumors that do not express these receptors. These receptor characteristics are correlated with other tumor markers related to regulation of cell growth and proliferation and appear to reflect important differences in tumor origin (Phipps et al., 2010). Researchers are also finding that they are associated with differences in response to risk factors (e.g., Althuis et al., 2004; Yang et al., 2011).
Triple Negative Breast Cancer
Tumors lacking not only ER and PR expression but also HER2 are called triple negative breast cancers (TNBCs), and they are considered closely related to basal-like breast cancers (Carey et al., 2006; Foulkes et al., 2010). Triple negative breast tumors are typically aggressive and are more likely to be diagnosed in women who are younger (below age 50) and are African American. These cancers in African American women tend to be more advanced and of higher grade at the time of diagnosis than tumors in other racial groups (Carey et al., 2006; Stead et al., 2009; Trivers et al., 2009). Triple negative tumors have been associated with BRCA1 and BRCA2 mutations (Armes et al., 1999; Foulkes et al., 2003; Turner et al., 2007; Atchley et al., 2008). Additionally, a large proportion of TNBCs have altered p53 levels (Carey et al., 2006; Kreike et al., 2007; Rakha et al., 2007).
Genetic Susceptibility to Breast Cancer
Genetic mutations may contribute to breast cancer by altering various critical processes such as those related to DNA repair, hormone synthesis, and metabolism of carcinogens. Two types of genetic mutations are possible. Germline mutations are genetic variants that are passed from parents to offspring and are present in all cells. Genetic changes can also occur in specific cells during a person’s lifetime; these changes, which can persist as cells divide, are called somatic mutations. They can arise by chance, as a by-product of normal processes such as cellular respiration or DNA replication, or from external exposures. Such mutations may lead to that cell becoming a cancer cell.
Inherited genetic variation is found across the population. Many of these variations, called polymorphisms, may have little or no impact on the function of a gene, but some of them are associated with increased susceptibility to disease. Common genetic variants are found in 1 percent or more of the population.
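For intuition about what the 1 percent cutoff implies at the population level, textbook Hardy–Weinberg proportions give the expected genotype frequencies. This is an illustrative calculation, not from the text; it assumes the cutoff refers to a minor allele frequency of 1 percent:

```python
# Expected genotype frequencies under Hardy-Weinberg equilibrium for a
# variant at the "common" cutoff, read here as a minor allele frequency
# of 1% (an assumption for illustration).
q = 0.01            # minor allele frequency
p = 1 - q           # major allele frequency

homozygous_ref = p * p      # carries no copy of the variant
heterozygous   = 2 * p * q  # carries one copy
homozygous_var = q * q      # carries two copies

carriers = heterozygous + homozygous_var
print(f"Expected carriers of at least one copy: {carriers:.2%}")  # ~1.99%
```

Note that even at this "common" threshold, roughly one person in fifty is expected to carry at least one copy, which is why such variants can matter for the population burden of disease despite conferring only modest individual risk.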
Every breast cancer contains somatic genetic changes, but only a few inherited mutations are known to convey a high risk of breast cancer in the carrier. The strongest evidence of inherited genetic susceptibility is for germline mutations in the BRCA1 and BRCA2 genes. Research suggests that a larger number of lower-risk germline variants also exist.
A family history of breast cancer is an established breast cancer risk factor. This risk factor represents both inherited genetic risks as well as environmental factors that may cluster in families. Overall an inherited susceptibility to breast cancer contributes to about 10 percent of breast cancer cases, and in about 5 percent of breast cancer cases this inherited susceptibility is attributed to mutation in the BRCA1 or BRCA2 genes.
Mutations in these two genes are associated with increased susceptibility not only for breast cancer, but also for other cancers such as ovarian cancer.
BRCA1/2 mutations are high-penetrance mutations, meaning that women with these mutations have a very high lifetime risk of developing breast cancer. This risk is estimated to be at least 40 percent and possibly as high as 85 percent (Oldenburg et al., 2007). However, these mutations are rare, with substantially less than 1 percent of women in most populations carrying them (Narod and Offit, 2005). In addition to increasing the risk of breast cancer for women, they also increase risk for male breast cancer. Families in which such mutations may be present may have multiple cases of breast cancer, occurring at younger ages and in multiple generations, and a family history of ovarian cancer (Narod and Offit, 2005). Other sources of increased familial genetic risk include the Li-Fraumeni syndrome [1] from germline mutations in the p53 gene (Malkin et al., 1990) and Cowden disease [2] from germline mutations in the PTEN gene (Liaw et al., 1997).
Genetic testing is available to identify BRCA1 and BRCA2 mutations. Identification of a familial mutation that carries an increased risk of breast cancer allows women, and men, who carry such a mutation to seek closer monitoring of their health and to consider primary and secondary preventive measures, such as increased screening, bilateral prophylactic mastectomy and, for women, bilateral salpingo-oophorectomy (Walsh et al., 2006). Use of medications that can reduce the risk of breast cancer (i.e., tamoxifen and raloxifene) may also be appropriate for some women (USPSTF, 2002).
Breast Cancers in Women Without a Strong Family History
Most women diagnosed with breast cancer do not have a strong family history of the disease and do not carry mutations in highly penetrant cancer-susceptibility genes. They may, however, have other more common genetic variants that affect gene function and that may be responsible for a proportion of the breast cancer cases that develop. These genetic variants are called low-penetrance variants because they are associated with only a small degree of risk for breast cancer. Yet because they are common, they may contribute to the burden of disease. In addition, these variants may interact with environmental exposures such that risk is only expressed in the presence of the environment exposure (gene–environment interaction).
Two approaches have been used to identify low-penetrance genetic variants: a candidate gene approach and genome-wide association studies.
[1] Li-Fraumeni syndrome is characterized by a predisposition to sarcomas, lung cancer, brain cancer, leukemia, lymphoma, adrenal-cortical carcinoma, and breast cancer.
[2] Cowden disease is a syndrome involving mucocutaneous and gastrointestinal lesions and breast cancer.
Studies initially relied on the candidate gene approach, in which polymorphic variants of genes that plausibly influence breast cancer risk are assessed in epidemiologic studies (i.e., case–control or cohort studies) for their association with breast cancer. For example, the Breast and Prostate Cancer Cohort Consortium has conducted extensive analyses of genetic variation in large numbers of specific genes in biological pathways thought to be most relevant to breast cancer, such as the steroid hormone metabolism and insulin-like growth factor pathways (Canzian et al., 2010; Gu et al., 2010). These studies did not find an association with breast cancer risk. In general, the candidate gene approach has had limited success in consistently identifying specific variants associated with breast cancer.
Genome-wide association studies (GWAS) allow for a comprehensive and unbiased search for modest associations across the genome. The approach in these studies is to identify a relatively limited set of readily recognized single nucleotide polymorphisms (SNPs) that are highly correlated with a larger block of genetic variants and to use the limited set of “tagSNPs” in the analysis (Manolio, 2010). These studies require very large sample sizes (thousands or tens of thousands of cases and controls) because these variants tend to be associated with a small degree of risk. Because these studies make use of large numbers of statistical tests, they require extreme levels of statistical significance to identify true positive results (Hunter et al., 2008).
Results from several GWAS of breast cancer in women of European ancestry have been published (Easton et al., 2007; Hunter et al., 2007; Stacey et al., 2007; Turnbull et al., 2010), and one of women of Asian ancestry (Zheng et al., 2009). Out of the many variants studied, approximately 20 risk variants have been robustly associated with breast cancer risk, all having only modest influence on risk (relative risks in the range of 1.05–1.3 per allele). Stronger associations with common variants are unlikely to exist, but they may be possible for rarer variants (e.g., those with minor allele frequencies of <5 percent) that have not been tested with the technologies available to date. Even so, statistical modeling suggests that low-penetrance gene variants may do at least as well in predicting risk as using traditional risk factors such as age at first birth, family history of breast cancer, and history of breast biopsy(ies) (Wacholder et al., 2010). This is a rapidly evolving area of research.
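The "extreme levels of statistical significance" and "modest influence on risk" mentioned above can both be made concrete. A Bonferroni-style correction for roughly one million independent common-variant tests yields the conventional genome-wide significance threshold, and under a simple multiplicative model the per-allele relative risks compound. The variant panel below is hypothetical, chosen only to span the 1.05–1.3 range reported above:

```python
# Genome-wide significance threshold (Bonferroni-style correction) and
# the combined relative risk for a carrier of several low-penetrance
# risk alleles under a multiplicative model. RR values are hypothetical.
alpha = 0.05
n_tests = 1_000_000                  # ~independent common-variant tests
threshold = alpha / n_tests
print(f"Genome-wide significance threshold: {threshold:.0e}")  # 5e-08

per_allele_rr = [1.3, 1.2, 1.15, 1.1, 1.05]  # hypothetical risk panel
combined = 1.0
for rr in per_allele_rr:
    combined *= rr       # assumes independent, multiplicative effects
print(f"Combined relative risk: {combined:.2f}")  # ~2.07
```

The sketch illustrates why risk models built from many low-penetrance variants can approach the predictive value of traditional risk factors even though no single variant contributes much on its own.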
As noted in Chapter 1, an estimated 230,480 new cases of invasive breast cancer were diagnosed among women in the United States in 2011 and another 2,140 new cases among men (ACS, 2011a). In addition, approximately 57,650 in situ cases were diagnosed in women, of which about 85 percent were DCIS (ACS, 2011a). Sources of surveillance data on breast cancer are described in Box 2-2.

Box 2-2. Sources of surveillance data on breast cancer

For data on patterns and trends in incidence and mortality for all forms of cancer in the United States, researchers generally rely on data from the National Cancer Institute’s Surveillance, Epidemiology, and End Results (SEER) Program. In 1973, SEER began systematic collection of data from cancer registries in sites selected to characterize the diversity of the U.S. population. The number of participating registries has increased, and as of 2005 covered approximately a quarter of the U.S. population (NCI, 2005). The SEER Program establishes standards for completeness and quality of the data provided to it, and it works with participating registries to achieve those standards. As practices change, new data elements may be collected. For breast cancer, for example, data on estrogen and progesterone receptor status of tumors were added in 1990 (Ries and Eisner, 2007). Annual reports present data and analysis on cancer incidence, mortality, survival, and trends since 1975. Datasets can also be made available to qualified researchers for independent analyses.

States also have cancer registries, but some of these registries are less than 20 years old (CDC, 2010). Through the National Program of Cancer Registries (NPCR), which was established by federal legislation in 1992 and is administered by the Centers for Disease Control and Prevention, states receive assistance to improve the quality and completeness of their cancer registries. The NPCR now produces an annual report that combines data from state registries with data from the SEER program.
Age Patterns and Changes Over Time
Breast cancer can occur in women and men of any age, but it is predominantly a disease of middle and older ages. Rates of invasive cancer increase rapidly after age 35 and currently peak at approximately 432 cases per 100,000 women in the age group 75–79 years (NCI, 2011) (see Figure 2-1). Rates of in situ disease rise more slowly and increase as women reach ages at which mammographic screening becomes common. The peak rate is 99 cases per 100,000 women at ages 65–69 (NCI, 2011). Among men, cases of invasive breast cancer are found at young ages, but incidence peaks at ages 85 and older at a rate of approximately 10 cases per 100,000 men (NCI, 2011).
The incidence of breast cancer has increased since at least the mid-1970s but has dropped from its peak in 1999. Figure 2-2 shows the rates over time for both older (age 50 and older) and younger women (ages 20–49) and for invasive and in situ cases. Among older women, rates of invasive cancer rose during the 1980s and showed a slower increase during the 1990s. During the 1980s, use of menopausal hormone therapy had increased (Hersh et al., 2004; Glass et al., 2007). The 1980s and 1990s were also a period when use of screening mammography increased (Breen et al., 2001; Anderson et al., 2006a; Glass et al., 2007). In 1987, roughly 23 to 32 percent of women were screened, depending on their age, and by 1997, screening rates were as high as 74 percent among women ages 50–64 (Breen et al., 2001). Increased screening allowed for the earlier detection of tumors and for the detection of tumors that might never have progressed. When more tumors are detected at earlier stages, it will appear as if incidence rates are rising even if they are not, or are rising more rapidly than they actually are.
A decline in breast cancer incidence occurred between 1999 and 2003 (Figure 2-2), principally in ER+ tumors in women ages 50–69 (Jemal et al., 2007). The decline is widely attributed to reductions in the use of hormone therapy (HT) (Clarke et al., 2006; Ravdin et al., 2007; Robbins and Clarke, 2007). In 1998, the Heart and Estrogen/Progestin Replacement Study (HERS) reported that use of combined estrogen–progestin HT failed to show an anticipated protective effect against coronary heart disease and was associated with an increase in risk for blood clots (Hulley et al., 1998). The subsequent publication of findings from the Women’s Health Initiative confirmed the lack of benefit for heart disease and also showed an increased risk for breast cancer with use of combined estrogen–progestin therapy (Writing Group for the Women’s Health Initiative Investigators, 2002). Reports from these studies were a major factor in the decline in use of HT.
As reflected in Figure 2-2, a recent analysis found that for 2003–2007 incidence rates of invasive cancer did not significantly change, although use of HT continued to decline (DeSantis et al., 2011). Use of screening mammography in 2008 remained similar to rates seen in 1997 (Breen et al., 2011). Rates of in situ cancer among older women also rose somewhat
in the 1980s and into the 1990s, but they have remained relatively stable since the late 1990s.
Although the perception is widespread that breast cancer is becoming more common among young women, the best data available indicate that invasive breast cancer incidence rates have been almost unchanged since 1975 in women ages 20–49 (Figure 2-2). What has changed is the rate of in situ breast cancer, which has been rising since the introduction of mammography screening in the 1980s (Breen et al., 2001; Kerlikowske, 2010). The perception that breast cancer is increasing in younger women may come from several factors. First, any cancer diagnosis in a young woman in her prime working and reproductive years is notable, emotionally laden, and an event that will gain attention in many settings. An analysis of vignettes about breast cancer in popular magazines found that nearly half the stories were about women who were diagnosed before age 40 (Burke et al., 2001), a group that accounts for approximately 5 percent of cases (ACS, 2011a). Second, diagnosis of cases of “carcinoma in situ,” especially DCIS, has increased, but its relation to invasive cancer can be unclear to women, at least in part because of the terminology and because of the aggressive treatment that may be recommended (De Morgan et al., 2002; Partridge et al., 2008; Liu et al., 2010). As noted, even within the research and medical communities, the natural history of DCIS is poorly understood, so the proportion of DCIS cases that would become invasive if untreated is unclear (Allred, 2010).
Race and Ethnicity
Differences can be seen in the age patterns and trends in breast cancer among the country’s racial and ethnic groups. For 2004–2008, the overall incidence of breast cancer was 136 cases per 100,000 among non-Hispanic white women, 120 per 100,000 among African American women, 94 per 100,000 among Asian and Pacific Islander women, and 78 per 100,000 among Hispanic women (who can be of any race) (NCI, 2011).3
For African American women, the lower incidence rates compared with white women are most evident at older ages (Figure 2-3). However, incidence rates are higher among African American women under age 45. At ages 30–34, for example, African American women have an incidence of breast cancer of 31.8 cases per 100,000, compared with a rate of 25.8 for white women in that age group (NCI, 2011). At ages 40–44 the differences are smaller; the incidence rates are 123.6 for African American women and 122.4 for white women.

3Throughout the report, incidence rates such as these are age-adjusted using the U.S. standard population for 2000. Age adjustment applies each group's incidence rates at specific ages to a single common population, the U.S. population for 2000 in this case. This process ensures that comparisons of rates are not affected by differences among the groups in the age distributions of their populations.
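Direct age adjustment, as described in the footnote, can be sketched in a few lines. The age-specific rates and standard-population weights here are hypothetical, not the actual NCI figures:

```python
# Hypothetical age-specific incidence rates per 100,000 for two groups
rates_group_a = {"20-49": 30.0, "50-69": 250.0, "70+": 400.0}
rates_group_b = {"20-49": 35.0, "50-69": 230.0, "70+": 380.0}

# Hypothetical standard-population weights (shares of a standard
# population in each age band; they must sum to 1)
standard_weights = {"20-49": 0.60, "50-69": 0.28, "70+": 0.12}

def age_adjusted_rate(age_specific_rates, weights):
    """Apply a group's age-specific rates to one common standard
    population, so comparisons are not distorted by differences in the
    groups' age structures."""
    return sum(age_specific_rates[band] * weights[band] for band in weights)

adj_a = age_adjusted_rate(rates_group_a, standard_weights)  # 136.0
adj_b = age_adjusted_rate(rates_group_b, standard_weights)  # 131.0
```

Because both groups are weighted by the same standard population, any difference between `adj_a` and `adj_b` reflects differences in the age-specific rates themselves, not in how old each population happens to be.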
Despite ongoing efforts to improve detection and treatment of breast cancer for all women, African American women continue to experience greater mortality from breast cancer compared to women from other ethnic and racial groups. Surveillance, Epidemiology, and End Results (SEER) data from the National Cancer Institute show that the 5-year survival rate for women diagnosed with breast cancer during the period 2001–2007 was 77 percent among African American women and 91 percent among white women (NCI, 2011). These differences in breast cancer survival have been attributed in part to a higher proportion of African American women being diagnosed with advanced-stage disease; only 51 percent of breast cancers among African American women are localized at diagnosis compared with 61 percent of cancers among white women (NCI, 2011). Among women diagnosed with localized cancer, the 5-year survival rate for 2001–2007 was
93 percent for African American women and 99 percent for white women (NCI, 2011), reflecting a smaller but persistent difference in outcomes. Other factors contributing to poorer survival rates for African American women may include less access to early detection and treatment services as well as differences in tumor characteristics.
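The contribution of stage at diagnosis to this survival gap can be illustrated with a back-of-the-envelope calculation that treats overall 5-year survival as a weighted average of localized and non-localized survival. The non-localized survival values and the counterfactual below are derived under that crude two-stage assumption; they are not figures reported by NCI:

```python
def nonlocalized_survival(overall, p_localized, s_localized):
    """Back out 5-year survival for non-localized disease, treating the
    overall rate as a stage-weighted average of two stage groups."""
    return (overall - p_localized * s_localized) / (1.0 - p_localized)

# Figures cited in the text (SEER, diagnoses in 2001-2007)
aa = dict(overall=0.77, p_localized=0.51, s_localized=0.93)
wh = dict(overall=0.91, p_localized=0.61, s_localized=0.99)

aa_nonloc = nonlocalized_survival(**aa)  # roughly 0.60
wh_nonloc = nonlocalized_survival(**wh)  # roughly 0.78

# Counterfactual: African American stage-specific survival combined with
# the white stage-at-diagnosis distribution (61 percent localized)
counterfactual = (wh["p_localized"] * aa["s_localized"]
                  + (1.0 - wh["p_localized"]) * aa_nonloc)
```

Under these assumptions the counterfactual comes out near 0.80, suggesting that stage at diagnosis accounts for only part of the 14-point gap; the remainder reflects within-stage differences in survival, consistent with the tumor-characteristic and access factors noted in the text.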
Among Hispanic women, the incidence of breast cancer is consistently lower than for non-Hispanic white women or African American women, with greater differences at older ages (NCI, 2006; Hines et al., 2010; Liu et al., 2011). Data from California for the period 1988–2004 show that the incidence of breast cancer was lowest among foreign-born Hispanic women: 68.2 per 100,000, compared with 93.8 per 100,000 for U.S.-born Hispanic women and 125.7 per 100,000 for non-Hispanic white women (Keegan et al., 2010). Approximately 40 percent of the Hispanic population living in the United States in 2007 was born in other countries (Grieco, 2010).
Analysis of the breast cancer experience of Hispanic women is still limited and based primarily on populations in specific areas of the United States, such as California (e.g., Keegan et al., 2010; Liu et al., 2011) or the Southwest (e.g., Hines et al., 2010). Additional research will be needed to assess whether the observations in these areas are representative of the experience of Hispanic women who live in other parts of the country and whose countries of origin and history of residence in the United States may differ from those of the women in the available studies.
The incidence of breast cancer has also traditionally been lower in Asian women, compared to white and black women, as reflected in both international and U.S. surveillance data (Stanford et al., 1995; Parkin et al., 1997, 2005; Jemal et al., 2005; Joslyn et al., 2005; Miller et al., 2008). Incidence rates commonly transition to higher levels as Asian women who migrate to the United States and their descendants experience greater acculturation. This pattern of increasing incidence among immigrants is often cited as evidence for the influence of social and environmental factors in disease risk because genetic factors are unlikely to be able to account for differences from the rates in their countries of origin (Buell, 1973; Thomas and Karagas, 1987; Ziegler et al., 1993; Kolonel and Wilkens, 2006).
Evaluating breast cancer incidence in the Asian and Pacific Islander population4 is challenging because it is highly heterogeneous, with more than 60 distinct ethnicities. There is increasing evidence that the aggregate data on breast cancer incidence for these women tend to obscure large differences, including striking elevations in incidence for some subgroups (Deapen et al., 2002; Keegan et al., 2007; McCracken et al., 2007; Miller et al., 2008). Moreover, two studies that used different methods for assessing nativity suggest that young U.S.-born women from some Asian groups, especially women of Japanese and Filipina ancestry, are actually experiencing a higher risk for breast cancer than their white or African American contemporaries (Gomez et al., 2010; Reynolds et al., 2011).

4The Asian and Pacific Islander populations are combined as a standard reporting category for race and ethnicity for many federal data collection activities.
Although Asian and Pacific Islanders, as a group, are less likely to receive an initial diagnosis of late-stage breast cancer than non-Hispanic white women (Hedeen et al., 1999; Morris and Kwong, 2004), foreign-born Asian women and some ethnic groups, including Hawaiians and South Asian Indians, are diagnosed with significantly more late-stage tumors than non-Hispanic white women (Li et al., 2003). Likewise, data from the 2001 California Health Interview Survey suggest that Asian women and Pacific Islander women have lower rates of mammography screening (67.2 percent and 63.4 percent, respectively) than non-Hispanic white women (78.1 percent) (Ponce et al., 2003a). The differences are further accentuated when disaggregated by ethnicity (53.1 percent among Korean women, 56.6 percent among Cambodian women) (Ponce et al., 2003b).
Racial and ethnic differences are also seen in terms of tumor types. The likelihood of having triple negative breast cancer, which is more difficult to treat, is significantly higher in African American women compared to women from other racial and ethnic groups (Bauer et al., 2007; Kwan et al., 2009; Stead et al., 2009). An analysis of SEER data for California found that African American women had a 1.98 percent lifetime risk of developing triple negative breast cancer, whereas Hispanic women had a 1.04 percent lifetime risk and white women had a 1.25 percent risk (Kurian et al., 2010). A high prevalence of triple negative tumors has also been reported in breast cancer cases from Nigeria and Senegal; of 507 cases, 27 percent were triple negative (Huo et al., 2009).
Reproductive Risk Factors
Several factors are generally considered to be associated with increased risk for breast cancer, including a family history of the disease, particular reproductive characteristics (e.g., earlier age at menarche, later age at menopause, later age at first live birth), and certain forms of benign breast disease, as determined by breast biopsies (ACS, 2011a). Greater mammographic density, which reflects a higher proportion of connective and epithelial tissue in the breast, is a physiologic characteristic that is consistently associated with increased risk of breast cancer (Boyd et al., 2010). Studies in twins indicate that it is a heritable trait (e.g., Boyd et al., 2002; Ursin et al., 2009).
Differences in breast cancer incidence among population groups may reflect, in part, differences among them in the patterns of these types of risk
factors. For example, data from the Third National Health and Nutrition Examination Survey (NHANES III) show that the median age at menarche for non-Hispanic black girls is 12.06 years compared to 12.25 years for Mexican American girls, and 12.55 years for non-Hispanic white girls (Chumlea et al., 2003).
In a review of epidemiologic studies, Bernstein and colleagues (2003) also found differences between African American and white women in reproductive risk factor profiles. For example, African American women have a higher birth rate than white women until age 30. This is important because while there may be a short-term increase in breast cancer risk immediately following pregnancy, earlier childbearing and higher numbers of births appear to be associated with a long-term reduction in risk. Lactation has been associated with a reduced risk of developing breast cancer; it induces additional differentiation in the breast and delays the re-initiation of ovulation. Studies included in the review conducted by Bernstein et al. (2003) found that, compared to African American women, white women are about twice as likely to breastfeed, and their cumulative time spent breastfeeding is longer.
Differences in breast cancer incidence and reproductive risk factor profiles have also been reported for Hispanic and non-Hispanic white women (e.g., Hines et al., 2010). Both premenopausal and postmenopausal Hispanic women had a higher prevalence of factors that have been associated with decreased breast cancer risk, including younger age at first birth and greater parity. But they were also more likely to have a younger age at menarche and to breastfeed less, characteristics associated with greater risk.
However, some of the associations between reproductive factors and breast cancer risk may be stronger for white non-Hispanic women than for women of other races and ethnicities. Hines and colleagues (2010) found that among premenopausal Hispanic women, only late age at first birth had a statistically significant association with increased risk of breast cancer. Reproductive factors were not associated with breast cancer risk among postmenopausal Hispanic women.
The contribution of differences in patterns of reproductive factors may also be influenced by racial and ethnic differences in risk for particular subtypes of breast cancer. Some reproductive factors appear to be more closely associated with ER+/PR+ tumors (Althuis et al., 2004; Ma et al., 2006) or lobular (versus ductal) tumors (Kotsopoulos et al., 2010; Newcomb et al., 2011). The risk for ER–/PR– and triple negative breast cancers is greater for African American women than for non-Hispanic white women, and reproductive factors have a more limited influence on risk for these forms of breast cancer.
As noted in Chapter 1, the committee adopted a broad interpretation of the environment that encompasses all factors that are not directly inherited through DNA. This definition allows for the consideration of a broad range of factors that may be encountered at any time in life and in any setting: the physiologic and developmental course of an individual, diet and other ingested substances, physical activity, microbial agents, physical and chemical agents encountered at home or work, medical treatments and interventions, social factors, and cultural practices. Figure 2-4 illustrates the multiple levels of biologic and social organization through which potential environmental exposures can influence breast cancer, and Figure 2-5 illustrates one approach to integrating this socio-ecologic perspective into investigation of potential contributions to breast cancer over the life course.
Many of these environmental influences overlap. For example, the physical environment encompasses medical interventions, dietary exposures to nutrients, energy and toxicants, ionizing radiation, and chemicals from industrial and agricultural processes and from consumer products. These in turn are influenced by the social environment, because cultural and economic factors influence diet at various stages of life, reproductive choices, energy balance, adult weight gain, body fatness, voluntary and involuntary physical activity, medical care, exposure to tobacco smoke and alcohol, and occupational exposures, including shift work. Exposures at the tissue level are further influenced by metabolic and physiologic processes that modify the body's internal environment.

FIGURE 2-5 A schematic illustration of the potential for environmental exposures at various levels and times over the life course to influence the initiation and progression of breast cancer.
SOURCE: Personal communication, R. A. Hiatt, University of California, San Francisco, September 16, 2010.
A full appreciation of environmental influences on breast cancer calls for an analysis at multiple levels (Anderson and May, 1995), from genetic and cellular mechanisms to the influence of societal factors. Applying this perspective to research requires a transdisciplinary approach. A previous Institute of Medicine committee advanced this socio-ecologic model as a way to understand the relationship of health and disease to complex societal influences (IOM, 2000; Smedley and Syme, 2001). Social determinants encompass various factors: social and economic conditions such as poverty, the conditions of work, and access to health care delivery; the chemical toxicants and pollutants associated with industrial development; and the positive aspects of human settlements that make active living and healthy eating possible (Hiatt and Breen, 2008). The socio-ecologic model also incorporates and augments discoveries in cancer biology and toxicology, in addition to those from the behavioral and social sciences.
Within this framework, the committee’s predominant focus was on exposure to physical and chemical toxicants, and on individual behavior related to diet and physical activity. When possible, the committee examined evidence regarding the implications of the timing of those exposures
across the life course. Although the committee recognizes that the nature of households, families, workplaces, communities, and societies in which people live play a major role in determining these exposures (Hiatt and Breen, 2008), the focus of this report was on the more proximate environmental exposures that may increase the risk of breast cancer. As understanding of the epidemiology, toxicology, and mechanisms of breast cancer continues to improve, efforts to develop effective interventions to mitigate risk may be aided by approaches that include modification of the social determinants of exposure to various risk factors.
Efforts to determine whether exposure to an aspect of the environment is related to the development of breast cancer depend on many types of research, including laboratory analyses of the response of cells or tissues (in vitro testing), experimental studies of effects in laboratory animals (in vivo testing), and epidemiologic studies of human subjects. U.S. regulatory agencies, including the Environmental Protection Agency (EPA) and the Food and Drug Administration (FDA), require a variety of in vitro and animal tests for cancer and other endpoints for licensing or registering pesticides, food additives, and pharmaceuticals (NRC, 2006). In laboratory studies, exposures are determined by the researcher, but in studies of human subjects, exposure assessment becomes a crucial part of the investigation.
Reviewed briefly here are basic features of this range of studies and of exposure assessment. Chapter 4 provides discussion of the challenges in using these various research tools to study breast cancer and draw valid conclusions about environmental risk factors.
In Vitro Testing
In vitro testing makes use of artificial environments to study tissues, cells, and cellular components. In the context of breast cancer, this type of testing allows for detailed examination of behavior of specific parts of larger, more complex organisms. Increasingly, in vitro testing allows for rapid analysis of a large number of variables, such as changes in gene expression. Although in vitro testing does not capture the critical interactions of the multiple systems in an intact organism, it provides a means to explore biological processes that are otherwise difficult to isolate.
In vitro tests for genotoxicity are an integral part of screening chemicals for their potential to cause DNA damage and thereby contribute to tumor formation. Various assays are used to assess gene mutations (e.g., Ames test, mouse lymphoma TK+/– assay) and structural or numerical aberrations in
chromosomes (e.g., Chinese hamster ovary cells or mouse lymphoma TK+/– assay). Chemicals that show potential for genotoxicity are often avoided in product development programs for pesticides and pharmaceuticals.
Advances in molecular genetics, proteomics, and immunohistochemistry are fine-tuning investigations of mechanisms of action and treatment for breast cancer through studies of gene amplification, hormone receptor binding, biomolecular analysis of cells derived from tissue microdissection, and genome and transcriptional analysis (Thayer and Foster, 2007; Pasqualini, 2009). For example, such tools have led to the development of selective estrogen receptor modulators (SERMs; e.g., tamoxifen and raloxifene) and down-regulators (SERDs) that have provided both new therapeutic approaches to treating breast cancer and pharmacologic approaches to the prevention of breast cancer in some women (McDonell and Wardell, 2010). Next-generation SERMs and SERDs are now in clinical trials. Such tools will also allow a deeper understanding of the cell signaling events that are disrupted in the process of breast carcinogenesis, providing a rational basis from which to identify potential environmental influences on breast cancer risk. For example, they can aid in studying the potential role of melatonin and circadian disruption as a modulator of breast cancer risk (Blask et al., 2011). High-throughput microarray methods are used to examine various global gene expression changes related to high tumor aggressiveness, potentially leading to a new breast cancer molecular taxonomy and multigene signatures that might predict outcome and response to systemic therapies (Colombo et al., 2011).
Cell cultures from normal breast tissue and from breast tumors are being used to screen for the potential for chemicals to promote the growth of breast cancer cells or to evaluate the effectiveness of various therapeutic agents. Immortalized human breast cell lines (e.g., MCF-10F) have been established to study various aspects of tumorigenicity (e.g., Russo et al., 2002), and immortalized breast cancer cell lines (e.g., MCF-7) to study tumor progress and response to therapeutic agents (Wistuba et al., 1998; Fillmore and Kuperwasser, 2008). In vitro tests of the potential for chemicals to interact with estrogen, androgen, and thyroid hormonal systems may eventually be applied to most pesticides to generate other mechanistic information related to carcinogenicity. At present, while much has been learned about the potential for hormonal activity for some chemicals, data are limited on many others. In 2009, EPA required that about two dozen pesticides be screened for these effects (EPA, 2009).
Whole Animal (In Vivo) Studies of Carcinogenicity
Rodents have long been used to study mammary tumorigenesis. Specific rat and mouse strains have been selected for routine screening of chemicals and pharmaceuticals for carcinogenic effects. This testing is generally intended to detect any indication of carcinogenicity at any site in the body; it is not designed to identify likely sites for specific human cancers, such as breast cancer. EPA's (2005) Guidelines for Carcinogen Risk Assessment notes, however, that certain modes of action (e.g., disruption of thyroid function) will have consequences for particular tissues and that this provides a basis for anticipation of site concordance between rodents and humans in certain cases. Rodent models are also widely used by research scientists to investigate mammary carcinogenesis and the effects of timing and combinations of exposure to environmental factors. Challenges in using these models are discussed in Chapter 4.
Scope of Carcinogenicity Testing
Carcinogenicity testing in two species, typically rodents, is part of the standard battery of tests required for most pharmaceuticals, pesticides, and some food additives. Registration or licensing of products that require such approval involves establishing, to the satisfaction of the appropriate government agency, that the compound can be safely used under the registered use scenarios or, in the case of a pharmaceutical, that it has an adequate “risk–benefit” ratio.
Premarket testing of chemicals used in consumer products and in industry is rarely undertaken because the federal government has limited authority to require it under the Toxic Substances Control Act, which was enacted in 1976 (GAO, 2009). Only about 15 percent of the notices submitted to EPA for manufacturing or importing new industrial chemicals have any specific health or safety data (GAO, 2009). Instead, considerable reliance is placed on evaluating, qualitatively or through modeling, the similarities in structure to compounds that are carcinogenic or mutagenic (GAO, 2005; NRC, 2006). Each year, the National Toxicology Program (NTP) of the National Institute of Environmental Health Sciences conducts carcinogenicity screening for a few chemicals that would otherwise go untested. These chemicals are selected based on concern about their potential toxicity or the extent of human exposure. In 2007, the European Union began transferring responsibility for safety testing to manufacturers under the REACH program (Registration, Evaluation, Authorisation and Restriction of Chemical Substances) (European Chemicals Agency, 2007).
Carcinogenicity testing is also generally not required before new cosmetics and dietary supplements are marketed (FDA, 2005, 2009). Manufacturers are responsible for identifying ingredients and declaring that they are safe for the intended use. The FDA does have the authority to remove products from the market if they are found to be adulterated or misbranded.
NTP Carcinogenicity Study Protocols
Whole-animal studies are conducted as part of many types of academic and industry research on breast cancer and carcinogenicity. These studies can vary widely in design, depending on their purpose. For formal carcinogenicity reviews by EPA or the International Agency for Research on Cancer (IARC), the NTP study designs for whole-animal bioassays typically represent a recognized standard for carcinogenicity testing.
Under NTP (2006) protocols, carcinogenicity testing is usually based on a 2-year chronic dosing program. Testing uses three or more exposure-level groups and one unexposed control group, with separate test groups for male and female animals. Each group typically has 50 animals. The highest dose used in the assays is usually the maximally tolerated dose, with the aim of maximizing the ability to detect effects in small numbers of animals and minimizing the loss of animals from acutely toxic effects of the test substance. Dosing usually begins when the animals are 5 to 6 weeks of age. Under revised NTP (2010) study designs, rats (but not mice) may receive in utero and lactational exposure to the test substance, which will allow the testing procedures to identify adverse effects associated with exposures at the very earliest times of life.
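The design choice of using a maximally tolerated dose with only 50 animals per group reflects a statistical constraint: detecting a modest increase in tumor incidence with small groups requires a large effect. A back-of-the-envelope exact power calculation (the tumor rates here are hypothetical, and the test is a one-sided Fisher exact comparison of one treated group against the control group) illustrates the point:

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k tumor-bearing animals among n at rate p."""
    return comb(n, k) * p**k * (1.0 - p)**(n - k)

def fisher_p(a, b, n1, n2):
    """One-sided Fisher exact p-value for observing >= a tumors among n1
    treated animals when b tumors occurred among n2 controls."""
    m = a + b
    denom = comb(n1 + n2, m)
    return sum(comb(n1, k) * comb(n2, m - k)
               for k in range(a, min(m, n1) + 1)) / denom

def power(n, p_control, p_treated, alpha=0.05):
    """Exact power of the one-sided test with n animals per group, summing
    over all outcome pairs that would reach significance."""
    return sum(
        binom_pmf(kt, n, p_treated) * binom_pmf(kc, n, p_control)
        for kt in range(n + 1)
        for kc in range(n + 1)
        if fisher_p(kt, kc, n, n) <= alpha
    )

strong = power(50, p_control=0.01, p_treated=0.20)  # large dose effect
weak = power(50, p_control=0.01, p_treated=0.05)    # modest dose effect
```

With 50 animals per group, a large induced tumor rate is detected reliably, while a modest one is likely to be missed, which is why high doses are used to maximize sensitivity within a feasible number of animals.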
The NTP currently uses Harlan Sprague Dawley rats, and one strain of mice, the B6C3F1 hybrid. Previously, other rat strains have been used (typically F344/N, although some chemicals were tested in Sprague Dawley and Osborne Mendel strains). Tests of similar design are required for pesticide registration (EPA, 1998) and pharmaceutical testing (FDA, 1997), although the animal strains used typically differ, and in utero testing is rarely performed (EPA, 2002).
At the end of the 2-year test period, the surviving animals are killed and necropsied. Any animals that die during the study period are also necropsied. To date, the NTP (2011) has tested more than 500 chemicals. Overall evaluation of the test results for carcinogenic hazard includes consideration of both malignant and benign tumors found anywhere in the animals.
Assessing the Process of Carcinogenesis and Susceptibility to Environmental Exposures
In addition to the use of experimental animals for standardized carcinogen bioassays, several animal models of chemically induced breast cancer have been used to evaluate (1) the cellular and molecular development and progression of breast cancer, and (2) the ability of environmental and developmental factors to modify breast carcinogenesis. The two most common models use induction of mammary tumors in rodents by the administration of N-methyl-N-nitrosourea (MNU) or 7,12-dimethylbenz[a]anthracene
(DMBA) (Russo and Russo, 1996; Thompson and Singh, 2000; Medina, 2010). In rats, these carcinogen-induced tumors arise from terminal end buds, which are similar in structure to the terminal ductal lobular unit in the human breast. Similar to human breast cancers, these chemically induced mammary carcinomas have altered expression of proteins that regulate cell growth and differentiation (e.g., HER2), and most rat mammary tumors express estrogen and progesterone receptors. For example, rat mammary tumors induced by MNU appear to be similar to low- to intermediate-grade human breast cancers that are ER+ and noninvasive (Chan et al., 2005).
Although these rodent models differ in important ways from human breast cancer (e.g., specific gene mutations, metastatic potential), they have been used extensively to explore mechanisms of mammary carcinogenesis and ways environmental factors influence that process. For example, studies have used DMBA-induction of mammary tumors in rats to demonstrate that obesity enhances tumor incidence and shortens the time to tumor development (e.g., Hakkak et al., 2005). These models make it possible to explore the impact of exposure to environmental agents at different times in life. For example, as discussed in Chapter 3, dioxins do not induce mammary tumors in rats in the 2-year chronic bioassay, but rats with prenatal exposure to 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) have shown altered mammary gland differentiation and an increased susceptibility to DMBA-induced mammary tumors (Jenkins et al., 2007). However, prenatal exposure of mice to TCDD delayed DMBA-induced tumor formation by 4 weeks relative to controls, and resulted in lower tumor incidence throughout the 27-week time course (Wang et al., 2011). The authors suggested that activation of the aryl hydrocarbon receptor (AhR) by TCDD slows the promotion of preneoplastic lesions to overt mammary tumors in mice. Interpreting such differences in response between rats and mice is among the challenges discussed in Chapter 4.
Another example of the use of whole animal models of carcinogen-induced mammary tumors in evaluating environmental risk factors for breast cancer was provided by La Merrill et al. (2009). Because some forms of breast cancer are associated with greater adiposity, these authors used three mouse models of breast cancer to examine the effect of prenatal TCDD exposure and high- or low-fat diet on physical characteristics associated with metabolic syndrome. The models were the DMBA mouse model and two different transgenic models of ER– breast cancer. Each model showed a different response (e.g., increase in body fat with or without changes in fasting glucose), but the TCDD exposure was associated with effects (reduced triglycerides) in only one of the models and only in the animals on the high-fat diet. The variation in response in models such as
these may help in exploring the variability in human susceptibility to factors that increase risk of breast cancer.
Case–control studies compare exposures to the factor of interest (an “exposure”) among individuals who have a disease of interest (cases) and individuals who do not have the disease (controls). The controls should come from a population that is judged comparable to the one from which the cases were identified (e.g., people with similar characteristics from the same community or the same hospital). Because of their more efficient study design, case–control studies are often done when a disease is rare or to explore a suspected association within a shorter period than a cohort approach would require. They are usually retrospective, looking back at exposure histories among the cases and controls. But assessing the timing of the exposures can be challenging. Among cases, it can be difficult to be certain that the exposure preceded the disease. Studies with retrospective data collection that involves patient interviews can be subject to recall bias.6 For example, cases, who have been diagnosed with cancer and who are likely to have thought carefully about why they have it, may be more likely to recall an exposure than controls, who do not have the disease and therefore may not have thought quite as carefully about whether they may have been exposed.
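The association measure typically estimated from a case–control study is the odds ratio. A minimal sketch with hypothetical counts (the function name and the example table are ours, not from any particular analysis package) computes the odds ratio and an approximate 95 percent confidence interval using the Woolf log-based method:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI (Woolf method) from a 2x2 table:
    a = exposed cases,    b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = exp(log(or_) - z * se)
    hi = exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical study: 100 cases, 100 controls
or_, lo, hi = odds_ratio_ci(a=40, b=60, c=20, d=80)
```

In this made-up table the exposure odds among cases (40:60) are higher than among controls (20:80), yielding an odds ratio of about 2.7 with a confidence interval that excludes 1, the kind of result a case–control analysis is designed to surface.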
Cohort studies compare the occurrence of health outcomes among groups with different levels of exposure to a factor of interest. These studies may be prospective, beginning before individuals have been diagnosed with a disease and following them for a given period of time, or retrospective, using records or interviews to collect information about past exposures and health outcomes. For example, cohorts of smokers and nonsmokers could be followed to assess the incidence of lung cancer in each group. A prospective study ensures that exposure precedes diagnosis, but exposure levels are not controlled by the investigator. Collection of information on exposures that vary over time is difficult and often not carried out with sufficient detail. Cohort studies avoid the problem of recall bias, but they can be subject to other forms of bias. The time frame for prospective cohort studies may be several years or as long as decades, depending on the hypothesized nature of the relation between the exposure(s) and the disease being studied. With breast cancer, for example, the disease may become evident only many years after an exposure of interest, so cohorts must be followed long enough to allow for this interval. If childhood or prenatal exposures play a role, then it could require five or more decades of follow-up. Extended follow-up of a study population can be expensive and administratively challenging. A listing of approximately 50 cohorts in the United States and other countries that have investigated breast cancer risks has been compiled by the Silent Spring Institute (2011). The listing illustrates the variation in characteristics and size of these study populations.

5 Additional information about study design and analysis is available from sources such as Rothman (2002) and Szklo and Nieto (2004).
6 Forms of bias in epidemiologic studies are discussed in Chapter 4.
Controlled trials, also referred to as clinical trials, are experiments in which the investigator decides who receives the treatment (exposure) and who is in the comparison group. If the assignment is made at random and the sample size is adequate to ensure that confounding is minimized by the random assignment, then the result of the experiment can have a causal interpretation. For example, to determine whether a medication that lowers serum cholesterol prevents heart attacks, one group of individuals can be treated with a cholesterol-lowering medication and their cholesterol levels and incidence of heart attacks compared with those of a control group that did not receive the intervention. If the study is sufficiently large and runs over a long enough period for an adequate number of events to occur in the comparison group, and if assignment to treatment is random, then any reduction in the incidence of heart attacks among the treated group, relative to the controls, can be interpreted as causal. The comparison of cholesterol measurements can also be used in drawing conclusions about the mechanism of action of the medication, although other mechanisms would also need to be taken into account. Studies investigating preventive care may be referred to as intervention trials. If an exposure is potentially harmful, controlled trials can examine ways to minimize or eliminate the exposure, but studies that deliberately expose participants to something expected to be harmful are not done. An optimal design for a clinical trial includes not only random assignment of study participants to the treatment or comparison group but also blinding of study participants and researchers to those assignments. Such blinding minimizes bias in the assessment of the outcomes.
Studying the potential effects of environmental factors on risk for breast cancer requires some basis for distinguishing the women who have been exposed to the factor from those who have not. Exposure assessment is the process of establishing that an exposure has occurred and determining critical features of the exposure, including who is exposed and the magnitude, route, and timing of exposure. Errors in classifying who is more and who is less exposed (exposure misclassification) can limit the ability of a study to determine whether the environmental factor is associated with an increase or decrease in risk for breast cancer.
The approach to exposure assessment may depend on the type of study, the nature of the environmental factor of interest, the way exposure occurs, and the tools available to measure the exposure. In clinical trials or intervention trials, the population to be exposed and the exposure are determined in advance by the researchers. Even so, study participants may deviate from their prescribed exposures. In cohort and case–control studies, exposure status can sometimes be objectively determined (e.g., by measuring weight), but it often depends on reports by study participants of past or present experience (e.g., exposure to tobacco smoke in childhood or use of specific products in the home). Researchers may also use indirect means to estimate exposures, such as residence in a particular locality or distance from a particular source of concern (e.g., an air pollution source). Exposure to some chemicals can be established with tests of biologic specimens (e.g., blood, urine), but many exposures are not detectable in this manner and collection of specimens may not be possible. Because the first steps in breast cancer may begin decades before the diagnosis, relevant exposures may occur several decades before a cancer is detected.
Historically, studies in occupational settings have been an important means for identifying chemical carcinogens. The types and amounts of chemicals used may be documented, and exposure levels may be higher than in other settings. Studies in an occupational setting may be able to draw on records of job histories, understanding of production processes and chemicals used, or data from personal or area sampling. Exposure of certain workers to some chemicals may be thousands of times greater (or more) than that experienced by the general public, while other workers with different job tasks might experience a wide range of exposures. These pronounced variations in exposure allow for firmer conclusions as to whether exposure is associated with risk of disease. When exposure levels are low, contrasts between the exposed and unexposed are smaller, and associations with differences in disease risk may be more difficult to detect. However, the relatively small number of women in industries with heavy exposures, except during World War II, has limited the opportunity to study risks for breast cancer in those settings.
A potentially hazardous environmental factor can only pose a risk when it can enter the body and interact with tissues where it can do harm. Thus, an understanding of the possible points of entry of a given substance into the body, called “routes of exposure,” is fundamental to evaluating its potential effects. These routes of exposure are inhalation, ingestion, or contact with the skin (dermal exposure). In occupational settings, inhalation is frequently the primary route of exposure, with dermal contact as a secondary route. In the general population, ingestion and dermal exposure play a large role, but inhalation is highly relevant for tobacco smoke and other air pollutants. Sometimes potential routes of exposure can be overlooked. For example, when taking showers, people experience dermal exposure to chemicals in the water supply, but showers also present an opportunity to inhale (typically low levels of) any water contaminants that readily volatilize.
The potential effect of an environmental exposure is usually strongly influenced by the magnitude of that exposure—the dose. A higher dose of a hazardous exposure is generally more likely to be associated with adverse health effects than a lower dose is. Factors that influence dose include the duration and frequency of exposure and the biologic processes that govern the absorption, distribution, metabolism, excretion, and storage of a substance in the body. The results of these toxicokinetic processes differ depending on the substance introduced into the body. Some ingested chemicals, for example, are poorly absorbed and rapidly excreted, while others may be readily absorbed, transformed by metabolism into new substances, and possibly stored in body tissues such as fat. The route of exposure may influence how the body responds to a substance. Also, differences among individuals in their genetics or exposure to other risk factors can result in differing responses to equal doses of a substance.
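The factors named here (magnitude, duration, frequency, body weight) combine in the generic intake equation commonly used in exposure assessment. The sketch below is illustrative only; the equation form is the standard ADD = (C × IR × EF × ED) / (BW × AT), but every numeric value is hypothetical, not drawn from this report.

```python
# Average daily dose (ADD) for an ingested contaminant, using the generic
# intake equation ADD = (C * IR * EF * ED) / (BW * AT).
# All values are hypothetical, chosen only to illustrate the arithmetic.
concentration = 0.005          # C: contaminant in drinking water, mg/L
intake_rate = 2.0              # IR: water ingested, L/day
exposure_frequency = 350       # EF: days exposed per year
exposure_duration = 30         # ED: years of exposure
body_weight = 70               # BW: kg
averaging_time = 30 * 365      # AT: days over which the dose is averaged

average_daily_dose = (concentration * intake_rate
                      * exposure_frequency * exposure_duration) / (
                      body_weight * averaging_time)
print(f"{average_daily_dose:.2e} mg/kg-day")  # 1.37e-04 mg/kg-day
```

Doubling the exposure frequency or halving the body weight doubles the dose, which is why the same ambient concentration can produce very different doses across individuals.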
Estimates of disease risk associated with a factor of interest—such as a personal characteristic (e.g., age), an environmental exposure (e.g., alcohol consumption or radiation exposure), or a medical treatment (e.g., a prescribed medication)—can be measured in multiple ways, including absolute risk, relative risk, hazard ratios, odds ratios, attributable risk, population attributable risk, and number needed to treat (NNT) or number needed to harm (NNH). The measure that is used depends on the study design, the available data, and in some cases the purpose for which the information is presented.7
In case–control studies, the prevalence of the factor of interest among cases and controls is compared using an odds ratio: the odds that a case was exposed compared with the odds that a control was exposed. An odds ratio of 1.0 means that cases and controls were equally likely to have been exposed, so the exposure is not associated with the disease and is not a risk factor. An odds ratio statistically significantly less than 1.0 means that cases were less likely to have been exposed than controls. An odds ratio statistically significantly greater than 1.0 indicates that the exposure is more likely to be reported among the case group than among the control group, indicating that the exposure is statistically associated with the disease and thus is a potential risk factor for the disease.

7 Additional methodologic information is available from sources such as Rothman (2002) and Jewell (2004).
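With a hypothetical 2×2 table (counts invented purely for illustration), the odds ratio works out as follows:

```python
# Hypothetical case–control counts (illustrative only):
#                cases   controls
# exposed          40        20
# unexposed        60        80
exposed_cases, exposed_controls = 40, 20
unexposed_cases, unexposed_controls = 60, 80

odds_exposure_cases = exposed_cases / unexposed_cases            # 40/60
odds_exposure_controls = exposed_controls / unexposed_controls   # 20/80

odds_ratio = odds_exposure_cases / odds_exposure_controls
print(round(odds_ratio, 2))  # 2.67: cases had ~2.7x the odds of exposure
```

Note that the odds ratio says nothing about how common the disease is; it compares only the odds of exposure between the two groups.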
Cohort studies typically use the measure of relative risk or the hazard ratio. Relative risk is a ratio of the absolute risk (incidence) of disease in an exposed group (or groups with different levels of exposure) to the absolute risk (incidence) of disease in an unexposed group (or some other designated comparison group). A hazard ratio incorporates information on the pace at which events (e.g., cases of breast cancer) occur over the course of a study. Clinical trials also use relative risk and hazard ratios. The relative risk is interpreted in much the same way as the odds ratio. A relative risk of 1.0 means the exposure is not associated with development of disease; a ratio that is statistically significantly less than 1.0 means that those who were exposed were less likely to develop the disease than those who were not (indicating that the exposure is protective); and a ratio that is statistically significantly greater than 1.0 means that the exposure is associated with the disease, indicating that it is potentially a risk factor for the disease.
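The relative risk calculation can be sketched the same way, again with invented cohort counts:

```python
# Hypothetical cohort counts (illustrative only):
exposed_cases, exposed_n = 30, 1000
unexposed_cases, unexposed_n = 10, 1000

risk_exposed = exposed_cases / exposed_n        # incidence in the exposed group
risk_unexposed = unexposed_cases / unexposed_n  # incidence in the unexposed group

relative_risk = risk_exposed / risk_unexposed
print(round(relative_risk, 2))  # 3.0: triple the incidence among the exposed
```

Unlike the odds ratio, the relative risk is built directly from incidence in each group, which is why it is the natural measure for cohort designs.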
Relative risk estimates and odds ratios represent an estimate of the strength of the association of a risk factor with breast cancer, but by themselves they do not provide insight into the underlying incidence of the disease and the absolute impact of a given factor. A relative risk of 2.0 means that a factor is associated with a doubling of the incidence of the health outcome in the exposed group compared to the unexposed. But this can mean an increase to 2 cases per 100,000 people or 200 cases per 100,000 people, depending on whether the underlying incidence is 1 case per 100,000 people or 100 cases per 100,000 people. Measures such as NNT and NNH are other ways of relating estimates of risk to absolute numbers. NNT is the number of people who would have to receive a treatment during a given time period for one person to benefit; NNH is the analogous number of people who would have to be exposed to a risk factor for one additional person to be harmed.
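The arithmetic in this paragraph can be made concrete in code. The relative risk of 2.0 and the two baseline rates are the illustrative values used above; NNH is computed here as the reciprocal of the absolute risk increase, a common convention.

```python
relative_risk = 2.0

# Same relative risk, very different absolute impact:
for baseline_per_100k in (1, 100):
    baseline = baseline_per_100k / 100_000
    risk_exposed = baseline * relative_risk
    excess_per_100k = (risk_exposed - baseline) * 100_000
    number_needed_to_harm = round(1 / (risk_exposed - baseline))
    print(baseline_per_100k, round(excess_per_100k), number_needed_to_harm)
# prints: 1 1 100000  (1 excess case per 100,000; NNH 100,000)
#         100 100 1000 (100 excess cases per 100,000; NNH 1,000)
```

A doubling of a rare outcome leaves the absolute excess tiny, while the same doubling of a common outcome is far more consequential.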
Other measures that are used to assess the impact of a risk factor include attributable risk (AR) and population attributable risk (PAR). The AR is defined as the percentage of cases that occur in the exposed group that are in excess of the cases in the comparison group. The PAR is a population-based measure of the percentage of excess cases associated with the exposure of interest that also takes into account the distribution of the risk factor within the population. If a risk factor is rare, it may contribute only a small proportion of a population’s disease risk, even if the incidence of the disease is much higher among those who are exposed (which would produce a high relative risk). To adequately estimate the PAR requires high-quality studies in which confounding and overlapping contributions from multiple factors are analyzed appropriately. There are numerous pitfalls in interpreting the PAR (discussed in Chapter 4) (Rockhill et al., 1998). Ideally, the PAR provides information on the percentage of disease that can be eliminated by avoiding the exposure, but the variation in estimates of PAR underscores how difficult it is to separate the effects from multiple risk factors. Because of this problem, and because PARs for individual factors cannot simply be added together, PARs are sometimes calculated for a group of factors rather than single factors. Appendix D shows, for instance, a range of estimated PAR values (see e.g., physical activity or hormone therapy). These ranges may reflect variation in the contribution of a given factor across different populations, or variation in the degree to which the different studies adequately controlled confounding, or a combination of the two.
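A standard single-factor form of the PAR is Levin's formula, PAR = p(RR − 1) / [1 + p(RR − 1)], where p is the prevalence of exposure in the population. The prevalences and relative risks below are invented, but they illustrate the point about rare factors made above:

```python
def population_attributable_risk(prevalence, relative_risk):
    """Levin's formula: the share of cases attributable to the exposure."""
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

# A rare, strong factor and a common, weak factor can carry the same PAR:
rare_strong = population_attributable_risk(0.05, 3.0)   # 5% exposed, RR 3.0
common_weak = population_attributable_risk(0.50, 1.2)   # 50% exposed, RR 1.2
print(round(rare_strong, 3), round(common_weak, 3))  # 0.091 0.091
```

Both scenarios attribute about 9 percent of cases to the exposure, even though the relative risks differ greatly, which is why PAR estimates depend as much on how widespread an exposure is as on how strong its association is.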
Overall, breast cancer becomes increasingly common as women grow older, but the patterns of the disease vary among women in different racial and ethnic groups. These differences are likely to reflect the influence of a mix of genetic and environmental factors. Although the scope of environmental influences can be understood to encompass cultural and societal factors, most of the human, animal, and mechanistic research to date has focused more narrowly on individual exposures and the related biological processes. In the following chapter, the committee examines evidence regarding a set of environmental factors that illustrate varied types of exposures that may occur and the range of evidence available to assess whether exposure is associated with increased risk of breast cancer.
ACS (American Cancer Society). 2011a. Breast cancer facts and figures 2011–2012. Atlanta, GA: ACS. http://www.cancer.org/acs/groups/content/@epidemiologysurveilance/documents/document/acspc-030975.pdf (accessed October 24, 2011).
ACS. 2011b. Cancer facts and figures 2011. Atlanta, GA: ACS. http://www.cancer.org/Research/CancerFactsFigures/CancerFactsFigures/cancer-facts-figures-2011 (accessed June 22, 2011).
Allred, D. C. 2010. Ductal carcinoma in situ: Terminology, classification, and natural history. J Natl Cancer Inst Monogr 2010(41):134–138.
Althuis, M. D., J. H. Fergenbaum, M. Garcia-Closas, L. A. Brinton, M. P. Madigan, and M. E. Sherman. 2004. Etiology of hormone receptor-defined breast cancer: A systematic review of the literature. Cancer Epidemiol Biomarkers Prev 13(10):1558–1568.
Anderson, L. M., and D. S. May. 1995. Has the use of cervical, breast, and colorectal cancer screening increased in the United States? Am J Public Health 85(6):840–842.
Anderson, W. F., I. Jatoi, and S. S. Devesa. 2006a. Assessing the impact of screening mammography: Breast cancer incidence and mortality rates in Connecticut (1943–2002). Breast Cancer Res Treat 99(3):333–340.
Anderson, W. F., R. M. Pfeiffer, G. M. Dores, and M. E. Sherman. 2006b. Comparison of age distribution patterns for different histopathologic types of breast carcinoma. Cancer Epidemiol Biomarkers Prev 15(10):1899–1905.
Anderson, W. F., B. E. Chen, L. A. Brinton, and S. S. Devesa. 2007. Qualitative age interactions (or effect modification) suggest different cancer pathways for early-onset and late-onset breast cancers. Cancer Causes Control 18(10):1187–1198.
Anderson, W. F., I. Jatoi, J. Tse, and P. S. Rosenberg. 2010. Male breast cancer: A population-based comparison with female breast cancer. J Clin Oncol 28(2):232–239.
Arendt, L. M., J. A. Rudnick, P. J. Keller, and C. Kuperwasser. 2010. Stroma in breast development and disease. Semin Cell Dev Biol 21(1):11–18.
Armes, J. E., L. Trute, D. White, M. C. Southey, F. Hammet, A. Tesoriero, A. M. Hutchins, G. S. Dite, et al. 1999. Distinct molecular pathogeneses of early-onset breast cancers in BRCA1 and BRCA2 mutation carriers: A population-based study. Cancer Res 59(8):2011–2017.
Atchley, D. P., C. T. Albarracin, A. Lopez, V. Valero, C. I. Amos, A. M. Gonzalez-Angulo, G. N. Hortobagyi, and B. K. Arun. 2008. Clinical and pathologic characteristics of patients with BRCA-positive and BRCA-negative breast cancer. J Clin Oncol 26(26):4282–4288.
Bauer, K. R., M. Brown, R. D. Cress, C. A. Parise, and V. Caggiano. 2007. Descriptive analysis of estrogen receptor (ER)-negative, progesterone receptor (PR)-negative, and HER2-negative invasive breast cancer, the so-called triple-negative phenotype: A population-based study from the California Cancer Registry. Cancer 109(9):1721–1728.
Bernstein, L., C. R. Teal, S. Joslyn, and J. Wilson. 2003. Ethnicity-related variation in breast cancer risk factors. Cancer 97(1 Suppl):222–229.
Blask, D. E., S. M. Hill, R. T. Dauchy, S. Xiang, L. Yuan, T. Duplessis, L. Mao, E. Dauchy, et al. 2011. Circadian regulation of molecular, dietary, and metabolic signaling mechanisms of human breast cancer growth by the nocturnal melatonin signal and the consequences of its disruption by light at night. J Pineal Res 51(3):259–269.
Boyd, N. F., G. S. Dite, J. Stone, A. Gunasekara, D. R. English, M. R. McCredie, G. G. Giles, D. Tritchler, et al. 2002. Heritability of mammographic density, a risk factor for breast cancer. N Engl J Med 347(12):886–894.
Boyd, N. F., L. J. Martin, M. Bronskill, M. J. Yaffe, N. Duric, and S. Minkin. 2010. Breast tissue composition and susceptibility to breast cancer. J Natl Cancer Inst 102(16):1224–1237.
Breen, N., D. K. Wagener, M. L. Brown, W. W. Davis, and R. Ballard-Barbash. 2001. Progress in cancer screening over a decade: Results of cancer screening from the 1987, 1992, and 1998 National Health Interview Surveys. J Natl Cancer Inst 93(22):1704–1713.
Breen, N., J. F. Gentleman, and J. S. Schiller. 2011. Update on mammography trends: Comparisons of rates in 2000, 2005, and 2008. Cancer 117(10):2209–2218.
Buell, P. 1973. Changing incidence of breast cancer in Japanese-American women. J Natl Cancer Inst 51(5):1479–1483.
Burke, W., A. H. Olsen, L. E. Pinsky, S. E. Reynolds, and N. A. Press. 2001. Misleading presentation of breast cancer in popular magazines. Eff Clin Pract 4(2):58–64.
Canzian, F., D. G. Cox, V. W. Setiawan, D. O. Stram, R. G. Ziegler, L. Dossus, L. Beckmann, H. Blanche, et al. 2010. Comprehensive analysis of common genetic variation in 61 genes related to steroid hormone and insulin-like growth factor-I metabolism and breast cancer risk in the NCI breast and prostate cancer cohort consortium. Hum Mol Genet 19(19):3873–3884.
Carey, L. A., C. M. Perou, C. A. Livasy, L. G. Dressler, D. Cowan, K. Conway, G. Karaca, M. A. Troester, et al. 2006. Race, breast cancer subtypes, and survival in the Carolina Breast Cancer Study. JAMA 295(21):2492–2502.
CDC (Centers for Disease Control and Prevention). 2010. National Program of Cancer Registries. http://www.cdc.gov/cancer/npcr/about.htm (accessed November 8, 2011).
Chan, M. M., X. Lu, F. M. Merchant, J. D. Iglehart, and P. L. Miron. 2005. Gene expression profiling of NMU-induced rat mammary tumors: Cross species comparison with human breast cancer. Carcinogenesis 26(8):1343–1353.
Chumlea, W. C., C. M. Schubert, A. F. Roche, H. E. Kulin, P. A. Lee, J. H. Himes, and S. S. Sun. 2003. Age at menarche and racial comparisons in U.S. girls. Pediatrics 111(1):110–113.
Clarke, C. A., S. L. Glaser, C. S. Uratsu, J. V. Selby, L. H. Kushi, and L. J. Herrinton. 2006. Recent declines in hormone therapy utilization and breast cancer incidence: Clinical and population-based evidence. J Clin Oncol 24(33):e49–e50.
Colombo, P. E., F. Milanezi, B. Weigelt, and J. S. Reis-Filho. 2011. Microarrays in the 2010s: The contribution of microarray-based gene expression profiling to breast cancer classification, prognostication and prediction. Breast Cancer Res 13(3):212.
De Morgan, S., S. Redman, K. J. White, B. Cakir, and J. Boyages. 2002. “Well, have I got cancer or haven’t I?” The psycho-social issues for women diagnosed with ductal carcinoma in situ. Health Expect 5(4):310–318.
Deapen, D., L. Liu, C. Perkins, L. Bernstein, and R. K. Ross. 2002. Rapidly rising breast cancer incidence rates among Asian-American women. Int J Cancer 99(5):747–750.
DeSantis, C., N. Howlader, K. A. Cronin, and A. Jemal. 2011. Breast cancer incidence rates in U.S. women are no longer declining. Cancer Epidemiol Biomarkers Prev 20(5):733–739.
Easton, D. F., K. A. Pooley, A. M. Dunning, P. D. Pharoah, D. Thompson, D. G. Ballinger, J. P. Struewing, J. Morrison, et al. 2007. Genome-wide association study identifies novel breast cancer susceptibility loci. Nature 447(7148):1087–1093.
EPA (Environmental Protection Agency). 1998. Health effects test guidelines. OPPTS 870.4200: Carcinogenicity. EPA 712–C–98–211. Washington, DC: Government Printing Office. http://hero.epa.gov/index.cfm?action=search.view&reference_ID=6378 (accessed November 16, 2011).
EPA. 2002. A review of the reference dose and reference concentration processes. EPA/630/P-02/002F. Washington, DC: EPA. http://www.epa.gov/raf/publications/pdfs/rfd-final.pdf (accessed November 17, 2011).
EPA. 2005. Guidelines for carcinogen risk assessment. Washington, DC: EPA. http://www.epa.gov/osa/mmoaframework/pdfs/CANCER-GUIDELINES-FINAL-3-25-05%5B1%5D.pdf (accessed October 23, 2011).
EPA. 2009. Final list of initial pesticide active ingredients and pesticide inert ingredients to be screened under the Federal Food, Drug, and Cosmetic Act. Federal Register 74(71): 17579–17585. http://www.epa.gov/scipoly/oscpendo/pubs/final_list_frn_041509.pdf (accessed October 25, 2011).
European Chemicals Agency. 2007. REACH. http://echa.europa.eu/reach_en.asp (accessed November 8, 2011).
FDA (Food and Drug Administration). 1997. Guidance for industry: S1B testing for carcinogenicity of pharmaceuticals. Rockville, MD: FDA. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/ucm074916.pdf (accessed November 17, 2011).
FDA. 2005. Fair Packaging and Labeling Act, Title 15—Commerce and Trade, Chapter 39—Fair Packaging and Labeling Program. http://www.fda.gov/regulatoryinformation/legislation/ucm148722.htm (accessed November 8, 2011).
FDA. 2009. Overview of dietary supplements. http://www.fda.gov/Food/DietarySupplements/ConsumerInformation/ucm110417.htm (accessed November 8, 2011).
Fillmore, C. M., and C. Kuperwasser. 2008. Human breast cancer cell lines contain stem-like cells that self-renew, give rise to phenotypically diverse progeny and survive chemotherapy. Breast Cancer Res 10(2):R25.
Foulkes, W. D., I. M. Stefansson, P. O. Chappuis, L. R. Begin, J. R. Goffin, N. Wong, M. Trudel, and L. A. Akslen. 2003. Germline BRCA1 mutations and a basal epithelial phenotype in breast cancer. J Natl Cancer Inst 95(19):1482–1485.
Foulkes, W. D., I. E. Smith, and J. S. Reis-Filho. 2010. Triple-negative breast cancer. N Engl J Med 363(20):1938–1948.
GAO (Government Accountability Office). 2005. Chemical regulation: Options exist to improve EPA’s ability to assess health risks and manage its chemical review program. GAO-05-458. Washington, DC: GAO. http://www.gao.gov/new.items/d05458.pdf (accessed December 12, 2011).
GAO. 2009. Chemical regulation: Observations on improving the Toxic Substances Control Act. GAO-10-292T. Washington, DC: GAO. http://www.gao.gov/products/GAO-10-292T (accessed October 24, 2011).
Glass, A. G., J. V. Lacey, Jr., J. D. Carreon, and R. N. Hoover. 2007. Breast cancer incidence, 1980–2006: Combined roles of menopausal hormone therapy, screening mammography, and estrogen receptor status. J Natl Cancer Inst 99(15):1152–1161.
Gomez, S. L., C. A. Clarke, S. J. Shema, E. T. Chang, T. H. Keegan, and S. L. Glaser. 2010. Disparities in breast cancer survival among Asian women by ethnicity and immigrant status: A population-based study. Am J Public Health 100(5):861–869.
Grieco, E. 2010. Race and Hispanic origin of the foreign-born population in the United States: 2007. http://www.census.gov/prod/2010pubs/acs-11.pdf (accessed November 8, 2011).
Gu, F., F. R. Schumacher, F. Canzian, N. E. Allen, D. Albanes, C. D. Berg, S. I. Berndt, H. Boeing, et al. 2010. Eighteen insulin-like growth factor pathway genes, circulating levels of IGF-I and its binding protein, and risk of prostate and breast cancer. Cancer Epidemiol Biomarkers Prev 19(11):2877–2887.
Hakkak, R., A. W. Holley, S. L. Macleod, P. M. Simpson, G. J. Fuchs, C. H. Jo, T. Kieber-Emmons, and S. Korourian. 2005. Obesity promotes 7,12-dimethylbenz(a)anthracene-induced mammary tumor development in female Zucker rats. Breast Cancer Res 7(5):R627–R633.
Hedeen, A. N., E. White, and V. Taylor. 1999. Ethnicity and birthplace in relation to tumor size and stage in Asian American women with breast cancer. Am J Public Health 89(8):1248–1252.
Hersh, A. L., M. L. Stefanick, and R. S. Stafford. 2004. National use of postmenopausal hormone therapy: Annual trends and response to recent evidence. JAMA 291(1):47–53.
Hiatt, R. A., and N. Breen. 2008. The social determinants of cancer: A challenge for transdisciplinary science. Am J Prev Med 35(2 Suppl):S141–S150.
Hines, L. M., B. Risendal, M. L. Slattery, K. B. Baumgartner, A. R. Giuliano, C. Sweeney, D. E. Rollison, and T. Byers. 2010. Comparative analysis of breast cancer risk factors among Hispanic and non-Hispanic white women. Cancer 116(13):3215–3223.
Hulley, S., D. Grady, T. Bush, C. Furberg, D. Herrington, B. Riggs, and E. Vittinghoff. 1998. Randomized trial of estrogen plus progestin for secondary prevention of coronary heart disease in postmenopausal women. JAMA 280(7):605–613.
Hunter, D. J., P. Kraft, K. B. Jacobs, D. G. Cox, M. Yeager, S. E. Hankinson, S. Wacholder, Z. Wang, et al. 2007. A genome-wide association study identifies alleles in FGFR2 associated with risk of sporadic postmenopausal breast cancer. Nat Genet 39(7):870–874.
Hunter, D. J., D. Altshuler, and D. J. Rader. 2008. From Darwin’s finches to canaries in the coal mine—mining the genome for new biology. N Engl J Med 358(26):2760–2763.
Huo, D., F. Ikpatt, A. Khramtsov, J. M. Dangou, R. Nanda, J. Dignam, B. Zhang, T. Grushko, et al. 2009. Population differences in breast cancer: Survey in indigenous African women reveals overrepresentation of triple-negative breast cancer. J Clin Oncol 27(27):4515–4521.
IOM (Institute of Medicine). 2000. Promoting health: Intervention strategies from social and behavioral research. Washington, DC: National Academy Press.
Jemal, A., T. Murray, E. Ward, A. Samuels, R. C. Tiwari, A. Ghafoor, E. J. Feuer, and M. J. Thun. 2005. Cancer statistics, 2005. CA Cancer J Clin 55(1):10–30.
Jemal, A., E. Ward, and M. J. Thun. 2007. Recent trends in breast cancer incidence rates by age and tumor characteristics among U.S. women. Breast Cancer Res 9(3):R28.
Jenkins, S., C. Rowell, J. Wang, and C. A. Lamartiniere. 2007. Prenatal TCDD exposure predisposes for mammary cancer in rats. Reprod Toxicol 23(3):391–396.
Jewell, N. P. 2004. Statistics for epidemiology. Boca Raton, FL: Chapman & Hall/CRC.
Johansen Taber, K. A., L. R. Morisy, A. J. Osbahr, 3rd, and B. D. Dickinson. 2010. Male breast cancer: Risk factors, diagnosis, and management (review). Oncol Rep 24(5):1115–1120.
Johnson, M. C. 2010. Anatomy and physiology of the breast. In Management of breast diseases, edited by I. Jatoi and M. Kaufmann. Berlin, Germany: Springer-Verlag.
Joslyn, S. A., M. L. Foote, K. Nasseri, S. S. Coughlin, and H. L. Howe. 2005. Racial and ethnic disparities in breast cancer rates by age: NAACCR Breast Cancer Project. Breast Cancer Res Treat 92(2):97–105.
Keegan, T. H., S. L. Gomez, C. A. Clarke, J. K. Chan, and S. L. Glaser. 2007. Recent trends in breast cancer incidence among 6 Asian groups in the Greater Bay Area of Northern California. Int J Cancer 120(6):1324–1329.
Keegan, T. H., E. M. John, K. M. Fish, T. Alfaro-Velcamp, C. A. Clarke, and S. L. Gomez. 2010. Breast cancer incidence patterns among California Hispanic women: Differences by nativity and residence in an enclave. Cancer Epidemiol Biomarkers Prev 19(5):1208–1218.
Kerlikowske, K. 2010. Epidemiology of ductal carcinoma in situ. J Natl Cancer Inst Monogr 41:139–141.
Kolonel, L. N., and L. Wilkens. 2006. Migrant studies. In Cancer epidemiology and prevention, 3rd ed. Edited by D. Schottenfeld and J. Fraumeni. New York: Oxford University Press.
Korde, L. A., J. A. Zujewski, L. Kamin, S. Giordano, S. Domchek, W. F. Anderson, J. M. Bartlett, K. Gelmon, et al. 2010. Multidisciplinary meeting on male breast cancer: Summary and research recommendations. J Clin Oncol 28(12):2114–2122.
Kotsopoulos, J., W. Y. Chen, M. A. Gates, S. S. Tworoger, S. E. Hankinson, and B. A. Rosner. 2010. Risk factors for ductal and lobular breast cancer: Results from the Nurses’ Health Study. Breast Cancer Res 12(6):R106.
Kravchenko, J., I. Akushevich, V. L. Seewaldt, A. P. Abernethy, and H. K. Lyerly. 2011. Breast cancer as heterogeneous disease: Contributing factors and carcinogenesis mechanisms. Breast Cancer Res Treat 128(2):483–493.
Kreike, B., M. van Kouwenhove, H. Horlings, B. Weigelt, H. Peterse, H. Bartelink, and M. J. van de Vijver. 2007. Gene expression profiling and histopathological characterization of triple-negative/basal-like breast carcinomas. Breast Cancer Res 9(5):R65.
Kurian, A. W., K. Fish, S. J. Shema, and C. A. Clarke. 2010. Lifetime risks of specific breast cancer subtypes among women in four racial/ethnic groups. Breast Cancer Res 12(6):R99.
Kwan, M. L., L. H. Kushi, E. Weltzien, B. Maring, S. E. Kutner, R. S. Fulton, M. M. Lee, C. B. Ambrosone, et al. 2009. Epidemiology of breast cancer subtypes in two prospective cohort studies of breast cancer survivors. Breast Cancer Res 11(3):R31.
La Merrill, M., D. S. Baston, M. S. Denison, L. S. Birnbaum, D. Pomp, and D. W. Threadgill. 2009. Mouse breast cancer model-dependent changes in metabolic syndrome-associated phenotypes caused by maternal dioxin exposure and dietary fat. Am J Physiol Endocrinol Metab 296(1):E203–E210.
Li, C. I., K. E. Malone, and J. R. Daling. 2003. Differences in breast cancer stage, treatment, and survival by race and ethnicity. Arch Intern Med 163(1):49–56.
Today, a large majority of the food we consume comes from stores, often produced on massive farms halfway across the world. If you prefer to source some of your food a little closer to home, you might be surprised to find out that no matter where you live, there is an abundance of edible greenery free for the picking. Here are five easy ways to start urban foraging in your neighbourhood today.
#1. Know Your Local Bylaws
Before you start foraging from any plants or trees you see, double-check your local bylaws. Some municipalities don’t allow foraging on city property like parks or ravines. If you have permission, find out whether your city sprays any pesticides or other chemicals. This will give you a better understanding of areas to avoid or whether a simple rinsing will get rid of any pesticides. Either way, remember to always wash your harvest before you eat.
#2. Know What You’re Gathering
Make sure you always know what you are foraging. Numerous plants, mushrooms and fruits can make you very sick, so always be absolutely sure what you’re picking is edible, safe, and in season. It’s also a good idea to have a plan of how you will harvest, store, and use your foraged edibles.
Related: Wildcrafting With Kids
While out foraging, carry an identification book for your local plants and trees to double-check what you plan to harvest. If you don’t want to carry a book with you, just take a small sample of an unidentified plant home with you and do some research so you know for next time.
#3. Know Where to Look
In any urban area, you will find a wide variety of edible greenery. However, knowing how to forage won’t do you much good if you don’t know where to look.
A great resource, Falling Fruit, allows users to map the exact location of edible finds in urban areas for others to discover. If you want to find some on your own, ravines, wooded areas, and parks will definitely have some hidden gems. If you're on the hunt for mushrooms, check near the base of certain trees, or in areas that have a lot of dead trees on the forest floor.
#4. Join a Local Urban Foraging Group
In recent years, urban foraging has gained quite a following in many of the metropolitan areas of North America. Cities like Los Angeles, New York City, Portland and Seattle all have organizations that teach beginners ways to start urban foraging with a safe and sustainable technique.
Do some quick research to find out if there is a local foraging group in your city to join. The Meetup website lists foraging groups from dozens of cities, with over 53,000 foragers for you to join. These groups will allow you to ask more experienced foragers questions regarding ripeness, edibility, bylaws and possibly some secret spots!
#5. Be a Conscientious Urban Forager
While you can find a vast supply of edible plants in any urban area, be certain to responsibly harvest them. For example, only take about one-third of anything you find so that other foragers have a chance to harvest that plant, as well as giving it a chance to regrow.
Overharvesting can ruin great foraging spots and even affect food resources for some wildlife. When exploring your city for foraging options, remember to always ask permission before harvesting something from private property. If they haven’t already been harvested, most people will let you pick a few berries or leaves from their plants, since it keeps rotting fruit off their lawns and little critters out of their yards.
As a new urban forager, remember to always be able to identify what you want to harvest. Joining a local foraging group will give you an opportunity to learn from more experienced foragers, as well as finding good areas in your city to explore for edibles.
It’s clear that environmental concerns and climate change are at the forefront of political discourse in today’s society. However, you may be wondering how you can contribute to the environment by using more green energy. This article contains a number of tips to help you use green energy in your daily life.
If you are considering purchasing outdoor lighting look into solar lamps. This type of lamp is cost-effective and needs no power other than sunlight. This will save you a ton of energy. It also means you do not to have string up those outdoor lights.
Recycling is one of the easiest ways to make your home greener. Some towns automatically include the cost of recycling in their garbage collection, so look into this! If not, certain states, including Michigan, will pay consumers to return bottles after use. Recycling is one of the best ways to cut energy costs!
If you are planning on switching to green energy, it can seem too discouraging to jump in and do it all at once. While an entire home and land can be overwhelming, try narrowing your efforts to one room at a time. A good first step is a bedroom, where you can use solar power for just a reading lamp and a radio or alarm clock. Then work up from there!
Try to use cold water when washing clothes. Almost 90 percent of the energy consumed while washing clothes is spent heating the water. If your detergent is decent, cold water will clean your laundry as effectively as hot. Also, don't wash your clothes until you can completely fill the washer, as this saves energy over time.
If switching your home to solar power is beyond your financial capabilities, try switching just one room, like a bedroom, to solar power. There are solar kits available online that can help you green a room, and this will positively affect your energy bills and carbon footprint for years to come.
Buy a box of Ziplock quart size baggies and use these to make your own snacks. Whether you enjoy a bit of trail mix, Chex Mix, or a tasty muffin, you can use this bag and wash it when you get home to use the next day. Keep your snacks green by washing and reusing these baggies for your snacks until they are too worn.
Don’t try to install a wind generator on a small piece of property. First of all, you’ll likely get complaints from the neighbors, as an efficient wind turbine needs to be at least 30 feet off the ground. Secondly, you need about an acre of land in order to ensure an unobstructed prevailing wind.
As this article has previously discussed, green energy is a topic that is in the forefront of everybody’s mind in today’s society, as environmental concerns become even greater. Fortunately, there are plenty of things that you can do to use more green energy and live a more eco-friendly life. Apply this article’s advice and you’ll be on your way to green living.
Alpha-1 antitrypsin deficiency (AATD) is a hereditary, monogenic disorder with no unique clinical features. AATD can be difficult to diagnose, as patients commonly present with respiratory symptoms often mistaken for other respiratory syndromes such as asthma or smoking-related chronic obstructive pulmonary disease. In addition, symptoms related to AATD may also affect other organs, including the liver, vasculature, and skin. The severity of AATD varies between individuals, and in severe cases the irreversible lung damage can develop into emphysema. Early diagnosis is critical to enable the implementation of lifestyle changes and therapeutic options that can slow further deterioration of pulmonary tissue. Once AATD is suspected, a range of tests is available (serum alpha-1 proteinase inhibitor [A1-PI] level measurement, phenotyping, genotyping, gene sequencing) for confirming AATD. Currently, intravenous infusion of A1-PI is the only therapy that directly addresses the underlying cause of AATD, and it has demonstrated efficacy in a recent randomized, placebo-controlled trial. This review discusses the etiology, testing, and management of AATD from the allergist's and/or immunologist's perspective. It aims to raise awareness of the condition among physicians who care for people with obstructive lung disorders and are therefore likely to see patients with obstructive lung disease that may, in fact, prove to be AATD.
Original language: English (US)
Number of pages: 6
Journal: Journal of Allergy and Clinical Immunology: In Practice
State: Published - January 1, 2015
All Science Journal Classification (ASJC) codes: Immunology and Allergy
What Are the Benefits of Technology in Schools?
Technology offers students new and inviting ways to develop their skills, and those skills will benefit them well into the future as technology becomes central to how the next generation learns and works. Here are some top benefits of using technology in schools, from this sixth form in Somerset.
Increased engagement in lessons

Children already find technology interesting and engaging, and they are more likely to take on what they've been learning when it's involved. Interactive software and tasks where children complete work on computers will keep their retention high and motivate them to be more productive in class.
Increased inclusion among students
Students with exceptional circumstances are more likely to be engaged in work when technology’s involved, due to many accessible features computers, smartphones and tablets have. Word processors are able to help students understand grammar and spelling mistakes; spreadsheet technology will allow students to improve their math skills, and browsing the Internet can arm them with more educational resources.
Automation of assignments
Having the capabilities for teachers to upload assignments onto an online resource will give children a different way to look through homework and future course work. This also means that parents and students can look through work together at home and view past assignments to help them with future revision.
Arms students with the expertise for the future
The future is in technology, and there will be many more ways to develop a child’s repertoire when they’re working on future assignments. It prepares children for work if they enter university, teaches students about how to use computers and other technology in a future career and even helps children find their passion in interests like coding or technical engineering. The possibilities are endless!
I made a trip to my local Blue Seal store on Thursday to pick up dog food. As I walked through the door, the gentle peeping sounds of dozens of fluffy baby chicks filled the air.
I was just about to sneak my finger into one of the enclosures to pet their darling little heads, until I remembered a Tweet (no pun intended) I’d seen earlier in the week from Maine CDC.
Many Mainers purchase baby chicks in the spring, but even healthy-looking chicks and ducklings can carry Salmonella. I know, sorry to rain on your Easter parade. Kids in particular are at risk, since they’re more likely to handle chicks and then touch their mouths, plus their immune systems are still developing.
Salmonella bacteria carried in poultry intestines can contaminate their environment and the surface of their bodies, according to the U.S. CDC. Even if they look clean, feces might be lingering on their feathers and beaks. Not so cute now, huh?
In a recent blog post, health officials offered a few tips.
Keep kids from getting sick by making sure they:
- Don’t put their hands in their mouths after touching chicks
- Don’t kiss chicks on their beak or feathers
- Don’t handle or clean cages or food containers
- Don’t eat or drink near baby chicks
- Don’t put their mouths on objects that have been near chicks or their cages
Children younger than five should not handle baby chicks. If they do:
- Keep chicks out of the kitchen and other living areas
- Wash children’s hands thoroughly with plenty of running water and soap after contact with chicks
- Contact your health care provider or go to a clinic if your child has diarrhea or vomiting
For more information, visit http://go.usa.gov/mZF.
King of Burma
Prince of Pye
- Reign: 3 June 1661 – 14 April 1672
- Coronation: 7 September 1661 (Wednesday, full moon of Tawthalin 1023 ME)
- Consort: Khin Ma Latt
- Issue: 6 sons and 8 daughters, including Narawara
- Regnal name: Maha Pawara Dhamma Yaza Lawka Dipadi
- Father: Thalun
- Mother: Khin Myat Hset of Pinya
- Born: 26 May 1619 (Sunday, 14th waxing of Nayon 981 ME)
- Died: 14 April 1672 (Thursday, 2nd waning of Tagu 1034 ME)
- Burial: 15 April 1672
Pye Min (Burmese: ပြည်မင်း, pronounced [pjé mɪ́ɴ]; 26 May 1619 – 14 April 1672) was king of the Toungoo dynasty from 1661 to 1672. Pye Min was a son of King Thalun. During the reign of his brother Pindale, the Prince of Pyay (Prome) led the Burmese resistance against Southern Ming and Qing incursions. King Pindale, however, lost his popularity, and Pye was urged to take the throne. Pye staged a coup in 1661, overthrowing Pindale and crowning himself King of Ava. Pye was determined to reduce the power of the Yongli Emperor of the Southern Ming at Sagaing and held a conference of Chinese officials. Yongli suspected this was an assassination trick and instead ordered his armies to clash with the Burmese. However, the Chinese were largely decimated. In 1662, the Qing armies invaded Burma, and Pye Min decided to hand the last Ming emperor over to the Qing. The Yongli Emperor was carried out of Burma.
There was a Mon rebellion around Martaban in 1661, and in 1662 Lan Na was invaded by Siamese armies under King Narai, who held the city temporarily. The rest of his reign was largely uneventful, and Pye Min died in 1672, succeeded by his son Narawara.
The future king was born to King Thalun and a minor queen Khin Myat Hset of Pinya on 26 May 1619. The young prince received the title of Minye Kyawkhaung. He was appointed governor of Prome (Pyay or Pye) on 13 September 1650 (Tuesday, 4th waning of Tawthalin 1012 ME) by King Pindale.
- Hmannan Vol. 3 2003: 270
- Hmannan Vol. 3 2003: 250
- Hmannan, Vol. 3 2003: 285–287
- Maha Yazawin Vol. 3 2006: 212
- Maha Yazawin 2006: 215
- Kala, U (1724). Maha Yazawin (in Burmese) 1–3 (2006, 4th printing ed.). Yangon: Ya-Pyei Publishing.
- Royal Historical Commission of Burma (1829–1832). Hmannan Yazawin (in Burmese) 1–3 (2003 ed.). Yangon: Ministry of Information, Myanmar.
Pye Min (born 26 May 1619; died 14 April 1672)
- King of Burma, 3 June 1661 – 14 April 1672; preceded by Pindale, succeeded by Narawara
- Viceroy of Prome, 13 September 1650 – 3 June 1661 (the office previously held as Governor of Prome); succeeded by Minye Zeya Thura as Mayor of Prome
Kittens and puppies practise pouncing on moving objects, or will snatch at butterflies with their paws. Human children meeting a small animal usually react with a sense of wonder and admiration.
Humans do not have the same instincts as carnivores, which shows that we are not natural carnivores and that meat eating has somehow been acquired against our natural instincts. This would explain why many people who give up meat feel as though they have been liberated from a bad habit.
Children's diet, often based on widely advertised stodgy food, is generally considered to be unhealthy, but some schools have experimented in offering fruit to small children. Many of them never eat fruit, or only rarely, but most of them are happy to eat it when offered.
Other schools are running allotments for small children to work in and take an interest in how their food is grown, and to taste what is produced. This helps them to experiment and learn to love foods they had previously avoided.
In another school bigger children are learning to care for farm animals and even present them at shows. Although this is encouraging meat eating, it may at least lead them to treat farm animals with more respect than simply seeing them as food from the counter.
Teaching nutrition and cookery in schools may be one way of promoting healthier eating habits, but growing vegetables for themselves seems a much better way to get them to eat more fruit and vegetables.
This issue is full of contributions, ideas and questions from our readers. Contributions from readers are what makes this magazine more personal, varied, interesting and valuable to many vegans who never have contact with other vegans. So please write in with your ideas, experiences, and drawings, and continue to keep this magazine interesting and valuable.
SECTION 504 OF THE REHABILITATION ACT
Identification, Evaluation and Education
Section 504 of the federal Rehabilitation Act of 1973 reads: "No otherwise qualified handicapped individual in the United States, as defined in Section 705(20) of this title, shall, solely by reason of his handicap, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any program or activity receiving Federal financial assistance." (29 U.S.C. 794(a)) Identified individuals will be placed in the least restrictive educational environment. (34 C.F.R. 104.34(a))
A student is considered disabled and eligible for protection under Section 504 if he/she:
- Has a physical or mental impairment, which substantially limits one or more major life activities
- Has a record of such an impairment, or
- Is regarded as having such an impairment
A physical or mental impairment is: (1) any physiological disorder or condition, cosmetic disfigurement, or anatomical loss affecting one or more of the following body systems: neurological; musculoskeletal; special sense organs; respiratory, including speech organs; cardiovascular; reproductive; digestive; genito-urinary; hemic and lymphatic; skin; and endocrine; or (2) any mental or psychological disorder, such as mental retardation, organic brain syndrome, emotional or mental illness, and specific learning disabilities.
Examples of physical or mental impairments include, but are not limited to: mobility impairments, medical conditions such as epilepsy, hemophilia, diabetes, AIDS, arthritis, allergies/asthma, tuberculosis, cancer, spina bifida, cerebral palsy, and ADHD.
Substantially Limits: The term “substantially limits” is not defined under Section 504. However, an impairment that substantially limits one major life activity does not have to also limit another major life activity. Also, an impairment that is episodic or in remission is a disability if it would substantially limit a major life activity when active.
Major Life Activities: Major life activities are functions such as caring for one’s self, performing manual tasks, seeing, hearing, eating, sleeping, walking, standing, lifting, bending, speaking, breathing, learning, reading, concentrating, thinking, communicating and working.
Mitigating Measures Not Considered: The determination of whether impairment substantially limits a major life activity must be made without regard to the ameliorative effects of mitigating measures such as:
- Medication, medical supplies, equipment or appliances such as low-vision devices (which do not include ordinary eyeglasses or contact lenses), prosthetics including limbs and devices, hearing aids and cochlear implants or other implantable hearing devices, mobility devices, or oxygen therapy equipment and supplies;
- Use of assistive technology;
- Reasonable accommodations or auxiliary aids or services; or
- Learned behavioral or adaptive neurological modifications.
The term "auxiliary aids and services" includes:
- Qualified interpreters or other effective methods of making aurally delivered materials available to individuals with hearing impairments;
- Qualified readers, taped texts, or other effective methods of making visually delivered materials available to individuals with visual impairments;
- Acquisition or modification of equipment or devices; and
- Other similar services and actions.
Mitigating Measures Considered: Conversely, the ameliorative effects of the mitigating measures of ordinary eyeglasses or contact lenses must be considered in determining whether an impairment substantially limits a major life activity. Thus, if a student’s vision while using ordinary eyeglasses or contact lenses is not substantially limited, he or she would not qualify as a student with a disability under Section 504 on that basis.
Students who, because of a disability, need or are believed to need, accommodations to their educational program are referred for an evaluation. Such evaluation may include, but is not limited to, classroom observation, performance based testing, academic assessment information and any additional information offered by the parent or guardian. The school’s Student Study Team (SST) will review the information and determine whether the student qualifies as disabled under Section 504 and requires adjustments to his/her educational program. The SST is a group of persons knowledgeable about the child, the meaning of the evaluation data, and placement options. See Pre-referral chapter of this Handbook for more information regarding the SST process.
Referral, Assessment, and Evaluation Procedures
- The District will evaluate any student who, because of disability, needs or is believed to need reasonable accommodations to regular or special education and/or related aids and services to allow a student an equal opportunity to participate in school and school-related activities.
- Each school shall designate a Section 504 school site contact to be responsible for implementing referral, assessment, and evaluation procedures.
- A student may be referred by anyone, including a parent/guardian, teacher, other school employee, or community agency, for consideration as to whether the student qualifies as a student with disabilities under Section 504. This referral should be made to the site contact who will schedule a SST meeting.
- The District has the responsibility to ensure that students with disabilities are evaluated if there is a reason to suspect that they may qualify under Section 504. Therefore, it is important that students who are or may be disabled are referred to Section 504 school site contact so that the assessment process is initiated if determined appropriate. Some examples of students who should be considered for referral include the following:
- Students with medical conditions such as severe asthma, diabetes, AIDS, or heart disease.
- A student who uses a wheelchair or other mobility device.
- A student with a degenerative neurological disorder, a student who is missing a limb, or a student with other impaired manual skills.
- A student with poor or failing grades over a lengthy period of time.
- A student with frequent referrals for behavior problems.
- A student with a temporary medical condition due to illness or accident.
- The SST Team meeting initiated by the Section 504 school site contact will be composed of the student’s parents/guardians and other persons knowledgeable about the student (such as the student’s regular education teacher, school nurse, psychologist), the student’s school history, the student’s individual needs (such as a person knowledgeable about the student’s disabling condition), the meaning of evaluation data, and the options for placement and services.
- The SST Team shall promptly consider the referral and determine what assessments are needed in all suspected areas of disability to evaluate whether the student is a student with a disability under Section 504 and what special needs the student may have. Staff must use the Permission for Student Data Form. Additionally, staff will send out the Request for Teacher Information form to obtain written feedback/input from the student’s current classroom teacher(s).
- The parents/guardians shall be given an opportunity in advance of the Section 504 team meetings to examine assessment results and all other relevant records.
- If a parent request for evaluation is denied, the Section 504 District Coordinator will inform the parents/guardians in writing of this decision and of their procedural rights.
- After an assessment is completed, the Section 504 team will convene a meeting to review and consider the results of the assessment. The Section 504 Team will be composed of the student’s parents/guardians, or other individuals holding educational rights, and other persons knowledgeable about the student (such as the student’s regular education teacher, school nurse, psychologist).
Individual Section 504 Accommodation Plan
- When a student is identified as disabled within the meaning of Section 504, the Section 504 Team shall determine what services are necessary to ensure that the student’s individual educational needs are met as adequately as the needs of non-disabled students.
- The team responsible for making the placement decision shall include the parents/guardians and other persons knowledgeable about the child who can interpret evaluation data, and identify placement options.
- For each identified disabled student, the district will develop a Section 504 Accommodation Plan describing the student’s disability and the reasonable accommodations to the regular or special education and/or related aids and services needed in order to allow the student an equal opportunity to participate in school and school-related activities. The Section 504 Plan will specify how the accommodations will be provided to the disabled student and by whom. The Section 504 Plan will also identify the person responsible for ensuring that all the components of the Section 504 Plan are implemented.
- The student’s teacher and any other staff who are to provide services to the student or who are to make modifications in the classroom for the student shall be informed of the services or modifications necessary for the student and provided a copy of the Section 504 Plan. A copy of the Section 504 Plan shall be kept in the student’s cumulative file in a manner that limits access to those persons involved in the Section 504 process and/or the provision of services and modifications.
- The disabled student shall be placed in the regular education environment unless it is demonstrated that the student’s needs cannot be met in the regular education environment. The disabled student shall be educated with students who are not disabled to the maximum extent appropriate to his/her individual needs.
- The referral, assessment, evaluation, and placement process will be completed within a reasonable time from receipt of the parents’ consent to the evaluation. It is generally not reasonable to exceed 60 calendar days in completing this process.
- The parents/guardians shall be notified in writing of the final decision concerning the student’s identification as a person with disabilities, the reasonable accommodations to be provided, if any, and of the Section 504 procedural safeguards, as described below, including the right to an impartial hearing to challenge the decision.
Review of Student Progress
- The District staff shall monitor the progress of the disabled student and the effectiveness of the student’s Section 504 Plan. According to the review schedule set out in the student’s Section 504 Plan, the District staff including persons knowledgeable about the child shall periodically determine whether the reasonable accommodations are appropriate.
- If the student moves to a different site, it will be the responsibility of the site contact to inform the new school within the district of the active Section 504 Plan.
- The Laguna Beach Unified School District has designated the following person as its Section 504 Compliance Officer: Deni Christensen, 550 Blumont Street, Laguna Beach, CA, 92651, (949) 497-7700. The Section 504 Compliance Officer is responsible for addressing complaints regarding the identification, evaluation, or educational placement of a student with a disability under Section 504 and complaints alleging discrimination or harassment of a student based on his/her actual or perceived disability.
- Parents/guardians shall be notified in writing of all District decisions regarding the identification, evaluation, or educational placement of students with disabilities or suspected disabilities. This notice will also be provided to students who are entitled to these rights at age 18. Notifications shall include a statement of their rights to:
- Examine relevant records
- Have an impartial hearing with an opportunity for participation by the parents/guardians and their counsel.
- Seek review in federal court if the parents/guardians disagree with the hearing decision.
- Notifications shall also set forth the procedures for requesting an impartial hearing. Written requests shall be made to Deni Christensen, 550 Blumont St. Laguna Beach, CA 92651. (949) 497-7700.
- Deni Christensen, Laguna Beach Unified School District Section 504 Compliance Officer, shall maintain a list of impartial hearing officers who are qualified and willing to conduct Section 504 hearings. To ensure impartiality, such officers shall not be employed by or under contract with the district or the County Office of Education in any capacity other than that of hearing officer and shall not have any professional or personal involvement that would affect their impartiality or objectivity in the matter.
- If a parent/guardian disagrees with the identification, evaluation, or educational placement of a student with disabilities under Section 504, he/she may request a hearing before an impartial hearing officer. The parent/guardian shall set forth in writing his/her request for a hearing. A request for hearing should include:
- The specific decision or action with which the parent/guardian disagrees.
- The changes to the Section 504 Plan the parent/guardian seeks.
- Any other information the parent/guardian believes is pertinent.
- Within 30 calendar days of receiving the student’s Section 504 Accommodation Plan, a parent/guardian may set forth in writing his/her disagreement and request that the school principal review the 504 Plan in an attempt to resolve the disagreement. This review shall be held within 14 school days of receiving the request and the parent/guardian shall be invited to attend the meeting at which the review is conducted. However, the timeline for the hearing shall remain in effect unless it is extended by mutual written agreement of the parent/guardian and the District.
- If a disagreement continues, a parent/guardian may request a meeting with the District’s 504 Compliance Officer to review the student’s Section 504 Accommodation Plan. This review shall be held within 14 school days of receiving the request and the parent/guardian shall be invited to meet with the District’s 504 Compliance Officer to discuss the review. However, the timeline for the hearing shall remain in effect unless it is extended by mutual written agreement of the parent/guardian and the District.
- Within 20 school days of receiving the parent/guardian’s request for hearing, the Superintendent or designee shall select an impartial hearing officer. These 20 days may be extended for good cause or by mutual agreement of the parent/guardian and the District.
- Within 45 school days of the selection of the hearing officer, the hearing shall be conducted and a written decision mailed to all parties. These 45 days may be extended for good cause or by mutual agreement of the parent/guardian and the District.
- The parent/guardian and the District shall be afforded the rights to:
- Be accompanied and advised by counsel and by individuals with special knowledge or training related to the individual needs of students who are qualified as disabled under Section 504.
- Present written and oral evidence.
- Question and cross-examine witnesses.
- Receive written findings by the hearing officer.
- If desired, either party may seek a review of the hearing officer’s decision by a federal court. The decision shall be implemented unless the decision is stayed, modified, or overturned by a court.
- File a complaint with the Office for Civil Rights if you believe the District has not acted in compliance with the law. The Regional Office that covers Southern California is:
OFFICE FOR CIVIL RIGHTS, REGION IX
U.S. Department of Education
50 Beale Street, Suite 7200
San Francisco, CA 94105
May Night salvia, also known as Mainacht, is a perennial that is cold hardy as far north as USDA Hardiness Zone 5. The dark green, leafy base of salvia produces upright spikes of violet-blue blooms beginning as early as May and extending into August. Snipping off spent blooms encourages new blooms. Salvia May Night can be used as a border or grouped in the flower garden. The plant attracts bees and butterflies, and is rabbit and deer resistant. Plant May Night salvia as you would other sun-loving perennials.
Choose a sunny, well-drained location for salvia May Night. The plant can grow up to 24-inches tall and wide. If planting more than one May Night, plan for about 18 inches between plants.
Dig a hole twice as wide and deep as the May Night plant container. Place the removed soil on a tarp or in a wheelbarrow.
Mix about 25 percent organic matter, like sphagnum peat moss or compost, with the removed soil.
Remove the potted May Night from the container. If roots are wrapped around the outside of the root ball, use your fingers to unwrap the roots or use a utility knife to make about six cuts evenly spaced down the sides of the root ball and about 1/2 inch deep to free the roots.
Backfill the hole partially so the root ball sits on the bottom with the top of the root ball level with the ground. Continue to backfill the hole halfway up the root ball.
Water around the root ball to settle the soil and then finish backfilling the hole. Water again.
Apply 2 to 3 inches of mulch, like wood chips, over the worked soil. Keep the mulch 2 inches from the stem of the plant.
Water every seven to 10 days if there is no rainfall.
Things You Will Need
- Tarp or wheelbarrow
- Organic matter
- Utility knife (optional)
- Deadheading, removing spent blooms, can be achieved with garden clippers or by hand. Snip or pinch the stem about 2 inches below the dead bloom.
Advanced Art students practiced the techniques of "one stroke" painting, using resource materials by popular decorative painters, such as Donna Dewberry and Priscilla Hauser.
Additional resources included a wide variety of books on decorative and "folk art" painting, and an attractive display was set up in the classroom by a parent who works in this art style.
Many students--who are highly skilled in realistic drawing and painting--found this to be a surprisingly difficult technique to master! They first practiced the brushstrokes on clear acetate sheets (which were washed off and reused) and then on paper to learn how to accurately "double load" their brushes and use the correct amount of "floating medium."
For their final project in this unit, students had to create a gradient background--in a color of their choice--on an 8x10 canvas panel. To demonstrate their understanding of the technique, they had to paint at least three different "one stroke" flowers. They were given the option of adding more traditionally painted shades, shadows and highlights, though few students chose to do so.
In addition to learning about the history of decorative painting (which included an overview of Norwegian "rosemaling" and French "tole painting"), these students developed a new appreciation for the considerable talents and skills of accomplished "craft" and "folk" artists!
With obesity rates and obesity-related ailments on the rise, engaging in regular physical activity is essential to warding off diseases and weight gain. In the August 2007 issue of the journal “Circulation,” researchers from the American Heart Association and the American College of Sports Medicine recommended that women and men engage in cardiovascular activity at least five days per week and strength training a minimum of two days per week. Use these guidelines when establishing a weekly exercise plan.
Monday -- Cardio
Begin the week by performing cardiovascular activity. You may select from a wide variety of exercises based on your desired intensity level. The recommendation outlined by the AHA and ACSM suggests performing 30 minutes of moderately intense or 20 minutes of vigorously intense activity. Examples of moderately intense activity include brisk walking, water aerobics and riding a stationary bike with little-to-no resistance. Vigorously intense activities may include jogging, playing high-activity sports such as tennis or soccer, and riding a stationary bike with added resistance. The most important rule in selecting an activity is choosing one you truly enjoy.
Tuesday -- Strength Training
Engage in full-body strength training on Tuesday. Strength training not only promotes muscle growth, but also aids in weight loss, increases bone mineral density and reduces pain associated with arthritis. Weightlifting exercises may be executed using weight machines, free weights such as barbells or dumbbells, resistance bands or your own body weight. It is possible to effectively fatigue muscle groups at home or at the gym. Although there are many weightlifting programs, the American College of Sports Medicine recommends performing between eight and 10 individual exercises that isolate each major muscle group. Perform one set of each exercise and lift the weight for eight to 12 repetitions. Choose a weight that will fully exhaust your muscles by the last repetition.
Wednesday -- Cardio
The third day of your weekly exercise plan involves performing cardiovascular activity. To keep from getting bored or burned out, select a different activity than what you did on Monday. For example, if you went jogging on Monday, ride a bicycle or engage in step aerobics on this day. Before increasing the intensity level of any cardiovascular exercise, warm up for five to 15 minutes. Likewise, cool down for five to 15 minutes at the end of the exercise.
Thursday -- Weightlifting
Today is the second weightlifting day out of the week. You may follow the same routine as on Tuesday or you may create a new routine with different exercises. Try switching up exercises by engaging in different modes of weightlifting. For example, perform a set of dumbbell bench presses followed by a set of pushups to fatigue the chest muscles.
Friday -- Cardio
Perform the recommended minutes of cardiovascular activity. Contrary to popular belief, do not perform cardiovascular activity in the morning without eating. Doing so may result in a lack of energy or lightheadedness, says MayoClinic.com. Consume 75 to 100 grams of complex carbohydrates three hours before starting aerobic exercise.
Saturday -- HIIT
This is the last cardiovascular activity day of the week. On Saturday, attempt a more challenging activity such as high-intensity interval training, or HIIT. Using any mode of aerobic activity, such as jogging, bicycling or swimming, warm up the body for five minutes, then increase the intensity level to a seven out of 10, with 10 being the highest level of exertion. Hold this intensity level for 60 seconds and reduce your pace to an intensity level of five out of 10 for two minutes. Repeat for a total of nine intervals, including the five-minute warmup and cool-down intervals.
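The interval arithmetic above is easy to misread, so here is one possible reading sketched in Python. The interpretation is an assumption, not part of the original plan: nine intervals in total, with the first and last being the five-minute warmup and cool-down, leaving seven work/recovery cycles in between.

```python
# Sketch of the Saturday HIIT session under the stated assumption.
def hiit_schedule(work_s=60, recover_s=120, total_intervals=9,
                  warmup_s=300, cooldown_s=300):
    """Return a list of (phase, seconds) tuples for the session."""
    schedule = [("warmup", warmup_s)]
    # The warmup and cool-down count toward the nine intervals,
    # so seven work/recovery cycles remain.
    for _ in range(total_intervals - 2):
        schedule.append(("work @ RPE 7/10", work_s))
        schedule.append(("recover @ RPE 5/10", recover_s))
    schedule.append(("cooldown", cooldown_s))
    return schedule

session = hiit_schedule()
total_min = sum(sec for _, sec in session) / 60
print(total_min)  # 31.0 minutes under these assumptions
```

Under this reading the whole session takes about half an hour; a different reading of "nine intervals" would change the total accordingly.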
- Circulation: Physical Activity and Public Health: Updated Recommendation for Adults from the American College of Sports Medicine and the American Heart Association
- Centers for Disease Control and Prevention: How Much Physical Activity Do Adults Need?
- ACSM Health & Fitness Journal: ACSM Strength Training Guidelines: Role in Body Composition and Health Enhancement
- American Council on Exercise: Strength Training 101
- Straightforward Fitness: ACSM Cardio Guidelines
- MayoClinic.com: Eating and Exercise: 5 Tips to Maximize Your Workouts
- American Fitness Professionals & Associates: Endurance Nutrition: How and What to Eat Before, During and After Exercise? Pre-Event Meal Warning: Eat 3 Hours Before Exercise
- American Council on Exercise: High-Intensity Interval Training
On this day in 1943, a dog becomes a World War II hero. Chips was a German shepherd-collie-husky mix who’d traveled with the U.S. Army from New York to Europe.
His family knew they had a special dog on their hands. When the Army put out a call for good dogs who could serve on sentry or patrol duty, the Wren family donated Chips.
“It killed my mother to part with him,” John Wren later said. He was a mere toddler when Chips left for war. “But Chips was strong and smart, and we knew he’d be good.”
In the end, Chips was gone for more than 3 years. During that time, he traveled the world, serving in North Africa, Italy, and France, among other places. He even met British Prime Minister Winston Churchill and U.S. President Franklin D. Roosevelt.
That’s because he was serving as a guard dog during the Casablanca Conference in January 1943.
Yet Chips is most remembered for his actions on July 10, 1943, as Allied forces began their invasion of Sicily. Chips was with his handler, Pvt. John Rowell, when their squad became pinned down by fire from an Italian machine-gun nest. The determined dog broke free from Rowell and charged.
Our soldiers watched Chips disappear, then they heard a shot ring out. “There was an awful lot of noise,” Rowell later said, “and the firing stopped. Then, I saw one Italian soldier come out the door with Chips at his throat. I called him off before he could kill the man.” Soon three other Italian soldiers emerged with their hands in the air.
Chips had single-handedly forced their surrender.
The encounter left the brave dog with a wound to the scalp and burns around his mouth and eye. Chips didn’t seem to notice. Later that day, he sniffed out 10 enemy soldiers, forcing their capture.
Chips was awarded a Silver Star for his heroism that day. He was also recommended for a Distinguished Service Cross and a Purple Heart, but neither of those would come to be. Some people were objecting to an animal getting such awards. In the end, the Army even reversed the award of the Silver Star.
Nevertheless, Chips received another honor before he returned home. He’d already met Churchill and Roosevelt. Now he met General Dwight Eisenhower, too. The future President reportedly bent down to pet the heroic dog, forgetting that a sentry dog would be trained to bite anyone but his handler.
Let’s just say that Chips didn’t make an exception for the General.
Finally, Chips came home. “I mostly remember when I saw him in the cage,” John Wren later remembered, “and realized that was my dog coming home. I was quite excited, as was everybody.”
Chips had one last act of heroism in him. He hadn’t been home for more than a few months when he’d saved little John’s life.
“My mother told me the story about how we were all at Quogue Beach one day,” Wren recalled, “and I wandered out to the water. Suddenly the undertow took me under, and Chips was the only one who saw it happen. He ran into the water and pulled me out by my swim trunks. He was quite an animal.”
Sadly, Chips didn’t get to enjoy too much of his retirement. Seven months after he came home, he passed away of kidney failure. His family would later receive one last honor on his behalf, though. Just a few years ago, Chips was awarded the Dickin Medal, which is the highest honor for an animal’s wartime bravery.
John Wren flew to London, England, to receive the award on Chips’s behalf.
“[I]t really made me feel great to see him finally receive some recognition as a special creature,” Wren concluded, “which, in our view, he was.”
Both fell to Earth in the general vicinity of "hard" sci-fi, investigating futures that were different yet recognisable. They explored the expansion of the human species into outer space, the technical challenges and the social evolution this grand project would require.
In Stephenson's novel, the impetus comes from a cosmic disaster that befalls Earth, before which a tiny fraction of humanity is sent into extraplanetary orbit to form a surviving community.
In Robinson's narrative, in a more distant future, an interstellar "habitat", a kind of high-tech ark, is sent to the distant "solar system" of Tau Ceti to settle the Earth-like planet Aurora, which has an atmosphere, water, gravity, and temperatures within the range of human tolerance.
Of course, the time scale is enormous and the process takes centuries, with generations living their entire lives on the ship.
The question is posed: how does a multigenerational human community survive or thrive in a hostile, literally anti-life environment, and what does that mean for the individuals and societies involved?
While these may seem rarefied concerns, this is by no means the case.
There is discussion in real-world technological circles about the need for humanity to spread out from Earth, for a variety of reasons, from a "civilisational backup" in case of planetary disaster, simply for economic purposes, or from the gloomy prognosis that the damage occurring to our living planet is probably terminal, which calls for a Plan B.
In this way, the best contemporary science fiction - futuristic, or speculative - is engaging with concepts in a more urgent, intellectually intense way than so-called "literary fiction".
Aurora is not a short novel. It is dense, at times reflective and meandering.
It comprises several distinct parts, from life aboard the space habitat and its network of miniature environments called "biomes", to landfall on Aurora and the catastrophic failure of the mission, to the decisions that stem from this disaster.
The scope of Aurora is broad, even profound.
From the technical issues of maintaining an ecological balance in the closed loop of the biomes, to the political and ethical consequences of this contained and compressed existence, through to the development of the artificial intelligence "mind" of the Ship, which develops over centuries of interactions with humans from a supercomputer to a sentient being.
There are no simple or easy answers in this substantial novel to the questions it puts forward.
For all the strangeness of the setting, these are familiar questions: the meaning of existence, our responsibility to our descendants, the twin faces of human community and violent confrontation, and the uneasy tension between the intellectual ability of our tool-making species and the subjective experiences of our emotional, sensory and spiritual life.
The conclusion that Aurora suggests is that human life is made for Earth, specifically.
While the exploration, and perhaps settlement, of extraterrestrial environments is an inevitability, there is only one "home" for humanity.
While this is a novel of big ideas, set in the incomprehensible reaches of deep space, the central investigation is of the flawed lives and intricate web of relationships of people in circumstances not of our making, our shared human condition.
Victor Billot is editor of The Maritimes, the magazine of the Maritime Union.
The accounting system in India is undergoing a significant change. With the notification of the Companies (Indian Accounting Standards) Rules 2015, the Ministry of Corporate Affairs in India converged the Indian Accounting Standards (Ind AS) with International Financial Reporting Standards (IFRS), applied in a phased manner from 1 April 2016, beginning with large companies whose net worth was equal to or exceeded INR 5 billion, followed by implementation for smaller companies with net worth between INR 2.5 billion and 5 billion thereafter. Among other accounting standards, the financial instruments standards Ind AS 32, 109 and 107, which define, recognise, measure and specify disclosure norms for financial instruments including financial derivatives, were introduced. Warren Buffett very famously called derivatives "financial weapons of mass destruction," and, giving credence to his views, time and again financial as well as non-financial firms in India and around the world have sustained losses due to the usage of financial derivatives. Over the years, the capital markets have changed, and business models have become more challenging, with complex sources of risk and uncertainty that have transformed risk management into a sophisticated art. This complex and ever-changing business environment has brought to the fore the necessity and importance of developing reliable and relevant disclosure norms to help protect all stakeholders, as derivatives, due to their underlying complex nature, can be a significant source of systematic risk. This is also reiterated by shareholders and investors stepping up the demand for increased financial disclosure. This empirical study models the factors that determine financial derivative disclosure by Indian non-financial firms. The study develops a self-constructed, unweighted Financial Derivative Disclosure Index (FDDI) to measure derivative disclosure.
The sample comprises companies from the Nifty 50, excluding banking and financial services companies. Using a multiple regression model, the study identifies the corporate governance factors that determine derivative disclosure: the use of derivatives, firm size, foreign income, the presence of a risk management committee, institutional shareholding, and a binary variable for family business. The results show that stewardship theory explains the determinants of financial derivative disclosure in the Indian context: promoters act as stewards and guide their firms to improve their financial derivative disclosures.
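The regression setup described above can be illustrated with a toy ordinary-least-squares fit. Everything below (variable names, data, coefficients) is hypothetical and only mirrors the shape of such a model, not the study's actual data or results:

```python
# Toy OLS illustration of a disclosure-index regression, e.g.
# FDDI_i = b0 + b1*deriv_use_i + b2*log(size_i) + ...  (all hypothetical).
# Solves the normal equations (X'X)b = X'y with Gaussian elimination.

def ols(X, y):
    n, k = len(X), len(X[0])
    # augmented normal-equations matrix [X'X | X'y]
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         + [sum(X[r][i] * y[r] for r in range(n))] for i in range(k)]
    # forward elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k + 1):
                A[r][c] -= f * A[col][c]
    # back substitution
    b = [0.0] * k
    for i in reversed(range(k)):
        b[i] = (A[i][k] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

# tiny synthetic sample: columns are intercept, derivative-use dummy, log(size)
X = [[1, 1, 5.0], [1, 0, 4.0], [1, 1, 6.0], [1, 0, 3.5], [1, 1, 5.5]]
y = [0.8, 0.3, 0.9, 0.2, 0.85]
coefs = ols(X, y)
print([round(c, 3) for c in coefs])
```

A real replication would use a statistics package and the paper's actual disclosure-index data; this sketch only shows the mechanics of the fit.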
|
<urn:uuid:e831e06e-92e5-48c7-8eb2-0b77e05940d8>
|
CC-MAIN-2020-10
|
https://ro.uow.edu.au/aabfj/vol12/iss3/5/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144429.5/warc/CC-MAIN-20200219214816-20200220004816-00026.warc.gz
|
en
| 0.959863 | 441 | 2.609375 | 3 |
The Prentice Hall Essence of Computing Series provides a concise, practical and uniform introduction to the core components of an undergraduate computer science degree. Acknowledging recent changes within Higher Education, this approach uses a variety of pedagogical tools, case studies, worked examples and self-test questions to underpin the students' learning. This book is a concise introduction to formal logic. Written for undergraduates, it makes no excessive demands on previous mathematical knowledge, requiring little maturity in mathematical thinking. Its main objective is to prepare the reader for the analysis and application of techniques of logic in computing. A wide range of topics in mathematical logic is covered, with each new idea introduced in a gentle yet brisk fashion that quickly leads to the development of important skills. Beginning with truth tables, the reader is introduced to the concepts of Boolean algebra and thus to logical propositions and truth values. Propositional logic is explored through the methods of semantic tableaux, natural deduction and the sequent calculus. More formal axiomatic systems are examined and illustrated with some important theorems about such systems. The properties of soundness, completeness and consistency are explained in terms of propositional systems. Resolution is presented for propositional logic in preparation for an understanding of its use in computer science. The book then turns to first-order predicate logic, revising the now familiar topics of deduction and semantic tableaux, as well as soundness, completeness and consistency. Resolution is re-examined, and the application of first-order predicate logic in computing is investigated.
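The book's opening topic, truth tables, lends itself to a short illustration. The sketch below is our own, not from the book: it enumerates a four-row truth table to verify that an implication is logically equivalent to its contrapositive.

```python
from itertools import product

# Truth-table check that (p -> q) is equivalent to its
# contrapositive (~q -> ~p).
def implies(a, b):
    return (not a) or b

rows = []
for p, q in product([False, True], repeat=2):
    lhs = implies(p, q)            # p -> q
    rhs = implies(not q, not p)    # ~q -> ~p
    rows.append((p, q, lhs, rhs, lhs == rhs))
    print(p, q, lhs, rhs)

# the two columns agree on every row, so the formulas are equivalent
assert all(r[-1] for r in rows)
```

The same enumeration pattern extends to any propositional formula; semantic tableaux and resolution, covered later in the book, are more efficient alternatives to brute-force tables.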
- Language : English
- Contents :
- Truth Tables.
- Semantic Tableaux.
- Natural Deduction.
- Axiomatic Propositional Logic.
- Resolution in Propositional Logic.
- Introduction to Predicate Logic.
- An Axiomatic Approach to Predicate Logic.
- Semantic Tableaux in Predicate Logic.
- Resolution in Predicate Logic.
- Publication date : 1997
- Edition : 1
- Format : Paperback
- Number of pages : 280
- ISBN : 9780133963755
|
<urn:uuid:838ea557-08f6-44b7-88c8-4bceab0dfd0f>
|
CC-MAIN-2017-39
|
http://www.okian.ro/the-essence-of-logic-9780133963755.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818693866.86/warc/CC-MAIN-20170925235144-20170926015144-00259.warc.gz
|
en
| 0.876105 | 447 | 3.40625 | 3 |
In addition, users can make comments on annotated parts of an article allowing virtual conversations to take place right in the article. This video (courtesy of Will Richardson) shows user conversations about highlighted areas of an article. We all need to learn and improve our craft. Tech tools such as Diigo allow learning and collaboration to fit into our busy schedules by cutting the fat and getting right to the meat.
Concise Professional Development with Diigo
Education is a busy business. Ask your teachers to participate in a book study and watch the eyes roll to the sky. It's not you, it's not your book; it's a time thing. Educators feel slammed during the year, but Diigo offers a great solution. With a Diigo account, you can annotate articles with highlights and sticky notes and share the annotated articles with your colleagues, or have them create Diigo accounts and form a group, which makes sharing very simple.
The real beauty of Diigo is the ability to grab just the most important sentences and paragraphs (using the browser extension), which allows you to share the most important, actionable parts of a given article. The selected text can be emailed or shared in groups, creating a concise, digestible form of professional development for departments, administrators, or the entire staff. Obviously, this tool can be used in the classroom too!
|
<urn:uuid:0563b3f0-277c-490e-9715-e545f4fbefda>
|
CC-MAIN-2017-47
|
http://www.educationshift.net/2013/08/concise-professional-development-with.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804125.49/warc/CC-MAIN-20171118002717-20171118022717-00548.warc.gz
|
en
| 0.933022 | 272 | 2.65625 | 3 |
The healthcare industry has been leveraging smartphone technology for years, from monitoring disease symptoms to curbing the opioid crisis, often through mobile applications. But now, a research team from the University of Michigan Kellogg Eye Center has modified a smartphone’s camera to help detect the most common form of diabetes-related eye disease.
Currently, diabetic retinopathy is diagnosed by an ophthalmologist who analyzes a retinal image taken by a retinal camera. Not only does this process require expensive equipment and special training, but it can also take nearly a week for the ophthalmologist to interpret the images and decide if a patient has diabetic retinopathy. If caught too late, patients can go blind from the disease.
“To make screening truly accessible, we need to provide on-the-spot feedback, taking the photo and interpreting it while the patient is there to schedule an eye appointment if necessary,” said Dr. Yannis Paulus, a Kellogg vitreoretinal surgeon, in a news release.
Dr. Paulus and his team combined RetinaScope, a device designed in-house that turns a smartphone into a retina camera, with EyeArt, an artificial intelligence (AI) platform developed by Eyenuk, into a diagnostic device that takes a retinal image, analyzes it, then informs the operator if the patient should be referred to an ophthalmologist for follow-up.
The team tested the device for its ability to tell if someone has diabetic retinopathy (sensitivity) and its ability to tell if someone does not have diabetic retinopathy (specificity). To determine if the test was accurate, they compared the device results with a gold standard technique and also brought in two independent experts to examine the device-captured retinal images.
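Sensitivity and specificity fall out directly from a confusion matrix. The counts in the sketch below are hypothetical (chosen only to illustrate the arithmetic, since the study reports rates rather than raw counts):

```python
# Sensitivity and specificity from confusion-matrix counts.
# tp/fn/tn/fp values here are made up for illustration.

def sensitivity(tp, fn):
    return tp / (tp + fn)   # true-positive rate: catches those with disease

def specificity(tn, fp):
    return tn / (tn + fp)   # true-negative rate: clears those without it

tp, fn, tn, fp = 33, 5, 22, 8   # hypothetical screening outcomes
sens = sensitivity(tp, fn)
spec = specificity(tn, fp)
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}")
```

With these example counts the formulas yield roughly 86.8% sensitivity and 73.3% specificity, the same scale of numbers reported for the device.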
Based on a study of 69 adults with diabetes, the device scored 86.8 percent on sensitivity and 73.3 percent on specificity. Not only is this level of sensitivity greater than the recommended minimum for an ophthalmic screening device, but it is also close to what was achieved by a similar device developed by the biotech company IDx, which was cleared by the US Food and Drug Administration (FDA) last year. Unlike Dr. Paulus' device, IDx-DR uses a fundus camera specifically designed to photograph the retina rather than a modified smartphone camera.
“This is the first study to combine the imaging technology with automated real-time interpretation and compare it to gold standard dilated eye examination,” said Dr. Paulus. “And the results are very encouraging.”
The results of the study were presented at the Association for Research in Vision and Ophthalmology annual meeting. Dr. Paulus says his team is working on improvements to both the hardware and software components of the diagnostic in pursuit of FDA approval.
The researchers hope that the convenience of their device will encourage more patients to get screened for diabetic retinopathy. Given that the Centers for Disease Control and Prevention (CDC) predicts that 16 million people will be living with diabetic retinopathy by 2050, the need for more reliable screening tools has never been greater.
|
<urn:uuid:f9eb907b-e6b8-4b00-b46f-723671d63078>
|
CC-MAIN-2020-29
|
https://xtalks.com/ai-smartphone-as-diabetic-retinopathy-diagnostic-tool-1891/?shared=email&msg=fail
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657151761.87/warc/CC-MAIN-20200714212401-20200715002401-00241.warc.gz
|
en
| 0.948984 | 651 | 3.09375 | 3 |
Secure Delete | Erase and Wipe
These articles discuss the concept of secure deletion and why it is important.
What is Secure Deletion?
You might be wondering what all the fuss is about. Why should you erase your hard drive instead of just deleting the data? Isn’t that good enough? And, what’s with all those different terms?
To learn more about this, read our helpful article, Secure Delete versus Delete – What’s the difference?
Why to Secure Delete (erase) instead of delete
That might get you thinking that you should investigate this issue further. Here are some articles on the subject:
- Beware of Data Dumpster Divers “Some 30 percent of businesses in the UK leave data, some of it sensitive, on their PCs when they dispose of them…”
- Dead disks yield live information “Identity thieves are gleaning personal information from scrapped computers. Peter Warren reports on just how insecure our sensitive data really is”
- Deleted files can be recovered “Many computer users, including some who should know better, are unaware that deleted files can be recovered — undeleted — and can yield information which can be used against the person who deleted them. This information can be as common as a deleted email message or as important as sensitive business records or government transactions..”
- SSDs difficult to wipe securely, researchers find
Should YOU Secure Delete?
- Take the Test: Do you Need a File Shredder? (Mireth Technology). Secure Delete is not just for the paranoid. At least at disposal, most people should be erasing and not just deleting their data. Most people might include you. To find out if you need a secure deletion app, take this two minute test.
There are some government as well as some de-facto standards for overwriting data. Here are some articles about them.
- Secure Deletion of Data from Magnetic and Solid-State Memory (Peter Gutmann). “To gain access to sensitive data, one avenue of attack is the recovery of supposedly erased data from magnetic media or random-access memory. This paper covers some of the methods available to recover erased data and presents schemes to make this recovery significantly more difficult.”
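As a minimal illustration of the overwriting approach these standards describe, the sketch below overwrites a file with random data before unlinking it. The function name is ours, not a real tool's API, and the caveats from the articles above apply: on SSDs and journaling filesystems, in-place overwriting gives no guarantee, because wear levelling and journals may retain old copies of the data.

```python
import os
import tempfile

# Minimal multi-pass overwrite sketch. Real secure-deletion tools do
# considerably more (handling filenames, slack space, free space, etc.).

def overwrite_and_delete(path, passes=3):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # fresh random data each pass
            f.flush()
            os.fsync(f.fileno())       # force the pass out to disk
    os.remove(path)

# demo on a throwaway temp file
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"sensitive data")
overwrite_and_delete(path)
print(os.path.exists(path))  # False
```

This is a sketch of the technique only; anyone with real sanitization requirements should use a purpose-built tool or whole-disk encryption plus key destruction.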
Browser History Erasers | Cache Cleaners
These articles discuss the application of secure deletion to the browsing data created when you surf the internet or use email.
- Internet Track FAQ (Mireth Technology). If you’re concerned about the files left on your computer when you use the internet or just want to understand the issue, this FAQ offers a straightforward explanation of what internet tracks are, why you might be concerned about them and what to do if you are.
- Take the Test: Do you Need an Internet Eraser? (Mireth Technology). Scare tactics are sometimes used to sell customers on the idea of internet erase software. Well, don’t let them scare you. While it’s clear that some people need to erase browsing data, you might not be one of them. To determine whether you need to erase your internet tracks, take this two minute test.
|
<urn:uuid:c8513a50-5e86-4113-95fd-67511e7e09fe>
|
CC-MAIN-2020-05
|
http://mireth.com/resources/mac-privacy-wipe-erase-secure-delete-hard-drive/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251700675.78/warc/CC-MAIN-20200127112805-20200127142805-00163.warc.gz
|
en
| 0.927522 | 654 | 2.90625 | 3 |
May Is Melanoma/Skin Cancer Detection and Prevention Month: Here’s What You Need to Know
It is often said that an ounce of prevention is worth a pound of cure. This old saying remains true as far as the detection and treatment of melanoma are concerned. According to The Skin Cancer Foundation, one person dies of melanoma every 52 minutes.
Early detection can greatly improve your prognosis. While prevention is always preferred, the good news is that once the condition is detected early and subsequently treated, the melanoma has a much higher chance of being cured.
Preventing Melanoma: What You Need to Know
Since the leading known cause of melanoma is direct exposure to the sun's ultraviolet (UV) rays, limiting that exposure is the best place to start your prevention efforts. In this case, prevention can be as simple (and as difficult) as adjusting your daily habits. Here are melanoma prevention tips you should practice in your everyday life:
- Staying out of the sun as much as possible, to avoid direct contact with UV rays.
- Avoid using tanning beds to decrease your direct contact with UV rays.
- Wear at least SPF 30 when exposed to the sun and reapply every 1-2 hours.
- Raising your level of self-awareness by regularly inspecting your own skin as best as possible for any apparent abnormalities. (Monthly self-examinations of moles.)
- Everyone should have their skin checked by a dermatologist at least once a year.
- Wear protective clothing, like hats, and seek shade during the midday peak sun hours.
While the tips above can help decrease your risk of melanoma, there are numerous risk factors that cannot always be prevented. For example, people with a family history of melanoma have a higher likelihood of getting it. There is also an increased risk for people with a genetic predisposition to melanoma, such as those who are fair-skinned, blonde and blue-eyed, or who have red hair.
Getting Involved: Raising awareness in May
May is all about melanoma and skin cancer prevention, detection, treatment and, of course, general awareness. There is much you can do to raise your own level of awareness about the disease, as well as the awareness of others. For example, you can:
- Encourage accountability at the level of the family, so that family members can encourage each other to wear sunscreen and limit the times spent in the sun
- Help build awareness through schools and the formal education system by equipping and encouraging teachers and administrators to pass on the necessary information about melanoma to their students.
- Organize health fairs and/or events focused on sharing information about Melanoma with your community.
|
<urn:uuid:23f645b2-bb64-4317-ae53-e539113118ea>
|
CC-MAIN-2020-34
|
https://universaldermatology.com/may-melanomaskin-cancer-detection-prevention-month-heres-need-know/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738960.69/warc/CC-MAIN-20200813043927-20200813073927-00448.warc.gz
|
en
| 0.959134 | 602 | 2.90625 | 3 |
Carbon Dioxide (CO2) Sensor Module
Air quality sensor module for Greenhouse gas monitoring
The Carbon Dioxide sensor module can accurately measure low concentrations of CO2 in the ambient air at the ppm level. The sensor module design is also capable of monitoring ambient CO2 on a real-time basis.
The advanced support electronics of this air quality sensor make it compact and reliable. Additionally, the low-noise electronics allow stable and accurate detection of carbon dioxide even at very low concentrations in the atmosphere.
The sensor works on Non-Dispersive Infrared (NDIR) technology. The CO2 sensors have proven themselves in the field with long term stability and reliable operation. Hence, this makes it an ideal choice for indicative real-time air quality monitoring for outdoor applications.
Standard calibration gases and tools are used to calibrate the CO2 sensor, in accordance with the requirements defined for equivalent method instruments by the USEPA (40 CFR Part 53) and the EU (2008/50/EC).
Outdoor air quality monitoring systems like Polludrone Lite, Polludrone Smart, and Polludrone Pro use the CO2 sensor module. Therefore, this module is ideal for applications like smart city monitoring, greenhouse gas emission monitoring, and research-based projects.
Measurement Range: 0-5000 ppm
Sensor Life: 2 years
Minimum detection limit: 400 ppm
Working Principle: Non-Dispersive Infrared (NDIR)
Drift: ±5 ppm / Year
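As a small illustration, readings from a module with the specifications above can be sanity-checked against the published range and detection limit. The function and threshold names below are our own sketch, not part of the vendor's API:

```python
# Plausibility check for readings from a CO2 module with the specs
# listed above: a 0-5000 ppm range and a 400 ppm detection limit.

RANGE_MAX_PPM = 5000
DETECTION_LIMIT_PPM = 400

def classify_reading(ppm):
    if ppm < DETECTION_LIMIT_PPM:
        return "below detection limit"
    if ppm > RANGE_MAX_PPM:
        return "out of range"
    return "valid"

samples = [380, 415, 900, 5200]
print([classify_reading(p) for p in samples])
# -> ['below detection limit', 'valid', 'valid', 'out of range']
```

In a deployed system this kind of filter would typically run before logging or transmitting readings, so that sensor faults are flagged rather than recorded as real concentrations.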
|
<urn:uuid:eaf88af9-0d2a-4d27-8618-1144352b5576>
|
CC-MAIN-2023-40
|
https://oizom.com/sensor-modules/carbon-dioxide-co2-sensor-module/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506676.95/warc/CC-MAIN-20230925015430-20230925045430-00263.warc.gz
|
en
| 0.817499 | 315 | 2.5625 | 3 |
Vaccine Hesitancy: Guidance and Interventions
Research shows that vaccine hesitancy (i.e. ‘the delay in acceptance or refusal of vaccines despite the availability of vaccination services’ (WHO SAGE, 2014a) is rising, resulting in alarming figures on disease outbreaks reported globally. Despite availability of vaccines, the number of countries reporting hesitancy has steadily increased since 2014 (Lane et al., 2018). Therefore, there is a need to understand what governments and partners can do to tackle this problem. The evidence for this rapid review is gender blind and taken from grey literature, including systematic reviews, interviews, research reports, and peer-reviewed academic papers from vaccine-related projects (e.g. Vaccine Confidence Project). Strategies aimed at specific populations in grey literature differed from those in peer reviewed literature (WHO SAGE, 2014a). This review does not focus on anti-vaccination (anti-vaxx/anti-vac) sentiments or movements. Drivers of vaccine hesitancy are also not explored in this review.
|
<urn:uuid:b6b7a7a9-bd45-48a3-8bbd-15f4b663ac34>
|
CC-MAIN-2023-40
|
https://k4d.ids.ac.uk/resource/vaccine-hesitancy-guidance-and-interventions/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510297.25/warc/CC-MAIN-20230927103312-20230927133312-00322.warc.gz
|
en
| 0.91944 | 218 | 2.96875 | 3 |
As the decline in road deaths tapered off last year, the European Automobile Manufacturers’ Association (ACEA) is making a push to raise drivers’ awareness of existing and future vehicle safety technologies.
Data issued by the European Commission earlier this month shows that there were 25,100 fatalities on European roads in 2018, a figure which has fallen by one-fifth since 2010. Although EU roads are still by far the safest worldwide, progress over recent years has been stabilising. Last year’s figures were down just 1 percent compared to 2017.
Last month, the EU institutions signed off the revised General Safety Regulation, which sets out the safety technologies that must be included as standard in new car types as of 2022, such as autonomous emergency braking (AEB), intelligent speed assistance (ISA) and lane keeping systems.
“Today, cars already come equipped with a wide range of safety measures. A key concern of ours is that many drivers are simply not aware of these existing technologies – let alone the many new safety features that will be fitted in all new passenger cars in just a few years’ time,” said Erik Jonnaert, ACEA Secretary General.
In an effort to address this, ACEA has launched a new website, www.roadsafetyfacts.eu, which provides a fact-based overview on everything related to vehicle safety technology and road safety. Through educational infographics, it explains various safety features in a clear and simple way. This new campaign forms part of the industry’s commitment to better communicate with citizens on safe driving and the more effective use of available safety features.
“Auto makers are fully committed to informing drivers about the safety features already available in their vehicles, in addition to making cars even safer in the future,” Mr Jonnaert underlined.
Indeed, the new website explains the latest innovations in active safety technology that can prevent accidents from happening altogether, or at least mitigate the impact, as many of these will become standard equipment as of 2022.
Jonnaert added, “At the same time, vehicle technology is not the only answer. To continue driving down road accidents and fatalities, we must combine vehicle technology with safer driver behaviour and improved road infrastructure.”
ACEA represents the 15 major Europe-based car, van, truck and bus manufacturers: BMW Group, CNH Industrial, DAF Trucks, Daimler, Fiat Chrysler Automobiles, Ford of Europe, Honda Motor Europe, Hyundai Motor Europe, Jaguar Land Rover, PSA Group, Renault Group, Toyota Motor Europe, Volkswagen Group, Volvo Cars, and Volvo Group.
Also read: Autocar Professional's April 15 issue walks the safety talk
|
<urn:uuid:6b943b8e-4eed-4c94-a562-f37388e5bc52>
|
CC-MAIN-2020-16
|
https://www.autocarpro.in/news-international/european-automakers-to-raise-awareness-of-safety-tech-to-further-reduce-road-deaths-42722
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370521876.48/warc/CC-MAIN-20200404103932-20200404133932-00404.warc.gz
|
en
| 0.949436 | 552 | 2.765625 | 3 |
Global Warming Issues :)
It’s unfortunate that these days, “global warming” is such a dirty word; tis a fascinating topic and one we’ll be investigating a bit during our course (in Unit 3 when we talk about the rocky planets). Despite the fact we’ll be looking at this more later, I HAVE to post about two things that I found yesterday that remind me of global warming issues.
1. A Bad Astronomy blog post about new research on the cause of the Little Ice Age also linked to some new data put out by NASA stating the Sun cannot be the cause of global warming. The claim that the Sun's natural increase in temperature over time (the increase itself is real) explains what we are calling "global warming" is one of the big drums that climate change deniers keep beating. But it's simply not true. We just go to the data (like the folks at NASA did in the above article). We'll talk about it more in class.
2. One of my sisters is a landscape designer and she provided a link to the newly updated “Plant Hardiness” map published by the USDA. It shows that the hardiness zones (basically what kinds of plants you can put outdoors) are shifting northerly a bit.
However, they are careful to put this statement on their website:
“Climate changes are usually based on trends in overall average temperatures recorded over 50-100 years. Because the USDA PHZM represents 30-year averages of what are essentially extreme weather events (the coldest temperature of the year), changes in zones are not reliable evidence of whether there has been global warming.”
There are other factors in the shift, like better data (more coverage, more accurate) available now, but regardless, it seems mighty interesting to me :)
|
<urn:uuid:b20f74c3-4cae-40fc-8655-c0218985b47e>
|
CC-MAIN-2014-10
|
http://drgrundstrom.wordpress.com/2012/02/01/global-warming-issues/
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999639602/warc/CC-MAIN-20140305060719-00073-ip-10-183-142-35.ec2.internal.warc.gz
|
en
| 0.948206 | 390 | 2.546875 | 3 |
The Punu Masks
A traditional African mask is an essential feature of Sub-Saharan and West African culture. Africans use these masks during different ceremonies and rituals, and the masks have both religious and spiritual meaning. The mask makers are highly honored people because creating a new mask requires much time, and a person who makes a mask must be familiar with the spirits. Craftsmen usually use wood as the main material, but a mask can also be made of light stone or metals such as bronze or copper. Once a craftsman has shaped the mask, he paints it with various natural colors. However, some masks are not painted but decorated with animal hair, seashells, straw, horns, eggshells, teeth, bones, or feathers. The craftsman may use animal hair to make the hair or beard of the mask (Alabi, Olalere, & Sola 217). African masks are worn in various ways. They can be divided into several categories: face masks that cover only the face, hat masks that are worn on the head, and helmet masks that cover both the head and the face. However, the variety of African masks is not limited to this list; some masks are worn on the chest and are still considered masks rather than ornaments.
Among African masks, the Punu masks are one of the brightest examples of African culture. The Punu people live on the left bank of the Upper Ngoume River. In the 18th century, they migrated northwards and settled together with the Lumbo, the Galoa, the Eshira, and the Vungu tribes, where they have remained to the present day. The tribe is divided into independent villages with their own clans and families. Each village has its own specifics, but all observe the same rituals and use similar masks for them. Moukoudji is the supreme organ that regulates the observance of rules in the tribe; it also controls different cults, including human relics and masks.
The Punu use their masks to establish a connection with gods and spirits; they believe that only shamans have the power and the right to communicate with the spirits and gods. As a rule, the Punu masks represent the faces of female ancestors. The mask is painted white, a color that symbolizes peace, the spirits of the dead, deities, and the afterlife. Moreover, this color is widespread in funerals and memorials. Even though the masks are worn mainly during funeral ceremonies, they are also used in magical rites. However, not all masks are white; black masks with high, dominant foreheads, red lips, blue eyes, and bushy hair are also used. These characteristics represent realistic features of Punu women; thus, the tribe uses exactly these colors and specifics.
The Punu mask is recognized by the diamond-shaped scarification marks on the temples and forehead. As mentioned before, the Punu masks are divided into those covered with white color and those which are black. However, there is one more significant difference between them. Both black and white masks share the same stylistic design, but black masks perform a judiciary function and are used for identifying sorcerers.
Masks are traditionally used for important ceremonies; they have cultural importance for the Punu people. They believe that spirits communicate with a person who wears a mask. In this way, a tribe may ask for a sign from the ancestors when they need it. Moreover, the masks are the pride of a tribe or family because they demonstrate the history of the family; thus, it is a big honor to have permission to wear such a mask.
As it was mentioned above, masks are the part of ceremonies and rituals that are often accompanied by dance. However, not all rituals and ceremonies require masks; the following list illustrates examples of the dance rituals that require wearing the Punu masks:
- Ancestor cults. These rituals are performed to commemorate the ancestors or to ask for their help in difficult times.
- Fertility rites. These rituals are performed regularly, usually at the beginning of summer or during the driest seasons.
- Rites of passage. These rituals are performed for people undergoing the rite of initiation; as a rule, this refers to boys who are to become adult men.
– Agricultural festivals. At the end of the season, a tribe organizes the agricultural festival so as to please and thank the spirits and gods for a good harvest. In this way, people demonstrate their appreciation and ask for the same good harvest in the future. As a rule, it is a big holiday for any African tribe as people strongly depend on the harvest, and the life of the tribe is impossible without it.
– Initiations, including secret societies. The African tribes, just as the western nations, have specific secret societies that are closed to the majority of the tribe. If one wants to become a member of such a society, before a person is allowed to pass the secret initiation, he should prove that he is good enough for it.
- Rituals for increase (money, property, children, and others). These rituals are usually performed when a tribe experiences difficulties or shortages, such as illness, lack of money, or infertility.
– Related Ceremonies. These rituals are performed for establishing connections with relatives and friends.
All these rituals and ceremonies presuppose the use of masks as an essential attribute of their performance. The Punu masks can tell much about the African world and the people who have used them for centuries, up to the present day. First of all, they demonstrate the culture and traditions of the African peoples and their attitudes toward them. Indeed, all nations in the world have old traditions that may look a bit strange or even barbaric today. Rituals were an essential part of ancient times: the druids gathered to perform their secret rituals, and Vikings drank the blood of their enemies to receive their strength. However, in the modern world, these rituals became outdated and were forgotten. Nowadays, nations retain only a few traditions that date back to the past and, as a rule, even these have experienced dramatic changes. Nevertheless, Africans have managed to preserve pieces of their culture to the present day; they have strictly performed their rituals according to the ancient canons. Therefore, when one talks about the Punu masks, one should remember that masks are not just pieces of art but a reflection of ancient traditions.
The Punu masks are allowed to be worn only by a shaman or a much-respected person; it is an honor to receive permission to wear a mask. Accordingly, it can be said that masks are items of the hierarchical structure of African society. Common tribe members cannot wear masks; only those who hold senior positions, or a shaman, are allowed to do so. Therefore, the social order is strictly observed, and masks participate in this process as an important part of African culture. In this light, it is worth noting that the hierarchy of African society is represented by social class division (Palmeirim 75). This system is very similar to the one that existed in medieval Europe, when society was divided into three classes: the upper class, represented by kings and other powerful people; the middle class, to which belonged the knights and the priesthood; and the lower class, the ordinary peasants. In the case of African culture, the priesthood is represented by shamans with their masks, rituals, and ceremonies. However, shamans in African society have more power, and they can influence rulers' decisions. The belief in supernatural spirits is very strong; thus, shamans hold an even higher position in African society.
Modern scholars, as well as the collectors, have a great interest in the Punu masks. They consider them not only a priceless piece of art but also a legacy of the past. The mask may give much information about the tribe, but only for those people who can read it. The mask, as it was said above, is a unique reflection of the family. The mask contains information about its owner, as well as the traditions and history of the family. The ancestor spirit masks provide information on the genealogical tree of the family. The scholars have an opportunity to retrace the ancestors of a particular tribe and identify the peculiarities of their way of life.
Every mask is unique because it reflects certain specifics, and scholars use them to differentiate the tribes, their legacy, and history. For example, some tribes use only helmet masks for ceremonies while others wear the hat masks solely for the agriculture festivals. Therefore, the mask contains more information about the tribe than one can even imagine.
However, scholars are not the only ones with a great interest in the Punu masks. Punu masks can almost certainly be found in the collections of famous collectors of antique relics. The collectors, like the scholars, admire the ornaments and styles of the masks and the symbolism they contain.
Despite the uniqueness and historical value of the Punu masks, one should note that masks are not a feature of Punu culture alone. Masks have been used by almost every nation; moreover, some nations still use masks today, though often for festivals or carnivals. Similar or relatively similar masks were used in Japanese, Indian, Ethiopian, Maori, Brazilian, and other cultures all over the world (Cunningham 34). Those masks were also made of various materials, but primarily of wood. For example, Japanese masks were initially used in the theater. These masks reflected the behavior of the character: evil characters wore masks portraying negative human emotions such as anger, hatred, or envy, while good characters wore masks portraying positive emotions such as happiness, sincerity, or kindness. The masks were very important for the Japanese theatre because they also helped represent non-human characters such as spirits, dragons, or other mythological creatures. The Japanese theatrical masks thus had a primarily entertaining function. It is also necessary to mention that masks were used by the Japanese warriors known as samurai. A samurai helmet consisted of three elements, one of which was a mask. In this case, the mask played a protective role, shielding the warrior's face from direct blows to the head. Beyond protection, the mask also served to frighten: the terrifying grimace on the mask was meant to intimidate enemies in battle. Today, one may find these masks in museums, where they serve only as exhibit items. The theatre masks, however, can still be seen during performances organized for tourists.
Brazilian masks are not exhibit items; they are widely used today during the annual carnivals and festivals. Unlike the African and Japanese masks, the distinctive feature of Brazilian masks is their brightness and splendor. Every year, craftsmen design new masks that reflect the main theme of the annual festival. The masks are decorated with feathers of different colors in order to be attractive and elegant. As noted above, Brazilian masks are used during the carnivals; they perform entertainment and aesthetic functions. The carnivals are accompanied by dances and the exotic dresses of those who wear the masks.
Accordingly, it can be said that the Punu masks have analogues all over the world. However, the functions of other masks differ: some are used purely for entertainment, others are a means of protection in battle, while some are used for rituals and communication with spirits, just as the Punu masks are. Masks in all cultures share one common meaning and significance: the demonstration of emotions to a wide audience. Masks are used for underlining certain emotions or creating new ones, and sometimes for representing animals, spirits, or other mythological creatures.
The Punu masks reflect the identity of the Punu people: their culture, traditions, and values. These masks are more than relics of the past; they have special significance for the people of the Punu tribes. Through the masks, Punu culture has created a system of beliefs that has survived to the present day. The masks serve as a bridge between two worlds: the world of the living and the world of the dead and the supernatural.
How do photoluminescent signs work?
Photoluminescence occurs when a material absorbs photons (light energy) and then emits them back when the light source is removed i.e. in the dark, creating a noticeable lighting effect. Photoluminescent safety signs are designed to absorb photons from ambient light and then re-emit them in darkened conditions.
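The charge-and-emit cycle described above can be illustrated with a toy model in which stored luminance fades after the light source is removed. The exponential form and all the numbers below are illustrative assumptions only; real photoluminescent pigments are rated against standards such as DIN 67510 and follow more complex decay curves.

```python
import math

def afterglow_luminance(l0, tau_minutes, t_minutes):
    """Toy exponential model of afterglow: luminance (mcd/m^2)
    t minutes after the ambient light source is removed."""
    return l0 * math.exp(-t_minutes / tau_minutes)

# Illustrative values only: a fully charged sign starting at
# 200 mcd/m^2 with an assumed 30-minute decay time constant.
for t in (0, 10, 30, 60):
    print(f"{t:>3} min after lights out: "
          f"{afterglow_luminance(200, 30, t):6.1f} mcd/m^2")
```

The same model makes the practical point in the text concrete: the sign emits most brightly in the minutes immediately after a blackout, which is exactly when evacuation routes need to be visible.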
With an internal lifespan of up to 25 years, they can also save money by not using electricity. Being maintenance-free, they eliminate the need for the costly regular testing regimes associated with ensuring the correct functionality of electrical signs. We have extended the use of photoluminescent material beyond fire safety signs to many other health and safety signs, including hazard signs, prohibition signs, chemical safety signs, first aid signs and custom signage requirements. We have also created floor signs and tapes to aid wayfinding during emergency evacuations.
Cost savings with photoluminescent signage
In addition to their cost effectiveness, these eco-friendly safety signs are also more reliable than traditional exit signs thanks to their natural ability to glow in the dark. Free from light bulbs, they are the optimal signage solution in emergency situations that result from structural shock or falling debris. Using eco-friendly photoluminescent signs alongside your electrical fire exit signs is a cost-effective way to help a company's journey to 'go green'.
Reduce your carbon footprint
Going green and being environmentally friendly is a hot topic for businesses. As well as being cost effective and functional, photoluminescent signs bring additional environmental benefits. Many companies are committed to recycling and purchasing non-toxic supplies, but still work in buildings whose materials, electrical systems and waste systems have been in place since long before being "eco" became the business buzzword of the moment and before the economic benefits of being green were truly understood. Different companies will have different factors to consider when improving their green credentials; for example, a solicitors' firm will have different concerns than a construction site. Electrical fire safety signs are environmentally unfriendly due to their never-ending demand for electricity. Being lit 24 hours a day, 365 days a year, they also contribute to driving up companies' fuel costs. Using photoluminescent signs therefore brings both an economic and an environmental benefit. Often recognised as best practice in the UK, photoluminescent sign systems have also been adopted throughout the EU and USA.
How to use your photoluminescent signs
For your signs to be effective they will require initial activation from a good light source – this can be natural or artificial.
This classic O'Reilly bestseller covers every element of HTML & XHTML in detail, explaining how each element works and how it interacts with other elements. With hundreds of examples, this book shows readers how to create effective Web pages and how to master advanced features like Cascading Style Sheets.
This guide to creating web documents using HTML and XHTML starts with basic syntax and semantics, and finishes with broad style guidelines for designing accessible documents that can be delivered to a browser. Links, formatted lists, cascading style sheets, forms, tables, and frames are covered. The fourth edition is updated to HTML 4.01 and XHTML 1.0. Annotation c. Book News, Inc., Portland, OR (booknews.com)
Traffic assignment problems usually consider two dimensions.
- Generation and attraction. A place of origin generates movements that are bound (attracted) to a place of destination. The relationship between traffic generation and attraction is commonly labeled as spatial interaction. The above example considers one origin/generation and destination/attraction, but the majority of traffic assignment problems consider several origins and destinations.
- Path selection. Traffic assignment considers which paths are to be selected and the amount of traffic using these paths (if more than one unit). For simple problems, a single path will be selected, while for complex problems, several paths could be used. Factors behind the choice of traffic assignment may include cost, time, or the number of connections.
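For a single origin-destination pair, path selection is typically modelled as a shortest-path problem over a network whose link weights encode cost, time, or some generalized combination of the factors above. The small network and its weights below are invented for illustration; Dijkstra's algorithm is one standard way to select the minimum-cost path.

```python
import heapq

def shortest_path(graph, origin, destination):
    """Dijkstra's algorithm: returns (total_cost, path) for the
    cheapest path from origin to destination."""
    queue = [(0, origin, [origin])]  # (cost so far, node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in graph.get(node, {}).items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return float("inf"), []  # destination unreachable

# Hypothetical four-node network; weights are generalized travel costs.
network = {
    "A": {"B": 4, "C": 2},
    "B": {"D": 5},
    "C": {"B": 1, "D": 8},
    "D": {},
}
print(shortest_path(network, "A", "D"))  # (8, ['A', 'C', 'B', 'D'])
```

With several origins and destinations, the same routine would simply be run for every origin-destination pair, and more complex assignment methods then split the resulting demand across several competing paths.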
Hard Cold Facts
January 3, 2008
Sneezing, scratchy throat, runny nose -- everyone knows the first signs of a cold. And everyone has an opinion about how you get a cold and how you treat a cold. Today, UT Health Center's Dr. Mom reports the cold hard facts about colds.
Colds are minor infections of the nose and throat that are caused by over 200 different viruses. A cold may last a week or more, and colds are highly contagious. Adults suffer 2-4 colds per year, while children suffer 6-8 colds per year.
Colds are spread when droplets of fluid that contain the cold virus are transferred by touch. The viruses do not multiply on environmental surfaces, but they can still be transferred and still be infectious.
Cold symptoms include:
- Runny nose
- Scratchy throat
- Loss of taste and smell
- Not feeling well in general

Lower the possibility of getting a cold by:
- Wash your hands often, or use alcohol-based hand sanitizer! Germs are easily passed from one person to another by shaking hands, touching doorknobs and handrails.
- Avoid people who are sick
- Use a germ-killing disinfectant
- Avoid touching your nose, mouth, and eyes where germs easily enter your body
- When you sneeze or cough, use a tissue and dispose of it properly, and WASH YOUR HANDS!

I have a cold. What can I do?
- Get plenty of rest
- Drink plenty of fluids
- Eat a healthy diet to help your body fight infection
- Take an over-the-counter cough and cold medicine that can help relieve symptoms
- Use a humidifier to help ease congestion
- Stay home to prevent spreading the cold virus

Common myths about colds:
- Feed a cold, starve a fever – actually it is better to eat a healthy diet in order to give your body the nutrients it needs to fight infection.
- Antibiotics can cure a cold – colds are caused by viruses, and antibiotics cure bacterial infections, not viruses.
- Taking extra vitamin C will keep me from getting a cold – studies have not shown that vitamin C prevents colds.
- You can catch a cold from being out in the cold weather – colds are common in the winter months because that is when the viruses are most active. The cold virus is transferred through fluid droplets that contain the virus.

Cold symptoms usually last at least a week or more, but if symptoms persist or worsen, call your primary care physician.
The CSN Tool will be a central information portal, integrating current knowledge on migratory waterbirds along the African-Eurasian flyway. This new web-based application will support the identification and conservation of the network of sites used by waterbirds to complete their annual migrations across Africa and Eurasia. It will integrate flyway-scale conservation efforts and foster international cooperation among a wide range of government and non-government organizations towards flyway level conservation of migratory waterbirds.
The CSN Tool will foster international cooperation among a wide range of government and non-government organizations towards flyway level conservation of migratory waterbirds.
What can the CSN Tool do for you?
This application will benefit everyone dealing with waterbirds and with wetlands management. At the flyway scale, it will show the key sites for any waterbird population in the AEWA region. At the site level, it will help site managers to identify the significance of their site in the flyway context for each waterbird species their area hosts. In addition, the system will illustrate site boundaries, changes in population size over time and practical ecological requirements to help site management. The Critical Site Network Tool will also assist in the development of International Single Species Action Plans and the systematic identification of wetlands to be protected under the Ramsar Convention, and will support site managers and environmental impact assessment practitioners.
In sum, the CSN Tool will allow conservation managers and policy makers at the local, national and international level to:
• Identify the key sites used by a specific population of waterbirds along their entire migration route
• Understand the importance of a specific site for a specific population, or group of waterbird species
• Verify the conservation status of a specific site
• Illustrate the boundaries of a specific site
• Show how population numbers are changing over time at a specific site
• Show the importance of a site from a flyway-scale perspective
• Provide practical information on the ecological requirements of waterbirds to help site management
The CSN Tool is aimed at conservation practitioners, decision-makers and planners at local, national and international level. It will help national authorities across the African-Eurasian region identify which critical sites fall into their national jurisdiction and highlight the importance of individual sites in a flyway context. The tool will assist international waterbird conservation efforts by providing the information needed to better protect waterbird species across their entire migratory range. It will help all stakeholders involved in the transboundary conservation of waterbirds to target their efforts to fulfil their obligations under relevant international treaties, including the Ramsar Convention on Wetlands, the Convention on Migratory Species, the African-Eurasian Migratory Waterbird Agreement and the EU Birds Directive.
How does it work?
The CSN Tool will bring together information held in the three main databases used for international waterbird and wetland conservation. It will make this currently dispersed data available in a central, open and searchable Web-based interface. The tool is being developed by the UNEP World Conservation Monitoring Centre (UNEP-WCMC) in collaboration with Wetlands International and BirdLife International. The CSN Tool will provide comprehensive site and flyway scale information for over 300 migratory waterbird species, including all 236 species covered by the African-Eurasian Migratory Waterbird Agreement (UNEP/AEWA).
The CSN Tool will also combine information from key existing datasets on migratory waterbirds and their critical habitats, including:
The World Database on Protected Areas (UNEP-WCMC)
The World Database on Protected Areas (WDPA) provides the most comprehensive dataset on protected areas worldwide and is managed by UNEP-WCMC in partnership with the IUCN World Commission on Protected Areas (WCPA) and the World Database on Protected Areas Consortium. The WDPA is a fully relational database containing information on the status, environment and management of individual protected areas.
International Waterbird Census Database
The IWC Database includes over 25,000 sites and contains the most complete waterbird count data available in the African-Eurasian region. The IWC is an annual census of waterbirds in more than 100 countries and takes place in mid-January each year. Close to 15,000 voluntary expert observers count between 30 million and 40 million waterbirds using a standardized method involving the collection, checking, and importing of national and regional waterbird census data. The international census is coordinated by Wetlands International – one of the leading global NGOs dedicated to the conservation and wise use of wetlands.
World Bird Database
Through its Important Bird Areas Programme BirdLife International has identified, using standard numeric criteria compatible with the requirements of the Ramsar Convention, the key areas for the conservation of threatened, endemic and congregative species across the region. Information on the population size of species which triggers the classification of the site as an Important Bird Area is stored in the World Bird Database together with information on threats and conservation measures.
Ramsar Sites Information Service (RSIS)
The Ramsar Sites Information Service provides data on wetlands designated as internationally important under the Ramsar Convention on Wetlands, generally called Ramsar sites. The information included in the database derives from the Ramsar Information Sheet, the Ramsar National Report and/or from Administrative Authority correspondence provided by Contracting Parties. This includes information on wetland types, land uses, threats, hydrological values of the sites etc.
CSN Tool Preview (The WOW portal data displayed using the ArcGIS Explorer):
The above screen shots illustrate a first working draft of one possible way of accessing the WOW portal data using the free software ArcGIS Explorer. This will basically work by streaming the live data from the WOW - CSN web mapping servers (developed by UNEP-WCMC) to the CSN software installed on a local machine.
The red dots you see on the screen are the IBA (Important Bird Areas) sites as they are in the Birdlife database. These will be linked to the species data. The CSN portal will also use the web mapping services directly online as a 2D view through the web browser.
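Streaming layers such as the IBA sites from a web mapping server generally means issuing standard OGC WMS requests over HTTP. The sketch below only assembles such a request; the endpoint URL and layer name are hypothetical placeholders, not the actual WOW-CSN server addresses, which were not published in this text.

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width=800, height=600):
    """Build a standard OGC WMS 1.1.1 GetMap request URL.
    bbox is (minx, miny, maxx, maxy) in EPSG:4326 degrees."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return f"{base_url}?{urlencode(params)}"

# Hypothetical endpoint and layer name, for illustration only:
# a bounding box roughly covering the African-Eurasian flyway region.
url = wms_getmap_url("https://example.org/wms", "iba_sites",
                     (-20.0, -35.0, 60.0, 70.0))
print(url)
```

A desktop client like ArcGIS Explorer or a browser-based 2D viewer would issue requests of exactly this shape and render the returned PNG tiles on top of its base map.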
When will the tool be available?
The Critical Site Network Tool is currently under development and will be launched in 2010. However, a demonstration version with limited data will be released in 2008 to collect feed-back on functionality from future users.
Dr. K.N. Anandan
I propose that the narrative as a discourse can be used to give the richest kind of linguistic input to the learners. A narrative is not just the parading of certain sequence of events. Nor is it equivalent to a conventional story (say, the story of the woodcutter and the goddess, or of the goose that lays golden eggs) which begins at some point, runs through certain sequence of events and comes to a natural culmination.
Here follows a story.
A dove and an ant
A dove and an ant lived in a tree near a pond. One day the ant fell into the pond. It could not swim. The dove saw it. She dropped a leaf near the ant. The ant stepped on the leaf and reached the bank. The ant thanked the dove.
Q 1: How will we present the story in the class?
Q2: How can we help our learners make sense of the story that they are listening to?
Conventionally, a few steps will be followed.
1. Telling the story with the support of pictures
2. Showing appropriate gestures
3. Giving mother tongue equivalent for unfamiliar words
4. Explaining some part of the story in mother tongue
5. Repeating the story without showing pictures
6. Asking questions to check comprehension
Children may catch the gist of the story, but they receive it as an assemblage of information, as passive listeners, without employing their thinking skills. While the teacher is telling the story there is no guarantee that corresponding thoughts are generated in their minds. The formation of inner speech does not take place by merely listening to the story if it is presented in the way mentioned above. In order to make it happen, the narrative has to be used to trigger inner speech in the minds of children. The text of the story has to be modified into that of a narrative.
Building up a narrative
The narrative aims at creating images in the minds of the listeners. It deals with human drama involving certain characters who the listeners can identify with, and get emotionally attached to. They start empathizing with these characters and share their thoughts and feelings.
Let us see how this can be materialized.
Consider the set of questions in A and B.
1. How can we convert the story “a dove and an ant” to a narrative?
2. What are the mental images to be created?
3. How can we instil empathy in the listeners?
1. What are the events?
2. Where do these events take place?
3. Who are the characters?
4. What are they saying?
5. What do they feel?
The questions in A are related to the overall effect that we are targeting through the text of the narrative. Those in B point to the craft of developing the narrative.
In the light of the questions given above we can revise the text given in Task 2.
We will blow up the information contained in the first two sentences (i.e., A dove and an ant lived in a tree near a pond. One day the ant fell into the pond).
Narrative: A dove and an ant
On the wayside there is a pond. What a big pond! And how many water lilies! White lilies, red lilies! Big green leaves! How beautiful!
The pond is full with water. Of course it is clear water. Like on a mirror, you can see your face on it.
There is a tree growing near the pond; a mango tree. Not a big one and not a small one, too. It has several branches. Most of the branches are bending over the pond. Now there are no mangoes on it but only flowers. Bunches of flowers ... Not one but many... A fresh smell flows out from them. Ah, what a nice smell!
On the topmost branch there is a nest. A dove lives in this nest. A small, white dove, with beautiful, red eyes ... What a nice bird! How beautiful it is!
Somewhere on the tree there is a family of ants.
A father ant, a mother ant and their many, many children!
How many ants!
The father ant was sitting on one of the branches.
He felt the smell of the flowers.
“Nice smell,’ said the children.
“Yes, it is,’ said the mother ant.
‘Where is it coming from, mom?’
‘Of course, it’s from the mango flowers.’
‘I ‘m sure there is honey in the flowers,’ said the father ant. ‘But we can’t reach there now.’
‘Can’t you feel it? A heavy wind is blowing.’
‘Please, dad. Take us to the honey,’ said the little ants.
‘Little ones, how can I take you there now? The wind will blow us away.’
‘Please, dad. We want honey,’ said the little ones again.
‘Get it for them, will you?’ said the mother ant.
‘Okay. Let me try.’
With his tiny legs he started moving on the branch.
‘I must reach that bunch of flowers,’ thought the ant.
The branch was bending over the pond. While walking, the ant looked down. He saw the water in the pond.
‘What’ll happen if the wind blows now?’
‘If the wind blows I’ll fall into the water, Yes, I will,’ thought the ant.
The thought frightened him.
‘I can’t swim.’
He closed his eyes. And then…
The wind started blowing.
And the ant fell down into the pond.
We have got a sample of the craft of developing a narrative from the story. What are the things that we have incorporated into the text?
1. Shall we blow up the remaining part of the story, too?
2. What are the details to be added?
3. What kinds of sentences are to be used?
4. What strategy is to be used to load the text with emotions?
Building up on the emotive aspect of language
Why do we focus on the emotive aspect of language? Recall our own experience of getting involved in interpersonal communicative situations. We may have met people at several places and may have talked to them about several things at several points of time. We are not likely to store these several pieces of conversation in our minds precisely because we don’t feel the need for doing so. For instance, we tend to forget the conversation that has taken place between the shopkeeper and ourselves the moment our business is over unless there is some special reason to retain it in our mind. The same is the case with the exchanges that have taken place on several other occasions.
Here is a typical piece of conversation used for teaching English.
With the vegetable vender
Customer: What is the price of tomato per kilo?
Vender: Eight rupees, Sir.
Customer: And for bhindi?
Vender: Seven rupees.
Customer: Okay. Give me half a kilo tomato and half a kilo bhindi.
Vender: Here’s is your tomato and bhindi, Sir.
Customer: Thank you. Here’s the money.
Vender: Thank you.
1. Can you identify the vegetable vender and the customer?
2. Which part of the world do they live in?
3. What comments can we make on the text of this discourse?
4. How long these expressions will remain in the memory of our learners?
The point that we are trying to make here is only this: we cannot be complacent with the kind of mechanical encounters given in Task 5 under the pretext of teaching English. Here follows another task that can illustrate the point we are trying to drive home in this section:
Examine the following activity:
The teacher displays a page of railway timetable and asks children to examine it thoroughly to see the details it contains.
She asks a number of questions such as the following:
• Which are the trains that leave from Chennai?
• Which train reaches Bangalore from Trivandrum
• What time does Chennai Trivandrum mail reach Coimbatore?
• How long does the train stop at Coimbatore?
• What is the departure time of the Chennai mail from Trivandrum?
• How far is Coimbatore from Chennai?
1. Will the learners be motivated to respond to these questions?
2. Do they have real need to answer these questions?
3. Is there any scope for generating divergent ideas?
In fact, a large amount of information can be pooled from the timetable by asking similar questions, and a variety of structures can be invoked to pose them. (Suppose the train does not stop at Coimbatore; how much running time can be saved? Which train takes the shortest running time from Chennai to Trivandrum?)
Theoretically speaking a lot of language can be generated using the railway time table. But it is a mechanical activity; the language that is generated will be emotionally void, and will not be emotionally registered in the minds of the learners.
Let us perceive this topic from a different perspective. There are certain encounters that will remain fresh in our minds so long as we live. This is because of the emotional vibrancy those encounters have created in us. Even then we may not recall syllable by syllable what we may have talked to or others may have told us on such occasions. Nevertheless we will have in our minds a “feel” of those encounters.
Why does this happen so? Note that experience, including linguistic experience gets sustained in our minds as emotional gestalts. It seems we do not have the parts but only the whole, though this may not be so. If we strive a little, parts can be recovered from the whole.
The point is that if linguistic experience is registered as emotional gestalts, then the role of a facilitator is to help learners develop such gestalts in their minds. This is possible only when learners can experience them. The role of a teacher in the constructivist paradigm is to transact experience, not to transmit information whether this is information about language or any other topic.
Since the narrative is meant to operate at the emotional plane of the listeners it makes use of an emotive language; it breathes life. The theme of a particular piece of narrative is decided by the plot that has to be specially selected taking into consideration the nature of learners belonging to a particular age group. For example, the narrative designed for small children will essentially make use of elements of fantasy which is not required for learners of higher age groups. Note that as a pedagogic tool the narrative is to be fine-tuned in such a way that it does not create any linguistic, cultural, or psychological barriers for the learner. Obviously it cannot deal with themes that do not belong to the experiential orbit of the learners. The overall aim of presenting a narrative is to create certain images in the minds of learners and to make them emotionally charged. It does not aim at creating situations for teaching vocabulary or certain structures and functions though learners might register certain vocabulary items and structures non-consciously.
We have a few pedagogic claims on the narrative:
1. It allows a holistic treatment of second language.
2. It accommodates different discourses; we can incorporate descriptions, conversations and rhymes into the text of a narrative.
3. Note that any language makes use of different varieties of sentences such as declaratives, interrogatives, imperatives, exclamatory sentences, short responses, negatives, tags. Unlike the other discourse forms (for example, essay, poem, letter, etc.) a narrative as a discourse can accommodate all these types of sentences quite naturally.
4. While performing the narrative the teacher will have to make use of all possible prosodic features such as stress, intonation, modulation. In this sense also, the narrative offers a holistic treatment to language.
5. While presenting the narrative the teacher can pause at certain points thus creating certain “narrative gaps” which can be filled in by the learners by constructing target discourses.
6. Narrative can fruitfully capitalize on the emotive aspect of the language. This is of vital importance in the language class because experience is sustained in human minds as emotional gestalts.
7. It can channel the thoughts of the listeners so that they can perform the tasks assigned to them in a better way.
As we have already mentioned, the new approach proposes a discourse-oriented pedagogy in the sense that the input that is given to the learners (irrespective of their levels) will be in terms of discourses, and what we expect from the learners is the construction of discourses. We are familiar with the design of a conventional textbook. It contains several reading passages covering a wide range of discourses such as essays, stories, poems, letters, and descriptions. Each unit of the course book will be focusing on certain vocabulary items, structures and functions. Since the material is designed within a skill-based approach, the course book will be focusing on the development of receptive and productive language skills, and study skills, for which a number of tasks will be suggested for practice.

Why should we teach vocabulary, structures and other linguistic facts of the second language? We do this with the expectation that the learner will be using them in meaningful contexts. After teaching these items, the traditional "brick-laying" methodologist will test whether learners have learnt them with the help of some exercises where they will be asked to fill in the blanks choosing the right word from a set of words given to them. Perhaps he will also test whether the learners can 'use the words in their own sentences.' We have already seen that words or sentences in isolation have no independent existence; they are parts of some discourses. If the child is not able to construct discourses as and when they are needed, what is the point of going for the drudgery of learning word meanings and their uses?
Providing slots for the listeners
How can we involve the listeners in the process of narration?
Consider the narrative piece ‘The dove and the ant.’
1. Can we present this narrative at a single stretch?
2. Where can we provide slots for interaction with the listeners?
3. What kind of questions are to be asked?
Discourse-oriented pedagogy helps us materialise the shift from a fragmentary and skill-based treatment of language in terms of structures and vocabulary items to a holistic and knowledge-based treatment in terms of discourses. It captures the emotive aspects of language and can be adapted to suit the needs of learners at all levels.
|
<urn:uuid:6e508fb1-b0af-4ca4-87d6-b57bfee268b2>
|
CC-MAIN-2017-34
|
http://keralaenglishgroup.blogspot.com/2010/10/narrative-as-pedagogic-tool.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103270.12/warc/CC-MAIN-20170817111816-20170817131816-00225.warc.gz
|
en
| 0.945947 | 3,128 | 4.25 | 4 |
It’s almost Thanksgiving here in the U.S., so why not use this time to teach your students the ancient art of writing thank-you notes? Show the world that teenagers can be gracious and appreciative too, if they’re given the right skills. With the following six simple steps, your students can be the most courteous class in the school.
In her article “How to Write a Thank-You Note,” Leslie Harpold outlines six elements of an effective thank-you note:
- Greeting the giver
- Expressing your gratitude
- Discussing your use of the gift
- Mentioning the past, alluding to the future
- Thanking the giver again
- Closing the note
Read Harpold’s full article for an explanation of each element and for some concrete examples of the thank-you note in action. You could even print the article for your class. Then, buy some inexpensive thank-you notes or have students make their own, and take some class time for students to use Harpold’s outline to write notes to give out at Thanksgiving dinner.
It’s a very simple template, but if you can get your students to use it, then you will have made the world a better place.
The Key Stage 3 Curriculum
The 2 year Key Stage 3 (KS3) Curriculum comprises the core subjects of English, Maths and Science and the following foundation subjects: Humanities (History and Geography), Languages, Design Technology, Computer Science, Art, Music, Drama, PE and PSHCE, and aims to give students a flying start to their time at Tallis. It is underpinned by our commitment to excellence through creativity, community, challenge and engagement. Our Tallis Habits of Mind are central to the schemes of work in year 7 and 8. In year 7 students undertake two extended enquiries in each subject area. We aim to help students develop as self-motivated learners with habits that will serve them well throughout their lives.
The time allocation for each subject in Year 7 is as follows:
Accessibility: The KS3 Curriculum is common to all students.
Students study Geography, History and French or Spanish during Year 7 and 8.
In Design Technology students follow a rotation model allowing access to the different subjects offered, namely Product Design, Graphics and Food Technology.
Art, Music, Drama and Technology: These subjects are taught throughout Years 7 and 8.
Computing: All students are taught Computing in Years 7 and 8 as a separate subject for an hour a week. Students will learn a range of knowledge and skills including computer systems, data structures, algorithms, programming, networks and communications systems in the modern world, using the latest technology and the Internet. They will develop and apply their analytic, problem-solving and computational thinking skills creatively. They will also explore important elements of how to make use of ICT safely and responsibly and get the chance to develop digital media to share and communicate their learning with a wider audience. They will become digitally literate - able to use, and express themselves and develop their ideas through, information and communication technology – at a level suitable for the future workplace and as active participants in a digital world.
PSHCE: PSHE, careers and work related learning, citizenship and SMSC education are taught through one hour of tutorial time per week, the assembly programme and across the schemes of work of all subject areas. RE is taught as a distinct subject within the humanities curriculum.
At Tallis we believe that the cornerstones ensuring equal opportunities are excellence and justice for all. Excellence means that the academic, cultural and community life of a school guarantees that children of all abilities and backgrounds are respected, loved, nurtured and empowered to fulfil potential and reach beyond the circumstances of birth.
Justice demands that children educated by the state should have fair access to shaping the future so that all the doors in the world are open to them when they leave us.
Excellent and imaginative teaching in a strong and happy community makes this possible. Extra-curricular activities give us more space for creativity.
Can·ker·worm (?), n. Zool.
The larva of two species of geometrid moths which are very injurious to fruit and shade trees by eating, and often entirely destroying, the foliage. Other similar larvae are also called cankerworms.
⇒ The autumnal species (Anisopteryx pometaria) becomes adult late in autumn (after frosts) and in winter. The spring species (A. vernata) remains in the ground through the winter, and matures in early spring. Both have winged males and wingless females. The larvae are similar in appearance and habits, and belong to the family of measuring worms or spanworms. These larvae hatch from the eggs when the leaves begin to expand in spring.
© Webster 1913.
How to End Homelessness
It's an affordable housing crisis
The key to ending homelessness is understanding that homelessness is poverty without the key to a house. In other words, it’s not a homelessness crisis. It’s an affordable housing crisis.
People sleeping in doorways and on park benches are only the tip of the homelessness iceberg. For every homeless person on the street, there are ten or twenty or thirty homeless people—including entire families—who don’t have a key to their own house or apartment.
In addition to people already homeless, about 34 million low-income Americans are at high risk of homelessness because they pay 50 percent of their income on housing. Most are only a paycheck or health crisis away from homelessness.
Homelessness became a national crisis more than thirty years ago, when federal support for public housing and other subsidies to low-income people stopped. So the solution to homelessness—a solution used by other developed nations that don't have the level of homelessness seen in the U.S.—lies in restoring federal support for affordable housing.
The National Housing Trust Fund
Passed by Congress and signed into law by President George W. Bush, the National Housing Trust Fund was meant to be the first significant increase in funding in decades for affordable rental housing for extremely low-income people. Because of the 2008 recession, the Fund was unfunded. That ended last year, when the funding stream was re-established. In 2016, almost $200 million will go into the fund and be distributed to the states and territories. While this is a good start, it will not end the affordable housing crisis.
Tweaking the home mortgage deduction
Under current NHTF funding, Maryland will receive about $3 million in 2016 for low-income housing. A proposed law, however, would increase that funding ten-fold--to over $30 million a year, according to the National Low Income Housing Coalition. By making modest changes to the federal tax code, in a way that would make the tax break fairer, the Common Sense Housing and Investment Act would also expand the number of low- and moderate-income homeowners with mortgages who would get tax breaks. The NLIHC estimates the changes would save about $230 billion over ten years (and recommends reinvesting those savings in low-income rental housing through the NHTF). According to the NLIHC, the bill also would help lower rates of homelessness, all without using any additional government money.
How You Can Help
- By donating to HPRP, you can become part of our efforts to help people who are homeless or at risk of homelessness, as well as supporting us as we work to change policies that affect people who don't have a home
- Sign up for our action alerts and events
- Visit our blog to find out more about what we do
Support the Common Sense Housing and Investment Act
Join United for Homes, the campaign to fund the NHTF through modifications of the home mortgage interest deduction. The Common Sense Housing and Investment Act would support renters struggling to meet their basic needs, while also converting the mortgage interest deduction to a 15 percent tax credit. The bill would expand tax benefits for more homeowners while giving people with disabilities, the elderly and lower-income working families better access to rental homes. The bill also would help lower rates of homelessness, all without using any additional government money.
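The difference between the current deduction and the proposed 15 percent credit comes down to simple arithmetic, sketched below with hypothetical figures (not numbers from the bill): a deduction's value scales with the filer's marginal tax rate, while a flat credit's value does not.

```python
# Toy arithmetic comparing a mortgage-interest deduction with a flat
# 15% credit. All figures are hypothetical, for illustration only.

mortgage_interest = 10_000  # dollars of interest paid this year

def deduction_value(marginal_rate_pct):
    # A deduction reduces taxable income, so its value depends on the
    # filer's marginal tax rate.
    return mortgage_interest * marginal_rate_pct // 100

def credit_value(rate_pct=15):
    # A credit reduces tax owed directly, at the same rate for everyone.
    return mortgage_interest * rate_pct // 100

assert deduction_value(35) == 3_500  # high-bracket filer gains more from a deduction
assert deduction_value(12) == 1_200  # modest-bracket filer gains less
assert credit_value() == 1_500       # the 15% credit is worth the same to both
```

This is why converting the deduction to a credit shifts the benefit toward lower- and moderate-income homeowners.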
Hashing in Computer Science: Fifty Years of Slicing and Dicing
PART I: MATHEMATICAL PRELIMINARIES.
1.1: The Sum and Product Rules.
1.2: Mathematical Induction.
1.4: Binomial Coefficients.
1.5: Multinomial Coefficients.
1.8: The Principle of Inclusion-Exclusion.
1.11: Inverse Relations.
Appendix 1: Summations Involving Binomial Coefficients.
2. Recurrence and Generating Functions.
2.2: Generating Functions.
2.3: Linear Constant Coefficient Recursions.
2.4: Solving Homogeneous LCCRs Using Generating Functions.
2.5: The Catalan Recursion.
2.6: The Umbral Calculus.
2.7: Exponential Generating Functions.
2.8: Partitions of a Set: The Bell and Stirling Numbers.
2.9: Rouché’s Theorem and Lagrange’s Inversion Formula.
3. Asymptotic Analysis.
3.1: Growth Notation for Sequences.
3.2: Asymptotic Sequences and Expansions.
3.3: Saddle Points.
3.4: Laplace’s Method.
3.5: The Saddle Point Method.
3.6: When Will the Saddle Point Method Work?
3.7: The Saddle Point Bounds.
3.8: Examples of Saddle Point Analysis.
4. Discrete Probability Theory.
4.1: The Origins of Probability Theory.
4.2: Chance Experiments, Sample Points, Spaces, and Events.
4.3: Random Variables.
4.4: Moments—Expectation and Variance.
4.5: The Birthday Paradox.
4.6: Conditional Probability and Independence.
4.7: The Law of Large Numbers (LLN).
4.8: The Central Limit Theorem (CLT).
4.9: Random Processes and Markov Chains.
5. Number Theory and Modern Algebra.
5.1: Prime Numbers.
5.2: Modular Arithmetic and the Euclidean Algorithm.
5.3: Modular Multiplication.
5.4: The Theorems of Fermat and Euler.
5.5: Fields and Extension Fields.
5.6: Factorization of Integers.
5.7: Testing Primality.
6. Basic Concepts of Cryptography.
6.1: The Lexicon of Cryptography.
6.2: Stream Ciphers.
6.3: Block Ciphers.
6.4: Secrecy Systems and Cryptanalysis.
6.5: Symmetric and Two-Key Cryptographic Systems.
6.6: The Appearance of Public Key Cryptographic systems.
6.7: A Multitude of Keys.
6.8: The RSA Cryptosystem.
6.9: Does PKC Solve the Problem of Key Distribution?
6.10: Elliptic Groups Over the Reals.
6.11: Elliptic Groups Over the Field Zm,2 .
6.12: Elliptic Group Cryptosystems.
6.13: The Menezes-Vanstone Elliptic Curve Cryptosystem.
6.14: Super-Singular Elliptic Curves.
PART II: HASHING FOR STORAGE: DATA MANAGEMENT.
7. Basic Concepts.
7.1: Overview of the Records Management Problem.
7.2: A Simple Storage Management Protocol: Plain Vanilla Chaining.
7.3: Record-Management with Sorted Keys.
8. Hash Functions.
8.1: The Origin of Hashing.
8.2: Hash Tables.
8.3: A Statistical Model for Hashing.
8.4: The Likelihood of Collisions.
9. Hashing Functions: Examples and Evaluation.
9.1: Overview: The Tradeoff of Randomization Versus Computational Simplicity.
9.2: Some Examples of Hashing Functions.
9.3: Performance of Hash Functions: Formulation.
9.4: The χ²-Test.
9.5: Testing a Hash Function.
9.6: The McKenzie et al. Results.
10. Record Chaining with Hash Tables.
10.1: Separate Chaining of Records.
10.2: Analysis of Separate Chaining Hashing Sequences and the Chains They Create.
10.3: A Combinatorial Analysis of Separate Chaining.
10.4: Coalesced Chaining.
10.5: The Pittel-Yu Analysis of EICH Coalesced Chaining.
10.6: To Separate or to Coalesce; and Which Version? That Is the Question.
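As an illustrative aside (my own toy code, not material from the book), the separate-chaining scheme of Chapter 10 can be sketched in a few lines of Python: each hash-table cell holds a chain of the keys that hash to it, and a collision simply lengthens the chain.

```python
# Minimal separate-chaining hash table: cell h(k) holds a list
# ("chain") of the (key, value) pairs whose keys hash to it.

class ChainedHashTable:
    def __init__(self, n_cells=8):
        self.cells = [[] for _ in range(n_cells)]

    def _cell(self, key):
        # Any hash function h(k) -> {0, ..., n-1} works here.
        return hash(key) % len(self.cells)

    def insert(self, key, value):
        chain = self.cells[self._cell(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:             # overwrite an existing key
                chain[i] = (key, value)
                return
        chain.append((key, value))   # collision -> the chain grows

    def search(self, key):
        for k, v in self.cells[self._cell(key)]:
            if k == key:
                return v
        return None

table = ChainedHashTable(n_cells=4)
for i in range(10):
    table.insert(f"key{i}", i)
assert table.search("key7") == 7
# Ten keys in four cells: by the pigeonhole principle some chain
# has length >= 3, whatever the hash function does.
assert max(len(c) for c in table.cells) >= 3
```

The analysis in Chapter 10 is essentially about the distribution of these chain lengths.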
11. Perfect Hashing.
11.2: Chichelli’s Construction.
12. The Uniform Hashing Model.
12.1: An Idealized Hashing Model.
12.2: The Asymptotics of Uniform Hashing.
12.3: Collision-Free Hashing.
13. Hashing with Linear Probing.
13.1: Formulation and Preliminaries.
13.2: Performance Measures for LP Hashing.
13.3: All Cells Other than HTn-1 in the Hash-Table of n Cells are Occupied.
13.4: m-Keys Hashed into a Hash Table of n Cells Leaving Cell HTn-1 Unoccupied.
13.5: The Probability Distribution for the Length of a Search.
13.7: Hashing with Linear Open Addressing: Coda.
13.8: A Possible Improvement to Linear Probing.
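Chapter 13's linear probing can likewise be illustrated with a minimal open-addressing sketch (again my own toy code): a colliding key walks forward one cell at a time, modulo the table size, and a search stops at the first empty cell.

```python
# Minimal linear-probing (open addressing) insert and search.
# On a collision at cell h(k), probe h(k)+1, h(k)+2, ... mod n.

def lp_insert(table, key):
    n = len(table)
    start = hash(key) % n
    for step in range(n):
        cell = (start + step) % n
        if table[cell] is None or table[cell] == key:
            table[cell] = key
            return cell
    raise RuntimeError("hash table full")

def lp_search(table, key):
    n = len(table)
    start = hash(key) % n
    for step in range(n):
        cell = (start + step) % n
        if table[cell] is None:      # an empty cell ends the probe sequence
            return None
        if table[cell] == key:
            return cell
    return None

table = [None] * 8
for k in ("alpha", "beta", "gamma", "delta"):
    lp_insert(table, k)
assert lp_search(table, "gamma") is not None
assert lp_search(table, "omega") is None
```

The clustering behaviour analysed in the chapter comes from occupied runs growing as probes spill into neighbouring cells.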
14. Double Hashing.
14.1: Formulation of Double Hashing.
14.2: Progressions and Strides.
14.3: The Number of Progressions Which Fill a Hash-Table Cell.
14.3.1: Progression Graphs.
14.5: Insertion-Cost Bounds Relating Uniform and Double Hashing.
14.7: The UDH Chance Experiment and the Cost to Insert the Next Key by Double Hashing.
14.8: Proof of Equation (14.12a).
14.10: Proof of Equation (14.12b).
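To make Chapter 14's "progressions and strides" concrete, here is a small sketch (my own illustration, not the book's code) of a double-hashing probe sequence: with a prime table size, every stride is coprime to n, so the progression visits each cell exactly once.

```python
# Double hashing: key k probes h1(k), h1(k)+h2(k), h1(k)+2*h2(k), ... mod n.
# If the stride h2(k) is nonzero and coprime to n, the probe sequence is a
# permutation of the cells; choosing n prime guarantees this.

def probe_sequence(key, n):
    h1 = hash(key) % n
    h2 = 1 + (hash(key) // n) % (n - 1)    # stride in {1, ..., n-1}
    return [(h1 + j * h2) % n for j in range(n)]

n = 7  # prime table size
seq = probe_sequence("some key", n)
# Each of the n cells appears exactly once in the progression.
assert sorted(seq) == list(range(n))
```

This full-cycle property is what lets double hashing approximate the uniform hashing model of Chapter 12.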
15. Optimum Hashing.
15.1: The Ullman–Yao Framework.
15.1.1: The Ullman–Yao Hashing Functions.
15.1.2: Ullman–Yao INSERT(k) and SEARCH(k).
15.1.3: The Ullman–Yao Statistical Model.
15.2: The Rates at Which a Cell is Probed and Occupied.
15.3: Partitions of (i)Scenarios, (i)Subscenarios, and Their Skeletons.
15.4: Randomly Generated m-Scenarios.
15.5: Bounds on Random Sums.
15.6: Completing the Proof of Theorem 15.1.
PART III: SOME NOVEL APPLICATIONS OF HASHING.
16. Karp-Rabin String Searching.
16.2: The Basic Karp-Rabin Hash-Fingerprint Algorithm.
16.3: The Plain Vanilla Karp-Rabin Fingerprint Algorithm.
16.4: Some Estimates on Prime Numbers.
16.5: The Cost of False Matches in the Plain Vanilla Karp-Rabin Fingerprint Algorithm.
16.6: Variations on the Plain Vanilla Karp-Rabin Fingerprint Algorithm.
16.7: A Nonhashing Karp-Rabin Fingerprint.
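The plain-vanilla Karp-Rabin search of Chapter 16 can be sketched as follows (an illustrative Python version with an arbitrarily chosen base and modulus; false matches from hash collisions are ruled out by a direct comparison):

```python
# Plain-vanilla Karp-Rabin: compare a rolling hash of each text window
# with the pattern's hash, verifying candidate matches directly.

def karp_rabin(text, pattern, base=256, mod=101):
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    high = pow(base, m - 1, mod)           # base^(m-1) mod p, used to roll
    p_hash = t_hash = 0
    for i in range(m):
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    matches = []
    for i in range(n - m + 1):
        # Equal hashes only *suggest* a match; verify to avoid false positives.
        if p_hash == t_hash and text[i:i + m] == pattern:
            matches.append(i)
        if i < n - m:                       # roll the window one character
            t_hash = ((t_hash - ord(text[i]) * high) * base
                      + ord(text[i + m])) % mod
    return matches

assert karp_rabin("abracadabra", "abra") == [0, 7]
```

The cost analysis in Section 16.5 is precisely about how often the verification step fires on a false hash match.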
17. Hashing Rock and Roll.
17.1: Overview of Audio Fingerprinting.
17.2: The Basics of Fingerprinting Music.
17.3: Haar Wavelet Coding.
17.5: Some Commercial Fingerprinting Products.
18. Hashing in E-Commerce.
18.1: The Varied Applications of Cryptography.
18.3: The Need for Certificates.
18.4: Cryptographic Hash Functions.
18.5: X.509 Certificates and CCIT Standardization.
18.6: The Secure Socket Layer (SSL).
18.7: Trust on the Web ... Trust No One Over 40!
18.9: Criticism of MD5.
18.10: The Wang-Yu Collision Attack.
18.11: Steven’s Improvement to the Wang-Yu Collision Attack.
18.12: The Chosen-Prefix Attack on MD5.
18.13: The Rogue CA Attack Scenario.
18.14: The Secure Hash Algorithms.
18.15: Criticism of SHA-1.
18.17: What Now?
Appendix 18: Sketch of the Steven’s Chosen Prefix Attack.
19. Hashing and the Secure Distribution of Digital Media.
19.2: Intellectual Property (Copyrights and Patents).
19.4: Boil, Boil, Toil ... and But First, Carefully Mix.
19.5: Software Distribution Systems.
19.7: An Image-Processing Technique for Watermarking.
19.8: Using Geometric Hashing to Watermark Images.
19.9: Biometrics and Hashing.
19.10: The Dongle.
Appendix 19: Reed-Solomon and Hadamard Coding.
Exercises and Solutions.
Jeffco Public Library Databases
Ms. Kurach's Biographical Sketch assignment requires a lot of research and gathering of information. To do this, we are going to use the Jeffco Public Library databases. These databases are reliable and credible sources of information. To use these databases, all you need is a Jeffco Public Library card. For this assignment, we are going to use the databases Gale Biography in Context and Middle Search Plus.
Middle Search Plus
Ebooks are also a great option. At KCMS, we own the Encyclopedia of World Biography ebook. Please note that the publication date is 2004, which may be limiting for some research topics.
Encyclopedia of World Biography, 2nd ed., 23v, 2004
Please use the following passwords for access:
Remote Password: kencaryl
When gathering information, be sure to use the research grid provided by Ms. Kurach. A copy is attached below if needed.
By Angry Staff Officer for CIMSEC’s “Movie Re-Fights Week”
Anyone familiar with the late Galactic Civil War will remember the outstanding triumph by the Rebel Alliance at the Battle of Hoth. Many had considered that this would be a last stand by the Alliance, or at the very least a mere draw if enough transports were able to get away before the Imperial Fleet bore down on them. However, the Alliance was able develop a battle plan that was built on an analysis of the Imperial ground forces’ tactics, techniques, and procedures from years of fighting. This plan emphasized the Alliance’s maneuverability and the terrain that they had chosen for the engagement.
Prior to the engagement, the general staff for the Rebel Alliance had wargamed possible enemy avenues of approach and strike group composition. Because they had effectively shielded their base on the snow-bound planet of Hoth, they knew that the Empire would have to land a strike force on the planet to try to knock out the shield generator. Attempts to enter the battlespace with air assets could be nullified by the Alliance’s Ion Cannon. Additionally, early warning sensors were placed both on the planet’s surface as well as in the atmosphere.
The Alliance’s Echo Base and shield generator were safely harbored inside a draw with only one ground avenue of approach. This site was carefully selected after a thorough intelligence preparation of the battlefield by Alliance engineers and intelligence officers. They could thus canalize any approaching ground force between two ridges of ice and rock. Analyzing the Imperial task organization from past battles, Alliance intel officers theorized that they would most likely attempt to infiltrate with heavy All Terrain Armored Transport (AT-AT) Imperial Walkers and dismounted ground troops to exploit gaps. This would leave them vulnerable on their flanks and rear to air sorties from Alliance T-47 snowspeeders.
Additional preparations included the development of an engagement area in the draw, with obstacle emplacement and fields of fire picked out for concealed heavy weapons. Deep pits were dug and camouflaged with hologram imagery to make the ground appear level. These were offset between lanes of massive tanglefoot: lengths of wire attached to deep stakes sunk into the ice that would impede vehicular movement. Additionally, two belts of landmines were placed in the expected Imperial landing area to disrupt the attack at its outset. Heavy weapons emplacements were dug into the slopes of the surrounding hills to strike at any vulnerabilities in the AT-AT’s armor. In the enemy’s immediate front, several dummy gun emplacements were created to draw the Imperial troops into the trap. The goal was to create as much havoc as possible to the Imperial heavy armor to degrade the morale of their dismounted troops.
The ground forces commander established his heavy weapons fields of fire and coordinated with the Alliance air wings of snowspeeders, specifically Rogue Squadron, to define their flight patterns, where they would infiltrate the battlefield, and where they would exfiltrate, thus avoiding any friendly fire. They gambled that they would have immediate air superiority as the Empire would wait until the shield was down before sending in any air assets. Final protective fires were set at the entrance to Echo Base, where Alliance planners hoped that they could at the very minimum establish a choke point with destroyed Imperial vehicles. Rather than commit to a linear defense, the Alliance relied on a defense in depth, which allowed greater freedom of movement for their dismounted infantry to avoid the heavy guns of the AT-ATs.
The Alliance commander on Hoth, General Carlist Rieeken, assumed a certain amount of risk committing his forces to the battle. He maintained his contingency plan of escape from the planet via transports to assuage his conscience that was still plagued by the loss of Alderaan. Princess Leia Organa emphasized that Hoth was the ideal place to deliver the empire a dramatic defeat that would resound throughout the Galaxy, and Rieeken reluctantly went along with the plan.
Upon the Empire’s discovery of the Rebel base on Hoth, Lord Darth Vader devised a plan whereby the Imperial fleet would come out of hyperspace at some distance from Hoth and bring its heavy weapons to bear upon the planet. However, when Admiral Kendal Ozzel, commander of the Empire’s Death Squadron, brought his ships out of hyperspace, they immediately triggered the Alliance’s early warning systems in planetary orbit. The shield was activated and Vader was forced to commit to a ground attack. As predicted, the Empire landed heavy armor along with several battalions of the 501st Legion’s snowtroopers on Hoth, at the only available entrance to Echo Base.
Major General Maximilian Veers had overall command of the Imperial ground force. An armor officer by trade, Veers had been stuck at the rank of colonel for some time. His last assignment had been as an instructor at the armor schoolhouse; with the destruction of the first Death Star, so many senior Imperial commanders had been killed that Veers was elevated to major general. Thus, he was entering his first major ground operation with little field experience in the current operating environment. This was perhaps why he walked right into the trap that the Alliance had laid for him.
He deployed his AT-AT’s in line abreast into the draw, with the dismounted 501st troopers behind them. Because of this, his first line of armor suffered significantly from the first two mine belts. Veers then moved two companies of infantry forward of his armor, to check for additional traps and mines. As the terrain constricted them into the draw, the infantry bunched up, and were immediately engaged by Alliance crew served weapons concealed on the flanks, causing heavy casualties amongst the snowtroopers. Veers ordered his lead AT-AT’s forward to knock out the Alliance weapons positions, but two were immediately lost when they stumbled into the pits. The top-heavy nature of the Imperial armor caused the walkers to completely collapse when they encountered the pits, rendering them useless and causing severe casualties to the troops trapped inside. In frustration, Veers ordered all his infantry to dismount to get eyes on the Alliance positions.
The dismounted infantry surged forward, encountering the tanglefoot. Company commanders reported obstacle locations back to Veers, who put his armor into single file as Imperial engineers began to slowly breach their way through the obstacles, taking catastrophic losses from Alliance positions. With his armor’s linear firepower thus limited, Veers could only watch in horror as Rogue Squadron struck from his left, their cannons decimating his ground troops. The second wave of snowspeeders were able to neutralize the rear AT-AT with the cables on their speeders, pinning the entire Imperial task force inside the engagement area. Veers panicked and ordered his armor to fan out to engage the targets that they could identify. This decimated the entire armored force, as they could not maneuver out of the engagement area. The armor took 90% losses, with the entirety immobilized inside the engagement area. Veers’ command vehicle was decapitated by concentrated Alliance firepower and he died in flames.
From space, Vader’s rage increased by the second as he monitored the battle below. When he lost communications with Veers, he flew into a fury and committed two more battalions of ground troops. These arrived to observe the last moments of the first task force, which disappeared under sustained blaster fire. Rather than walk into certain death, these two battalions elected to defect from the Empire in their transports.
Vader ordered the planet blockaded and called for reinforcements. However, word of the Imperial disaster on Hoth spread like wildfire around the galaxy. Revolts erupted in nearly every system, tying down all available ground troops and star destroyers. The Imperial blockade winnowed away due to attrition from small Alliance strike groups that ate away at it. In frustration, Vader abandoned the blockade and retreated to where the beginnings of the second Death Star were taking shape. Superior Alliance intelligence tracked him there, and the Death Star was destroyed before it could ever become operational. Battle damage assessments calculated that Vader was on board when it was destroyed, but could not confirm his death. His body was never found. The Empire vanished in the fire and destruction of the insurgency that began with the victory on Hoth.
Angry Staff Officer is an engineer officer in the Army National Guard with an enlisted infantry background. He has blogged under the name ‘Angry Staff Officer’ since 2014 and is a member of the Military Writer’s Guild. He has served in multiple positions in both staff and line units, at the company, battalion, and division levels, and served one tour in Afghanistan. Angry Staff Officer holds his master’s degree in history. He enjoys snark, satire, cynicism, history, and over analyzing foreign policy. He writes at www.AngryStaffOfficer.com and can be found on Twitter @pptsapper.
|
<urn:uuid:bcbf6722-0a1c-4b2c-ad95-638e12e2e1fe>
|
CC-MAIN-2017-30
|
http://cimsec.org/re-fighting-the-battle-of-hoth-an-engineers-perspective/20602
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425766.58/warc/CC-MAIN-20170726042247-20170726062247-00004.warc.gz
|
en
| 0.976962 | 1,854 | 2.75 | 3 |
Giant Hogweed is an invasive and strong-growing plant; originally from Southern Russia and Georgia, the species can grow over 3 metres tall. Although a fully matured plant can look very striking, don't be fooled by its attractiveness, as Giant Hogweed is invasive and potentially harmful. If touched, chemicals in the sap can cause photodermatitis, making the skin very sensitive to sunlight, which in time will lead to blistering, pigmentation and scarring.
There is currently no statutory requirement to remove Giant Hogweed, but PHS Greenleaf recommend you do so, as this plant can be very harmful to unsuspecting members of the public. Legislation has been applied to invasive alien species, among which Giant Hogweed is classified. The Wildlife and Countryside Act 1981 lists Giant Hogweed on Schedule 9, Section 14, meaning that it is an offence to cause the spread of invasive plants into the wild in England and Wales; similar legislation is in place in Scotland and Northern Ireland. PHS Greenleaf offer a comprehensive site survey, in which specific invasive weed species will be identified and assessed. Suggestions and recommendations on a practical solution to deal with the problem will be presented in a report.
How PHS Greenleaf can help
- Our trained technicians will assess the area and provide a bespoke report
- Our experienced technicians are fully equipped with the knowledge on how to effectively perform Giant Hogweed Removal
- We will supply all Risk Assessments and Safe Systems of work for these operations
Our service options
- Giant Hogweed can be dug up and left on site (Giant Hogweed is a Controlled Waste Similar to Japanese Knotweed and must be disposed of in a licensed landfill site)
- Giant Hogweed removal including a waste transfer note to remove waste from site
|
<urn:uuid:fb09cd11-3463-4722-8f36-f342ac46a0c8>
|
CC-MAIN-2017-39
|
https://www.phsgreenleaf.co.uk/giant-hogweed/
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818693866.86/warc/CC-MAIN-20170925235144-20170926015144-00050.warc.gz
|
en
| 0.941967 | 361 | 2.984375 | 3 |
Global food demand will double by 2050 as the population increases. Concurrently, climate science suggests that our agricultural production methods need to adjust to less predictable rainfall, warmer temperatures and more frequent extreme weather events. Several major research reports demonstrate that agriculture could address climate change, unemployment, urbanization, desertification, and water pollution, among other environmental challenges.
With these escalating challenges of food and climate change, it is in the interest of development organizations, governments, private sector, and communities to invest in agricultural practices that are adaptive to climate change; that lower or prevent associated greenhouse gas emissions; that generate income for farmers; and that ensures food security.
Climate Smart Agriculture (CSA) fulfils these needs and is championed by the international development community. It is agriculture that sustainably increases productivity and resilience (adaptation), reduces or removes greenhouse gas emissions (mitigation), and enhances the achievement of national and global food security, among other development goals. Its elements, which comprise effective practices ranging from conservation agriculture and agroforestry to watershed management, are already practised by pockets of smallholder farmers across the globe and could be scaled out to increase productivity.
With increasing vulnerability of communities to climate change impacts and projected future impacts of climate change, it is urgent to adopt and scale out the CSA practices and innovations across the world, particularly in developing countries where most of the food is produced. However, the ageing population of farmers is less likely to understand this urgency, let alone develop the capacity to adopt these innovations. A new generation of farmers is required.
There is increasing worldwide momentum that recognises young people as the new generation of food producers. Smallholder farmers in developing countries supply up to 70 per cent of the world’s food . In these countries too, over 40 percent of the population are young people, who face increasing unemployment rates. For instance, two-thirds of the population across sub-Saharan Africa are below the age of 25 years. Of these, 44 per cent are below 15 years of age. As food demand increases, there will be a growing pressure on these younger people to feed the future and contribute meaningfully to their national economies.
Young people can substantially contribute to agriculture and rural development, but systemic challenges often hold them back:
- Prevailing perceptions of and attitudes towards agriculture, mainly acquired while growing up and in school. The current generation of young people has been socialised to look down on agriculture as a dirty, unproductive, poor man’s activity for the unschooled, and to value white-collar jobs instead. Unfortunately these jobs are not forthcoming, even as the world population continues to grow and demand more food;
- Lack of an enabling environment for prospective young farmers, including access to productive resources such as land and capital, markets, research and partnerships;
- Lack of a favourable political environment for the agricultural sector at a national, regional, and international level, for instance in trade, infrastructure, transfer of technologies etc.;
- Inadequate skills and skills mismatch among the youth (e.g. in production, processing and business skills). Education reforms have failed to produce skilled workers, while the teaching of entrepreneurial skills and behaviours is often not properly integrated into school curricula and may not teach students self-reliance and risk-taking;
- Generational gap, for instance in the transfer of indigenous farming knowledge from adult farmers to young farmers;
- Labour market discrimination often results in high rates of unemployment or underemployment among young women. The lack of employment-promoting strategies in most countries and cultures often compounds this challenge further; and
- Other impacts of globalization that hinder prospective young farmers including urbanization, economic uncertainties, and price volatilities among others.
These challenges notwithstanding, young people bring several strengths to CSA as a solution to food security, climate change, and inclusive economic growth. To start with, they readily adopt and adapt new knowledge to fit their needs: the current generation of young people is leading ICT innovation and application across sectors as varied as health, finance, education, security, and agriculture. Young people are also energetic and dynamic, and an increasing number are getting educated, putting them on a pathway to acquire appropriate skills in CSA and related livelihood enhancement approaches. The third strength comes from the youth dividend: making up over 40% of the population, particularly in developing countries where most of the food is produced and where approximately 11 million youth join the labour market every year, young people stand to gain from the self-employment and job-creation opportunities CSA promises. Finally, the need for young people to create independent spaces of action fits well into the context of CSA, as they create employment through value and supply chains as well as through diversified investments in CSA.
Augustin-becker Surname History
The family history of the Augustin-becker last name is maintained by the AncientFaces community. Join the community by adding to this genealogy of the Augustin-becker name:
- Augustin-becker family history
- Augustin-becker country of origin, nationality, & ethnicity
- Augustin-becker last name meaning & etymology
- Augustin-becker spelling & pronunciation
- genealogy and family tree
Augustin-becker Country of Origin, Nationality, & Ethnicity
No content has been submitted about the Augustin-becker country of origin. The following is speculative information about Augustin-becker.
The nationality of Augustin-becker is often complicated to determine in cases where country boundaries change over time, leaving the nation of origin a mystery. The original ethnicity of Augustin-becker may be difficult to determine depending on whether the surname arose organically and independently in different locales, as in the case of family names that come from professions, which can appear in multiple countries independently (such as the last name "Bishop", which may have been taken by church officials).
Augustin-becker Meaning & Etymology
No content has been submitted about the meaning of Augustin-becker. The following is speculative information about Augustin-becker.
The meaning of Augustin-becker may come from a craft, as with the name "Dean", which may have been adopted by members of the clergy. Some of these profession-based surnames might be a profession in another language, which is why it is important to understand the country of origin of a name and the languages spoken by its family members. Many names like Augustin-becker are inspired by religious texts such as the Quran, the Bible, and the Bhagavadgītā. Often these surnames relate to a religious expression such as "Lamb of God".
Augustin-becker Pronunciation & Spelling Variations
No content has been submitted about alternate spellings of Augustin-becker. The following is speculative information about Augustin-becker.
Understanding misspellings and spelling variations of the Augustin-becker surname is important to understanding the history of the name. Names like Augustin-becker vary in their spelling as they travel across villages, family unions, and languages over generations. In times when literacy was uncommon, names such as Augustin-becker were transliterated based on how a scribe heard them when people's names were written in official records. This could have given rise to misspellings of Augustin-becker.
Last names similar to Augustin-beckerAugustin-bichl Augustin-boor Augustin-brucker Augustin-bühler Augustin Charles Augustincic Augustin d' Augustin daniela Augustindas Augustindavid Augustin de bourguisson d' Augustin-delalande Augustine Augustine Alice S Augustine-bäumler Augustin-eble Augustinedwaine Augustine Elliot Augustinegodswill Augustinelensiano
augustin-becker Family Tree
Here are a few of the augustin-becker genealogies shared by AncientFaces users.
A Handbook for Young Writers, Thinkers, and Learners
Write on Track is a writing handbook for students in grade 3. You'll find guidelines, models, checklists, tips, and much more. Write on Track also helps students become better readers, test takers, and learners. The handbook has five main parts.
- The Process of Writing helps your students learn all about writing, from using the writing process to understanding the qualities of good writing.
- The Forms of Writing section provides guidelines and models for every form of writing. Would you like your students to write in journals or create time-travel fantasies? Would you like them to write essays or news stories? Check out this section.
- The Tools of Learning section helps your students improve skills in reading, speaking, test taking, and using technology.
- The Proofreader’s Guide includes rules and examples to help students with punctuation, spelling, mechanics, usage, and grammar.
- The Student Almanac supports writing across the curriculum with fascinating science facts, helpful math tips, colorful maps, and much more!
A free online Write on Track Teacher's Guide walks you through the student handbook, giving advice for using each page in your classroom. You'll also find free document downloads, related minilessons, additional student models, additional writing topics, videos, and more. And the Teacher's Guide correlates every page to the Common Core State Standards, connecting you to even more resources.
Also, check out the Write on Track SkillsBook, teaching punctuation, capitalization, mechanics, usage, spelling, and grammar.
Write on Track . . .
- uses colorful art to catch students' eyes and an encouraging voice to keep them reading.
- leads students step-by-step through the writing process: prewriting, writing, revising, editing, and publishing.
- helps students write better sentences, paragraphs, and essays.
- teaches all modes of writing: personal, narrative, explanatory, persuasive (argument), literary, research, creative, and assessment.
- includes high-interest student models and guidelines to help students create similar writing.
- aligns with the new standards, providing close-reading and on-demand writing strategies.
- helps student write well across the curriculum.
- helps students develop learning skills: reading, spelling, speaking, viewing, listening, thinking, studying, collaborating, taking tests, and using technology.
- guides students when editing for punctuation, mechanics, spelling, usage, sentences, and grammar.
A "supermoon", one of the year's most dramatic lunar events, lit up the sky last night as stargazers enjoyed the Perseid meteor shower.
The moon appeared 14% bigger and 30% brighter than normal as it reached the point in its orbit closest to the Earth, known as "perigee".
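The two quoted figures hang together: apparent brightness scales roughly with the Moon's apparent area, i.e. with the square of its apparent diameter, so a 14% larger disc should indeed look about 30% brighter. A quick sketch of that check (my own illustration, not from the article):

```python
# Illustrative check: brightness ~ apparent area ~ diameter squared,
# so a 14% increase in apparent size predicts ~30% more brightness.
size_increase = 0.14
brightness_increase = (1 + size_increase) ** 2 - 1
print(f"{brightness_increase:.0%}")  # -> 30%
```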
Many enthusiasts grabbed their mobile phones to take a snapshot of the spectacle - which comes two days before the meteor shower reaches its peak.
Given a dark, clear sky in a normal year, it is common to see more than 100 of the meteors an hour during the second week in August.
Dr Bill Cooke from the American space agency Nasa's Meteoroid Environment Office, said the luminous "supermoon" risked drowning out the meteor shower.
He said: "Lunar glare wipes out the black-velvety backdrop required to see faint meteors, and sharply reduces counts."
Dr Cooke added that the Perseids were also "rich in fireballs as bright as Jupiter or Venus" that would remain visible despite the moon's glare.
A study conducted by his team since 2008 has shown the Perseids to be the undisputed "fireball champion" of meteor showers.
"We see more fireballs from Swift-Tuttle than any other parent comet," said Dr Cooke.
Every 133 years, comet Swift-Tuttle swings through the inner Solar System leaving behind a trail of dust.
When the Earth passes through, the dust cloud particles hit the atmosphere at 140,000 mph and burn up in streaking flashes of light, creating the spectacle known as the Perseids.
The meteors will be visible until Wednesday, with activity peaking on Tuesday.
An unusually bright full "supermoon" was also seen on July 12, and another is due to appear on September 9.
Supermoons occur relatively often, every 13 months and 18 days, but are not always noticed because of clouds or poor weather.
Allowing polluters to offset carbon emissions by paying forest owners reduces greenhouse gas emissions, Stanford study finds
A groundbreaking California program that sells carbon offsets turns out to have surprising environmental benefits, such as providing habitat for threatened species, and it offers lessons for initiatives being developed in other states and nations.
Money may not grow on trees, but you can make money by watching trees grow. At least, you can with the help of a groundbreaking California program that permits forest owners across the United States to sell carbon credits to businesses required by the government to lower emissions. Researchers from Stanford studied the program and discovered that it has important environmental benefits beyond merely offsetting greenhouse gas emissions.
“Many developing nations with huge forests are looking at similar programs to stop the loss of forests,” said lead author Christa Anderson, a student in the Emmett Interdisciplinary Program in Environment and Resources at Stanford’s School of Earth, Energy and Environmental Sciences. “California offers the first proof of concept of a government program crediting standing forests.”
Storing more carbon
California law requires the state to return to 1990 levels of greenhouse gas emissions by 2020 and to reach 40 percent below 1990 levels by 2030. The cap-and-trade market, which covers power utilities, industrial facilities, and transportation and natural gas fuel suppliers, is the foundation of these initiatives. On the market, covered polluters have the option to purchase offsets to meet a portion of their emission reduction requirements.
Forest offsets, which comprise the majority of offsets in California’s cap-and-trade market, are generated when forest owners change how their land is managed so that the trees store more carbon. That could mean cutting down trees less frequently, reforesting areas that were previously forested, or enhancing the forest through other management techniques. In each case, professional foresters review the modifications to ensure they are effective.
For every additional tonne of carbon dioxide stored in their trees, forest owners receive a credit worth approximately $10 to sell to California businesses that must reduce their greenhouse gas emissions. Since its inception at the end of 2013, the program has brought forest owners around $250 million and offset 25 million tonnes of carbon emissions, equivalent to about five percent of annual vehicle emissions.
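The figures quoted above are internally consistent, as a quick sketch shows (values approximate, taken from the paragraph; the check itself is my own, not from the article):

```python
# Rough consistency check of the reported program figures.
price_per_tonne = 10          # USD per credited tonne of CO2 (approximate)
tonnes_offset = 25_000_000    # tonnes of CO2 offset since late 2013
revenue = price_per_tonne * tonnes_offset
print(f"${revenue:,}")  # -> $250,000,000, matching the ~$250 million reported
```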
Some critics say offset purchases enable polluters to avoid cutting emissions, and that offsets may credit reductions that would have been achieved even without the program. Though worth taking seriously, these claims are not borne out by the study, which was published in Frontiers in Ecology and the Environment.
As Exhibit A for the program’s effectiveness, the researchers point to the fact that the majority of participating forest owners are forest businesses and investment landowners who were previously logging their land. They had to change their ways of doing business in order to take part in the program, which is further proof of the impact offsets have on land management.
Although California’s cap-and-trade program allows forest offsets up to an amount equivalent to 8 percent of polluters’ emissions, the amount issued to date is just 2 percent of total capped emissions. Because the number of offsets is so small, polluters still have to cut most of their emissions themselves rather than buy offsets. The program as a whole results in emission reductions that would not have occurred without it, the Stanford scientists found when they examined the metrics used to verify the robustness of individual projects.
But Anderson and her colleagues caution against using forest offsets in large amounts, because they can distract attention from urgent and significant emission reduction goals elsewhere. A case in point: state legislators recently introduced a bill that would shift California’s energy sector to 100% renewable energy sources by 2045.
Lessons that go beyond California
The forest offsets approach may invert the standard paradigm, in which conservation-oriented landowners manage land primarily for conservation and achieve sustainable forest management and carbon sequestration as co-benefits. Through the California program, forest owners with diverse motives alter their land management to achieve greater carbon sequestration, and receive conservation and sustainable forest management as co-benefits. For instance, 17 of the 39 offset forest projects studied contain habitat for endangered species, which also benefit from management changes focused on carbon sequestration.
“California is working out its cap-and-trade plan for the near future,” said study coauthor Katharine Mach, a senior research scientist in the School of Earth, Energy and Environmental Sciences. “Forest offsets have made up a small but substantial component of the state’s climate actions so far, and could shape where California goes next.”
Carbon offset schemes being developed in Canada, China and elsewhere could take cues from California’s model, according to the Stanford researchers. Lessons include:
- A 100-year monitoring period after the last credit is issued gives assurance that credited offsets represent real emission reductions over the long term.
- The majority of offset projects earn substantial credits within their first year, which can make otherwise financially infeasible projects viable.
- By embracing projects with multiple motives, not just those focused on carbon sequestration, California avoids limiting participation in the program and the benefits it reaps.
- Because the program’s carbon baseline rests on well-regarded U.S. Forest Service census data, confidence in the climate benefits it offers is high.
In the meantime, the researchers propose improving the state’s forest offset scheme by requiring participants to report a wider array of information on co-benefits, and by taking into account the risks that climate change poses to the program.
Segeomjeong Pavilion is a pavilion located on Hongjecheon Stream at the base of Mount Bugaksan in northwest Seoul. Due to its location, surrounded by nature and built on a stream, the pavilion has been mentioned in many literary works, poems, and essays over the years.
While historians aren’t exactly sure when the pavilion was first built, its history dates back hundreds of years. For centuries, ordinary citizens, visiting Chinese officials, and warriors have stopped here to refresh, prepare for battle, or enjoy the surroundings views. It was considered one of the best places to refresh on a hot summer day.
Hongjecheon Stream, which runs for 13.92 kilometers (8.64 miles), is a tributary of the Hangang River. Hongjecheon flows through Jongno-gu, Seodaemun-gu and Mapo-gu. In the past, the stream was known as Hongjewoncheon due to its location near the Joseon-era Hongjewon rest area. Today, the stream is popular for its waterfall and bicycle path.
It is believed that during the Injo Coup of 1623, leaders of the coup passed through Changuimun Gate before stopping in front of the pavilion to wash their swords in the stream as they prepared for their attack on the palace. Their mission was to dethrone King Gwanghae, the fifteenth king of the Joseon dynasty. After the coup, people began to call the pavilion Segeomjeong, literally meaning “The Sword Washing Pavilion.”
In 1746, King Yeongjo ordered Chongyungcheong, one of five military bases, to be relocated to the north of Seoul near Hongjecheon Stream. It was also decided at this time that the pavilion would be completely rebuilt for a place of rest and leisure. After a year of construction, the pavilion was completed. From this point on, the pavilion was officially known as Segeomjeong.
Segeomjeong was well maintained from the 18th century into the early 20th century. By the 1930s, during the occupation of Korea by Japan, the area including the pavilion fell into disrepair.
In the 1940s, an oilpaper factory worker started a fire by throwing a cloth soaked in oil into a nearby garbage bin. The resulting fire spread throughout the neighborhood, eventually destroying the entire pavilion.
For the next 30 years, other than the two original stone fountains, all traces of the pavilion were forgotten.
In 1977, the government of Seoul decided to reconstruct the pavilion. Without any building plans, the government relied on an 18th century painting by Gyeomjae Jeong Seon (1676–1759) as a reference for rebuilding. The painting depicted Segeomjeong with a low wall erected behind it, a gate near a road, and a smaller gate near the stream.
After Segeomjeong was reconstructed, photographs of the pavilion from the early 20th century were discovered. It was realized at this point that the reconstruction was different from the original pavilion.
Just below the pavilion is a large, flat rock. This rock is known as canopy rock or chailam (차일암). A state ceremony known as the “Festival of Draft Erasing” or sechoyeon (세초연) was held on this rock. During this ceremony, paper drafts used during the creation of the Chronicles of the Feudal Joseon Dynasty were placed into the stream, erasing any writings. The paper was then sent to the Royal Paper Mill located just across from the river where it would be recycled. After the ceremony, the king would host a feast at the pavilion.
Today, the small pavilion is surrounded by residences, buildings, and roads. While much of the original atmosphere has disappeared with modernization, visitors who walk down to the stream and the pavilion might forget for just a moment that they are in a metropolis as big as Seoul.
F. Van der Plas, R. Howison, J. Reinders, W. Fokkema and H. Olff. Functional traits of trees on and off termite mounds: understanding the origin of biotically-driven heterogeneity in savannas. Journal of Vegetation Science 24.
Article first published online: 31 AUG 2012 | DOI: 10.1111/j.1654-1103.2012.01459.x
In African savannahs, Macrotermes termite mounds have been shown to support unique tree communities, acting as ‘browsing hotspots’. Here, we show that tree species dominating mounds seem to be less adapted to fire, low nutrient availability and water stress than typical savannah trees. Surprisingly, mound tree species are less nutritious and less preferred by browsers than other savannah trees.
Common indicators of dyslexia at primary school age include:
• Has particular difficulty with reading and spelling.
• Puts letters and figures the wrong way round.
• Has difficulty remembering tables, alphabet, formulae etc.
• Leaves letters out of words or puts them in the wrong order.
• Still occasionally confuses 'b' and 'd' and words such as 'no/on'.
• Still needs to use fingers or marks on paper to make simple calculations.
• Poor concentration.
• Has problems understanding what he/she has read.
• Takes longer than average to do written work.
• Problems processing language at speed.
Primary school age non-language indicators:
• Has difficulty with tying shoe laces, tie, dressing.
• Has difficulty telling left from right, order of days of the week, months of the year etc.
• Surprises you because in other ways he/she is bright and alert.
• Has a poor sense of direction and still confuses left and right.
• Lacks confidence and has a poor self image.
Aged 12 or over.
As for primary schools, plus:
• Still reads inaccurately.
• Still has difficulties in spelling.
• Needs to have instructions and telephone numbers repeated.
• Gets 'tied up' using long words, e.g. 'preliminary', 'philosophical'.
• Confuses places, times, dates.
• Has difficulty with planning and writing essays.
• Has difficulty processing complex language or long series of instructions at speed.
Skywatchers could be treated to a Memorial Day weekend meteor shower tonight – but scientists can’t quite be sure.
When Earth passes through the trail of Comet 209P/LINEAR, the mystery meteor shower could produce hundreds of shooting stars in the course of an hour. Or it could be a total no-show. It all depends on how much space debris the comet has left behind over the course of centuries.
Not even SETI Institute meteor astronomer Peter Jenniskens, who was among of the first to sound the alert about the potential for a new meteor shower, knows how the show on Friday night (and early Saturday morning) will turn out. "The situation with the meteor shower itself is the same as it was 10 years ago when we first noticed this opportunity," he told NBC News.
Jenniskens and a colleague, Finnish mathematician Esko Lyytinen, figured out that Earth would pass right through a sweet spot in Comet 209P/LINEAR's orbit on the night of May 23-24, 2014. The comet, which was discovered in 2004 by the LINEAR sky survey, has been swinging between the orbits of Earth and Jupiter for centuries — and leaving behind trails of cosmic dust every time it passed through.
Meteor showers occur when Earth passes through a substantial trail of cometary debris. Those bits of grit zip through the upper atmosphere and spark bright trails of light. Comet 209P/LINEAR's trails haven't matched up with Earth's orbit in the past, but this year, the alignment should be ideal.
"We found that a lot of these trails from the past are actually in Earth's path," Jenniskens said. "They're all sort of piled up on top of each other."
How much debris is in those trails? "The big mystery, and nobody knows, is whether the comet was shedding dust in the 18th, 19th and early 20th century," Jenniskens said. "If it was dormant, we may not see anything at all."
But if the comet was active back then, skywatchers could see 100 to 400 meteors per hour under peak conditions, maybe even more. That could put it in the same league as the Leonid meteor storm of 1999. What's more, the timing is ideal for North American observers.
"If there's going to be a meteor shower, it should show up this year," Jenniskens said. "Keep your expectations low, but make sure not to miss it, because this event has potential."
The shooting stars would appear to emanate from a point near the North Star, in the dim constellation Camelopardalis (the Giraffe). That's why the meteor shower has a name that's a real mouthful: the Camelopardalids.
Some meteor showers are known for their fast streakers, but the debris from Comet 209P/LINEAR is expected to plow through the atmosphere at a mere 43,400 mph (19.4 km/sec), Jenniskens said. "Instead of seeing these swift streaks in the sky, what you'll see is meteors that glide through the air, in stately motion," he said.
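The two speeds quoted above agree with each other; a simple unit-conversion sketch (my own check, not part of the article):

```python
# Convert the quoted 43,400 mph into km/s to confirm the ~19.4 km/s figure.
MILES_TO_KM = 1.609344        # kilometres per international mile
speed_mph = 43_400
speed_km_s = speed_mph * MILES_TO_KM / 3600  # miles/h -> km/h -> km/s
print(f"{speed_km_s:.1f} km/s")  # -> 19.4 km/s
```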
The timing is critical: If there is a meteor storm, it's not expected to last more than an hour or two. Jenniskens is targeting 3:18 a.m. ET Saturday for the peak, but other astronomers have come up with different predictions, ranging between 2 and 4 a.m. ET.
It's a good idea to check with the SETI Institute's Java-based Fluxtimator app, which offers three prediction models and is accessible via http://meteor.seti.org. (Word to the wise: You may have to fiddle with your computer's Java security settings to make it work.)
How to see it in person or online
Plan to get to your viewing spot before the predicted peak. The best spots are far away from the glare of city lights, with an unobstructed, clear view of the sky. Bring a comfy lounge chair or blanket to lie on, and give your eyes plenty of time to adjust. (Check out these tips from last year's Leonids for more advice.)
To keep track of the meteor buzz via social media, check in with the Camelopardalids Facebook page, which is hosted by NASA's Jet Propulsion Laboratory, or watch for Twitter updates with the hashtags #meteor and #camelopardalids.
Seeing meteors flash outdoors is obviously dependent on the weather. But if you're clouded out, you can still get in on the action online:
- The Slooh virtual observatory is planning a marathon webcast starting at 6 p.m. ET Friday. Views of Comet 209P/LINEAR, which just happens to be passing through our celestial neighborhood, will be streamed from telescopes on the Canary Islands. Live coverage of the meteor shower begins at 11 p.m. ET, with commentary from Slooh's Geoff Fox and Paul Cox. Jenniskens is due to be one of the guests on the show. You can tune in via Slooh.com, YouTube or Slooh's iPad app.
- NASA is offering a live video view of the skies over Huntsville, Alabama, starting during the run-up to the meteor shower. Bill Cooke, an expert on meteors at NASA's Marshall Space Flight Center in Huntsville, will participate in a Web chat from 11 p.m. to 3 a.m. ET. You should be able to access the chat as well as the Ustream video via this Web page. While you're waiting for the show to begin, take a look at NASA's six-page guide to the Camelopardalids.
- The Virtual Telescope Project 2.0 has partnered with astrophotographers in the United States and Canada to stream sky imagery starting at 1:30 a.m. ET (0530 GMT). Italian astrophysicist Gianluca Masi is in charge of the event and will provide commentary.
- Even if you can't see the meteors, you can still hear them — thanks to SpaceWeather Radio. When meteors pass over a specially configured radio antenna in New Mexico, the 54 MHz TV signals reflected by the meteor trails are registered and translated into weird-sounding audible whistles. This audio file demonstrates how they sound.
Jenniskens has arranged to look for the Camelopardalids from an instrument-equipped airplane flying out of California. "We will be going into the skies just to have a good, clear view of this," he said. "We're bringing night-vision cameras to measure precisely how the dust is distributed."
From his vantage point, Jenniskens will be able to determine definitively whether the Camelopardalids are a hit or a flop.
"If there are no meteors, I'll be disappointed," he said. "But as a scientist, I will have learned something. We'll know that the comet was dormant long before it was discovered. It's cool to be able to look into the past that way."
Got pictures to share? Alert us to them by using the hashtag #NBCMeteor on Twitter or Instagram, submitting your favorites via our FirstPerson photo-upload page, or sharing them on the NBC News Science Facebook page.
As the result of a private contractor safety glasses program, an employee began encouraging his eighteen-year-old son, who installs siding on houses, to wear safety glasses while working. The son finally relented when aluminum dust started getting in his eyes. About one week later, he was applying siding with an air-powered staple gun. When the son fired a staple, it hit a metal plate behind the siding, ricocheted back toward his face, and one leg of the staple penetrated the safety glasses' lens, see the figure below. The staple hit with such force that the frames were cracked and the son received bruising on the eyebrow and cheekbone.
The safety glasses definitely saved his eyesight and possibly even his life!
Every day an estimated 1,000 eye injuries occur in American workplaces. The financial cost of these injuries is enormous--more than $300 million per year in lost production time, medical expenses, and workers compensation. No dollar figure can adequately reflect the personal toll these accidents take on the injured workers.
The Occupational Safety and Health Administration (OSHA) and the 25 states and territories operating their own job safety and health programs are determined to help reduce eye injuries. In concert with efforts by concerned voluntary groups, OSHA has begun a nationwide information campaign to improve workplace eye protection.
Take a moment to think about possible eye hazards at your workplace. A 1980 survey by the Labor Department's Bureau of Labor Statistics (BLS) of about 1,000 minor eye injuries reveals how and why many on-the-job accidents occur.
WHAT CONTRIBUTES TO EYE INJURIES AT WORK?
• Not wearing eye protection. BLS reports that nearly three out of every five workers injured were not wearing eye protection at the time of the accident.
• Wearing the wrong kind of eye protection for the job. About 40 percent of the injured workers were wearing some form of eye protection when the accident occurred. These workers were most likely to be wearing eyeglasses with no side shields, though injuries among employees wearing full-cup or flat-fold side shields occurred as well.
WHAT CAUSES EYE INJURIES?
• Flying particles. BLS found that almost 70% of the accidents studied resulted from flying or falling objects or sparks striking the eye. Injured workers estimated that nearly three-fifths of the objects were smaller than a pin head. Most of the particles were said to be traveling faster than a hand-thrown object when the accident occurred.
• Contact with chemicals caused one-fifth of the injuries. Other accidents were caused by objects swinging from a fixed or attached position, like tree limbs, ropes, chains, or tools which were pulled into the eye while the worker was using them.
WHERE DO ACCIDENTS OCCUR MOST OFTEN?
Craft work; industrial equipment operation. Potential eye hazards can be found in nearly every industry, but BLS reported that more than 40% of injuries studied occurred among craft workers, like mechanics, repairers, carpenters, and plumbers. Over a third of the injured workers were operatives, such as assemblers, sanders, and grinding machine operators. Laborers suffered about one-fifth of the eye injuries. Almost half the injured workers were employed in manufacturing; slightly more than 20% were in construction.
HOW CAN EYE INJURIES BE PREVENTED?
Always wear effective eye protection. OSHA standards require that employers provide workers with suitable eye protection. To be effective, the eyewear must be of the appropriate type for the hazard encountered and properly fitted. For example, the BLS survey showed that 94% of the injuries to workers wearing eye protection resulted from objects or chemicals going around or under the protector. Eye protective devices should allow for air to circulate between the eye and the lens. Only 13 workers injured while wearing eye protection reported breakage.
Nearly one-fifth of the injured workers with eye protection wore face shields or welding helmets. However, only six percent of the workers injured while wearing eye protection wore goggles, which generally offer better protection for the eyes. Best protection is afforded when goggles are worn with face shields.
Better training and education. BLS reported that most workers were hurt while doing their regular jobs. Workers injured while not wearing protective eyewear most often said they believed it was not required by the situation. Even though the vast majority of employers furnished eye protection at no cost to employees, about 40% of the workers received no eye safety training on where and what kind of eyewear should be used.
WHERE CAN I GET MORE INFORMATION?
• The OSHA website or your nearest OSHA area office. Safety and health experts are available to explain mandatory requirements for effective eye protection and answer questions. They can also refer you to an on-site consultation service available in nearly every state through which you can get free, penalty-free advice for eliminating possible eye hazards, designing a training program, or other safety and health matters.
o Don't know where the nearest federal or state office is? Call an OSHA Regional Office at the U.S. Department of Labor in Boston, New York, Philadelphia, Atlanta, Chicago, Dallas, Kansas City, Denver, San Francisco, or Seattle.
• The National Society to Prevent Blindness. This voluntary health organization is dedicated to preserving sight and has developed excellent information and training materials for preventing eye injuries at work. Its 26 affiliates nationwide may also provide consultation in developing effective eye safety programs. For more information and a publications catalog, write the National Society to Prevent Blindness, 79 Madison Ave., New York, NY 10016-7896.
EYE PROTECTION WORKS!
BLS reported that more than 50% of workers injured while wearing eye protection thought the eyewear had minimized their injuries. But nearly half the workers also felt that another type of protection could have better prevented or reduced the injuries they suffered.
It is estimated that 90% of eye injuries can be prevented through the use of proper protective eyewear. That is our goal and, by working together, OSHA, employers, workers, and health organizations can make it happen.
This is one of a series of fact sheets highlighting U.S. Department of Labor programs. It is intended as a general description only and does not carry the force of legal opinion. This information will be made available to sensory impaired individuals upon request. Voice phone: (202) 523-8151. TDD message referral phone: 1-800-326-2577.
No matter where we work, flying particles, dusts, fumes, vapors or harmful rays are apt to expose us to potential eye injury. Fortunately, we can protect against these hazards by using the appropriate protective eyewear for our jobs and by following our companies' established safety guidelines. The following is a guide to the most common types of protective eyewear and the specific hazards they can guard against.

Safety Glasses And Goggles

Standard safety glasses look very much like normal glasses, but are designed to protect you against flying particles. Safety glasses have lenses that are impact resistant and frames that are far stronger than regular eyeglasses. Safety glasses must meet the standards of the American National Standards Institute (ANSI). (Safety glasses are also available in prescription form for those persons who need corrective lenses.) Standard safety glasses can be equipped with side shields, cups, or tinted lenses to offer additional protection.

Like standard safety glasses, goggles are impact resistant and are available in tinted lenses. Goggles provide a secure shield around the entire eye area to protect against hazards coming from many directions. Safety goggles may have regular or indirect ventilation. (Goggles with indirect ventilation may be required if you are exposed to splash hazards.)

Shields and Helmets

Face shields and helmets are not in themselves protective eyewear. But, they are frequently used in conjunction with eye protectors. Full-face shields are often used when you are exposed to chemicals or heat or glare hazards. Helmets are used when welding or working with molten materials.
Using Protective Eyewear
You can guard against eye injury by making sure that you are wearing the appropriate protective eyewear for the particular eye hazards you face. It's important to remember that regular glasses alone do not offer protection from eye hazards. Follow your company's established safety procedures, and never hesitate to ask your supervisor if you have any questions about what you can do to protect your sight for life.
Protective eyewear has evolved dramatically over the years. In the 1960s, standard safety glasses were worn mainly in industry and were made of tempered glass with unattractive frame styles. Since then, a merging of safety-glass and sunglass design has made eyewear both more protective and more fashionable. There's a much wider selection of colors and styles to choose from. In fact, many sports and industry safety glasses are made with anything from sports team logos to zebra stripes on the frames. And instead of tempered glass, the majority of lenses today are made of impact-resistant polycarbonate.
In terms of research to improve protective eyewear, Dr. Williams noted that the process is ongoing: "A lot of work has been done over the years to perfect the features of protective eyewear. What we have today is quite good. The task now is to educate people on how important it is to wear eye protection. People don't realize that an eye can be destroyed in a fraction of a second."
Where to find protection
You can purchase most protective eyewear from e-tailers like ABCSafetyGlasses.com for about $5-$10 a pair and considerably less on higher quantity orders. Buy glasses that are made of an impact-resistant polycarbonate, or that are labeled as meeting ANSI (American National Standards Institute) requirements. Some types of sunglasses can be used as protective eyewear, as long as they have impact-resistant polycarbonate lenses.
All rights reserved, 2002
SATURDAY, July 27 (HealthDay News) -- Allergy and asthma triggers can turn your backyard from a summer oasis into a place of misery if you don't take precautions, experts say.
More than 50 million Americans have allergies and asthma, according to the American College of Allergy, Asthma and Immunology. Here, the college identifies potential causes of allergy and asthma that could lurk in your backyard:
Insect stings can cause a life-threatening allergic reaction. People who know they have an insect allergy should always carry their prescribed epinephrine. To avoid insect stings, always wear shoes in the yard; keep food covered; don't sip from open soft drinks; steer clear of sweet-smelling perfumes, deodorants and hairspray; and don't wear brightly colored clothes.
Grass and tree pollens aren't the only outdoor allergens that can trigger allergy and asthma symptoms. They can also be caused by outdoor molds that grow on rotting logs, in compost piles and on grasses and grains. Summer heat can promote mold growth. If over-the-counter remedies don't relieve symptoms, you may need to get allergy shots, the allergists said.
Some people are allergic to certain sunscreens. If you notice a rash or itchy skin after applying sunscreen, you might be allergic to the chemicals in the product. Choose natural sunscreens that don't have the chemicals benzophenone, octocrylene and PABA (para-aminobenzoic acid), which can irritate skin.
About 4 percent of Americans have a food allergy, and they need to be careful at backyard barbecues. They may be unknowingly exposed to food allergens in salads and sauces. Another potential threat is cross-contamination, which occurs when the same utensils are used for grilling and serving side dishes, and when condiments are shared. People with food allergies should bring an allergy-free dish for themselves, use condiment packets and carry two doses of prescribed epinephrine.
Smoke from barbecues and open fires can trigger an asthma attack. Sit upwind of the smoke and avoid getting too close.
The bite of the lone star tick, which is found in southern and central regions of the United States, can cause an allergic reaction after you eat red meat. If you notice hives, nausea, asthma and other allergy symptoms three to six hours after eating red meat, you may have what is called a meat-induced alpha-gal allergic reaction. If the symptoms are serious, seek emergency medical care. Follow up with proper allergy testing and a treatment plan.
The MedlinePlus Medical Encyclopedia has more about allergies.
Copyright © 2012 HealthDay. All rights reserved.
When you decide to go on a diet, one of the first things you will learn is that it is important to keep track of what you eat during the day. Tracking all of your meals can help you figure out which foods you are eating too much of as well as which foods you are not eating enough of. For example, after keeping a food journal for a few days, you might see that you are not consuming very many vegetables but that you are consuming lots of sugar and bad carbohydrates. When you write everything down, you can see which parts of your diet must change and have a much easier time figuring out what kind of workout, and how long a workout, you need to do to shrink your waistline and burn the most calories.
But what happens if you write every single thing down and still can't figure out how to shed weight? There is an effective way and an ineffective way to track the food you eat. A food record isn't just a list of the things you've eaten during the day. You must account for a few other very important details. Here are a number of tips that can make your food tracking more successful.
You ought to be very precise when you write down the things that you are eating. You have to do more than simply write down "salad" in your food log. You should list all of the ingredients in that salad as well as the type of dressing on it. You should also include the amount of the food you consume. "Cereal" is not helpful; "one cup Shredded Wheat" is. Don't forget that the more of a thing you eat, the more calories you ingest, so you need to list the measurements of what you eat so that you will know precisely how many calories you take in and will need to burn.
Write down the time you're consuming items. This helps you figure out when you feel the most hungry, when you are prone to snacking, and what you can do about it. After a few days you may note that even if you are eating lunch at the same time every day, you are still hungry an hour later. You may also be able to identify when you are eating only to have something to do. This is incredibly useful because knowing when you're vulnerable to snacking will help you fill those times with other activities that will keep you away from the candy aisle.
Write down your emotions while you eat. This makes it possible to figure out when you use food to soothe emotional issues. It will even identify the foods you choose when you are in certain moods. There are many people who seek out junk food when they feel angry or depressed and are just as likely to choose healthy things when they feel happy and content. Not only will this let you notice when you reach for particular foods based on your mood, it will help you find ways to keep healthier (but similar) alternatives on hand for those same moods and help you figure out whether a professional can help you deal with the issues that are sending you toward certain foods in the first place.
White blood cells (WBCs), also known as leukocytes, are an essential component of the immune system, serving to protect the body from harmful microorganisms. When an infection develops in the body, the number of white blood cells quickly increases, and the cells are transported to the infection site to attack and destroy the bacteria, virus, or other “bug” causing it.
There are five types of WBCs that circulate in the blood — neutrophils, lymphocytes, monocytes, eosinophils, and basophils— and the concentrations of each can fluctuate on a day-to-day basis. The specific role and function of each type are described below.
This is the most common type of white blood cell, accounting for more than 50 percent of your body's supply. Neutrophils are the cells that engulf and destroy infection-causing bacteria and other harmful pathogens. Immature neutrophils are known as band cells, while fully developed neutrophils are called polys.
The two types of lymphocytes are B cells and T cells, which are produced in lymphoid tissue of the spleen, lymph nodes, and thymus gland. The B cells make antibodies that attack bacteria and toxins, and T cells target once-healthy cells in the body that have become cancerous or overtaken by a virus.
These WBCs, which are distinguished by their large nucleus, develop into either macrophages or dendritic cells. Macrophages ingest microbes that the body recognizes as dangerous, while dendritic cells acquire antigens— foreign substances that trigger antibody production— so that T cells are able to identify them.
Found in the bloodstream as well as the lining of various tissues, eosinophils contain proteins that aid the body in fighting off parasitic infections. However, when these cells accumulate, they can actually contribute to the kind of inflammation that occurs in allergic disorders such as asthma. The medical term for an abnormally high number of eosinophils is eosinophilia, a condition that is considered to be a reaction to a certain disease, allergen, or parasite, rather than a disease itself.
Constituting less than 1 percent of the total white blood count, basophils are present in both the blood and tissues, and, like other WBCs, help to ward off foreign invaders. However, basophils are unique in their ability to kill parasites that are external to the body, including ticks. Additionally, basophils release heparin, an anticoagulant, and histamine, a blood-thinning substance. Basophils are similar to eosinophils in that when their number climbs too high, they can contribute to allergies and other inflammatory reactions in the body. In fact, histamine is the substance that causes allergy symptoms like itchy skin, runny nose, and watery eyes, which is why those who suffer from allergies usually take antihistamine medication for relief.
REASONS FOR A WHITE BLOOD CELL COUNT TEST
Measuring the number of the different white blood cells in the body is useful for diagnosing infections and other diseases. This blood test is called a white blood cell differential test, and it calculates the number of each WBC type as well as the total WBC count, all of which are measured in cells per microliter (cells/mcL). The normal ranges for the different kinds of white blood cells vary, but the generally accepted ranges for adults are provided in the table. Some values may be zero if the immune system is not under a specific type of stress.

REFERENCE RANGES FOR WHITE BLOOD CELL COUNTS (in cells/mcL and %)
- Total white blood cells: 4,500 to 11,000*
- Neutrophils: 1,800 to 7,800 (50 to 70 percent of total)
- Lymphocytes: 1,000 to 4,800 (15 to 45 percent of total)
- Monocytes: 0 to 800 (0 to 10 percent of total)
- Eosinophils: 0 to 450 (0 to 6 percent of total)
- Basophils: 0 to 200 (0 to 2 percent of total)
*For pregnant women, the normal range for total white blood cells is 5,900 to 17,000 cells/mcL.
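The differential test described above is simple arithmetic: each cell type's absolute count divided by the total gives its percentage, and each value can be checked against the reference table. A minimal sketch in Python (the ranges come from the adult table above; the sample counts are illustrative values, not clinical data):

```python
# Sketch of a white blood cell differential check.
# Reference ranges are the adult values from the table above (cells/mcL);
# this is an illustration of the arithmetic, not clinical advice.

ADULT_RANGES = {
    "neutrophils": (1800, 7800),
    "lymphocytes": (1000, 4800),
    "monocytes": (0, 800),
    "eosinophils": (0, 450),
    "basophils": (0, 200),
}
TOTAL_RANGE = (4500, 11000)

def differential_report(counts):
    """Given absolute counts (cells/mcL), return each type's percentage of
    the total and a flag for any value outside the adult reference range."""
    total = sum(counts.values())
    report = {
        "total": total,
        "total_in_range": TOTAL_RANGE[0] <= total <= TOTAL_RANGE[1],
    }
    for cell, n in counts.items():
        lo, hi = ADULT_RANGES[cell]
        report[cell] = {
            "count": n,
            "percent": round(100 * n / total, 1),
            "in_range": lo <= n <= hi,
        }
    return report

# Hypothetical sample: all values within the adult reference ranges.
sample = {
    "neutrophils": 4200,
    "lymphocytes": 2100,
    "monocytes": 500,
    "eosinophils": 150,
    "basophils": 50,
}
print(differential_report(sample))
```

For this sample the total is 7,000 cells/mcL, neutrophils make up 60 percent of the total, and every value falls within its reference range, consistent with the "50 to 70 percent" figure the article gives for neutrophils.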
If one or more of your WBC counts is abnormal, your doctor will order further testing to determine the cause. It is important to realize, though, that an abnormality does not necessarily mean you have a serious medical condition. As you will see, both high and low WBC counts can be triggered by a wide range of factors, and not all of them are significant.
WHAT CAUSES A HIGH WHITE BLOOD CELL COUNT?
The medical term for a high white blood cell count is leukocytosis. An elevated level usually indicates that an infection is present in the body, but may also be caused by lifestyle factors such as strenuous exercise and eating too many refined carbohydrates and sugars. These foods increase insulin release, thereby driving up baseline WBC counts. In other cases, though, the underlying cause is more severe. The following conditions are associated with high WBC counts:
- Allergies, especially severe allergic reactions
- Bacterial or viral infections
- Bone marrow disorders such as myelofibrosis (a serious bone marrow disorder that disrupts your body's normal production of blood cells) and polycythemia vera (an uncommon neoplasm in which the bone marrow makes too many red blood cells)
- Certain medications, including allopurinol (Zyloprim), aspirin, corticosteroids, epinephrine, quinine (Qualaquin), and triamterene (Dyrenium)
- Chronic inflammatory conditions, like rheumatoid arthritis
- Eating a large meal
- Intense exercise
- Kidney failure
- Removal of the spleen
- Severe physical or emotional stress
- Thyroid imbalance, particularly autoimmune thyroiditis
- Tissue damage due to injuries such as burns

Another condition may be the reason for your high count, so work with your doctor to find an accurate diagnosis.
WHAT ARE THE SYMPTOMS OF A HIGH WHITE BLOOD CELL COUNT?
Since an increased number of WBCs usually means that the body is trying to fight off an infection or illness, you may experience symptoms such as swollen lymph nodes, inflammation, or fever. General indicators of leukocytosis also include weight loss, poor appetite, bruising, and bleeding. Symptoms associated with specific medical conditions— like bone marrow disorders and thyroid imbalance— may occur as well. If any of these symptoms sound familiar to you, seek the advice of your physician.
HOW CAN A HIGH WHITE BLOOD CELL COUNT BE TREATED?
Treatment for elevated white blood cell counts must be directed at the cause. Antibiotics are usually prescribed for infections, while anti-inflammatory drugs, such as aspirin and acetaminophen (Tylenol), may be used to reduce inflammation and fever. However, if an abnormally high count does not have any apparent cause, additional blood testing may be needed to clarify the source of the problem. A serious illness, like leukemia or other bone marrow disease, requires specific and aggressive treatments such as medication, intravenous fluids, bone marrow transplants, blood transfusions, or leukocytoreduction— a medical procedure used for decreasing the number of WBCs in leukemic patients. Treatment for an elevated WBC count should be obtained as soon as possible to order to prevent complications. Although less serious causes of high WBC counts, such as bacterial and viral infections, require medical treatment in many instances, it is also possible to prevent and treat them naturally with dietary and lifestyle change, as detailed below.
The goal of your diet should be to decrease your consumption of foods that lead to inflammation— a primary cause of high WBCs— as you increase your intake of immunity-boosting foods. First, cut out commercial baked goods and processed foods containing artificial sweeteners and other chemical additives, which promote inflammatory conditions in the body. Any food containing saturated or trans fats, like fried and fast food, should also be avoided. Second, allergens like wheat, dairy products, and soy should be eliminated from your diet, since they can both cause inflammation and raise WBC levels, especially if you have an underlying allergy or sensitivity. And to counteract inflammation, incorporate omega-3s into your diet by eating walnuts and flax seeds. Omega-3s are loaded with benefits, including anti-inflammatory properties.

At the same time, you can bolster your immune system by eating foods rich in zinc and selenium. Zinc is plentiful in whole grains and pumpkin seeds. Great sources of selenium include Brazil nuts, garlic, and certain vegetables like cabbage, celery, cucumbers, and radishes. Minerals are often lost through food processing, so when possible, buy foods that are fresh and organic.

Your lifestyle is central to maintaining a strong immune system. Lessen your exposure to environmental contaminants, such as pesticides, heavy metals, and plastics, by purchasing products free of these ingredients, which can compromise your immunity. Additionally, strive to get at least seven hours of sleep per night, drink plenty of filtered water (approximately 2 to 3 liters per day), and exercise in moderation. If your white blood cell count is already high, you may want to consider restricting the amount you exercise until your number returns to normal. Remember, overexertion can cause a spike in WBCs.
WHAT CAUSES A LOW WHITE BLOOD CELL COUNT?
While a high WBC count is not always cause for concern, a lower-than-normal number of white blood cells, or leukopenia, usually suggests a medical problem— though some individuals are genetically predisposed to a below-normal WBC count. Autoimmune diseases, bone marrow disorders, cancer, viral infections that impair the bone marrow, and certain drugs can all cause a drop in white blood cell counts.
Below is a list of specific causes of low counts, and many are similar to the causes of high WBCs. This is because if the immune system is activated and producing too many white blood cells, the body may wear out its ability to make WBCs. Neutrophil levels are most affected by this process, while other counts may remain within the normal range or become slightly elevated.
- Autoimmune diseases, such as rheumatoid arthritis and lupus
- Bone marrow dysfunction caused by conditions such as aplastic anemia, myelofibrosis, and other congenital disorders
- Chemotherapy and radiation therapy
- Chronic inflammation
- Hypersplenism (destruction of blood cells by the spleen)
- Immune deficiency caused by diseases like HIV/ AIDS
- Infectious diseases
- Nutritional deficiency, especially in vitamins A, B, C, and E and the minerals selenium and zinc.
In addition, the following medications may reduce the total number of white blood cells in the body:
- Anti-thyroid drugs
- Corticosteroids, such as cortisone, hydrocortisone, and prednisone
- Sulfonamide antibiotics, including Bactrim (sulfamethoxazole/ trimethoprim)
The condition causing your low white blood cell count may not be included in the list above. Your doctor will evaluate your WBC differential test and any symptoms you may have in order to make a diagnosis.
WHAT ARE THE SYMPTOMS OF A LOW WHITE BLOOD CELL COUNT?
Physical indicators of a low blood cell count may include recurrent infections, slow-healing wounds, and fatigue. It is also possible that you will experience no symptoms at all. If you have a low number of WBCs, contact your doctor if you have signs of an infection, such as swollen lymph nodes, sore throat, fever, or skin lesions.
HOW CAN A LOW WHITE BLOOD CELL COUNT BE TREATED?

Minor illnesses, bacterial infections, and nutritional deficiency can also be the cause of a low white blood cell count. Self-care is essential when it comes to preventing and treating this type of leukopenia. As mentioned earlier, zinc and selenium are vital minerals that help boost the immune system. Vitamin C, which is abundant in citrus fruits and dark leafy greens, is another important nutrient that you should be sure to work into your diet. Although it is not a cure for the common cold or flu, vitamin C activates WBCs to fight off infection, helping the immune system defend itself against invaders. The body uses up to six times more vitamin C when it's trying to fight off a bug. L-glutamine, an amino acid, is also used more by white blood cells when they need to become more active. For example, marathon runners and endurance athletes may experience a drop in L-glutamine after a major athletic event, making them more prone to illnesses like upper respiratory infections.

In addition, be careful about eating raw fish or undercooked meat, which can harbor bacteria and make you ill. This also goes for raw fruits and vegetables; while it's best to buy and eat them fresh, make sure you wash them before consuming. Adequate sleep— at least seven to eight restful hours per night— is also essential for maintaining a healthy immune system; studies show that getting only five hours of sleep per night can cause a nine-fold increase in your risk of developing a cold or flu. Moderate physical activity is important for stimulating WBC production, boosting your overall health, and lowering stress, which can make you more prone to illness and drive up your body's inflammatory response. And if your white blood cell count is currently low, be sure to avoid direct contact with people who are sick until your level returns to normal. This is especially important for those undergoing chemotherapy or another medical treatment that increases susceptibility to disease.
You should also follow the dietary and lifestyle guidelines for treating high white blood cell counts. Namely, avoid inflammation-causing foods, chemical additives, artificial sweeteners, and harmful chemicals found in many household and personal care products. All of these substances can trigger inflammatory conditions that wear down your body’s immune response. For example, do not microwave food in plastic containers, drink out of plastic water bottles, or use products that contain phthalates. The presence of toxins and their negative impact on immunity is another good reason to buy your food organic when you can. Of course, these lifestyle modifications will not necessarily raise your white blood cell count, but they do support overall wellness, as well as cut your risk of developing infections and other medical complications.
Most students know that space being more fluid on university grounds than in the outside world often leads to people winding up in the wrong classrooms. Not so many know that its being more rigid than in Elsewhere can sometimes give Them the same problem.
Unlucky Gentry: (Wait, that’s not the poetry professor)
Chemistry Professor: “Good morning class, today we will be discussing the relationship between metals and salts.”
Prof: “While the term salt is most often used to describe the common table salt, sodium chloride, there are in fact millions of different types of salt.”
Gentry: *Internally screaming*
Prof: “A salt is any chemical in which the hydrogen in an acid has been replaced by a metal. Hydrochloric acid produces table salt when reacted with the metal sodium. However, it is possible to make a salt from any metal on the periodic table, such as iron.”
The chemistry students were eyed even more warily after that day.
Landmarks of Washington, D.C.
The Thomas Jefferson Memorial is located in the East Potomac Park. Twenty-six columns surround the domed, white-marble structure. The memorial was the brainchild of President Franklin D. Roosevelt, who believed Thomas Jefferson deserved a memorial, along with Washington and Lincoln. Construction began in 1938 and was finished in 1943.
Fun Fact: The 19-foot-tall bronze statue of Thomas Jefferson weighs five tons (10,000 pounds).
Photo source: Carol M. Highsmith
These sites give an overview of music terms. Learn about rhythm, pitch, and sheet music. Includes games, interactives, worksheets, and tests. There are links to eThemes Resources on a variety of music topics.
Explore an interactive site to learn about music. Learn the basics of music in "the music lab" and about the instruments of the orchestra in "instruments of the orchestra". Click on "Radio" to listen to orchestra music.
Watch a movie by Tim and Moby explaining how to read sheet music. Learn about written notes, clefs, musical notation, time signatures, measures, and more. There are activities and a quiz at the bottom of the page. NOTE: This site requires a subscription.
Watch a movie by Tim and Moby explaining the different types of musical scales. Learn about western musical scales, diatonic scales, chromatic scales, major scales, minor scales, and intervals. NOTE: This site requires a subscription.
These websites are about musical instruments. Hear the sounds that different instruments make, read about unusual instruments, and play games. There are links to various eThemes Resources on music topics.
These websites are about various modern genres of music. Learn about jazz, country, rock and roll, mariachi music, and more. Some website include audio and video clips and interactive timelines. There are links to various eThemes Resources on music topics.
These websites are about famous music composers throughout history. Learn about Bach, Beethoven, the Gershwin brothers, and other famous composers. There are games, sound clips, and video clips. Includes links to eThemes on various music topics.
Overview of Information About Hookah
- A hookah is a type of water pipe used for smoking tobacco.
- Hookah is generally a social activity - people sit together to share a pipe, often at a hookah bar or café.
- Many people mistakenly believe that smoking hookah is safer than other forms of tobacco.
Effects of Hookah
- Hookah tobacco, like all tobacco, contains nicotine - the most addictive substance known to humans.
- Smoking a hookah for 45 minutes is the same as chain smoking 15 cigarettes.
- It is possible to become addicted to nicotine after smoking hookah just a few times.
Other Risks of Hookah
- A hookah smoker inhales 8 times more carbon monoxide and 36 times more tar in a typical session than someone who smokes 1 cigarette.
- Like other forms of tobacco, hookah smoking raises the risks of serious health problems, such as cancer, heart disease and lung problems.
- Colds, flu, and even herpes can be passed when people share a hookah pipe.
- Hookah smokers also have a much higher risk of gum disease and tooth loss than nonsmokers.
Prevention and Treatment of Hookah Usage
- Don't start smoking hookah, and avoid going to bars, clubs, or cafés that offer hookah.
- Because hookah smoking is a social activity, turning it down when your friends want to smoke can be difficult. Suggest alternative activities, such as going out to eat, dancing or going to a concert.
- To make a confidential appointment at no charge for one-on-one cessation assistance, please call (530) 752-6334 or go to Health-e-Messaging and log in to schedule an appointment with a Safe Zone trained Alcohol, Tobacco and Other Drug Intervention Services Coordinator.
Smoke and Tobacco Free Campus
For a healthier community and cleaner environment, UC Davis will be smoke and tobacco free effective January 1, 2014. Call the Alcohol, Tobacco and Other Drugs Intervention Services Coordinator to find out more regarding individual tobacco cessation services available at no charge to all registered students at (530) 752–6334.
Born in a log cabin, Abraham Lincoln (February 12, 1809-April 15, 1865) taught himself enough law to become a lawyer. An ambitious man, he was elected repeatedly to the Illinois state legislature, where he became a leader of the Whig party. After a single term in Congress (1846-1848), Lincoln’s political career seemed to have peaked. As the debate over slavery tore apart the nation in the 1850s, the Whigs appeared increasingly irrelevant, so Lincoln joined the newly formed Republican party. Although he lost a hard-fought campaign against Democratic candidate Stephen Douglas for a senate seat in 1858, a series of debates between the two men had attracted national attention, especially among the growing abolitionist movement. Chosen as a compromise candidate during the 1860 Republican convention, Lincoln won election as president, aided by the breakup of the Democratic party over slavery. Convinced that he intended to destroy their way of life, the southern states seceded, starting a long and bitter civil war (1861-1865). When Lincoln decided to emancipate the slaves, it seemed likely that he would lose the 1864 election, but several victories on the battlefield ensured that he was given a second term with a sizeable majority. Unfortunately, Lincoln was assassinated shortly after the war ended.
- 1 Early Life
- 2 State Legislature (1834-1842)
- 3 Congressman (1846-1848)
- 4 Prairie Lawyer (1848-1856)
- 5 The Senate Race against Stephen Douglas (1858)
- 6 Republican National Convention and 1860 Election
- 7 President-elect
- 8 Civil War
- 8.1 The Trent Affair
- 8.2 General George McClellan
- 8.3 Lincoln’s Search for a replacement for McClellan (March 1862-March 1864)
- 8.4 Emancipation Proclamation
- 8.5 The Gettysburg Address
- 8.6 Lincoln seeks nomination for a second term
- 8.7 Ulysses Grant Takes Command (March 1864)
- 9 1864 Presidential Election
- 10 Second Presidential Term
- 11 The End of the Civil War
- 12 Assassination
- 13 Related Movies:
- 14 Further Reading:
- 15 Related Posts:
Early Life
Abraham Lincoln was born in a log cabin in Kentucky on February 12, 1809. Although illiterate, his mother, Nancy Hanks Lincoln, had memorized lengthy passages from the Bible, and she raised Lincoln and his sister Sarah in the Baptist faith, a fatalistic belief which emphasized the will of Providence. His father, Thomas Lincoln, was a more out-going fellow, who was popular in the community. Although his parents had grown up in an environment where owning slaves was natural, they questioned the practice and joined an anti-slavery congregation in 1816. A desire to leave the slave-holding environment was a key factor behind his father’s decision to move the family to the new state of Indiana in December 1816, which had joined the Union as a free state that same month. A hard-working farmer and skilled carpenter, Lincoln’s father prospered and became a solid member of the growing community. Nancy Lincoln died in 1818, and his father married Sarah Johnston, a widow with three children and a cheerful personality, who brought warmth into the home, the following year.
Lincoln’s body grew quickly, so he was over six feet tall when he was sixteen, and while his muscles had a surprisingly wiry strength, he never really filled out. Frequently hired out by his father, Lincoln learned to give a full day’s work, and was popular as a rail splitter due to his strength and skill with an axe, but he also knew how to tell humorous stories. Receiving enough education during the winters to learn how to read, write and do basic sums, he was the best-educated among his friends and classmates, but it is unlikely that the total period of education was greater than a year. Reading about the lives of famous people made him realize that there was more to life than farming. As Lincoln’s mind expanded, he gradually drifted apart from his family and friends, especially after his sister Sarah died during childbirth in 1828. Developing an interest in the law, many hours were spent watching trials in the local courthouses.
Lured by enthusiastic letters from a relative, Thomas Lincoln moved the family to Decatur, Illinois in March 1830. Turning twenty-two in 1831, Lincoln became legally independent, and informed his father that he was moving to New Salem to start his own life as a clerk in a store in the booming community. Having little connection with his father, he made no effort to remain in contact afterwards.
A tough wrestler who would not back down, Lincoln won acceptance among the rough crowd, but also spent time with the few educated people in the community. Unlike most people, he did not drink because whiskey blurred his mind, but he did not join the local temperance group, believing that alcohol was a person’s own business, not his. Determined to advance in society, Lincoln concentrated on improving himself, studying mathematics, grammar and rhetoric over the winter. Most important, he spent many hours in the store arguing politics with customers.
A career in politics was a vital route for the upwardly mobile, and the political situation was in flux since the old party lines had broken down. The Federalist Party of George Washington and Alexander Hamilton had died out, while the Thomas Jefferson Republicans had divided into quarrelling factions, so people identified themselves by the politicians they supported, not the parties. Lincoln backed Henry Clay, President Andrew Jackson’s key rival, who wanted to make the states more connected to better compete with foreign nations, and who wanted to solve the pressing slave problem by slowly freeing slaves and sending them back to Africa.
While preparing to run for state legislature, where Lincoln planned to work to improve the local river to enable steamboats to reach the town, the Black Hawk Indian war broke out in northern Illinois in 1832. Lincoln and his friends enlisted, and he was elected captain of his militia company, an impressive feat since he had only been living in new Salem for nine months. His company never actually found Chief Black Hawk or any of his followers, so he spent most of his time marching and battling mosquitos, but he gained leadership experience. Despite winning a huge majority of votes in New Salem, Lincoln lost the election because he was a young man with few accomplishments who was unknown outside his town.
Since his initial employer had gone out of business, Lincoln bought a share in a store, but he and his partner tried to expand too fast, while Lincoln spent too much time talking politics and his partner spent too much time at the whiskey barrel. Ballooning debt forced them to close the store, but luck and ambition favored Lincoln when he became postmaster of New Salem and deputy surveyor for the county, which enabled him to meet everyone in the county. The jobs paid relatively well but his debts were huge, especially after his partner died. Instead of following the practice of most debtors and leaving the county, he repaid the debt, dollar by dollar, even though it took a decade and a half.
State Legislature (1834-1842)
A reputation for honesty combined with extensive contacts enabled Lincoln to win election as a Whig to the legislature in 1834. This would be a key period since party lines had solidified again. Jackson led the Democrats and Clay led the Whigs, essentially a coalition of social groups opposed to Jackson. Moving to the state capital at Vandalia, Lincoln roomed with fellow legislator John Todd Stuart, a lawyer from Springfield, who encouraged him to become a lawyer. Lincoln made few speeches but he attracted attention for his writing ability. When the legislature adjourned, Lincoln returned to New Salem to throw himself into the study of law.
As a politician, Lincoln believed fervently in the United States as a nation, not a collection of independent states, and he deeply admired the Declaration of Independence with its emphasis on the opportunity to better one’s station in society. Progressive for his time, Lincoln believed that any white man or woman who paid taxes or bore arms should be able to vote. Although Lincoln won re-election in 1836, Stephen Douglas led the Democrats to take control of the state legislature, while Democrat Martin Van Buren became president. As a second-term legislator, Lincoln naturally played a more active role in the debates, helping to lead the successful effort to move the capital to the larger, more centrally located Springfield.
Another change occurred when Stuart invited Lincoln to join his law firm in Springfield, since Stuart needed a new partner and he had become a licensed lawyer in September 1836. With a population of two thousand, Springfield was a different world from New Salem. Stuart and Lincoln’s law firm thrived, and Lincoln became a more skilled lawyer under Stuart’s tutelage. In particular, he became well-known for his skill in handling juries. A news junkie, Lincoln read several newspapers daily in order gain a full understanding of the political issues.
Although Lincoln did not support the abolitionists, considering them to be far too zealous to have any effect, the growing frequency of mob attacks on blacks and abolitionist editors offended his view of American democracy. He became one of the Young Whigs, the leading Whigs in Springfield, who helped choose Whig candidates in Illinois, rivalled by the Young Democrats, led by Stephen Douglas, four years younger than Lincoln, nicknamed the Little Giant for his small size (five feet four inches tall) and relentless aggressiveness during political clashes. As floor leader of the Whigs in the state legislature, Lincoln played a key role in uniting the Illinois Whigs as they prepared for the 1840 presidential campaign. Lincoln threw himself into the campaign, riding across the state to address crowds, but Van Buren won the state by a small margin, although Whig presidential candidate William Henry Harrison took the White House.
Lincoln began courting Mary Todd in late 1839, even though he was too shy to hold up his end of the conversation. Although she came from a more socially advanced family, and was ten years younger, she saw something in the earnest, tall politician. Sharing a love of poetry and politics, Mary was a staunch supporter of the Whig party. Her family did not share her interest, and Lincoln’s marriage proposal was rejected in 1840. Forbidden to see her again because of his social background, Lincoln broke off the engagement.
Believing he could never escape his past, Lincoln was extremely depressed that winter. After a horrible week-long bout of insomnia, he recovered enough to return to a busy work schedule. However, the public works project that he had sponsored failed to win funding, and he was not renominated for a fifth term as state legislator, so it seemed that his career had come to an end. With Stuart serving in Congress in Washington, Lincoln formed a new partnership with Stephen Logan, a much more experienced lawyer. Lincoln benefited from the older man’s guidance, and the two men soon had a thriving law firm.
Gradually regaining his confidence, Lincoln began seeing Mary in secret. Encouraged by the happiness of his close friend Joshua Speed, who had recently married, Lincoln proposed to Mary and they decided to marry in November 1842. Realizing that Mary genuinely loved Lincoln, her family gave in. Since he was still paying off his debts, the young couple had to live in a rented room in a tavern for the first year, and their first son was born there. However, marriage brought stability, and he worked hard, so he had paid off his debts and bought a house a couple of years later. Although he was able to hire a maid to help Mary, she still had difficulty adjusting to the responsibilities of motherhood and marriage, after growing up in a mansion waited on by servants. Shocked by the first year in the tavern, Mary would always remain fearful about money, worried that they would slip back into poverty. Furthermore, she was left alone for long periods, since Lincoln was often away, riding the circuit of little towns in the district along with other lawyers and the circuit judge, since there were not enough cases in Springfield to provide a living. Both of them were prone to bouts of depression, but she was out-going and he always remained introverted in private, despite remarkable courage during political debates. They were probably not the best-suited couple, but they loved each other and the marriage was stable despite occasional clashes caused by deep-rooted personality conflicts and financial stress.
Married and increasingly prosperous, Lincoln decided to run for congress in 1843 but lost the nomination to Mary’s cousin John Hardin, a prominent member of society. Lincoln’s idol Senator Henry Clay lost the presidential election to Democrat James Polk, partially because potential Whig voters chose the small anti-slavery Liberty Party instead, reminding Lincoln of the perils of a divided vote. In 1844, Lincoln and Logan dissolved their partnership, and Lincoln took on William Hendron as partner, motivated partially by Hendron’s valuable connections to the younger Whigs who were rising in the party. Despite conflicting personalities, the two men became close, and Lincoln treated the younger man like a son. Unfortunately, Hendron and Mrs. Lincoln swiftly grew to hate each other.
Congressman (1846-1848)
Lincoln, John Hardin and Edward Baker, the three contenders for the 1844 congressional Whig nomination, were friends, so they had agreed to take turns serving as congressman. When Hardin decided to run again in 1846, even though it was Lincoln’s turn, Lincoln campaigned successfully for the nomination. The election took place when the Mexican-American War (1846-1848) broke out, and Lincoln made public speeches advocating support for the nation in wartime, despite private doubts about the war, so he won a decisive victory. The war created further political controversy since the Whigs accused President Polk of deliberately starting the war as an excuse to invade Mexico, while northern Democrats feared that the captured territory would become slave states.
The Whigs were determined to regain the White House, and Lincoln proved to be a diligent congressman, making several critical speeches about Polk. However, he occasionally sneaked off to see debates in the senate, especially if either Daniel Webster or John C. Calhoun were involved, since they were two of the three greatest orators in the United States at the time. Lincoln’s friendly manner and ability to use humorous stories to defuse tension during impromptu debates outside of Congress made him popular with his fellow legislators. When the war ended, the United States agreed to pay fifteen million dollars for California and New Mexico, even though the war had already cost $27 million and the lives of 27,000 soldiers. Efforts by Democratic leaders to pass a bill to occupy all of Mexico were blocked by the Whigs, but they could not prevent the purchase of California and New Mexico. Lincoln’s stance was very unpopular back in Illinois, and he was viewed as unpatriotic, which was painful, but not critical, since he had already agreed that it was his former partner Logan’s turn to run. Instead, he plunged into the presidential campaign of Whig candidate and war hero Zachary Taylor. In particular, he pressed abolitionists to support Taylor instead of Free-Soil candidate Martin Van Buren in order to ensure that the Democrats did not retain power.
Despite his zealous work to elect Taylor, none of the people recommended by Lincoln were given office, and he turned down the offer of governor of Oregon Territory, viewing the Democratic-dominated territory as political exile.
Prairie Lawyer (1848-1856)
Returning to Springfield, Lincoln had to cope with his wife’s renewed Christian faith following the sudden death of their youngest son, four-year-old Edward, in February 1850. The situation was eased by the birth of another son, William, in December 1850. Learning that his own father was dying, Lincoln limited himself to sending a message, claiming he was too busy to return or even attend the funeral.
However, Lincoln’s career boomed, and he became one of the leading lawyers in the state, a man of substantial wealth, who was proud of his success, which proved how far he had advanced in his life. This wealth was a tribute to his hard work and endless hours spent on the circuit: he did not charge high rates, he simply handled more cases than other lawyers. Although he was a surprisingly effective cross-examiner, he was an even better mediator, who often advised his clients to accept a settlement, realizing that both parties would have to live together in the same community. And he genuinely had to work, for while most lawyers owned farms or businesses to supplement their income, he relied solely on income from his fees. Lincoln appeared to enjoy the solitude of travelling to remote communities, and building relationships with the other lawyers who travelled the circuit, some of whom became his closest friends. Despite his achievements, he continued to suffer occasionally from bouts of depression. Unmoved by sentiment, Lincoln argued cases to win, not to follow any moral code, so he would use the law and logic to win cases, even defending slave owners seeking the return of their property, despite his personal loathing of slavery.
At the same time, Lincoln followed politics closely, especially the successful struggle in the senate to pass the Compromise of 1850, which defused the growing crisis over southern secession. The new territories of Utah and New Mexico could decide to enter the Union as slave states based on the principle of popular sovereignty, even though their climate made a plantation-type economy extremely unlikely. Southerners were appeased by the decision to permit slavery in the national capital, and the passage of a stronger Fugitive Slave Act. President Taylor died before the compromise was passed, and was succeeded by vice president Millard Fillmore.
Remaining in touch with Whig politicians in the state, Lincoln served as a Whig national committeeman during the 1852 presidential campaign, even though he correctly predicted that Whig candidate Winfield Scott would not win. When Senator Stephen Douglas replaced the Missouri Compromise that had kept slavery confined to the south with the Kansas-Nebraska Act in 1854, where the residents of each new territory would decide if it would enter the Union as a free or slave state, Lincoln was appalled, believing that it would enable the spread of slavery across the nation. Slavery did not expand but the debate over slavery became even more intense. The Free Soil movement exploded, although its members worked within the existing parties for the moment. Douglas was strongly criticized by his followers in Illinois, and Lincoln followed him when he toured the state, making a passionate appeal to defend the Compromise that had preserved the republic. Genuine anger against Douglas’ stand was undoubtedly combined with resentment that his old rival in the state legislature had risen to the position of senator, while Lincoln had only served a single term in congress.
Lincoln’s own attitude towards slavery was evolving during the mid-1850s. He had grown to hate slavery, stating that it violated the principle of liberty that was the foundation of the nation, but he feared the threat of ending slavery in the south would cause the breakup of the nation. In particular, he attacked the principle of popular sovereignty, pointing out that self-government can not exist when one man made another his slave. Most important, the original founders had tolerated slavery as a distasteful necessity but popular sovereignty would transform it into a sacred right. Lincoln’s speech in Springfield, which became known as the Peoria speech because it was published in Peoria, Illinois, led to the organizers of the new Republican Party putting his name on the central committee for Illinois but he refused to abandon the Whig Party.
Despite his growing popularity, Lincoln lost a hard-fought campaign to be chosen as senator by the Illinois state legislature, which still selected senators at the time, in February 1855. However, he had saved the day for the Whigs by gracefully accepting defeat and pledging his remaining delegates to give another Whig the nomination. Meanwhile, the competition between pro and anti-slavery groups to settle Kansas was turning violent.
The Senate Race against Stephen Douglas (1858)
By early 1856, Lincoln had accepted that the Whigs were too splintered to be effective, and since he loathed the racism of the Know-Nothings, an anti-immigrant, anti-Catholic movement, which had nominated former Whig Millard Fillmore for president as candidate of the American party, he helped found the Republican party in Illinois. A serious contender for the position of vice-president for the Republican ticket, he failed to attract enough ballots at the national convention. However, he made many speeches on behalf of Republican candidate John C. Fremont, even though he had little liking for the man. Having been a Whig for twenty years, Lincoln refused to criticize Fillmore, saving his attacks for the Democrats’ James Buchanan. The 1856 presidential election revolved around slavery, and southerners cried that they would be forced to secede if Fremont was elected. Despite his genuine hatred of slavery, Lincoln was not an abolitionist, he simply wanted to prevent the spread of slavery, not eliminate it, and he repeatedly stressed that he did not believe in equality between blacks and whites. Although the Republicans lost the election, they had received the second-highest total of votes, an impressive result for a new party.
Still intensely ambitious as well as diametrically opposed to Douglas’ political beliefs, Lincoln resolved to challenge the Little Giant for the senate in 1858. Douglas had risen in popularity, gaining more Republican supporters than he had lost Democratic supporters. Given his widespread appeal to Republicans and northern Democrats, Republican leaders were considering nominating Douglas as the Republican candidate for senator in 1858, providing a shocked Lincoln with further motivation to oppose Douglas. Careful lobbying within the Illinois branch of the Republican party ensured that Lincoln, not Douglas, was the Republican candidate.
After making his “A House Divided Speech” at the Republican convention, where he predicted that a nation half-free and half-slave could not endure, he won the nomination for the senate, and the speech was printed on the front page of Republican journals across the state. However, the Biblical nature of the phrase “A House Divided” worried many potential Republican supporters who feared that Lincoln was advocating war to ensure the abolition of slavery.
The senate race received national attention, and slavery would be its key issue, especially since Douglas had stated that the government “was made by the white man, for the benefit of the white man, to be administered by white men.” Slavery had been the dominant topic during the presidential campaign, and recent events ensured that it would remain the main issue during the senate campaign. Congressman Preston Brooks beat anti-slavery Senator Charles Sumner with a cane in the Senate chamber on May 22, 1856, in revenge for a recent speech where Sumner had described South Carolina Senator Andrew Butler, Brooks’ uncle, as having chosen the harlot slavery as his mistress. Angered by the sack of Lawrence, Kansas, the center of anti-slavery supporters, by pro-slavery men on May 21, 1856, zealous abolitionist John Brown and six followers executed five pro-slavery men at Pottawatomie Creek, Kansas, three days later. Finally, the Supreme Court declared on March 6, 1857, that people of African descent, whether slave or free, were not protected by the Constitution, in what became known as the Dred Scott Decision.
After following Douglas as he made speeches in Illinois, Lincoln challenged Douglas to a series of debates that would be held around the state. Although he had nothing to gain and everything to lose, the hyper-competitive Douglas accepted the challenge, but he may simply have worried that refusal would bring calls of cowardice. When Lincoln was following him, Douglas was clearly the main speaker and would receive large crowds, while Lincoln appeared to be trying to steal Douglas’ popularity, but now they would be presented as equals. Held between August 21 and October 15, the debates attracted thousands of people. Douglas harangued the crowds, asking if they wanted the state to become a colony of free Negroes. This race-baiting approach forced Lincoln to confirm that he believed in white superiority but he continued to emphasize equal rights. Between the seven scheduled debates, both Douglas and Lincoln crisscrossed the state, making speeches on a daily basis, while other renowned Republicans toured the state to speak on Lincoln’s behalf. Neither man could be said to have won the debates, and Republican candidates won a narrow majority of votes in the popular vote, but the Democrats in the state legislature still outnumbered the Republicans, so Douglas became senator.
Republican National Convention and 1860 Election
Recognizing Lincoln’s massive effort, no Republican blamed him for the defeat. In fact, press coverage of the speeches had made him a national figure, leading to calls from a few party leaders that he run for president, even though he admitted that he had little hope of defeating the main contenders: Senator William Seward and Salmon Chase, Governor of Ohio. This was not false modesty, but a realistic assessment of his chances, since he had no administrative experience and had won no election higher than congressman. However, Lincoln was ambitious, so his speeches during the debates with Douglas were collected for publication, and he accepted an invitation from the famous clergyman and abolitionist Henry Ward Beecher to speak in New York City. Still dismissive of his chances, Lincoln hoped that a good showing in the Republican national convention would reinforce his position as the leading Republican in Illinois.
Rather than dwell on the presidency, Lincoln was determined to challenge Douglas again for the senate, so he began working to publicize both himself and the Republican party, while lobbying Republican leaders in other states to consider the needs of the national party when planning local strategies. He observed correctly that Douglas’ open appeals to free-soilers during the senate race had weakened his support within his own party. When Douglas reversed course and rejected that approach to win back southern support, Lincoln made a number of speeches in Ohio attacking Douglas to reinforce the appeal of the Republican party, which brought invitations to speak at rallies in other states.
After John Brown’s attempt to spark a slave rebellion at Harper’s Ferry failed and he was captured on October 18, 1859, southerners believed that Republican leaders like Charles Sumner, Seward and Stevens had been involved in the conspiracy and that it was the consequence of the Republican stand against slavery. Southern accusations against Seward weakened his appeal as a presidential candidate, causing more of Lincoln’s supporters to urge him to run for president, especially since he was more popular in the lower northern states of New Jersey, Pennsylvania, Ohio, Indiana, and Illinois, which were crucial for victory. A speech in New York City was a huge success, leading to invitations to speak in northeastern cities, and he was courted by Seward’s opponents within the party, who viewed Lincoln as the ideal compromise candidate.
Among the thirteen candidates in the convention, the five leading contenders were William Seward, Abraham Lincoln, Salmon Chase, Simon Cameron and Edward Bates. As the convention neared, Seward had the most delegates, and the other contenders seemed unable to beat him. Seward’s greatest threat was his own radical image, so Lincoln’s campaign manager decided to harp on Seward’s inability to win, and to focus on convincing other contenders to switch to Lincoln if the race went to a third ballot. The strategy worked since none of the other candidates, including Seward, had strength in the key battleground states of Pennsylvania, Indiana and Illinois. Chase should have been best positioned to serve as an alternative to Seward, but he had made so many enemies through his blunt personality and inability to mend fences with rivals that he failed to even win the full support of the Ohio delegation. Confident that he was the obvious choice, Chase did not even bother to appoint a campaign manager to lobby wavering delegates. The convention was held in Chicago, so Lincoln’s supporters actually controlled the organization of the convention and seating arrangements, while Judge Davis, Lincoln’s former travelling companion and the head of his delegation, worked smoothly to ensure that Lincoln started strong, but had enough votes in reserve that he would appear to be growing in strength. Davis had sent the members of his team to approach the other delegations, asking them to nominate Lincoln on the second ballot if their own candidates did not receive enough votes on the first ballot. Lincoln received the second largest number of votes on the first ballot, the gap between him and Seward had become tiny by the second ballot, and the third ballot settled the matter, since all of the delegates aside from Seward’s supporters switched to Lincoln.
As the Republican Presidential candidate, Lincoln steadfastly refused to join any faction within the party, stating that he intended to be a unifier, and he worked hard to win over his former rivals. This approach, combined with his well-honed skills as a mediator, kept peace among the various factions, and ensured that all of his former rivals backed him and campaigned on his behalf. Fortunately, he faced a divided opposition, since the Know-Nothings and hardcore Whigs had united to form the Constitutional Union Party with John Bell as their candidate. The Democratic Party had split into the Northern Democrats and the Southern Democrats with Stephen Douglas and John Breckinridge as their respective candidates. Following the defeat of the pro-slavery advocates in Kansas, the uproar over the Dred Scott decision, and John Brown’s failed attempt to spark a slave uprising in Virginia, the more radical Southern leaders were in no mood for compromise over slavery; instead, they demanded the right to bring slaves into every territory. Refusing to accept the moderate platform proposed by Douglas’ supporters, delegations from the southern states walked out of the convention. A reconvened convention in Baltimore nominated Douglas, but the Southern Democrats nominated Breckinridge. A Republican victory seemed likely but there was a justified fear that no one would receive a majority of electoral college votes.
While the opinion of Lincoln in the north was divided, southerners loathed him, viewing him as a radical abolitionist. In the end, Lincoln received less than half of the popular vote, with Douglas the closest contender, but Lincoln won a decisive victory in the electoral college. Lincoln received 39.8% of the popular vote and 180 votes in the electoral college; Douglas received 29.5% of the popular vote but only 12 votes in the electoral college; John Breckinridge received 18.1% of the popular vote and 72 votes in the electoral college; and John Bell received 12.6% of the popular vote and 39 votes in the electoral college.
Recognizing his own lack of administrative ability, Lincoln chose Seward to head his cabinet, which was made up of the leading Republicans, drawn from all over the nation to ensure that each faction was represented, since he had grasped that the party was actually a mix of competing interest groups. Seward’s great rival Salmon Chase was made Secretary of the Treasury; Simon Cameron, the dominant political fixer in Pennsylvania, was made Secretary of War; Gideon Welles, a former Jackson Democrat who had left the party for the Republicans over the slavery issue, became Secretary of the Navy; Edward Bates, like Lincoln a Whig who had joined the Republicans because of his Free Soil beliefs, was appointed Attorney General; Caleb Smith, a former Whig from Indiana, was made Secretary of the Interior; and Montgomery Blair, representing the powerful Blair family, headed by Blair’s father Francis, became Postmaster General. Critics feared Francis Blair would have too much influence in the cabinet, since he had backed Bates for president, Welles was his son-in-law, and the postmaster was his son. Seward was influential in the eastern states, especially New York, Chase was a former governor and senator of Ohio, Cameron dominated Pennsylvania, Welles was from Connecticut, Bates represented Missouri and the Border States, Smith Indiana, Blair was based in Maryland, and Lincoln himself was one of the key politicians in Illinois. Despite the regional balance, all were from the north or the border states; none were from the south.
Aside from a swarm of office seekers, demanding patronage in return for their efforts to ensure his election, Lincoln had to deal with the increasing likelihood of secession, despite his efforts to reassure Southern leaders that his administration would not threaten their way of life. Lincoln initially believed that there were enough pro-Union men in the south to prevent such a drastic event as secession. Confident that his many speeches had made it clear that he would not interfere with slavery where it existed and that he merely wanted to prevent its spread, Lincoln made no effort to actually meet with key southern leaders to defuse the situation. Since the south had threatened secession for a generation and never carried through with the threat, Lincoln thought that it was just a bargaining chip in an effort to win more concessions. Part of the problem was that while Lincoln had travelled across the north and had a broad range of contacts, he had little first-hand experience of the south itself.
While Lincoln laboriously weighed the pros and cons of prospective members of the cabinet, a steady stream of southern states seceded from the Union. Between late December 1860 and early February 1861, seven states (South Carolina, Mississippi, Louisiana, Florida, Alabama, Georgia and Texas) announced that they were leaving the Union, and a convention of delegates from six of the seven states elected Jefferson Davis president on February 9. Refusing to take responsibility for the growing secession crisis, President James Buchanan stated that the Southern states had no right to secede but he had no right to stop them.
The key issue became ensuring that the remaining eight slave-holding states stayed in the Union. As the nation moved closer to civil war, Republican leaders urged compromise, but Lincoln refused, claiming that any further concessions to the south would transform the United States into a slave empire. Moreover, Lincoln was an easy-going man until he made up his mind, and then he became rigidly stubborn. Since opposition to the further extension of slavery had been a key plank in his campaign, he would not back down from what he had been elected to do. In his own mind, and in the minds of many of his supporters, he had already made a concession by agreeing to leave the fugitive slave law intact.
Aside from the threat to the Union’s existence, there were threats to Lincoln’s life, since rumors circulated that he would be assassinated before he could take office. Although Lincoln dismissed the rumors, Seward and Winfield Scott, senior general of the army, took them seriously. Convinced that there was a plot to assassinate the president-elect in Baltimore during his tour of the north, where he stopped in numerous cities to make speeches, detective Allan Pinkerton persuaded Lincoln to make his way into the capital in disguise. He reached Washington safely, but when the newspapers learned of the situation, they mocked him mercilessly.
Lincoln also found himself engaged in a power struggle with his Secretary of State since Seward believed that he was the more talented politician, therefore he should lead the government. Seward even threatened to leave the cabinet if his rival Chase stayed, but Lincoln refused to budge. Although he won the initial confrontation, Lincoln discovered that the members of his cabinet were not inclined to work together.
Lincoln’s first crisis as president was the defence of Fort Sumter in Charleston Harbor. Reinforcement was impossible but surrender would make the Union look weak. As Lincoln tried to find a solution, Republican papers criticized the lack of a clear policy towards the south. Compromise appeared impossible, so after a great deal of debate with his cabinet, he decided to supply the fort and see what would happen. Since this strategy violated a promise made by Seward to southern leaders, Seward made a private offer to assume responsibility for making policy. Faced with a direct challenge to his authority, Lincoln reminded Seward that he was president, and would set policy. Learning that Lincoln had sent a fleet to relieve Fort Sumter, the rebels bombarded the fort on April 12, 1861; after a day and a half of heavy bombardment the garrison surrendered, and it was permitted to evacuate. War had begun.
Following the start of the war, Douglas publicly supported Lincoln, who called for the recruitment of 75,000 militiamen, compelling men all over the nation to choose sides. Foreign-born Americans in the major cities swiftly formed units, while militia regiments sprang up across the northern states. Two days after Lincoln’s proclamation, Virginia joined the Confederacy, and three more states seceded during the next two months. The loss of Virginia was a deep disappointment to the president, who had worked hard to retain the loyalty of Union men. Instead, they had become secessionists and brought the Confederacy within sight of Washington. The Confederate capital was moved from Montgomery, Alabama to Richmond, Virginia at the end of May. However, there were a number of states still in play, in particular Maryland and Kentucky. Kentucky initially remained neutral but nine of its ten congressional districts elected Unionists during the June elections for Congress.
The war had begun but the federal army was not ready for war. The sixteen thousand regulars were spread across the nation and a third of the officers had resigned their commissions in order to serve in the Confederate army. The only generals with combat experience were too old to actually lead an army. Colonel Robert Lee, Scott’s first recommendation to command the army, rejected the offer because he was unwilling to fight against Virginia, his home state. Worse, he joined the newly formed Confederate army, whose flags across the Potomac Lincoln could see with a telescope from the roof of the White House. Fearing an attack on Washington, Lincoln spent several anxious days waiting for the arrival of troops that had been promised by northern states. During those nervous days, newly appointed Minister to Russia Cassius Clay and Kansas Senator James Lane formed small units of volunteers to defend the government in case of invasion. Even so, residents of Washington breathed easier when there were ten thousand men in the capital by April 27.
As the army grew, Lincoln made it a priority to keep the army bipartisan, handing out military commands to Democrats as well as Republicans. A few victories appeared. Federal forces managed to retain control of Missouri, a key border state, routing the secessionist militia. Union troops seized control of Baltimore, the secessionist center of the state, to ensure the smooth passage of troops, and Maryland was convinced to remain in the Union, while vital Kentucky sided with the Union after a Confederate army invaded.
Unprepared for the war, the administration lacked clear lines of authority, so the first few months were chaotic, while the mixture of regular and volunteer units led by politicians given commands to retain loyalty only added to the confusion. Determined to avoid accusations of radicalism that would cost him support in the border states, Lincoln refused to recruit blacks, claiming that it was a war to save the Union, not free the slaves, and he did not make a single mention of slavery during his inaugural address. His one compromise was to end the fugitive slave law for the states in rebellion. Abolitionists like William Lloyd Garrison and liberal Republicans like Senators Charles Sumner, Zachariah Chandler and Ben Wade felt that there was no point in postponing emancipation now that the civil war had started. However, Lincoln knew that public opinion, even in the North, was against emancipation, so he refused to take that radical step. Instead, he would continue the policy of containing the spread of slavery, in the hope that it would eventually starve to death.
Douglas died shortly after the war started, depriving Lincoln of a valuable supporter, and the remaining Democrats in Congress of a leader. However, Lincoln and Seward had begun to bond, spending free time together, which improved their cooperation and lifted Lincoln’s spirits, heavily burdened as they were by the war. This relationship naturally fuelled jealousy in the rest of the cabinet.
Lincoln had established office hours where he would receive visitors for several hours a day, but he routinely ignored his own rules, even though the constant grind of dealing with people wore him down. Possessing a haphazard attitude towards rules, Lincoln never held cabinet meetings on a regular basis, since he preferred to meet with individual cabinet secretaries one-on-one. His one break was carriage rides to visit the troops and meet with officers at the camps surrounding the capital. During evenings when there were no formal events, he would either relax with a few friends, including Seward, or attend a play or concert.
By the time Congress convened on July 4, politicians were pressing for the army of thirty thousand men to smash the rebels and march on Richmond to end the war. After endless parades, noble speeches, photographs in uniform, and discussions of the glory of war, both the troops and everyone else were fired up with excitement and romance. Lincoln wanted Brigadier General Irvin McDowell, commander of the Union forces around Washington, to take the army and attack the Manassas railroad junction, roughly twenty-five miles south of Washington. Scott said that the army would not be ready until the fall, and recommended that the blockade be tightened and a force be sent down the Mississippi to squeeze the Confederacy until pro-Union forces within the Confederacy defeated the rebels. Lacking faith in a spontaneous uprising within the Confederacy, Lincoln dismissed McDowell’s argument that the troops were inexperienced with the comment that the enemy were inexperienced as well.
When the Grand Army finally started to move, everyone was predicting a victory, so the huge defeat on July 21 was a major shock. The disaster had been observed firsthand by the hundreds of spectators, including congressmen and correspondents, who had seen the Union army routed, with the troops running in panic. Fortunately for the government, the rebels were too exhausted and confused to attack the capital. The reality of war had destroyed hopes that the rebellion would easily be crushed, and Lincoln had to bear much of the blame. Recognizing the scale of the threat, he adopted Scott’s blockade strategy, called for long-term volunteers, and suggested three separate expeditions against the Confederacy. Furthermore, a new commander was needed. Having proven successful in western Virginia, George McClellan was given command of the Army of the Potomac. Young and conceited, he was undeniably able, as well as tremendously popular since he looked like the ideal general. McClellan threw himself into reorganizing the army, drilling the troops mercilessly.
Fremont had been given command of the Army of the West, but stirred up a hornet’s nest when he announced that the slaves of rebels in Missouri would be freed. Desperate to keep control of the border states, Lincoln reminded him that the president, not Fremont, would make political decisions, and rescinded the proclamation. The border states remained in the Union but liberal Republicans were furious, and formed Emancipation Leagues to convince the public of the need to make the war a war of liberty. Frustrated by rumors of corruption, Fremont’s blatant politicking and inability to actually fight battles, Lincoln removed him in October 1861, but he had become a hero to the abolitionist movement.
The Trent Affair
When an American warship intercepted the British ship the Trent on November 8, 1861, and removed two Confederate commissioners travelling to Britain, British anger seemed likely to lead to war, which would ensure a Confederate victory. The British government demanded the release of the envoys and a formal apology. Initially reluctant to back down, Lincoln found that his cabinet, in particular Secretary of State Seward, advocated compromise in order to avoid war or an end to trade with Britain. Realizing the consequences of British support for the Confederacy, Lincoln agreed to release the commissioners, although he refused to make an apology, thus ensuring that the civil war remained an internal matter. Since the British government was not eager for an expensive war, the demand for an apology was dropped and the crisis soon disappeared. Inexperienced in foreign affairs, Lincoln had turned to Senator Charles Sumner for advice in this area, and would continue to rely on him in the future, even though Seward sometimes grumbled that there were two secretaries of state. The British government had been seriously debating formal recognition of the Confederacy in the summer and fall of 1861, but that movement had lost momentum as the government had considered the expense of going to war against the Union. When the two Confederate commissioners finally reached London, they found that the British government had decided to remain neutral, therefore formal recognition would only follow military victory.
General George McClellan
By that time, the Army of the Potomac had grown to an army of 75,000 men, but McClellan still resisted pressure to advance. In fact, Republican leaders were losing faith in both McClellan and Lincoln. However, Lincoln continued to support McClellan and even made him General in Chief when Scott resigned due to old age. McClellan did not return this confidence, frequently referring to Lincoln as a baboon or a gorilla in letters to his wife. To be fair, he had a low opinion of everyone in the cabinet, even Winfield Scott, his superior. Convinced that he was horribly outnumbered, McClellan refused to move forward until the Union forces in the west had achieved victories to weaken the pressure.
Calls for action were growing, since the Treasury was surviving on credit, the large army meant there were fewer laborers for farms and factories, and the war had ended trade between the north and south.
As McClellan continued to delay, finally taking to his bed with illness late December, pressure from Congress for Lincoln to act grew. Since McClellan had neglected to share his plans with any of his subordinates, or even appoint a second-in-command, nothing could happen until he recovered. Worse, neither of the two commanders in the west, Don Carlos Buell and Henry Halleck, was moving forward or even talking to the other. Lincoln had discussed the situation with other generals and made his own studies, so he realized that a Union offensive would succeed only if they attacked from several directions to prevent the rebels from shifting their smaller forces to defeat each invasion individually. This discord did not concern McClellan as it did Lincoln, since McClellan felt that none of the other fronts mattered, only Virginia, yet he still refused to accept any blame for the failure to advance. The threat of losing his army brought McClellan out of bed but he still refused to reveal his plan, claiming that the president was incapable of keeping a secret.
When the Army of the Potomac settled into winter quarters in January, McClellan had commanded the army for six months, trained it to a remarkable level, received abundant quantities of supplies, and faced an enemy residing only two days’ march away, so it should not come as a surprise that Lincoln, Congress and the general public expected McClellan to do something. Only after an exasperated Lincoln issued orders for all three Union armies to attack in unison on February 22, 1862 did McClellan explain his plan to sail down the Potomac River and attack Richmond, the capital of the Confederacy. Although aware that the absence of the main army gave the rebels the opportunity to sacrifice Richmond to capture Washington, which would immediately give the Confederacy legitimacy, a dubious Lincoln gave his consent.
The situation finally improved when Buell and Halleck started to advance, although they were still not coordinating. Actually, Halleck’s subordinate Ulysses Grant was winning all of the battles, and a gratified Lincoln promoted him to major general.
Relying on information supplied by agents working for Allan Pinkerton, McClellan was convinced that he was heavily outnumbered by the Confederate army, so he refused to advance. Congressional leaders furiously demanded that Lincoln replace McClellan, but Lincoln knew there was no suitable candidate and kept the troublesome general. When the rebel army fell back to positions closer to Richmond, Union scouts discovered that the heavy guns that had terrorized McClellan were fake, and it seemed likely that fewer than 40,000 men had manned the lines, far fewer than the 200,000 claimed by McClellan. Deeply embarrassed, Lincoln removed McClellan as General in Chief but kept him as commander of the Army of the Potomac. McClellan then sailed his army down to the Chesapeake Bay to outflank the Confederate lines and attack Richmond. Yet despite having 100,000 men, McClellan dallied again, still believing himself outnumbered despite the embarrassment of the fake guns, enabling the rebels to retreat safely to new lines, which he refused to attack. The delay allowed the Confederates to transfer reinforcements and handed them the initiative.
McClellan had still failed to fight an actual battle, while continuing to call for reinforcements otherwise he would be overwhelmed, but Grant had proven to be a fighter, winning a battle at Shiloh in southern Tennessee. It had been a bloodbath but Grant had won, so Lincoln ignored calls to remove Grant for drunkenness, stating “I can’t spare this man. He fights.”
When Stonewall Jackson launched a surprise campaign in the Shenandoah Valley in Virginia, threatening the capital, Lincoln cancelled plans to reinforce McClellan, who would have remained passive except that the Confederates attacked him. McClellan beat back the enemy, and rebel commander Joe Johnston was wounded, so he was replaced by Robert E. Lee. Shocked by the death toll, McClellan remained in place, instead of taking the initiative following the rebel defeat. Believing that war should be a scientific affair that minimized casualties, McClellan could not bear the thought of losing men, since he genuinely cared for them. He had also come to resent interference from Washington, which he viewed as a nest of rascals and traitors, so his telegrams to the White House had to be edited by the supervisor of telegraph messages, who realized that Lincoln would have to fire McClellan for insubordination if he saw them. When Lee attacked, McClellan believed reports that he was horribly outnumbered and retreated to better defensive positions. There was a seven-day-long battle but Lee could not achieve a decisive victory and McClellan was unwilling to take the offensive so it was a bloody stalemate.
The telegram exchanges between Lincoln and McClellan grew increasingly bizarre. One telegram would demand 50,000 men or the army was lost. The next message would state that he had conducted a brilliant retreat unparalleled in the annals of war, the troops were now in review, bands were playing and all he needed was another 100,000 men and he would win the war. Lincoln’s replies stressed with growing impatience that there were no more men. McClellan may have believed that he was a genius, but few in Washington aside from his Democratic supporters shared that opinion since he had not won a single victory. Since a scapegoat was required, Democrats rallied around McClellan to attack Secretary of War Stanton. Despite the crescendo of attacks on Stanton, Lincoln stood by him, refusing to ask for his resignation. McClellan’s failure had wider ramifications, since there was a real chance that Britain would decide that a Union victory was impossible, and recognize the Confederacy as an independent nation, which would be a signal to the rest of Europe to open relations with the rebels. When Lincoln inspected McClellan’s position in July, the general took the time to draft a letter stating that Lincoln must not emancipate the slaves but should appoint another General in Chief, an unsubtle hint that McClellan wished to be restored to his former rank.
Lincoln’s Search for a Replacement for McClellan (March 1862-March 1864)
Unknown to McClellan, Lincoln had already decided to appoint Halleck General in Chief, acknowledging that McClellan’s approach was unable to win the war. Halleck was selected because he had written several books on military strategy and because he had won a number of victories in the West, even though Grant had actually done the fighting.
After consulting with Halleck, Lincoln ordered McClellan to bring his army back to Washington, but a sudden Confederate advance confused the situation and revealed Halleck’s indecisive nature. McClellan had been ordered to link up with an army led by General John Pope, who had won a reputation as an aggressive commander in the West, and McClellan feared that Pope, not himself, would command the combined army, which would greatly outnumber Lee, hopefully ensuring victory. Realizing the danger, Lee moved first, hitting Pope while he was still alone. Lee’s task was made easier by McClellan’s lengthy, intentional delay, which left Pope to face an army of roughly equal size alone during the Second Battle of Bull Run. McClellan held back, waiting for confirmation that he, not Pope, would command. If Pope were in charge, then a defeat would even be useful, since McClellan would undoubtedly be asked to save Washington.
When it became clear that corruption in the War Department meant that troops were receiving substandard equipment, Lincoln fired Cameron as Secretary of War, even though there was no evidence that he had made a personal profit, just that he had permitted his friends and cronies to profit. Appointed minister to Russia in January 1862, Cameron was replaced by Edwin Stanton, former attorney-general in James Buchanan’s administration, even though Stanton had publicly humiliated Lincoln during a high-profile case involving the McCormick Reaper company in Cincinnati seven years earlier, ignoring Lincoln’s existence even though they were part of the same defense team. Lincoln knew that Stanton had the necessary ability and there was no time during war for personal grudges. Lincoln’s decision proved correct, since Stanton immediately commenced a reform of the War Department. In fact, the two men gradually grew to respect each other.
Halleck and Stanton had both urged the president to remove McClellan from command, while several other members of the cabinet opposed the decision, but Lincoln felt that only McClellan could reorganize the demoralized army and save Washington. McClellan would soon be tested when Lee invaded Maryland in September, hoping that a victory on Union soil would win British support. The two armies clashed at Antietam on September 17, and McClellan claimed a complete victory since he had blocked the invasion, believing it unimportant to prevent Lee from escaping, which infuriated Lincoln.
A key factor was that the officer corps was largely Democratic, and the senior officers of the Army of the Potomac were deeply loyal to McClellan, so there was genuine talk of the need to replace Lincoln with a stronger man who would treat the army better, and ensure that war remained a professional affair where civilian property, like slaves, was considered off-limits. McClellan felt that emancipation and the suspension of habeas corpus had managed to destroy the nation’s free institutions, replacing them with despotism. This sentiment expresses the key problem, namely that many northerners simply did not view the ownership of human beings as tyranny. Aware that McClellan was more popular with the army than he was, Lincoln delayed replacing McClellan, even though he felt that McClellan had missed a valuable opportunity to destroy the enemy army at Antietam. After inspecting the Army of the Potomac, and studying military matters, as well as observing the massive number of Union troops on leave, Lincoln realized that neither the people nor the general in charge of the army was willing to accept that the nation was at war. McClellan thought that overwhelming force and strategy would win the war without any destruction, casualties or chaos, failing to understand that war meant destruction, casualties and chaos.
Tired of generals who were unwilling to fight, once the elections were over Lincoln replaced Buell with Rosecrans and McClellan with Ambrose Burnside, who looked like a general but was insecure and did not think he was able to command an entire army. Burnside was chosen because he was a subordinate of McClellan, so he would be more acceptable to the army, but he would prove to be a good judge of his own character when he launched a series of frontal assaults against Fredericksburg on December 13 that produced nothing other than twelve thousand casualties. Hoping to redeem himself, Burnside led his army to cross the Rappahannock River in January but the huge mass of men became stuck in the mud, drenched with cold, driving rain. Presented with another disaster, Lincoln replaced Burnside with one of his greatest critics, his own subordinate Joe Hooker, who became the fourth commander of the Army of the Potomac.
In the spring of 1863, Union forces were poised to commence a massive assault against the Confederacy: a powerful naval attack would target the vital port of Charleston, South Carolina, Grant was preparing another attempt against Vicksburg, Rosecrans was moving into eastern Tennessee, and Hooker was planning to move against Lee.
Grant was stuck in front of Vicksburg, and the naval assault on Charleston had failed, but Hooker was advancing towards Lee. Despite the advantage of a powerful, well-trained army that outnumbered Lee two to one, Hooker was outfought at Chancellorsville (April 30-May 6 1863), and had to retreat. This period was one of the worst of Lincoln’s presidency, which was not aided by a mutual loathing between Halleck and Hooker. Lincoln gave him another chance but when Lee suddenly dashed towards Maryland before turning to strike at Pennsylvania and Hooker still refused to attack, claiming that he was outnumbered, Lincoln lost patience, and accepted his resignation, replacing him with General George Meade.
The two armies encountered each other at Gettysburg on July 1, 1863, where Lee suffered such heavy casualties that he abandoned his invasion of the north. The battle lasted three days, and Lincoln spent most of his time in the telegraph office, where he was frequently joined by Stanton, Seward, Welles and Senators Sumner and Chandler. Unaware that the enemy’s losses were proportionally larger, Meade was shocked by his own casualty list and refused to pursue Lee, despite repeated urgings from Halleck and Lincoln, causing Lincoln to explode with frustration and anger. Fortunately, Vicksburg had finally surrendered to Grant. While Lincoln fumed that Meade had missed a rare opportunity to finally destroy Lee’s army, the twin Confederate defeats at Gettysburg and Vicksburg ensured that Britain would not recognize the Confederacy. From that point on, a Union victory was inevitable. In addition, Grant had become Lincoln’s favorite general because he won victories and never asked for reinforcements. Moreover, Grant strongly supported Lincoln’s plan to enlist escaped slaves from the Mississippi region to deprive the Confederacy of needed rural labor while increasing the size of the Union army.
The dire military situation and the realization that the war would not be won in the near future were propelling the president to accept the need for emancipation. Another factor was the steady stream of advice from American ambassadors in Europe, stating that emancipation would prevent recognition of the Confederacy. He had resisted increasingly harsh criticism from abolitionists, pointing out that they considered only their own needs while he had to consider the nation, where emancipation was far from universally popular. To ensure that his views were fully understood, Lincoln wrote a letter to Horace Greeley’s Tribune, the leading abolitionist newspaper, on August 22, 1862, which was widely reprinted, stating that his primary goal was the preservation of the Union, and that he was therefore equally willing to end slavery or to strengthen it if he thought that either action would save the Union.
When representatives from the border-states rejected his appeal to accept a program of gradual emancipation, Lincoln was forced to recognize that slave-owners would never voluntarily free their slaves. Compelling them to free their slaves would simply drive them into the arms of the rebels, so he resolved to back the confiscation bill that was currently the subject of a fierce battle in Congress between liberal Republicans and Democrats and conservative Republicans. The bill would permit the confiscation of property of anyone who aided the rebellion, so plantation owners in territory captured by Union forces would have their slaves freed, while any slave that reached Union lines would automatically become free. Blacks would even be permitted to enlist in the army. Siding with the liberal Republicans, Lincoln signed the bill into law. In fact, he went further, using his authority as Commander in Chief of the army to proclaim that all slaves in the rebel states, even those belonging to loyalists, would become free on January 1, 1863.
When Lincoln presented his proposal to the cabinet on July 22, 1862, it received the support of a majority of the members: Smith opposed the measure but remained silent; Blair warned that it would cost the Republicans dearly during the fall elections; and Chase advocated a more gradual approach, which seems bizarre given his long struggle for the abolition of slavery, unless he feared that Lincoln’s proclamation would win the support of radical Republicans, who had previously backed Chase, which would likely end his chances of becoming president in 1864. However, Seward persuaded Lincoln to wait until the north had won a major victory before issuing the proclamation.
Even though Antietam was not the victory he had wanted, Lincoln decided it was sufficient, and resolved to announce his emancipation proclamation. The news was welcomed by Republicans but opposed furiously by Democrats. One Democrat in particular was incensed. McClellan vowed that he would not fight to free slaves, describing the proclamation as an accursed doctrine. Emancipation proved to be extremely unpopular in the army as well, creating a dangerous threat to morale. While the proclamation was naturally criticized by the Confederacy, pro-Union leaders in the south called it treachery. In England, the proclamation combined with the Union victory at Antietam convinced the cabinet to postpone recognition of the Confederacy until the situation changed.
The proclamation had an impact during the fall election when the Republicans lost so many seats that they barely retained control of Congress, and a complete defeat seemed likely during the 1864 presidential election. The loss should not have been a surprise, since Lincoln had been elected with a minority of votes, the war was not going well, and the burden on the economy was almost unbearable. Aside from widespread resentment against emancipation, Democrats had also successfully stoked public anger against Lincoln’s suspension of habeas corpus.
Chase had initially been the only member of Lincoln’s cabinet arguing for black enlistment, but Stanton had also come under the influence of Senators Wade and Chandler and Congressman Stevens, who were early and vocal proponents of black soldiers. Once the proclamation had been signed, Stanton urged Lincoln to permit the raising of black regiments. Given the deep resentment in the army towards the idea of black soldiers, Lincoln and Stanton decided that they would be paid less and would be formed into segregated units under white officers. The hordes of escaped slaves arriving at Union lines soon became a serious logistical problem. While the able-bodied men became soldiers, huge numbers of refugees had to be kept in the south to avoid inflaming the situation in the north, but they also had to be fed. Lincoln had foreseen this problem and quickly sent Adjutant General Lorenzo Thomas, an extremely capable administrator, south to allocate the refugees into the army, as military laborers or as paid laborers on the plantations, according to their ability. A trial colonization project in Haiti had already failed, leaving Lincoln convinced that the freed slaves had to be resettled in the south.
However, black enlistment was not enough to meet the army’s need for men, so the government introduced the draft, which lit a match to an already explosive anti-war sentiment, forcing Lincoln to respond with drastic measures against anyone accused of treason.
By August 1863, black regiments were already transforming the war, either relieving white units from garrison duty or serving as front-line units. Lincoln’s decision to enlist black soldiers may have been a necessity, but it was not popular, especially in the mid-west, where Copperheads (peace Democrats) had been attacking the president over the suspension of habeas corpus and the introduction of the draft in order to continue a war that was bankrupting the nation. White mobs attacked recruiting officers in the mid-west, there were a number of draft riots, and men were especially angry that anyone could avoid the draft by paying $300, a provision that had been imposed by Congress despite strenuous opposition from Lincoln and Stanton.

The worst riots occurred in New York City, where white workers, mostly Irish, burned draft offices and then rampaged through the black part of the city for three days, killing hundreds of blacks, as well as any white policeman or citizen who tried to interfere. The riots had likely been sparked by speeches made by Copperhead politicians, in particular a speech by New York Governor Horatio Seymour on July 4, in which he accused the federal government of exceeding its constitutional authority by forcing white men into an ungodly conflict on behalf of black men. Even though the initial assault on governmental buildings was clearly organized, Lincoln refused to order an official investigation because he feared that it would simply provoke more riots and unrest, which would distract attention from the war. The fall elections would be a litmus test of voters’ acceptance of his policies, especially emancipation, the draft, martial law and black soldiers, so he felt vindicated when Republican candidates swept the state elections that fall.
The Republican victory was welcome, but the Union defeat at Chickamauga was discouraging. Lincoln had tired of Rosecrans but waited until October, several days after the state elections in Ohio, where native son Rosecrans was popular, to give Grant command of the armies in the west. Grant promptly fired Rosecrans, replacing him with Thomas.
The Gettysburg Address
Pennsylvania’s state government had initiated the idea of a National Soldiers’ Cemetery on the Gettysburg battlefield, and Lincoln agreed to speak at the commemoration ceremony in November, viewing it as an excellent opportunity to remind people that the war was being fought not just to suppress a rebellion but to preserve democracy in the United States. Whenever he had free time, he worked on the speech, but it remained unfinished when he arrived at Gettysburg, so he stayed up late the night before the ceremony rewriting it. Lincoln’s address was short, a mere two paragraphs, incredibly brief by the standards of the time, especially after four hours of speeches and prayers, but it was powerful, although no one at the time could have predicted its impact. Lincoln himself was disappointed with the speech. In fact, it ended so quickly that the restive audience barely noticed that he had started, but it was well received by newspapers, while his opponents attacked it and the belief in equality it stressed.
Fearing that he might lose the 1864 election or not even be nominated by his party, Lincoln focused on introducing his approach to reconstruction in the Confederate areas that had already been retaken by Union forces. Those areas had been given state governments manned by white politicians from the area who had opposed secession. The policy was a mixed success, and liberal Republicans in Congress argued that reconstruction was the responsibility of Congress, not the president. In particular, liberal leader Senator Sumner opposed Lincoln’s policy of limiting the vote to whites, believing that only former slaves would be loyal and therefore should receive the vote. The thought of giving blacks the right to vote disgusted conservative Republicans, who backed Lincoln against the liberals. The feuding within the Republican party did not worry Lincoln, but he feared that if all citizens of rebel states were given the vote, they would simply use their numbers to retake control of the state legislatures and return to power the same people who had started the war in the first place. To avoid that problem, Lincoln decided that large segments of southern society would have to be disenfranchised, including all former Confederate civilian officials; all men who had held the rank of colonel or higher; and anyone who had resigned from the Union army or Congress to join the Confederacy. Everyone else would receive the right to vote after swearing allegiance to the United States. The plan was accepted by Congress, ending the Republican feuding, at least for the moment.
Lincoln seeks nomination for a second term
Although it had become traditional since Andrew Jackson’s presidency for presidents to serve only a single term, Lincoln wanted a second term to prove that his policies were popular, but he was not the first choice of Republican leaders. The various factions within the Republican party did not oppose Lincoln’s overall objectives, but simply questioned whether he had the leadership and administrative ability needed to accomplish them. Chase had pinned his hopes on the tradition of single-term presidencies, and did not expect Lincoln to run for a second term. When it became clear that Lincoln intended to run, Chase presumed that he would represent the radical Republicans and Lincoln the conservative Republicans. Chase was clearly campaigning even though he remained in Lincoln’s cabinet and refused to admit that he was campaigning. His numerous letters to politicians and journalists followed the same theme: Lincoln had made numerous errors, and Chase would do a better job, not that he sought the presidency, although he would naturally accept the burden if pushed by his countrymen. Obviously, this was not a secret, but Lincoln remained unconcerned, believing that it was better to keep his rival in plain sight.
When Chase volunteered to go home to Ohio to campaign for the pro-Union candidate against Democrat Clement Vallandigham, who advocated peace at any cost, even the resumption of slavery, Lincoln agreed. Chase would undoubtedly also campaign for himself, but it was worth the risk to ensure the defeat of the Democrats, and Chase did bring victory in his home state, while the Democratic candidate was also defeated in Pennsylvania. Chase unfortunately believed that he deserved all the credit, that the President and recent victories had nothing to do with it, so he overestimated his support. Again. As Chase’s supporters became increasingly blatant in their unofficial campaign, Chase continued to protest that he was uninvolved. Discreet approaches to the other members of the cabinet failed to win their support or encouragement, but he ignored the signs. Stanton had remained friends with Chase, although not as close as before, but Stanton had grown to respect Lincoln, so Chase did not even have his backing. When Chase’s allies distributed a circular to a hundred leading Republicans advocating the nomination of Chase since Lincoln would never be re-elected, it provoked such a severe reaction that the Republican party in Ohio passed a unanimous resolution supporting Lincoln, ending Chase’s hopes, and forcing him to publicly withdraw from the presidential race.
Ulysses Grant Takes Command (March 1864)
Determined to reinvigorate the Union army, Lincoln promoted Grant to lieutenant general, a rank that had last been used when George Washington commanded the army, and General in Chief of all Union armies, in March, leaving Sherman in command of the army in the west, which proved to be an extremely popular decision. Lincoln would have promoted Grant sooner but the general was being courted by both Republicans and Democrats as a presidential candidate, and Lincoln did not want to elevate a potential rival. This reluctance disappeared after he received a letter from Grant swearing that he had no interest in becoming president. Given the size of the army, Halleck became chief of staff, Grant set the strategy and plans, and Meade actually executed the plans as commander of the Army of the Potomac, roles that suited each man perfectly. Grant would keep Lee busy while Sherman marched through Georgia to capture Atlanta, damaging the Confederate economy. This concerted plan was what Lincoln had been demanding since the start of the war. More important, he needed a victory if he was going to win the election.
Hoping to oblige, Grant hit Lee hard in May and continued to hit hard, but the direct approach produced astonishing casualties because Lee had chosen the Wilderness, a dense maze of bogs and ravines, as the battleground, which made it impossible for Grant to employ his greater advantages in manpower and artillery. At the end of the bloodletting, Lee was safe behind his defences, and Grant did not know what to do next. Although heartbroken by the long lines of wounded arriving in Washington, Lincoln remained confident that Grant would persevere despite the horrendous casualties.
1864 Presidential Election
Lincoln did not attend the Republican convention and stated that he did not care who was selected as vice president. It is unknown why he made no effort to keep Hamlin, but he may have felt that Hamlin was too radical and would alienate the War Democrats and conservative Republicans needed to win the election. Most important, the position simply was not considered that important at the time. Senator Andrew Johnson, former governor of Tennessee, was chosen as vice president because he was a War Democrat and the sole southern senator to remain in the Senate after the outbreak of war. All that Lincoln needed was a decisive victory, but Sherman was advancing at a snail’s pace through Georgia, and Grant still appeared unable to get Lee to fight in the open.
Although the members of the cabinet feuded among themselves, driven by personal jealousy and genuine political differences, they were all effective in their duties. Stanton had revitalized the War Department; Blair had modernized the primitive postal system to create a national system which enabled the soldiers to remain in touch with their distant families; Welles oversaw the dramatic expansion of the navy; and Chase provided the key service of finding the money needed to pay for the war. Most important, they were all personally loyal to Lincoln, so he kept them in their positions. However, there was one exception: Salmon P. Chase. Aside from conspiring to win the presidential nomination without openly declaring his candidacy, Chase had repeatedly pressured Lincoln to agree to policy decisions by threatening to resign. Tiring of the conflict and more secure in his position now that he had the nomination, when Chase offered yet another resignation as a veiled threat to have his way in a dispute over the position of assistant treasurer of New York, Lincoln accepted the resignation, replacing him with Maine Senator William Fessenden, stunning Chase and angering liberal Republicans in the Senate, who began talking about supporting Fremont instead.
When Congress passed a harsher reconstruction bill than Lincoln wanted on July 2, he vetoed it, arguing that Congress did not have the authority to ban slavery. Infuriated, Republican senators and congressmen ranted in the press against Lincoln. The president’s popularity fell further when Confederate general Jubal Early suddenly attacked Maryland in mid-July, threatening Washington, although a desperate stand by troops commanded by General Lew Wallace delayed Early long enough for Grant to send reinforcements to drive him away from the capital. When Lincoln ordered the draft of an additional 500,000 men, he was harshly criticized, especially since Congress had abolished the right to commute military service by paying $300. Grant understood that Lincoln’s political situation depended on a military victory, and he tried to deliver that victory, but both he and Sherman were stuck in long, wearying sieges.
Lincoln’s own supporters were convinced that he would be defeated by McClellan, the Democratic candidate, who had announced that he would bring peace by allowing the south to keep slavery if it returned to the Union. Despite gloomily accepting his likely defeat, Lincoln, supported by cabinet members Seward, Stanton and Fessenden, refused to negotiate with the Confederacy, even though negotiation would probably have ensured victory in the election. Worse, he realized that if he lost the election, the new president would likely recognize the Confederacy, bringing an end to the Union. The refusal to abandon emancipation threatened to cost him the support of War Democrats and conservative Republicans, who were beginning to seek a replacement candidate, since Lincoln seemed bound to lose.
Lincoln’s chances looked bleak until a message from Sherman arrived on September 3, days after the Democratic Convention, stating “Atlanta is ours and fairly won.” Other, smaller Union victories restored Lincoln’s confidence, and he made it clear that he would not consider resigning as Republican candidate, he would not open negotiations with the Confederacy, and he would not break his promise to black people by permitting a return to slavery.
McClellan still remained a threat, since he was backed by powerful banking, industrial and railroad tycoons, working-class men afraid that the Emancipation Proclamation meant equality with blacks, and liberals opposed to Lincoln’s suppression of habeas corpus and free speech, while the draft remained extremely unpopular. Hoping to mend fences with the Radicals, Lincoln asked Postmaster General Montgomery Blair, an object of radical hatred, to resign from the cabinet, where he had been feuding with Stanton for months. It is a tribute to Lincoln’s generous nature that he retained the loyalty of the powerful Blair clan despite the forced resignation of Montgomery Blair. Even Chase agreed to campaign on behalf of Lincoln, motivated by the hope of appointment as Chief Justice of the Supreme Court, while the abolitionist wing was naturally opposed to McClellan. Lincoln’s courtship of Republican liberals was aided by independent candidate John Fremont’s decision to withdraw from the race in order to ensure that McClellan was defeated, thus preventing the return of slavery. Fremont had realized that he had little chance of victory, especially when he learned that radical senators like Wade and Chandler would be backing Lincoln now that Blair had left the cabinet.
Despite the genuine possibility of defeat, Lincoln refused to stack the deck in his favor by speeding up the admission of the territories of Colorado and Nebraska to the Union, even though they would have voted for him. The Democrats were confident that the majority of the soldiers would vote for McClellan, believing that his former troops still worshipped him. They would be wrong. In the end, Lincoln won handsomely, thanks partially to the overwhelming support of the military, but the win was less a mandate for his views on emancipation and more a reaction against McClellan.
Second Presidential Term
Starting a second term, Lincoln could have replaced his entire cabinet, but he was happy overall with the men, now that the key troublemakers, Chase and Blair, were gone. They came from different wings of the party, but factional infighting was reduced. Most important, no one was more aware of the achievements of Seward, Stanton, Bates and Welles than Lincoln. Bates left the cabinet because he was too old to continue, and he was replaced by James Speed, the brother of Joshua Speed, Lincoln’s old roommate in Springfield, because Lincoln trusted him, he was respected, and he had been a loyal Union man in Kentucky, a critical border state. Once again demonstrating his inability to hold a grudge, Lincoln appointed Chase chief justice because he was the best choice, even though the key members of his cabinet urged him to select Blair. The decision paid off, and Lincoln received more support from the Radicals in Congress for his reconstruction plan for the south.
When Sherman and his army disappeared into the south during his march to the sea, living off the land while burning a path through some of the richest farmland in the Confederacy, much of the North, Lincoln included, worried that the army would be cut off, surrounded and eliminated. The outside world had no news of Sherman’s army for 32 days, and they were nervous days for Lincoln, since the defeat of the army would mean that the war would last even longer, something growing numbers of voters had no patience for. Meanwhile, Sherman’s troops left a path of devastation sixty miles wide behind them, wrecking every railroad, burning any food they could not carry and killing every animal. When the army reached Savannah, the news flashed across the north that it had not only survived but prevailed, ruining the state of Georgia and denying the rest of the Confederacy the food it desperately needed. News of Sherman’s feat reached Washington on the evening of December 14, and a message announcing the destruction of a Confederate army at Nashville arrived the next day.
The Thirteenth Amendment
Following these two major victories, Lincoln turned his attention to the proposed Thirteenth Amendment, which would emancipate all of the slaves, personally lobbying reluctant conservative Republicans and Democrats. The amendment had passed in the Senate the previous spring, but it had failed to pass the House, where the congressmen had voted along party lines. His efforts bore fruit when the amendment was approved by the required two-thirds majority, although it was a tight vote, and it seems likely that several congressmen were swayed by promises of patronage or other practical benefits. The galleries in the House were packed when the amendment was put to a vote and officially passed on January 31, 1865.
Unfortunately, his relationship with his wife had dissolved over the past couple of years, especially after the death of their son Willie in 1862, and they spent little time together.
Many politicians thought his fascination with humorous stories and the writings of humorists was a distraction and unsuitable given the situation, but it was Lincoln’s method of dealing with the crushing burden of leading a nation during wartime.
Despite Lincoln’s hard work to push through conscription to provide Grant and Sherman with the men they needed, he shared their respect for the soldiers, and rarely permitted men to be executed or punished for being AWOL, often intervening with the War Department when a case was brought to his personal attention and ordering the men pardoned. He extended mercy whenever possible to the families of Confederate soldiers, claiming that he would rather make his enemies his friends.
Hampton Roads Conference
When the defeat of the Confederacy seemed imminent, Lincoln set terms of unconditional surrender, and he continued to work even though he was exhausted and in poor health. Meeting with Seward and three peace commissioners sent by the Confederacy, Vice President Alexander Stephens, former Senator R. M. Hunter and former Supreme Court Justice John A. Campbell, aboard the steamer River Queen for four hours on February 3, at what became known as the Hampton Roads Conference, he insisted that negotiations would commence only after the states had stopped their armed rebellion, and he would not relent on emancipation. Stephens asked for a temporary cessation of hostilities to let passions on both sides cool, but Lincoln refused to consider the idea. Seward informed the commissioners that Congress had just passed the Thirteenth Amendment, which made it clear that there would be no return to slavery.
Radical Republicans were outraged when they learned that Lincoln was meeting with peace commissioners from the South, fearing that he would abandon emancipation to obtain peace, and congressman Thaddeus Stevens delivered a speech that savaged the president. However, after Lincoln submitted a report to Congress on the conference, the congressmen realized that he had simply wanted the commissioners to openly state their goals, and a number of congressmen, including Stevens, made speeches praising Lincoln. There is no denying that Lincoln did want peace, since he proposed paying four hundred million dollars to the Confederate states to cover the value of the freed slaves, but the proposal was unanimously rejected by his cabinet, who stated that only fighting would end the war. Davis would have refused the offer anyway.
During the reception after the second inaugural, police guarding the entrance initially refused to permit former slave and leading abolitionist Frederick Douglass to enter, but a guest informed Lincoln who sent a message allowing him to come in. Busy shaking hands, Lincoln loudly greeted Douglass as a friend, called him over and briefly sought his opinion about the inaugural address, which may not sound important, but Douglass became the first African-American to be received formally at the White House by a president.
The End of the Civil War
After the Confederates abandoned Richmond on April 2, and General Lee surrendered to Grant at Appomattox on April 9, the war was basically over, fortunately, since Lincoln and Grant had both feared the cost of another major battle, or worse, that Lee would escape Grant’s trap and commence guerrilla warfare. Disregarding calls for vengeance, Lincoln was determined to be merciful and would not try the rebel leaders for treason, although he hoped that they would make his life easier and flee the country. This forgiveness was not popular with everyone in the government, including Vice President Johnson, who believed fervently that treason must be punished. However, the approach was strongly supported by the three senior Union generals and the army itself, which was doing the actual fighting. While Lincoln was in favor of limited suffrage for blacks, he did not consider interracial marriage, full suffrage, or anything else related to genuine equality. As a result, the radicals were still unhappy with his proposals, while conservative Republicans thought that he was going too fast. This disagreement fuelled a struggle between Congress and the executive over who held the real power.
Naturally, many southerners and sympathizers were angered by the fall of the south, and some sought revenge. Lincoln was looking forward to finishing his term and resuming a normal life when he decided to attend a play at Ford’s Theatre, where he was shot by John Wilkes Booth on the night of April 14, 1865. Lincoln never regained consciousness and died nine hours after the shooting, on the morning of April 15.
Directed by D. W. Griffith, starring Walter Huston and Una Merkel
Abraham Lincoln’s life is presented through a series of episodes that examine his early life, romance with Ann Rutledge, legal career, failed attempt to win a Senate seat, presidential election, and the Civil War. (full review)
Directed by John Ford, starring Henry Fonda and Alice Brady
Abraham Lincoln prepares for his first big case, courts his future wife and debates whether to enter politics. (full review)
Directed by John Cromwell, starring Raymond Massey and Ruth Gordon
Young lawyer Abraham Lincoln falls in love with society belle Mary Todd, whose relentless ambition drives him to pursue a political career, despite his deep-rooted reluctance. (full review)
Directed by Anthony Mann, starring Dick Powell and Adolphe Menjou
A discredited detective learns of a plot to assassinate Abraham Lincoln as he travels by rail through Baltimore to his inauguration. (full review)
Directed by Robert Redford, starring Robin Wright and James McAvoy
Lawyer Frederick Aiken defends Mary Surratt, the owner of the boarding house where John Wilkes Booth and his fellow conspirators plotted the assassination of President Abraham Lincoln.
Directed by Steven Spielberg, starring Daniel Day-Lewis and Tommy Lee Jones
Balancing the conflicting needs of the radical and conservative factions of the Republican Party, President Abraham Lincoln struggles to convince enough Democrats to vote for the Thirteenth Amendment, which will abolish slavery. (full review)
With Malice Toward None: A Biography of Abraham Lincoln, by Stephen B. Oates. New York: Harper Perennial, 2011.
A solid, one-volume account of Abraham Lincoln’s life and career that is a perfect introduction to Lincoln.
Abraham Lincoln: The Prairie Years and The War Years: One-Volume Edition, by Carl Sandburg. New York: Galahad Books, 1993.
Originally written in the 1920s, when memories of the period were fresher, Sandburg makes Lincoln’s environment come alive, so that the reader understands the hardships and rough conditions that people faced. A skilled writer, he doesn’t just relate the facts but includes sentences like the following about Ann Rutledge and Lincoln’s relationship: “probably they formed some mutual attachment not made clear to the community; possibly they loved each other and her hand went into his long fingers whose bones told her of refuge and security.” Because the book contains so much of Lincoln’s writing and so many quotes attributed to him, it is the best book for getting an appreciation of the man, but its explanation of the time and environment he lived in is its weak point. The bulk of the book deals with the Civil War; roughly a quarter covers his life prior to the presidency. Sandburg provides superb character sketches of the main politicians and generals on both sides during the war, illustrating the complexities of each man in several paragraphs and making it easier to understand that the war was caused and carried out by a collection of men unaccustomed to the idea that anyone else could be right. He is so determined to give readers an accurate impression of Lincoln the man that he devotes a chapter to Lincoln’s humorous stories, repeating dozens of them, which can become tiresome.
Team of Rivals: The Political Genius of Abraham Lincoln, by Doris Kearns Goodwin. New York: Simon & Schuster, 2006.
Lincoln, by David Herbert Donald. New York: Touchstone, 1995.
While Donald has undoubtedly performed impressive research, he is not a spell-binding writer. He simply presents Lincoln’s history in a straightforward manner.
|
<urn:uuid:5b5789be-5625-413d-9932-b2b369b0cd31>
|
CC-MAIN-2020-24
|
http://historyonfilm.com/abraham-lincoln-bio/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413624.48/warc/CC-MAIN-20200531182830-20200531212830-00476.warc.gz
|
en
| 0.985059 | 18,467 | 3.671875 | 4 |
James Meredith, left, and Medgar Evers are two of the most historic figures in Mississippi’s civil rights struggles. Evers helped Meredith in his effort to enroll at the University of Mississippi in 1962. He secured the NAACP’s legal team headed by Thurgood Marshall, who had won the Brown v. Board of Education lawsuit, to assist Meredith. Evers himself had been denied admission to Ole Miss law school in 1954.
Evers, born in Decatur, Mississippi, in 1925, returned from service in World War II and enrolled at Alcorn A&M College. He married Myrlie Beasley of Vicksburg before graduating in 1951. They moved to Mound Bayou, where he sold insurance and began to work in voter registration. Named NAACP field secretary in 1954, he moved his family to Jackson, where he continued working for voting rights and the desegregation of schools and other public facilities, and spoke out against racial violence and injustice. Evers was murdered in his driveway early in the morning on June 12, 1963, as he arrived home from a rally. He was buried in Arlington National Cemetery. His killer, Byron de la Beckwith, was finally convicted of the murder in 1994, after two trials in 1964 had ended in hung juries. His widow, Myrlie Evers-Williams, would later chair the board of directors of the NAACP.
Meredith was born in Kosciusko, Mississippi, in 1933. He served in the Air Force and spent two years at Jackson State College before attempting to enter the University of Mississippi. Opposed by Mississippi Governor Ross Barnett, his enrollment triggered riots on the Ole Miss campus in which two people were killed and hundreds wounded. Meredith graduated from the university and received a law degree from Columbia University in 1968. In 1966 he was injured as he led his March Against Fear from Memphis to Jackson. Meredith worked in various business pursuits, wrote his memoirs, and later in life became a Republican and worked on the staff of Senator Jesse Helms. In 2002, the University of Mississippi honored him on the 40th anniversary of his enrollment there. Later that year, his son Joseph received a doctorate of business administration from Ole Miss. Tragically, Joseph died of a heart complication in 2008. Meredith lives in Jackson with his wife. He has a daughter and two sons.
|
<urn:uuid:c178f918-1a17-4d8e-87da-7383b03813e8>
|
CC-MAIN-2014-23
|
http://mdah.state.ms.us/timeline/people/james-meredith-and-medgar-evers/
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510258086.28/warc/CC-MAIN-20140728011738-00146-ip-10-146-231-18.ec2.internal.warc.gz
|
en
| 0.989594 | 478 | 2.90625 | 3 |
Types of Aerobic Septic Systems
While you may think that septic systems are all the same, there are variations. Finding the right one to meet your needs is what we do. We aim to provide you with the newest and greatest in modern septic systems. We’d like to talk to you about a few different types:
- Graveled Systems
- Low Pressure Dosage Systems
- Aerobic Systems
Graveled Systems
These systems are the oldest of the septic tank designs. Some treatment of waste occurs in the septic tank itself, but most of the treatment occurs after wastewater is discharged from the tank. It enters the drain field and is filtered through the sediments below. Bacteria and other organisms consume the organic material in the wastewater, cleansing it as it percolates. These organisms multiply, forming a bio-mat that sits on the soil layer.
When the drain field is in balance, parasites and other organisms keep the bio-mat from becoming too thick. This keeps the system working for long periods of time, but it can eventually lead to excessive build-up. These systems can handle average household use.
Low Pressure Dosage Systems
These systems are used when the soil doesn’t allow for graveled systems. They are commonly used in Texas due to high water tables and flooding.
|
<urn:uuid:246618a0-e03a-460b-88d8-11fae2936c05>
|
CC-MAIN-2020-29
|
https://septicsystemshouston.com/aerobic-septic-systems/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896905.46/warc/CC-MAIN-20200708062424-20200708092424-00156.warc.gz
|
en
| 0.944746 | 265 | 2.640625 | 3 |
PowerPoints with headline points of the above set works. I use these later in the course, after workshopping the pieces either vocally or instrumentally. Students start by completing an aural analysis and annotating scores.
As a department we created this resource to support our students in giving more extended answers when listening. We have printed a class set in A3, laminated them, and hand them out during listening tasks.
I give my year 12 students this diary at the start of the AS course to log their listening habits outside of the lessons.
Last year I did a one hour revision session on Mozart’s Piano Sonata in Bb for local schools at St George’s in Bristol. Below are the resources I used.
|
<urn:uuid:291ee63e-48b1-4b74-b990-4a8200d4b19b>
|
CC-MAIN-2020-24
|
http://www.mymusicclassroom.com/?page_id=1780
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347399820.9/warc/CC-MAIN-20200528135528-20200528165528-00061.warc.gz
|
en
| 0.931472 | 151 | 2.765625 | 3 |
Knut Hamsun (August 4, 1859 – February 19, 1952) was a leading Norwegian author and recipient of the Nobel Prize in Literature for 1920. His most famous novel, Hunger, describes the experiences of a poor but proud intellectual who is modeled on Rodion Raskolnikov, the hero of Fyodor Dostoevsky's Crime and Punishment. Following Dostoevsky, Hamsun was a key transitional figure between nineteenth century realism and the subjectivism of modern prose, such as the irrational world of Franz Kafka. He was noted for his insistence that the intricacies of the human mind ought to be the main object of modern literature. Many modernists embraced the modern, urban culture, but Hamsun shows it to be a terrifying place, devoid of the certainties and securities of pre-modern life. He showed the darker, irrational side of "human progress" at a time when its virtues were largely trumpeted by other modern artists.
Knut Hamsun was born as Knud Pedersen in Lom, Norway in Gudbrandsdal. He was the fourth son of Peder Pedersen and Tora Olsdatter (Garmostrædet). He grew up in poverty in Hamarøy in Nordland. At seventeen, he became an apprentice to a ropemaker, and at about the same time he started to write. He spent several years in America, travelling and working at various jobs, publishing his impressions under the title Fra det moderne Amerikas Aandsliv (1889). In 1898, Hamsun married Bergljot Goepfert (née Bech), but the marriage ended in 1906. Hamsun then married Marie Andersen (born in 1881) in 1909 and she would be his companion until the end of his life. She wrote about their life together in her two memoirs. Marie was a young and promising actress when she met Hamsun, but she ended her career and travelled with him to Hamarøy. They bought a farm "to earn their living as farmers, with his writing providing some additional income." However, after a few years, they decided to move south, to Larvik. In 1918, the couple bought Nørholm, an old and somewhat dilapidated manor house between Lillesand and Grimstad. The main residence was restored and redecorated. Here Hamsun could occupy himself writing undisturbed, although he often travelled to write in other cities and places (usually preferring spartan housing and conditions).
Knut Hamsun died in his home at Nørholm.
Hamsun first received wide acclaim with his 1890 novel Hunger (Sult). The semi-autobiographical work described a young and egocentric writer's descent into near madness as a result of hunger and poverty in the Norwegian capital of Kristiania. To many, the novel presaged the writings of Franz Kafka and other twentieth-century novelists with its internal monologue and bizarre logic. Other important works by Hamsun include the novels Pan, Mysteries, and The Growth of the Soil. Hamsun received the Nobel Prize in literature in 1920. A 15-volume edition of his complete works was published in 1954.
Hamsun was a prominent advocate of Germany and German culture, supporting Germany both during the First and the Second World War. He was a rhetorical opponent of British imperialism and the Soviet Union as well. Despite his immense popularity in Norway and around the world, Hamsun's reputation for a time waned considerably because of his support of Vidkun Quisling's Nasjonal Samling (National Socialist) government. Following a meeting with Joseph Goebbels in 1943, he sent Goebbels his Nobel Prize medal as a gift. Hamsun also met with Adolf Hitler and tried to have him remove Josef Terboven from the position of Reichskommissar of Norway.
After Hitler's death, Hamsun wrote an obituary in the leading Norwegian newspaper Aftenposten, describing him as a "warrior for mankind." It has been argued by supporters that his "sympathies" were those of a country that had been occupied. He sometimes used his status as a man of fame to improve the conditions of his area during the occupation and criticized the number of executions. Still, following the end of the war, angry nationalist crowds burned his books in public in major Norwegian cities. He was charged with treason for his pro-German sympathies. After the war, Hamsun was confined for several months in a psychiatric hospital. A psychiatrist concluded he had "permanently impaired mental abilities," and on that basis the charges of treason were dropped. Instead, a civil liability case was raised against him and in 1948 he was fined 325,000 Norwegian kroner for his alleged membership in Nasjonal Samling, but cleared of any direct Nazi-affiliation. Whether he was a member of Nasjonal Samling or not and whether his mental abilities were impaired is a much debated issue even today. Hamsun stated he was never a member of any political party. Hamsun himself wrote about this experience in the 1949 book, On Overgrown Paths, a book many take as evidence of his functioning mental capabilities. The Danish author Thorkild Hansen investigated the trial and wrote the book The Hamsun Trial (1978), which created a storm in Norway. Hansen was very critical of Norway for its treatment of the elderly Hamsun, writing "If you want to meet idiots, go to Norway." In 1996, the Swedish director Jan Troell based the movie Hamsun on Hansen's book. In Hamsun, famous Swedish actor, Max von Sydow, plays Knut Hamsun, while his wife Marie is played by the Danish actress Ghita Nørby.
Hunger (Sult) is perhaps Hamsun's most famous work. It was published in its final form in 1890, though parts of it had appeared anonymously in the Danish magazine Ny Jord in 1888. The novel is hailed as the literary opening of the twentieth century and an outstanding example of the modern, psychological novel. It portrays the irrationality of the human mind in an intriguing, unique, and sometimes humorous tone.
Written after Hamsun's return from an ill-fated tour of America, Hunger is set in fin-de-siecle Kristiania (now Oslo). It recounts the adventures of a starving young man, whose sense of reality is giving way to a delusionary existence on the darker side of a modern metropolis. A proud young man with an inflated sense of self-importance who is stung by the humiliating set of circumstances he finds himself in, he vainly tries to maintain an outer shell of respectability, while his mental and physical decay are recounted in great psychological detail. His ordeal, enhanced by his inability or unwillingness to pursue a professional career, which he deems unfit for someone of his abilities, is pictured in a series of encounters, which Hamsun himself has described as 'a series of analyses'. In many ways, the protagonist of the novel has traits reminiscent of Rodion Raskolnikov, the protagonist of Crime and Punishment, whose author, Fyodor Dostoevsky, was one of Hamsun's main influences. The influence of naturalist authors like Emile Zola is also apparent in the novel, as is his rejection of the realist tradition.
Hunger encompasses two of Hamsun's literary and ideological leitmotifs:
- His insistence that the intricacies of the human mind ought to be the main object of modern literature. Hamsun's own literary program, to describe "the whisper of the blood and the pleading of the bone marrow," is thoroughly manifest in Hunger.
- His depreciation of modern, urban civilization. In the famous opening lines of his novel, he ambiguously describes Kristiania as "this wondrous city that no-one leaves before it has made its marks upon him." The latter theme is counter-balanced in other of Hamsun's works, such as the novels Mysteries (Mysterier) (1892) and Growth of the Soil (Markens Grøde), which earned him the Nobel Prize in literature, but also a reputation for being a proto-National Socialist Blut und Boden author.
The novel's first person protagonist, an unnamed vagrant with intellectual aspirations, probably in his late 20s, wanders the streets of Norway's capital in a pursuit of nourishment. In four (possibly imagined) episodes, he meets a number of more or less mysterious persons, the most notable being Ylajali, a young woman with whom he has a sexual encounter. Overwhelmed by hunger, he scrounges for meals, while his social, physical and mental state are in constant decline. However, he has no antagonistic feelings towards "society" as such, rather he blames his fate on "God" or a divine world order. He vows not to succumb to this order and remains "a foreigner in life," haunted by "nervousness, by irrational details." He also plays strange pranks on strangers he meets in the streets. A major artistical and economical triumph for him is when he is able to sell a text to a newspaper, but despite this he finds writing increasingly difficult. Towards the end of the story, he asks to spend a night in a prison cell, fooling the police into believing that he is a well-to-do journalist who has lost the keys to his apartment; in the morning he can't bring himself to reveal his poverty, even to partake in the free breakfast they provide the homeless. Finally, when his existence is at an absolute ebb, he signs on to the crew of a ship leaving the city.
- 1877: Den Gaadefulde. En kjærlighedshistorie fra Nordland (published under Knud Pedersen)
- 1878: Et Gjensyn (published under Knud Pedersen Hamsund)
- 1878: Bjørger (published under Knud Pedersen Hamsund)
- 1889: Lars Oftedal. Udkast (11 articles, previously printed in Dagbladet)
- 1889: Fra det moderne Amerikas Aandsliv (The Spiritual Life of Modern America)
- 1893: Ny Jord (Shallow Soil; ISBN 1-4191-4690-4)
- 1895: Ved Rigets Port (At the Gate of the Kingdom)
- 1896: Livets Spil (The Game of Life)
- 1898: Victoria. En kjærlighedshistorie (Victoria; ISBN 1-55713-177-5)
- 1902: Munken Vendt. Brigantines saga I
- 1903: I Æventyrland. Oplevet og drømt i Kaukasien (In Wonderland; ISBN 0-9703125-5-5)
- 1903: Dronning Tamara (play in three acts)
- 1904: Det vilde Kor (poems)
- 1905: Stridende Liv. Skildringer fra Vesten og Østen
- 1906: Under Høststjærnen. En Vandrers Fortælling (Under the Autumn Star; ISBN 1-55713-343-3)
- 1908: Rosa. Af student Pærelius' Papirer (Rosa; ISBN 1-55713-359-X)
- 1909: En Vandrer spiller med Sordin (A Wanderer Plays on Muted Strings; ISBN 1-892295-73-3; also translated, combined with Under Høststjærnen, as Wanderers, ISBN 1-4191-9307-4)
- 1910: Livet i Vold (play in four acts; In the Grip of Life)
- 1912: Den sidste Glæde (The Last Joy; ISBN 1-931243-19-0)
- 1913: Børn av Tiden (Children of the Age)
- 1915: Segelfoss By 1 and 2 (Segelfoss Town, Volumes 1 and 2)
- 1917: Markens Grøde 1 and 2 (Growth of the Soil; ISBN 0-394-71781-3)
- 1918: Sproget i Fare
- 1920: Konerne ved Vandposten I and II (The Women at the Pump; ISBN 1-55713-244-5)
- 1923: Siste Kapitel I and II (The Last Chapter, Volumes 1 and 2)
- 1927: Landstrykere I (Wayfarers; ISBN 1-55713-211-9)
- 1930: August I and II (August, Volumes 1 and 2)
- 1933: Men Livet lever I and II (The Road Leads On, Volumes 1 and 2; ISBN 1-4191-8075-4)
- 1936: Ringen sluttet (The Ring is Closed)
- 1949: Paa gjengrodde Stier (On Overgrown Paths; ISBN 1-892295-10-5)
The December 5, 2005–January 2, 2006 issue of The New Yorker has a major article by Jeffrey Frank. It seems to rely on the Ingar Kolloen biography (two volumes, reportedly totaling about 1,000 pages). In English, Hamsun was never popular and remains largely unknown. His infamous audience with Adolf Hitler is recorded as consisting largely of Hamsun complaining to Hitler about Nazi depredations against Norwegians; at this time he was a largely deaf old man in his 80s. The twenty-first-century consensus puts him in the forefront of the modernists, a forerunner of William Faulkner and Franz Kafka. Ernest Hemingway once credited Hamsun with teaching him "how to write." Nobel Prize-winning writer Isaac Bashevis Singer was also greatly influenced by Hamsun and translated some of his works.
- In From the Cold, The New Yorker, 2008.
References
- Braatøy, Trygve. Livets Cirkel. (The Circle of Life: Contributions toward an analysis of Knut Hamsun's work). J.W. Cappelenes Forlag, 1929. ISBN 8202043158 (1979 edition).
- Ferguson, Robert. Enigma: The Life of Knut Hamsun. New York: Farrar, Straus and Giroux, 1987. ISBN 0374520933
- Humpal, Martin. The Roots of Modernist Narrative: Knut Hamsun's Novels Hunger, Mysteries and Pan. International Specialized Book Services, 1999 ISBN 8256011785
- Kolloen, Ingar Sletten. Svermeren. 2003. (Biography)
- Kolloen, Ingar Sletten. Erobreren. 2004. (Biography)
All links retrieved April 21, 2018.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license, which can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation.
|
<urn:uuid:3825a8dc-ed0c-4e82-bcfc-4db75818f06b>
|
CC-MAIN-2023-40
|
https://www.newworldencyclopedia.org/entry/Knut_Hamsun
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510284.49/warc/CC-MAIN-20230927071345-20230927101345-00060.warc.gz
|
en
| 0.924625 | 3,871 | 2.875 | 3 |
character styles stay/strip/replace in VARIABLES
Headers created with Variables need to be able to handle character styles included in the text in three different ways:
- The Character Style stays present in the VARIABLE;
- The Character Style is stripped;
- The Character Style is mapped to another character style.
These settings should be accessible in the Style: <stylename> section of the TOC options -- customizable for each paragraph style used to generate the TOC.
The most notable use for the mapping style is when the typeface between the TOC and the body text of the book differs,…
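As a sketch of the requested behavior (the data model and function below are hypothetical illustrations, not InDesign's actual scripting API), each styled run in a variable's source text could be resolved against one of the three policies:

```python
# Hypothetical model of the stay/strip/map policies for character
# styles inside a text variable. Not InDesign's real API.

def resolve_styles(runs, policy, style_map=None):
    """runs: list of (text, style-or-None) tuples.
    policy: 'stay' keeps styles as-is, 'strip' removes them,
    'map' replaces each style via style_map (unmapped styles kept)."""
    out = []
    for text, style in runs:
        if policy == "strip":
            out.append((text, None))
        elif policy == "map":
            out.append((text, (style_map or {}).get(style, style)))
        else:  # 'stay'
            out.append((text, style))
    return out

# A running-footer variable built from a story title with an italic span:
title = [("The ", None), ("Odyssey", "Italic"), (" Retold", None)]
print(resolve_styles(title, "strip"))
print(resolve_styles(title, "map", {"Italic": "Footer Italic"}))
```

The mapping case is the one the request highlights: a footer set in a different typeface would map "Italic" to a footer-specific italic style rather than keeping or discarding it.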
Julie J commented
I agree! I am using story titles within our Lesson Titles in textbook page footers. The story title often needs to be italicized, but there is no way to do this when defining Variable text.
|
<urn:uuid:37c5e6ab-3585-45e1-aa2a-5a6d96734f5f>
|
CC-MAIN-2020-16
|
https://indesign.uservoice.com/forums/601021-adobe-indesign-feature-requests/suggestions/36154768-character-styles-stay-strip-replace-in-variables
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371824409.86/warc/CC-MAIN-20200408202012-20200408232512-00075.warc.gz
|
en
| 0.87988 | 186 | 3.046875 | 3 |
Jesus of Nazareth
Why is Jesus Christ associated with Nazareth?
Nazareth, a small village in Lower Galilee, was the boyhood home of Jesus. Joseph and Mary, according to the New Testament, returned there sometime after Jesus’ birth in Bethlehem, a small town in Judea to the south (Matthew 2:23). From Jesus’ youth until he was thirty years of age, Nazareth was his home.
During this period it was not uncommon for a person to be identified with the town where he or she was born or lived (see for example Luke 8:2 where Mary of Magdala is mentioned). As a result, Jesus Christ is identified with Nazareth some seventeen times in the New Testament as “Jesus of Nazareth.”
Even in his death, though he had left Nazareth nearly three years earlier, Jesus was identified with the small village off the main road in the hills of Galilee: “And Pilate wrote a title, and put it on the cross. And the writing was, Jesus of Nazareth the King of the Jews” (John 19:19).
A few years later, following the Resurrection, Peter began to reach out beyond his Jewish people when he visited the Roman centurion, Cornelius, in Caesarea Maritima to share the “good news.” In this momentous meeting, Peter began his famous sermon with the geographical identification of Jesus’ boyhood home: “God anointed Jesus of Nazareth with the Holy Ghost and with power: who went about doing good, and healing all that were oppressed of the devil, for God was with him” (Acts 10:38). As the missionary work of the disciples spread across the Mediterranean basin and the Near East, people well beyond the Holy Land learned about Jesus of Nazareth.
In addition to the traditional name connection to a place, Matthew believed that Jesus’ identification with Nazareth was already known by early Hebrew prophets, “He came and dwelt in a city called Nazareth: that it might be fulfilled which was spoken by the prophets, He shall be called a Nazarene.” (Matthew 2:23).
Nazareth as it may have appeared during the first century A.D. Used by permission, Balage Balogh.
Nazareth‘s Place in the New Testament Story
The Return of the King, [Matthew] 2:19-23
This third passage in Matthew 2 begins with the same structure as we find in the previous one about the flight into Egypt-the angel of the Lord appears to Joseph in a dream and gives a command to get up and take the child and his mother and go back to the land of Israel, to which command Joseph arises and does exactly as he is told (noting the parallel language in vv. 14 and 21). Going to Judea again was truly a good idea, since Herod’s son Archelaeus was ruling there, and so having been warned of this in a dream, he withdrew to the “district” of Galilee, going to live in the small town of Nazareth. This, too, is seen as a fulfillment of Scripture, but notice that here prophets (plural) are referred to for the quotation “he shall be called a Nazarene.”
It has been difficult to find a Scripture or even a combination of Scriptures that match these words. One ingenious suggestion is that Isaiah 11:1 in the Hebrew lies in the background, which speaks of the NZR “branch” from the stump of Jesse, a reference to the messianic figure also referred to as Immanuel in Isaiah 7:14. In favor of this association is the fact that at Qumran, the “branch” in this passage was also interpreted messianically (1QH 6.15: 7.6-19). Though a different Hebrew word is used for branch, this same way of speaking of a messianic figure is found in Jeremiah 23:5; 33:15; Zechariah 3:8 and 6:12. What we are seeing here is indeed midrashic use of the Old Testament, and the combination of such Scriptural material with stories of Jesus, creatively woven together, has been called a midrashic haggadah, but it would be better called midrash and haggadah (narrative), for we have no reason to think the story itself is being embroidered except by the creative addition and handling of the Old Testament.
Another suggestion is that Matthew has in mind the notion of being a Nazarite, which is the term substituted for “one set apart” or a “holy one unto the Lord” in the LXX (cf. Isa 4:3; Judg 13:5-7; 16:17). Jesus Christ then is seen as one holy unto God, a conclusion that might find support in Matthew 19:10-12 if Jesus is referring to himself. However, the usual characterization of Jesus as one who ate and drank with sinners and at weddings (cf. John 2 to Mark 1-3) does not comport with the notion that he took a Nazaritic vow. This suggestion then seems less likely than the connection with the branch oracle.
On the surface of things, the impression left by this account is that Joseph and his family are moving to Nazareth for the first time. What is odd about this story is that of course, another son of Herod, Herod Antipas, was ruling in Galilee, so why would Galilee be better than Judea for the family? But then one must also ask why would Joseph move to such an out of the way town unless there were already family connections there. Or was it chosen precisely because in a town of 500-1,500 at the most, they would be able to disappear or become inconspicuous? It is a town nowhere mentioned in the Old Testament or in earlier Jewish sources, which may explain why the exegetical gymnastics were necessary to relate this move to Nazareth to the Old Testament. Though many scholars think it is difficult to reconcile this account with what Luke 2:39-40 says, which suggests that Jesus’ family was originally from Nazareth, both accounts agree on this key point-that Jesus grew up in Nazareth and came to be called Jesus of Nazareth. It is interesting that one of the castes of priests settled there after the fall of Jerusalem in AD 70, which suggests that it was seen as a ritually pure place.
Ben Witherington III, Matthew (Macon: Smyth & Helwys Publishing, 2006), 71-72.
Ben Witherington III is Professor of New Testament Interpretation at Asbury Theological Seminary in Wilmore, Kentucky.
Cities, Towns and Village: Nazareth in Context
Cities, as the dwelling places of elites, dominated the social and geographical landscape of Greco-Roman antiquity. Elites built, controlled, and inhabited the cities. Caesarea and Jerusalem, of course, were major urban centers in Judea. Herod the Great constructed Caesarea to provide a port on the coast of Palestine and a monumental statement of loyalty to Caesar Augustus. Major cities in the Galilee of Jesus included Sepphoris [modern Zippori] and Tiberias. These cities were founded by Herod Antipas and were the headquarters of Herodian officials. Not surprisingly, in view of the interests of the Jesus movement, they are never mentioned in the Gospels. Capernaum, Taricheae (Magdala), and Cana were administrative towns for fishing and agriculture. Peasants of the Galilean countryside lived in small villages like Nazareth or Nain.
K.C. Hanson and Douglas E. Oakman, Palestine in the Time of Jesus: Social Structures and Social Conflicts (Minneapolis: Fortress Press, 1998), 116-117.
K.C. Hanson has taught biblical studies at Episcopal Theological School and the School of Theology at Claremont, Creighton University, and St. Olaf College.
Douglas E. Oakman is dean of Humanities and professor of Religion at Pacific Lutheran University, Tacoma, Washington
Rear elevation of The Breakers, 2009
| Location | 44 Ochre Point Avenue, Newport, Rhode Island |
| Architect | Richard Morris Hunt |
| Architectural style | Italian Renaissance |
| NRHP Reference # | 71000019 |
| Added to NRHP | September 10, 1971 |
| Designated NHL | October 12, 1994 |
The Breakers is a Vanderbilt mansion located on Ochre Point Avenue in Newport, Rhode Island, United States, on the Atlantic Ocean. It is a National Historic Landmark, a contributing property to the Bellevue Avenue Historic District, and is owned and operated by the Preservation Society of Newport County.
The Breakers was built as the Newport summer home of Cornelius Vanderbilt II, a member of the wealthy United States Vanderbilt family. It is built in a style often described as Goût Rothschild. Designed by renowned architect Richard Morris Hunt and with interior decoration by Jules Allard and Sons and Ogden Codman, Jr., the 70-room mansion has approximately 65,000 sq ft (6,000 m2) of living space. The home was constructed between 1893 and 1895. The Ochre Point Avenue entrance is marked by sculpted iron gates and the 30-foot (9.1 m) high walkway gates are part of a 12-foot-high limestone and iron fence that borders the property on all but the ocean side. The 250 × 120 ft (76 × 37 m) dimensions of the five-story mansion are aligned symmetrically around a central Great Hall.
Part of a 13-acre (53,000 m²) estate on the seagirt cliffs of Newport, it faces east overlooking the Atlantic Ocean.
After the previous mansion on the property, owned by Pierre Lorillard IV, burned in 1892, Cornelius Vanderbilt II insisted that the new building be made as fireproof as possible; as such, the structure used steel trusses and no wooden parts. He even required that the furnace be located away from the house, under Ochre Point Avenue; in winter there is an area in front of the main gate, over the furnace, where snow and ice always melt.
The designers created an interior using marble imported from Italy and Africa plus rare woods and mosaics from countries around the world. It also included architectural elements (such as the library mantel) purchased from chateaux in France.
The Breakers is the architectural and social archetype of the "Gilded Age," a period when members of the Vanderbilt family were among the major industrialists of America. Indeed, "if the Gilded Age were to be summed up by a single house, that house would have to be The Breakers." In 1895, the year of its completion, The Breakers was the largest, most opulent house in the Newport area. It represents the taste of an American upper class—socially ambitious but lacking a noble pedigree—determined to imitate and surpass the European aristocracy in lifestyle; a taste and ambition cynically noted by many members of the European upper classes. However, this cynicism, coupled with assumptions of vulgarity, was not so deeply rooted that it prevented the daughters of these lavish houses, and their associated dollars, from marrying into the European aristocracy.
Vanderbilt died from a cerebral hemorrhage caused by a second stroke in 1899 at the age of 55, leaving The Breakers to his wife, Alice Gwynne Vanderbilt. She outlived her husband by 35 years and died at the age of 89 in 1934. In her will, The Breakers was given to her youngest daughter, Countess Gladys Széchenyi (1886–1965), essentially because Gladys lacked American property. Also, none of Alice's other children were interested in the property while Gladys had always loved the estate.
The Breakers survived the great New England Hurricane of 1938 with minimal damage and minor flooding of the grounds.
In 1948, Gladys leased the high-maintenance property to The Preservation Society of Newport County for $1 a year. The Preservation Society bought The Breakers and approximately 90% of its furnishings in 1972 for $365,000 from Countess Sylvia Szapary, the daughter of Gladys; however, the agreement with the Society granted life tenancy to Countess Szapary. Upon her death in 1998, The Preservation Society agreed to allow the family to continue to live on the third floor, which is not open to the public.
It is now the most-visited attraction in Rhode Island with approximately 400,000 visitors annually and is open year-round for tours.
The pea-gravel driveway is lined with maturing pin oaks and red maples. The formally landscaped terrace is surrounded by Japanese yew, Chinese juniper, and dwarf hemlock. The trees of The Breakers' grounds act as screens that increase the sense of distance between The Breakers and its Newport neighbors. Among the more unusual imported trees are two examples of the Blue Atlas Cedar, a native of North Africa. Clipped hedges of Japanese yew and Pfitzer juniper line the tree shaded foot paths that meander about the grounds. Informal plantings of arbor vitae, taxus, Chinese juniper, and dwarf hemlock provide attractive foregrounds for the walls that enclose the formally landscaped terrace. The grounds also contain several varieties of other rare trees, particularly copper and weeping beeches. These were hand-selected by Ernest W. Bowditch, a landscape architect and civil engineer based in the Boston area. Bowditch’s original pattern for the south parterre garden was determined from old photographs and laid out in pink and white alyssum and blue ageratum. The wide borders paralleling the wrought iron fence are planted with rhododendron, mountain laurel, dogwoods, and many other flowering shrubs that effectively screen the grounds from street traffic and give visitors a feeling of seclusion.
- Staff's restrooms
- Entrance Foyer
- Gentlemen’s Reception Room
- Ladies’ Reception Room
- Great Hall (a 50 × 50 × 50 ft (15 m) cube) – Over each of the six doors which lead from the Great Hall are limestone figure groups celebrating humanity's progress in art, science, and industry: Galileo, representing science; Dante, representing literature; Apollo, representing the arts; Mercury, representing speed and commerce; Richard Morris Hunt, representing architecture; and Karl Bitter, representing sculpture.
- Main Staircase
- Library - Clearly themed for its use, the library has a coffered ceiling painted with a dolphin, symbolic of the sea and hospitality, above Circassian walnut paneling impressed with gold leaf in the form of a leather-bound book. Between the ceiling and the gold paneling lies green Spanish leather embossed with gold, which continues into the library from the alcove where the inhabitants played cards. In the central library rest two busts: the bronze bust depicts William Henry Vanderbilt, the oldest child of Cornelius II and Alice, who died of typhoid at the age of 21 while attending Yale University; a library at Yale is now dedicated to him. The second bust, in marble, is of Cornelius Vanderbilt II. The fireplace, taken from a 16th-century French chateau (Arnay-le-Duc, Burgundy), bears the inscription "I laugh at great wealth, and never miss it; nothing but wisdom matters in the end."
- Music Room - The room's open interior was used for recitals and dances, with woodwork and furnishings designed by Richard Van der Boyen and executed by Jules Allard and Sons. The room has a gilt coffered ceiling lined with silver and gold, as well as an elliptical ceiling molding bearing inscriptions in French of song, music, harmony and melody; around the edge are the names of well-known composers. The fireplace is of Campan marble, and the tables were designed to match. Mr. Vanderbilt was known to play the violin and Mrs. Vanderbilt the piano, a Second Empire French mahogany ormolu-mounted piano.
- Morning Room - This room, a communal sitting room facing east to admit the morning sun, was used throughout the day and was designed by Jules Allard, head of the French firm Jules Allard and Sons. Around the room is platinum paneling with muses of the sciences and humanities. All interior woodwork and furnishings were designed and constructed in France, then shipped to America for assembly.
- Lower Loggia
- Billiards Room - This room, in the style of Ancient Rome, was designed by Richard Morris Hunt and shows his competence in stone works. The great slabs of Cippolino marble from Italy form the walls, while alabaster arches provide contrast. Throughout the room there is an assortment of semi-precious stones, forming mosaics of acorns (the Vanderbilt family emblem, intended to show strength and longevity) and billiards balls on the top walls. The Renaissance style mahogany furniture provides further contrast with that of the colored marble.
- Dining Room - The 2,400 square foot dining room is the house's grandest room, with 12 freestanding alabaster Corinthian columns supporting a colossal carved and gilt cornice. Rich in allegory, this room exemplifies what 19th-century technology could do with Roman ideas and 18th-century inspiration. On the ceiling, the goddess Aurora is depicted bringing in the dawn on a four-horse chariot as Greek figures pose majestically. A 16th-century-style table of carved oak seats up to 34. Two Baccarat crystal chandeliers light the room with either gas or electricity, and 18-, 22- and 24-carat gold leaf is adhered to the walls with rabbit-skin glue.
- Breakfast Room - The Breakfast Room, with its modified Louis XV-style paneling and furnishings, was used for family morning meals. The furnishings, colors and gilt, although still extravagant in their use, contrast with the dining room's more lavish decoration.
- Pantry - A central dumbwaiter serves to bring the food and wine from the kitchen and cellar to the principal rooms. The pantry was also used for the storage of the family's table silver; this was brought with the family when they traveled, and stored in a steel vault. An intercom system allows the butler to direct the necessary servants to their needed locations, and each number on the caller corresponds to a number on a room.
- Kitchen - The kitchen, unlike others of the period, was situated on the first floor away from the main house to prevent the possibility of fires and to keep cooking smells from reaching the main parts of the house. The well-ventilated room holds a 21-foot coal-burning cast iron stove. The work table is made of zinc, a metal which served as a forerunner to stainless steel; in front of it is a marble mortar used to crush various ingredients. Ice cut from the local ponds kept cool the side rooms where food was stored, and provided a colder room for assembling confections.
- Mr. Vanderbilt's Bedroom - As with the rest of the second floor, Ogden Codman designed this room in Louis XIV style. The bed is of carved walnut, and the mantel of gray and peach Numidian marble hosts a large mirror above it to bring more light into the room. The room holds much memorabilia of family and friends, though Cornelius Vanderbilt II enjoyed only a year at The Breakers in good health before being disabled by a stroke; he died in 1899.
- Mrs. Vanderbilt's Bedroom - Designed as a perfect oval, Alice Vanderbilt's room accommodates multiple doors connecting the bedrooms, cut flush into the wall to leave an undisturbed picture of geometric perfection. Alice had four closets to allow for her up to seven clothing changes per day, and a call system to delegate family needs to the servants. The room also served as her study and had many bookshelves. Additionally, discreetly designed corridors permitted female servants to maintain the laundry and costume needs of the family in a seemingly invisible fashion.
- Miss Gertrude Vanderbilt's Bedroom - Gertrude, daughter of Cornelius II and Alice, was a less conventional character who wished to be loved for her personality rather than her wealth and family name; she later found her match in Harry Payne Whitney and became an artist. Around the room are multiple pieces of her artwork, including "The Engineer", inspired by her brother during WWI, "Laborer", and another that commemorates the American Expeditionary Force of WWI. She moved into The Breakers when she was 19. Above her bed is a portrait of Gertrude at 5 years old, and to the left of the bed is a sketch of her as a young woman.
- Upper Loggia - Serving as an informal living room, the upper loggia faces east and opens to the Atlantic. During the summer, the windows could be opened on both sides to create a breezeway. The walls are painted marble, and the ceiling is painted to depict three canopies covering the sky. The lawn, designed by James and Ernest Bowditch, hosted many parties and was kept by a gardening staff of 20, who also introduced and maintained various non-indigenous trees.
- Guest Bedroom - This room exemplifies the Louis XVI-style through furniture, woodwork and light fixtures, with Neoclassical style abounding in the interior. The wall paneling has never been retouched, though the rest of the room has been restored by the preservation society.
- Countess Széchenyi's Bedroom - Designed by Ogden Codman in a simple, elegant 18th-century style, this room exhibits an ivory and cream colored design.
- There are also two other small bedrooms located on the second floor.
The third floor contains eight bedrooms and a sitting room decorated in Louis XVI-style walnut paneling by Ogden Codman. The north wing of the third floor was reserved for domestic servants. With ceilings near 18 feet high, Richard Morris Hunt created two separate third-floor levels to allow a mass congregation of servant bed chambers. This was all part of the configuration of the house, built in Italian Renaissance style with a pitched roof. Flat-roofed French classical houses in the area could conceal a staff wing behind the roofline; The Breakers does not feature this luxury.
A total of 30 bedrooms are located in the two third-floor staff quarters. Three additional bedrooms for the butler, chef, and visiting valet are located on the mezzanine "Entresol" floor, between the first and second floors just to the rear of the main kitchen.
The Attic floor contained more staff quarters, general storage areas, and the innovative cisterns. One smaller cistern supplied hydraulic pressure for the 1895 Otis lift, still functioning in the house though wired for electricity in 1933. Two larger cisterns supplied fresh and salt water to the many bathrooms in the house.
Over the Grand Staircase is a stained glass skylight designed by artist John La Farge. Originally installed in the Vanderbilt's 1 West 57th Street New York City townhouse dining room, the skylight was removed in 1894 during an expansion of the house.
The Breakers is also a definitive expression of Beaux-Arts architecture in American domestic design by one of the country's most influential architects, Richard Morris Hunt. The Breakers, Hunt's final project, is one of the few of his works to survive the demolitions of the last century and is therefore valuable for its rarity as well as its architectural excellence. The Breakers helped confirm Hunt as the "dean of American architecture" and helped define the era in American life which Hunt himself shaped.
- Foundation: Brick, Concrete and Limestone
- Trusses: Steel
- Walls: Indiana Limestone
- Roof: Terra cotta Red Tile
- Wall Panels: Platinum leaf (eight reliefs of mythological figures only)
- Other: marble (plaques), wrought iron (gates & fences)
- "National Register Information System". National Register of Historic Places. National Park Service. 2007-01-23.
- "Breakers, The". National Historic Landmark summary listing. National Park Service. Retrieved 2008-06-28.
- Gannon, Thomas. Newport Mansions: the Gilded Age. Fort Church Publishers, Inc., 1982: p. 8.
- Mackenzie Stuart, p. 240 and throughout.
- Miller, G. Wayne (2000-07-07). "Fortune's Children". A Nearly Perfect Summer (Providence Journal). Retrieved 2007-08-10. "The Breakers left family ownership three decades ago, when the Preservation Society bought it for $365,000, a pittance—but let Paul, Gladys and their mother continue summering on the third floor, formerly servants' quarters. Mother died in 1998 but her children summer there still, hidden from the hundreds of thousands of tourists who explore below."
- United States Department of the Interior / National Register of Historic Places Registration Form (Rev.8-86)
- Newport Preservation Society's Breakers Audio Tour
- "National Historic Landmark Nomination".
- "Mansion wall panels found to be platinum" – The Boston Globe (the panels had long been thought to be gold leaf).
- Wilson, Richard Guy, Diane Pilgrim, and Richard N. Murray. American Renaissance 1876–1917. New York: The Brooklyn Museum, 1979.
- Baker, Paul R. Richard Morris Hunt. Cambridge, MA: The MIT Press, 1980.
- Benway, Ann. A Guidebook to Newport Mansions. Preservation Society of Newport County, 1984.
- Croffut, William A. The Vanderbilts and the Story of their Fortune. Chicago and New York: Belford, Clarke and Company, 1886.
- Downing, Antoinette F. and Vincent J. Scully, Jr. The Architectural Heritage of Newport, Rhode Island. 2nd edition, New York: Clarkson N. Potter, Inc., 1967.
- Ferree, Barr. American Estates and Gardens. New York: Munn and Company, 1904.
- Gannon, Thomas. Newport Mansions: the Gilded Age. Fort Church Publishers, Inc., 1982.
- Jordy, William H., and Christopher P. Monkhouse. Buildings on Paper: Brown University, Rhode Island Historical Society and Rhode Island School of Design, 1982.
- Lints, Eric P. "The Breakers: A Construction and Technologies Report" Newport, RI: The Newport Preservation Society of Newport County, 1992.
- Metcalf, Pauline C., ed. Ogden Codman and the Decoration of Houses. Boston: The Boston Athenaeum, 1988.
- Patterson, Jerry E. The Vanderbilts. New York: Harry N. Abrams, Inc., 1989.
- Perschler, Martin. "Historic Landscapes Project" Newport, RI: The Preservation Society of Newport County, 1993.
- Schuyler, Montgomery. "The Works of the Late Richard M. Hunt," The Architectural Record, Vol. V., October–December, 1895: p. 180.
- Smales, Holbert T. "The Breakers" Newport, Rhode Island. Newport, RI: Remington Ward, 1951.
- Thorndike, Joseph J., ed. Three Centuries of Notable American Architects. New York: American Heritage Publishing Co., Inc., 1981.
- Mackenzie Stuart, Amanda. Consuelo & Alva. London: Harper Perennial, 2006. ISBN 978-0-00-712731-3.
Wikimedia Commons has media related to The Breakers.
- Preservation Society of Newport County - Breakers Page
- Complete details of the building, from the United States Department of the Interior, National Park Service (Adobe PDF file)
- The Breakers, Ochre Point Avenue, Newport, Newport, RI at the Historic American Buildings Survey (HABS)
Tomorrow there will be lots of sweet treats, flowers and other goodies given as gifts in celebration of Valentine's Day!
Please read on and follow these important tips to keep your pet safe from potential Valentine’s Day hazards.
PET SAFE FLOWERS, PLANTS and FLORAL ARRANGEMENTS:
There are certain flowers and plants that can be harmful or even deadly to dogs and cats. Before choosing a plant as a Valentine's gift or picking the flowers for your sweetie's floral arrangement, please check this list of toxic and non-toxic plants from the ASPCA's website.
CHOCOLATE and SWEETS:
Always keep chocolate and other sweets out of reach of your pets. Chocolate can be especially deadly to pets. According to the ASPCA's website, "Seasoned pet lovers know the potentially life-threatening dangers of chocolate, including baker's, semi sweet, milk and dark. In darker chocolates, methylxanthines—caffeine-like stimulants that affect gastrointestinal, neurologic and cardiac function—can cause vomiting/diarrhea, hyperactivity, seizures and an abnormally elevated heart rate. The high-fat content in lighter chocolates can potentially lead to a life-threatening inflammation of the pancreas. Go ahead and indulge, but don't leave chocolate out for chowhounds to find."
RIBBONS and BOWS:
Fancy packaging and curling ribbons can be toxic and deadly to pets if they are ingested. Cats often love to play with curling ribbon, packaging and string which can become entangled in their intestines and cause major damage. Also watch out for balloons and strings.
COCKTAILS and ALCOHOL:
According to the ASPCA, “Spilled wine, half a glass of champagne, some leftover liquor are nothing to cry over until a curious pet laps them up. Because animals are smaller than humans, a little bit of alcohol can do a lot of harm, causing vomiting, diarrhea, lack of coordination, central nervous system depression, tremors, difficulty breathing, metabolic disturbances and even coma. Potentially fatal respiratory failure can also occur if a large enough amount is ingested.”
What's happening today is no different
to what happened to Pharaoh
during the Exodus from Egypt...
It's not just a thrilling story of old.
The Exodus story hasn't concluded with the Jews leaving Egypt.
The Government that's controlling the world before the Coming of Moshiach
On 3 different levels:
Just like in the first Redemption from Egypt,
there's a Pharaoh that stands in the way of the Redemption that's coming to the World,
with the advent of Moshiach.
The contemporary Government before Moshiach is the global Pharaoh that stands in the way of the new era (Redemption)
Other influential institutions are possibly included with "Government" —
like universities, for example.
Just going global.
Getting past Pharaoh was the key to the Exodus.
Getting past the Government is the key to Moshiach.
Egypt has to know that I am G-d.
'וידעו מצרים כי אני ה
Pharaoh is not to be exterminated.
He's to be transformed.
"ומלכות הרשעה... ותכניע"
In our daily prayers we ask for the Wicked Kingdom to be subjugated.
Ultimately it's not about annihilation (1-3) – but about subjugation (4) – of the evil reign prior to Moshiach.
It's when we reach the final stages of Levels 1 and 2, that we need to deal with the toughest / most stubborn aspects of Pharaoh.
Pharaoh was the king of the world, but with Moses turning up, everything started falling around him.
Did he take notice?
Only after 10 plagues.
Example of Government's Hands are Tied:
Most of the Exodus story is about breaking Pharaoh, during the last year of their stay.
| Level | Egypt Means | Exodus Means |
| 1. Culture | Materialism, limiting the person's perspectives to physical ideologies | Spiritual perspective, and how to use material instincts to serve G-d |
| 2. Personal | Evil Inclination, limiting the individual's service of G-d | Overcoming limitations from within |
| 3. Before Moshiach | Government running the show as "usual" | Moshiach revealing G-d's sovereignty in the World |
"The Exodus from Egypt is a big foundation and a strong pillar in our Torah and our belief" - Sefer Hachinuch, Mitzvah 21.
Pharaoh didn't die - Pirkei D'Rabbi Eliezer, Ch. 43: Pharaoh ended up becoming the king of Nineveh who features in the story of Jonah
The Jewish People wanting to return to Egypt - Exodus 16:3, Numbers 11:18, Numbers 14:2, Numbers 20:5, Numbers 21:5.
Preferring physical crushing labor under Pharaoh to serving G-d - Numbers 11:5.
The Manna bread was considered by them as "nothing" - Numbers 11:6: "אין כל" -- "There is nothing".
The Exodus will conclude with the Coming of Moshiach - Sefer Hamaamarim 5708 p.159.
Third Temple at the end of the Song of the Sea (שירת הים) - Exodus 15:17: "You will bring them and implant them on the mount of Your heritage, a foundation for Your dwelling place that You, G-d, have made -- a Sanctuary, my L-rd, that Your hands established. G-d shall reign for all eternity." The "Sanctuary" referred to here is the Temple that will be built (By G-d) in future times (when He will reign forever) - Rashi on this verse.
Egypt within the person - Torah Or, Shemot 49d.
Remembering the Exodus from Egypt all the time - Deuteronomy 16:3 "All the days of your life" -- literally.
Pharaoh = Evil Inclination - Igeret Hamusar of the Rambam.
Pharaoh = nape of the neck - Ibid.
מצרים (Egypt) = confinements - Torah Or, Ibid.
ארץ מצרים = wanting Egypt -
Original Exodus from Egypt has enabled religious freedom - Sefer Hamaamarim, ibid.
Egypt on every level - Torah Or, ibid.
All limits will be undone - Talmud, Shabbat 118a.
Synergy between Pharaoh and the Government at the End of Days - based on the general synergy between the Exodus from Egypt and the Final Redemption that's alluded to in the verse "Like the days of your going out of the land of Egypt, will I show him wonders" (Micah 7:15). Also see Pharaoh's Tzaraat and Bereshit Rabba 1:26.
Transformation of Egypt/Pharaoh in the Final Redemption - Sefer Hasichot 5752 Parashat Bo Par. 10-11. Also see Talmud Sanhedrin 59b regarding the Serpent (the embodiment of Evil in this world) which will be an important servant for G-d in the Future Times, noting Pharaoh is compared to a serpent (Ezekiel 29:3).
Jacob's blessing of Pharaoh's welfare - Genesis 47:7.
One should pray for the well-being of the government - Mishna Avot 3:2
Design (Wikipedia’s definition):
Design is the planning that lays the basis for the making of every object or system. It can be used both as a noun and as a verb and, in a broader way, it means applied arts and engineering.
Creative Ambiguity (my definition):
Creative Ambiguity is brought about when an intangible idea, process or way of thinking is defined in an imprecise way. It is a delicately-balanced conceptual space in which the very nature of the ambiguity leads to creative outputs.
So if Creative Ambiguity is a good thing, how do we go about planning and designing for it? I suggest 3 guidelines:
- Avoid using precise language if your understanding of an idea, process or way of thinking is imprecise.
- View other people’s opinions in an and/and/and way rather than either/or. Embrace the greyness!
- When coming across a new idea, process or way of thinking, find out if it has been previously defined. If not, come up with a new term and throw it out there for people to comment upon.
According to Pragmatism, things don’t have to ‘exist’ they just need to be ‘good in the way of belief’. Is Creative Ambiguity good in the way of belief for you?
Effective leadership is necessary in medicine to foster an organizational culture that promotes patient safety. By fostering an environment of psychological safety that encourages others to feel safe communicating issues and speaking up with concerns, leaders are able to act decisively and timely to protect patients and employees. Ultimately, leaders who promote a positive organizational climate contribute to higher job satisfaction among employees, decreased burnout, fewer medical errors, and an overall improved culture of safety.
Effective leadership is necessary in medicine to foster an organizational climate that promotes patient safety. Leadership is the cornerstone of success to any project or business. Effective leaders lead by example, value a strong work ethic, and demonstrate a commitment to the mission of an institution or department beyond that of self-preservation.1 Capable leaders use a clear vision to instill a larger sense of purpose, setting the tone for the direction of an organization. Leaders who promote a positive and cohesive work environment engender trust among providers and staff and establish psychological safety for employees. Leadership determines organizational priorities and can funnel resources toward important safety initiatives. Fostering an environment that encourages others to speak up with concerns allows leaders to act decisively and in a timely manner to protect patients and employees. Ultimately, leaders who promote a positive organizational climate contribute to higher job satisfaction among employees, decreased burnout, fewer medical errors, and an overall improved culture of safety.2
Improving safety culture within health care systems is an essential component of preventing and reducing errors. The Joint Commission defines safety culture as the collection of “beliefs, values, attitudes, perceptions, competencies, and patterns of behavior that determine the organization’s commitment to quality and patient safety”.3 A core measure of a strong safety culture is the willingness of employees, whether clinical or in support roles, whether newly hired or experienced, to feel comfortable speaking up when they see something amiss. It is imperative that leaders support and foster an environment in which speaking up is encouraged so that care teams can learn from adverse events, close calls, and unsafe conditions. This can be accomplished by encouraging a transparent and nonpunitive approach to reporting. Moving to a “just culture” where individual blame is minimized or removed, and a focus is placed on system faults that contribute to adverse events, can improve a safety culture.
Leaders must also adopt and champion efforts to eradicate intimidating behaviors. When unprofessional behavior is tolerated within an organization, it undermines patient safety. Failing to address unprofessional behavior in a fair and transparent manner allows such behavior to persist and signals to new employees that such behavior may be tolerated, potentially promoting more of it. Addressing unprofessional behavior in disruptive employees can yield improved staff satisfaction and retention, enhanced reputation, improved patient safety and risk-management experience, and better work environments.4
Team members who identify unsafe conditions or who have good suggestions for safety improvements should be recognized and rewarded. Leaders can use a number of techniques to improve safety culture, including use of surveys to identify culture gaps, encouraging teamwork training, performing executive walk-rounds, and establishing unit-based quality and safety teams.5 By proactively assessing system strengths and vulnerabilities, health care teams can track progress and prioritize areas to improve safety culture.
Psychological safety is defined as the belief one will not be punished for making an error or speaking up. It is a core component of a safe culture, and intertwines with both patient safety and burnout. Psychological safety allows for creativity, speaking one’s mind, and lack of fear for having new, different, or dissonant ideas.6 A psychologically safe environment also permits providers to discuss issues related to their own work-life balance. In creating psychological safety, leaders must foster an environment where providers feel safe communicating issues with patient care. Effective leaders maintain open lines of communication and remain open to feedback. Though this may subject one to increased vulnerability, the ability to accept feedback and react constructively allows leaders to recognize problems earlier and deal with them proactively.1 Otherwise, team members may not speak up about a problem for fear of retaliation or humiliation.
Organizational Culture and Employee Burnout
An organization’s culture can enhance patient safety and drive quality. It can also contribute to burnout (Figure 1). Burnout is a syndrome conceptualized as resulting from chronic workplace stress that has not been successfully managed.7 Traditionally, organizational culture in health care has not allowed room for a discussion of work-life balance. Providers have feared voicing concerns regarding their personal needs that may not align with departmental or institutional goals. Some institutions may only start to pay attention to burnout when it begins to contribute to loss of productivity, patient access, lower patient safety scores, and increased costs. Frequent management changeover or uncertainty, lack of a strategic plan, or goal incongruence can lead to physicians feeling devalued or ineffective. High rates of turnover can be a sign that ineffective leadership is contributing to high burnout rates in departments or institutions. Turnover leads to increased costs, recruitment expenses, agency/locum bridging, higher rates of paid time off, and need for additional support services, to name a few.
Today, as data continue to mount relating burnout among health care workers to increased incidence of medical errors and malpractice, it is in every institution’s best interest to address employee stress and work to successfully manage it. Following the Institute of Medicine (IOM) landmark report asserting that deaths from medical errors had become the third leading cause of death in the US behind cancer and heart disease, quality improvement initiatives to reduce patient harm have proliferated nationwide.8 Recent studies have suggested a two-fold increase in medical errors associated with clinician burnout compared to errors not associated with burnout, with an overwhelming 55% of respondents reporting burnout symptoms.9 If these issues go unaddressed, the health care professional’s well-being, or potentially even his or her safety, can become compromised. To prevent burnout and increase wellness among providers, leaders should reflect on an organization’s climate and implement change when needed. By implementing monitoring tools, including workplace wellness initiatives and workplace response teams, leaders can foster an organizational culture that prevents burnout.
Key Attributes of Effective Leaders
Acquisition of certain attributes in leadership is so important that a multitude of workshops, courses, and degrees have been established to help hone and refine these skills. The following list, though not comprehensive, reviews a few of the most important attributes that distinguish an effective leader from an ineffective one (Table 1).
Table 1: Key Attributes of Effective Leaders
Effective communication is necessary to allow an organization’s people to know what is expected, valued, and appreciated. Clearly articulated goals help people remain focused, track progress, and discuss challenges openly. As new ideas are developed, it is critical to clearly define mission objectives and review them at regular intervals along the way with all involved stakeholders, including frontline providers, thought leaders, or senior faculty. This monitoring of progress with regular checks and balances avoids potential miscommunication and assures compliance with intended goals. At all times, leaders must remain open to constructive criticism and feedback. If this is hindered, team members may begin to fear retaliation or humiliation for speaking up.
Fostering a culture of teamwork and camaraderie is essential to building a culture of safety. Leaders should take pride in what their providers have already accomplished while nurturing their skills for further development. A leader’s positive attitude is both instrumental and contagious. When leaders work together with their frontline providers, it empowers providers to share in the vision and growth at the highest level. One example of collaborative teamwork is the sharing of important data metrics. Providers are more likely to comply with the recurrent demands of workplace objectives when given a better understanding of why they need to do it. Effective communication and collaborative teamwork are essential in aligning with a common goal.
While experience alone does not make a great leader, experienced leaders may be more comfortable taking chances and more confident making decisions. When leaders hesitate or become indecisive, as inexperienced leaders sometimes do, it can lead to confusion and exhaustion among employees. However, every future leader needs a place to start. Professional development and leadership training for high-potential individuals can be of great benefit to organizations. While some may have the skills to be successful as leaders more innately than others, not everyone is a natural born leader. Even those with significant experience or professional leadership training may fail. A study by the Center for Creative Leadership showed that roughly 38% to more than half of new leaders fail within their first 18 months.7 Leaders can avoid becoming part of this staggering statistic by incorporating good leadership strategies that motivate their team members to accomplish their goals. Openness to feedback, checking in regularly with one’s own goals, and recognizing signs of failure are all keys to success and continuous improvement.
It is imperative that leaders work with frontline providers to develop and implement creative work strategies to maximize efficiency while limiting workplace stressors and reducing burnout. Increasing pressure continues to mount from organizational and third-party stakeholders to meet metrics. Some institutions are seeing only a slight increase in volume, yet the work hours are longer, translating to an increased risk to the employee’s health with diminishing returns in productivity. Longer employee work hours are associated with increased fatigue, poor mood, poor recovery from work, and a nearly 40% increase in risk for coronary artery disease.10-12 Men and women working long hours showed higher prevalence of depression and anxiety disorders.13 For decades, the National Institute for Occupational Safety and Health (NIOSH) has recognized shift work and work-related sleep loss to be a hazard in the workplace and has carried out an active research program to address this hazard. A goal of NIOSH’s National Occupational Research Agenda (NORA) for Healthcare and Social Assistance is that health care organizations adopt best practices for scheduling and staffing that minimize excessive workload and other factors associated with fatigue.14 As the cost of health care continues to increase, so do the demands on productivity. With continual improvements in information technology, electronic medical records, and machine learning, there is a growing list of tools available to help improve processes and streamline care so that increased productivity demands do not always translate into increased workload.
Effective leadership in medicine is necessary to promote patient safety. Leaders must continually strive to be role models, stewards of resources, and improve processes. Effective leaders support safety initiatives and create systems that address concerns brought forth by frontline providers and patients. Constraints of any kind in an organization can lead to increased frustration, communication breakdown, and potential errors. In order to remain efficient and effective, leaders must overcome these obstacles and maintain forward thinking, regularly checking in with their employees, ensuring their state of wellbeing, and taking corrective action when elements become out of balance. By creatively adapting and effectively communicating, leaders can help their organizations accomplish goals, even in difficult times. Employees with higher job satisfaction at work have lower rates of burnout, allowing for increased focus, productivity, and fewer overall medical errors.
Dr. Trainer is assistant professor of Anesthesiology at Virginia Commonwealth University and Central Virginia VA Health Care System in Richmond, VA. She is also completing a fellowship in Critical Care Medicine with the Department of Anesthesiology and Critical Care at the University of Virginia, Charlottesville, VA.
Dr. Dayal is program director of Pain Medicine in the Department of Anesthesiology and Perioperative Care, University of California Irvine, CA, and is an associate clinical professor in the Department of Anesthesia and Perioperative Care, University of California Irvine Medical Center, CA.
Dr. Agarwala is chief medical officer at Massachusetts Eye and Ear, faculty anesthesiologist at Massachusetts Eye and Ear and Massachusetts General Hospital, and assistant professor at Harvard Medical School, Boston, MA.
Dr. Pukenas is vice chair and vice chief of Administrative Affairs in the Department of Anesthesiology at Cooper University Health Care and assistant dean for Student Affairs and associate professor of Anesthesiology at Cooper Medical School of Rowan University in Camden, NJ.
The authors have no conflicts of interest.
- Albright-Trainer B. Leadership philosophy: what makes a great leader? VSA Update Newsletter. Fall 2019.
- Sfantou D, Laliotis A, Patelarou A, et al. Importance of leadership style towards quality of care measures in healthcare settings: a systematic review. Healthcare (Basel). 2017;5:73.
- Joint Commission Sentinel Event Alert. The essential role of leadership in developing a safety culture. Joint Commission. Issue 57, March 1, 2017.
- Hickson GB, Pichert JW, Webb LE, Gabbe SG. A complementary approach to promoting professionalism: identifying, measuring, and addressing unprofessional behaviors. Acad Med. 2007;82:1040–8.
- Tucker A, Singer S. The effectiveness of management by walking around: a randomized field study. Prod Oper Manag. 2014;25:1977–2001.
- Delizonna L. High-performing teams need psychological safety. Here’s how to create it. Harv Bus Rev. 2017 Aug.
- Riddle D. Executive integration: equipping transitioning leaders for success. 2016. https://www.ccl.org/wp-content/uploads/2015/04/ExecutiveIntegration.pdf. Accessed March 5, 2020.
- Kohn LT, Corrigan JM, Donaldson MS, et al. To err is human: building a safer health system. Washington, DC: National Academy Press, Institute of Medicine. 1999.
- Tawfik DS, Profit J, Morgenthaler TI, et al. Physician burnout, well-being, and work unit safety grades in relationship to reported medical errors. Mayo Clin Proc. 2018;93:1571–1580.
- Caruso CC, Hitchcock EM, Dick RB, et al. Overtime and extended work shifts: recent findings on illnesses, injuries, and health behaviors. Cincinnati, OH: Department of Health and Human Services, Public Health Service, Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health; 2004. DHHS (NIOSH) Publication No. 2004–143.
- Siu O-L, Donald I. Psychosocial factors at work and workers’ health in Hong Kong: an exploratory study. Bulletin of the Hong Kong Psychological Society. 1995;34/35:30–56.
- Virtanen M, Heikkilä K, Jokela M, et al. Long working hours and coronary heart disease: a systematic review and meta-analysis. Am J Epidemiol. 2012;17:586–596.
- Kleppa E, Sanne B, Tell GS. Working overtime is associated with anxiety and depression: the Hordaland health study. Journal of Occupational and Environmental Medicine. 2008;50:658–666.
- NORA Healthcare and Social Assistance Sector Council. State of the sector healthcare and social assistance. Department of Health and Human Services, Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health Publication No. 2009–139. http://www.cdc.gov/niosh/docs/2009-139/pdfs/2009-139.pdf. Accessed on March 5, 2020.
Sierra Club - Clean Water for Florida
Nutrient Pollution Standards Campaign Archive: 2010-2011
August 2011: Federal Appeals Court Rules for Clean Water in Florida
A federal appeals court struck down a challenge filed by polluting industries and upheld an historic clean water settlement between the
US Environmental Protection Agency and Earthjustice that requires EPA to set limits on sewage, fertilizer and
manure in Florida's waterways.
Unchecked, the phosphorus and nitrogen in sewage, manure and
fertilizer are sparking repeated toxic algae outbreaks in Florida
waters. These outbreaks are a public health threat because they can
make people and animals sick, contaminate drinking water, cause fish
kills, and shut down swimming areas. Most recently, the Caloosahatchee
River in southwest Florida was covered with nauseating green slime and
rotting fish for weeks.
"The polluters keep trying to use our public waters as their private
sewers, but we intend to keep fighting them. They have to take
responsibility for their mess," said Earthjustice attorney David
Guest. "Our economy depends on tourism, and nobody wants to come to
Florida to look at dead fish and slime-covered water."
A who’s-who of Florida’s leading polluting industries filed a legal
challenge to stop the cleanup in January. The federal court ruled against them.
"The polluters have been using scare tactics, bogus science,
underhanded political bullying, and campaign cash to try to get their
way. Fortunately, the Clean Water Act is still a good law that
protects ordinary citizens, and it prevailed today."
- David Guest
July 2011: Sierra Club and other concerned groups gathered outside the office of
U.S. Rep. John Mica to protest his sponsorship of legislation that would rein in federal
authority to establish water-pollution rules in Florida and other states.
Representatives of Save the Manatee Club, Florida Native Plant Society,
Environment Florida and other groups were outraged by the Representative's "Dirty Water" bill.
"I am horrified by the bill Congressman Mica has proposed and gotten passed in the House,"
said Deirdre Macnab, president of the League of Women Voters of Florida.
"This bill is a passport for polluters in this state."
Mica said his legislation would require EPA to rely more on negotiating with states to get
them to adopt new pollution standards and would limit the agency's ability to simply
force states to adopt new rules. The protesters who gathered Thursday in the parking lot
outside Mica's Maitland office said the EPA's action was long overdue because Florida
officials were never going to do a proper job on their own of cleaning up state waters.
"Polluters run the state of Florida," said Frank Jackalone, staff director of the
Sierra Club in Florida. "We need somebody outside with a big stick."
The EPA is concurring with concerned citizens in Florida who want to curtail the
amount of phosphorus and nitrogen pollution that finds its way into lakes and
streams from sources such as crop and lawn fertilizers and from the discharge of
treated sewage. That pollution can trigger rampant growths of algae capable of
wiping out certain fish and native plants. Advocates of EPA's efforts to enforce the
federal Clean Water Act in the state cite widespread examples of waterways suffering
from nutrient pollution, including the St. Johns River in Central and North Florida.
Read Orlando Sentinel's Op Ed piece on protecting Florida's water
Protect Florida's water
June 2011: Sixty supporters of clean water answered the call from Sierra Club
and community partners to show up at the Tampa City Council meeting (June 23)
to urge council members to pass an ordinance banning purchase and use of nitrogen lawn fertilizer during Florida’s rainy summer months.
Backed by this massive outpouring of support from residents, council members stood firm in the face of immense pressure from national corporate
fertilizer, pest control and landscape companies and passed the ordinance in a 6 to 1 vote! Tampa joins Pinellas County across Tampa Bay in
eliminating the sale and use of nitrogen lawn fertilizer in the summer when North America’s most intense thunderstorms deliver most of the state’s
annual rainfall. These heavy downpours wash fertilizer off lawns and into rivers, canals, Tampa Bay and the Gulf of Mexico, feeding all manner of
harmful, toxic algae blooms.
A sense of urgency filled the room with a July 1 deadline looming for local governments to pass a summer sales ban on non-compliant
fertilizers, under a new state law passed after the Scotts MiracleGro Company donated $1 million to legislators.
Sierra Club members were joined by neighborhood associations, small businesses, local conservation and civic organizations whose
members spoke eloquently about the importance of taking action to reduce the amount of harmful nutrients - nitrogen and
phosphorus - that flow into our bays and waterways. Friends of the Hillsborough River, Suncoast Native Plant Society,
Tampa Bay Estuary Program, Tomorrow Matters, Florida Consumer Action Network and many others filled the chamber and an overflow room.
Tampa now joins the other 43 cities and counties along Florida’s Gulf Coast that are covered by rainy season nitrogen and
phosphorus fertilizer application bans, now in effect from Tampa Bay to Naples. For the past several years Sierra Club Florida’s
Red Tide campaign has led the way in organizing community support for these many victories from our offices in Ft. Myers, Sarasota and
St. Petersburg, gaining broad-based, bi-partisan support from neighborhoods and businesses dependent upon waterways free of toxic algae
blooms for fishing and tourism.
Starting June 1, 2012, Tampa will join Pinellas County and all of its municipalities in prohibiting the sale of the products that are illegal
to apply during the June 1 to Sept. 30 rainy season. This move in Pinellas County has already replaced the unsustainable products on store
shelves with “summer-safe” blends developed by Florida businesses that make lawns greener and healthier with iron and other
elements that don’t feed algae when washed into the water. Fertilizer sold October through May will be required to have half its
nitrogen in a slow-release form so it stays on lawns for months, gradually feeding turf without washing off and eliminating any need to apply during the rainy season.
The ban is expected to prevent eight tons of nitrogen from getting into Tampa's waterways, saving the city $56 million in removal costs.
In a victory luncheon following the council vote, Sierra Club Florida Senior Organizing Manager Frank Jackalone told supporters
"Tampa is now the largest city in the state of Florida with a strong summer fertilizer ban, which is why this is such an important victory."
Sierra Club Organizing Representative Phil Compton, added "Today we won a victory for the Hillsborough River, Tampa Bay and all of our waterways.
It was a victory for our future!"
Tampa now shares with Pinellas County the distinction of having the strongest urban fertilizer regulations in the state – something we
hope every county and city that cares about improving water quality will aspire to in the future.
- Marcia Biggs, Chair of the Sierra Club Tampa Bay Group,
Phil Compton, Field Organizer, Sierra Club
Marti Daltry, Sierra Club field staff, pleads for the fertilizer ordinance
May 24, 2011: Manatee County passes fertilizer ordinance.
On May 24, Manatee County became the 36th local government in Florida to adopt a strict rainy season urban
fertilizer application ban. The passage of the fertilizer ordinance culminated a three-year fight by the Manatee-Sarasota
Sierra Group to protect all our aquatic ecosystems from the toxic runoff.
Over 40 proponents of strong fertilizer management gathered in the commission chambers, some testifying for the ordinance.
They were identified by the neon stickers that have become a staple of the Sierra Club’s presence at council and commission meetings across the state.
The usual opponents were also present; Tru-Green, Scotts Miracle-Gro, Valley Crest and pest control industry representatives argued for
the absolute minimum measures found in the FDEP Model Ordinance. The turf industry (Schroeder-Manatee Ranch) also chimed in and attempted,
unsuccessfully, to gut the ordinance with a proposal to exempt all licensed applicators from the entire ordinance.
The draft ordinance presented to the county commissioners was close to a mirror image of the Pinellas County ordinance passed in 2010 –
the strongest urban fertilizer management ordinance in the state – and included the fertilizer sales restrictions found only in Pinellas County to date.
However, after over three hours of presentations, public comment and commissioner discussion, the draft ordinance was stripped of the
sales restrictions but remained with all of the other strong fertilizer pollution control provisions found in the Pinellas, Sarasota,
and Lee County ordinances. These include:
- A ban on application of fertilizer containing nitrogen and/or phosphorus in the four rainy summer months – from June 1-Sept. 30.
- A required fertilizer-free zone of at least 10 feet from water bodies.
- A yearly application limit for nitrogen of 4 pounds per 1,000 square feet.
- The required use of at least 50 percent slow-release/controlled-release nitrogen products.

The City of Tampa, Charlotte County and Collier County are currently in the process of debating the adoption of their own 4-month rainy
season application bans – if they follow the leadership provided by the other 36 local governments, the chain of strong fertilizer
pollution control codes will cover the entire southwest Florida gulf coast.
- Chris Costello, Sierra Club Regional Representative
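For illustration only, the ordinance provisions listed above can be expressed as a simple rule check. The function and field names below are hypothetical, not drawn from any ordinance text; the ordinances themselves govern.

```python
# Hypothetical compliance check sketching the four ordinance provisions
# summarized above. All names and thresholds are illustrative.

def complies(month, slow_release_fraction, annual_n_lbs_per_1000sqft,
             feet_from_water):
    """Return True if a planned nitrogen/phosphorus application
    satisfies the four provisions."""
    if 6 <= month <= 9:                    # June 1 - Sept. 30 application ban
        return False
    if feet_from_water < 10:               # 10-foot fertilizer-free zone
        return False
    if annual_n_lbs_per_1000sqft > 4:      # yearly nitrogen cap
        return False
    if slow_release_fraction < 0.5:        # at least 50% slow-release nitrogen
        return False
    return True

# A July application fails the rainy-season ban; an October one can pass.
print(complies(month=7, slow_release_fraction=0.6,
               annual_n_lbs_per_1000sqft=3, feet_from_water=20))   # False
print(complies(month=10, slow_release_fraction=0.6,
               annual_n_lbs_per_1000sqft=3, feet_from_water=20))   # True
```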
April 15, 2011: Water Quality Fight Update
Congratulations to all who are working for clean water in Florida.
Sierra Club Florida lobbyist Dave Cullen and Florida
field organizer Cris Costello have led a successful campaign that rallied
intense grassroots and grasstops opposition to defeat an attempt in the
Florida House of Representatives to preempt any local city and county
fertilizer management ordinances stronger than Florida's very weak "model"
ordinance. The bill would have negated the successful efforts
of Sierra Club staff and volunteers in the Florida Red Tide Campaign over
the past five years that secured in more than 40 cities and counties strong
ordinances restricting the use and sale of fertilizer products containing
nitrogen and phosphorus.
The bill was stripped of preemption language in time for the final vote in
the House today. When Rep. Ingram took to the floor of the House
yesterday to announce his substitute bill, he declared "This amendment has
the support of the Florida Association of Counties, the League of Cities and
- believe it or not (long pause) - the Sierra Club!"
Although the Florida Senate has yet to vote on a companion bill, we are very
hopeful that the House bill will pave the way to final victory. Stay tuned.
Your help will be needed when a senate bill comes up.
Many thanks to members of the Sierra Club Florida Legislative Advisory
Committee, the SCF Steering Committee, the SCF Water Quality/Red Tide Team
and Florida field organizing staff copied on this message who all played an
important role in leading us to this important step on the road to victory.
- Frank Jackalone,
Senior Field Organizing Manager,
April 12, 2011: Water Quality Fight Update
HB457 goes to the House floor this Thursday (April 14).
Representative Ingram (the bill’s sponsor) has agreed to file an amendment to the bill that will ensure that any county or
city can adopt a fertilizer management ordinance stronger than the state model ordinance. The amendment should remove any
question of prospective preemption (except for preemption of local sale restrictions after 7/1/11).
March 23, 2011: Water Quality Fight Update
HB 457, a bill that would preempt and gut fertilizer regulation ordinances adopted by 40 local governments throughout Florida,
passed this morning in the House Community and Military Affairs Subcommittee by one vote.
The 8 to 7 vote count is below (all Democrats and two Republicans voted no).
Please take time today to call and thank the no votes – it is very important to show our gratitude.
This bill has been referred to two additional committees in the House: the Rule-making and Regulations Subcommittee
and the State Affairs Committee for additional consideration.
The Senate companion bill, SB 606, was approved 4-0 in the Agriculture Committee and has been referred to the
Community Affairs, Rules and Budget committees for additional consideration. We will bring you another update when the
bill has been scheduled in the next House or Senate Committee.
- Frank Jackalone,
Senior Field Organizing Manager,
Here is how the Vote went on HB457 3-23-11 in House Community & Military Affairs Subcommittee
- YES Rep. Ritch Workman, Chair, Melbourne.
- NO Rep. Ed Hooper, V. Chair, Clearwater.
- NO Rep. Lori Berman, Delray Beach
- YES Rep. Jeffrey ''Jeff'' Brandes, St. Petersburg
- NO Rep. Matt Caldwell, Fort Myers
- NO Rep. Daphne Campbell, Miami Shores
- YES Rep. Fredrick W. ''Fred'' Costello, DeLand
- YES Rep. Jose Felix Diaz, Miami
- YES Rep. Chris Dorworth, Heathrow
- YES Rep. James ''J.W.'' Grant, Tampa
- NO Rep. John Patrick Julien, North Miami Beach
- NO Rep. Mark Pafford, West Palm Beach
- NO Rep. Scott Randolph, Orlando
- YES Rep. Ronald ''Doc'' Renuart, Ponte Vedra Beach
- YES Rep. Jimmie T. Smith, Lecanto
Talking points on nutrient standards:
by David J. Cullen, Sierra Club Lobbyist
- Localities have already adopted more stringent ordinances than the “model ordinance” and water quality has improved in those areas!
- Lawns are not agriculture. This is not about food production. But lawns do contribute to nutrient run off.
- The cost of removing nitrogen from Tampa Bay through storm water treatment projects ranges from $40,000-$200,000 per ton (depending on the treatment method used). Source control is the best (and cheapest) water protection strategy.
- Nutrient pollution that damages water quality with algal blooms affects these Florida businesses:
- Florida tourism is a $65.2 billion annual industry that generated 1,007,000 jobs in 2008
- Fresh and saltwater fishing generated $6.1 billion and 52,945 jobs
- The commercial fishing industry generated $5.6 billion and 108,695 jobs
- Lawn care companies can still do business under the more stringent ordinances.
They are free to apply iron, magnesium, potassium, compost based fertilizers, etc.
during the summer rainy season. They can also do pest control and mow, trim topiaries, etc.
These ordinances put no one out of work.
- Many Florida fertilizer companies already offer “summer safe” products and Florida companies
have gone from 2% of the market to between 70 and 90% of the market in areas with summer application bans.
- The ordinances do NOT affect the sale of plant material, potting soil, or feeds.
March 14, 2011: U.S. Senate tries to pre-empt local clean water work.
Sen. Nelson has not gotten the message that Floridians want water quality standards.
Senator Bill Nelson has joined Republicans in calling for a halt to US EPA efforts for stricter water pollution rules. It would appear he
is listening to the industry that pollutes and not the people. He had, in the recent past, pushed for new curbs on nutrient run off from farms, lawns and
wastewater treatment systems that fuel algae blooms in Florida waters. Now, he appears to have changed his mind.
Senator Marco Rubio is one of the legislators pressing to
kill funding for EPA's implementation of new rules, set to take effect next year.
The truth is that the new rules would replace existing vague standards for what constitutes an
unacceptable level of pollution in waterways with specific numeric limits on nitrogen, phosphorus and sediment.
Municipalities in Florida were coming around to using such standards to keep algae blooms from chasing
tourists away from Florida. To keep Florida from moving ahead on this, industry reps
have essentially asked the US Congress to stop that progress.
Senator Nelson has written a letter to Lisa Jackson at the EPA, asking her to suspend application of the rules. (Read his letter.)
February 2011 Rooney rider adopted by House - Sen. Bill Nelson key to upcoming Senate vote
On Friday (2/18) night, the Congressman Rooney rider to the Fiscal Year 2011 Continuing Resolution
- that would stop EPA from implementing the new freshwater numeric water quality standards
- was adopted by the U.S. House of Representatives in a largely party-line vote, 237-189.
Only 17 Republicans voted against the rider but 16 Democrats voted for it – three of those Democrats
being from Florida (Alcee Hastings, Corrine Brown and Ted Deutch).
Read the story in the Florida Times-Union
U.S.House budget vote threatens Florida Clean Water Rule
The Rooney rider was just one of many anti-EPA amendments adopted by the House last weekend
and now the battle moves to the Senate.
The opposition is doing everything it can, with many more dollars to spend, to get rid of Florida’s new nutrient pollution limits.
Go to this web address http://www.3-1-2011.org/ to get a glimpse of the opposition’s maneuvers.
Lastly, all are encouraged to send thank you messages to Congresswoman Debbie Wasserman-Schultz (202-225-7931),
who made a heroic speech on the House floor against the Rooney rider, and to
Representatives Kathy Castor (202- 225-3376), Frederica Wilson (202-225-4506) and Cliff Stearns (202-225-3973),
who all voted against the Rooney rider.
- Cris Costello,Regional Representative,Sierra Club
Nutrient pollution in Florida is a controversial issue.
In 2010 we concurrently experienced a 100-mile-long toxic algae bloom and accompanying fish kill in the
St. Johns River, and a full court press from the state’s largest polluters to delay and defeat efforts to meet
the Clean Water Act provisions that would prevent such an environmental and economic disaster.
The connection between urban fertilizer management and the lawsuit filed and settled in federal court by the
Sierra Club and other environmental groups to require the U.S. Environmental Protection Agency to impose quantifiable
– and enforceable – limits (numeric nutrient criteria) for fertilizer, sewage and animal waste runoff is an important one.
The first set of numeric nitrogen and phosphorus limits, those relating to lakes and flowing waters,
goes into effect in November 2010. Florida communities are now looking for the lowest-cost alternatives for
reducing nutrient loads to area water bodies, both to meet the new criteria and to protect their economic engines
from the type of environmental disaster experienced on the St. Johns River.
Strong urban fertilizer management is the least costly of possible alternatives and can be instituted and effective immediately.
It is far more cost-effective to prevent nutrient pollution than it is to utilize hundreds of thousands or millions of
tax dollars in restoration efforts for impaired waters – the cost of removing nitrogen from water resources runs from
$40,000-$200,000 per ton. For this reason, the communities along the southwest gulf coast so devastated by the
Red Tide blooms of 2005 were the first in the state to adopt strong fertilizer ordinances.
The cost of meeting the EPA proposed numeric nutrient criteria has been the rallying point for those
(utilities, agriculture and industry) who oppose the new standards. However, cities and counties can
reduce nutrient pollution at little or no cost by adopting strong urban fertilizer rules.
For example, in 2008 the Tampa Bay Estuary Program established a model fertilizer and landscape ordinance
that with 50% compliance would prevent an estimated 30 tons of nitrogen per year from entering Tampa Bay
from Hillsborough County alone at a negligible cost; the seasonal sales ban would have acted as enforcement
for the application ban. In Tampa Bay, those 30 tons prevented would offset the annual nitrogen discharge
from five wastewater treatment plants, thereby saving taxpayer dollars spent on wastewater treatment.
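The figures cited above allow a quick back-of-the-envelope check of what prevention saves. The sketch below is illustrative only; the function name and the assumption that the $40,000–$200,000-per-ton range applies directly to the Tampa Bay estimate are mine, not the paper's.

```python
# Avoided remediation cost, using the figures cited above: removing
# nitrogen from impaired waters runs roughly $40,000-$200,000 per ton,
# and the Tampa Bay model ordinance was estimated (at 50% compliance)
# to prevent 30 tons of nitrogen per year.
def avoided_cost_range(tons_prevented, low_per_ton=40_000, high_per_ton=200_000):
    """Return the (low, high) dollar range of remediation cost avoided."""
    return tons_prevented * low_per_ton, tons_prevented * high_per_ton

low, high = avoided_cost_range(30)
print(f"${low:,} to ${high:,} per year")  # $1,200,000 to $6,000,000 per year
```

On these numbers, a no-cost ordinance offsets on the order of one to six million dollars of annual cleanup spending.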
Download our full paper on Water Quality Standards.
Sierra Club Florida Nutrient Standards Campaign can always use more volunteers. If you want to help out, please contact
Cris Costello, Field Organizer - [email protected]. For other questions you can
email the [email protected]
What are inhalants?
Inhalants are a range of products that can be consumed by inhaling their vapour through the nose and/or mouth. Inhalant use can be problematic, and young people may be at greater risk of harm.
Access more information about inhalants.
Where to get help and support
For anyone concerned about their own or someone else’s alcohol and other drug use, contact Adis 24/7 Alcohol and Drug Support on 1800 177 833 or visit the Adis website.
For workers, service providers, retailers and communities – find information, advice and resources on the Dovetail website.
The outcome of the Inhalants Roundtable
The Inhalants Roundtable (December 2019) was chaired by the Chief Health Officer. It was attended by nearly 40 participants from across the manufacturing and retailing industry, youth and health services, peak bodies and commissions, government departments and councils.
Participants jointly agreed on key themes and opportunities for enhancing responses to prevent and reduce harms from inhalant misuse.
Euxine Sea (The)—i.e. the hospitable sea. It was formerly called Axine (inhospitable). So the “Cape of Good Hope” was called the Cape of Despair. “Beneventum” was originally called Maleventum, and “Dyrrachium” was called Epidamnus, which the Romans thought was too much like damnum to be lucky.
Source: Dictionary of Phrase and Fable, E. Cobham Brewer, 1894
Since the 2013 documentary Blackfish, about Seaworld and the controversy over an orca it held, many of us are familiar with the ethics of keeping cetaceans in captivity. The public outrage inspired by the documentary was so great that many of the world’s aquariums are reevaluating how they do business. Most recently, the Vancouver Aquarium announced that it will no longer keep cetaceans—whales, dolphins, and porpoises—at its facilities. John Nightingale, the aquarium’s president, said in a blog post that the decision was predominantly influenced by public pressure: “It had become a local hot topic, to the point where it was just hijacking everything else.”
While this is great news, it appears that the aquarium made the decision to end its cetacean program only because it became weary of dealing with the “distraction” of legal issues and negative public opinion – not for the best interests of the cetaceans – stating “it’s time to get on with it.”
Last spring the Vancouver Park Board unanimously voted to ban new cetaceans at the aquarium. The board’s decision was a reaction to the deaths of two belugas, a mother and calf, at the facility due to unknown toxins in the water. Since then, two of the three remaining cetaceans at the aquarium have died, inciting outrage from the public.
The Vancouver Aquarium disagreed with the Park Board’s decision and instigated a tense B.C. Supreme Court battle over the issue. The aquarium challenged the Park Board’s ban on the grounds that it overstepped its authority, engaged in an unfair process, created unacceptably vague bylaws, and prohibited the aquarium’s freedom to express its viewpoint on the ethics of keeping cetaceans in captivity.
But when considering what species to liberate from captivity, why stop at cetaceans? What about octopuses? What about elephants and primates in zoos? And finally, what about extending concern to other animals that we may not perceive as intelligent?
No artificial environment can replace natural habitat and autonomy. Zoos and aquariums are commercial enterprises concerned with profit and pleasing customers first, and the health and happiness of the nonhumans kept in captivity second. In many countries, zoos and aquariums provide the bare minimum to successfully house and put captives on display. Some countries, like the United States, have feel-good nonprofits like the Association of Zoos and Aquariums (AZA) that accredit facilities for providing “appropriate” and “aesthetic” environments for their “collection” and the public. But ultimately, the AZA is a pro-captivity organization that grants accreditation based on extremely limited animal welfare laws.
One investigation found that in a span of thirty years, fewer than half of captive marine mammals reached the industry’s projected life expectancies, and that about one quarter of the animals died before reaching one year of age. Michael Hutchins, a director at AZA during the time of the investigation, responded, “The number of people in the public that are exposed to these animals and know about them that wouldn’t otherwise pay any attention to them whatsoever, I think you can make the argument that they are true ambassadors…So you have to weigh that against the cost to individuals.”
It is flawed logic to hold individual captive beings accountable for representing their entire species. Due to zoos’ and aquariums’ inability to replicate natural habitats and social structures, the loss of their autonomy, and being frequently shipped around the country, captive animals show signs of stress and self-destructive behavior that do not occur naturally. As a result, when we see them in zoos or aquariums, our perception of them is significantly distorted.
So what do we really learn when we visit these “ambassadors” in zoos and aquariums? Most of us are captivated by animals, predominantly because they’ve disappeared from our lives. We have a desire to watch other animals whether that be on the TV from our couches, traveling to other countries to watch them in their natural habitats, or going to zoos and aquariums. We perceive zoos as providing us with a way to escape urban life and to “get back to nature” but they are wholly unnatural institutions.
Ultimately, the very nature of zoos and aquariums contradicts their message. How can we ever truly appreciate, respect, and learn about other animals when we take away their autonomy and see them as objects to be collected for our entertainment and viewing purposes? It is a shame that our desire to connect with other animals does not allow us to empathize with them.
Bentley’s History Department offers an integrated curriculum designed to expose students to the global past, and to prepare them for the world they will inherit. As an interdisciplinary social sciences and humanities department, we seek to facilitate student discovery of our nation and the world so that students are able to think critically, read thoughtfully and write eloquently through objective analysis of social, economic, and cultural material. Courses assist students in building awareness of the ways in which historical events and the development of ideas have had a lasting impact on contemporary society.
During their freshmen year, students focus on the foundations of global culture, commerce, and politics while studying modern world history. In their sophomore year, students examine United States history, addressing indigenous societies in North America and the colonial origins of the American nation-state through the end of the twentieth century. As juniors and seniors, students choose three trimester-long seminars drawn from history and related fields, allowing students to focus on areas, periods, and special topics tailored to meet their interests.
Omega 6 fats, derived from natural healthy sources, have been somewhat demonized or at least forgotten in the world of natural health and nutrition. A lot of focus has gone towards highlighting the importance of omega 3 fats – and rightly so, because omega 3 fats do play a vital role in boosting the health of your heart and brain, and preventing inflammation and diseases like arthritis, Crohn’s, ADHD and cancer. However, the downside has been that people have over-compensated for this by taking excess omega 3 supplements (usually fish oils) and have thrown their omega 6:3 ratio – and body – out of balance. However, both omega 6 and omega 3 fatty acids offer potent health benefits which are so powerful that they actually act as a cancer preventer and/or cancer cure.
The Ideal Ratio of Omega 6:3 Fatty Acids in the Body
So what then is the best ratio of omega 6:3 fats for optimal health? For this answer we turn to Professor Brian Peskin, one of the true experts in this field of nutrition. Peskin is a scientist who specializes in studying parent EFAs (Essential Fatty Acids). He noticed that the term “EFA” was being widely misused – people were using it to describe secondary derivatives of EFAs such as EPA, DHA and GLA. To counter this, Peskin actually coined a new term (Parent Essential Oils or PEOs) to describe the 2 true essential or critical fatty acids which are the foundations of all the others. These 2 are LA or linoleic acid (another name for omega 6 fatty acid) and ALA or alpha-linolenic acid (another name for omega 3 fatty acid). Your body can make all the other fatty acids from these 2 “parent” oils.
To determine the ideal omega 6:3 ratio for the human diet, Peskin studied the composition of fat inside human tissue. He found that the ratio of omega 6 to omega 3 was much higher in the body than you might believe. In the brain and nervous system, it is 100:1; in the skin, 1000:1; in the organs, 4:1; in the adipose tissue (fat), 22:1; and in muscles, 6.5:1. Most of the human body is made of fat and muscle, so you could say the average ratio – as found in the composition of the human body – is around 10:1. Much higher than you may have thought!
Based on this, Peskin recommends “a ratio of greater than 1:1 up to 2.5:1 of parent omega-6 to parent omega-3. With this ratio, a suggested use is 725 mg per 40 lb of body weight (e.g. a 160-lb person requires 3 g on a daily basis).”
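Peskin's suggested use scales linearly with body weight, so the cited example is easy to verify. The helper below is a minimal illustrative sketch (the function name is mine, and this is not medical advice).

```python
# Peskin's suggested use: 725 mg of the PEO blend per 40 lb of body weight.
def peo_daily_dose_mg(body_weight_lb, mg_per_40lb=725):
    """Scale the per-40-lb suggested amount to a given body weight."""
    return body_weight_lb / 40 * mg_per_40lb

# A 160-lb person: 160/40 * 725 = 2900 mg, i.e. roughly the 3 g cited above.
print(round(peo_daily_dose_mg(160)))  # 2900
```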
But Weren’t We Told We Were Eating Too Much Omega 6?
Yes, we were told that, but you have to look at the type of fat or oil. I have previously written about the dangers of plastic oils or plastic fats, and how we must learn to differentiate between healthy fats (that come from a farm or field and have not been adulterated) and hydrogenated fats (the plastic fats which Big Pharma and Big Agra chemical companies have been pushing). One of the main problems with eating too much omega 6 is that, for the average person, the omega 6 they are getting are trans fats or hydrogenated fats; they are part of margarine, Crisco or some other adulterated vegetable oil, rather than raw olive oil, walnuts or sunflower seeds. We are not necessarily eating too much omega 6; we are eating too much unhealthy omega 6.
Epigenetics, the Environment of the Cell, is Far More Reponsible for Health/Disease than Genetics
Another thing I like about Peskin is how he has sought to scientifically prove that the state of health (or disease) of a cell or body is overall not due to genetics. This busts the Big-Pharma promoted propaganda that people are destined to get disease due to their genes, and there’s nothing they can do about it. The idea that you are completely controlled by your genes is an insidious, disempowering lie which can strip you of your confidence, will and responsibility. It can also lead you to undergo insane operating procedures, such as Angelina Jolie’s decision to have her breasts cut off because they might – might – one day develop a malignant tumor. Yet in today’s upside-down world, such a decision was greeted with admiration and painted by mainstream women talkshow hosts as “brave”, when “insane” or “masochistic” may be better adjectives to describe it.
Epigenetics is the new field of science that explores why some genes are expressed and some are not. We now know that every cell membrane has receptors that pick up environmental signals. In other words, your cells can choose whether or not to read their genetic blueprint, depending upon the signals they get from the environment. Even if you do have a so-called “cancer gene”, if there is definitively such a thing, it does not necessarily have to get expressed. You are not controlled by your genes; which genes are turned on or off, expressed or not expressed, is primarily determined by your thoughts, attitudes and perceptions!
Peskin wrote an article in 2011 entitled “Good News: It’s Not Genetic“. In it he shows the science of why “your DNA sequence does not determine your entire genetic fate” and why “mutations are caused by epigenetic adulteration (environmental causes altering the behavior of genes but not necessarily the structure).” To those who have always intuitively felt that we humans are more powerful than we generally think or have been taught to believe, it provides scientific evidence showing why genes can be overcome and why free will can be more powerful than determinism.
Both Omega 6 and 3 Are Cancer Preventers and Cancer Cures
Coming back to omega 6 and omega 3 fats, Peskin has also extensively studied the direct relationship of EFAs and PEOs to cancer and cardiovascular disease, the 2 leading causes of death in the US. His research shows that both omega 6s and omega 3s can actually act as cancer preventers and cancer cures. Remember the 1931 Nobel Prize winner Otto Warburg, who discovered that hypoxia (lack of oxygen reaching the tissues) could result in cancer? And conversely that cancer could be cured by getting enough oxygen to the weakened cells? Many healing protocols and products (such as Vital Ion) are based on the idea of deep oxygenation of the body – of getting enough oxygen into the cells, knowing that parasites cannot survive in an oxygen-rich environment.
Healthy PEO omega 6 and omega 3 fats can solve the oxygen problem, and thus the cancer problem. They both build cells and contain powerful anti-inflammatory compounds, such as precursors like prostaglandin and the body’s powerful “natural blood thinner” prostacyclin (a platelet anti-aggregate and anti-adhesive). In Peskin’s report Anti-Aging Therapeutics Volume XII, he explains how he conducted an experiment by feeding mice a PEO blend (comprised of healthy omega 6 and omega 3 fats). The mice fed the most PEO fats fared the best, with highly significant tumor reduction. Peskin concludes that “my experiment conclusively shows that PEO-based oils are able to modify the internal structure of cells in an epigenetic fashion, thus making them more cancer resistant; the desired anti-cancer/increased cellular oxygenation solution is accomplished per Warburg’s findings.”
A View from Emerging Technology from the arXiv
The Puzzle of Particle Jets and Blast Waves
Nobody knows why particles form into jets when caught in a blast wave, but a new video casts some light on the conundrum.
One of the best ways to understand fluid motion is to record a video of the action. That has created some friendly rivalry between fluid dynamicists who compete to produce the best films of their work.
Each year, the fluid dynamics division of the American Physical Society hosts a Gallery of Fluid Motion at its annual meeting and invites researchers to submit their best reels. These guys store their videos on the arXiv, making them fair game for this blog.
It’s that time of year again, so I’m keeping my eye out for the best videos submitted to this year’s Gallery of Fluid Motion.
Today’s pick describes an interesting puzzle that emerges from the way substances packed around explosives behave when a detonation occurs. David Frost at McGill University in Canada and a few buddies packed millions of tiny glass spheres, just 120 micrometres across, around a 28 g ball of C4 and videoed the shape they formed as the detonation occurred using a 10,000 fps camera.
It turns out, the glass spheres form into fairly evenly-spaced jets as they are caught up in the blast wave (see picture above, video links below). Apparently, these jets are a common feature of the explosive dispersal of particles. For example, when the C4 is surrounded by water, even more jets form but these dissipate more quickly.
But curiously, Frost and co say that the highest number of jets occur when the C4 is surrounded by a mixture of water and glass beads. This, they say, is entirely unexpected.
Certain factors are known to influence ‘particle jetting’, as this phenomenon is called; things like particle size as well as the size and geometry of the charge.
But here’s the thing: nobody really knows why the jets form at all.
An interesting mystery described in a cool video, perhaps something for a computer modeller with some spare time to take on.
Contents of Article
- Introduction to the Yorkshire Terrier
- History of the Yorkshire Terrier
- Yorkshire Terrier Health Related Issues
- Yorkshire Terrier Temperament
- Yorkshire Terrier Grooming
- Yorkshire Terrier Fun Facts
- Common Yorkshire Terrier Mixes
- Yorkshire Terrier FAQ’s
Introduction to the Yorkshire Terrier
Yorkshire Terriers, or “Yorkies,” as they are affectionately known, are real terriers though they come in a Toy package.
Once hardworking dogs who hunted rats in English factories, the Yorkie was too beautiful to stay in the textile mills. High society loved the beautiful little dogs with the long, silky coats and Yorkies have been beloved pets ever since.
Brave and determined, but very loving, Yorkies make excellent pets for many people. Their small size makes them a very good choice for many people who live in apartments in the city.
History of the Yorkshire Terrier
The British Isles have always been home to terriers who were used as ratters and to kill vermin on farms. Following the Industrial Revolution in the 18th century there were also plenty of rats in the new factories. It’s not surprising that terriers were also called upon to hunt and kill rats in the factories. That’s when the Yorkshire Terrier arrived.
However, the Yorkie owes his existence to an earlier dog called the Waterside Terrier. This was a small dog with a longish coat that was blue-gray in color.
Waterside Terriers weighed between 6 and 20 pounds – usually around 10 pounds. The Waterside Terriers had been created by crossing the old rough-coated Black and Tan English Terrier that was found around Manchester with Paisley and Clydesdale Terriers – dogs that don’t exist these days. (Paisley Terriers were a small version of Skye Terriers which have a very long coat. Paisley is a town in Scotland.) Weavers brought the Waterside Terrier to Yorkshire from Scotland when they emigrated there in the mid 19th century. Once in Yorkshire, the Waterside Terrier was bred to suit the task required to run through and under the factories to hunt and kill rats. But the dogs were so beautiful that they attracted the notice of people of wealth who wanted to keep them as pampered pets. Yorkies left the factories and gave up their job as vermin hunters to become beloved companions. But they are still terriers at heart.
A dog named Huddersfield Ben, born around 1865, was an extremely influential sire and is regarded as the father of the breed. He was popular for his looks, his ratting abilities, and for the quality of offspring he sired.
The new breed was first called the “broken-haired Scotch Terrier” but this name was changed to the Yorkshire Terrier in 1870 after the dogs started being exhibited at dog shows. The first Yorkie born in the U.S. was in 1872. The breed was recognized by the AKC in 1885. Today Yorkies are one of the most popular breeds in the United States. They have been one of the Top Ten most popular breeds for over a decade and are today ranked number 6 out of over 180 breeds.
Yorkshire Terrier Health Related Issues
The Yorkshire Terrier Club of America has a very comprehensive health page with links to lots of articles and web sites. They cover everything from ticks to liver disease. Anyone interested in Yorkies should check their site and read the articles. You should also talk to a breeder and ask questions about the breed and its health issues. Ask about the health of their dogs, the health tests they have had done, and what health guarantees they provide in their contract.
The Yorkshire Terrier Club of America recommends that any dog considered for breeding have the following health tests done:
- Eye Examination by a boarded ACVO Ophthalmologist- Prior to the onset of breeding, recommend evaluations at 1, 3, and 6 Years of Age.
- Patellar Luxation
- Legg-Calve-Perthes (Optional)
- Autoimmune thyroiditis (Optional)
- DNA Repository (Optional)
- Hip Dysplasia (Optional)
Because of their very small size, Yorkshire Terrier puppies can sometimes have problems with hypoglycemia (low blood sugar). The puppies may not have enough muscle mass to be able to store enough glucose to regulate their blood sugar between meals. This can sometimes also occur in adult Yorkies. Yorkshire Terrier puppies need to have multiple small meals during the day. Adult Yorkies also need three or four meals per day. You can give puppies and adult Yorkies treats or snack between meals to help keep their blood sugar level steadier. Puppies or dogs may need Nutrical or a little corn syrup if they seem to be exhibiting symptoms of low blood sugar. In severe cases, you will need to take the puppy or dog to the vet. If hypoglycemia is not reversed, the puppy or dog can die.
Yorkies can be subject to a condition called portosystemic shunt. This is a congenital condition in which the portal vein that carries blood to the liver to be cleaned is malformed. Because it is malformed, some of the “dirty” blood bypasses the liver and poisons the other organs in the body with the toxins that are in the blood. A dog with this condition might be smaller than normal, have no appetite, have poor muscle development, poor coordination, gastrointestinal problems, not learn as fast as normal. He could even become blind or have seizures after eating. In many cases portosystemic shunt can be repaired with surgery that fixes the malformed portal vein.
Yorkies can also experience luxating patellas. This condition is something like a slipped kneecap in a human. It usually occurs when the dog is moving and the joint will lock up in an extended position. It is momentarily painful but as soon as the muscles relax the joint will move back into the normal position. A luxating patella can vary from a minor problem that happens once to a chronic problem with some dogs. In severe cases the treatment is usually surgery. Dogs usually recover quickly following surgery.
Legg-Calve-Perthes syndrome is a condition in which the top of the thigh bone (the femur) degenerates. It probably happens because there is not enough blood circulating around the hip joint. The bone at the top of the femur eventually collapses and the cartilage surrounding it will crack and become deformed. The syndrome usually occurs when a Yorkie is between 5 and 8 months of age. Symptoms include limping, pain, and lameness. Surgery is usually required to remove the affected bone but once it is removed, muscle will hold the femur in place and new tissue will form to keep the bones from rubbing against each other. The leg will be slightly shorter than the opposite leg but most dogs return to virtually normal use of their leg.
Hypoplasia of dens refers to a condition involving the second cervical vertebra and some damage to the spinal cord. The neck and spine do not form the normal pivot point for the dog’s neck. The condition can occur at any age. It can be a mild condition or very serious.
Yorkies are also subject to distichiasis. This means that they can have eyelashes growing where they should not grow, which can cause irritation to the eye. In the worst cases, the cornea can be scratched or corneal ulcers can form. Your vet can remove the abnormal eyelash(es) manually or with minor surgery. Yorkies can also have problems with cataracts and dry eye, which is why the breed club recommends that dogs used for breeding have their eyes checked regularly.
Yorkies can also experience a problem called tracheal collapse. In this condition the walls of the trachea seem to be weakened – possibly from pulling against a leash. Many people recommend harnesses for small dogs like Yorkies to keep them from pulling on a leash and hurting their throats/necks. The first sign of possible tracheal collapse is a harsh cough that sounds like a goose honking. The cough can become constant. In advanced cases the dog may require surgery to help repair the collapsed trachea.
Yorkies can also have bad teeth and a delicate digestive system. Like many small breeds, their teeth can be subject to crowding. Because of their very small size owners need to be especially careful about injuries and accidents with the breed. Even a slight fall or someone tripping over the dog can be very dangerous to a dog that weighs 4 to 7 pounds.
Yorkshire Terrier Temperament
Ask any Yorkshire Terrier owner and they will tell you that the breed’s temperament is one of the most endearing things about them. Yorkies may be small but they have lots of personality. The Yorkshire Terrier is active, curious, and they love attention. They are also surprisingly feisty and ready to defend their owners, their homes, and the ground they stand on. They don’t know they are small dogs or, if they know, they don’t care. Yorkies are perfectly willing to get in the face of a much bigger dog. But you should not allow that to happen. Yorkie owners often think it’s cute to allow a tiny little dog to tackle a big dog and bark at him. Big dogs and their owners think it’s a lot less cute. Don’t risk running into a grumpy big dog who might hurt your Yorkie. Keep him on leash when you are out where he might encounter larger dogs and people.
Otherwise, Yorkshire Terriers make good pets for many people. They are small and adaptable. They can get along with other pets of similar size, though it is not recommended that they play or interact much with larger animals because they can be easily injured. Yorkies don’t require much exercise but they enjoy daily walks.
Yorkshire Terrier Grooming
Yorkshire Terriers are tan and blue. They have a long, straight, silky coat. Keeping the coat long and in good condition takes a lot of care. The Yorkies you see at dog shows with beautiful coats receive meticulous coat care. The coat is bathed, lightly oiled, and the ends are wrapped in papers to prevent the hair from breaking. The papers are checked and adjusted and the dog is bathed and groomed again before the show. This goes on throughout a dog’s show career.
Retired showdogs and dogs kept as pets can also keep a long coat but they don’t have to have this much attention to the coat. It is still necessary to brush the coat regularly and keep the ends trimmed so they don’t become straggly. Both pets and showdogs with long coats typically have a small bow in their “top knot” – the hair swept up above their eyes. It is kept in place with a small rubber band. The bow is simply a nice touch.
Some owners prefer to keep their pet Yorkie’s coat cut short. The coat is easier to care for this way, though you will still have to brush it as it grows out. You can take your Yorkie to a pet groomer and tell them you want it cut in a pet style.
As with other dogs, you will need to trim your Yorkie’s nails and check his ears regularly. Clean them as needed. Yorkies can have problems with their teeth, like many Toy breeds, so brush them regularly.
In the United States and in Canada, Yorkies have their tails docked. This is something that breeders take care of with their veterinarians when puppies are 2-3 days old.
Yorkshire Terrier Fun Facts
- The Yorkshire Terrier was used to help create the lovely Silky Terrier from Australia in the late 19th century. A number of Yorkies were brought to Australia and bred to Australian Terriers to try to improve their coats. The result was a new breed – the Silky Terrier.
- “Teacup” is not an official Yorkie size or type. It usually refers to a dog that will be less than 4 pounds as an adult. Most breeders avoid breeding dogs of this size because they can be subject to additional health problems.
- Ch. Ozmilion Mystification was the first Yorkshire Terrier to win Best In Show at Crufts in England in 1997. Crufts is the largest annual dog show in the world, lasting several days.
- The Yorkshire Terrier Smoky was a war dog and World War II hero. Owned by William Wynne of Cleveland, Ohio, Smoky was found in a foxhole in the jungle in New Guinea while Wynne was serving with the 5th Air Force in the Pacific. Smoky was 7 inches high and weighed 4 pounds. Many people credit Smoky with reviving interest in Yorkies at a time when they were mostly forgotten. For two years Smoky backpacked through the jungle and accompanied her master on his combat flights. She lived with him in the jungle in New Guinea and Rock Islands. She slept in a tent on his blanket and shared his rations, occasionally eating some Spam. Smoky was given credit for 12 combat missions and was awarded eight battle stars. She made it through 150 air raids in New Guinea and survived a typhoon in Okinawa. She even had her own parachute. Wynne gave Smoky credit for saving his life by warning him of incoming shells when fire hit eight men next to them. He called her his “angel from a foxhole.” Smoky – and Wynne – survived the war and Smoky died when she was about 14. There is a life-size bronze statue of her sitting in a GI helmet where she is buried in Lakewood, Ohio.
Common Yorkshire Terrier Mixes
Yorkies have been a popular breed for designer dogs and mixes. Some of the hybrid dogs that feature a Yorkie parent include:
- Yorkie-poos (Yorkshire Terrier and Poodle Mix)
- Bichon Yorkie (Bichon Frise and Yorkie Mix)
- Boston Yorkie (Boston Terrier and Yorkie Mix)
- Yorkie Pin (Miniature Pinscher and Yorkie Mix)
- Yorkie Russell (Jack Russell Terrier and Yorkie Mix)
- Yorkie-ton (Coton de Tulear and Yorkie Mix)
- Yorkillon (Papillon and Yorkie Mix)
- Yorkinese (Pekingese and Yorkie Mix)
- Yorktese or Morkie (Maltese and Yorkie Mix)
and many more. If you can imagine a small breed crossed with a Yorkshire Terrier, you can probably find the mix somewhere.
Yorkshire Terrier FAQs
What is a Yorkshire Terrier’s Life Expectancy?
Yorkies tend to have a long life expectancy. Many of them live into their teen years. They can live from 13 to 16 years.
Are Yorkshire Terriers easy to train?
It depends on who you ask. Many small breeds can be hard to housetrain, but this may be because owners are not very strict with them. Trained by a person who insists on good behavior, a Yorkie can be easy to train. However, many people treat Yorkies and other small dogs like babies, and they can become spoiled, which leads to bratty behavior. If you socialize your Yorkie and train him with positive reinforcement as you would a larger dog, you should be able to train him without any trouble.
Do Yorkshire Terriers shed a lot of hair?
No, Yorkies do not shed a lot. They have a single coat that is similar to human hair. They do not have the downy undercoat that many breeds have. No dogs are really hypoallergenic. Despite what many people think, it’s not about the hair or fur on a dog, it’s about the dander. If you have allergies to dog dander, you might be able to live with a Yorkie. You should meet the particular dog and see how you react to him.
Here is a list of dogs that are most often recommended to people with allergies to dog dander. Wirehaired dogs and hairless dogs are usually your best bet.
Do Yorkshire Terriers make good apartment pets?
Yorkies make wonderful apartment pets. They are small and they can adapt to live in lots of different situations. Some Yorkies will bark more than others, especially as a warning, but most of them are quiet and peaceful at home.
Are Yorkshire Terriers good with Children?
Yorkies are usually not recommended for very small children. This is mostly because the dogs are so small (4-7 pounds). They can easily be hurt or even killed by rough play. They can also be somewhat aggressive with small kids if their terrier nature is aroused. Many breeders will not place a Yorkie puppy in a home with small children. However, Yorkies make very good pets in homes with older kids and teens.
Scientists admit that computers are learning too quickly for humans to keep up.
From driving cars to beating chess masters at their own game, computers are already performing incredible feats.
And artificial intelligence is quickly advancing, allowing computers to learn from experience without the need for human input.
But scientists are concerned that computers are already overtaking us in their abilities, raising the prospect that we could lose control of them altogether.
Pictured is the Terminator film, in which robots take over - a prospect that could soon become a reality.
Last year, a driverless car that ran without any human intervention took to the streets of New Jersey.
The car, created by Nvidia, could make its own decisions after learning from watching humans drive.
But despite creating the car, Nvidia admitted that it wasn't sure how the car was able to learn in this way, according to MIT Technology Review.
The car's underlying technology was 'deep learning' – a powerful tool based on the neural layout of the human brain.
Deep learning is used in a range of technologies, including tagging your friends on social media, and allowing Siri to answer questions.
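The “neural layout” behind deep learning is, at bottom, stacked layers of weighted sums passed through simple nonlinear functions. A minimal illustrative sketch in Python follows – a toy two-layer network with random weights, not Nvidia’s actual (proprietary) system:

```python
import random

def dense_layer(inputs, weights, biases):
    """One fully connected layer: weighted sums followed by a nonlinearity."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(max(0.0, total))  # ReLU activation
    return outputs

# A tiny 2-layer "deep" network with random weights (illustrative only).
random.seed(0)
layer1_w = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
layer1_b = [0.0] * 4
layer2_w = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
layer2_b = [0.0] * 2

x = [0.5, -0.2, 0.8]  # e.g. three sensor readings
hidden = dense_layer(x, layer1_w, layer1_b)
output = dense_layer(hidden, layer2_w, layer2_b)
print(len(hidden), len(output))  # 4 2
```

Real systems stack many more such layers and, crucially, *learn* the weights from data rather than setting them at random – which is also why their internal decision-making can be hard for their creators to interpret.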
The system is also being used by the military, which hopes to use deep learning to steer ships, destroy targets and control deadly drones.
There is also hope that deep learning could be used in medicine to diagnose rare diseases.
But if its creators lose control of the system, we're in big trouble, experts claim.
September 26, 2019
The healthcare industry often faces the question of who really owns medical records and health data: the patient or the physician? The physician creates the health record, but the record is based on what happens to the patient. So the question remains – who actually owns your health data?
The original physical medical/health record usually belongs to the physician who created it, as well as the facility where the record was created. However, the information or data gathered within that original medical record is owned by the patient. In many countries, healthcare facilities are required by law to maintain the original patient medical record with care, and to protect it from loss, damage, alteration, and unauthorised use.
On the other hand, a rapid transition is occurring today from paper to electronic records, which has created new opportunities for sharing information among physicians and patients as well as with third parties. With healthcare data volumes rising – generated by patient visits as well as by mobile, wearable, and home devices – the Government of India has also put forth guidelines for maintaining patient electronic health records. Additionally, the ever-expanding mobile revolution has intensified consumers’ expectations about sharing data with physicians, including data generated by mobile health apps. However, this shift is happening too quickly for either physicians or the regulatory bodies concerned with medical records to keep up. In any form, the ownership of your healthcare data always remains with you!
Patients today already have unique perceptions about their own health data, while healthcare providers retain certain legal rights and duties related to the protection of that health record. That is, even when you control access to your health data, healthcare providers remain the experts when it comes to diagnosing your condition and providing a suitable treatment plan. Furthermore, a patient-centred approach to data possession and management can substantially change how population health management is handled today, because such an approach helps patients manage their own health and engage meaningfully in their treatment plans with their doctors. Having control of their health data allows patients to become more engaged, and engaged patients tend to be healthier. After all, you know your body best!
To sum up, in the healthcare world, you, as a patient, have the right to ownership of your health data, along with its legal privacy, security and accuracy. Once those data are captured and documented in written or electronic form, and because your physician owns the platform where they are recorded and stored, the physician technically gains the property right of possession. In effect, your physician becomes the legal guardian of your health data, with specific legal rights and duties concerning its possession and protection. Again, the real ownership of your health data always remains with you!
What Is The Cloud?
The “cloud” simply refers to a network of servers. It is the space in which information, software, applications, and services are housed and accessed. More importantly, cloud computing is the term used to describe the delivery of these products and services over a network or the Internet. When users employ cloud technology, they are usually accessing a remote network in order to perform a task more efficiently. However, cloud computing can take on a large number of structures and styles.
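In practice, “accessing a remote network to perform a task” usually means an application sending requests to a remote server over the Internet. A minimal sketch in Python of what such a cloud API request looks like – the endpoint URL and token below are hypothetical placeholders, not a real service:

```python
from urllib.request import Request

# Constructing (not sending) a typical cloud API request. The endpoint and
# credentials below are hypothetical placeholders for illustration only.
req = Request(
    "https://api.example-cloud.com/v1/documents/42",
    headers={"Authorization": "Bearer <token>", "Accept": "application/json"},
    method="GET",
)
print(req.full_url)      # the remote resource being accessed
print(req.get_method())  # GET
```

The server that answers such a request – along with the storage and software behind it – is the “cloud” the user is relying on.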
Cloud computing and cloud technology have existed since the development of the Internet, but have become increasingly important over time. Cloud computing is still evolving, but it has already become a critical component in billions of daily operations. WHOA is leading the way as this technology evolves, employing our expertise and understanding of it to help all types of businesses.
Author’s Note: This paper will be helpful for those who would like to delve more deeply and broadly into the geology that has shaped both the Highlands-Cashiers Plateau and the wider Southern Appalachians of which it is a part. It was originally prepared as a term paper for a course entitled “Geology of the Southern Appalachians,” taught in 2020 at UNC Asheville by Dr. Jackie Langille. While I have modified it to include more definitions and explanatory material than were in the original, its form, tone and language are still more academic than those in Whence These Special Places? So it’s probably not a good starting point for the totally uninitiated. However, I expect that those who have grasped the principal concepts of Whence These Special Places? will have the framework needed to find this paper both informative and helpful in furthering their understanding and appreciation of the processes revealed in the rocks of our region.
For useful graphics about the geologic provinces and geologic ages referenced in this paper, click on the <Appalachian Geologic Provinces> and <Geologic Timescale> buttons on the Research & Resources page.
A Geologic History of the Southern Appalachians
Bill Jacobs, May 2020
“The Blue Ridge is a composite of polydeformed metasedimentary
crystalline terranes that represent both the early Paleozoic Laurentian
margin and accreted terranes.”
Arthur J. Merschat, 2009
Introduction. Arthur Merschat’s summary of Blue Ridge geology, written in full-on doctoral-dissertation style, highlights key elements of the development of the Blue Ridge province – “polydeformed,” “metasedimentary crystalline terranes,” “Laurentian margin,” and “accreted terranes.” This paper will explore those elements as they apply within the Blue Ridge, and also relate their underlying tectonic processes and related depositional histories to the broader Southern Appalachians, to include the Valley and Ridge, Plateau and Piedmont provinces. The discussion will be organized chronologically, but will weave in the elements highlighted by Merschat as they emerge at various points in the process.
First, however, some definitions for the terms used by Merschat. “Polydeformed” means that the rocks of the Blue Ridge have experienced multiple rounds of deformation, such as bending, folding, and squeezing into new patterns and layering. “Metasedimentary” means that the rocks were formed from sedimentary accumulations (as opposed to igneous processes involving molten magma), and that they were subsequently subjected to sufficient heat and pressure (typically by deep burial) to be metamorphosed into different rocks. “Crystalline” refers to rocks that have formed directly from minerals crystalizing out of cooling magma, or whose crystal structure has been modified by metamorphic processes (that is, they are not sedimentary rocks). “Terrane” is used by geologists to designate a region whose geology reflects a distinctive history different from that of surrounding areas, and implies a chunk of crust that has been imported (“accreted”) during tectonic processes. “Laurentia” refers to the continental landmass we today know as North America, and its “margin” is the edge created as Rodinia divided and “Gondwana” (which contained today’s Africa and South America) rifted away during the opening of the Iapetus Ocean; the margin is thus the region most affected by new seashores, flooding of nearby low-lying landscapes (and thus deposition of mud and sand that formed new rock layers), and the accretion of crust when renewed tectonic activity pushed terranes and seafloor material onto the continental landmass.
The Middle (or “Meso”) Proterozoic. Although the mountains now exposed in the Southern Appalachians were formed much later than the Middle Proterozoic, it was during that era that tectonic forces created and emplaced the bedrock of today’s eastern North American continent, upon which the current mountains were built. The large-scale event was the formation of the supercontinent of Rodinia, approximately 1.2 – 1.0 GYA. Hatcher (2005). As plate boundaries were subjected to compressive forces, ocean basins closed, sedimentary rocks were emplaced on the continent, and mountains were raised in what is known as the Grenvillian “orogeny” (geologist-speak for a mountain-building event). As sedimentary layers were buried by this process, they would have experienced the first of the metamorphic crystallization and deformation events that can be identified today in the Southern Appalachians.
Mesoproterozoic rocks form the region’s oldest exposures, but they are not widely visible at the surface. In general, they are found as highly metamorphosed gneisses along a SW – NE swath 5 – 15 kms wide and about 200 kms long, running from north of the Great Smokies into Virginia, a few miles west of Asheville and mostly just to the east of the Tennessee border. Clark (2008), Merschat, C. & Cattanach (2008). Elsewhere in the region, these early metasedimentary rocks were not detached and transported with younger overlying rocks during the much-later Alleghanian orogeny, or they at least remained more deeply buried during the transport process. In addition, a substantial, heavily metamorphosed pluton associated with the Grenvillian orogeny, and now known as the Toxaway Gneiss, is exposed along the Blue Ridge Escarpment southwest of Brevard.
The Most Recent (or “Neo”) Proterozoic. Rodinia first began rifting apart around 735 MYA, but in the area occupied by today’s Southern Appalachians, the early phases of rifting did not immediately result in the creation of a new continental margin. Hatcher (2005). Rather, what is sometimes referred to as a “failed rifting” process caused the crust to thin, resulting in volcanic activity reflected in rocks exposed in the Mt. Rogers area and dated to approximately 760 MYA. Merschat, A. et al (2016). Another effect of the rifting process was the creation of inland basins that filled with sediments that lithified as deep layers of sandstones and shales. Buried and metamorphosed by subsequent orogenies, and caught up in the great northwestern transport of the Blue Ridge-Piedmont Megathrust Sheet during the Alleghanian orogeny, the rocks of one of these basins now appear as gneisses and quartzites along the western edge of the Blue Ridge in North Carolina and far eastern Tennessee. Bearing the general name of the Ocoee Supergroup, their leading edge forms much of the Great Smokies. Clark (undated).
Rodinia completed rifting about 565 MYA, forming what today is roughly the eastern boundary of the North American plate, with the creation of ocean basins between Gondwana and Laurentia. This process set the stage for the rocks seen today in much of Southern Appalachia, in several ways. First, the ocean basin closest to Laurentia, known as the Iapetus Ocean, filled with deep layers of sediments flowing off the Laurentian uplands. Second, as Gondwana and Laurentia separated, significant land masses appear to have been severed from both Laurentia and Gondwana, becoming large islands between Gondwana and Laurentia. These events created the raw material for crystallized, polydeformed, metasedimentary rocks exposed in multiple terranes. Their conversion into today’s Blue Ridge involved tectonic transformations and transport during a 250 million-year period stretching through much of the Paleozoic.
The Paleozoic (541 – 252 MYA). The processes that transformed the Neoproterozoic raw materials arose from the creation of a new supercontinent, named Pangea, as Gondwana and Laurentia, as well as the islands in between, again came together as one large, contiguous landmass. The broad process is referred to as the Appalachian orogeny, but it was divided into three distinct orogenic episodes. Because they occurred along the converging margins of the Gondwanan and Laurentian plates, they all are examples of what geologists would call “convergent tectonics.”
The Taconic Orogeny. This first stage of Paleozoic convergent mountain-building began during the Cambrian and continued until approximately the end of the Ordovician, about 440 MYA. Known as the Taconic orogeny, it involved eastward subduction of the leading, oceanic edge of the Laurentian plate beneath a plate approaching from the southeast. (Note – all plate directional references are based on North America’s modern orientation). Hatcher (2005). As the dense, hydrous oceanic crust subducted, the heat and release of moisture generated magmas that rose to the surface to create an arc of volcanic islands, as is seen today in subduction zones around the Pacific Ocean basin. In addition, as the overlying plate approached Laurentia, it created an accretionary wedge of the mixed sedimentary rocks of the Iapetus Ocean basin, emplacing them on the Laurentian landmass. These rocks were buried to depths of 15 km or more, where metamorphic processes converted them into the crystalline rocks today known as the Ashe Metamorphic Suite (AMS)/Tallulah Falls(TF)/Alligator Back formations. It seems likely that much of the burial was at least in part under the now-eroded islands of the volcanic arc, which would have come ashore as the Iapetus Ocean closed. In any event, these rocks now form much of the eastern third of the Blue Ridge from north Georgia, across North Carolina, and into Virginia.
Among the polydeformed crystalline formations found in the Blue Ridge are a group of terranes located immediately to the west of the AMS/TF in southwestern North Carolina. These include the Dahlonega gold belt, Cartoogechaye and Cowrock terranes, which collectively form the Central Blue Ridge, divided by major faults from the AMS/TF-dominated Eastern Blue Ridge and the Rodinian-dominated Western Blue Ridge. Merschat, A. (2009). These formations have not been fully interpreted, but from their predominately metasedimentary character, they seem likely to have originated as chunks of the Laurentian crust or margin. Hatcher (2010). Presumably severed during the rifting of Rodinia, they would have been brought back together with the continental landmass as the Iapetus Ocean closed. Because they are located to the west of AMS/TF, it seems likely that they were emplaced at an early stage of the accretionary process; the Cartoogechaye terrane was especially deeply buried, experiencing the highest metamorphic conditions recorded in the Taconic. Hatcher (2010).
The Taconic orogeny also resulted in the creation of batholith-scale plutons, likely a mix of magmas resulting from fractional melting due to burial of crustal rocks at great depth, and possibly from subduction-related moisture releases. Both the Whiteside pluton, running for 30 kms or so to the northeast from Highlands, NC, and the Henderson gneiss, exposed in a broad 100 km-long swath along the western edge of the Piedmont from South Carolina near the Georgia line past the Lake Lure area in North Carolina, Clark (2008), have been dated to 450 – 465 MYA, which would have been the latter stages of the Taconic. Several smaller plutons along the Blue Ridge Escarpment in South Carolina (the Caesar’s Head and Table Rock gneisses) have also been dated to the late Taconic. Jubb (2010).
The Neoacadian Orogeny. The geologic record contains little, if any, evidence of tectonic activity for some 75 million years after the Taconic orogeny drew to a close at the end of the Ordovician. The next orogenic surge affecting Laurentia appears to have begun in the Devonian and continued into the early Mississippian, and is best known by its effects to the east and northeast rather than in the Southern Appalachians. Named the Acadian orogeny to the northeast, the somewhat more recent phase that affected the Southern Appalachians is typically called the Neoacadian. This event is responsible for implanting the Carolina superterrane in today’s North Carolina Piedmont. This landmass, as well as others to the northeast, were probably originally associated with Gondwana, and accreted as the oceans between Gondwana and Laurentia continued to close. Hatcher (2010). As it docked, it brought along an intervening terrane, the Cat Square, which also forms part of today’s Inner Piedmont. Merschat, A. (2009). Interestingly, it appears that the Cat Square originated 200 – 300 km to the northeast and was transported into the North Carolina Piedmont by “channel flow” of ductile (viscous) material beneath the approaching Carolina superterrane, in part along the Brevard Fault Zone, Merschat, A. (2009), a process that potentially was extended by the early-Alleghanian transform collisional forces discussed below. Jubb (2010).
Although called an orogeny for its effects farther to the north, and a major emplacement event in the Piedmont, the Acadian/Neoacadian convergent events did not generate significant mountains in the Southern Appalachian region. Stewart & Roberson (2007). However, careful geological research has identified numerous less dramatic effects of the Neoacadian within the Blue Ridge. These include: a lengthy period of renewed metamorphism, particularly in the Inner Piedmont, Merschat, A. (2009); contributions to polydeformation, in the form of small-scale folding that overprints Taconic deformation, Jubb (2010); and transform faulting along the Burnsville fault, Stachowitz et al (2019). In addition, a scattering of granitoid intrusions have been dated to the Devonian, particularly the Spruce Pine pegmatites and Pink Beds pluton. Hatcher (2005), Jubb (2010).
The Alleghanian Orogeny. The climactic phase of the convergent tectonics that created Pangea and shaped today’s Southern Appalachians is known as the Alleghanian orogeny. Beginning as the Neoacadian came to a close 330 – 320 MYA, it lasted 30 – 40 million years through the Pennsylvanian into the middle of the Permian period, and culminated with the collision/joinder of Gondwana and the eastern margin of Laurentia. Recent interpretations posit something of a north-to-south rolling collision of (using modern names) the west African prominence with eastern North America, with zippered closure from the north, substantial transform pressure in today’s Piedmont, and eventual direct southeast-to-northwest impact with the southeastern North American coast. Hatcher (2010).
Based on modern-day parallels, and on metamorphic levels reflected in rocks at today’s surface, the orogeny created a Himalayan-scale range of mountains, stretching 3,000 kms from Alabama to Newfoundland. Hatcher (2010). At the latitudes of the Southern Appalachians, these mountains were centered to the east in today’s Piedmont, where their heavily metamorphosed roots are now exposed.
Within the Blue Ridge province, the orogeny’s effects were somewhat less direct. Rather than piling mountains on top of the terranes emplaced during the Taconic (and thereby causing a further round of high-level metamorphism), the collision resulted in the detachment and northwestern transport on broad thrust faults of a large, multi-layered crustal sheet known as the Blue Ridge-Piedmont Megathrust Sheet. This thrust structure extended from north Georgia across the Carolinas and into southern New York, Hatcher (2010), and involved transport distances estimated at over 300 kms. Hatcher (2005). Its effect was to push older rocks from the Taconic and the Grenvillian basement over and across younger rocks formed in shallow seas west of the Taconic-era mountains. The basal thrust surface is visible today as the Linville Fault within the Grandfather Mountain Window; also, younger sedimentary rocks, surrounded by mountains formed from older crystalline rocks, can be seen at Cades Cove and similar windows along the western edge of the Blue Ridge.
While the en masse nature of the Alleghanian transport process generally preserved the Taconic spatial relationships among the Eastern and Central Blue Ridge terranes and the more westerly formations of the Laurentian margin, the southern portion of the principal fault system separating the Eastern and Central Blue Ridge (the Chattahoochee/Burnsville/Holland Mountain/Gossen Lead system), was at least reactivated, as evidenced by the Chattahoochee cross-cutting the early-Alleghanian Rabun pluton. Merschat, A. (2009). Moreover, the northwestward pressures left their mark within the Blue Ridge, particularly in fold structures at hand-sample to landscape scales, with a strong SW-to-NE regional strike – another, and probably dominant, Jubb (2010), round in Merschat’s polydeformational count. As also occurred during the earlier orogenies, the earliest stages of the Alleghanian left a legacy of notable igneous intrusions, including the Rabun pluton mentioned above, as well as the Looking Glass. Jubb (2010).
The Alleghanian orogeny is also responsible for shaping two other geologic provinces associated with the Southern Appalachians, the Valley and Ridge and, further to the northwest, the Appalachian Plateaus. These provinces consist of younger, layered sedimentary rocks deposited on a Precambrian base in shallow inland seas following both the rifting of Rodinia and the development of hinterland mountains during the Taconic. In the Valley and Ridge, the northwestward pressure created large-scale folds, striking SW to NE, across a 25 – 50 km swath stretching 1,500 kms into New York. Hatcher (2005). Reflecting variations over time in water depth, the sedimentary layers alternate among sandstones, limestones and shales. The varying resistance of their upturned edges has resulted in ridges dominated by resistant sandstones, and valleys that have been cut into the less resistant limestones and shales, all in parallel alignment with a distinct SW-to-NE strike. Clark (undated).
Farther to the west, a broad area of eroded uplands known as the Cumberland and Appalachian Plateaus stretches from Alabama across central Tennessee northeastward into Pennsylvania. The Plateaus consist of the same young, horizontally layered and undeformed sedimentary rocks found to the west, but are over 1,000′ higher in elevation than those more westerly areas. Their distinctive elevation is generally explained as a product of the orogenic processes creating the Appalachians, but without the folding and faulting found in the Valley and Ridge, which was closer to the impact zone. However, this explanation somewhat oversimplifies the process. A more nuanced approach would credit the Paleozoic orogenies with providing the raw materials for the Plateaus, in the form of sediments eroded off the mountains and ridges formed to the east as Gondwana collided with Laurentia. But the actual lifting does not appear to have been along the types of thrust faults that raised (and distorted) rocks of the Valley and Ridge, but rather was part of the much-more-recent Cenozoic uplift shared with the broader Appalachians and described below. US National Park Service (undated).
It should be noted that while the Plateaus’ elevated terrain can be rugged, this is an effect of modern streams cutting into an elevated landscape, and not of differential uplift or folded layering.
The Mesozoic (252 – 66 MYA) and Cenozoic (66 MYA to present). Following the assembly of Pangea, a process that was completed approximately 265 MYA, tectonic activity affecting the Southern Appalachians largely ceased for about 50 million years. Hatcher (2005). It then resumed in the form of rifting, with today’s Africa separating from North America on a 200 million-year march to the southwest, creating not only separate continents but also the Atlantic Ocean. While Africa took with it a portion of the great Alleghanian mountain range, it left a legacy of metamorphosed crystalline terranes, elevated thrust sheets, and complexly folded structures, all of which began to rapidly erode as the low-lying ocean basin opened to the east.
In recent decades, geologists have explored a serious problem with the easy and popular concept that today’s Southern Appalachians are the result of uninterrupted diffusive erosion of the larger mountain ranges formed during the Paleozoic orogenies. The problem is that expected rates of upland erosion would have leveled the Alleghanian mountains within 50 – 100 million years, far less than the 300 million years since their original formation, or the 200 million years since the rifting of Pangea would have accelerated their erosion. It thus seems that later uplift, or “rejuvenation,” must have occurred. Evidence that this is the case is now accumulating. Some is observational, such as recent erosion reflected in the steepness of V-shaped mountain valleys, and the sinuous paths of such major rivers as the French Broad (which would have developed sinuosity in broad low river bottoms, not in valleys descending from high mountains, but could have maintained the sinuosity in gorges cut during slow uplift of the established streambed). Evidence also includes studies: of knickpoints in relatively young V-shaped valleys, Gallen (2012), and other signs of topographic disequilibrium; of east-west lineaments in an otherwise SW-to-NE-striking landscape; of a mismatch between geologic provinces and points of highest elevation; and of resurgent sedimentary deposition on the Appalachian flanks, Hatcher (oral comments, 2019). Generally, Hill (2020).
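The 50 – 100 million-year figure above can be sanity-checked with back-of-the-envelope arithmetic. A common first-order model treats topographic relief as decaying roughly exponentially; with illustrative (assumed, not measured) values for initial relief and decay half-life, the timescale comes out in the right range:

```python
import math

def erosion_time_myr(initial_relief_m, final_relief_m, half_life_myr):
    """Time (in Myr) for relief to decay exponentially from initial to final
    height, given the half-life of the decay. Illustrative first-order model."""
    decay_rate = math.log(2) / half_life_myr  # per Myr
    return math.log(initial_relief_m / final_relief_m) / decay_rate

# Assumed values: Himalayan-scale ~6000 m relief, a ~15 Myr relief half-life,
# eroded down to ~500 m of residual relief.
t = erosion_time_myr(6000, 500, 15)
print(round(t), "Myr")  # 54 Myr -- consistent with the 50-100 Myr estimate
```

The point is not the specific numbers, which are assumptions, but that any plausible choice yields far less than the 200 – 300 million years available – hence the need for later rejuvenation.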
While there is growing acceptance of regional uplift during the mid-Cenozoic, the process is not understood. No evidence exists of convergent tectonic activity (and related faults and thrust sheets). Isostatic rebound, as the crust adjusts to the removal of mass due to either the erosion of high mountains or melting of glaciers, can create uplift; however, neither of those sources of rebound can explain the Southern Appalachians, which are well west of the Piedmont where the burden of the highest mountains was removed, and well south of significant glacial accumulations. Possible candidate mechanisms related to the mantle include upward pressure from a large area of unusually high mantle temperatures, and circulation effects of the remnants of the Farallon Plate as it completes its eastward subduction under North America from the Pacific coast. The latter hypothesis seems, however, to suffer from lack of any similar effect as those remnants crossed under the Great Plains and Mississippi Valley. Another possibility that appeals to this author is delamination of a large swath of dense, eclogite-rich material from the base of the crust, enabling the relatively lighter remaining crust to isostatically rise. Hill (2020). A zippered event of this sort, beginning in the south and proceeding to the north, could explain both why the Southern Appalachians are today the highest and widest portion of the overall Appalachian structure, and why the unusual lineaments are oriented east-to-west rather than radially as would be expected with an isochronous event creating a single dome.
Regardless of the cause of Cenozoic uplift, the rugged topography of today’s Southern Appalachians is best understood as the interaction of such uplift and renewed erosion as higher elevations increased the erosive power of water flowing off the fresh uplands. It is also in part a product of higher rainfalls associated with Gulf air masses being forced over the region’s renewed elevations. This type of energetic erosion accounts for the steep slopes of many of the region’s ridges, which also often have sharp tops. As a general matter, erosion would have proceeded at a faster pace along the existing sinuous rivers as they cut gorges through the rising landscape (such as the French Broad, and quite likely the sinuous Little Tennessee, Pigeon and Tuckasegee systems), and also in areas where the bedrock has extensive pre-existing fractures (such as the principal valleys in the Highlands-Cashiers area, which frequently are located along the axial planes of long-eroded AMS anticlines). On the other hand, slower erosion would be expected in harder rocks, such as the quartzites of the Black Mountains and metasandstones of the high ridges of the Great Smokies, Clark (undated), and the sandstones frequently found along the ridges of the Valley and Ridge province. Slower erosion would also be expected in less deeply fractured rocks, such as the plutons that form great rock walls and exfoliation domes in the Eastern Blue Ridge, or that underlie the high valleys of the Highlands-Cashiers Plateau (Jacobs, 2019).
References and Bibliography
Clark, Sandra H.B., 2008, Geology of the Southern Appalachian Mountains: US Geological Survey.
Clark, Sandra H.B., undated, Birth of the Mountains: US Geological Survey.
Gallen, Sean F., et al., 2012, Miocene rejuvenation of topographic relief in the southern Appalachians: GSA Today, v. 23, no. 2.
Hatcher, Robert D., Jr., 2005, Southern and Central Appalachians: in compendium of articles on North American geology, Elsevier.
Hatcher, Robert D., Jr., 2010, The Appalachian orogen: A brief summary: The Geological Society of America, Memoir 206.
Hill, Jesse S., 2020, Not old dead mountains, but slumbering giants: Cenozoic uplift of the southern Appalachians: UNCA lecture series, March 6, 2020.
Jacobs, William S., 2019, Whence These Special Places? The Geology of Cashiers, Highlands & Panthertown Valley: Great Rock Press.
Jubb, Mary Grace Varnell, 2010, Paradoxes in the deformational and metamorphic history of the eastern Blue Ridge …: available through the Tennessee Research and Creative Exchange (Trace).
Merschat, Arthur J., 2009, Assembling the Blue Ridge and Inner Piedmont: Insights Into the Nature and Timing of Terrane Accretion in the Southern Appalachian Orogen …: available through the Tennessee Research and Creative Exchange (Trace).
Merschat, Arthur J., Southworth, Scott, et al., 2016, Geology of the Mt. Rogers area, Revisited: Carolina Geological Society 2016 Annual Field Trip Guidebook.
Merschat, Carl E., and Cattanach, Bart L., 2008, Bedrock Geologic Map of the Western Half of the Asheville 1:100,000 Scale Quadrangle, North Carolina and Tennessee: NC Geological Survey, Geologic Map Series – 13.
National Park Service, undated, Physiographic Provinces Series, Appalachian Plateaus Province, https://www.nps.gov/articles/appalachiannplateausprovince.htm
Stachowitz, Liana, Stith, Felix, and Langille, Jackie, 2019, Bedrock Geologic Map of the Clyde 7.5-minute Quadrangle, western North Carolina: UNCA, Technical Report, USGS EDMAP Grant G18AC00099.
Stewart, Kevin G., and Roberson, Mary-Russell, 2007, Exploring the Geology of the Carolinas: The University of North Carolina Press.
second island chain
From a Mandarin term
- (politics) The next chain of archipelagos out from the East Asian continental mainland coast, beyond the first island chain. Principally composed of the Bonin Islands, Marianas Islands, Caroline Islands; from Honshu to New Guinea.
Some definitions of the second island chain anchor the northern end on the Kamchatka Peninsula, with the first link in the chain being the Kuril Islands, continuing through Hokkaido and Honshu. The entire Indonesian Archipelago, from New Guinea to the Malay Peninsula, is also sometimes included. The southern end of the chain is sometimes considered to be anchored in Australia.
Every person these days generates data at an unprecedented rate. Unfortunately, most of this data goes to the cloud, which takes it out of your hands and into the hands of large corporations. You just need to hope that their policies and security measures are good enough to trust them with your data.
But what about data stored locally on your computer or mobile device? If someone gets hold of your media, will they be able to recover the information you deleted? If you didn’t know, in most cases, “deleting” information from your hard drive, SD card, or phone’s internal storage does not actually destroy the data. Instead, this portion of the disk is simply marked as “free space” so that new data can be written to it.
The problem is that there are various methods to recover deleted files from free space. With full disk encryption, this isn’t as much of a problem as it used to be. However, if someone manages to get into your device, the fact that your drive is encrypted means nothing.
This is where free space shredders come into play. These software applications run special data erasure protocols in areas of free space where your file data may still exist, making it nearly impossible to recover any information. Therefore, when you delete sensitive information such as medical, legal or financial documents, you can be sure that it is gone forever.
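To make the principle concrete, here is a minimal sketch, in Python, of the overwrite-before-delete idea these programs build on. It is not taken from any of the tools below; the file name, function name, and pass count are illustrative, and this simple approach does not defeat SSD wear-leveling or filesystem journaling, which real shredders must contend with:

```python
import os
import secrets

def shred_file(path: str, passes: int = 3) -> None:
    """Overwrite a file's bytes in place before unlinking it.

    Illustrative only: journaling filesystems and SSD wear-leveling
    can retain stale copies that in-place overwriting never touches.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # one pass of random data
            f.flush()
            os.fsync(f.fileno())  # force this pass out to the disk
    os.remove(path)  # only unlink after the contents are overwritten

# Demo on a throwaway file.
with open("secret.txt", "wb") as fh:
    fh.write(b"medical, legal or financial details")
shred_file("secret.txt")
print(os.path.exists("secret.txt"))  # False
```

More passes take longer for little extra benefit on modern drives, which is why the tools below let you choose the algorithm.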
Here are five great examples of file shredding programs for each of the most popular operating systems. Remember, it’s not paranoia if someone really wants to get you.
File Shredder (Windows)
File Shredder is a free open source application that allows you to completely wipe data and properly wipe free space on Windows hard drives using the "Disk Wiper" option. File Shredder includes five shredding algorithms, each more powerful than the last; however, more thorough shredding requires more time and processor power.
Apart from being free to use, it is open source software. Any member of the community can make sure there is no malicious code or hidden features in it. The downside is that the tool lacks some features found in paid alternatives. Luckily, the author has put a comprehensive list of commercial alternatives on the File Shredder homepage if you are looking to spend some money.
BitRaser File (Mac)
Speaking of money, if you have a Mac and want the same data-destroying perfection, you’ll have to spend a few bucks on BitRaser for File from the developer Stellar.
While there are several free apps on the Mac store that promise to do the same job, they tend to be less specialized than BitRaser and less user-friendly. This seems counter to the reasons why Mac users love their computers in the first place.
However, BitRaser costs around $40, which is not that expensive, and it does one thing as well as possible. You can destroy individual files, erase entire hard drives, and wipe free space. It also has the ability to automatically destroy sensitive data such as Internet browsing history and cached information.
It includes six data cleansing algorithms to choose from and, most importantly, lets you schedule and automate cleansing tasks. Therefore, even if you cannot connect to your computer, you can be sure that certain data cannot be recovered.
Shreddit (Android)
All of our lives now live on our phones, and there are probably a few things on your Android smartphone right now that you would rather no one ever see. The good news is, with an app like Shreddit, it's a breeze to permanently destroy this data.
It’s fast, depending on which erasing algorithm you are using. Some of the suggested options use up to seven passes to truly ensure that no data recovery technician gets this data back.
The application is integrated with Android file explorer and can work with both internal and external media. The main caveat is that any Android 4.4 or later user will need to root their phones in order to use Shreddit on their SD card.
Therefore, if you do not want to do this, store confidential information only in the internal memory. For most people, however, this shouldn’t be a problem as modern phones either don’t have SD expansion slots or have so much internal storage that SD cards aren’t particularly useful.
The app is ad-supported, but you can make a small donation to remove ads.
BleachBit (Linux)
Linux already has a pretty powerful built-in disk cleanup feature, but using a program like BleachBit is much more convenient. It can automatically wipe sensitive data from many common applications, and it also offers file shredding and free space cleaning functions.
BleachBit is completely free, but will accept donations to support development. It is an incredibly popular tool among Linux users and, given its usefulness, that comes as no surprise. If you want to improve the privacy of your Linux machine, this is undoubtedly the first stop you should make.
iShredder (iOS)
iShredder is available for Android, Windows and Mac, but it is noted here as one of the few iOS shredders. The software isn't free; it will set you back about thirty dollars. iShredder can safely erase an entire device before you hand it over to someone else.
It can also quickly wipe free space on your iDevice, ensuring that all the data you've deleted in the past is impossible to recover. Its deletion algorithms are the same as those used by governments, and it is the most elegant tool we have seen for letting millions of iOS users around the world protect themselves from unwanted data recovery attempts.
Privacy is more important than ever, and it's a good habit to make sure the information you want gone stays gone. Of course, there are many alternatives to the aforementioned multi-OS examples. No matter which tool you use, you'll sleep a little better knowing that no one who grabs one of your hard drives can dig up the dirt, no matter how hard they try.
This report aims to improve targeting of initiatives for households in poverty by increasing our knowledge of the economic activity status and skills levels of households.
Income poverty is set to rise by 2020. Two key ways for policy to increase household incomes are: to reduce worklessness, and to improve prospects for those trapped in low-wage and low-skilled work. However, these interventions tend to focus on individuals, whereas poverty is experienced at the household level. This report explores the following research questions:
- What are the key differences between poor and non-poor households in terms of economic activity status and skills?
- What are the other socio-economic and labour-market-related characteristics that differentiate poor and non-poor households?
- What are the labour market attitudes and aspirations of non-working households?
- How can the research improve the targeting of labour market and skills initiatives for households in poverty?
By Alan Caruba
The rocketing costs of gasoline and the price of corn being paid worldwide are the result of U.S. government mandates requiring the inclusion of ethanol in the gasoline all Americans must use. The time has long since passed to eliminate ethanol from this primary fuel.
A recent report by ActionAid USA, “Fueling the Food Crisis: The Cost to Developing Countries of U.S. Corn Ethanol Expansion” is based on work by researchers at Tufts University. ActionAid USA is an anti-poverty group. The study found that the corn-importing countries of Central America and North Africa are at the highest risk from ethanol expansion—the requirement to include ethanol with gasoline.
“Strong policy should not be based on prayers for good weather, especially when the stakes are so high. From the U.S. Environmental Protection Agency to the G20, it is time to recognize that current biofuel mandates are unsustainable,” said Kristin Sundell, a policy analyst for ActionAid USA.
The group is calling on G20 leaders who are meeting on World Food Day, October 16, to eliminate incentives that encourage unsustainable biofuels production.
The idea behind ethanol is that it reduces carbon dioxide (CO2) emissions and, in doing so, it saves the Earth from global warming/climate change, but CO2 plays no role in climate change, and shows up well after any increase or decrease of temperatures. Ethanol is bad science. It is bad for the engines of cars that must use such a gasoline blend. It increases the cost of gasoline and all other corn-based products. It actually increases the amount of CO2 in the atmosphere. And it reduces the mileage a car can achieve with pure gasoline.
An authority on the U.S. oil industry is Sel Graham, the author of “Why Your Gasoline Prices Are High”. He is a man with more than fifty years’ experience, first as a petroleum reservoir engineer and later as an oil and gas attorney. He is also a graduate of West Point.
Here’s what Graham has to say about the current gas prices:
“Gasoline prices could be decreased instantly by President Obama if he wanted to do so. Republicans have not yet picked up on this issue.”
“Abolishing the ethanol mandate requiring ethanol to be blended with gasoline at the pump or waiving the Renewable Fuel Standard (RFS) would: (1) lower gasoline prices by millions of dollars; (2) result in billions of miles of free travel annually; (3) prevent millions of tons of additional carbon dioxide from being emitted into the air; and (4) improve national security and the energy picture, since it is impossible for US ethanol to ever replace foreign oil imports.”
“The following is reference data for skeptics. Gasoline prices can be lowered instantly by either abolishing the ethanol mandate, which requires that ethanol be blended with gasoline at the pump, or waiving the RFS. This would eliminate the millions of dollars in waivers which refineries are required to purchase because there is no cellulosic ethanol production, thereby decreasing the price of [gasoline. The 2012 requirement for] cellulosic ethanol is 8.65 million gallons. Cellulosic ethanol production through August 2012 has been only 20,069 gallons, a shortage of 8.63 million gallons requiring $0.78 per gallon waivers.”
An essential truth that few Americans are aware of is that “The price of U.S. oil is always lower than the price of foreign oil. Last year, U.S. oil averaged $95.73 per barrel, $7.25 cheaper than foreign oil imports at $102.98 per barrel. If U.S. oil replaced the 3,261 million barrels of foreign oil imports, it would be a savings to Americans of $23.6 billion annually.”
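The figures quoted in the passages above can be verified with a few lines of arithmetic. This is only a sanity check on the numbers as printed, not new data:

```python
# Numbers exactly as quoted in the passages above.
us_price = 95.73             # $/barrel, average US oil price
foreign_price = 102.98       # $/barrel, average imported oil price
imported_barrels = 3_261e6   # barrels of foreign oil imported per year

price_gap = foreign_price - us_price        # how much cheaper US oil is
annual_savings = price_gap * imported_barrels

mandate_gal = 8.65e6         # cellulosic ethanol required, gallons
produced_gal = 20_069        # cellulosic ethanol actually produced, gallons
shortfall_gal = mandate_gal - produced_gal

print(round(price_gap, 2))             # 7.25  ($ cheaper per barrel)
print(round(annual_savings / 1e9, 1))  # 23.6  ($ billions saved per year)
print(round(shortfall_gal / 1e6, 2))   # 8.63  (million gallons short)
```

All three results match the article's claims of a $7.25 per-barrel gap, $23.6 billion in annual savings, and an 8.63-million-gallon cellulosic shortfall.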
Given the enormous oil reserves in America, both domestic and offshore, there is no reason why they should not be extracted, but the environmental movement in combination with the Environmental Protection Agency, the Interior and Energy Departments, has restricted access to our own oil.
The ethanol mandates are not just robbing Americans at the gas pump; they are also driving up food prices worldwide.
Current government energy policies are a definition of insanity.
© Alan Caruba, 2012
|
<urn:uuid:86e844c5-e39d-4956-98bf-172dd8d33830>
|
CC-MAIN-2017-39
|
http://factsnotfantasy.blogspot.com/2012/10/the-great-ethanol-scam.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686034.31/warc/CC-MAIN-20170919202211-20170919222211-00468.warc.gz
|
en
| 0.936852 | 908 | 2.890625 | 3 |
Salt intake levels in many low- and middle-income countries including India continues to be high, says study
Salt intake continues to be high in many countries across the world including India. New studies of salt intake levels from thirteen countries, including India, showed salt intake in all countries is well above the World Health Organization (WHO) recommendation of less than 5 gram per day. The implementation of population-based salt reduction strategies remains a challenge in many low and middle-income countries.
This is the conclusion of an updated review of the evidence on population salt intake globally since the GBD study 2010 was undertaken by an International collaborative research team comprising researchers from the George Institute for Global Health Australia and India, University of Otago, Ontario Tech University and University of Calgary.
This review is the first to compare recent salt intake measurements to those derived from the estimates in the GBD study 2010 and shows that salt levels in six countries, including India, remain the same as the 2010 estimates. Studies were searched from MEDLINE, SCOPUS and EMBASE databases and those that provided nationally representative estimates of salt intake among the healthy adult population based on the 24‐hour urine collection were selected.
An estimated 1.65 million annual deaths from cardiovascular diseases are attributed to salt consumption above the World Health Organization's recommended daily intake of < 2 g/day sodium (5 g/day salt). In the absence of reliable baseline data on salt intake at the population level, it has become quite difficult to evaluate the effectiveness of salt reduction initiatives that have been implemented for most of the countries. The Global Burden of Diseases (GBD), Injuries, and Risk Factor Study 2010 produced baseline estimates for mean population salt intake for each country in 1990 and 2010.
The present study, entitled “The Science of salt: Updating the evidence on global estimates of salt intake”, identified 13 studies measuring national population salt intake conducted between 2011 and 2018. Four of the studies were from low- and middle-income countries (Fiji, Benin, Samoa and India) and nine from high-income countries (Italy, Portugal, Switzerland, England, Canada, the United States, Barbados, Australia, and New Zealand). The recent study from India was undertaken in slum and urban areas and rural communities in North India (Delhi and Faridabad, Haryana) and in South India (Hyderabad and West Godavari, Andhra Pradesh). It showed that salt consumption in India is almost twice the WHO-recommended intake.
The salt intake in the identified studies ranged from 6.75 g/d (6.32‐7.17) in Barbados to 10.66 g/d (10.52‐10.81) in Portugal. Seven countries that recently measured mean population salt intake had levels, which differed from the GBD 2010 estimates: Italy, England, Canada, and Barbados had lower salt intakes and Fiji, Benin, and Samoa had higher population salt intakes compared to GBD 2010. There was no change in salt intake levels in six countries including India. It is not possible to know whether these findings reflect true reductions in salt intake as a result of programs to reduce salt or if they are due to different sampling or salt measurement approaches as many of the GBD estimates were based on different sources.
“Our findings show that salt intake continues to be high and thus there is an urgent need for national salt reduction strategies that are well suited to settings and effective salt reduction policies in achieving the United Nations global targets of a 30% reduction in mean population salt intake by 2025,” said Sudhir Raj Thout, Research Fellow at the George Institute India.
Population salt reduction strategies are urgently needed in LMICs but remain a challenge, as these countries are experiencing a double burden of communicable and non‐communicable diseases, competing health priorities and limited resources for health. However, population salt reduction initiatives have been effectively implemented in many HICs and include a range of activities such as food reformulation, consumer education, front of pack labelling, and interventions in public institutions. There is an urgent need to translate learning to LMICs.
The Ten Kingdoms was a period in the history of Southern China that followed the fall of the Tang dynasty in 907. It lasted until the founding of the Song dynasty in 960. Nine of the kingdoms were in the South and one small kingdom was in the far North. Many states were effectively independent as governorships long before the Tang Empire dissolved. The last of the Ten Kingdoms, the Northern Han, survived until 979.
While Five Dynasties succeeded one another in Kaifeng, the regimes of South China each controlled a separate geographic region. Each court was a center of artistic excellence. The period is noted for the vitality of its poetry and for its economic prosperity. Commerce grew so quickly that there was a shortage of metallic currency. This was partly addressed by the creation of bank drafts, or "flying money" (feiqian), as well as by certificates of deposit, both of which originated in the North. Wood block printing became common during this period, 500 years before Johannes Gutenberg's press.
Wuyue, Chu, Jingnan
Historical accounts suggest that these kingdoms were non-Chinese, possibly Turkic or even Arab. Islam likely entered China through these states, as indicated by the presence of Muslim religious items.
Northern and Southern Han
Relatives of Liu Zhiyuan founded these states. Despite the weakness of the Later Han, these two states lasted a long time, with the Northern Han still standing well into the Song Dynasty.
Former and Later Shu
The Shu state was founded by members of the Shu clan, a royal family dating back to the Han Dynasty. Initially one state, it split into two because of infighting within the family. The Former Shu was controlled by the family patriarchs, while the Later Shu was controlled by Shu Xie and his family.
Southern Tang
The Southern Tang was composed mostly of Manchu and Tibetans, and is considered the forebear of the Qing Dynasty. It was one of the first dynasties to fall, after the other states withdrew support for what they saw as an inferior group.
Wu and Min
There is less historical information on these states than the others; only personal accounts exist. The Wu state is described as very militaristic; however, they hewed to Mohist thought and used their might only for defense. The Min state is described as being wealthy through trade, but weak otherwise. They used trade connections to purchase protection from neighboring states.
Among the other kingdoms and states are Yan, Chengde Jiedushi, Yiwu Jiedushi, Ganzhou and Qi. For various reasons, they are not counted as proper kingdoms by most historians.
Qualities and Characteristics of a Good Laying Chicken
We all want to go into farming, especially the egg production business.
Yet we know little to nothing about laying chickens.
The earlier the better: if we can identify and know the qualities and characteristics of a good laying chicken, we will not keep feeding an unproductive one.
Physical indicators to help you identify poor or good layers
For a good layer, the combs and wattles should be full, large, waxy, bright red and warm. A poor layer has small, scaly, pale, and shrivelled comb and wattles. This is also a sign of possible illness.
A good layer should have flexible pubic bones, wide apart to allow three fingers to fit between them while those of a poor layer are tight, quite rigid and narrow, not allowing the fingers. This wide pubic bone space normally facilitates easy passage of eggs.
When you pull back the tail feathers of the bird and inspect the vent, it should be wide, oval, moist and warm for a good layer. The poor laying bird will have the vent dry, small/tight, round and cold.
The eyes of a good layer should be large, bright, prominent and sparkling. A poor layer often has small, sleepy/dull and sunken eyes.
Good layers have clean-cut, strong, refined heads while poor layers have coarse, meaty/thin, blocky, weak looking head.
A good layer has an abdomen that is deep and soft, easily pliable without body fat accumulation by probing fingers while a poor layer has a hard and shallow abdomen.
This depth of the abdomen is measured between the breast bone and the pubic bones.
By behaviour, a good layer is normally alert to her surroundings and is not lazy. The bird is active and exhibits normal chicken behaviour like scratching litter and running around with others.
Poor layers, on the other hand, look dull and are most of the times droopy.
Delaying culling: The moulting period (when birds lose feathers) requires a better understanding of the feathers. It is thus advisable to delay culling when a significant portion of the flock is moulting, lest you remove some good laying birds.
During this time, most hens stop producing eggs until moulting is complete. Laying for some chicken may not be affected, but their moulting may be lengthened.
Moulting in good layers starts late and is quite rapid while in poor layers, it starts early and is slow, making the latter appear better groomed.
The grooming does not reflect good laying; in fact, in late moulters the feathers are replaced at the same time they are lost, enabling them to return to their full production sooner.
Dirty and ragged feathers
The feathers of an active laying hen should be dirty and ragged looking. This is because they use much of their energy on producing eggs and are more prone to playing in the dirt or being followed by roosters.
A hen that looks clean and perfect most of the time could be a poor layer. Be careful when dealing with pullets (young hens), lest their size makes you rule them out as poor layers.
In general, all the indicators will help you do away with the unproductive part of the flock, a practice known as culling.
Ideally, culling should be a continuous exercise throughout the entire production period until the whole flock is productive no more.
There are several techniques that you can try to organically rid your garden of squash bugs. Since some will work better than others in your individual circumstances, if one method doesn't seem to work, try another.
Natural Pest Control for Squash Bugs
Now that you know what squash bugs are and what they attack, you can better understand how to combat them. One of the most important things you can do is to isolate cucurbits from each other in your garden. Try not to plant more than two plants from the cucurbit list within eight feet of each other. This way, if you get an infestation in one set of plants, you may be able to save the others. Use organic pest control recipes to make your task quick and easy.
There are plants that squash bugs just don't like. Sometimes, planting these around and among the cucurbits can deter the squash bugs completely. The plants that seem to be the most effective for this purpose are:
- Bee Balm
You can also deter squash bugs by avoiding the plants they are most likely to attack. These include:
- Yellow Squash
- Winter Squash
You can also use these as a trap crop, meant to keep the squash bugs away from your zucchini and cucumbers. Allow the trap crop to grow as late as possible, long after your other cucurbit crops have been harvested. The bugs will be drawn to your trap crop and you can use 1% Rotenone or another method of killing them. This will greatly cut down on the number of bugs that infest your garden the next year.
Planting varieties that are resistant to squash bugs can also help keep them from taking over your garden.
Plant Later in the Season
By planting later in the season, you may avoid squash bugs completely. This can work very well in areas where there is only one growing season. It doesn't work as well in areas like the South and Southwest where there is more than one growing season.
Use Floating Row Covers
You can use gauze to cover your plants. This lightweight material keeps the bugs off your plants, but allows sun and rain in to them.
Certain other insects are the natural predators of squash bugs. These will not kill off all the bugs on your plants the first season, but will cut back on infestations over the next several seasons. Using natural predators should be part of a program to get rid of squash bugs; it shouldn't be the only technique used.
Some of the natural predators are:
- Ground beetles
- Tachinid flies
Pick Them Off
It is important to inspect your plants often and pick off the squash bugs, nymphs (immature stage), and eggs that you find. You can drop the bugs into a can of molasses. When you are done picking them off just put a top on the can and leave it out in the sun. By the next day, when you repeat the process, the previous day's bugs will be dead.
Also, cut and dispose of any wilted or brown leaves, or any leaves that touch the ground. These are prime areas for the bugs to congregate. Do not put these leaves in your compost. Burn them or bag them and throw them away.
Use Neem Oil
You can spray neem oil on the plants that are susceptible to squash bugs. Use it every two weeks throughout the growing season.
Plants that Squash Bugs Attack
The plants most likely to be attacked by squash bugs are in the cucurbit family. These include:
- Melons of all types, including Bitter Melon
- Summer squash
- Winter squash
Commitment Is Key
Squash bugs are a common problem, but you can get rid of them organically. It may take a few seasons, but you can get a squash bug infestation under control with some common sense and the commitment to implement the techniques and the tips given here.
How Important Is “Feeling Good about Yourself”? (page 2)
Ask a group of educators what the important school outcomes are, and undoubtedly one reply will be that students should "feel good about themselves." The perceived importance of ego enhancement has resulted in various programs designed to build students' self-esteem by stressing their worth as individuals and praise from others.
Although self-efficacy is a key variable in Bandura’s social cognitive theory, self-efficacy is not the same as “feeling good about yourself.” Self-efficacy refers to specific beliefs about what one believes one can do. The best (most reliable) source of information to gauge self-efficacy comes from one’s actual accomplishments. Programs that seek to raise self-esteem and that are not tied to learning or performance are ineffective in raising task-specific self-efficacy, motivation, or skills (Pajares & Schunk, 2001).
It is true that self-efficacy can be raised by persuasion from others (e.g., “You can do this.”). But this increase will not endure if students actually try the task and fail. A real danger of self-esteem enhancement programs is that they do not prepare students for life’s realities. Sooner or later everyone fails, receives criticism, and becomes discouraged. More important than raising self-esteem is to teach students strategies for coping with difficulties and rebounding from setbacks, which likely requires changes in the school curriculum. In short, although developing in students a healthy sense of self-esteem is a laudable goal, it should not be the preeminent goal of schooling and must be tied to a curriculum that teaches them skills and strategies for self-regulating learning.
© ______ 2008, Merrill, an imprint of Pearson Education Inc. Used by permission. All rights reserved. The reproduction, duplication, or distribution of this material by any means including but not limited to email and blogs is strictly prohibited without the explicit permission of the publisher.
Brahmastra (Sanskrit: ब्रह्मास्त्र, Brahmāstra) was a weapon created by the creator Brahma, for the purpose of upholding Dharma and Satya (truth).
When the Brahmastra was discharged, there was neither a counterattack nor a defense that could stop it, except by Brahmadanda, a stick also created by Brahma.
It could be evoked by its user after intense concentration and meditation on its creator, and could be used only once a day, to destroy an enemy who could not be defeated by any other means.
Brahma had created a weapon even more powerful than the Brahmastra, called the BrahmaSirōnāmāstra, which was never used in war, as it had power equivalent to (Brahmāstra)⁴, representing the 4 heads of Brahma. In the Mahabharata war, it was known only to Dronacharya, Aswatthama, Arjuna and Karna.
The Brahmastra also causes severe environmental damage. The land where it falls becomes barren for many centuries, and both men and women become infertile in that region.
Greenery vanishes, rainfall decreases, and the land develops cracks as in a drought.
People in that region are genetically affected and give birth to defective children for the next few generations.
All these descriptions indicate that the Brahmāstra was indeed a nuclear weapon, as the effects sound similar to what happened in Hiroshima and Nagasaki, Japan, during the Second World War.
Brahmastra usage is mentioned multiple times in the Puranas and epics.
- Viswamitra used it against VaSishTa, but the Brahmastra was absorbed by the Brahmadanda, as VaSishTa was a brahmarshi (a rishi capable of alternate creation, like Brahma).
- In the Ramayana, Rama tried to use it to make a way through the sea so that the army of Vanaras could march towards Lanka. But Samudra (lord of the oceans) appeared and told Rama about the technical issues of using the weapon, requesting him not to dry up the ocean and kill all the living beings in it.
So, Rama aimed it towards Dhrumatulya, which fell at the site of modern-day Rajasthan, causing it to become a desert.
Later, Rama chose to build a bridge instead, which still exists between India and Sri Lanka.
- Indrajit used the Brahmastra to capture Hanuman, who was destroying the Ashok Vatika after discovering Seetha.
Indrajit also used it against Lakshmana, but it was countered.
- During the confrontation of Arjuna and Aswatthama in the Mahabharata, both evoked the BrahmaSirOnAmAstra, but the combined power of the two weapons would have ended all life on earth. So Veda Vyas intervened and asked them to withdraw their weapons. Arjuna could call his back, but Aswatthama had no idea how to recall it, so he redirected it to attack the unborn grandchild of Arjuna (Parikshit), who was still in his mother's womb.
This withdrawal, or calling back of the weapon after its use, sounds similar to a boomerang.
There are multiple weapons described in the Puranas and Vedas, such as Agneyastra, Brahmastra, Garudastra, Kaumodaki, Narayanastra, Pashupatastra, Shiva Dhanush, Sudarshana Chakra, Trishul, Vaishnavastra, Varunastra, and Vayavastra; but the Trishul, the Sudarshana Chakra and the Brahmastra are the most powerful.
Brahmastra – Nuclear Missile evoked and energized by Gayatri Mantra
The Brahmastra is released by the Gayatri Mantra, but chanted in a different way.
Any weapon, or even a grass straw, can be energized by concentrating and chanting the Gayatri Mantra in the exact reverse sequence of its syllables.
This method of chanting a mantra is known as viloma (the normal way is anuloma).
The combined effect of anuloma-viloma chanting multiplies the power of the mantra, and the sadhaka attains siddhi more quickly than normal.
If it is so simple, then why can't everyone who knows the Gayatri Mantra release the Brahmastra?
In mantra SAstra, a sadhaka (practitioner) gains siddhi over a mantra only after practicing it for a certain period of time, a specified number of times, with immense concentration.
So, one first has to be initiated into chanting the Gayatri Mantra in the proper way, then practice it for many years and gain command over it.
Then he has to practice the reverse chanting of that mantra at the same speed and frequency, and again attain siddhi in it.
Only after this is a person trained in how to chant the Gayatri Mantra for missile purposes. He has to gain siddhi in it, and once he acquires it, he becomes energized.
With that energy, when he releases even a grass straw by chanting the mantra, it turns into the Brahma missile due to his own charged energy, and the missile in turn derives its power from the creator, Brahma.
The entire mantra SAstra is based on the concept that sounds produce vibrations, and in turn frequencies, which can kill, heal or transcend.
We have seen practically how high-pitched sound breaks glass and other objects.
The Atharva Veda has proved that mantras can change the weather, bring rainfall, produce heat, change thoughts in the human minds around us, control animals and birds, etc.
Ahirbudhnya Samhita of the Pancharatra Agama volume 1:
phantam vahnisamayuktam vyoma halasamanvitam |
mesadvayam dantayutam halahalamatah param || 34-5 ||
ghanadyam vayupurvam ca dantayuktamathantimam |
sarasam carksaparyayam bhantam bhrgumatahh param || 34-6 ||
ambaram vayusamyuktamarimardanamapyatah |
pradIptamatha vaktavyam paramam ca padam tatah || 34-7 ||
tatte pade prayoktavye gayatryaA madhyamam tatahH |
padatrayam prayoktavyametad brahmastramIritam || 34-8 ||
“It contains air, fire and cosmic poison, two goat-like fangs, full of poison, weighty, emits air, contains mercury, fiery, sparkling, sky is filled with air, enemy killing greatly radiant and it is projected with three hymn, Gayatri at its centre, it is known as brahmastra”
When a Sadhaka meditates and raises his Kundalini (Brahmastra in this context), it derives energy from his base chakra (Mooladhara) and propels upwards.
Then it penetrates through the 5 other chakras, deriving energy from each of them at every stage.
Finally, it hits the target: the Crown Chakra (Sahasrara), and explodes there with brilliance.
That explosion annihilates all "Illusion (Maya)" and leaves the sadhaka with the debris called the "Aham Brahmasmi (I am Brahma)" feeling.
We don't need to evoke the Brahmastra today to kill people or destroy creation.
Instead, we can practice the Gayatri Mantra with reversed syllables and release our inner Brahmastra to achieve our target of Nirvana or Salvation.
The “premediation”1 of self-driving vehicles as an almost inevitable future is in full swing, with repeated reports and opinion pieces as well as academic research concerned with the potential technical, legal, political, and social transformations.2 The discourse varies from excited anticipation of the suggested transportation revolution to skeptical visions of self-driving vehicles and their vulnerable algorithms.3 Automated cars often appear as either “utopias of mobilities,” potentially distributing and organizing mobilities in new ways, or as dystopias of malfunctioning technology.4 The suggested “new automobile paradigm” significantly affects not only the act of driving but also our “aesthetic, emotional and sensory responses to driving, as well as patterns of built environment, political process, sociability, habitation, family and work.”5 “As a complex amalgam of interlocking machines, social practices and ways of dwelling,”6 the imagination of automated automobility puts into question not only the control of the vehicle but also how the entire system of driving implies, affords, and enhances multiple senses, thoughts, and feelings (e.g., of safety, power, security, citizenship, etc.) on an individual and societal level.7
In US culture in particular, the automobile promoted multiple grand narratives of independence, individualism, and freedom, transforming the United States into a “republic of drivers” in the twentieth century, if mainly for white men.8 While both women and men of all races need to be considered within this notion of “driver” today, the history of the car has been highly gendered and racialized, with divergence in the respective social and cultural access and acceptance of driving for nonmale or nonwhite drivers.9 The system of automobility also deeply shaped racialized and gendered spaces of American cities.10 Now, with the emergence of automated cars, the automobility system is set to become more connected with mobile media (as such devices become the “key” for ordering and unlocking the vehicle’s capacities for navigation, personalization, communication, and signification) and with “smart” infrastructure that will potentially reshape urban space, including its racial and gender formations. Given that systems of automobility and communication technology are already gendered and racialized in particular ways, one can ask how emerging automated technologies both reconfigure and reproduce gendered and raced representations, meanings, and practices of (auto)mobility. Insofar as transitions in automobility engage embodied relations of power that are already deeply performative of car cultures and urban space, one must ask how performances of automated (auto)mobility will either change or reproduce the racial and gendered constructs of space and mobile subjectivity that are embedded in material traces, objects, and spaces of transportation.11
In this article, we will draw on current media images of and discourses around the driverless car to explore several different hypotheses. As advanced media and communication technologies, companies, and users become more deeply integrated with the autonomous car’s design and marketing, we seek to explore how they might further mesh capabilities for communication, social media, and augmented reality into the fabric of the vehicle and the fabric of urban space.12 Rather than a degendering of the driver, we suggest a multiplication of gendered/raced technologies of (auto)mobility focused on the individualized white masculine mobile subject via several forms of hypermediation. By qualitatively analyzing several audiovisual depictions of “self-driving” cars and briefly reflecting on the discursive topics surrounding them, this article intends to illuminate gendered and racialized configurations concerning the projected utopian space for a particularized human body in the imagined robotic car in these dominant visions of autonomous automobility.
Gendered and Raced Automobilities
“Like it or not, in our present culture, our activities are coded as ‘male’ and ‘female’ and will function as such within the prevailing systems of ‘gender-power relations,’” writes Susan Bordo.13 Consequently, it seems important to “uncover the variation in these gender-power relations, the way in which they are maintained and the ways in which they might be undermined.”14 Since the 1970s, mobility research has brought varying degrees of attention to “gendered mobility patterns,” encompassing mundane and extraordinary movement, as well as social, cultural, generational, and economic mobilities.15 Noteworthy are the many ways gender can be understood and explored. A reductionist “binary” approach obscures the multiple dimensions through which gender is constructed, performed, and related.16 Nevertheless, the simplification of “male” and “female” is comparable to numerous other simplistic contrasting pairs describing social, cultural, and spatial phenomena. In this sense, the dialectic of public and private has been assigned to man and woman similar to the binary of flow and fixity, mobility and place.17
With regard to automobility, several scholars have highlighted how the constructed and performed relation of women to the automobile contrasts the respective man-car relations regarding functionality, safety, or design and accessories.18 In its operational ease, cleanliness, and limited range, the early electric car was perceived as particularly suitable for (white) women, compared to the noisy, unreliable, and skill-requiring petrol car as masculine “adventure machine.”19 Insofar as the male-oriented driving culture of the petrol car is being reconfigured by the emergence of electric self-driving vehicles, where all people become “passive” passengers rather than “active” drivers, is there a feminization of the “gender-power relations” of automobility?20 The “masculine” act of driving is assigned to the autonomous car, which leaves the former human driver on the feminized passenger seat.
Similarly, historians such as Kathleen Franz and Cotten Seiler have detailed specific regional histories of the racial politics of automobility in the United States, highlighting the experience of African Americans.21 The field of transportation equity also documents the inequitable race and class distribution of transport access,22 creating what Tim Cresswell calls the “mobility poor,” who in the United States are predominantly black, Latin American, or racialized immigrant populations.23 Paul Gilroy has analyzed the deep link between racial/cultural performances of automobility and the African American search for freedom, arguing that “automobiles acquired a particular significance in the context of the US racial nomos—a legal and spatial order—that secured segregation and promoted the reproduction of racial hierarchy.”24 For black men in particular, the historical struggle against transport inequity and for full citizenship is articulated with cultural practices of automobility, including those involving the use of media such as the car radio.25 Racial formations therefore remain central to understanding mobility transitions and the underlying racial politics that reproduce mobility injustice. As Stephen Zavestoski and Julian Agyeman argue, “streets should not be thought of as merely physical spaces, but as symbolic and social spaces.”26 The production of space through the racial segregation of neighborhoods, private and public spaces, transit corridors, and vehicles are all key arenas of racial domination, racial privilege, and demands for racial justice, politically contested via black women’s famous protests for access to public transit but also via black masculine subaltern car cultures.27 How do visions of automated driving also interact with these raced/gendered formations?
This article seeks to examine how visions of automated driving address these racial and gendered forms of automobility, both explicitly and implicitly. We explore how the gendered and racialized affordances and spaces of the self-driving car as they are currently imagined, projected, and designed continue or reconfigure past and present gendered and racial representations, meanings, and embodied practices of (auto)mobility and car culture. Although mobilities research has shed light on gendered and raced notions of mobility in the past, a similar awareness has been largely missing in recent policy discourses regarding transportation transitions28 and driverless cars in particular.
A prevailing metaphor in the literature surrounding car culture is the automobile as extension of the driver’s body.29 Tim Dant extends this notion and speaks of the “driver-car” as “both an extension of the human body and an extension of technology and society into the human.”30 In other words, the automobile and particularly the act of driving invoke a specific kind of “subjectivity—simply put, the way of being in and perceiving the world around us.”31 Thus, driving—constituted by the car—can be seen as a kind of medium that affects feelings, thoughts, and perceptions. Marshall McLuhan and media ecologists following in his footsteps strongly promote this comprehensive approach to “technology,” arguing for any human-made artifact to be seen as an extension of human faculties impacting motions, notions, and emotions of individuals on a micro level but also affording social and cultural transformations on a macro level.32 Similar to Ernst Kapp’s understanding of tools as organ projections,33 McLuhan views wheels as extending the human feet, the car shell as extending the human skin, and so forth. However, once an organ is extended out too far, a numbness occurs, blurring the relation the human had to the medium. Eventually, the numbed extension is amputated, inhibiting a realization of what the medium used to be, what it has turned into, and what implications the loss of this organ has for the human. While humans shape their extensions, the amputated technology has the potential to shape them and their environment.
As we see even the act of driving being extended to the autonomous vehicle, we are urged to question how the internal and external car ecology, and our place within it, changes. However, this foray into the car-driver hybrid noticeably ignores the gendered and raced meanings of driving, and how these are inscribed on the driving body, on the car, and on the surrounding space of the roadway and city. In his expression “extensions of man” (emphasis added), it is unclear if McLuhan acknowledges the paradigm of the motor car centering on masculinity, not to mention whiteness, as a mobile form of freedom and mastery. In either case, it is crucial to connect a media ecological approach with an awareness of gendered and racial material social relations and symbolic meanings as technologies of media and mobility are envisioned, designed, and introduced. This goes beyond existing approaches to the “car-driver” as hybrid assemblage because it addresses not only the sociotechnical system but also its associated social practices and differentiated forms of subjectification. When technology has shifted the affordances for gendered and raced meanings, environments, and performativities, a media ecological perspective calls for asking how affects, sensitivities, and subjectivities transform.
This inquiry employs such a media ecological approach to the gendered and racialized spaces of the driverless car within the qualitative textual analysis of two concept car previews from traditional automotive manufacturers, Nissan (Japan) and Volvo (Sweden). We consider current autonomous and electric concept car visions and to what extent the moving images address, confirm, reproduce, or reimagine traditional gendered and racial components of automobility. Of special interest is the interior space of the vehicle and its affordances for the human. Despite the suggested transformation of the masculine motor car to the feminine electric car and the “amputation” of the masculine driver toward an all-passenger feminine space, the futures promoted in the concept car films still feature the car as an extension of the empowered man. At the same time, the imagery reiterates the white and Asian male as the early adopter of this technology while moving through urban spaces that have been “sanitized” of racial minorities.
Furthermore, the analysis of the images is complemented and expanded by drawing on the journalistic and academic discourse around present and future automobility. Here, we aim to show to what extent the autonomous vehicle continues to invoke spatial metaphors as a sanctuary and communicative environment, but also as traffic trap, virtual glass house, and algorithmic target. Surfacing in this second instance are notions of nonhegemonic spaces including both gender and race. These different gendered (pre)mediations lead to a hypermediation of overlapping hegemonic (masculine, white) and nonhegemonic (feminine, nonwhite) spaces and affordances of depicted and debated autonomous automobility.
This analysis thus explores past and present “imagined futures” to unpack “underlying cultural, political, and economic logics that continue to animate dreams of technological and social mastery over everyday life.”34 Richard Grusin’s concept of “premediation” as the imagination of multiple possible futures affecting the present resonates in this approach.35 Exploring such “literature of the future”36 can give important insights into how the development and introduction of self-driving vehicles and their gendered and racial dimensions are advertised, promoted, or transformed.
Gender and Space in Depictions of Autonomous Concept Cars
The concept cars analyzed here are unlike the Mercedes-Benz F 015 “Luxury in Motion” self-driving concept car, geared toward affluent customers, or the Chevrolet FNR concept car, which explicitly addresses traditional masculine car cultures, being described as “like a Hot Wheels car for The Matrix.”37 Similarly, the Google self-driving car project overtly turned toward nonhegemonic depictions of women, children, the elderly, and impaired people as early test riders in its playful video of its prototype. In contrast, the electric driverless cars of Nissan and Volvo are positioned as more complex and subtler in terms of what gender, age, race, and class they intend to attract. Nevertheless, a closer look at their future visions shows distinctive racialized and gendered spaces and affordances that are expressed through driving and communicating technologies, as well as the ways in which these are joined together to interpolate the masculinity of the repositioned “driver.”
The Nissan IDS Concept
At the Tokyo Motor Show in October 2015, Nissan Motor Co. Ltd. unveiled its vision for the future of autonomous driving and zero-emission electric vehicles: the Nissan IDS Concept car. Equipped with two different driving modes, Piloted Drive and Manual Drive, the sleek self-driving car presents the first step toward a safer and cleaner driving experience. “Nissan’s forthcoming technologies will revolutionize the relationship between car and driver, and future mobility,” Nissan president and CEO Carlos Ghosn confidently proclaims when introducing the Nissan IDS Concept.38 Connecting advanced vehicle control systems with state-of-the-art artificial intelligence, the concept car is one example of how promotions of autonomous driving premediate gendered affordances and racial spaces, examined here through the Nissan IDS Concept preview trailer “Together We Ride.”
The approximately seven-minute clip starts out with a disclaimer announcing that what the viewer is about to see “excites on a whole new level. A little taste of what’s next of what we call Nissan Intelligent Driving.”39 The narrative then begins with synchronized shots between car and man as a sleeping, presumably Japanese, male slowly awakens to automatically opened windowshades and freshly brewed coffee. His schedule is projected onto a glass wall with a profile image of the Nissan IDS seemingly informing him about his duties: “Together We Plan” states a written insertion, a style repeated throughout the clip. The young, bearded protagonist appears to live by himself in a state-of-the-art home with the car serving as chief of staff. The recurring “Together We” titles indicate a partnership between man and machine, in which the vehicle—so far—takes up feminine roles of daily domestic organization and planning displayed by the translucent screens of the “smart” home. The image of a young Asian woman briefly flickers on his schedule projection. A full shot of the car immediately follows, visually suggesting a triangular relationship between the three, with the car insinuated into his interior space and personal relations, not just his mobility.
The Nissan IDS Concept pulls toward the entrance, greeting the young man with a message illuminated on the car’s front: “Good morning, Hiro!” He steps into the spacious vehicle, buckles up, and chooses Manual Drive over Piloted Drive.40 The manual option allows him to drive himself as a steering wheel in the style of a gaming console replaces a flat screen in front of the driver’s seat.41 Hiro drives in a futuristic urban environment, largely empty of people, while sensor activity is assisting him “behind the scenes” by monitoring conditions.42 The human man enjoys an illusion of full control, while the “intelligent” car is still a significant if not more powerful player in the background.
Soon Hiro operates the command switch and Piloted Drive sets in. The steering wheel recedes back into the instrument panel, and a flat screen featuring a seemingly agendered and aracial stick figure face, humanizing the car as communicative agent, takes its place. The vehicle takes over driving, simultaneously adjusting Hiro’s schedule to any changes. As the protagonist arrives at his first destination, an older woman with a walking cane steps into another Nissan IDS, suggesting the car’s suitability for senior citizens and the impaired. Although she is sitting in the driver’s seat, she does not—in contrast to Hiro—immediately switch to Manual Drive. While this action is plausible in light of her age and suggested impairment, it presents a subtle instance in which the interior space of the car is gendered in terms of driving the car (male) and not driving the car (female).
This observation is confirmed when the previously displayed young woman enters the narrative, first seen with two non-Asian men—the first suggestion of racial diversity in the video—and departing separately. “Hello, Yume” the Nissan illuminates in its front panel, recognizing Hiro’s female companion. Both get into the car—Hiro in the driver’s seat, Yume in the feminized passenger’s seat (see Figure 1). Hiro represents a “knight in shining armor,” a masculine metaphor that the car medium retrieves from the past, according to McLuhan and McLuhan.43 As the couple enter a scenic mountainous area, the car seems to be taking pictures of the outside and inside of the vehicle. Yume catches Hiro romantically gazing at her in one of the photos. While Hiro appears embarrassed, the animated eyes of the intelligent car blink, possibly innocently, amusedly, or conspiratorially. The triangular relationship between the three actors is manifest along with several gendered components of the scene. As passive passenger, Yume fulfills the female cliché, while Hiro, who is just as passive in this sequence, maintains control in the driver’s seat with the steering wheel a click away. The traditional sense of passivity inherent in being a passenger is thus unevenly distributed between Hiro and Yume. Moreover, the animated “face of the vehicle” is placed in front of the “driver’s seat,” inviting that passenger more directly to engage with it while not steering. Additionally, the car’s communicative affordances, such as image taking, are highlighted when the female protagonist is in the vehicle, confirming gendered notions of communicating being feminine.44
While Hiro’s stereotypical masculinity was briefly threatened in the exposure of his prolonged gaze on Yume, he has the chance to retrieve and display his manhood in a suggested race with three motorcyclists who drive alongside in a challenging way. The previously playful music switches into a rock tune, as Hiro again operates the command switch and “takes back control” of the car. (In contrast to the narrative, the inserted title, “Together We Sense,” implies that Hiro is in fact not the sole pilot of the movement, but that the car is still sensing and assisting when needed.) The young man remains in Manual Drive while skillfully navigating the now urban environment, which is audiovisually enhanced with soaring sound effects. Attracting views from impressed and noticeably “diverse” young pedestrians, the car races through the streets at dusk, while inserted titles suggest a “perfect partnership between man and machine” (emphasis added). The beat of the music picks up as the car drops off the couple at the venue for Hiro’s act “live on stage.” A huge animated projection on a building’s façade and a cheering crowd, including non-Asians, inform the viewer that Hiro is a popular DJ. Here, he represents the hip, distanced masculinity of electronic dance music, suggesting a parallel between spinning turntables and the shift to the piloted automated car, replacing both the traditional motorcyclists and their rock music.45 With a last glance at the vehicle, Hiro learns that it will be “ready when you are” as illuminated in front of the instrument panel.
In terms of gendered and racial space and affordances, Nissan’s vision of future “intelligent mobility” implies the car-(hu)man relationship to equal that of a modern “superstar” with a competent assistant and partner. The Nissan IDS Concept not only smooths out the protagonist’s daily rhythms but also assists his capabilities and increases his desirability with a racially “alike” female but with a racially diverse fan culture. “The car takes part in the ego-formation of the driver as competent, powerful and able (as advertisers have tapped into),”46 while depicting an environment that supports his gendered and racial identity. Despite the autonomous vehicle taking over significant portions of control, the depiction of the concept car suggests a simultaneous empowering of the owner and of Japanese cultural leadership in advanced technology and global DJ culture. Noteworthy is the visualization of the independent male as empowered “driver” personified by Hiro and the seemingly more dependent female as empowered “passenger” in the example of the older woman and Yume, repositioning a particular version of Japanese masculinity.
Despite multiple directions the visualization of the autonomous automobile space could take, Nissan draws on existing paradigmatic conceptions of male and female, and Asian and non-Asian, in this premediation of self-driving vehicles. Moreover, it remains a personally owned and highly personalized experience of technology; there is no hint of a “shared mobility” culture. By suggesting the vehicle’s position as domestic manager, scheduler, chauffeur, communicative medium, and matchmaker, the depiction allays any anxieties over the driver’s loss of masculinity. It reinvents existing gendered notions of automobility by narratively and audiovisually stressing a postmodern masculinity. The hints of a slightly diverse Japanese urbanism, in which racial minorities are associated with nighttime music scenes, also safely assimilates global influences while other Japanese cultural cues remain undisturbed.
The Volvo Concept 26
In contrast to Nissan, Volvo focuses on an interior design concept available for sale today, initially omitting concerns with futuristic exterior design and technology.47 Named Concept 26 because drivers in the United States may regain an average of twenty-six minutes per commute when letting the vehicle drive, the concept, unveiled at the Los Angeles Motor Show in November 2015, centers on innovative seat design and transformative “driving” modes: Drive, Create, and Relax. While Nissan presents a short narrative in “Together We Ride,” Volvo combines its futuristic depiction of autonomous automobility with interview snippets from project representatives in a corporate documentary-style format secured by authoritative male voices.
The Volvo Concept 26 preview starts with the words “Cars have always been a symbol of freedom,” a quote then attributed to Thomas Ingenlath, senior vice president of design, who further suggests that “autonomous driving will soon broaden the experience of how people spend their time in the car.”48 “Vehicles are driven by people. Everything we do at Volvo has always been taking care of people and of our products,” says Doug Frasher, advanced concepts director at Volvo Monitoring and Concept Center. His reassuring and even paternalistic words are accompanied by a vintage clip of a couple driving in a car, the man in the driver’s seat and the woman in the passenger’s seat. Next, two young girls sit in the back seat of a present-day vehicle laughing and playing, as Frasher admits that “in today’s world, everybody is stressed for time.” In this brief sequence, the clip already brings up established gendered notions of driving, and Frasher’s voiceover suggests the continuation of traditional driving aspects by emphasizing that Volvo’s goal “has always been” the same. Thus it generalizes “traditional” ideas about driving as freedom, pressured time, and taking care of family.49
When Anders Tylman-Mikiewicz, general manager of Volvo Monitoring and Concept Center, mentions how “broken” aspects of contemporary driving need to be “fixed,” the clip features a five-lane congested traffic situation in California. A medium shot of a male driver seemingly distressed in this congestion follows, supporting the previously featured vintage footage of the man as driver. Instead of focusing on the technology, Tylman-Mikiewicz explains that Volvo puts “people” at the center of each question. This somewhat “soft” approach centering people over technology in the introduction and familiarization of customers to autonomous automobility hints at a feminized strategy. Moreover, Volvo’s concentration on the interior design of the vehicle similarly invokes associations of the semiprivate feminine domestic space placed at the core of the emerging automobile culture. However, this interpretation stands in contrast to the continuing visual emphasis on stereotypical gender distributions in the depiction and narration of Volvo’s autonomous driving future.
As the two aforementioned white male interviewees keep stressing the importance of control “for humans in general” and, consequently, for Volvo, the images exclusively portray a white man (again further masculinized with a beard) presumably being “in control” inside the Concept 26. “Driving” and “delegating driving” are the two rhetorically active and thus stereotypically male mobile modes, further assuring control and mastery of the vehicle. Similarly, the more specific modes of Drive, Create, and Relax all avoid passive connotations. As the driver—sitting in front of the steering wheel—delegates the car to switch into Relax mode, the steering wheel and ergonomic seat retract, allowing extra space for the male body. (There is no mention of the ergonomic seats being designed for different-sized people.) Legs crossed in a relaxed position of ease, the man is depicted reading a book, while the two narrators continue to confirm the maintenance of human control over the car (see Figure 2).
The driver seems to have “delegated” the Create mode when a twenty-five-inch flat screen monitor slowly flips up from the passenger side dashboard. “Show me my unwatched episodes” reads the screen, with a flicker of a female image on one of the media options, suggesting the user of the car is about to consume media. Interestingly, these increasingly relevant communication affordances of the self-driving car are assigned and placed in front of the feminized passenger seat. Traditional comprehensions of passivity but also consumption and communication as gendered toward the female are thus upheld.
As Tylman-Mikiewicz speaks of the “amazing things” the “human brain” can do while engaged, we see images of a different white male driver (with beard) occupied with focusing on the road while, in the following shot, a white female passenger is resting with eyes closed in the back seat. This image of the sleeping woman is awkwardly juxtaposed with Tylman-Mikiewicz’s comment about the brain being “not that great when we get bored.” While the narrator is speaking of distraction from outside cues and smart devices inside the vehicle and is not necessarily suggesting female mental inferiority, this sequence nevertheless reinforces gendered notions of automobile space and culture as one in which female occupants might pose a risk, or be in need of greater protection.
When the white male protagonist engages with the tablet between the two front seats, an e-mail from a person named Eva appears, announcing two extra guests for dinner. An image of a white woman with long brown hair flickers past. Traditional comprehensions of the mobile public space as the male and private domestic space as the female domain are strongly reinforced in this narrative.50 Besides this small virtual, textual, and thus disembodied appearance of a female character, no other women are introduced in the audiovisual preview of Volvo Concept 26. This observation stands in contrast to the narration of Tylman-Mikiewicz, who expresses Volvo’s intention of producing not something futuristic but something “you can recognize.” If we recognize it, it is as a heteronormative and racially homogeneous tradition of mobility and spatial arrangements.
While stereotypically gendered spaces and affordances are discernible in these visuals, it is questionable whether female drivers have much to associate with beyond the traditional distribution of gendered spaces and tasks. Similarly, the general manager emphasizes how “autonomous driving is much more of a people’s story rather than a technology’s story.” The notion of “people” presented in the promotional film seems to be exclusive to white male protagonists, given that not only the depicted driver but also the two narrating representatives of Volvo are male (and white). The depiction of Volvo’s vision of autonomous driving intends to show how the emerging technology transforms people’s lives “into something totally new.” However, while the “total newness” is attributed solely to the technology, the visualized agents, spaces, and affordances are a repetition of the old gendered notions of driving as masculine, and as a white mastery of both urban and wilderness-like areas.
Gender, Race, and Space in Discourses around Autonomous Automobility
Both of the explored audiovisual previews by Nissan and Volvo depict and describe the future automobility space as continuing to be predominantly masculine and associated with the dominant racial order. Although the arrival of autonomous technologies might allow for a rethinking of hegemonic technology, design, habit, and infrastructure, the current representations of them reinforce hegemonic masculinity and whiteness (including so-called honorary whiteness in the example of the Asian protagonist).51 The female body is mostly absent or visually delegated to the passenger seat and to the space of family or romantic partner, and matched with the race of the male driver. Although it is possible to imagine self-driving vehicles as “degendering (or deracializing) the driver,” these future visions stylistically, visually, and narratively counteract such notions by retrieving and underlining older gender distributions and racial perceptions.
Stretching across these visions of autonomous automobility and the contemporary journalistic and scholarly discourses surrounding them are several conceptions of what kind of space the emerging automotive technology creates and affords. Some of them are metaphors that have been circulating with the proliferation of the motor car in general and find themselves continued in the conversation around self-driving automobiles. Examples are the car as “safe space” and “sanctuary,” a space for communication and entertainment. In terms of gender, this emphasis on safety and harmony prompts notions of femininity rather than masculinity. Women are said to prefer bigger and safer cars, that is, “family models,” given their responsibilities as mothers and housewives.52 Men, however, would be drawn to “the ‘Top Gear’ fast sports car or the impractical ‘classic car.’”53 Second, the concept car visions also try to allay fears of the more dystopian imagery of the car as a physical trap in congested traffic, or notions of the car being a virtual glass house and an algorithmic target. These fears are related to both the militarized basis of technologies of securitization54 and the concerns over bias in algorithmic biometric technologies such as facial recognition,55 both of which suggest underlying raced and gendered processes inherent in the automation of automobility.
As an extension of the human body, the motor car is frequently discussed as providing an extra layer of skin, shielding off not only physical danger but also sensual distraction such as the smells and sounds of the street.56 The car affords “the experience of privatized movement to the sound of communication technologies in the automobile.”57 With the car and its safe and secluding environment, the driver can privatize and personalize the inhabited space.58
Michael Bull speaks of the “desire for accompanied solitude,”59 which the two analyzed concept cars directly fall back on. In most of the scenes, the white male driver is alone in the self-driving vehicle but nevertheless accompanied by the car as partner. Interestingly, the depicted drivers do not directly address or communicate with the vehicle beyond nonverbal gestures, stressing the notion of solitude. The “accompanied solitude” enhances the impression of the car not only as a safe space, since it is always alert and ready to assist the driver, but also as a sanctuary, since the driver is not obligated to talk to the artificially intelligent company. The Volvo Concept 26 preview, in particular, strongly emphasizes the vision of providing space and time to relax and create. Yet this “safe space” is also a space of surveillance, which may not be experienced in the same way by differently raced and gendered mobile subjects, who may find themselves targets of securitization.
Nissan’s descriptions also have recourse to previously observed qualities of the car as providing a secure space to dwell in.60 “The car’s [Nissan IDS Concept] bluish satin silver body color heightens the impression of a comfortable and secure cabin space,” and being inside is “like relaxing in a living room.” Moreover, Nissan’s design and features are to create “confidence” and “harmony”: a “natural, harmonious system of communication” between driver and car, as well as between car and outside actors, is to provide a space that is safe and stress-reducing for the car dweller.61 The unlikely depiction of almost empty streets and limitless space in these visual “imagined futures” likewise prompts notions of safety and harmony. While this first metaphor of the driverless car as sanctuary might already illuminate a slight turn toward seemingly female spatial values of the car interior rather than exclusively male desires, these threats of passivity are quickly counteracted by active narratives. The comfortable “living room” also assumes certain dominant subjectivities, for whom the work of maintenance, cleaning, and reproduction is performed by racialized and gendered subordinates, who here remain invisible.
Second, the autonomous automobile is clearly premediated as a “virtual hangout” affording multiple communicative and/or social activities both on- and offline. This notion too needs to be seen as a continuation and enhancement of previously integrated qualities into the car. Employing the “screen” as conceptual framework, Jeremy Packer and Kathleen F. Oswald argue that “what happens under the hood comes to be understood as information rather than sensation” as more and more communication and information technologies enter the automobile interior.62 The authors observe a “triple displacement” through screens: “First, the driver was distanced from the road environment; second, the driver displaced the vacuum of sensation with entertainment and advanced information systems; and third, these systems have developed into networks that work to displace the driver from … driving.”63 The gendered communication tools that now dominate the car’s interior (and possibly exterior, as visible in Nissan’s communicative exterior design) theoretically lead to a “degendering” of the male driver. With the automobile as extension of man, the amputation of this extension sets in as the medium has been extended out too far: the driver is not driving any more. The void of activity for the now passenger can be filled with increasing communicative actions associated with the female gender. Moreover, the “quiet” forms of media that are featured also downplay the desire for loud sound systems and boom boxes associated with minority ethnic drivers who personalize their cars with “excessive” sonic capabilities.64
Regarding scenarios that are more dystopian, the automobile space can also be discussed as a trap. While the mobile car and the immobile body inside already often find themselves stuck in heavy traffic and congested roads, this immobility and ultimate “incarceration” may increase with a larger number of not only driverless but even humanless vehicles on the road.65 This vision feeds into concerns that an explosive growth in vehicle trips will lock in the humans who are in fact riding in the cars.66 Projected congestion disasters speak to gendered perceptions of immobility, fixity, and passivity as feminine in contrast to mobile masculinity. The concept car films fend off any notion of congestion by showing extremely empty and open roads. They also promote highly personalized car ownership, avoiding any indications of the need for car sharing. In addition, these visions depict ease of parking for the self-driving cars of the future, and no substantial changes to the urban land use associated with traditional automobility.
Another spatial notion of the self-driving car concerns even darker visions of visibility and surveillance. In its necessary multiconnectivity, the formerly “semiprivate” vehicle turns into a “glass house.” In terms of virtual visibility, the emerging technology must be understood as fully “public” similar to social digital media and the Internet in general. Surveillance technologies are likely to make up a crucial component of automated vehicles. Carports already transmit such data for insurance reasons at the least, and possibly for more direct purposes of surveillance in the future. This quality leads to an exposure of passengers and their activities inside the car, dissolving the former protective cocoon of the car shell toward virtual visibility and vulnerability. Anonymity could turn into a hindrance to the vehicle’s functionality. In the Nissan IDS Concept, the car’s personal identification of the driver, possibly through biometric data, is presented as a charming (“Good morning, Hiro”) and secure feature. While securitization may be increased and criminal activities such as car theft may become a thing of the past, the driver will be subject to surveillance of an invisible third eye, threatening traditional rights to privacy.
This exposure to a virtual gaze again invokes connotations of the male gaze onto an objectified female body. While this virtual gaze is foremost algorithmic in its detection of deviant behavior, the respective code is still designed and implemented by a human, who, considering the lack of gender diversity in computer programming, is still more likely to be male and thus to extend the male gaze.67 Through overextension of the driver-car into an autonomous technology with surveillance capacities, the male body reverses into an overexposed feminized object of investigation. In contrast to the audiovisual previews of the masculine self-driving concept car, this vision again presents an automobile space that mediates femininity. Beyond this, though, biometric surveillance is a highly racialized technology, which has been used to differentiate “trusted travelers” with data-ready bodies from those (often minorities) who are suspect.68 Thus, from the point of view of the nonwhite, non-Asian driver, such imagery may provoke fear and insecurity more than a sense of safety and security. The glass house of a powerfully mediated car may be experienced as a panoptic or algorithmic site of surveillance of the racial or sexualized “other,” a scopic trap.
Last, and in close relation to the car as surveyed space, examples of the self-driving vehicle as target must be discussed. Although the hard physical shell of the car may protect from outside harm, the soft virtual space inside the car may invite cyber attacks. The driver’s disempowerment and the technical vulnerability of the “communico-automobile assemblage” raise issues regarding enactment, control, and exploitation of mobility: “If any automobile can be hacked and remotely controlled, all cars could become remotely controlled weapons,” state Packer and Oswald.69 Moreover, it may not be external forces that interfere with the man-machine partnership; it may be the vehicle’s internal, conceivably gendered and racialized algorithms that calculate which subjects to protect (e.g., white women) and which to harm (e.g., racialized “suspects”). This may result in the possible moral and ethical dilemma of an innocent driver being left to die to save others, or a “suspect” vehicle being brought to a halt under external control. This notion of the driver within the “safe space” of the vehicle turning into a “target” again relates to gendered and racial imaginations of mobility and immobility, activity and passivity, and women and minorities as targets for attack and surveillance in public.70 A closer analysis and critical discussion of race and racialized notions of “driving while black” seems crucial in this context,71 especially insofar as it relates to “driving while female,” and the absence of both subject positions in these imagined futures of driverless automobility. There is furthermore a double erasure of the black female mobile subject, reinforcing the invisibility of black women’s potential geographies of (auto)mobility and the potential violences inherent in the loss or devaluing of collective public spaces of transit.72
In this analysis of two advertising promotions for future automated cars, we have explored how designers and corporate promotions continue to evoke utopian spatial metaphors of the car as sanctuary and communicative environment while allaying or suppressing fears of dystopian metaphors of the vehicle as traffic trap, virtual glass house, and algorithmic target. In repositioning the white or honorary white male driver as central to their narratives, the two concept cars presented by Nissan and Volvo extend the male driver and embody white masculinity within hegemonic masculine spaces of automobility. In deconstructing the automated car as a safe space, physical trap, virtual glass house, and algorithmic target, we have attempted to address the feminine and nonhegemonic bodies and spaces of automobility that are subordinated or absented within these corporate imaginaries. If the automated car is envisioned as a safe and comfortable living room, one must ask for whom is it safe and with what risks or harms to others? If the automated car is envisioned as a powerful mediascape moving through a “smart” city and connected to a “smart” home, then one must ask which mobile and dwelling subjects might be at risk within such a “code/space” and how might its software support the transduction of hegemonic spatialities and subjectivities.73
The brief analysis calls for greater attention to the remediation of existing gender orders, racial orders, and power relations within the making of emerging (auto)mobile futures. The hypermediation of the autonomous automobile as hybrid extension of the white masculine subject points toward the question of what kind of “auto” is being envisioned and developed.74 Like the “auto” in contemporary automobility, the “self” in future “self”-driving mobility is both shaped and shaping, not only as a mode of transportation but also as a medium of communication. McLuhan remarks that once the extended organ has been amputated, it affects the human body and mind without our being aware of it. What have “drivers” given up to the self-driving car? We suggest that neither race nor gender has been given up, for there is no degendering of the driver. Gender and racial orders remain productive of the system of automobility of the future, and the system of automobility will remain productive of racialized and gendered orders.
What, then, has been amputated in the emerging self-driving car assemblage? Nigel Thrift observes that the car turns into “something akin to a Latourian delegate.”75 “First it has been made by humans; second, it substitutes for the actions of people and is a delegate that permanently occupies the position of a human; and, third, it shapes human action by prescribing back.”76 More than ever, the car is “prescribing back” to its increasingly symbolic driver. It begins to imbricate the human not only in the mechanical processes of driving and the regulatory processes of the highway code but now also in the mediated processes of mobile information and communication networks. Secured in the self-driving vehicle, the human “driver” will be assured in his dominance and masculinity by exercising a new “auto” mobility.
As designers, engineers, advertisers, journalists, and corporate publicists currently envision, depict, discuss, develop, and design future “self”-driving automobility, it is imperative for social scientists and the wider public to contemplate how these decisions concerning automobile spaces and bodies reproduce and remediate gendered and racialized social hierarchies and moral orders. Automated cars may eventually not only “drive” themselves and the people within them but will also shape us and our spaces and bodies in return. The car not only drives itself but also drives what kinds of selves-in-formation are possible, while hinting at the possible paths of resistance that it will most certainly elicit.
This article benefitted from its presentation at Drexel University’s STS Works-in-Progress Series in 2016 and the 14th Annual Conference of the International Association for the History of Transport, Traffic, and Mobility (T2M) in Mexico City in 2016. We thank the participants for their thoughtful responses and the editors and anonymous reviewers for their insightful comments.
Richard Grusin, Premediation (Basingstoke: Palgrave Macmillan, 2010).
Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan, “Autonomous Vehicles Need Experimental Ethics,” Cornell University Library, 12 October 2015, http://arxiv.org/abs/1510.03346; Newsdesk, “Driverless Cars Spell Trouble for Insurers,” Insurance Times, 25 November 2015, http://www.insurancetimes.co.uk/driverless-cars-spell-trouble-for-insurers-axa-underwriting-boss/1415994.article; Cat Zakrzewski, “Federal Officials Are Warming Up To Self-Driving Cars,” TechCrunch, 25 November 2015, http://techcrunch.com/2015/11/25/federal-officials-are-warming-up-to-self-driving-cars.
Sarah Kaplan, “What If Your Self-Driving Car Decides One Death Is Better Than Two—and That One Is You?,” Washington Post, 28 October 2015, https://www.washingtonpost.com/news/morning-mix/wp/2015/10/28/what-if-your-self-driving-car-decides-one-death-is-better-than-two-and-that-one-is-you; Dave Lee, “Google’s Driverless Car Is Brilliant but So Boring,” BBC News, 2 October 2015, http://www.bbc.com/news/technology-34423292; Patrick Lin, “The Robot Car of Tomorrow May Just Be Programmed to Hit You,” WIRED, 6 May 2014, http://www.wired.com/2014/05/the-robot-car-of-tomorrow-might-just-be-programmed-to-hit-you.
Ole B. Jensen and Malene Freudendal-Pedersen, “Utopias of Mobilities,” in Utopia: Social Theory and the Future, ed. Michael Hviid Jacobsen and Keith Tester (Aldershot: Ashgate, 2012), 197–217.
Mimi Sheller, “Automotive Emotions: Feeling the Car,” Theory, Culture and Society 21, nos. 4–5 (2004): 221–242, here 222.
Mimi Sheller and John Urry, “The City and the Car,” International Journal of Urban and Regional Research 24, no. 4 (2000): 737–757, here 739; Jeremy Packer, Mobility without Mayhem (Durham, NC: Duke University Press, 2008); Cotten Seiler, Republic of Drivers (Chicago: University of Chicago Press, 2009).
Packer, Mobility without Mayhem; Seiler, Republic of Drivers.
Seiler, Republic of Drivers; Jane Jacobs, The Death and Life of Great American Cities (New York: Vintage, 1961); Peter Newman and Jeffrey Kenworthy, Cities and Automobile Dependence (Aldershot: Gower, 1989).
Packer, Mobility without Mayhem; Seiler, Republic of Drivers.
Jason Henderson, “Secessionist Automobility: Racism, Anti-Urbanism, and the Politics of Automobility in Atlanta, Georgia,” International Journal of Urban and Regional Research 30, no. 2 (2006): 293–307; Sikivu Hutchinson, Imagining Transit: Race, Gender, and Transportation Politics in Los Angeles (New York: Peter Lang, 2003); Cotten Seiler, “‘So That We as a Race Might Have Something Authentic to Travel By’: African American Automobility and Cold-War Liberalism,” American Quarterly 58, no. 4 (2007): 1091–1117; Margaret Walsh, “Gendering Mobility: Women, Work and Automobility in the United States,” History 93, no. 311 (2008): 376–395; Mimi Sheller, “Racialized Mobility Transitions in Philadelphia: Urban Sustainability and the Problem of Transport Inequality,” City and Society 27, no. 1 (2015): 70–91.
Mimi Sheller, “The Emergence of New Cultures of Mobility: Stability, Openings, and Prospects,” in Automobility in Transition? A Socio-Technical Analysis of Sustainable Transport, ed. Frank W. Geels, René Kemp, Geoff Dudley, and Glenn Lyons (New York: Routledge, 2011), 180–202.
Newman and Kenworthy describe urban fabrics as “the material reality created by certain urban lifestyles and functions … shaped primarily by transportation infrastructure,” which supports either “walking urban fabric,” “transit urban fabric,” or “auto urban fabric.” Peter Newman and Jeffrey Kenworthy, The End of Automobile Dependence (Washington, DC: Island Press, 2015), 108.
Susan Bordo, “Feminism, Postmodernism and Gender-Scepticism,” in Feminism/ Postmodernism, ed. Linda J. Nicholson (London: Routledge, 1990), 133–156, here 152.
Linda McDowell, Gender, Identity and Place (Minneapolis: University of Minnesota Press, 1999), 248.
David Kronlid, “Mobility as Capability,” in Gendered Mobilities, ed. Tanu Priya Uteng and Tim Cresswell (Abingdon: Ashgate, 2008), 15–33, here 18; Georgine Clarsen, “Feminism and Gender,” in The Routledge Handbook of Mobilities, ed. Peter Adey, David Bissell, Kevin Hannam, Peter Merriman, and Mimi Sheller (London: Routledge, 2014), 94–102.
Judith Butler, Gender Trouble (New York: Routledge, 2006); McDowell, Gender, Identity and Place; Priya Uteng and Cresswell, Gendered Mobilities.
Priya Uteng and Cresswell, Gendered Mobilities, 2.
David Gartman, “Three Ages of the Automobile: Cultural Logics of the Car,” in Automobilities, ed. Mike Featherstone, Nigel Thrift, and John Urry (London: Sage, 2005), 169–195, here 183; Sheller and Urry, “The City and the Car.”
Colin Divall, “Transport History, the Usable Past and the Future of Mobility,” in Mobilities: New Perspectives on Transport and Society (Abingdon: Ashgate, 2012), 305–319; Gartman, “Three Ages of the Automobile,” 169–195; Priya Uteng and Cresswell, Gendered Mobilities; Gijs Mom, Atlantic Automobilism: Emergence and Persistence of the Car 1895–1940 (New York: Berghahn Books, 2015).
Seiler, Republic of Drivers; Priya Uteng and Cresswell, Gendered Mobilities; Annette Jerup Jørgensen, “The Culture of Automobility: How Interacting Drivers Relate to Legal Standards and to Each Other in Traffic,” in Priya Uteng and Cresswell, Gendered Mobilities, 99–111.
Kathleen Franz, “The Open Road,” in Technology and the African-American Experience: Needs and Opportunities for Study, ed. Bruce Sinclair (Cambridge, MA: MIT Press, 2004), 131–153; Seiler, “So That We as a Race.”
MANHATTAN, Kan. — Manure provides nutrients for farm fields and improves soil quality and tilth, but its use carries other implications as well.

Mark Risse, a University of Georgia professor, will give an overview of the science of manure to lead off the free monthly manure management Webcast from eXtension Oct. 17.
Risse is one of three university specialists who will discuss the impacts of manure application on runoff and soil erosion, the water-holding capacity of soils and the need for irrigation during the Web-based seminar, which is open to the public.
They will address manure from an organic farm-systems perspective and examine compost effects on soil quality. They also will cover the liming effect of some manures, including poultry litter; mineralization rates, including carbon mineralization; and salt content.
Washington State University researchers Craig Cogger and Ann-Marie Fortuna will join Risse. Cogger is a soil scientist with extensive experience in compost methods and the use of biosolids. His current research is on organic and sustainable cropping systems.
Fortuna is a faculty member in soil biology. Her research emphasis is to determine the role of organisms in plant nutrient acquisition and health, and to trace the fate of pathogens and beneficial organisms in soil.
The Friday, Oct. 17, session begins at 2:30 p.m. Eastern Daylight Time. The Webcasts are hosted by the Livestock and Poultry Environmental (LPE) Learning Center, an information resource developed by more than 150 experts from land-grant universities, agencies and other organizations. The center is part of the national eXtension interactive Web resource, which is customized with links to local Cooperative Extension Web sites. Kansas State University Research and Extension is part of eXtension.
The Webcast meeting room opens 15 minutes before the start time. Go
to http://www.extension.org/pages/Live_Webcast_Information to view.