Marble head of Athena: The so-called Athena Medici

Period: Mid-Imperial, Antonine period
Date: ca. A.D. 138–92
Culture: Roman
Medium: Marble
Dimensions: H.: 7 7/8 in. (20 cm)
Classification: Stone Sculpture
Credit Line: Rogers Fund, 2007
Accession Number: 2007.293

Only rarely is it possible to get an impression of the majesty and beauty of the statues produced in Athens during the mid-fifth century B.C., the High Classical period. This head is from a fine Roman copy of an overlifesize statue of the goddess Athena that has long been attributed to Pheidias, the most famous artist of that era. The marble face is modeled with extreme restraint and sensibility, imparting a powerful yet youthful radiance to the expression. The eyes were once inset with colored stones. The head retains part of the frontlet and neck guard of an Attic helmet that was originally completed in wood and gilded. This combination of marble and wood, whereby the drapery and attributes such as the helmet were worked in wood and gilded while the flesh parts were carved in marble, is known as the acrolithic technique. It imitated the appearance of immensely valuable gold and ivory statues such as the great Athena Parthenos that stood inside the Parthenon in Athens and the colossal seated statue of Zeus at Olympia.
Definition of necessary in English

Pronunciation: /ˈnɛsəs(ə)ri/

1 Needed to be done, achieved, or present; essential: they granted the necessary planning permission; it's not necessary for you to be here
More example sentences:
• We had always stayed out of each other's private business, prying only when we deemed it absolutely necessary.
• A jury of experts reviewed a draft of the survey and made changes where necessary.
• Often it's impossible for the architect of a company to make the changes necessary to ensure it survives.
Synonyms: obligatory, requisite, required, compulsory, mandatory, imperative, demanded, needed, called for, needful; essential, indispensable, vital, of the essence, incumbent; French de rigueur

2 Determined, existing, or happening by natural laws or predestination; inevitable: a necessary consequence
More example sentences:
• As much as I'd like to think that spying doesn't happen, it's going to happen as a necessary consequence of competition.
• The magazine, he says, is 'a necessary consequence of their superstar status'.
• He saw radical skepticism as a necessary consequence of the misery of the human condition.
Synonyms: inevitable, unavoidable, certain, sure, inescapable, inexorable, ineluctable, fated, destined, predetermined, predestined, preordained

2.1 Philosophy (Of a concept, statement, etc.) Inevitably resulting from the nature of things, so that the contrary is impossible.
Example sentences:
• Bacon and Locke had discussed the question of a necessary knowledge of nature from a scholastic standpoint.
• If it is shown that the opinion actually formed is not an opinion of this character, then the necessary opinion does not exist.
• There could be no solution, they claimed, until the mind first grasped the necessary idea.

2.2 Philosophy (Of an agent) Having no independent volition.
Example sentences:
• The sixth is that if a man were not a necessary agent he would be ignorant of morality and have no motive to practice it.
noun (usually necessaries)

1 The basic requirements of life, such as food and warmth: not merely luxuries, but also the common necessaries; poor people complaining for want of the necessaries of life
More example sentences:
• Shelter, medicine, basic schooling, and even necessaries like food and water were now being provided to the million upon million of starving adults, and their children.
• Others resort to exploitation, as in the case of an injured officer who, with the help of a resentful assistant, attempts to trade tobacco leaves with the retreating soldiers in exchange for food and other necessaries.
• Work was about to be resumed at the Emergency Kitchen for the relief of the sick poor of York, which was for the immediate relief of all poor persons who were ill, and too poor to afford the pressing necessaries their sufferings required.

1.1 (the necessary) informal The action or item needed: see when they need a tactful word of advice and do the necessary
More example sentences:
• Unable to get the human variety, the club has hired a team of four llamas to do the necessary.
• Of course, if she's busy reading or otherwise occupied, she sends the sprog in her place to do the necessary.
• Unafraid of blood and guts, I went with him to the top of the garden where he did the necessary.

1.2 (the necessary) British informal The money needed: a bag containing my wallet: the money, the necessary
More example sentences:
• Paul, a successful model with ambitions to run his own place, came up with the necessary.
Synonyms: money, cash, the wherewithal, funds, finances, capital, means, resources; informal dough, bread, loot, the ready, the readies; British informal dosh. See also money.

a necessary evil Something that is undesirable but must be accepted: for many, paying taxes is at best a necessary evil
More example sentences:
• In fact they enter the course regarding cryptography as a necessary evil that must be endured in order for them to obtain an Information Security qualification.
• If this was making inroads into the problem then many Americans would reluctantly accept this as a necessary evil.
• But if lights are occasionally necessary, they are a necessary evil.

Origin: Late Middle English: from Latin necessarius, from necesse 'be needful'.

For editors and proofreaders
Line breaks: ne¦ces|sary
Anthropic principle
From Wikipedia, the free encyclopedia

The anthropic principle (from Greek anthropos, meaning "human") is the philosophical consideration that observations of the Universe must be compatible with the conscious and sapient life that observes it. Some proponents of the anthropic principle reason that it explains why this universe has the age and the fundamental physical constants necessary to accommodate conscious life. As a result, they believe it is unremarkable that this universe has fundamental constants that happen to fall within the narrow range thought to be compatible with life.[1][2]

The strong anthropic principle (SAP), as explained by John D. Barrow and Frank Tipler, states that this is all the case because the universe is in some sense compelled to eventually have conscious and sapient life emerge within it. Some critics of the SAP argue in favor of a weak anthropic principle (WAP) similar to the one defined by Brandon Carter, which states that the universe's ostensible fine-tuning is the result of selection bias: only in a universe capable of eventually supporting life will there be living beings capable of observing and reflecting upon fine-tuning. Most often such arguments draw upon some notion of the multiverse to provide a statistical population of universes from which selection bias (our observance of only this universe, compatible with life) could occur.

Definition and basis

The term anthropic in "anthropic principle" has been argued[3] to be a misnomer.[4] While singling out our kind of carbon-based life, none of the finely tuned phenomena requires human life or some kind of carbon chauvinism.[5][6] Any form of life or any form of heavy atom, stone, star, or galaxy would do; nothing specifically human or anthropic is involved.
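The selection-bias reading of the WAP can be illustrated with a toy Monte Carlo sketch. Everything here is an illustrative assumption (the uniform prior over a single "constant" and the narrow life-permitting window are made up); the point is only that conditioning on the existence of an observer makes a life-compatible universe certain, exactly as the selection-bias argument states:

```python
import random

random.seed(0)

# Toy model: each "universe" is a single random constant drawn from a
# uniform prior. Observers arise only when the constant falls inside a
# narrow life-permitting window (both the prior and the window are
# arbitrary assumptions for illustration).
LIFE_WINDOW = (0.49, 0.51)

universes = [random.random() for _ in range(100_000)]
observed = [c for c in universes if LIFE_WINDOW[0] <= c <= LIFE_WINDOW[1]]

# Unconditionally, life-permitting universes are rare...
print(f"P(life-permitting) ~ {len(observed) / len(universes):.3f}")

# ...but conditioned on there being an observer at all, the probability
# is trivially 1: every observed universe is compatible with observers.
assert all(LIFE_WINDOW[0] <= c <= LIFE_WINDOW[1] for c in observed)
print("P(life-permitting | observed) = 1.0")
```

This is the sense in which, per Schmidhuber later in the article, the principle alone yields no nontrivial predictions: the conditional probability it asserts is 1 by construction.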
Anthropic coincidences

Main article: Fine-tuned Universe

In 1961, Robert Dicke noted that the age of the universe, as seen by living observers, cannot be random.[10] Instead, biological factors constrain the universe to be more or less in a "golden age," neither too young nor too old.[11] If the universe were one tenth as old as its present age, there would not have been sufficient time to build up appreciable levels of metallicity (levels of elements besides hydrogen and helium), especially carbon, by nucleosynthesis; small rocky planets did not yet exist. If the universe were ten times older than it actually is, most stars would be too old to remain on the main sequence and would have turned into white dwarfs, aside from the dimmest red dwarfs, and stable planetary systems would already have come to an end. Thus Dicke explained the coincidence between large dimensionless numbers constructed from the constants of physics and the age of the universe, a coincidence which had inspired Dirac's varying-G theory.

Dicke later reasoned that the density of matter in the universe must be almost exactly the critical density needed to prevent the Big Crunch (the "Dicke coincidences" argument). The most recent measurements suggest that the observed density of baryonic matter, together with theoretical predictions of the amount of dark matter, accounts for about 30% of this critical density, with the rest contributed by a cosmological constant. Steven Weinberg[12] gave an anthropic explanation for this fact: he noted that the cosmological constant has a remarkably low value, some 120 orders of magnitude smaller than the value particle physics predicts (this has been described as the "worst prediction in physics").[13] However, if the cosmological constant were only one order of magnitude larger than its observed value, the universe would suffer catastrophic inflation, which would preclude the formation of stars, and hence life.
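The critical density mentioned above can be checked with a back-of-the-envelope calculation. For a matter-dominated Friedmann universe it is ρ_c = 3H₀²/(8πG); the Hubble constant value of 70 km/s/Mpc below is an assumed illustrative figure, not one given in the article:

```python
import math

# Critical density rho_c = 3 H0^2 / (8 pi G): the density separating
# eventual recollapse (Big Crunch) from indefinite expansion.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.0857e22        # one megaparsec in metres
H0 = 70e3 / MPC        # assumed H0 of 70 km/s/Mpc, converted to s^-1

rho_c = 3 * H0**2 / (8 * math.pi * G)   # kg/m^3
print(f"critical density ~ {rho_c:.2e} kg/m^3")

# Equivalent to only a few hydrogen atoms per cubic metre:
m_p = 1.673e-27        # proton mass, kg
print(f"~ {rho_c / m_p:.1f} protons per m^3")
```

The result is of order 10⁻²⁶ kg/m³, i.e. a handful of protons per cubic metre, which is the scale against which the observed ~30% baryonic-plus-dark-matter fraction is measured.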
Roger Penrose explained the weak form as follows:

The argument can be used to explain why the conditions happen to be just right for the existence of (intelligent) life on the Earth at the present time. For if they were not just right, then we should not have found ourselves to be here now, but somewhere else, at some other appropriate time. This principle was used very effectively by Brandon Carter and Robert Dicke to resolve an issue that had puzzled physicists for a good many years. The issue concerned various striking numerical relations that are observed to hold between the physical constants (the gravitational constant, the mass of the proton, the age of the universe, etc.). A puzzling aspect of this was that some of the relations hold only at the present epoch in the Earth's history, so we appear, coincidentally, to be living at a very special time (give or take a few million years!). This was later explained, by Carter and Dicke, by the fact that this epoch coincided with the lifetime of what are called main-sequence stars, such as the Sun. At any other epoch, so the argument ran, there would be no intelligent life around in order to measure the physical constants in question — so the coincidence had to hold, simply because there would be intelligent life around only at the particular time that the coincidence did hold!
— The Emperor's New Mind, Chapter 10

Since Carter's 1973 paper, the term "anthropic principle" has been extended to cover a number of ideas that differ in important ways from those he espoused. Particular confusion was caused in 1986 by the book The Anthropic Cosmological Principle by John D. Barrow and Frank Tipler,[16] which distinguished between the "weak" and "strong" anthropic principles in a way very different from Carter's, as discussed in the next section. Carter was not the first to invoke some form of the anthropic principle.
In fact, the evolutionary biologist Alfred Russel Wallace anticipated the anthropic principle as long ago as 1904: "Such a vast and complex universe as that which we know exists around us, may have been absolutely required ... in order to produce a world that should be precisely adapted in every detail for the orderly development of life culminating in man."[17] In 1957, Robert Dicke wrote: "The age of the Universe 'now' is not random but conditioned by biological factors ... [changes in the values of the fundamental constants of physics] would preclude the existence of man to consider the problem."[18]

Weak anthropic principle (WAP) (Carter): "we must be prepared to take account of the fact that our location in the universe is necessarily privileged to the extent of being compatible with our existence as observers." Note that for Carter, "location" refers to our location in time as well as space.

Strong anthropic principle (SAP) (Carter): "the universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage. To paraphrase Descartes, cogito ergo mundus talis est."

In their 1986 book, The Anthropic Cosmological Principle, John Barrow and Frank Tipler depart from Carter and define the WAP and SAP in their own terms.[19][20] Unlike Carter, they restrict the principle to carbon-based life rather than just "observers." A more important difference is that they apply the WAP to the fundamental physical constants, such as the fine-structure constant, the number of spacetime dimensions, and the cosmological constant — topics that fall under Carter's SAP.
Barrow and Tipler believe that this is a valid conclusion from quantum mechanics, as John Archibald Wheeler has suggested, especially via his idea that information is the fundamental reality (see It from bit) and his Participatory Anthropic Principle (PAP), an interpretation of quantum mechanics associated with the ideas of John von Neumann and Eugene Wigner.

Modified anthropic principle (MAP) (Schmidhuber): The "problem" of existence is relevant only to a species capable of formulating the question. Before Homo sapiens' intellectual evolution reached the point where the nature of the observed universe, and humanity's place within it, spawned deep inquiry into its origins, the "problem" simply did not exist.[24]

The philosophers John Leslie[25] and Nick Bostrom[26] reject the Barrow and Tipler SAP as a fundamental misreading of Carter. For Bostrom, Carter's anthropic principle simply warns us to make allowance for anthropic bias, that is, the bias created by anthropic selection effects (which Bostrom calls "observation" selection effects): the necessity for observers to exist in order to get a result. He writes:

Many 'anthropic principles' are simply confused. Some, especially those drawing inspiration from Brandon Carter's seminal papers, are sound, but... they are too weak to do any real scientific work. In particular, I argue that existing methodology does not permit any observational consequences to be derived from contemporary cosmological theories, though these theories quite plainly can be and are being tested empirically by astronomers. What is needed to bridge this methodological gap is a more adequate formulation of how observation selection effects are to be taken into account.
— Anthropic Bias, Introduction[27]

According to Jürgen Schmidhuber, the anthropic principle essentially just says that the conditional probability of finding yourself in a universe compatible with your existence is always 1.
It does not allow for any additional nontrivial predictions, such as "gravity won't change tomorrow." To gain more predictive power, additional assumptions on the prior distribution of alternative universes are necessary.[24][28] Michael Frayn describes a form of this paradox: "It's this simple paradox. The Universe is very old and very large. Humankind, by comparison, is only a tiny disturbance in one small corner of it - and a very recent one. Yet the Universe is only very large and very old because we are here to say it is... And yet, of course, we all know perfectly well that it is what it is whether we are here or not."[29]

Character of anthropic reasoning

The anthropic idea that fundamental parameters are selected from a multitude of different possibilities (each actual in some universe or other) contrasts with the traditional hope of physicists for a theory of everything having no free parameters: as Einstein said, "What really interests me is whether God had any choice in the creation of the world." In 2002, proponents of the leading candidate for a "theory of everything", string theory, proclaimed "the end of the anthropic principle"[33] since there would be no free parameters to select. Ironically, string theory now seems to offer no hope of predicting fundamental parameters, and some who advocate it now invoke the anthropic principle as well (see below).

The modern form of a design argument is put forth by intelligent design. Proponents of intelligent design often cite the fine-tuning observations that (in part) preceded Carter's formulation of the anthropic principle as proof of an intelligent designer. Opponents of intelligent design are not limited to those who hypothesize that other universes exist; they may also argue, anti-anthropically, that the universe is less fine-tuned than often claimed, or that accepting fine-tuning as a brute fact is less astonishing than the idea of an intelligent creator.
Furthermore, even accepting fine-tuning, Sober (2005)[34] and Ikeda and Jefferys[35][36] argue that the anthropic principle as conventionally stated actually undermines intelligent design; see fine-tuned universe. One proposed explanation is intelligent design: a creator designed the Universe with the purpose of supporting complexity and the emergence of intelligence. Omitted here is Lee Smolin's model of cosmological natural selection, also known as "fecund universes," which proposes that universes have "offspring" that are more plentiful if they resemble our universe. Also see Gardner (2005).[37]

The anthropic principle, at least as Carter conceived it, can be applied on scales much smaller than the whole universe. For example, Carter (1983)[38] inverted the usual line of reasoning and pointed out that when interpreting the evolutionary record, one must take into account cosmological and astrophysical considerations. With this in mind, Carter concluded that, given the best estimates of the age of the universe, the evolutionary chain culminating in Homo sapiens probably admits only one or two low-probability links.
Antonio Feoli and Salvatore Rampone dispute this conclusion, arguing instead that the estimated size of our universe and the number of planets in it allow for a higher bound, so that there is no need to invoke intelligent design to explain evolution.[39]

Observational evidence

Philosopher John Leslie[40] states that the Carter SAP (with multiverse) predicts the following:
• Various theories for generating multiple universes will prove robust;
• Probabilistic predictions of parameter values can be made given:

One thing that would not count as evidence for the anthropic principle is evidence that the Earth or the solar system occupied a privileged position in the universe, in violation of the Copernican principle (for possible counterevidence to this principle, see Copernican principle), unless there was some reason to think that that position was a necessary condition for our existence as observers.

Applications of the principle

The nucleosynthesis of carbon-12

Fred Hoyle may have invoked anthropic reasoning to predict an astrophysical phenomenon. He is said to have reasoned, from the prevalence on Earth of life forms whose chemistry is based on carbon-12 atoms, that there must be an undiscovered resonance in the carbon-12 nucleus facilitating its synthesis in stellar interiors via the triple-alpha process. He then calculated the energy of this undiscovered resonance to be 7.6 million electron volts.[42][43] Willie Fowler's research group soon found this resonance, and its measured energy was close to Hoyle's prediction.
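The energy scale of Hoyle's prediction can be recovered from standard nuclear masses: three helium-4 nuclei outweigh a carbon-12 nucleus, and that mass excess sets the threshold above which a triple-alpha resonance must sit. A quick sketch (the atomic-mass values are standard published figures; the electron masses cancel because three He atoms and one C atom carry six electrons each):

```python
# Back-of-the-envelope check of Hoyle's ~7.6 MeV figure: the mass
# excess of three He-4 over C-12 is the minimum energy of any
# resonance feeding the triple-alpha process.
M_HE4 = 4.002602      # atomic mass of He-4, in u
M_C12 = 12.0          # C-12 defines the unified atomic mass unit
U_TO_MEV = 931.494    # energy equivalent of 1 u, in MeV

delta_e = (3 * M_HE4 - M_C12) * U_TO_MEV
print(f"3 x He-4 minus C-12: {delta_e:.2f} MeV")  # ~7.27 MeV

# A resonance useful in hot stellar cores must lie a few hundred keV
# above this threshold; the state later measured (the Hoyle state)
# sits at about 7.65 MeV, close to Hoyle's 7.6 MeV estimate.
```

The threshold comes out near 7.27 MeV, so a usable resonance slightly above it lands in the 7.6 MeV region Hoyle named.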
However, a recently released paper argues that Hoyle did not use anthropic reasoning to make this prediction.[44]

Cosmic inflation

Main article: Cosmic inflation

Don Page criticized the entire theory of cosmic inflation as follows.[45] He emphasized that the initial conditions that made possible a thermodynamic arrow of time in a universe with a Big Bang origin must include the assumption that at the initial singularity the entropy of the universe was low and therefore extremely improbable. Paul Davies rebutted this criticism by invoking an inflationary version of the anthropic principle.[46] While Davies accepted the premise that the initial state of the visible universe (which filled a microscopic amount of space before inflating) had to possess a very low entropy value — due to random quantum fluctuations — to account for the observed thermodynamic arrow of time, he deemed this fact an advantage for the theory. That the tiny patch of space from which our observable universe grew had to be extremely orderly, to allow the post-inflation universe to have an arrow of time, makes it unnecessary to adopt any "ad hoc" hypotheses about the initial entropy state, hypotheses which other Big Bang theories require.

String theory

Steven Weinberg[47] believes the anthropic principle may be appropriated by cosmologists committed to nontheism, and refers to it as a "turning point" in modern science because applying it to the string landscape "...may explain how the constants of nature that we observe can take values suitable for life without being fine-tuned by a benevolent creator." Others, most notably David Gross but also Lubos Motl, Peter Woit, and Lee Smolin, argue that this is not predictive. Max Tegmark,[48] Mario Livio, and Martin Rees[49] argue that only some aspects of a physical theory need be observable and/or testable for the theory to be accepted, and that many well-accepted theories are far from completely testable at present.
Properties of n+m-dimensional spacetimes

Main article: Spacetime

The Anthropic Cosmological Principle

Barrow and Tipler submit that the FAP (their "final anthropic principle") is both a valid physical statement and "closely connected with moral values." The FAP places strong constraints on the structure of the universe, constraints developed further in Tipler's The Physics of Immortality.[56] One such constraint is that the universe must end in a Big Crunch, which seems unlikely in view of the tentative conclusions drawn since 1998 about dark energy, based on observations of very distant supernovas. In his review[57] of Barrow and Tipler, Martin Gardner ridiculed the FAP by quoting the last two sentences of their book as defining a "Completely Ridiculous Anthropic Principle" (CRAP).

Criticisms

Carter has frequently regretted his own choice of the word "anthropic," because it conveys the misleading impression that the principle involves humans specifically, rather than intelligent observers in general.[59] Others[60] have criticised the word "principle" as being too grandiose to describe straightforward applications of selection effects. A common criticism of Carter's SAP is that it is an easy deus ex machina that discourages searches for physical explanations. To quote Penrose again: "it tends to be invoked by theorists whenever they do not have a good enough theory to explain the observed facts."[61]

Carter's SAP and Barrow and Tipler's WAP have been dismissed as truisms or trivial tautologies, that is, statements true solely by virtue of their logical form (the conclusion is identical to the premise) and not because a substantive claim is made and supported by observation of reality. As such, they are criticized as an elaborate way of saying "if things were different, they would be different," which is a valid statement but does not make a claim of one factual alternative over another.
Critics of the Barrow and Tipler SAP claim that it is neither testable nor falsifiable, and thus is not a scientific statement but rather a philosophical one. The same criticism has been leveled against the hypothesis of a multiverse, although some argue[who?] that it does make falsifiable predictions. A modified version of this criticism is that we understand so little about the emergence of life, especially intelligent life, that it is effectively impossible to calculate the number of observers in each universe. Also, the prior distribution of universes as a function of the fundamental constants is easily modified to get any desired result.[62]

Some applications of the anthropic principle have been criticized as an argument from lack of imagination, for tacitly assuming that carbon compounds and water are the only possible chemistry of life (sometimes called "carbon chauvinism"; see also alternative biochemistry).[66] The range of fundamental physical constants consistent with the evolution of carbon-based life may also be wider than advocates of a fine-tuned universe have argued.[67] For instance, Harnik et al.[68] propose a "weakless" universe in which the weak nuclear force is eliminated. They show that this has no significant effect on the other fundamental interactions, provided some adjustments are made in how those interactions work. However, if some of the fine-tuned details of our universe were violated, that would rule out complex structures of any kind: stars, planets, galaxies, etc.

Lee Smolin has offered a theory intended to answer this charge of lack of imagination. He puts forth his fecund universes theory, which assumes universes have "offspring" through the creation of black holes, whose offspring universes have values of physical constants that depend on those of the mother universe.[69][self-published source?][unreliable source?]
Some versions of the anthropic principle are interesting only if the range of physical constants that allow certain kinds of life is unlikely in a landscape of possible universes. Lee Smolin, however, assumes that the conditions for carbon-based life are similar to the conditions for black-hole creation, which would change the a priori distribution of universes such that universes containing life would be likely. In Smolin vs. Susskind: The Anthropic Principle,[70][self-published source?][unreliable source?] the string theorist Leonard Susskind disputes some of the assumptions in Smolin's theory, while Smolin defends it. The philosophers of cosmology John Earman,[71] Ernan McMullin,[72] and Jesús Mosterín contend that "in its weak version, the anthropic principle is a mere tautology, which does not allow us to explain anything or to predict anything that we did not already know. In its strong version, it is a gratuitous speculation".[73] A further criticism by Mosterín concerns the flawed "anthropic" inference from the assumption of an infinity of worlds to the existence of one like ours:

The suggestion that an infinity of objects characterized by certain numbers or properties implies the existence among them of objects with any combination of those numbers or characteristics [...] is mistaken. An infinity does not imply at all that any arrangement is present or repeated. [...] The assumption that all possible worlds are realized in an infinite universe is equivalent to the assertion that any infinite set of numbers contains all numbers (or at least all Gödel numbers of the [defining] sequences), which is obviously false.

References

1. ^ Anthropic Principle
2. ^ James Schombert, Department of Physics at University of Oregon
3. ^ Mosterín, J. (2005). "Anthropic Explanations in Cosmology". In Hajek, Valdés & Westerstahl (eds.), Proceedings of the 12th International Congress of Logic, Methodology and Philosophy of Science.
4. ^ "anthropic" means "of or pertaining to mankind or humans"
5. ^ The Anthropic Principle, Victor J. Stenger
6. ^ Anthropic Bias, Nick Bostrom, p. 6
7. ^ Merriam-Webster Online Dictionary
8. ^ The Strong Anthropic Principle and the Final Anthropic Principle
9. ^ On Knowing, Sagan, from Pale Blue Dot
10. ^ Dicke, R. H. (1961). "Dirac's Cosmology and Mach's Principle". Nature 192 (4801): 440–441. Bibcode:1961Natur.192..440D. doi:10.1038/192440a0.
11. ^ Davies, P. (2006). The Goldilocks Enigma. Allen Lane. ISBN 0-7139-9883-0.
12. ^ Weinberg, S. (1987). "Anthropic bound on the cosmological constant". Physical Review Letters 59 (22): 2607–2610. Bibcode:1987PhRvL..59.2607W. doi:10.1103/PhysRevLett.59.2607. PMID 10035596.
13. ^ New Scientist Space Blog: "Physicists debate the nature of space-time". New Scientist.
14. ^ "How Many Fundamental Constants Are There?" John Baez, mathematical physicist, U.C. Riverside, April 22, 2011.
15. ^ Carter, B. (1974). "Large Number Coincidences and the Anthropic Principle in Cosmology". IAU Symposium 63: Confrontation of Cosmological Theories with Observational Data. Dordrecht: Reidel. pp. 291–298. Republished in General Relativity and Gravitation (Nov. 2011), Vol. 43, Iss. 11, pp. 3225–3233, with an introduction by George Ellis (available on arXiv).
16. ^ Barrow, John D.; Tipler, Frank J. (1988). The Anthropic Cosmological Principle. Oxford University Press. ISBN 978-0-19-282147-8. LCCN 87028148.
17. ^ Wallace, A. R. (1904). Man's Place in the Universe: A Study of the Results of Scientific Research in Relation to the Unity or Plurality of Worlds (4th ed.). London: George Bell & Sons. pp. 256–7.
18. ^ Dicke, R. H. (1957). "Gravitation without a Principle of Equivalence". Reviews of Modern Physics 29 (3): 363–376. Bibcode:1957RvMP...29..363D. doi:10.1103/RevModPhys.29.363.
19. ^ Barrow, John D. (1983). "Anthropic Definitions". Quarterly Journal of the Royal Astronomical Society 24: 146–53. Bibcode:1983QJRAS..24..146B.
20. ^ Barrow & Tipler's definitions are quoted verbatim at Genesis of Eden Diversity Encyclopedia.
21. ^ Barrow and Tipler 1986: 16.
22. ^ Barrow and Tipler 1986: 21.
23. ^ Barrow and Tipler 1986: 22.
24. ^ a b Jürgen Schmidhuber, 2000, "Algorithmic theories of everything".
25. ^ Leslie, J. (1986). "Probabilistic Phase Transitions and the Anthropic Principle". Origin and Early History of the Universe: LIEGE 26. Knudsen. pp. 439–444.
26. ^ Bostrom, N. (2002). Anthropic Bias: Observation Selection Effects in Science and Philosophy. Routledge. ISBN 0-415-93858-9. 5 chapters available online.
27. ^ Bostrom, N. (2002), op. cit.
29. ^ Michael Frayn, The Human Touch. Faber & Faber. ISBN 0-571-23217-5.
30. ^ Collins, C. B.; Hawking, S. W. (1973). "Why is the universe isotropic?". Astrophysical Journal 180: 317–334. Bibcode:1973ApJ...180..317C. doi:10.1086/151965.
31. ^ Tegmark, M. (1998). "Is 'the theory of everything' merely the ultimate ensemble theory?". Annals of Physics 270: 1–51. arXiv:gr-qc/9704009. Bibcode:1998AnPhy.270....1T. doi:10.1006/aphy.1998.5855.
32. ^ Strictly speaking, the number of non-compact dimensions; see String theory.
33. ^ Kane, Gordon L.; Perry, Malcolm J. & Zytkow, Anna N. (2002). "The Beginning of the End of the Anthropic Principle". New Astronomy 7: 45–53. arXiv:astro-ph/0001197. Bibcode:2002NewA....7...45K. doi:10.1016/S1384-1076(01)00088-4.
34. ^ Sober, Elliott (2005). "The Design Argument". In Mann, W. E. (ed.), The Blackwell Guide to the Philosophy of Religion. Blackwell Publishers. Archived September 3, 2011, at the Wayback Machine.
35. ^ Ikeda, M. and Jefferys, W., "The Anthropic Principle Does Not Support Supernaturalism". In The Improbability of God, Michael Martin and Ricki Monnier (eds.), pp. 150–166. Amherst, N.Y.: Prometheus Press. ISBN 1-59102-381-5.
37. ^ Gardner, James N. (2005). "The Physical Constants as Biosignature: An anthropic retrodiction of the Selfish Biocosm Hypothesis". International Journal of Astrobiology.
38. ^ Carter, B.; McCrea, W. H. (1983). "The anthropic principle and its implications for biological evolution". Philosophical Transactions of the Royal Society A 310 (1512): 347–363. Bibcode:1983RSPTA.310..347C. doi:10.1098/rsta.1983.0096.
39. ^ Feoli, A. & Rampone, S. (1999). "Is the Strong Anthropic Principle too weak?". Nuovo Cimento B 114: 281–289. arXiv:gr-qc/9812093. Bibcode:1999NCimB.114..281F.
40. ^ Leslie, J. (1986), op. cit.
41. ^ Hogan, Craig (2000). "Why is the universe just so?". Reviews of Modern Physics 72 (4): 1149–1161. arXiv:astro-ph/9909295. Bibcode:2000RvMP...72.1149H. doi:10.1103/RevModPhys.72.1149.
42. ^ University of Birmingham, "Life, Bent Chains and the Anthropic Principle". Archived September 27, 2009, at the Wayback Machine.
43. ^ Reviews of Modern Physics 29 (1957) 547.
44. ^ Kragh, Helge (2010). "When is a prediction anthropic? Fred Hoyle and the 7.65 MeV carbon resonance".
46. ^ Davies, P. C. W. (1984). "Inflation to the universe and time asymmetry". Nature 312 (5994): 524. Bibcode:1984Natur.312..524D. doi:10.1038/312524a0.
47. ^ Weinberg, S. (2007). "Living in the multiverse". In B. Carr (ed.), Universe or Multiverse?. Cambridge University Press. ISBN 0-521-84841-5. Preprint.
48. ^ Tegmark (1998), op. cit.
49. ^ Livio, M. & Rees, M. J. (2005). "Anthropic reasoning". Science 309 (5737): 1022–3. Bibcode:2005Sci...309.1022L. doi:10.1126/science.1111446. PMID 16099967.
54. ^ Hicks, L. E. (1883). A Critique of Design Arguments. New York: Scribner's.
55. ^ Barrow and Tipler 1986: 23.
56. ^ Tipler, F. J. (1994). The Physics of Immortality. Doubleday. ISBN 0-385-46798-2.
57. ^ Gardner, M., "WAP, SAP, PAP, and FAP". The New York Review of Books 23, No. 8 (May 8, 1986): 22–25.
58. ^ Barrow and Tipler 1986: 677.
59. ^ e.g. Carter (2004), op. cit.
60. ^ e.g. message from Martin Rees presented at the Kavli-CERCA conference (see video in External links).
61. ^ Penrose, R. (1989). The Emperor's New Mind. Oxford University Press. ISBN 0-19-851973-7. Chapter 10.
62. ^ Starkman, G. D.; Trotta, R. (2006). "Why Anthropic Reasoning Cannot Predict Λ". Physical Review Letters 97 (20): 201301. arXiv:astro-ph/0607227. Bibcode:2006PhRvL..97t1301S. doi:10.1103/PhysRevLett.97.201301. PMID 17155671. See also this news story.
63. ^ Gould, Stephen Jay (1998). "Clear Thinking in the Sciences". Lectures at Harvard University.
64. ^ Shermer, Michael (2002). Why People Believe Weird Things: Pseudoscience, Superstition, and Other Confusions of Our Time. ISBN 0-7167-3090-1.
65. ^ Shermer, Michael (2007). Why Darwin Matters. ISBN 0-8050-8121-6.
66. ^ e.g. Carr, B. J.; Rees, M. J. (1979). "The anthropic principle and the structure of the physical world". Nature 278 (5705): 605–612. Bibcode:1979Natur.278..605C. doi:10.1038/278605a0.
67. ^ Stenger, Victor J. (2000). Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Prometheus Books. ISBN 1-57392-859-3.
68. ^ Harnik, R.; Kribs, G.; Perez, G. (2006). "A Universe without Weak Interactions". Physical Review D 74 (3): 035006. arXiv:hep-ph/0604027. Bibcode:2006PhRvD..74c5006H. doi:10.1103/PhysRevD.74.035006.
69. ^ Lee Smolin (2001). In Tyson, Neil deGrasse; Soter, Steve (eds.), Cosmic Horizons: Astronomy at the Cutting Edge. The New Press. pp. 148–152. ISBN 978-1-56584-602-9.
70. ^ Smolin vs. Susskind: The Anthropic Principle
71. ^ Earman, John (1987). "The SAP also rises: A critical examination of the anthropic principle". American Philosophical Quarterly 24: 307–317.
73. ^ Mosterín, Jesús (2005), op. cit.
Blender - wrapping a shape around a cylinder?
Discussion in 'Software and Applications' started by StuffBySteve, Mar 30, 2010.

1. StuffBySteve: If you have a solid shape that looks something like the attached picture, is there a tool in Blender to wrap it into a cylinder, so the two end faces meet?
[Attached Files: image]

2. sublimate: The closest thing I can think of is Shift-W in edit mode. You want to be looking at it from the top, and the cursor will be the center of the circle it will wrap to.

3. iguffick: Another way of doing this is to use a Curve modifier. You can then move/change the curve or object interactively to get exactly what you want.

4. EricFinley: The Curve modifier is definitely the way to go. Quick warning, though: don't try to use a "Bezier Circle" as your curve. It has to have two ends. A circle with a tiny piece cut out can work, or you can get there starting from the default Bezier Curve object. Either way works.

5. stuartar: In Top View, make a Circle with 16 vertices (make sure Fill is enabled). Select only the inside edges (spokes) of the circle and press Subdivide once. Now delete all the inner faces, so you end up with a donut shape. Alt-select the inside edge of the circle, and scale up to the desired thickness. Now select alternate pairs of edges (spokes) around your wheel, so you end up with 4 pairs selected and 4 pairs not selected. Go into Front View, and move the selected edges down in the Z direction by the amount you want. Select all faces, and extrude down to the desired thickness. All done in less than a minute. Hope it helps!

6. randomhuman: I used a Bezier Circle for a curve modifier before and it works fine as long as the circle is sized correctly.
You can modify the diameter of the circle to change the number of times the object is wrapped around it, while the distance between the circle and the object determines the diameter of the circle the object wraps to. If you rotate the circle relative to the object, you get a spring effect. It takes a bit of practise to get it to do what you want (some things are a bit backwards), but it works quite well when you get used to it.

7. Tommy_2Tall: Hi SteveLikesCubes! I would go with Shift+W (also found in the menu "Mesh"/"Transform"/"Warp", if I recall correctly). It doesn't involve curves or object modifiers, so it's less complicated. View the mesh in the top view (if that's the view you want it to look like a circle in). Drag the mesh so that it's at the right distance from the 3D cursor (if the closest side is X mm from the cursor, the ring will have an inner radius of X mm). Press Shift+W and drag the mouse cursor to see the effect (you can type in 360 and hit Enter for a full 360 degrees straight away).
Last edited: Aug 11, 2010
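For intuition, the geometry behind the Warp tool can be sketched in plain Python. This is a rough illustration of the mapping, not Blender's actual code; `warp_point` and its parameters are invented for this example:

```python
import math

def warp_point(x, d, strip_length, angle_deg=360.0):
    """Map a point on a flat strip to its warped position.

    x: distance along the strip (0 .. strip_length)
    d: the point's distance from the warp center (the 3D cursor);
       this becomes its radius after warping
    angle_deg: total angle the strip is bent through
    """
    theta = math.radians(angle_deg) * (x / strip_length)
    return (d * math.cos(theta), d * math.sin(theta))

# For the two ends of the strip to meet after a full 360-degree warp,
# the strip's length must equal the circumference 2*pi*r at its radius:
radius = 10.0
length = 2 * math.pi * radius
start = warp_point(0.0, radius, length)
end = warp_point(length, radius, length)
# start and end coincide, up to floating-point error
```

This is also why the cursor distance matters in the tip above: every vertex keeps its distance from the 3D cursor as its radius after warping, so the near edge of the mesh sets the ring's inner radius.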
Hurricane Sandy’s Impact on Fish and Wildlife

Hurricane Sandy made landfall on the East Coast this week and, due to its unusual west-turning track, came ashore midway in the eastern “Megalopolis” with its 65 million people. Virginia and Maryland were drenched and pummeled; New York and New Jersey were flooded and smashed. Human impact is the main concern for so many, but what happens to fish and wildlife during such major storms? After Hurricane Irene devastated the East Coast in August 2011, we wrote a synopsis of the ways species are affected by major storms coming ashore and some things you can do to help them. Here is an updated “Sandy” version of that blog post.

Scattered to the Winds

[Photo: Seagoing northern gannet. Credit: U.S. FWS]

The powerful winds from Sandy have blown many sea birds inland, and this will cause them to end up in unusual places, sometimes hundreds of miles away from their home habitat. Species of birds such as gannets, gulls and petrels are often picked up by hurricane-force winds and are pushed far distances with little ability to resist. In 2010, a North Carolina brown pelican was found on the roof of a night club in Halifax, Nova Scotia after a major storm. With Sandy, most of the fall migration is over for the year, but there are still some birds such as scoters and cormorants making their way to warmer waters and weather. And sometimes younger or weaker birds become separated from their flock; many can take days and weeks to return home. Sea birds and waterfowl are most exposed in hurricanes. Songbirds and smaller woodland birds, by contrast, have less difficulty. They are specially adapted to hold on, lay low and ride things out. In very strong winds, their toes automatically tighten around their perch. This holds them in place during high winds or when they sleep. Woodpeckers and other cavity nesters will, barring the destruction of the tree itself, ride out storms in tree holes.
Shorebirds, such as sandpipers, often move to inland areas. In a unique effect of cyclonic hurricanes, the eye of the storm, with its fast-moving walls of intense wind, can form a massive “bird cage,” holding birds inside the eye until the storm dissipates. It is often the eye of the storm that displaces birds, more than its strong winds. Sandy’s eye was less well-defined than those of other hurricanes.

Flattened Forests

The “tree toll” of Sandy has not yet been tallied, but in 1992 Hurricane Andrew generated incredible wind velocities onshore and knocked down as many as 80 percent of the trees in several coastal Louisiana basins, such as the Atchafalaya. Tree loss during Hurricane Katrina in 2005 caused even more extensive damage. Loss of coastal forests and trees can be devastating to dependent wildlife species and migratory species. Many wildlife species have very specialized niches in these forests, and specific foods can disappear too. High winds will often strip fruits, seeds and berries from bushes and trees.

Want to help? Donate to NWF through CrowdRise and Craig Newmark will match your donations up to $25,000.

Dune and Beach Loss

Sandy has clearly been tough on the Mid-Atlantic’s sand shoreline. Storm surges, wave action, and winds cause beach and dune erosion, and that can severely affect wildlife species. Many wildlife species live in ecological niches in the sandy areas and dunes of coastal barrier islands. In some cases the storm can cause a beach area to fully disappear. Sea turtle nests, for example, are dug right into the beach and can be washed out, or a water surge, called a “wash over,” can submerge these nests or nearby tern and plover nesting areas.

Saltwater in Freshwater Areas

The sustained and powerful winds of a hurricane will cause salty ocean water to pile up and surge onshore. Sandy pushed water into lower Manhattan, and that has gathered most of the headlines, but coastal marshes and bays can literally be poisoned by too much salt.
These “storm surges” can be huge. Hurricane Irene’s surges, in 2011, brought water levels that were as much as 8 feet above normal high tide, and Sandy’s peaked between 10 and 13 feet. Katrina, in 2005, pushed a 30-foot-high surge onto the coast. In addition to the physical damage this causes, the salt contained in sea water dramatically shifts the delicate balance of freshwater and brackish wetland areas such as those in the Chesapeake Bay and along the Atlantic Coast. Creatures and vegetation that are less salt-tolerant will be harmed, and many will not survive the influx of sea water. Marsh grasses, crabs, minnows, fish hatchlings, insects, and myriad creatures of freshwater and estuarine environments are harmed by a surge. The salt water intrusion in some of these areas does not drain off very quickly and can even harm or kill off bottomland forests and other coastal trees.

Massive Flooding of Rivers, Bays and Wetlands

The reverse is true too. The heavy rains generated by hurricanes will dump water in coastal area river basins (called watersheds), and this, in turn, can send vast amounts of fresh water surging downstream into coastal bays and estuaries. This upsets the delicate and finely tuned freshwater/saltwater balance that can be so vital for the health of these ecosystems. In 1972, Hurricane Agnes sent massive amounts of freshwater into the Chesapeake Bay. A similar thing is happening with water from Sandy’s eight to 10 inches of rainfall. The normally brackish (partially salty) water of the Bay was fresh for months following Agnes, placing great pressure on the species living there.

Dark, Muddy Water

Violent Waters Everywhere

Climate Change

The prognosis for wildlife surviving hurricanes can be hard to assess. There are many success stories and also accounts of major devastation. The question remains, however, whether wild creatures will, like humans, be experiencing more catastrophic hurricanes in the future.
Amanda Staudt, NWF’s climate scientist, posted a piece at Wildlife Promise a couple of days ago that looks at how continued warming through climate change may be fueling major hurricanes and may have been a factor with Sandy.

What Can You Do?

In addition, be wildlife friendly during this election and demand action on climate change. Urge our candidates to tell us their plans to address climate change now.
From: Robby Findler (robby at cs.uchicago.edu)
Date: Thu May 24 18:09:14 EDT 2007

Uhh.. you don't need macros to curry functions!

A curried function is a function that accepts two (or more) arguments, but does it in a strange way. That is, it is really a function that accepts one argument but then returns a new function that accepts the second argument. This is a canonical example:

;; two-arg-adder : number number -> number
;; uncurried version
(define two-arg-adder
  (lambda (x y)
    (+ x y)))

;; curried-adder : number -> (number -> number)
;; curried version
(define curried-adder
  (lambda (x)
    (lambda (y)
      (+ x y))))

To call them, you'd write this:

(two-arg-adder 1 2)    ; => 3
((curried-adder 1) 2)  ; => 3

The advantage of the curried version is that you don't have to supply all the arguments at once. Eg:

(define add3 (curried-adder 3))
(add3 4)  ; => 7
(add3 11) ; => 14

This can be useful in conjunction with Scheme's "loop"s, like map:

(map (curried-adder 4) (list 1 2 3)) ; => (list 5 6 7)

Also, you can use the define syntax to define the two functions above:

(define ((curried-adder x) y) (+ x y))

This definition is semantically identical to the one above. In some languages (eg SML) it is impossible to write a two argument function. All functions take a single argument, and they use currying to supply multiple arguments. Maybe that's where the uninformed macro comment comes from. I don't know.

Posted on the users mailing list.
Dietary habits appear to be basically the same among the Khmer and other ethnic groups, although the Muslim Cham do not eat pork. The basic foods are rice--in several varieties--fish, and vegetables, especially trakuon (water convolvulus). Rice may be less thoroughly milled than it is in many other rice-eating countries, and consequently it contains more vitamins and roughage. The average rice consumption per person per day before 1970 was almost one-half kilogram. Fermented fish in the form of sauce or of paste are important protein supplements to the diet. Hot peppers, lemon grass, mint, and ginger add flavor to many Khmer dishes; sugar is added to many foods. Several kinds of noodles are eaten. The basic diet is supplemented by vegetables and by fruits--bananas, mangoes, papayas, rambutan, and palm fruit--both wild and cultivated, which grow abundantly throughout the country. Beef, pork, poultry, and eggs are added to meals on special occasions, or, if the family can afford it, daily. In the cities, the diet has been affected by many Western items of food. French, Chinese, Vietnamese, and Indian cuisine were available in Phnom Penh in pre-Khmer Rouge days. Rural Khmer typically eat several times a day; the first meal consists of a piece of fruit or cake, which workers eat after arriving at the fields. The first full meal is at about 9:00 or 10:00 in the morning; it is prepared by the wife or daughter and brought to the man in the field. Workers eat a large meal at about noon in the field and then have supper with their families after returning home around 5:00 P.M. Before the early 1970s, the Cambodian people produced a food supply that provided an adequate diet, although children gave evidence of caloric underconsumption and of a deficiency in B vitamins.
During the Khmer Rouge era, malnutrition increased, especially among the people who were identified as "new people" by the authorities (see Society under the Angkar, ch. 1). Collective meals were introduced by 1977. Food rations for the new people were meager. Refugees' statements contain the following descriptions: "[daily rations of] a tin of boiled rice a day mixed with...sauce"; "we ate twice a day, boiled soup and rice only"; "one tin of rice a day shared between three people. Never any meat or fruit"; "Ration was two tins of rice between four persons per day with fish sauce." People were reduced to eating anything they could find--insects, small mammals, arachnids, crabs, and plants. The food situation improved under the PRK, although in the regime's early years there were still serious food shortages. International food donations improved the situation somewhat. In 1980 monthly rice rations distributed by the government averaged only one to two kilograms per person. People supplemented the ration by growing secondary crops such as corn and potatoes, by fishing, by gathering fruit and vegetables, and by collecting crabs and other edible animals. A 1984 estimate reported that as many as 50 percent of all young people in Cambodia were undernourished. Data as of December 1987
By Suzanne York A new study by the Potsdam Institute for Climate Impact Research, published in the journal Proceedings of the National Academy of Sciences, found that climate change is likely to put 40 percent more people worldwide at risk of absolute water scarcity, due to changes in rainfall and evaporation. Unsurprisingly, the study noted that “Expected future population changes will, in many countries as well as globally, increase the pressure on available water resources.” With a mid-range United Nations projection of 9.6 billion people by 2050, how countries around the world manage water resources is becoming more critical with each passing day. And a changing climate is likely to play havoc with even the best laid plans. Today, between one and two people out of 100 live in countries with absolute water scarcity (defined as less than 500 cubic meters of water available per year and per person). On average, each person consumes about 1,200 cubic meters of water each year, and even more in industrialized countries. Yet the impacts of continued population growth and increasing climate changes could bring the ratio of people living in countries with absolute water scarcity up to about 10 in 100 people. The Mediterranean, Middle East, southern U.S. and southern China could experience “a pronounced decrease of available water;” southern India, western China and parts of eastern Africa could have an increase. The authors of the study found that “This dwindling per-capita water availability is likely to pose major challenges for societies to adapt their water use and management.” Just last month, the World Resources Institute (WRI) released the results from its Aqueduct water project in which it found that 37 countries face “extremely high” levels of baseline water stress. This means that more than 80 percent of the water available to agricultural, domestic and industrial users is withdrawn annually—leaving businesses, farms and communities vulnerable to scarcity. 
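The two thresholds quoted above reduce to simple ratios. Here is a toy check in Python (the numbers plugged in are illustrative only, not data from the study or from WRI):

```python
ABSOLUTE_SCARCITY_M3 = 500.0  # cubic meters of water per person per year

def is_absolutely_scarce(available_m3_per_person_per_year):
    """Absolute water scarcity as defined in the study cited above."""
    return available_m3_per_person_per_year < ABSOLUTE_SCARCITY_M3

def baseline_water_stress(annual_withdrawals, annual_available):
    """WRI-style stress ratio: fraction of available water withdrawn per year."""
    return annual_withdrawals / annual_available

# A country withdrawing 85 of every 100 available units falls in WRI's
# "extremely high" stress band (more than 80 percent withdrawn):
stress = baseline_water_stress(85.0, 100.0)
extremely_high = stress > 0.80

# Well below the 500 cubic-meter threshold:
scarce = is_absolutely_scarce(450.0)
```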
According to WRI, greater conservation and more efficient water systems (especially for industrial agriculture) will help. Also, incorporating traditional and indigenous methods of water storage and usage that are applicable to each community and/or region will be needed. But what is most needed is global action on climate change to reduce global greenhouse gas emissions and thereby put the world on a path toward a more sustainable future. There is too much at stake, and water is too precious a resource not to implement policies to help countries, communities and families adapt to the coming changes.
Practice Reading Vowel Diphthongs: ew
Standards: RF.2.3.b

Words containing tricky diphthongs, or vowel groups that create a unique sound, are often stumbling blocks for beginning readers. Give your second grader practice reading and decoding ew words with this worksheet, in which she'll choose the word that completes each sentence.
11.3 Making Index Entries

Concept index entries consist of text. The best way to write an index is to devise entries which are terse yet clear. If you can do this, the index usually looks better if the entries are written just as they would appear in the middle of a sentence, that is, capitalizing only proper names and acronyms that always call for uppercase letters. This is the case convention we use in most GNU manuals’ indices. If you don’t see how to make an entry terse yet clear, make it longer and clear—not terse and confusing. If many of the entries are several words long, the index may look better if you use a different convention: to capitalize the first word of each entry. Whichever case convention you use, use it consistently. In any event, do not ever capitalize a case-sensitive name such as a C or Lisp function name or a shell command; that would be a spelling error. Entries in indices other than the concept index are symbol names in programming languages, or program names; these names are usually case-sensitive, so likewise use upper- and lowercase as required. It is a good idea to make index entries unique wherever feasible. That way, people using the printed output or online completion of index entries don’t see undifferentiated lists. Consider this an opportunity to make otherwise-identical index entries be more specific, so readers can more easily find the exact place they are looking for. When you are making index entries, it is good practice to think of the different ways people may look for something. Different people do not think of the same words when they look something up. A helpful index will have items indexed under all the different words that people may use. For example, one reader may think it obvious that the two-letter names for indices should be listed under “Indices, two-letter names”, since “Indices” is the general concept.
But another reader may remember the specific concept of two-letter names and search for the entry listed as “Two letter names for indices”. A good index will have both entries and will help both readers. Like typesetting, the construction of an index is a skilled art, the subtleties of which may not be appreciated until you need to do it yourself.
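In Texinfo source, the double-entry advice above would look something like the following sketch, using the `@cindex` command that creates concept-index entries (the entry text mirrors the example phrasings in this section):

```texinfo
@cindex Indices, two-letter names
@cindex Two-letter names for indices
```

Both entries point to the same place in the manual, so a reader who looks under either phrasing finds it.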
The Strategic Importance of Keystone XL

Obama's decision on the Keystone XL pipeline could make or break the future of tar-sands oil. This story first appeared on the TomDispatch website.

Presidential decisions often turn out to be far less significant than imagined, but every now and then what a president decides actually determines how the world turns. Such is the case with the Keystone XL pipeline, which, if built, is slated to bring some of the "dirtiest," carbon-rich oil on the planet from Alberta, Canada, to refineries on the US Gulf Coast. In the near future, President Obama is expected to give its construction a definitive thumbs up or thumbs down, and the decision he makes could prove far more important than anyone imagines. It could determine the fate of the Canadian tar-sands industry and, with it, the future well-being of the planet. If that sounds overly dramatic, let me explain. Ever since the president postponed the decision on whether to proceed, powerful forces in the energy industry and government have been mobilizing to press ever harder for its approval. Its supporters argue vociferously that the pipeline will bring jobs to America and enhance the nation's "energy security" by lessening its reliance on Middle Eastern oil suppliers. Their true aim, however, is far simpler: to save the tar-sands industry (and many billions of dollars in US investments) from possible disaster. Opponents of Keystone XL, who are planning a mass demonstration at the White House on February 17th, have also come to view the pipeline battle in epic terms. "Alberta's tar sands are the continent's biggest carbon bomb," Bill McKibben wrote at TomDispatch. "If you could burn all the oil in those tar sands, you'd run the atmosphere's concentration of carbon dioxide from its current 390 parts per million (enough to cause the climate havoc we're currently seeing) to nearly 600 parts per million, which would mean if not hell, then at least a world with a similar temperature."
Halting Keystone would not by itself prevent those high concentrations, he argued, but would impede the production of tar sands, stop that "carbon bomb" from further heating the atmosphere, and create space for a transition to renewables. "Stopping Keystone will buy time," he says, "and hopefully that time will be used for the planet to come to its senses around climate change."

A Pipeline With Nowhere to Go?

Why has the fight over a pipeline, which, if completed, would provide only 4% of the US petroleum supply, assumed such strategic significance? As in any major conflict, the answer lies in three factors: logistics, geography, and timing. Start with logistics and consider the tar sands themselves or, as the industry and its supporters in government prefer to call them, "oil sands." Neither tar nor oil, the substance in question is a sludge-like mixture of sand, clay, water, and bitumen (a degraded, carbon-rich form of petroleum). Alberta has a colossal supply of the stuff—at least a trillion barrels in known reserves, or the equivalent of all the conventional oil burned by humans since the onset of commercial drilling in 1859. Even if you count only the reserves that are deemed extractible by existing technology, its tar sands reportedly are the equivalent of 170 billion barrels of conventional petroleum—more than the reserves of any nation except Saudi Arabia and Venezuela. The availability of so much untapped energy in a country like Canada, which is private-enterprise-friendly and where the political dangers are few, has been a magnet for major international energy firms. Not surprisingly, many of them, including ExxonMobil, Chevron, ConocoPhillips, and Royal Dutch Shell, have invested heavily in tar-sands operations. Tar sands, however, bear little resemblance to the conventional oil fields which these companies have long exploited.
They must be treated in various energy-intensive ways to be converted into a transportable liquid and then processed even further into usable products. Some tar sands can be strip-mined like coal and then "upgraded" through chemical processing into a synthetic crude oil—SCO, or "syncrude." Alternatively, the bitumen can be pumped from the ground after the sands are exposed to steam, which liquefies the bitumen and allows its extraction with conventional oil pumps. The latter process, known as steam-assisted gravity drainage (SAGD), produces a heavy crude oil. It must, in turn, be diluted with lighter crudes for transportation by pipeline to specialized refineries equipped to process such oil, most of which are located on the Gulf Coast.

The Pipelines That Weren't

Like an army bottled up geographically and increasingly at the mercy of enemy forces, the tar-sands producers see the completion of Keystone XL as their sole realistic escape route to survival. "Our biggest problem is that Alberta is landlocked," the province's finance minister Doug Horner said in January. "In fact, of the world's major oil-producing jurisdictions, Alberta is the only one with no direct access to the ocean. And until we solve this problem... the [price] differential will remain large." A presidential thumbs-down and resulting failure to build Keystone XL, however, could have lasting and severe consequences for tar-sands production. After all, no other export link is likely to be completed in the near-term. The other three most widely discussed options—the Northern Gateway pipeline to Kitimat, British Columbia, an expansion of the existing Trans Mountain pipeline to Vancouver, British Columbia, and a plan to use existing, conventional-oil conduits to carry tar-sands oil across Quebec, Vermont, and New Hampshire to Portland, Maine—already face intense opposition, with initial construction at best still years in the future.
The Northern Gateway project, proposed by Canadian pipeline company Enbridge, would stretch from Bruderheim in northern Alberta to Kitimat, a port on Charlotte Sound and the Pacific. If completed, it would allow the export of tar-sands oil to Asia, where Canadian Prime Minister Stephen Harper sees a significant future market (even though few Asian refineries could now process the stuff). But unlike oil-friendly Alberta, British Columbia has a strong pro-environmental bias and many senior provincial officials have expressed fierce opposition to the project. Moreover, under the country's constitution, native peoples over whose land the pipeline would have to travel must be consulted on the project—and most tribal communities are adamantly opposed to its construction. Another proposed conduit—an expansion of the existing Trans Mountain pipeline from Edmonton to Vancouver—presents the same set of obstacles and, like the Northern Gateway project, has aroused strong opposition in Vancouver. Michael Klare is a professor of peace and world security studies at Hampshire College, a TomDispatch regular and the author, most recently, of The Race for What's Left, just published in paperback. A documentary movie based on his book Blood and Oil can be previewed and ordered at You can follow Klare on Facebook by clicking here. To stay on top of important articles like these, sign up to receive the latest updates from here.
By grouping the texts into three parts with loose thematic links, Mr. Rorem has devised a work with an almost narrative thrust. The first part, called ''Beginnings,'' is just that, songs about ''moving forward, and the wistful optimism of love,'' Mr. Rorem writes in a program note. ''Middles'' has songs about coming of age, the horrors of war and the disappointments of love. ''Ends,'' about death, concludes with a text by the Quaker pioneer William Penn, which echoes II Corinthians and gives the work its title: ''Look not to things that are seen, but to that which is unseen; for things that are seen pass away, but that which is unseen is forever.'' This is a fervent religious sentiment for a composer who professes to be an atheist. ''I don't believe in God, and I know there is no afterlife,'' Mr. Rorem said. ''Yet I do believe in belief. I'm not moved by the belief of the Moonies, but I am by the belief of Michelangelo, King David, Paul Goodman.'' The music in this new work is typical of Mr. Rorem's elegant style, with lucid, tonally grounded yet pungent harmonies, with vocal lines that are singable even when challenging and with a lyrical sensibility that pays equal homage to Poulenc and Billie Holiday. Mr. Rorem says he has never been ''smarmy and romantic'' about music's power to ennoble or motivate. ''The most convincing music, in terms of making people take a political action, like marching into war, is usually not of the best quality,'' he explained. ''A Sousa march can do that, but it's not 'Daphnis et Chloe.' And great music can be used for despicable ends. Beethoven was played as prisoners were marched to death in Nazi camps.'' Yet when asked if he thought he could write a piece that would make people march away from war, Mr. Rorem, who was reared by pacifist Quaker parents, said he would do it in a minute, and ''to hell with quality.'' It is enough, he says, if his new song cycle simply moves people, and comforts them. It has comforted him.
Mr. Rorem originally wanted to call the piece ''The Art of the Song.'' He was persuaded to come up with ''something less vain,'' he said, by Steven Blier, a co-artistic director of the New York Festival of Song, and one of the pianists playing the premiere. Surprisingly, for a composer so involved with song, Mr. Rorem says he has never been a fancier of singing. ''I care about the voice, but I am not a voice freak, nor an opera buff,'' he said. ''Insofar as a singer can impart a text, I am moved, and insofar as a divine singer cannot impart a text, I am not moved.'' Mr. Blier challenges Mr. Rorem's claim that he is no vocal connoisseur. ''Ned's understanding of music for the trained voice is so refined that it cannot be the simple result, as he has said, of his love of poetry,'' Mr. Blier explained. ''His music sings beautifully.'' Mr. Rorem's perspective on the current state of the song recital is as negative as his take on contemporary music. By song recital he means an evening of literate works combining words and music in ways that cause audiences to think, as opposed to the diva recital, in which a star singer performs some token songs on the way to the hit arias the fans have really come for. Diva recitals still flourish, Mr. Rorem says, but the song recital is in its death throes. Yet Mr. Rorem confesses that he rarely attends concerts these days (''I'm a musician, not a music lover''), and may be exaggerating the bad news. Others disagree with his view, including Mr. Blier. ''The song is in far better shape than it was,'' Mr. Blier said, ''partly because of a new breed of intellectual, thinky opera singers who have tailored the song recital to their own interests, who sing lots of American music, and new music.'' He cites Dawn Upshaw, Thomas Hampson and Renee Fleming, among others. ''I'd like to think that we at the New York Festival have helped,'' Mr. Blier added.
''Our audiences have grown to realize that a short song can be quite stimulating, and paint a whole cultural universe in three minutes.'' American Singers Committed to Song The singers who will be performing Mr. Rorem's work are of the type Mr. Blier describes, Americans who sing much opera but are deeply committed to song: the soprano Lisa Saffer, the mezzo-soprano Delores Ziegler, the tenor Rufus Muller and the baritone Kurt Ollmann. The work will be presented again at the Library of Congress on April 18. And Michael Barrett, who is co-director of the New York festival and will be the other pianist for the premiere, will offer the work in September at the Moab Music Festival in Utah, which he runs. During the interview, in talking about the completion of this sprawling new cycle and his long, full life, Mr. Rorem's spirits started to rally. By the time the tea was finished and some bakery-bought pastries sampled, he had cheered somewhat. ''I'm very lucky to have always known what I wanted to do,'' he said. ''Not many people can say the same. That goes for Donald Trump, who is a billionaire because he couldn't do anything else. Artists, who are thought so crazy, are the most stable people in the world, because they know what they must do all their lives. And if you are also appreciated for that, no matter how slightly, that's great.'' Photos: Ned Rorem accompanying the tenor Jerry Hadley at Weill Recital Hall. (Chang W. Lee for The New York Times)(pg. E6); Ned Rorem, who says he has gotten through life by writing music, sits by the piano in his vacation home on Nantucket. His new concert-length song cycle is to be presented in New York on Thursday. (Stephen Rose for The New York Times)(pg. E1)
Moving a 21st Century Skills Agenda to Scale in Philadelphia, by Michael Hoch
Published in: Education, Business

Speaker notes: Introductions of presenters; introduction of topic (Mike, Kate).

Slide 1: Project Based Learning to Enhance 21st Century Skills
Philadelphia Academies, Inc.
NAF Conference, July 13, 2010

Slide 2: Opening Activity
Constructivist Protocol, National School Reform Faculty
OBJECTIVE: To gain a deeper understanding of what a student who is prepared for post-secondary opportunities looks like

Slide 3: Individual Brainstorm
Think of a student that you saw graduate and knew would be successful either in college or the workforce

Slide 4: Guiding Questions
- What about that student (what characteristics) made him/her seem likely to be successful?
- Did he/she have the support of mentors/peers/teachers? What did that look like?
- Did he/she tend to work alone or with other people?
- Was it hard for him/her to reach success? Easy? Risky? Safe?
- What motivated him/her to do well?

Slide 5: Small Group Share Out
- Each person should share out about their student
- Group members are allowed to ask clarifying questions
- After everyone has shared, the group should compile a list of characteristics, based on the guiding questions, that describe a high school graduate prepared to go to college or into the workforce

Slide 6: Full Group Share Out
- What do we see? Any surprises?
- Do all of our students look this way? Why or why not?
- What does this mean for our work as practitioners in a school?

Slide 7: PAI Discovery: 21st Century Skills
Slide 8: 21st Century Skills: Basic Knowledge
- English language (spoken)
- Reading comprehension (English)
- Writing in English
- Mathematics & Science
- Government & Economics
- Humanities/Arts
- Foreign Languages
- History/Geography

Slide 9: 21st Century Skills: Applied Skills
- Critical thinking
- Oral communications
- Written communications
- Teamwork/collaboration
- Diversity
- Info. tech. application
- Leadership
- Creativity/innovation
- Lifelong learning
- Professionalism
- Ethics/social responsibility

Slide 10: Project Based Learning: Orange Activity
- In your group, come up with as many different lessons as possible that you could teach with your item.
- What did you come up with?
- What 21st Century Skills are developed during this lesson?

Slide 11: The Student Learning Problem
21st Century Skills were not being emphasized in schools.

Slide 12: Our Solution: Project Based Learning
- British math study in 1997: three times as many students in the PBL school received the top grade on the national math exam
- Challenge 2000: the study found increased student engagement, greater responsibility for learning, increased peer collaboration, and greater achievement gains by students labeled "low level learners."

Slide 13: How Do I…
Teachers were interested in the skills, but concerned about how they could be implemented within the constraints of the core curriculum.

Slide 14: Why Ford PAS?
- "Learning Pillars" = 21st Century Skills
- Academically rigorous
- Standards-based
- Collaborative
- Project- and inquiry-based
- Modules available online, free of cost
- Technical assistance hubs around the country can offer PD and support with integration

Slide 15: Experiencing Ford PAS: Making Product Decisions
Acme Soft Drink Company
Slide 16: About the company…
Acme Soft Drink Company is a medium-sized soft-drink maker with a strong presence in both the United States and several other countries. Acme is unusual among soft-drink manufacturers in that it makes both soft drinks and their containers (aluminum cans). The company sells three soft drinks, of which its cola is the most popular, but it is thinking of expanding its product line.
- Product distribution: Acme soft drinks are available in the United States, Europe, Japan, and China
- Current products: Retro Root Beer, Super Orange Fizz, Extreme Cola
- Number of employees: 1,000 worldwide
- Manufacturing plants: one in Iowa, one in Georgia, and several other plants around the world
- Consumer base: young adults between the ages of 14 and 26

Slide 17: The Problem!!
Acme wants to expand its line by developing a new brand of soft drink for the U.S. market. The company is thinking about a lemon-lime-flavored soft drink.
Competition with other brands will be stiff, so its product will have to meet customer needs and be favorably priced. This new soft drink will also need an aggressive marketing campaign.
Many different departments at Acme will be involved in the development, production, and marketing of this new product, and they all have different concerns. How can Acme develop a new soft drink that will satisfy everyone and make a profit for the company?

Slide 18: Your Job
Acme is planning to hold a company-wide meeting to decide on a strategy for producing the new soft drink. Acme has asked members from the following teams to come up with a Soft Drink Production Plan to share with the company.
- Corporate Citizenship
- Finance
- Marketing and Sales
- Production Departments
Slide 19: The Process
- Work in small groups, by department: Corporate Citizenship, Finance, Marketing and Sales, Production Departments
- Select one representative from your department; this person will represent your department in a company-wide meeting to make decisions about Acme Soft Drink Company
- Company meeting

Slide 20: The Company Meeting
It's decision time…
- What is Acme's soft drink release date?
- What is the cost of the product?
- What is the marketing strategy and cost?
- How many milligrams of caffeine per 12 oz.?
- What will Acme Soft Drink Company use for packaging materials?
- What is the packaging design (such as 20 oz. wide mouth)?
- At which plant will the product be manufactured?

Slide 21: Reflection
- Decision-making process: Did one department have more influence over the decision-making process than the others? Were there some goals that took precedence over others? If so, why do you think that happened?
- Learning from teamwork: How did the teamwork aspect deepen and/or support your learning?
- Challenges of teamwork: What are the challenges of working in a team? What are some strategies to overcome the challenges of teamwork in the classroom/with colleagues?
- Making the connection: How does this type of learning relate to how tasks are accomplished in your practice?

Slide 22: Resources
- Philadelphia Academies, Inc.:
- Partnership for 21st Century Skills:
- National School Reform Faculty:
- Ford Partnership for Advanced Studies:

Slide 23: Contacts for Further Information
- Jennifer Cardoso, Manager of Post-Secondary and Academic Supports:
- Michael Hoch, Site Facilitator:
- Kate Balliet, Site Facilitator:
Posts Tagged 'launch'

SpaceX's Falcon 9 Lands Successfully, Stands…Unsuccessfully.

If there's one thing that SpaceX has shown us, it's that landing a rocket from space onto a barge in the middle of the ocean is, well, hard. Whilst they've successfully landed one of their Falcon-9 first stages on land, not all of their launches will match that profile, hence the requirement for their drone barge. That barge presents its own set of challenges, although the last 2 failed attempts were due to a lack of hydraulic fluid and a slower than expected throttle response. Their recent launch, which delivered the Jason-3 earth observation satellite into orbit, managed another successful touchdown, however the stage failed to stay upright at the last minute.

Elon Musk stated that the failure was due to one of the lockout collets (basically a clamp) not locking properly on one of the legs. In the video he posted you can see which of the legs is the culprit, sliding forward and ultimately collapsing underneath the stage. The current thinking is that the failure was due to icing caused by heavy fog at liftoff, although a detailed analysis has not yet been conducted. Thankfully this time around the pieces they have to look at are a little bigger than last time's rather catastrophic explosion.

Whilst it might seem like landing on a drone ship is always doomed to failure, we have to remember that this is what the early stages of NASA and other space programmes looked like. Keeping a rocket like that upright under its own strength, on a moving barge no less, is a difficult endeavour, and the fact that they've managed to touch down twice (but failed to remain upright) shows that they're most of the way there. I'm definitely looking forward to their next attempt as there's a very high likelihood of that one finally succeeding. 
The payload it launched is part of the Ocean Surface Topography from Space mission, which aims to map the height of the earth's oceans over time. It joins one of its predecessors (Jason-2) and, combined, they will be able to map approximately 95% of the ice-free oceans in the world every 10 days. This allows researchers to study climate effects, provide forecasting for cyclones and even track animals. Jason-3 will enable much higher resolution data to be captured and paves the way for a planned single mission to replace both of the current Jason series satellites.

SpaceX is rapidly decreasing the access costs to space, and once they perfect the first stage landing on both sea and land they'll be able to push those costs down even further. Hopefully they'll extend this technology to their larger family of boosters, one of which is scheduled to be test flown later this year. That particular rocket will reduce launch costs by a factor of 4, getting us dangerously close to the $1,000/KG limit that, when achieved, will be the start of a new era of space access for all.

New Shepard Second Launch

Blue Origin's New Shepard Has Second Successful Test Flight.

It seems that Blue Origin is ready to step out of the cloak of secrecy it has worn for so long. Once an enigmatic and secretive company, they have been making many more waves as of late, setting the scene for them to become more heavily involved in the private space industry. Progress hasn't been all that fast for them, although, honestly, it's hard to tell with the small dribs and drabs of information they make public. Still, they managed to successfully fly their current launch vehicle, New Shepard, at the end of April this year. That test wasn't 100% successful, however: whilst the crew capsule was returned safely, the booster (which has the capability to land itself) did not fare so well and was destroyed. 
Today marks a pivotal moment for Blue Origin, as the second flight of their New Shepard craft was 100% successful, paving the way for their commercial operations.

The New Shepard isn't the typical craft we've come to expect from private space companies. It's much more akin to Virgin Galactic's SpaceShipTwo, as it's designed for space tourists rather than transporting cargo or humans to orbital destinations. That doesn't make it any less interesting, however, as they've already demonstrated some pretty amazing technology that few other companies have been able to replicate. It's also one of the most unusual approaches to sub-orbital tourism I've seen, almost a small scale replica of a Falcon-9 with a couple of unusual features that enable it to be a fully reusable craft.

A ride on a New Shepard will take you straight up at speeds of almost Mach 4, getting you to a height of just over 100 km, the universally agreed boundary of Earth and space. However, not all of the rocket will be going up there with you; instead, once the booster has finished its job it will disconnect from the crew capsule, allowing the remaining momentum to propel the small cabin just a little bit further. The cabin then descends back down to Earth, landing softly with the aid of the standard drag chutes that are common in capsule-based craft. The booster, however, uses some remaining fuel to soft land itself, and appears to be able to do so with rather incredible accuracy. The final part of the video is what failed on the previous launch, as they lost hydraulic pressure shortly after the craft took off. In this video, though, it's clear to see the incredible engineering at work as the rocket constantly gimbals (moves around) its thrust to make sure it can land upright and in the desired location. 
This is the same kind of technology that SpaceX has been trialling with its recent launches, although they have the slightly harder target of a sea barge and a much larger rocket. Still, the fact that Blue Origin have it working, even on a smaller scale, says a lot for the engineering expertise behind this rocket.

I'm hopeful that Blue Origin will continue being a little more public as, whilst they might not be playing with the big boys just yet, they've got all the makings of yet another great private space company. The New Shepard is a fascinating design that has proven to be highly capable with its second test flight, and I have no doubt that more flights are scheduled for the near future. It will be very interesting to see if the design translates well to their proposed Very Big Brother, as that could rocket (pun intended) them directly into competition with SpaceX. It certainly is a great time to be a space nut.

Hitting a Bullseye From Orbit: SpaceX's Automated Landing System.

Reducing the cost of getting things into orbit isn't easy, as the still extremely high cost of getting cargo to orbit can attest. For the most part this is because of the enormous energy required to climb out of Earth's gravity well, and because nearly all launch systems today are single use. Thus the areas where there are efficiencies to be gained are somewhat limited, that is unless we start finding novel methods of getting things into orbit. Without question SpaceX is at the forefront of this movement, having designed some of the most efficient rocket engines to date. Their next project is something truly novel, one that could potentially drop the total cost of their launches significantly.

Pictured above is SpaceX's Autonomous Spaceport Drone Ship, essentially a giant flat barge that's capable of holding its position steady in the sea thanks to onboard thrusters, the same kind many deployable oil rigs use. 
At first glance the purpose of such a craft seems unclear: what use could they have for a giant flat surface out in the middle of the ocean? Well, as it turns out, they're modifying their current line of Falcon rockets to be able to land on such a barge, allowing the first stage of the rocket to be reused at a later date. In fact they've been laying the foundations of this system for some time now, having tested it on their recent ORBCOMM mission this year.

Hitting a bullseye like that, some 100 m x 30 m, coming back from orbit is no simple task. Currently SpaceX is only able to get their landing accuracy down to a radius of 10 km or so, several orders of magnitude larger in area than what the little platform provides. Even with the platform being able to move, and with the new Falcon rockets being given little wings to control the descent, SpaceX doesn't put their chances of a successful landing the first time around higher than 50%. Still, whilst the opportunity for first time success might be low, SpaceX is most definitely up to the challenge and it'll only be a matter of time before they get it.

The reason why this is such a big deal is that any stage of the rocket that can be recovered and reused drastically reduces the costs of future launches. Many people think that the fuel would likely be the most expensive part of the rocket; however, that's not the case: it's most often all the other components which are the main drivers of cost for these launch systems. Thus if a good percentage of the craft is fully reusable you can avoid incurring that cost on every launch and, potentially, reduce turnaround times as well. All of this leads to a far more efficient program that can drive costs down, something that's needed if we want to make space more accessible. It just goes to show how innovative SpaceX is and how lucky the space industry is to have them. 
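The "several orders of magnitude" gap between landing accuracy and deck size can be sanity-checked with some quick arithmetic (a rough sketch using only the figures quoted above: a landing dispersion of roughly a 10 km radius versus a 100 m x 30 m deck):

```python
import math

# Figures quoted above: SpaceX's landing dispersion is a circle of
# roughly 10 km radius, while the drone ship's deck is about 100 m x 30 m.
landing_radius_m = 10_000
deck_area_m2 = 100 * 30  # 3,000 square metres

# Area of the dispersion circle the stage could come down in.
dispersion_area_m2 = math.pi * landing_radius_m ** 2

# How many deck-sized targets fit inside the dispersion area.
ratio = dispersion_area_m2 / deck_area_m2

print(f"dispersion area: {dispersion_area_m2:.2e} m^2")
print(f"deck area:       {deck_area_m2:.2e} m^2")
print(f"gap:             ~10^{round(math.log10(ratio))}")  # about five orders of magnitude
```

So the deck is roughly a hundred-thousandth of the area the stage might land in, which is why shrinking the dispersion with grid fins and engine gimbaling matters so much.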
A feat like this has never been attempted before, and the benefits of such a system would reach far across all space based industries. I honestly can't wait to see how it goes and, hopefully, to see the first automated landing from space onto a sea platform ever.

Orion Exploration Flight Test

NASA's Orion Takes Flight.

Since before the Shuttle's retirement back in 2011, NASA has been looking towards building the next generation of craft that will take humans into space. This initially began with the incredibly ambitious Ares program, which was set to create a series of rockets capable of delivering humans to any place within our solar system. That program was cancelled in 2010 by President Obama and replaced with a more achievable vision, one that NASA could accommodate within its meagre budget. However, not all the work done on that program was lost, and the Orion capsule, originally intended to ride an Ares-I into space, made its maiden flight last week, signalling a new era for NASA.

The profile for this mission was a fairly standard affair, serving as a shakedown of all the onboard systems and the launch stack as a whole. In terms of orbital duration it was a very short mission, lasting only 2 orbits; however, that orbit allowed them to gather some key data on how the capsule will cope with deep space conditions. It wasn't all smooth sailing for the craft, as the mission was meant to launch the day before; a few technical issues, mostly to do with the rockets, saw NASA miss the initial launch window. The second time around, however, they faced no such issues, and with the wind playing nice Orion blasted off for its twice around the world voyage. When I first read about the mission I was curious as to why it was launching into such an unusual orbit. 
To put it in perspective, the apogee (the point of the orbit furthest away from the earth) was some 5,800 km, an order of magnitude higher than anything else in low Earth orbit. As it turns out, this was done deliberately to fling the Orion capsule through the lower Van Allen belt. These belts are areas of potentially damaging radiation, something which all interplanetary craft must pass through on their journey to other planets in our solar system. Since Orion is slated to carry humans through here, NASA needs to know how it copes with this potential hazard and, if there are any issues, begin working on a solution.

The launch system which propelled the Orion capsule into orbit was a Delta-IV Heavy, which currently holds the crown for the amount of payload that can be delivered to low Earth orbit. It will be the first and last time that we'll be seeing Orion riding this rocket, as the next flight, slated for launch towards the end of 2018, will be on the Space Launch System. This is the launch system that replaced the Ares series of rockets when Obama cancelled the Constellation program, and it will be capable of delivering double the payload of the Delta-IV Heavy. It's going to need that extra power too, as the next Orion mission is an uncrewed circumlunar mission, something NASA hasn't done in almost 5 decades.

It's great to see progress from NASA, especially when it comes to its human launch capabilities. The Shuttle was an iconic craft but it simply wasn't the greatest way to get people into space. The Orion, however, is shaping up to be the craft that might finally pull NASA out of the rut it's found itself in ever since the Apollo missions ended. We're still a while off from seeing people make a return to space on the back of a NASA branded rocket, but it's now a matter of when, and not if, it will happen.

Antares Rocket Explodes Shortly After Launch. It did make for a pretty decent light show, though.

Chang'e 3 Launches: China's First Lunar Rover. 
MAVEN Launches, The Mars Atmosphere Mystery Awaits.

Orbital Sciences' Cygnus Launches, Heading For The ISS.

JAXA's HTV-4 Launches With Talking Robot Aboard.

Russian Proton Rocket Crashes Shortly After Take Off.
From Wikipedia, the free encyclopedia

See Operation Motorman for the British Army operation in Northern Ireland.

Hat pin from a motorman on the Chicago, North Shore and Milwaukee railroad.

A motorman is the title for a person who operates an electrified trolley car, tram, light rail, or rapid transit train. The term refers to the person who is in charge of the motor (of the electric car) in the same sense that a railroad engineer is in charge of the engine. The term was (and, where still used, is) gender-neutral. Though motormen have historically been male, women in the position (such as in the United States during the World Wars) were usually also called motormen as a job title. The term has been replaced by more neutral ones, as gender-specific job titles have fallen into disuse and because many systems employ large numbers of women in the position. On the New York City Subway and London Underground, the position is now called train operator (T/Op). The operator of an electric locomotive or an electric multiple unit train on a commuter or mainline railroad is typically called an engineer or driver. The term may also refer to a person at the front of a locomotive-hauled train when the train is being propelled (pushed) by the locomotive: the driver is responsible for applying power in the locomotive, while the motorman (usually in a specially built or converted vehicle) at the front of the train is responsible for obeying signals, sounding the horn, and applying the brakes where necessary.

Oilpatch motorman

In the petroleum industry, a motorman is the member of the drilling crew who is responsible for the maintenance and operation of the engines on an oil rig.
An Olympic honour for Alan Turing The 2012 Olympics offer the perfect chance to mark the anniversary of a great mathematician – and marathon runner It's now well known that Turing laid down the foundations of computer science in the 1930s, helped shorten the second world war by breaking Nazi codes at Bletchley Park and investigated artificial intelligence. He went on to design early computers during the late 1940s and just before he died he was untangling the process of morphogenesis to understand why and how living beings take the shape they do. Only today are scientists appreciating the work he did in his last years, and every computer user can be thankful for his theoretical Turing machine, which captured the essence of the machines we all use. What is less known is that Turing was also an accomplished physical athlete. He was an excellent marathon runner, with a best time of 2 hours 46 minutes. He ran for a local club in Walton, Surrey while working at the National Physical Laboratory in Teddington. He is also said to have run between London and Bletchley Park for meetings during the second world war, and at age 14 he cycled 60 miles from Southampton to school at Sherborne during the general strike of 1926. The last time Britain hosted the Olympics, in 1948, Turing tried out for the British Olympic marathon team. He came fifth in the trials. He ended up attending the games as a spectator taking along two of his young nieces as guests. That year Britain took a silver in the marathon when Thomas Richards ran for 2 hours 35 minutes. Alan Turing was only 11 minutes slower. 2012 has great significance: it's the centenary of his birth on 23 June. To celebrate "Alan Turing Year", mathematicians and scientists across Britain and around the world are arranging events throughout the year. Celebrations of Turing's work will be held in Manchester (where he was living and working when he died) and at Bletchley Park. 
There's even a suggestion that Unesco should designate 2012 the year of computer science. Turing's life also deserves celebration far from the places he's most associated with. As Britons, we live in a world Turing helped create: computers have permeated our lives and his work at Bletchley Park with thousands of others helped bring the war with Nazi Germany to an end. As London shows off what's great about Britain through the Olympic games, let's show off a great Briton of whom we should be proud. What better way to honour Turing than by naming the 2012 marathon the "Turing marathon" and inviting his surviving nieces to witness the event? One of them could even be invited to fire the starting pistol that will set the runners off. Those little girls are elderly now, but their memories of Uncle Alan are bright. Inviting them would be a fitting tribute. Of course, detractors may be concerned about sullying the games by associating an individual with an event. But such concerns didn't stop Greece in 2004 from naming their entire Olympic stadium after Spiridon Louis (who won the marathon event in 1896). Honouring the life of a man would be a welcome antidote to the heavy commercialisation surrounding the games. Others may worry about raking over the embers of the dark days of anti-homosexuality laws. But there's little need to be concerned: celebrating Turing doesn't mean focusing on just that one aspect of his life; it means recognising a mental and physical athlete, a mathematician and marathon runner, and a man to whom we owe so much. It's rare that events coincide to give us one moment in time when a man like Turing can be celebrated in all his complexity. Let's not miss the chance in 2012. This article was commissioned after the author contacted us via a You Tell Us thread
We Now Grow Our Own Rubber (Jul, 1931)

We Now Grow Our Own Rubber

Mexico’s Wild Weed, Guayule, Raised on 5,600 Acres in California, Yields Precious Latex

ACROSS the level surface of a sun-baked valley in central California, tractors drag strange, clanking machines down long, parallel rows of a grayish-green shrub that looks, at first sight, like sagebrush. In a near-by mill, giant crusher rolls grind dried bushes to a pulp, while steaming, high-pressure hydraulic chambers spew forth myriad tiny cellular particles the size of a grain of wheat. In the yard outside, men load freight cars with rectangular pine boxes filled with a spongy, porous material, whose acrid smell is strangely familiar. It is rubber—produced commercially on American soil for the first time in history. Mechanized American efficiency now promises to produce the crude rubber of industry at a cost that can successfully compete with the product of the labor of coolies who are virtually slaves. On a 5,600-acre tract near Salinas, Calif., “guayule,” a shrub imported from the highlands of Mexico, is being grown on a huge scale. When ground to a pulp in machines much like those of a large ore mill, this queer plant yields from thirteen to twenty percent of its own weight in pure raw rubber. The California rubber project represents the triumph of scientists who for years have been searching the world over for a rubber-producing plant that could be grown in the temperate zone. Great automatic machines are now flinging forth a challenge to the rubber plantations of the tropics, where for years man has bled the hevea tree of its sticky sap. This tree, from which almost all of the world’s supply of rubber is derived, grows only in a narrow section near the equator. 
ALTHOUGH plants and trees bearing special tubes filled with the milky “latex,” or sticky sap which becomes rubber, have long been known to exist in North America, few gave promise of practical commercial value. Only when it was discovered that in northern Mexico and southern Texas an immense tract 130,000 square miles in extent is covered with a native weed whose juices contain the precious latex, did American-grown rubber begin to influence the markets of the world. The strange desert shrub that secretes tiny cells of rubber in its bark and wood first came to the attention of science when American mining engineers in central Mexico found peon children chewing guayule twigs for material to make crude rubber balls. Starting with a bit of bark or the wood of the plants, and spitting out the splinters as they chewed, the children would get tiny balls of rubber. This simple trick had come down to them from ancestors during the centuries. Companions of Cortez, on his second voyage to America, found natives of southern Mexico playing a game much like modern tennis, with balls “so elastic that when they touch the ground, even when lightly thrown, they spring into the air with the most incredible leaps”—astounding to the Spaniards, who knew nothing of rubber. In the guayule bush, the rubber is contained in cells in all parts of the bush except the leaves, entirely surrounded by cellulose. In the rubber tree it is in the sap. Nature seems to have intended it to perform entirely different functions in the two plants. In the rubber tree, it forms a sticky residue when sap flows out of a cut or wound and thus protects it, keeps insects out, prevents decay, and helps the wound heal. But in the guayule bush, the rubber is evidently stored up as reserve food. In the goldenrod, and other plants which have been found to contain rubber, it is located mainly in the leaves, and its function is unknown. 
BRIDGING the gap between the wild weed known to the Indians of Mexico three centuries ago and the modern domesticated shrub, raised on American farms like sugar beets or potatoes, lies a strange story of patient search closely paralleling the amazing work of Luther Burbank. Guayule rubber first came to the United States when samples were sent from Durango, Mexico, to the Centennial Exposition at Philadelphia in 1876, but it was eighteen years before the first commercial guayule rubber was produced in Mexico. It was sticky and soft, vulcanized poorly, and had a low tensile strength. Rubber experts scoffed at this product of a weed. Heavily laden with unwanted resinous compounds, it could never compete with the pure latex that oozes from the hevea tree. Yet the need for more rubber became acute as the automobile chugged its way from the inventor’s workshop into American life. The immense Mexican tracts of guayule were looked upon as a source from which rubber might be obtained. In its laboratory, the Diamond Rubber Company, of Akron, made first-class rubber from guayule but the cost was too high to be of practical value. Meanwhile, factories were set up in Mexico, where various companies spent hundreds of thousands of dollars in attempts at commercial manufacture—and “went broke.” As many as thirteen different enterprises tried their hand at guayule extraction, but could not make ends meet, although they produced as much as 150,000,000 pounds of rubber in a single year, and decimated the immense guayule fields of our southern neighbor. THE shrinking supply of wild guayule made it evident that a cultivated variety would have to be developed if a steady production was to be obtained. In 1907, the Continental Rubber Company began to cultivate the guayule on its Cedros range in Mexico; but when the guns of revolution boomed, laboratory work stopped. 
Large quantities of seed were then brought to central California, where tracts of land were set aside near Salinas as a nursery laboratory for research purposes. Here began a long series of experiments directed by Dr. W. B. McCallum. At once he exploded two popular fallacies regarding the guayule: First, that it would not reproduce itself from seed; and, second, that the wild plant, when grown commercially in the field, would not produce rubber. As the seed had been taken from plants on the range, it was inferior, and contained much chaff. As the prophets had predicted, it would not germinate. As an alternative, Dr. McCallum tried planting cuttings from the shrub, but with scant success. Out of many thousands of cuttings set out, fewer than one hundred grew, and those that did take root were lacking in vigor and vitality. Chemists and botanists went into consultation. After countless experiments, they learned to treat the seeds by chemical and other means, so that at least ninety-six out of every hundred would germinate. Once the seeds had sprouted, the guayule grew rapidly. Accustomed to shift for itself in its barren native habitat, the hardy shrub fell prey to blights and diseases when grown in the rich soil of the farm. Dr. McCallum had to learn how to get the soil into just the right physical condition to develop in the plant strong disease-resisting qualities. At once he was harassed by another problem. By irrigation, he could hasten the growth of the guayule and bring a large spreading shrub to maturity in a short time; but as the luxuriance of growth increased, the rubber content fell off to the vanishing point. A four-year-old range shrub yielded about fifteen percent of its dry weight in rubber, but the irrigated plant gave only four percent. MANY range bushes five years old weighed only one pound, but contained a large amount of rubber. Dr. McCallum’s cultivated guayule weighed as much as twenty pounds, but contained almost no rubber. 
But he was not to be baffled by the plant’s eccentricities. In 1913 and 1914 he set out, in southern California, over a million plants grown from mixed seed from Mexico. Later a much larger number of plants was grown in Arizona. These myriad shrubs were catalogued and card-indexed, classified, selected, and reclassified. Out of this enormous number of specimens, only ten strains were chosen as commercial producers. Meanwhile, new difficulties arose. In the nursery, the young plants throve, but when transplanted to the field they refused to take root again. More analysis, more research. Study by the botanists at length revealed that, while the young guayule has a long, deep tap-root, with almost no branches, it later develops a branching root system that enables it to take advantage of the short, infrequent showers of its native region. The plant secretes most of its rubber during the dry season, when it lies almost dormant, little being stored up during the growing period. By patient, intensive culture, Dr. McCallum finally developed a plant whose period of strong root development coincided with the transplanting season, so that this work could be done by machines, without injury to the roots. In twenty years of research, Dr. McCallum has wrought marvelous changes in the wild Mexican shrub, completely domesticating it and at the same time raising its rubber content amazingly. Some specimens grown in the nursery have yielded as high as forty-five percent of their weight in pure rubber. The undesired resins, which Nature supplied to the guayule plant as a sort of dressing to heal its wounds, are reduced as much as forty percent by simple desiccation of the plant before it is milled. THE vulcanized rubber has a softness of texture not found in the rubber of the tropics, yet specimens have been obtained with a tensile strength of 4,000 pounds per square inch. The improved strain produces in two years as much rubber as was stored by the wild plant in four. Dr.
McCallum is now learning how to harvest the shrub at the fourth year and to get the guayule’s big root system to resprout, making possible several repeat crops at two- or three-year intervals. While he has been engaged in this revolutionary experimentation, chemists, peering through high-powered microscopes, analyzed the juices of the guayule bush and classified them as a “colloid” substance—that is, a suspension of countless minute particles, each a microscopic ball of rubber. From knowledge of the behavior of colloids, they predicted that these tiny bits of rubber could be squeezed together into large grains which could be separated from the woody fibers of the plant. This discovery has provided a basis for the production of guayule rubber without the application of costly chemicals and complicated processes, and has helped to make possible the complete mechanization of rubber manufacture. ON THE California guayule plantation where the Mexican weed is being cultivated on a huge scale, machines perform every operation from planting the tiny shrubs, six rows at a time, to gathering the mature bushes, extracting their rubber content, and compressing the finished product into sheets and slabs for shipment. Devices much like vacuum cleaners suck up seeds for planting in the twenty-five-acre nursery, where the guayule bushes are started under the supervision of experts. Tractor-drawn mulchers prepare the top soil, making ready for the mechanical seeder which next passes over the beds, scattering the minute grains, so small that 28,000 of them weigh less than an ounce. When water is needed, overhead sprinklers cast a gentle spray over the seedling plots. By transplanting time, the tiny shrubs have large, strong roots. A mechanical cutter clips their tops. Next comes a machine that digs the plants up bodily. Another machine sorts and boxes them, 5,000 to the bunch.
When the guayule has reached an age of four years or more, power-driven plows strip the bushes bodily from the ground and stack them in piles. Mechanical beaters flail the dirt from the roots, and the plants are allowed to dry out. Then harvester machines pick them up, chop them to shreds, and blow the bits into trailing trucks. In the mill, automatic elevators, endless distributing belts, and revolving screw conveyors carry the chopped guayule through a series of crusher rollers that gradually reduce it to a pulp. Water helps to break down the fibers of the wood. In great wooden tubs half the height of the factory building, the pulp is held until most of the waterlogged wood has settled to the bottom. Then it is run off into hydraulic chambers, where steam and pressure together waterlog the cork particles of bark, which sink to the bottom, while the rubber floats to the top. SKIMMED off, scrubbed by rubber-coated lead pellets the size of golf balls, squeezed through wringers, and dried in vacuum chambers that remove all but one percent of the water, the minute rubber grains are ready to be blocked into slabs. A hot press, exerting a pressure of 2,000 pounds to the square inch, does the rest, and the finished rectangular cubes are packed into pine boxes for shipment. The successful production of rubber on American soil may prevent another world monopoly such as that which a few years ago forced the price of rubber to almost prohibitive heights. At the present record low price of crude rubber, no operators can hope to make money; but officials of the American Rubber Producers, Inc., under whose direction the Salinas plant is run, expect to make a reasonable profit when a normal market exists. If the deadly blight which is the scourge of the hevea tree should sweep through the tropical plantations as it has already done in Brazil, guayule rubber might avert a serious world-wide rubber famine. 1. Ed says: July 19, 2008 6:46 pm Peon children? 2.
Blurgle says: July 19, 2008 7:47 pm “Peon” has meant “landless Latin American agricultural labourer” in English since 1609. It’s still used in that meaning by specialists, although the colloquial meaning of “nobody” or “drudge” is better known to most English-speakers.
Visual Studio Team Test provides a set of asserts through three classes: Assert, StringAssert and CollectionAssert (all defined in the Microsoft.VisualStudio.TestTools.UnitTesting namespace). All of them contain a long list of static methods to test several different kinds of data in different ways. Even with this huge number of methods, there are many scenarios that are still not covered. Let's see a simple example. We need to test whether two instances of the same class are equal (contain the same data). Here is the basic class:

public class MyClass
{
    private int intProp;

    public MyClass(int intProp)
    {
        this.intProp = intProp;
    }

    public int IntProp
    {
        get { return intProp; }
        set { intProp = value; }
    }
}

Here is our test case:

Assert.AreEqual(new MyClass(12), new MyClass(12));

Executing the above test we will get a failure: "Assert.AreEqual failed. Expected:<MyLib.MyClass>, Actual:<MyLib.MyClass>". Why aren't they equal? We are comparing two different references, simple! But that isn't the subject of this post, so I'll go ahead. In order to make the test case successful, all you need to do is override the Equals method in MyClass:

public override bool Equals(object obj)
{
    if (!(obj is MyClass))
        return false;
    return ((MyClass)obj).intProp == intProp;
}

I execute the test again and it works. Is this enough? Not really. Let's consider another test case:

Assert.AreEqual(new MyClass(14), new MyClass(12));

The test will fail as expected, producing the following error message: "Assert.AreEqual failed. Expected:<MyLib.MyClass>, Actual:<MyLib.MyClass>." It's not really clear what's wrong here. We would like a clearer picture of what the differences are, since we are testing with Assert.AreEqual. The idea is to implement a custom assert. An assert class is nothing more than a class with a bunch of static methods that check the parameter values. If something is not correct (an assertion is not satisfied), it throws an exception (AssertFailedException).
So, in our basic sample we would have something like:

public class MyClassAssert
{
    public static void AreEqual(MyClass expected, MyClass actual)
    {
        if (expected.IntProp != actual.IntProp)
            throw new AssertFailedException(string.Format(
                "MyClass.IntProp values are different.{0}Expected: {1}, Actual: {2}",
                Environment.NewLine, expected.IntProp, actual.IntProp));
    }
}

Applying the above assertion to our test case:

MyClassAssert.AreEqual(new MyClass(14), new MyClass(12));

We will get a much clearer error message: "MyClass.IntProp values are different. Expected: 14, Actual: 12". OK, the sample is super simple and doesn't make much sense on its own, but it gives an idea of how to implement custom assertions for more complex 'real' types that provide meaningful test messages.
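The custom-assert pattern described above is framework-independent, not specific to MSTest. As an illustrative sketch only (the post's code is C#; the names below are hypothetical stand-ins, not part of any framework), the same idea in Python looks like this:

```python
# Illustrative sketch: a custom assert that fails with a field-level
# message instead of an opaque type name. Names are hypothetical.

class MyClass:
    def __init__(self, int_prop: int):
        self.int_prop = int_prop

def assert_my_class_equal(expected: MyClass, actual: MyClass) -> None:
    """Raise AssertionError naming the mismatching field and both values."""
    if expected.int_prop != actual.int_prop:
        raise AssertionError(
            f"MyClass.int_prop values are different. "
            f"Expected: {expected.int_prop}, Actual: {actual.int_prop}"
        )

# Passing case: equal field values, no exception raised.
assert_my_class_equal(MyClass(12), MyClass(12))
```

As in the C# version, the payoff is the failure message: it names the field and shows both values, which a generic equality assert cannot do on its own.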
Last Updated: Monday, 9 October 2006, 13:58 GMT 14:58 UK

Planet enters 'ecological debt'

Growing demand eats into Earth's natural capital, say the authors (Image: AP)

Rising consumption of natural resources means that humans began "eating the planet" on 9 October, a study suggests. The date symbolised the day of the year when people's demands exceeded the Earth's ability to supply resources and absorb the demands placed upon it. The figures' authors said the world's first "ecological debt day" fell on 19 December 1987, but economic growth had seen it fall earlier each year. The data was produced by a US-based think-tank, Global Footprint Network. The New Economics Foundation (Nef), a UK think-tank that helped compile the report, had published a study that said Britain's "ecological debt day" in 2006 fell on 16 April. The authors said this year's global ecological debt day meant that it would take the Earth 15 months to regenerate what was consumed this year.

1987 - 19 December
1990 - 7 December
1995 - 21 November
2000 - 1 November
2005 - 11 October
2006 - 9 October
(Source: Global Footprint Network/Nef)

"By living so far beyond our environmental means and running up ecological debts, we make two mistakes," said Andrew Simms, Nef's policy director. "First, we deny millions globally who already lack access to sufficient land, food and clean water the chance to meet their needs. Secondly, we put the planet's life support mechanisms in peril," he added. The findings are based on the concept of "ecological footprints", a system of measuring how much land and water a human population needs to produce the resources it consumes and absorb the resulting waste. Global Footprint Network's executive director, Mathis Wackernagel, said humanity was living off its "ecological credit card" and was "liquidating the planet's natural resources".
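The arithmetic behind an "ecological debt day" can be reconstructed from the figures quoted above. A minimal sketch, assuming a flat 365-day year and a constant consumption rate; the 1.294 demand ratio below is an illustrative value chosen to reproduce the 9 October date, not a number taken from the report:

```python
from datetime import date, timedelta

def overshoot_day(demand_ratio: float, year: int) -> date:
    # If humanity consumes demand_ratio Earths' worth of biocapacity
    # per year, the year's budget is exhausted after 365/ratio days.
    days = round(365 / demand_ratio)
    return date(year, 1, 1) + timedelta(days=days - 1)

def months_to_regenerate(demand_ratio: float) -> float:
    # Time the Earth needs to regenerate one year's consumption.
    return 12 * demand_ratio
```

With a ratio of about 1.29, the annual budget runs out around day 282 of the year (9 October 2006), and regenerating a full year's consumption takes roughly 15.5 months, consistent with the "15 months" figure the authors cite.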
Graph showing nations' ecological footprint (BBC)

"While this can be done for a short while, overshoot ultimately leads to the depletion of resources, such as forests, oceans and agricultural land, upon which our economy depends," Mr Wackernagel said. Fredrik Erixon, director of the European Centre for International Political Economy (Ecipe), a Brussels-based think-tank, said he applauded the authors on their innovative way of focusing attention on the issue of resource depletion. But he added that he found the concept of ecological debt to be "quite ludicrous". "When it comes to using footprints as a way to follow the micro effects of various economic behaviours on the environment, it can be quite good," Mr Erixon said. "But the way they are collecting and assessing information is wrong. We don't really get any serious information out of this." He also questioned the use of the term "debt": "A debt is where you have over-savings in one area of the economy, and under-savings in another. Then you have a transfer of savings from one actor to another in the form of a loan. But who are we indebted to?" Mr Erixon asked. "Perhaps 'ecological exuberance' is better than ecological debt." He added that history had shown that technological advances had led to more efficient uses of natural resources, and had sustained economic growth.
GM spuds beat blight

The findings, funded by the Biotechnology and Biological Sciences Research Council and The Gatsby Foundation, will be published in 'Philosophical Transactions of the Royal Society B' on 17 February. In 2012, the third year of the trial, the potatoes experienced ideal conditions for late blight. The scientists did not inoculate any plants but waited for races circulating in the UK to blow in. Non-transgenic Desiree plants were 100% infected by early August while all GM plants remained fully resistant to the end of the experiment. There was also a difference in yield, with GM tubers from each block of 16 plants weighing 6-13 kg while the non-GM tubers weighed 1.6-5 kg per block. The trial was conducted with Desiree potatoes to address the challenge of building resistance to blight in potato varieties with popular consumer and processing characteristics. The introduced gene, from a South American wild relative of the potato, triggers the plant's natural defence mechanisms by enabling it to recognise the pathogen. Cultivated potatoes possess around 750 resistance genes but in most varieties, late blight is able to elude them. "Breeding from wild relatives is laborious and slow and by the time a gene is successfully introduced into a cultivated variety, the late blight pathogen may already have evolved the ability to overcome it," said Professor Jonathan Jones from The Sainsbury Laboratory. In northern Europe, farmers typically spray a potato crop 10-15 times, or up to 25 times in a bad year. Scientists hope to replace chemical control with genetic control, though farmers might be advised to spray even resistant varieties at the end of a season, depending on conditions. The Sainsbury Laboratory is continuing to identify multiple blight resistance genes that will be difficult for blight to overcome simultaneously. Their research will also allow the prioritization of resistance genes that will be more difficult for the pathogen to evade.
In a new BBSRC-funded industrial partnership award with American company Simplot and the James Hutton Institute, the TSL researchers will continue to identify and experiment with multiple resistance genes. By combining understanding of resistance genes with knowledge of the pathogen, they hope to develop Desiree and Maris Piper varieties that can completely thwart attacks from late blight.

Contact: Zoe Dunford, Norwich BioScience Institutes
Roof collapses at Chernobyl nuclear plant: Ukraine

A section of the Chernobyl nuclear power plant in Ukraine has collapsed under the weight of snow but there were no injuries or any increase in radiation from the reactor that exploded in 1986, the country's emergency agency said Wednesday. "The preliminary reason for the collapse was too much snow on the roof," the agency said, adding that the radiation situation is "within the norm" and nobody was harmed in Tuesday's incident. The roof was constructed after the 1986 disaster but is not part of the sarcophagus structure covering the reactor, it said. However the collapse underlines concerns about the condition of the now defunct nuclear plant more than two and a half decades after the world's worst nuclear disaster. Part of the roof and some of the walls at the plant's machine room, close to the sarcophagus that seals reactor number four, which melted down in the 1986 accident, fell under the weight of the snow. The area of the accident is estimated at about 600 square metres (6,500 square feet), the emergency agency said. A statement on the website of the power station described the accident as the "partial failure of the wall slabs and light roof of the Unit 4 Turbine Hall." It said that the damaged structure was not a critical part of the protection structures at the power plant. "There are no changes in the radiation situation at the Chernobyl nuclear power plant industrial site and in the exclusion zone. There were no injuries," it said. Chernobyl is only around 100 kilometres (60 miles) from Ukraine's capital Kiev and lies close to the borders with Russia and Belarus. The area around the plant is still very contaminated and is designated as a depopulated "exclusion zone". Amid concerns about the state of the sarcophagus, an arch-shaped structure called the New Safe Confinement is being built nearby to slide over the existing sarcophagus covering the reactor.
The EBRD is administering the fund to build the shelter with the help of donor contributions. When it is finished in 2015, the structure will weigh 20,000 tonnes and span 257 metres (almost 850 feet). Two workers were killed by the April 26, 1986 explosion, and 28 other rescuers and staff died of radiation exposure in the following months. Tens of thousands of people needed to be evacuated and fears remain over the scale of damage to people's health. In 1986 and 1987, the Soviet government sent more than half a million rescue workers, known as liquidators, to clear up the power station and decontaminate the surrounding area. However the total death toll from Chernobyl remains a subject of bitter scientific controversy, with estimates ranging from no more than a few dozen deaths directly attributable to the disaster to tens of thousands.
THE Internet may have radically changed ordinary life, but has anyone suggested that the Web might make it possible for people to sleep in a little longer? Last week, a Texas inventor did so with a patent for an alarm system that analyzes information from the Internet to decide whether to change the clock buzzer setting and roust a sleeper early or let him go on dozing. Mary Smith Dewey, who lives in Dallas, patented a system that downloads real-time information about weather and traffic to help people get to work or appointments on time. ''A problem arises when there is an accident on the route the user usually follows, or if the weather creates traffic problems or other delays,'' she writes in her patent application. ''On the other hand, many people may also desire a little extra time for sleeping if the traffic is particularly light one morning, or if, for example, the weather causes a delay in a flight they are scheduled to take that morning.'' In Ms. Dewey's system, the clock will evaluate weather or traffic conditions as requested by a user. The device consists of a keypad, a clock, a modem link to a communication network like the Internet, and a processor with memory that evaluates the data and controls the other elements. To set up the system, a person would use the keypad to enter chosen conditions. The user would provide details like location and route to work, an interest in snow or ice storms, and the addresses of related Web sites. The user can also instruct the system to connect with the Internet, a local area network or a private network. The user would then designate a time for the alarm to ring under ordinary circumstances, followed by a second choice for days when certain weather or traffic conditions exist. The system would connect to the Internet and look for the status of the weather and traffic in the areas the user specified. ''Using the snow example, if [the Web site] indicates that it is snowing, then a second alert time will be computed,'' Ms.
Dewey writes. If the user has instructed the alarm to ring at a time earlier than usual in the event of snow, the processor will set off the early buzzer. The same is true if the system determines that the person can sleep in because an airport is closed. ''In this example, the user would have to input a second alert time, an airline Web site to query and a selected event such as whether the flight he or she is on is canceled,'' Ms. Dewey writes. ''Then if the flight is canceled during the night, this event will be communicated from the Web site, through the Internet and into the processor. The processor will then change the set alert time and the user will be able to sleep in.'' As with just about everything else associated with the Internet, Ms. Dewey's invention comes with an advertising feature. ''Advertising may be substituted for an alarm signal,'' she says. She also says that the system can be wireless so that a person could travel with the clock. Ms. Dewey received patent number 6,229,430. A Novel Approach To Pasta al Dente Pasta lovers know that the amount of boiling water used is critical to cooking pasta al dente. Yet an Italian inventor has won a patent for cooking pasta with no water at all. Primo Bugane of Alpignano, Italy, instead prepares his spaghetti in a pot of tomato sauce. The pot is part of the patent, too. It is a pan with small valleys molded into its bottom and then coated with anti-adhesive material like Teflon. Mr. Bugane says the valleys allow steam to flow around the raw pasta as it cooks. In his patent application, Mr. Bugane says he has objections to the conventional method of cooking pasta in water: a separate pan is necessary for sauce; domestic water often contains chemicals or residue that give a bad taste; and boiling leaches vitamins and other nutrition from the pasta.
So for the many millions of people around the world who like the Mediterranean diet of pasta with tomato sauce, he suggests (and patents) the following recipe: Cover the bottom of the special pot with a layer of fresh or peeled tomatoes, or tomato sauce. Add basil, which need not be minced, and onions or garlic. Add a uniform layer of spaghetti or other pasta. Add appropriate amounts of salt and oil. Make sure the pot bottom is covered with the layer of sauce and second layer of pasta. Do not add water. Cover the pot and place on a moderate flame. Bring the tomato layer to a boil. After ten minutes, when the pasta is soft, stir the noodles and sauce together. Continue cooking for about five minutes. ''During cooking, when the tomato sauce is boiling, flows of steam are generated within the valleys, thereby increasing turbulence within the pan and permitting a uniform cooking of the pasta,'' Mr. Bugane writes. He asserted that the system prevented overcooking, as well. ''It takes a delicious taste as it absorbs the tomato sauce during cooking,'' he promises. Mr. Bugane adds that large amounts of pasta can be cooked the same way by creating multiple layers of sauce and pasta. He received patent number 6,228,413.
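Returning to the alarm-clock patent: the decision logic Ms. Dewey describes amounts to a default alarm time overridden by a second time whenever a user-chosen weather, traffic, or flight condition holds. This is a hedged illustration of that logic only, not the patent's implementation; the rule format and names are hypothetical, and the patent's "query a Web site" step is abstracted into a plain condition function.

```python
# Hypothetical sketch of the patent's alarm-selection logic. Each rule
# pairs a condition check (standing in for a Web-site query) with the
# alternative alert time to use when that condition holds.

def choose_alert_time(default_time, rules):
    """Return the first matching rule's alert time, else the default."""
    for condition, alternative_time in rules:
        if condition():
            return alternative_time
    return default_time

# Example: ring at 05:30 if it is snowing, sleep until 08:00 if the
# flight is cancelled, otherwise ring at the usual 06:30.
is_snowing = True
flight_cancelled = False
rules = [
    (lambda: is_snowing, "05:30"),
    (lambda: flight_cancelled, "08:00"),
]
```

With snow reported, the earlier 05:30 rule wins; with no rule matching, the clock falls back to the ordinary alert time, mirroring the two-time setup the patent describes.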
Chemical unit trains to respond to stateside emergencies

A U.S. soldier assigned to 21st Chemical Company, 2nd Company Battalion, 48th Chemical Brigade, helps identify chemicals during joint operational training held at Fort Bragg in Fayetteville, N.C., on June 26, 2013. (Michael C. Zimmerman/U.S. Air Force)

Disoriented and yelling, a man stumbled up to a group of masked, suited-up people begging for help. In this training scenario, the man had just been exposed to a nuclear explosion. The people in the white protective suits – Fort Bragg soldiers specially trained to respond to chemical, biological, radiological and nuclear emergencies – calmly scanned the man with a wand to determine if he had been exposed. "There are only a select few to our job," said Spc. Brian Patterson, a chemical, biological, radiological and nuclear specialist for the 21st Chemical Company. "The most rewarding part is knowing I'm making a difference and serving my country." Soldiers of the 21st Chemical Company, who traditionally train for deployments and are responsible for defending the country against the threat of weapons of mass destruction, shifted their focus to procedures for handling a nuclear contamination stateside. They ran through the scenario Wednesday with soldiers from the 118th Military Police Battalion and 36th Area Support Medical Company. Their goal is to decontaminate about 200 people or 70 non-ambulatory people in an hour. In the exercise, people either walked up to or were transported by litter to the soldiers with the 21st Chemical Company. The chemical soldiers did an initial exam by using a special wand to detect exposure to radiation. Exposed patients were sent to tents to go through the decontamination process. Patients who could walk moved through a tent to remove their clothing, rinse under hot water and dress in sterile clothing provided to them.
In another tent, contaminated patients who could not walk remained on the litter as chemical soldiers moved them down a counter to cut their clothing, rinse their bodies and wrap them in blankets. "They provide a unique skill," said Capt. Victoria Wallace, commander of the 21st Chemical Company. "They really take it seriously," she said. "It could happen in a state the soldier is from. The previous training has been for deployments; now it's decontaminating in your town. When this happens, it's to mitigate the suffering and save lives." In a real-life situation, chemical company soldiers would receive orders to respond from the secretary of defense. They would team up with local, state and federal officials to determine procedures for decontamination. Although rarely called upon for decontamination missions, the soldiers must be ready to go. Leading up to the decontamination exercise, the chemical soldiers had already been trained and certified on hazardous material handling and procedures. They must wear protective suits, gloves, boots and masks while working. Noticeably absent are the soldiers' weapons and Stryker vehicles. "We're cognizant of our posture," Wallace said. "We don't want to look forceful." It's the only chemical company on Fort Bragg and has about 100 soldiers. There are a few other chemical companies at Army installations across the country. Wallace said the soldiers are training to stay sharp for upcoming national events. They could be called upon for the Democratic or Republican conventions, or the presidential inauguration, she said. In the past, the Army's chemical soldiers responded to Hurricane Katrina, she said. The 21st Chemical Company is part of the active-duty Defense Chemical, Biological, Radiological, Nuclear Response Force, which consists of 5,200 soldiers, sailors, airmen and Marines who deploy to conduct urban search and rescue, patient decontamination, casualty ground/air evacuation and general logistical support. 
The response force is part of the larger Task Force Operations, which is equipped to conduct initial rapid response missions, such as casualty search and rescue, patient decontamination, incident site surveying and monitoring. Another piece of the training incorporated soldiers of the 118th Military Police Company, 503d Military Police Battalion, 16th Military Police Brigade. Instead of security and patrol, the soldiers were tasked as general support to control the flow of incoming patients. Capt. Jaysen Ryberg, commander of the company, said it's the first time the soldiers have been able to participate in such training. "The most important thing is saving as many lives as we can," Ryberg said. "With training like this, I'm confident they can." ©2016 The Fayetteville Observer (Fayetteville, N.C.) Visit The Fayetteville Observer at www.fayobserver.com Distributed by Tribune Content Agency, LLC.
Top Definition

The genre of music between rock and pop. However, unlike "alternative" music, the genre "good alternative" does not contain any Emo music, such as fall out boy, or panic at the disco. Hence why the genre is named "good alternative" instead of "bad alternative". This genre is derived from the need of a further classification of the "alternative" genre. Some examples of "good alternative" music are: Arcade Fire, Radiohead, Arctic
Some examples of bad "alternative" music are: Fall Out Boy, Panic at the Disco, Rites of Spring.

"I went to a bar last night. There was some good alternative music playing, it was actually pretty cool." "Nice man, I went to an emo bar last night, they played some interesting alternative music, it made my ears bleed...?"

by Stevo-K January 02, 2009
Rychvald

The Czechoslovak Hussite Church

Rychvald (Polish: Rychwałd, Cieszyn Silesian: Rychwołd, German: Reichwaldau) is a town in the Karviná District, Moravian-Silesian Region, Czech Republic, in the historical region of Cieszyn Silesia. It has a population of 6,769 (2001 census).

Coordinates: 49°51′37″N 18°22′44″E / 49.86028°N 18.37889°E
Country: Czech Republic
Region: Moravian-Silesian
District: Karviná
First mentioned: 1305
Mayor: Šárka Kapková
Area: 17.02 km2 (6.57 sq mi)
Elevation: 220 m (720 ft)
Population (2006): 6,817 (density 400/km2 (1,000/sq mi))
Postal code: 735 32
Website: http://www.rychvald.cz/

The village was first mentioned in a Latin document of the Diocese of Wrocław called Liber fundationis episcopatus Vratislaviensis from around 1305 as "item in Richinwalde XL mansi".[1][2][3][4] It meant that the village was supposed to pay a tithe from 40 greater lans. The -walde (German for a wood) ending of its name indicates that the primordial settlers were of German origin. The village could have been founded by Benedictine monks from an Orlová abbey[5] and could also have been part of a larger settlement campaign taking place in the late 13th century on the territory of what would later be known as Upper Silesia. Politically the village initially belonged to the Duchy of Teschen, formed in 1290 in the process of the feudal fragmentation of Poland, and was ruled by a local branch of the Piast dynasty. In 1327 the duchy became a fief of the Kingdom of Bohemia, which after 1526 became part of the Habsburg Monarchy. The village became the seat of a Catholic parish, mentioned in the register of Peter's Pence payments from 1447 among 50 parishes of the Teschen deaconry as Reychenwald.[6] It is now served by the Saint Anne Church.
After World War I, the fall of Austria-Hungary, the Polish–Czechoslovak War and the division of Cieszyn Silesia in 1920, it became a part of Czechoslovakia. Following the Munich Agreement, in October 1938, together with the Zaolzie region, it was annexed by Poland, administratively organised in Frysztat County of Silesian Voivodeship.[7] It was then annexed by Nazi Germany at the beginning of World War II. After the war it was restored to Czechoslovakia.

References

1. ^ Hosák et al. 1980, 405.
2. ^ Panic, Idzi (2010). Śląsk Cieszyński w średniowieczu (do 1528) [Cieszyn Silesia in the Middle Ages (until 1528)] (in Polish). Cieszyn: Starostwo Powiatowe w Cieszynie. pp. 297–299. ISBN 978-83-926929-3-5.
5. ^ I. Panic, 2010, p. 430.
6. ^ "Registrum denarii sancti Petri in archidiaconatu Opoliensi sub anno domini MCCCCXLVII per dominum Nicolaum Wolff decretorum doctorem, archidiaconum Opoliensem, ex commissione reverendi in Christo patris ac domini Conradi episcopi Wratislaviensis, sedis apostolice collectoris, collecti". Zeitschrift des Vereins für Geschichte und Alterthum Schlesiens (in German) (Breslau: H. Markgraf) 27: 361–372. 1893. Retrieved 21 July 2014.
Will Burning Biomass With Coal Do Any Good?

A Polish coal plant, which will cofire biomass (burn biomass at the same time as coal) to help reduce its emissions by 25 percent compared with the country’s current coal plants, is due to come online in 2009. A major Polish power group, Poludniowy Koncern Energetyczny, estimates its total cost at €500 million ($735 million). The power plant, and others like it that combine renewable feedstock with coal, present a new dilemma for green contemplation. The all-investment, all-the-time arm of the green movement believes that we should be throwing money at all viable technologies that could reduce greenhouse gases. So, Vinod Khosla invests up and down the line in ethanol companies because he believes the intermediate steps will help us get to the end goal of second- and third-generation biofuels that are not as bad as corn-based ethanol. Dropping some biomass into our coal plants, just like ethanol doping our gasoline, will get us a reduction in coal mining and emissions, but raises the question: Will these small steps actually put off the more drastic steps that we’ll need to take to keep our climate on the rails? Or do we need to do something with coal for the next decade, until some type of carbon capture and storage system is ready? The last president of the American Association for the Advancement of Science had a brilliant state-of-the-world article in this week’s Science, adapted from a speech he gave last year, in which he argued for a truly sustainable approach to the use of land: We need more studies that combine projected land requirements for food and feed, fiber, biofuels, and infrastructure — rather than pretending that each use can be analyzed separately — and that attempt to reconcile the combined demands with the requirement for enough land covered by intact forests and other native ecosystems to provide the carbon sequestration and other ecosystem services society cannot do without.
Just like with biofuels, we have to ask of biomass for baseload generation: Which feedstock? Where is it grown? How is it grown? Over at Biopact, a cleantech group lobbying for a green energy pact between Europe and Africa, they run the numbers on one example of a good cofiring feedstock, palm kernel shells grown by Nigerian palm farms. “Palm kernel shells are a waste residue from palm fruit processing; they are easy to ship, don’t need to be densified and can be readily co-fired with coal,” they wrote. Coal cofired with biomass grown for the purpose might not be a clear winner, but coal burned with waste biomass seems like a pretty good idea. Clearly, technologies like these aren’t going to solve our problems, but by engaging with their design, we can ensure that these intermediate steps are the longest strides possible.

Dr. No
If somebody would only care to understand physics and thermodynamics. E=MC2 predicts the amount of ash you are left with after you pulled out the energy at a miserable 40-45% efficiency. Where is the ash going to go??? Dump it “some place”??? Quit wasting time looking for a magic bullet.

Every step which reduces dependency on finite resources is a worthwhile step. Each measured introduction of a renewable resource is part and parcel of a long-term solution. Sometimes, my peer enviros sound like they learned to analyse history – from Walt Disney. It’s not likely John Wayne will stroll into town with a Winchester rifle and make it all better.

Maybe they should also look at Malaysia, one of the biggest producers of palm oil products, given the enormous scale of oil palm cultivation there.
Don’t know where the waste goes, but it is certainly a venue to explore.
The concept of FireAndBrimstoneHell comes from a distinction LostInTranslation. The word "Hell" is used as a translation for four words used in the initial writing of ''Literature/TheBible'' in its original languages: ''Sheol'', ''Hades'', ''Tartarus'', and ''Gehenna''. ''Sheol'' is a Hebrew word and ''Hades'' is Greek; both mean the same thing, the abode of the dead for all humans, whether good or bad, at least until the end of days, and used in conjunction with Ecclesiastes 9:5 would refer to CessationOfExistence. ''Hades'' is mentioned as a place of torment only once: in Luke 16:19-31, Jesus tells a story about a rich man who died and went there (however, this was a parable, so whether he meant it literally is up for debate). ''Tartarus'' is used only once: -->God did not hold back from punishing the angels that sinned, but, by throwing them into Tartarus, delivered them to pits of dense darkness to be reserved for judgment. (2 Peter 2:4) Note that this is in past tense, and the phrase "reserved for judgment" denotes that this is not the final fate for the {{Fallen Angel}}s referred to. ''Gehenna'' is the Greek form of a Hebrew name meaning "Valley of Hinnom", which was a trash dump where garbage, filth, corpses of criminals, and the like were burned. Jesus re-purposed this word to refer to the future eventual end and KarmicDeath of the wicked. Unfortunately, the ''King James Version'' translated ALL FOUR words as "Hell", despite ''Sheol''/''Hades'' and ''Gehenna'' having different meanings, causing some confusion, especially considering the KJV's long-standing use and popularity. Recent Bible translations have caught on to this error and make some attempts to correct it, but it's too little, too late to reverse the popular notion of "Hell".
The "fire and brimstone" part comes from a set of passages in chapters 14 and 19-21 of the Literature/BookOfRevelation, in which [[{{Satan}} the devil]] and the ungodly are cast into a lake of fire and sulfur, which is generally considered to be the same as ''Gehenna'': -->And another angel, a third, followed them, saying in a loud voice: "If anyone worships the wild beast and its image, and receives a [[MarkOfTheBeast mark]] on his forehead or upon his hand, he will also drink of the wine of the anger of God that is poured out undiluted into the cup of his wrath, and he shall be tormented with fire and sulfur in the sight of the holy angels and in the sight of the Lamb. And the smoke of their torment ascends forever and ever, and day and night they have no rest, those who worship the wild beast and its image, and whoever receives the mark of its name." (Revelation 14:9-11)\\ And death and Hades were hurled into the lake of fire. This means the second death, the lake of fire. Furthermore, whoever was not found written in the book of life was hurled into the lake of fire. (Revelation 20:14, 15)\\ "But as for the cowards and those without faith and those who are disgusting in their filth and murderers and fornicators and those practicing spiritism and idolaters and all the liars, their portion will be in the lake that burns with fire and sulfur. This means the second death." (Revelation 21:8) Note that the first and third passages DO mention damned humans. However, none of these passages say anything about being underground or demons doing the tormenting. In addition, Revelation is generally viewed to take place in the FUTURE, which some interpret as meaning there isn't a "lake of fire" NOW, or at least nobody is in it yet. As for the third and fourth passages, does the "tormented day and night forever and ever" really refer to eternal torture? While the Devil is a living entity, the wild beast and false prophet are not. They are symbols, and symbols cannot be tortured.
Neither is the "lake of fire" an actual place. Additionally, death itself is obviously not a living entity, but a state of being. How can you torture death? To say that the verse means that the Devil will be literally tortured would also mean that symbols and death itself can also be tortured. The word translated as "torture" is ''basanizo''; while that translation is probably more common now, in ''ancient'' Greek, the ''primary'' translation referred to "testing/proving (on a touchstone)" or "examine closely", which would refer to testing God's and Satan's rival claims to sovereignty (as set out in Eden, as told in Genesis chapter 3) against each other and finally resolving the issue, also setting a precedent on the issue and preventing another issue like it from ever arising. Thus Satan will have been 'proved wrong' "day and night, forever and ever." The more popular notion of FireAndBrimstoneHell where demons torment the damned appears to have originated with Dante's ''[[DivineComedy Inferno]]'', so this is OlderThanPrint... more or less. Most of the layers of Hell in ''Inferno'' were more like the IronicHell. [[{{Flanderization}} Only about one or two layers were]] TRULY Fire and Brimstone (In fact, the lowest level, reserved for traitors, was [[EvilIsDeathlyCold completely frozen over]]). For the curious, brimstone is another word for sulfur and means "burning stone". It burns quite hot with a blue flame and is more or less unquenchable. It produces sulfur dioxide, which is quite noxious. All in all, it's an unpleasant sort of fire, one that historically was used to purify homes of bad air. Thus its purpose in hell is twofold: it cleans as it burns. It should perhaps also be noted that FireAndBrimstoneHell is not a uniquely Christian concept. For example, there are a number of passages in The Qur'an that affirm its existence just as vigorously as (and [[{{Gorn}} much more graphically than]]) any of the Bible bits quoted in this article.
A couple of examples: -->Lo! Those who disbelieve Our revelations, We shall expose them to the Fire. As often as their skins are consumed We shall exchange them for fresh skins that they may taste the torment. (Surah 4:56, "The Women") There is also the question of how "literal" Revelation is supposed to be interpreted anyway. Annihilationists believe that eternal punishment would instantly mean that GodIsEvil. The "lake of fire and sulfur" is not literal, but instead [[WhatDoYouMeanItsNotSymbolic figurative or symbolic]]. It represents [[KilledOffForReal eternal destruction]] and those sent there are [[DeaderThanDead completely obliterated]]. The only thing that lasts for eternity is the fire itself, and humans are not inherently immortal. After all, [[WordOfGod God Himself said to Adam and his sons that sinners will be returned to dust or destroyed, not given the Fruit of Life.]] They also argued that this was the stance of the original Jews who used the metaphor of the real-life garbage incinerator "Gehenna" to signify a place where sinners are annihilated forever. The concept of Eternal Punishment originated from Tartarus in Myth/ClassicalMythology, but it was only reserved for the most vile of Complete Monsters. However, the concept of Gehenna gave an excuse for Tartarus to be adopted by the [[CorruptChurch exceedingly-corrupt]] Catholic Church who modified the concept to include all non-Christians, in order to strike totalitarian fear into its subjects and to give an excuse to make suicide illegal.
Research could lead to preeclampsia prevention

AUGUSTA, Ga. - Excessive turnover of cells in the placenta may trigger an unnatural increase in blood pressure that puts mother and baby at risk, researchers say. It's called preeclampsia, a condition that can develop after the 20th week of pregnancy, prompting an unhealthy increase in the mother's blood pressure that can result in premature delivery. Georgia Health Sciences University researchers want to know if dead placental cells in some cases produce an exaggerated immune response that constricts blood vessels and raises blood pressure. "During pregnancy, there is a natural turnover of trophoblasts - the main cell type in the placenta," said Dr. Stella Goulopoulou, a postdoctoral fellow in the Medical College of Georgia Department of Physiology at GHSU. "In pregnancies with preeclampsia, we see exaggerated rates of cell death compared to normal pregnancies." When those cells die, they can release their mitochondria - the cell's powerhouses - which then bind to a key receptor, Toll-like receptor 9, and cause an inflammatory response. Previous research has linked mitochondria released by damaged or dead cells to inflammatory responses associated with sepsis and heart failure. "Blood vessels, like other tissues, have receptors that respond to mitochondrial DNA and other components of the mitochondria," Goulopoulou said. "DNA from the mitochondria can specifically activate Toll-like receptor 9, which is present in blood vessels. In our experiments, we found that activating Toll-like receptor 9 causes the blood vessels to constrict more than normal." Goulopoulou has received a $25,000 Vision Grant from the Preeclampsia Foundation to study whether that is behind the generalized inflammation and if that ultimately impairs the growing baby's supply of nutrients and oxygen.
Vision Grants provide initial funding for novel lines of research to encourage young investigators to study causes and treatments of preeclampsia. "The placenta is a dynamic tissue," she said. "We think it is the source of the mitochondria implicated in preeclampsia because it is the only tissue that undergoes such cell turnover during pregnancy. It also goes away, in most cases, when the placenta and baby are delivered." Preeclampsia is characterized by high blood pressure, protein in the urine - a sign the kidneys are stressed - and restricted growth of the fetus. It can also cause long-term damage to the mother's blood vessels, kidneys and liver. The condition causes approximately 76,000 maternal and half a million infant deaths worldwide each year. The symptoms - headaches, nausea, swelling, aches - can be indistinguishable from those of ordinary pregnancy, which can complicate diagnosis. Risk factors include first pregnancy, multiple fetuses, obesity, maternal age greater than 35 and a maternal history of diabetes, high blood pressure or kidney disease. Researchers suspect many different causes for the condition, and although mild cases may be treated with dietary modifications, bed rest and blood pressure medication, birth is the only cure, Goulopoulou said. Goulopoulou is looking for a molecular explanation for what triggers Toll-like receptor 9 to signal the body's inflammatory response, leading to vessel constriction. "When vessels in the uterus constrict, it inhibits blood flow, oxygen and nutrient supply to the baby," she said. "So, increased uterine constriction could be responsible for restricting the baby's growth." Women with preeclampsia often have underweight and underdeveloped babies. By injecting mitochondria from placental cells into pregnant rats, Goulopoulou expects to see an inflammatory response and symptoms of preeclampsia. She will also measure the levels of mitochondrial DNA in the circulation of women with preeclampsia.
"One of the main objectives of this study is to discover why and how activation of Toll-like receptor 9 by mitochondrial DNA causes abnormal function of the blood vessels," she said. "If it is determined that this receptor is responsible, it could be a valid therapeutic target."

Contact: Jennifer Hilliard Scott, Georgia Health Sciences University
Coming of Age in Mississippi Quiz | Eight Week Quiz A
Anne Moody

Name: _________________________ Period: ___________________

Multiple Choice Questions

1. How is Ed related to Mama?
(a) Her cousin (b) Her nephew (c) Her step father (d) Her brother

2. Who does Raymond bring to live with them?
(a) Davis (b) James (c) Miss Pearl (d) Jack

3. Who looks after Essie Mae and her sister while their parents are at work?
(a) Abraham Fredericks (b) George Lucas (c) George Lee (d) Joe Rogers

4. Where is Essie baptized?
(a) A lake (b) The sea (c) A swimming pool (d) A pond

5. How are Ed's younger brothers different from the rest of the family?
(a) They are disabled. (b) They are Asian. (c) They are albinos. (d) They are white.

Short Answer Questions

1. Where should blacks go in the movie theater?
2. For what does Miss Claiborne give Essie $12?
3. Where do Essie's parents work?
4. Who sets fire to the house in Chapter 2?
5. How does Essie impress the white children?
How does "The Man to Send Rain Clouds" comment on the complexities involved in cross-cultural communication and understanding?

The entire story is an extended comment on cross-cultural communication. In it a dead person's body is treated according to both Catholic and traditional native customs. Just the fact that this is necessary indicates the complex challenges involved in cross-cultural communication. The priest's ambivalence about whether to use holy water in the Native American ceremony is a good example of the emotional challenges of cross-cultural communication. He has to do a lot of translation to agree to it, and these translations have both symbolic and emotional impact on him (and the Native community). The story therefore does a good job of showing just how much energy and attention is needed to communicate—and that meaning varies according to context.
Compendium of forest hydrology and geomorphology in British Columbia

Project Background: Over the last two decades, hydrologists and geomorphologists have often discussed the need to document the history, scientific discoveries, and field expertise gained in watershed management in British Columbia. Several years ago, a group of watershed scientists from FORREX, academia, government, and the private sector gathered at the University of British Columbia to discuss the idea of a provincially relevant summary of hydrology, geomorphology, and watershed management. Through this meeting, the Compendium of Forest Hydrology and Geomorphology was born. As a synthesis document, the Compendium consolidates current scientific knowledge and operational experience into 19 chapters. To ensure reliable, relevant, and scientifically sound information, all chapters were extensively peer reviewed employing the standard double-blind protocol common to most scholarly journals. Chapters in the Compendium summarize the basic scientific information necessary to manage water resources in forested environments, explaining watershed processes and the effects of disturbances across different regions of the province. In short, the Compendium is about British Columbia and is primarily intended for a British Columbian audience, giving it a uniquely regional focus compared to other hydrology texts. At over 800 pages, the Compendium showcases the rich history of forest hydrology, geomorphology, and aquatic ecology research and practice in British Columbia and sets forth the foundation for the future by showing us how much more we have yet to learn.

Project Team: The project team consists of a Steering Committee and a Publication Team. The project steering committee guides the development of the manuscript, while the publication team co-ordinates publication production.

Project Steering Committee:
• Dr. Todd Redding, FORREX
• Robin Pike, BC Ministry of Forests and Range
• Dr. R.D. (Dan) Moore, University of British Columbia
• Dr. Rita Winkler, BC Ministry of Forests and Range
• Dr. Kevin Bladon, FORREX
Noun: bhindi
Usage: Asia

1. Long green edible beaked pods of the okra plant
- okra
- okra, gumbo [N. Amer], okra plant, lady's-finger, Abelmoschus esculentus, Hibiscus esculentus
3. Long mucilaginous green pods; may be simmered or sautéed but used especially in soups and stews
- gumbo [N. Amer], okra

Derived forms: bhindis
Type of: herb, herbaceous plant, pod, seedpod, veg, vegetable, veggie
Part of: Abelmoschus, genus Abelmoschus
Encyclopedia: Bhindi
[Image captions: Nagapatri at Belle brahmastana; Nagabana at Belle Badagumane, Moodubelle, Udupi; union of nagabrahma and nagakannike at a mandala held in Belle Brahmastana, Udupi; a mandala drawn during nagamandala]

Nagaradhane (Tulu: പാമ്പ് ആരാധന) is a form of snake worship which, along with Bhuta Kola, is one of the unique traditions prevalent in the coastal districts of Dakshina Kannada, Udupi and Kasaragod, a region alternatively known as Tulu Nadu. Snakes are not seen as deities, but as an animal species which should be respected, appeased and protected for multiple social, religious and ecological reasons.

Origin of Nagaradhane

Snakes have been associated with power, awe and respect in India. According to Hindu mythology, Lord Vishnu takes rest under the shade of the giant snake, Adisesha. Lord Shiva wears the snake Vasuki around his neck. It is difficult to trace the origin of Nagaradhane, though the Nairs of Kerala and the Bunts of Tulu Nadu claim to be kshatriyas of Nagavanshi descent, so snake worship may have been popularised by them. Though most rituals of snake worship are performed by Brahmins, there is not a single Bunt house that does not have a nagabana. Snakes are offered sweets and milk to appease them. The snake worship rituals practiced in Tulu Nadu are quite unique and different from other rituals. Snakes have their own shrines in a sacred grove known as a Nagabana. The shrines have images of cobras carved of stone. Accordingly, nobody is allowed to chop the trees near the Nagabana. It is also believed that snakes, specifically cobras, are not to be harmed or killed by anyone. If harmed, the individual has to perform a ritual to cleanse the sin of killing or harming the snake. The belief is that an individual who refuses to perform the ritual will be cursed by the snake for eternity.
It can also be noted that in Tulu Nadu, or the South Canara region in Karnataka, agriculture is predominant, with paddy as the main crop. In these fields snakes help in saving the crop from rodents. This can be a plausible reason for the worship of snakes.

The ritual

[Image caption: Mandala drawn during ashleshabali at Belle Badagumane, Moodubelle, Udupi]

There are two distinct rituals performed in reverence to the snake: Aashleshabali and Nagamandala. Of these, Nagamandala is the longer and more colourful of the two. Nagamandala depicts the divine union of male and female snakes. It is generally performed by two priests. The first priest, called the patri, inhales the areca flower and becomes the male snake. The second priest, called the Nagakannika, or female snake, dances and swings around an elaborate serpent design drawn with natural colours on the sacred ground. The ritual is accompanied by the playing of an hourglass-shaped instrument called the Dakke. The drawings in five different colours on the sacred ground are white (white mud), red (a mix of lime powder and turmeric powder), green (green leaf powder), yellow (turmeric powder) and black (roasted and powdered paddy husk). Aashleshabali is similar in nature to the after-death rituals performed for humans as per Hindu tradition. The ritual, centered on the serpent design, continues till early in the morning. A similar ritual is found in Kerala and is known as Sarpam Thullal or Sarpam Kali. All communities of Tulu Nadu revere snakes.[citation needed]

Significance of Nagabanas

Nagabanas, or the sacred groves, are deemed to be the resting place of snakes. Cutting trees or defacing the grove is considered sacrilege. People are wary of snake-bites and also want ecological preservation.
Bipolar Disorder Mood disorders are conditions that cause people to feel intense, prolonged emotions that negatively affect their mental well-being, physical health, relationships and behaviour. In addition to feelings of depression, someone with bipolar disorder also has episodes of mania. Symptoms of mania may include extreme optimism, euphoria and feelings of grandeur; rapid, racing thoughts and hyperactivity; a decreased need for sleep; increased irritability; impulsiveness and possibly reckless behaviour. Depression and Bipolar Disorder We all experience changes in our mood. Sometimes we feel energetic, full of ideas, or irritable, and other times we feel sad or down. But these moods usually don’t last long, and we can go about our daily lives. Depression and bipolar disorder are two mental illnesses that change the way people feel and make it hard for them to go about their daily routine.
Nora The Piano-Playing Cat: The Secret Behind Her Talents Explained (VIDEO)

Guest Post by Marc Silver of National Geographic's Pop Omnivore Blog

Like many of you--OK, millions of you--I'm a fan of Nora the Piano-Playing Cat, star of YouTube videos. Gray and sleek, she strokes the keys with grace and restraint. She duets with her piano-playing mistress. She appears to be, as one YouTube commenter says, the reincarnation of Meowzart, er, Mozart. I was inspired to try to get my cat, Rosie, to tickle the 88s. I held her in my lap and took control of her paws. I made her play Chopsticks. She seemed to enjoy it. But has she practiced even one minute since then? Nope. "You have a wayward feline who refuses to practice" is the joking diagnosis of Nick Dodman, animal doctor. Dodman directs the Animal Behavior Clinic at the Tufts Cummings School of Veterinary Medicine, and is a good person to explain why cats might play the piano. Hint: It's not because they like the sound of music. Here are some points that speak to Nora's prowess and motivation.

1. Cats can be trained. "People have the impression you can't train a cat," says Dodman. Cats can be taught to do quite sophisticated things, like turning a light on and off, opening the door to a cupboard, or running through a complicated circuit of 10 exercises in order. Professional cat trainers have tools and tricks that can work for regular cat owners, too (see point #3).

2. Cats can learn by watching. "Imitation learning has been demonstrated quite clearly in cats," says Dodman. In other words, monkey see/monkey do does not just apply to monkeys. In one study, cats were trained to press a lever to receive food. Other cats then watched the lever-pressing cats. The cats that observed the behavior learned it more quickly than the cats that had never seen such a thing in all their nine lives. So Nora the Piano-Playing Cat might well have seen her owner tickling the ivories. But why would she decide to imitate them?

3. Cats like rewards.
Maybe Nora jumped up on the piano stool, as cats are wont to do. Maybe her paw accidentally hit a key. Maybe her owner said, "Awww, Nora," and petted her, or gave her a kitty treat. "You can teach cats to do almost anything if they're hungry and the food is delicious," says Dodman. A handheld device called a clicker can help. As its name indicates, it makes a clicking sound. That's how Dodman once taught a cat to sit. Here's what you do: Wait for the behavior to be expressed by chance--say, the cat happens to sit. Then click the clicker. Then open a can of wet food--mmm, yummy. The cat will associate the click with the behavior and the reward.

5. Cats can learn to respond to short command words. Says Dodman: "It's easier [for them] to understand monosyllabic words that end in a hard sound like a consonant--sharp command words. You don't want something to end in a vowel. Then there's no ending, just a trailing vowel." So perhaps Nora's owners use a piano command word, like Hit It or Note. But not Pianooooooo.

6. You could get a cat to repeatedly press piano keys. "In the video, the person stops playing, nothing is happening; then the cat starts it up and plays a couple notes. It looks like a duet," says Dodman. Here's what probably happened: The owner used the cat-effective strategy of reward/no reward. In this case, the cat played a key. Nothing happened. At this point the cat might have just lost interest and wandered off. Or it might have thought, Hmmm, maybe I need to do it again to get the reward. So it hit another note, then another, and it received affection, petting, food--the whole gestalt.

7. There is a benefit to training a cat. The bond between pet owner and pet is strengthened. The owner thinks, My cat is very special. Says Dodman: "[People] can't hide their feelings from the animal, and cats are good at picking up body language."

8. Playing the piano isn't the only trick a cat can learn.
"My cat, knowing I was going to be interviewing with you today, last night [learned to turn on] a radio on my kitchen counter," Dodman says. The cat, named Griswold (after the Chevy Chase character in the movie National Lampoon's Vacation), jumped on the counter, saw a flickering shadow, and hit the radio button. Griswold got attention for this act, so he did it again. But here's the thing: Griswold, says Dodman, is deaf as a post. So the motivation to repeat the action was clearly the desire to be loved. In a similar vein, Dodman once had a grad student who trained a cat to run across the bedroom, jump on a chair, and turn off the bedroom lights.

P.S. - Cats probably don't like music. Dodman believes that piano music would sound as harsh and dissonant to a cat's ears as Japanese opera would sound to European ears. Kind of like bonkbonkbonkbonk. He adds, "I don't think cats can be trained to appreciate music."
[Image: An artist's representation of atomic-scale heat dissipation, which poses a serious obstacle to the development of novel nanoscale devices. Univ. of Michigan engineering researchers have, for the first time, established a general framework for understanding heat dissipation in several nanoscale systems. Credit: Enrique Shagun, Scixel]

In findings that could help overcome a major technological hurdle on the road toward smaller and more powerful electronics, an international research team involving Univ. of Michigan engineering researchers has shown the unique ways in which heat dissipates at the tiniest scales. A paper on the research is published in Nature. When a current passes through a material that conducts electricity, it generates heat. Understanding where the temperature will rise in an electronic system helps engineers design reliable, high-performing computers, cell phones and medical devices, for example. While heat generation in larger circuits is well understood, classical physics can't describe the relationship between heat and electricity at the ultimate end of the nanoscale—where devices are approximately one nanometer in size and consist of just a few atoms. Within the next two decades, computer science and engineering researchers are expected to be working at this "atomic" scale, according to Pramod Reddy, U-M assistant professor of mechanical engineering and materials science and engineering, who led the research. "At 20 or 30 nanometers in size, the active regions of today's transistors have very small dimensions," Reddy says. "However, if industry keeps pace with Moore's law and continues shrinking the size of transistors to double their density on a circuit, then atomic scales are not far off. "The most important thing, then, is to understand the relationship between the heat dissipated and the electronic structure of the device, in the absence of which you can't really leverage the atomic scale. This work gives insights into that for the first time."
The researchers have shown experimentally how an atomic-scale system heats up, and how this differs from the process at the macroscale. They also devised a framework to explain the process. In the tangible, macroscale world, when electricity travels through a wire, the whole wire heats up, as do all the electrodes along it. In contrast, when the "wire" is a nanometer-sized molecule and only connecting two electrodes, the temperature raises predominantly in one of them. "In an atomic scale device, all the heating is concentrated in one place and less so in other places," Reddy says. In order to accomplish this, researchers in Reddy's laboratory—doctoral students Woochul Lee and Wonho Jeong and postdoctoral fellow Kyeongtae Kim—developed techniques to create stable atomic-scale devices and designed and built a custom nanoscale thermometer integrated into a cone-shaped device. Single molecules or atoms were trapped between the cone-shaped device and a thin plate of gold to study heat dissipation in prototypical molecular-scale circuits. "The results from this work also firmly establish the validity of a heat-dissipation theory that was originally proposed by Rolf Landauer, a physicist from IBM," Reddy says. "Further, the insights obtained from this work also enable a deeper understanding of the relationship between heat dissipation and atomic-scale thermoelectric phenomena, which is the conversion of heat into electricity." Source: Univ. of Michigan
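The Landauer picture mentioned above can be summarized compactly. The equations below are a general-textbook sketch of that framework, not the team's actual analysis, and the proportionality constant in the asymmetry term depends on the details of the junction's transmission function. For a single-channel junction with transmission $\tau$ biased at voltage $V$, the total Joule power is

```latex
P_{\text{total}} = G V^{2}, \qquad G = \frac{2e^{2}}{h}\,\tau
```

If $\tau$ is independent of energy, this heat divides equally between the two electrodes, $Q_L = Q_R = P_{\text{total}}/2$. An energy-dependent transmission skews the split, and the leading asymmetry scales with the junction's thermopower $S$:

```latex
Q_L - Q_R \propto S\,V
```

which is why, in an atomic-scale device, the heating can appear predominantly in one electrode, as the experiments described here observed.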
Aden Ridge

[Image: Aden-Sheba Ridge]

The Aden Ridge is a part of an active oblique rift system located in the Gulf of Aden, between Somalia and the Arabian Peninsula to the north. The rift system marks the divergent boundary between the Somalian and Arabian tectonic plates, extending from the Owen Transform Fault in the Arabian Sea to the Afar Triple Junction (Afar Plume) beneath the Gulf of Tadjoura in Djibouti.[1] The Gulf of Aden is divided east to west into three distinct regions by large-scale discontinuities: the Socotra, Alula Fartak, and Shukra-El Shiek transform faults.[2] Located in the central region, bounded by the Alula Fartak fault and the Shukra-El Shiek fault, is the Aden spreading ridge. The Aden Ridge connects to the Sheba Ridge in the eastern region and to the Tadjoura Ridge in the western region.[2] Due to the oblique nature of the Aden Ridge, it is highly segmented. Along the ridge there are seven transform faults that offset it to the north.

Initiation of rifting

Extension of the Gulf of Aden rift system began in the late Eocene - early Oligocene (~35 Ma ago), caused by the northeast escape of the Arabian plate from the African plate at a rate of ~2 cm/yr, and the development of the Afar plume.[1] Extension eventually gave way to seafloor spreading, first initiated near the Owen transform fault ~18 Ma ago.[1] Seafloor spreading then propagated as far west as the Shukra-El Shiek fault at a rate of ~14 cm/yr. About 6 Ma ago, rifting propagated west of the Shukra-El Shiek fault until terminating at the Afar plume.[2] The Afar plume is believed to have contributed to the initiation of the Aden Ridge, due to the flow of hot mantle material being channeled along the thin lithosphere beneath the Gulf of Aden.[3] Currently, the Aden Ridge is undergoing extension at a rate of ~15 mm/yr.[4]

Segmentation of the Aden Ridge

Compared to its neighboring ridges, the Aden Ridge is much more segmented.
The Aden Ridge is broken up by seven transform faults, with ridge segments of 10–40 km. In contrast, the Sheba Ridge is broken by only three transform faults, and the Tadjoura Ridge continues essentially uninterrupted to the Afar Plume.[4] Sauter et al. (2001) proposed that variation in the spacing of spreading cells along ridges is a result of spreading rate; i.e., larger spacing results from slower spreading rates. However, the variation in spreading rates across the Gulf of Aden, 18 mm/yr in the east and 13 mm/yr in the west, is not great enough to explain the significant variation in spreading cell length between the Aden Ridge and its neighboring ridges.[4] One likely cause for the segmentation of the Aden Ridge is its distance from the Afar plume. The westernmost region of the Gulf, where the Tadjoura Ridge is located, has an anomalously high mantle temperature due to its proximity to the Afar plume. The result is a higher degree of melting and magmatism below the ridge, which allows for longer spreading segments without transform faults.[4] The difference in segmentation between the Aden and Sheba ridges can be explained by varying degrees of obliquity. The ocean-continent transition (OCT) of the Sheba Ridge formed parallel to the syn-rift structure, whereas the OCT of the Aden Ridge formed oblique to the syn-rift structure. The former scenario is more accommodating to oblique spreading and does not require as many transform faults for stability.[4]

References
1. ^ a b c Leroy, S.; Lucazeau; D'Acremont; Watremez; Autin; Rouzo; Khanbari (2010). "Contrasted styles of rifting in the eastern Gulf of Aden: A combined wide-angle, multichannel seismic, and heat flow survey". Geochemistry, Geophysics, Geosystems 11. Bibcode:2010GGG....11.7004L. doi:10.1029/2009gc002963.
2. ^ a b c Manighetti; Tapponnier; Courtillot; Gruszow; Gillot (1997). "Propagation of rifting along the Arabia-Somalia plate boundary: The Gulfs of Aden and Tadjoura".
Journal of Geophysical Research: Solid Earth 102 (B2): 2681–2710. Bibcode:1997JGR...102.2681M. doi:10.1029/96jb01185.
3. ^ Chang; Van der Lee (2011). "Mantle plumes and associated flow beneath Arabia and East Africa". Earth and Planetary Science Letters 302: 448–454. Bibcode:2011E&PSL.302..448C. doi:10.1016/j.epsl.2010.12.050.
4. ^ a b c d e Bellahsen; Husson; Autin; Leroy; d'Acremont (2013). "The effect of thermal weakening and buoyancy forces on rift localization: Field evidences from the Gulf of Aden oblique rifting". Tectonophysics 607: 80–97. doi:10.1016/j.tecto.2013.05.042.
Coordinates: 14°N 52°E
Battle of the Masts

Part of the Arab–Byzantine Wars. Date: 655. Location: off the Lycian coast at Phoenice, Mediterranean Sea. Result: decisive Rashidun Caliphate victory. Belligerents: Rashidun Caliphate vs. Byzantine Empire. Commanders: Abu'l-Awar (Rashidun); Constans II (Byzantine). Casualties: heavy on both sides.

The Battle of the Masts (Arabic: معركة ذات الصواري, romanized: Dhat al-Sawari) or Battle of Phoenix was a crucial naval battle fought in 655 between the Muslim Arabs, led by Abdullah bin Sa'ad bin Abi'l Sarh, and the Byzantine fleet under the personal command of Emperor Constans II. The battle is considered to be "the first decisive conflict of Islam on the deep."[1]

In the 650s, the Arab Caliphate finished off the Sassanid Empire and continued its successful expansion into the Eastern Roman Empire's territories. In 645, Abdallah ibn Sa'd was made Governor of Egypt by his foster brother, the Rashidun Caliph Uthman, replacing the semi-independent Amr ibn al-Aas. Uthman permitted Muawiyah to raid the island of Cyprus in 649, and the success of that campaign set the stage for the undertaking of naval activities by the government of Egypt. Abdallah ibn Sa'd built a strong navy and proved to be a skilled naval commander. Under him the Muslim navy won a number of naval victories, including repulsing a Byzantine counter-attack on Alexandria in 646.[2] In 655, Muawiyah undertook an expedition into Cappadocia while his fleet, under the command of Abu'l-Awar, advanced along the southern coast of Anatolia. It seems that Emperor Constans considered the naval part of the invasion the more dangerous, for he embarked against it with a large fleet.

The battle

The two forces met off the coast of Mount Phoenix in Lycia,[3] near the harbour of Phoenix (modern Finike).
According to the 9th-century chronicler Theophanes the Confessor, on the night before the battle the Emperor dreamed that he was in Thessalonica; awaking, he related the dream to an interpreter of dreams, who said: "Emperor, would that you had not slept nor seen that dream, for your presence in Thessalonica"; that is, according to the interpreter, victory inclined to the Emperor's foes.[4][5] As the ships came into battle range, Constans raised the Cross and had his men sing psalms. The Arabs responded by raising the Crescent and trying to drown out the psalms by chanting passages from the Koran. Both the Cross and the Crescent remained mounted on the masts throughout the battle, giving the naval conflict its name.[6] The Arabs were victorious in battle, although losses were heavy for both sides, and Constans barely escaped to Constantinople.[7] According to Theophanes, he managed to make his escape by exchanging uniforms with one of his officers.[4]

Although the Arab fleet retreated after its victory,[7] the Battle of the Masts was a significant milestone in the history of the Mediterranean, Islam and the Byzantine Empire, as it established the superiority of the Muslims at sea as well as on land. For the next four centuries, the Mediterranean would be a battleground between Byzantines and Muslims. In the aftermath of this disaster, however, the Byzantines were granted a respite due to the outbreak of a civil war among the Muslims.

References
1. ^ Ridpath, John Clark. Ridpath's Universal History, Merrill & Baker, Vol. 12, New York, p. 483.
2. ^ Carl F. Petry (ed.), The Cambridge History of Egypt, Volume One: Islamic Egypt 640-1517, Cambridge University Press, 1998, p. 67. ISBN 0-521-47137-0.
3. ^ Probably Mount Olympos south of Antalya; see "Olympus Phoinikous Mons" in Barrington Atlas of the Greek and Roman World, map 65, D4.
4. ^ a b Theophanes the Confessor, Chronographia, in J.P. Migne, Patrologia Graeca, vol. 108, col. 705.
5.
^ Thessalonike can be read as «θὲς ἄλλῳ νίκην», i.e., "give victory to another". See Bury, John Bagnell (1889), A History of the Later Roman Empire from Arcadius to Irene, Adamant Media Corporation, 2005, p. 290. ISBN 1-4021-8368-2.
6. ^ "Part Great Sea Battles Have Played in History", New York Daily Tribune, 4 June 1905.
7. ^ a b Warren Treadgold, A History of the Byzantine State and Society, Stanford University Press, 1997, p. 314. ISBN 0-8047-2630-2.
Coordinates: 36°16′55″N 30°15′39″E
Study of gas sensor with carbon nanotube film on the substrate of porous silicon

Authors (7): Yong Zhang (Dept. of Automation, Xi'an Jiaotong Univ., China); Junhua Liu; Xin Li; Juying Dou; et al.

A new method of obtaining extremely high electric fields in a very small region near the tip of a carbon nanotube, with a diameter of nanometer order, is studied. This method reduces the self-sustaining dark discharge voltage to less than 220 V, which is in the safe range. The carbon nanotube array film is used as a cathode to form a new kind of gas sensor based on gas discharge. The discharge current at room temperature and atmospheric pressure increases from the order of nanoamperes to that of microamperes. The electrical characteristics of several gases at atmospheric pressure are studied; the self-sustaining dark discharge voltages of different gases vary, as do the discharge currents. Porous silicon is selected as the substrate for the carbon nanotubes. This can improve the adhesion of the nanotubes to the substrate and so prolong the lifetime of the cathode.

Published in: Vacuum Microelectronics Conference, 2001. IVMC 2001. Proceedings of the 14th International
GMO debate
2013-10-27, The Sunday Mail

Excerpt from the Sunday Mail article: According to research by a United Nations University Institute of Advanced Studies academic, Dr Ademola Adenle, there are increased crop yields, higher farm income, and health and environmental benefits associated with modified crops. The research shows that in 1996, when genetically modified (GM) crops were first officially commercialised, six countries produced GM crops on 1.7 million hectares, but by 2010, 29 countries were producing them on 148 million hectares. "At least 19 of the 29 countries were in the developing world. This 87-fold growth makes GM the fastest-growing crop technology adopted in modern agriculture," said Dr Adenle.
Thanksgiving: A Celebration of Genocide.
Nov 22, 2010

Thanksgiving is the celebration of a dual genocide: one against native people, and one against turkeys. The first Thanksgiving Day celebration in 1637 was proclaimed by the Governor of the Massachusetts Bay Colony, not as a festival of the Pilgrims and Indians sharing a meal to celebrate the cooperation between the two communities, but as a celebration of the massacre of 700 Pequot men, women and children. The Pequot were celebrating their annual green corn dance when white mercenaries ordered the Indians out of the building in which they were celebrating. As the Pequot exited the building, they were shot to death. The remaining Pequots were burned alive. If anything, the first Thanksgiving was the kickoff to the systematic obliteration of a race of people that continues to this day, and is evident in disproportionate poverty, poor health, and unemployment levels. Native Americans living on reservations have the highest rates of poverty, unemployment, and disease of any ethnic group in America. This does not sound like anything to celebrate or be thankful for, but in some great cultural conspiracy, we’ve been manipulated to believe Thanksgiving is a day to spend with loved ones in a state of patriotic fervor while devouring shameful amounts of calories. Neither is there anything to celebrate in the murder of 45 million turkeys for one day (an additional 22 million turkeys are slaughtered for Christmas). Over 280 million turkeys are slaughtered every year in the US for food. The life of a turkey is filled with suffering that you and I cannot fully imagine. The majority of turkeys for your Thanksgiving table are raised in warehouses where they will never see the light of day, and are allotted three square feet of space in which to live their abbreviated lives.
To prevent them from injuring each other in such cramped quarters, most turkeys have one-third of their beaks seared off with a red-hot blade, and their toes cut off, all without painkillers and all within the first few days of their lives. It is also typical to cut off their snoods, the fleshy appendage above the beak, with scissors. These naturally sweet and social animals are unable to engage in normal turkey behaviors such as perching, dustbathing and sunbathing. Mothers and babies begin to bond while the baby is still inside the egg, as do chickens. While inside the egg, little turkeys are already able to vocalize and “talk” to their mothers. Young birds are completely dependent on their mothers, and their absence renders commercially bred turkeys helpless. Sometimes they cannot find food or water because no one ever showed them how. Turkeys often form strong bonds with each other as well, and sometimes with other animals. They do not have the ability to do so in crowded warehouses. Since they have been genetically manipulated to develop extremely large breasts in a short period of time, they grow so awkwardly large that they are unable to hold up their own weight. Because of this they are no longer able to breed naturally, so females must be inseminated through rape. This also causes serious health problems like heart attacks and organ failure; however, most turkeys never live long enough to experience these issues since they are slaughtered at around five months old. (Yes, you are eating a baby bird.) If you feel smug about only eating free-range turkey, there is very little difference in how the turkeys are treated on these farms. The term “free range” only means turkeys are allowed outside for 51 percent of their lives. They don’t live idyllic lives on a farm, as the term implies. Free-range turkeys are also genetically manipulated to grow unusually large and still undergo the same tortures as factory-farmed turkeys, such as debeaking, detoeing and desnooding.
Whether a turkey comes from a free-range farm or a factory farm, they all undergo horrific conditions on the way to slaughter. Turkeys are gathered up, carried upside down by their legs and thrown into crates on transportation trucks. Most trucks offer no protection from the weather, whether extreme heat or cold, and the birds are usually denied food and water in transit. All turkeys end up at the same slaughterhouses, where they are hung upside down by their legs and dipped in electrical baths to “stun” them. If they are not stunned, their throats are slit while they are still conscious. On many occasions they miss both the stunning bath and the blade, and end up plunged alive into scalding tanks to remove their feathers. It is all perfectly legal, since birds are exempt from the ironically named “Humane Slaughter” Act – ironic in the sense that of course there is no such thing as humane slaughter of humans or non-human animals. I have a difficult time experiencing gratitude on a holiday that celebrates and worships the dual genocide of a race of people and a species of animal. If you wish to truly express and share gratitude on Thanksgiving, how about choosing to eat a vegan meal? There are a myriad of satisfying and cruelty-free seasonal vegan recipes on the internet. How about taking some time to meditate on the genocide of the Native Americans? Or kick off your holiday meal with a reading from A People’s History of the United States by the late, great Howard Zinn. Instead of being forced to spend the holiday with shirttail relatives, feeling faux gratitude, take back Thanksgiving and celebrate the fact that your Thanksgiving has been respectful of people, animals, and the planet. I know that I’ll be grateful.

33 Responses to “Thanksgiving: A Celebration of Genocide.”

1.
Jeff Meyer (jcmeyer10) says: Gary, I really appreciate the passion in this post. Eating turkey is a choice millions of people make each year, and if some of those people chose an alternative, we could truly alleviate a great amount of suffering.
2. Hilaryk says: Thank you for your words. They pretty much sum up exactly how I feel about "Thanksgiving". I've never liked this "holiday" for the two reasons you highlight.
3. larry says: brilliant writing…American history has been whitewashed & dumbed down to a point that I doubt actual historians know the truth anymore…but alas, it is the American way…thank you, Mr. Smith, for a breath of fresh air…
4. Doctor FranklinStein says: All the members of my family get a day off. We are thankful that we are alive, reasonably healthy, getting through school, work, and living with beautiful, rescued fur critters for which we are VERY THANKFUL. We are thankful to all have a day together!
5. Fawn says: I frick fracking LOVE you Gary! What an eye opener! Great article.
6. AMO says: If you don't feel the gratitude, don't "celebrate". For myself, I don't connect the holiday in any way with Native Americans and a small group of religious zealots who almost starved to death a couple of hundred years ago. I associate it with family and friends, and gratitude. I don't in any way celebrate the takeover of North America by Europeans, though there is NOTHING I can do about that as it's a done deal. We're here, their lifestyle is gone never to return to this continent, and that is sad and horrible, and, for me, it has nothing to do with this day of sharing food and gratitude with my loved ones. Any more than Christmas, which I also celebrate, has anything to do with a magic man in the sky impregnating a 12 year old secretly through an angel and the offspring of that union dying so I can do crappy things like voting for George Bush but be forgiven as long as I'm sorry and I accept his story.
These are days of deep meaning to many people, and for most people that meaning has little to do with anything that happened long ago. Societies thrive on ritual, and food and gifts are ways of reminding us of what's important in life. As for the birds: "genocide" is the deliberate killing of a large group of HUMAN BEINGS, especially because of their race or religion, and is generally done to cleanse the area or the Earth of them (historical note: the Holocaust), so killing turkeys which were raised to be eaten is NOT genocide, and I am sure the victims of actual genocide would be disturbed, were they still here, at your glib comparison. I'm disturbed by it. I eat meat, by choice, and will be preparing two turkeys for a large group of friends tomorrow; a handful of vegimites will be attending, and most of my friends don't eat meat every day and don't eat much of anything, we're a healthy but thin bunch. A good people are we, not Nazis. Use words with care. They have meaning…
7. Torry Smith says: This holiday depressed me so much to the point that I just tried to sleep through it and miss it altogether. Finally my husband & I decided to celebrate our OWN holiday (on Friday) with a lovely vegan feast & our own traditions. Now we have something to look forward to that does not involve false history and the cruel lives & deaths of millions of innocent animals. Still get depressed every time I hear the phrase “Turkey Day”, though. Whenever I read it I comment that it’s not a happy day for the turkeys.
8. Alice2112 says: thanks for this thoughtful post Erica! I enjoy learning about Native American history (as much as it saddens me) and know there is a lot of controversy about many "historical accounts". I am attending a community Vegetarian Thanksgiving – a COMPASSIONATE Thanksgiving, if you will. I encourage everyone to join in a community "Turkey-Free"/vegan event where you can be joyful and thankful for our efforts to make the world a better place.
9. Alice2112 says: WOW.
Gary Smith continues to post thought-provoking articles on Elephant Journal. Always things that push us to broaden our world views and circles of compassion, as well as test our spiritual practices and our definitions of "mindful".
10. Lasara Allen (LasaraAllen) says: I appreciate your words, and your courage in speaking them here. When I read Gary's article last night, I was at a loss for words. You voiced them, and I thank you. In case you didn't see MY take on the holiday, read it here: Also, on my personal website, about ceremonies and light in the darkness:
11. Lasara Allen (LasaraAllen) says: Thank you for your view. I'm glad to see so many looking at the NEW(er) meaning of the holiday.
12. jcrows says: It could be viewed as a ganapuja. Eating with awareness to liberate and make a good cause. Many critters are killed when the earth is plowed and disturbed for any cultivation.
13. Lisa says: Just getting around to this… thank you very much Gary…will post now. X
14. […] are a few pieces from the on-line article Thanksgiving: A Celebration of Genocide. I highly recommend reading the complete article. “If anything the first Thanksgiving was the […]
15. Gary Smith says: Erica – There were many different stories to choose from in writing this article. I agree with the tone of your argument that we need to celebrate this holiday in another manner. The problem I have is that we have not accepted the fact that we committed genocide against the native people. It's like telling the African slaves, Chinese slaves, et al. that we need to move on before we have asked for forgiveness. Most Americans still believe the story of the Pilgrims sharing their food with the native people. Most American holidays are just excuses for nationalism. So, it would be great for people to know the history and deal with it in a respectful and healing manner. My concern with most of the comments is that it happened a long time ago, so we just need to move on and eat lots of dead turkeys.
16. Nik says: Thanks for the great reply! I was appalled at reading this article and was having difficulty articulating my feelings. Happy Thanksgiving to you and your family.
17. MrKappa says: It is difficult to draw the line. I mean, we could go so far as to say that anything that grows, or interacts biologically with the environment, should not be eaten, but that doesn't make a lot of sense for the ecosystem. As for the humane treatment of animals: I can't see anything incredibly wrong with the quality of life at the turkey farm in the first video, but I can definitely see something wrong with the inhumane treatment of the animals. I would go so far as to say that, given the choice between a store-bought turkey which was raised on a free range, versus a turkey which was backed by a reputable employer who enforces humane standards, I would probably go with humane standards over paying extra for turkey condos.
18. […] Please read this article: Thanksgiving […]
19. […] 大量殺戮の祝賀 ("A Celebration of Mass Slaughter") December 5th, 2010 By Gary Smith November 22 […]
20. […] Posted in Essays and Articles | « For the impending "holiday" Great News » Responses are currently closed, but you can trackback from your own site. […]
21. […] example, when we witness an everyday act of human brutality such as animal cruelty we should not brush it off (“hey, we are at the top of the food chain for a reason, right?”), […]
22. […] Thanksgiving: a celebration of genocide. More Thanksgiving coverage here. […]
23. […] Please read this article: Thanksgiving […]
24. Ken says: I agree that factory farming is horrific, but there are some humanely raised turkeys, you just have to look. I'll happily pay more for a turkey that is raised humanely. Contrary to much vegan preaching, not everyone can be healthy on a vegan diet. If you can, that's wonderful, more power to you. But read Lierre Keith's book to see how it can really damage your health. It's impossible for humans to live without killing something.
Look at all the natural habitat of wildlife that has to be destroyed to make room for you to grow your vegan food. It is offensive that you equate the slaughter of Native Americans, stealing their land, and robbing them of their culture, which is a true genocide, with eating a turkey. There is simply no comparison. To do so minimizes the magnitude of what whites did when they invaded America. I don't see anything here calling for some way to at least attempt to make amends for what was done to Native Americans. You're concerned about the turkeys, which is fine, but please don't bring real genocide into it.
25. Ken says: Totally agree with you. It's offensive to equate eating a turkey with what was done to Native Americans.
Education Quandary: At a parent-teacher evening our son's form teacher kept saying he used 'metacognition' in teaching. What was he talking about?

Hilary's advice

I don't think this is a query about vocabulary. I think it's a yelp of embarrassment and annoyance at being made to feel small and stupid at a parent-teacher conference. But, believe me, it's this teacher who is small and stupid. Any teacher who cannot talk in jargon-free prose to parents is not up to one of the fundamentals of the job, and is probably poor at communicating with pupils as well. In addition, any teacher who clutches a single idea about teaching and parades it as if it were the answer to all things pedagogical is an idiot. Good teachers always keep up to speed with new ideas about teaching and learning, incorporate the best ones into their practice, and monitor them to see how well they are working. But they also understand that teaching is as much an art as a science, and that they will always need a whole toolbox of tricks to help every child in their classroom learn and grow. For the record, metacognition means knowing the answer to things, and knowing how you know the answer. Pupils are helped to be aware of their thinking processes, and of how they research, evaluate, analyse, select, synthesise and conclude. It's a useful approach to learning that builds confidence, and one that really helps pupils understand things properly, but it isn't the secret to the universe and this teacher is absurd to talk as if it is.

Readers' advice

Metacognition generally means that the teacher keeps the cognitive processes involved in learning at the forefront and designs lessons that will help students to be aware of how they learn best. Metacognition involves "thinking about thinking". For example, teachers show students how to question themselves as they read a text. (Why did this event happen? What might the character be thinking?)
This is using metacognition to help to understand the text. In easier terms, it just means that the teacher will be using higher-order thinking skills and activities to help the students learn and achieve best. Shannon Dipple, Dayton, Ohio, US

Your son's teacher was talking about an important idea in modern teaching, but he should have explained it properly and not just used the professional term. Metacognition means helping pupils to think about how they are thinking, and about all the different techniques they can use to acquire and apply knowledge. Pupils who are taught in this way are much better equipped as learners than those who are simply taught traditional stuff. Mike Winterbourne, Guildford

My last head always talked about "scaffolding" learners. It was her favourite term. We actually started calling her Mrs Scaffold behind her back. She had got hold of this term like a dog with a bone, and used it to everyone, all the time. I can't imagine what parents made of it. Some teachers hide their inadequacies behind important-sounding jargon, and I suspect that this is what has happened here. Ginny Clayton, Essex

Next Week's Quandary

We have objected to our daughter’s school showing Disney films during 'wet play'. The head has suggested setting up two rooms for the children, one showing films, the other with play activities. We think this is unacceptable and ridiculous. Children will always choose films if they are offered. What can we do?

Send your replies, or any quandaries you would like to have addressed, to h.wilce@btinternet.com. Please include your postal address. Readers whose replies are printed will receive a Collins Paperback English Dictionary, 5th Edition. Previous quandaries are online and can be searched by topic.
A Comparison of Scaling Techniques for BGP
Published on ACM Computer Communications Review, July 1999, Volume 29, Number 3: Pages 44-46.

Rohit Dube
High Speed Networks Research, Room 4C-508, Bell Labs, Lucent Technologies
101 Crawfords Corner Road, Holmdel, NJ 07733
Email: [email protected]

Abstract

BGP is the inter-domain routing protocol used in the Internet today. During the course of its evolution, the Internet has gone from being a simple and small network to one that is run at its core by large service providers constantly battling with bigger and bigger topologies, forcing the routing community to invent ways of scaling both interior and exterior routing protocols. Route-reflectors and confederations have turned out to be the weapons of choice in scaling BGP to these large topologies. This paper takes a close look at these two mechanisms and seeks to compare them.

1 Introduction

The Border Gateway Protocol (BGP) [1], [2], [3] is the pervasive inter-domain routing protocol in the Internet today. Before the recent explosive growth of the service providers' topologies, BGP was typically used in a configuration where all the border routers imported routes from external Autonomous Systems (ASes) and then distributed them to all the routers within their own AS. This distribution was accomplished using a full-mesh of Internal-BGP (IBGP) peerings amongst all the routers in the AS. Once this flat topology hit the scaling limit (both administrative and the cpu/memory ceiling), mechanisms were devised to reduce the number of peering sessions per router. There are three such mechanisms deployed in the Internet today - route-reflectors [4], confederations [5] and route-servers [6].
Of these three, route-reflectors and confederations are the dominant mechanisms, having been implemented by multiple vendors and deployed by the biggest Internet and Network Service Providers (ISPs and NSPs). In this paper we analyze and compare route-reflectors and confederations. We start by describing these mechanisms, followed by a detailed comparison. We conclude with a summary of our observations and pointers for future work.

2 IBGP, Route-reflectors and Confederations

Consider the scaled down BGP topology depicted in figure 1. Routers r1 through r6 form an ISP backbone. In order to provide consistent loop-free routing, each of these routers maintains IBGP peering sessions with all the others. When one of these routers learns a prefix, say from an External-BGP (EBGP) peer, it runs the BGP decision algorithm and installs the best route to the prefix into its routing table. If this best route is in turn not learnt from an IBGP peer, the prefix is propagated to all the IBGP peers. In the stable state this provides all the BGP routers in the network with all possible routes to a prefix. (Further details on this can be found elsewhere in the literature [1], [2], [3].)

[Figure 1: Full-mesh IBGP]

Given that each BGP router has to peer with all the BGP routers within the AS, it is easy to see that as the number of BGP routers in the AS grows, the number of peering sessions each router needs to maintain increases to (n - 1), with a total of n(n - 1)/2 IBGP peerings in the network, where n is the number of BGP routers. Maintaining these peering sessions quickly gets out of hand with increasing n, both for the network administrators and the router hardware.

2.1 Route-reflection

Route-reflectors tackle this scaling problem by dividing the IBGP topology into clusters. A cluster consists of one or more BGP routers acting as server(s) and the remaining as client(s).
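The session arithmetic above can be checked with a short sketch. This is illustrative code, not from the paper; the topology sizes and the simplified single-cluster route-reflector model (all clients peering with all servers) are assumptions:

```python
# Illustrative sketch (not from the paper): counting BGP peering
# sessions in a full-mesh IBGP topology versus a simplified
# single-level route-reflector topology.  Router counts are hypothetical.

def full_mesh_sessions(n: int) -> int:
    """Full-mesh IBGP: every router peers with every other router,
    giving n*(n-1)/2 sessions in total."""
    return n * (n - 1) // 2

def route_reflector_sessions(servers: int, clients: int) -> int:
    """Simplified route reflection: the servers form a full mesh
    among themselves, and every client peers with every server."""
    server_mesh = servers * (servers - 1) // 2
    client_links = servers * clients
    return server_mesh + client_links

if __name__ == "__main__":
    n = 100  # hypothetical AS with 100 BGP routers
    print(full_mesh_sessions(n))            # 4950 sessions in a full mesh
    # the same 100 routers as 4 reflector servers plus 96 clients
    print(route_reflector_sessions(4, 96))  # 6 + 384 = 390 sessions
```

In a real deployment each client would typically peer only with the pair of servers in its own POP, so the saving is even larger than this simplified model suggests.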
The servers are fully meshed with each other and also have peering sessions with all the clients. The clients may or may not peer with each other. Further, clients in a cluster can act as servers for sub-clusters, provided a strict ancestor-descendant relationship is maintained between the cluster and its sub-clusters and that the servers of all the sub-clusters of a cluster are fully-meshed in a peer-peer relationship. The sub-clusters can have their own sub-sub-clusters and so on. Note that the servers of the top-level clusters of the hierarchy form a full-mesh amongst themselves (i.e. they are in a peer-peer relationship with each other).

The client-server relationship described above is used to break the "don't propagate IBGP routes" rule on the route-reflector servers. The server is allowed to reflect routes from a non-client (i.e. an IBGP router in a peer-peer relationship) to all its clients, and from a client to all the other clients as well as non-clients. It is helpful to think of the server as a proxy agent which disseminates routes between its servers and peers on one side and clients on the other (in both directions).

[Figure 2: Hub and Spoke topology]

ISPs typically deploy route-reflectors in a two-level hierarchy similar to the hub-and-spokes network in figure 2. The hub consists of all the route-reflector servers arranged in a full-mesh. These servers are physically located in a point-of-presence (POP) facility, typically in pairs for redundancy. Each of these facilities also contains the client routers, as shown in figure 3, which represents a scaled down version of a large ISP's POP.

[Figure 3: Route-reflector based POP]

2.2 Confederations

Confederations tackle the same scaling problem by dividing an AS into sub-ASes.
Each of these sub-ASes is fully meshed inside with regular IBGP sessions and is a flat BGP network. At the boundary between two sub-ASes, a modified form of EBGP is used (called confederation-EBGP) which adds the local sub-AS to the AS Path for loop detection within the confederation (the AS Path is a record of the ASes a prefix has traversed and is ordinarily used by EBGP to detect looping updates). To the outside world, this confederation of sub-ASes looks like a single regular BGP network. At the boundary between a confederation of sub-ASes and a regular AS, the peering is a standard EBGP session, except that the router in the confederation does some extra work in order to hide its internal structure from the outside world. Each sub-AS in the confederation can be further subdivided to form a confederation of sub-sub-ASes (and so on). The sub-ASes therefore form either an ancestor-descendant relationship (when an AS contains a confederation of sub-ASes) or a peer-peer relationship (when two sub-ASes belong to the same parent confederation).

With respect to deployment, confederations are typically made to fit the hub-and-spoke topology of figure 2. A central sub-AS forms the hub and spans the geography of the ISP's network. Metropolitan or larger areas typically form the spoke sub-ASes. Hierarchy within the hub or the spokes is typically not used.

3 Similarities and Differences

As may be evident by now, route-reflection and confederations solve the IBGP scaling problem in ways which are very similar on some counts but dissimilar on others. In this section we analyze the two approaches with respect to their underlying philosophies, deployment scenarios, problems unique to these approaches, and scalability.

3.1 Underlying Philosophy

Route-reflection primarily works by changing the behavior of IBGP sessions.
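The loop detection described for confederation-EBGP in section 2.2 amounts to a membership test on the confederation portion of the AS Path. The sketch below uses hypothetical helper names of our own; real BGP carries this information in AS_CONFED_SEQUENCE segments of the AS Path attribute:

```python
def accept_update(local_sub_as, confed_path):
    """Drop an update whose confederation path already contains our sub-AS."""
    return local_sub_as not in confed_path

def advertise(local_sub_as, confed_path):
    """Prepend our sub-AS before sending across a confederation-EBGP session."""
    return [local_sub_as] + confed_path

path = advertise(65002, [])        # sub-AS 65002 originates: [65002]
path = advertise(65003, path)      # relayed by 65003: [65003, 65002]
print(accept_update(65002, path))  # False -> looped back to 65002, rejected
print(accept_update(65004, path))  # True  -> safe for 65004 to process
```

At the confederation's outer boundary, the sub-AS entries are stripped and replaced by the single public AS number, which is how the internal structure stays hidden.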
The main idea is that of selectively propagating updates over IBGP sessions from the routers designated to be route-reflector servers. On the other hand, confederations work by breaking up an AS into smaller, more manageable sub-ASes, in the process changing the behavior of EBGP sessions.

3.2 Deployment

In the field, route-reflectors have proven to be more popular than confederations. This is probably because deploying route-reflectors requires a software upgrade only on the routers which are to be designated as servers. The clients can be oblivious of the fact that some of the updates they receive are reflected. Confederations, on the other hand, require all routers to be able to process the segment type extensions to the AS Path attribute. This forces a topology moving from a full-mesh IBGP network to confederations to perform a fork-lift software upgrade of all the routers. For most existing networks, this is likely too high a barrier to entry. (For details on attribute extensions related to route-reflectors and confederations see [4] and [5]. In the interest of brevity and clarity, we have deliberately culled the details from this manuscript.)

Interestingly, both route-reflectors and confederations are typically deployed in the hub-and-spoke topology discussed earlier in figure 2. In both cases, hierarchy is not used within the hub or the spokes. The only difference is that while for route-reflectors the boundary of the hub and the spokes is made up of routers (i.e., route-reflector servers), with confederations the boundary is actually a confederation-EBGP session between routers in different sub-ASes.

3.3 Unique Problems

[Figure 4: Persistent Loop. Solid lines denote physical connections; dashed lines denote BGP sessions.]
Because IBGP was originally designed around the "never propagate routes learnt from another IBGP peer" idea, and because route-reflection breaks this rule, route-reflection based topologies can encounter persistent loops and other problems as described in [7]. We briefly repeat a simple example demonstrating this. Consider the network in figure 4. RR1 and RR2 are route-reflector servers with R4 and R3 as clients respectively (i.e., RR1 and R4 form a cluster and RR2 and R3 form another). E1 and E2 are in a separate AS and peer with RR1 and RR2 respectively. If E1 and E2 advertize the same prefix, RR1 readvertizes the prefix to R4, which now has a path to the prefix going through R3 to RR1 to E1. Similarly R3 gets a path to the prefix through R4 to RR2 to E2. R3 and R4 end up pointing to each other for the prefix in question, creating a persistent loop.

[Figure 5: Sub-optimal Routing]

Use of confederations on the other hand can lead to sub-optimal routing within an AS. Consider the topology in figure 5. R1, R2 and R3 are routers in the same confederation and each of them belongs to a separate sub-AS. Router E is in an AS of its own and has a regular EBGP session with R1. E advertizes the network N to R1, which readvertizes it to R2 and R3. R2 and R3 also readvertize the route to N to each other. So R3 has a route to N from both R2 and R1. All other things being equal, R3 may choose the longer route through R2 to reach N. This is because while tie-breaking between routes to the same prefix, most BGP implementations do not take into account the length of the sub-AS path (some vendors solve this problem by providing a knob to take the sub-AS path length into account).

3.4 Scalability

Currently, large ISP networks run between 300 and 500 routers using one of these two approaches to reduce peering requirements.
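Returning to the persistent-loop example of figure 4 for a moment, the failure can be checked mechanically by following next-hop pointers toward the exit. The helper below is a hypothetical sketch, with the next hops as derived in the text:

```python
def trace(next_hop, start, dest, max_hops=16):
    """Follow next-hop pointers toward dest; return the path, or None on a loop."""
    path, node, seen = [start], start, {start}
    while node != dest and len(path) <= max_hops:
        node = next_hop[node]
        if node in seen:
            return None  # revisited a router: persistent forwarding loop
        seen.add(node)
        path.append(node)
    return path

# R4's best exit is via E1 (next hop R3); R3's best exit is via E2 (next hop R4).
print(trace({"R3": "R4", "R4": "R3"}, "R3", "E2"))    # None: traffic bounces forever
print(trace({"R3": "RR1", "RR1": "E1"}, "R3", "E1"))  # ['R3', 'RR1', 'E1']
```

With a full IBGP mesh, R3 and R4 would each have learnt both external routes directly and the inconsistent choice could not arise.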
In the following paragraphs the maximum number of peering sessions is calculated for a hub-and-spoke network of approximately 400 routers.

Assume that a route-reflector based network has 20 POPs, each with 2 servers and 18 clients, for a total of 400 routers. Each server therefore sees 18 + 1 + 19 x 2 = 57 IBGP peering sessions. The clients in each POP (assuming that they are fully meshed) see 19 IBGP sessions, one to each router in the POP.

Similarly assume that a confederation based network of 398 routers has 20 sub-ASes, one of which is the hub containing 18 routers, and the remaining 19 are spokes each containing 20 routers. Further assume that 2 routers from each spoke sub-AS peer with 2 routers of the central sub-AS. Each router on the spoke sub-AS boundary therefore sees 19 IBGP sessions and 2 confederation-EBGP sessions, for a total of 21 BGP sessions. Each router in the hub has 17 IBGP sessions and 4 confederation-EBGP sessions, for a total of 21 sessions. The routers not on the boundary of the spoke sub-ASes see only the 19 IBGP sessions.

Clearly, the number of BGP sessions that need to be maintained by the confederation based network is much smaller than in the route-reflector based network. The comparison is a little unfair, as the confederation based network has one spoke less than the route-reflector based network. Yet the general result holds. In both cases, the routers on the boundary of the spoke have the effect of condensing routes for the rest of the spoke. However, with confederations, the hub itself does a lot of condensing before passing on the routes to the spokes (accounting for the reduction in the number of peering sessions), and the total number of updates seen by both the spoke-boundary routers as well as the spoke-internal routers is much smaller with confederations than with route-reflectors.
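The session counts above can be reproduced with a little arithmetic; the functions below are our own sketch of the two assumed topologies, not from the original text:

```python
def rr_sessions(pops=20, servers_per_pop=2, clients_per_pop=18):
    """(server, client) peering counts for the route-reflector network."""
    remote_servers = (pops - 1) * servers_per_pop     # full mesh across the hub
    server = clients_per_pop + (servers_per_pop - 1) + remote_servers
    client = (servers_per_pop + clients_per_pop) - 1  # full mesh within the POP
    return server, client

def confed_sessions(hub_routers=18, spoke_routers=20):
    """(hub, spoke-boundary, spoke-internal) counts for the confederation network."""
    hub = (hub_routers - 1) + 4          # IBGP mesh + 4 confederation-EBGP
    boundary = (spoke_routers - 1) + 2   # IBGP mesh + 2 confederation-EBGP
    internal = spoke_routers - 1
    return hub, boundary, internal

print(rr_sessions())      # (57, 19)
print(confed_sessions())  # (21, 21, 19)
```

The route-reflector servers carry 57 sessions each, nearly three times the worst case of 21 in the confederation design.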
It should be noted that by imposing additional hierarchy, a topology with route-reflectors can be tailored to reduce the maximum number of BGP peerings on any router in the network.

4 Conclusion and Future Work

Both route-reflectors and confederations have proven themselves in the field and look very similar when deployed, but they have distinct advantages over each other. Route-reflectors are backward compatible and can therefore be deployed in a network incrementally without requiring a fork-lift upgrade. Confederations, on the other hand, reduce the number of BGP peering sessions much better (at least for the canonical hub-and-spoke topology).

Several questions remain unanswered in this article and present an opportunity for extensive simulation. For instance, how far in terms of the number of BGP sessions and the total number of BGP updates can the two-level hierarchy for route-reflectors and the similar hub-and-spoke for confederations scale? Or, how do the two techniques compare in terms of convergence time in the face of failures? In addition, the effect of these mechanisms on the stability of the network as a whole is not clear and should be looked at more closely. [8], [9] analyze the general problem of instability in the Internet, but they don't specifically identify the role of network architecture with respect to this instability. As the size of ISP networks increases, the importance of this particular problem will grow.

Acknowledgements

We would like to thank Vab Goel for describing the Sprint network, Joe Malcolm for describing the UUNET network, Jeff Young for describing the Cable and Wireless (formerly MCI) network, and Tony Przgyienda and the CCR reviewers for reviewing this paper.

References

[1] Y. Rekhter and T. Li. A Border Gateway Protocol 4 (BGP-4), March 1995. IETF RFC 1771.
[2] B. Halabi. Internet Routing Architectures. Cisco Press, 1997.
[3] J.W. Stewart III. BGP4: Inter-Domain Routing in the Internet.
Addison-Wesley, 1998.
[4] T. Bates and R. Chandra. BGP Route Reflection: An alternative to full mesh IBGP, June 1996. IETF RFC 1966.
[5] P. Traina. Autonomous System Confederations for BGP, June 1996. IETF RFC 1965.
[6] D. Haskin. A BGP/IDRP Route Server Alternative to a full mesh routing, October 1995. IETF RFC 1863.
[7] R. Dube and J.G. Scudder. Route Reflection Considered Harmful, November 1998. IETF Draft draft-dube-route-reflection-harmful-00.txt.
[8] C. Labovitz, G.R. Malan, and F. Jahanian. Internet Routing Instability. In SIGCOMM Conference. ACM, 1997.
[9] C. Labovitz, G.R. Malan, and F. Jahanian. Origins of Internet Routing Instability. In INFOCOM Conference. IEEE, 1999.
Consumption of two cups of hot chocolate every day for a month was linked with an 8% improvement in blood flow to the brain among older people with impaired blood vessel functioning who performed poorly in cognitive tests, a study found. Researchers noted that the seniors who drank cocoa regularly were roughly one minute faster on a cognitive test. No differences were seen between groups who drank high-flavanol and low-flavanol cocoas. The findings were published in the journal Neurology.
A Good Read: Finding a Niche for Notable Books Nancy P. Gallavan Maybe you, as a social studies teacher, are required to include the lives of women in your history lessons, even though your textbook is weak in this area. Such a requirement is made a little easier with a book like Eleanor, which tells of Eleanor Roosevelt’s growing up in a “cheerless household,” under the care of her imposing grandmother.1 A mandate to instruct sixth graders on “economic concepts” seems a little less daunting when one can peruse Neale S. Godfrey’s Ultimate Kids’ Money Book, with its clear and colorful examples, before drawing up a lesson plan.2 One does not have to comb the entire library to find such gems. Every year since 1972, a book review committee appointed by National Council for the Social Studies (NCSS), in cooperation with members of the Children’s Book Council (CBC), has evaluated and selected children’s literature related to the field of social studies. In creating “Notable Social Studies Trade Books for Young People,” the committee seeks books that emphasize human relations, represent a diversity of groups, express sensitivity to a broad range of cultural experiences, and present an original theme or a fresh slant on a traditional topic. The books must be easily read, be of high literary quality, and when appropriate, include illustrations that enrich the text. This annotated book list now appears annually in a special supplement to Social Education.3 In 1996, reviewers also began indicating specific NCSS themes for social studies that relate most closely with each book, making it even easier for teachers to locate books that might complement a given unit of study.4 Unfortunately, I have found that many teachers are not aware of this resource. They teach in school districts where social studies education is not emphasized, or their elementary or middle school librarians are unfamiliar with the list. 
Likewise, many new teachers have not learned about the ten NCSS thematic strands, as this resource was not used in college courses or at hand during teaching internships. And many elementary school teachers would still say that they “do not teach social studies,” even though social studies might be understood as the context for all other subjects that they do teach. Goals for Our Project with Teachers Integrating literature and social studies allows teachers to maximize interest and learning. Students and teachers can explore almost any social studies discipline or curriculum through an outstanding work of children’s literature. This was the rationale for a project at the University of Nevada in 1998 during which forty elementary and middle school teachers strove to achieve four goals: > To become acquainted with the NCSS Notable Social Studies Trade Books for Young People, particularly books published within the last five years; > To explore the ten NCSS thematic strands of social studies: their definitions, their related performance expectations, and their application in daily life; > To identify creative teaching strategies for using the notable books within social studies classes; and > To create a useful resource for teachers and preservice students. Discovering the Notable Books Teachers were provided with a brief overview of the notable books effort and were given the lists of notable books from the previous five years. Most of the books were made available for browsing and checking out. Many of the teachers pleasantly discovered that they owned copies of these books in their personal, classroom, or school libraries. Many of these books were also found on other award-winning book lists. Teachers read some books silently and then discussed the texts; they read others aloud; some books had sections that could be play acted. Teachers shared brief book reports (written individually or with a partner) to small groups and to the entire group. 
One teacher even played soft, tape-recorded music in the background to enrich the audience’s listening experience. Exploring the Ten Social Studies Thematic Strands Teachers were divided into ten groups and given the task of exploring one of the ten NCSS thematic strands. Each small group wrote a brief definition of one theme, described some exemplary performance expectations related to a specified theme, and made a list of symbols and signs from daily life that reflect the general theme. For example, elementary students can learn about Theme 2 Time, Continuity, and Change by hearing stories about the recent past, as well as of long ago. They can then demonstrate their knowledge by placing pictures of events in a correct sequence, or by drawing a picture that is historically coherent (in which the people, clothing, setting, and action are appropriate to a specific time). They can help make a list of things in modern life that represent time or the passage of time, such as clocks (of all kinds), the hour glass (of which some board games include a variation), calendars, and timelines—or things that are marked with a date, such as birth and death certificates, letters, diaries, maps, newspapers, photographs, and other documents. There are also more fanciful images and fictional concepts such as “Baby New Year,” “Father Time,” and time machines. Strategies for the Classroom Next, teachers dived into the professional literature (journals, textbooks, teachers’ guides, and other teaching resources), searching for descriptions of successful teaching strategies that could be used in conjunction with a good children’s book. The aim was to integrate children’s literature and the social studies themes into a coherent lesson (Table 1). Patterns began to emerge as teachers progressed in this work. They found it helpful to tell which grade they taught and to place books and themes in a logical sequence— a curriculum. Connections were made to other disciplines as well. 
Some of the teaching strategies that emerged from the project are listed here.5 > Make available many notable books for students to read and enjoy independently or with one another at a reading center or library corner; > Select a notable book and have students read and discuss it as part of a social studies unit, using one or more social studies themes to guide the discussion; > Read a notable book (or an excerpt) aloud and lead discussions about the elements of the book as they apply to specific social studies themes. For example, the setting of the story involves history and geography, time and place; the conflicts in a story often involve economic issues, needs and wants, and so on; > Use social studies-related words, found in the book, in the lesson and in vocabulary lists for that unit; > Have students identify places mentioned in the book and locate these on large wall maps of the United States, various countries, or the world; > Construct an active lesson (one that is less teacher-directed and more student-centered) based on a book. Such a lesson might involve panel discussions, debates, role playing, charades, mock trials, simulations, readers’ theater, mock television commercials, and so on. Use performance standards to evaluate students’ work;6 > Use small groups for peer reading and cooperative learning (or “jigsaw”) approaches. This is one way to have many students become generally acquainted with a wide selection of books quickly (as was modeled with teachers in this project); > Ask higher-order questions that involve inferential and critical analyses. For example, Do the details in the story match what you have learned from other sources about this time and these situations? What skills and resources do the characters in the story apply to events in their lives? What were the causes of major events in the story? Were the characters aware of these causes? 
> Have students read a novel at the beginning of a unit of study, then draw up questions about real history that grow out of having read the fictional work, and finally search various primary sources of information for answers; > Invite a guest speaker who has lived through an experience similar to that in the book; > Go on a field trip to a site in your local community that is related to the time or events described in the book; > When only half the book has been read, have students “take on the role of author” by predicting possible outcomes. Or students can state viewpoints representative of various characters. Or they can revise the end of a story, or create a sequel. These activities can be oral or written; > Coordinate with a language arts teacher, so that the book is used in both classes; > Integrate math and science skills to analyze the story. For example, students could calculate the number of years or generations (one every twenty years) that have passed since an event occurred, research the demographics pertinent to an event, map the course of a journey, study the relevant science or technology of the times, and so on; > Compare the situation in the story with a current event unfolding at the school, locally, nationally, or globally, then explore that event in detail; > Offer students an opportunity to write or speak about a public or personal event or an activity that relates somehow to the book. (For example, students may be volunteering for a political campaign, helping neighbors after a flood, getting their family settled in a new community, preparing for an ethnic or religious celebration, and so on); > Use music, dance, and fine arts that are related to the story. One might seek assistance from the appropriate teacher in the school, a museum staff person, or other community resource; > Form book clubs or reading contests featuring the notable books. Challenge other classrooms or schools to book-reading contests. 
A Professional Resource Throughout the project, teachers kept journals, many of which included annotated bibliographies and charts that correlated each notable book with one or more of the social studies themes (Table 1). To conclude the project, teachers shared their journals. We looked back at our four original goals in light of the resources that we had collectively created. In the true sense of social studies education, these teachers had empowered themselves through an exploration that was meaningful, relevant, and authentic. 1. Barbara Cooney, Eleanor (New York: Viking, 1996). 2. Neale S. Godfrey, Neale S. Godfrey’s Ultimate Kids’ Money Book (New York: Viking, 1998). 3. See, for example, the May/June 1999 issue of Social Education. In earlier years, this list was given a slightly different title, “Notable Children’s Trade Books in the Field of Social Studies,” and appeared in the April/May issue. 4. Expectations of Excellence: Curriculum Standards for Social Studies (Washington, D.C.: National Council for the Social Studies, 1994). 5. For further examples and applications, see articles in the Children’s Literature section in past issues of Social Studies and the Young Learner as well as the book Children’s Literature in Social Studies: Teaching to the Standards by DeAn M. Krey (Washington, D.C.: NCSS, 1998). 6. Reference 4 contains a chapter on performance expectations, “Standards into Practice: Examples for the Early Grades,” on pages 47-75. About the Author Nancy P. Gallavan is an assistant professor of elementary school social studies and K-12 multicultural education at the University of Nevada, Las Vegas.
Turkey Point Nuclear Plant in Hot Water
, director, Nuclear Safety Project | January 6, 2015, 6:00 am EDT
Fission Stories #179

Earlier this summer, the owner of the Turkey Point nuclear plant in Florida requested and the NRC approved a change in the maximum limit on the temperature of cooling water used by the plant. For years, the plant had operated with the limit at 100°F. The plant could only continue operating for a few hours when this limit was exceeded. But power uprates and global warming conspired to cause problems with this limit.

Turkey Point uses an extensive canal network for its cooling water needs. Pumps pull water from the canals and route it through the plant to remove waste heat. The warmed water is discharged back into the canals. Long peninsulas of dirt force the warmed water to wind back and forth literally for miles before it can again be drawn into the plant. En route, the warmed water surrenders some of its thermal energy to the air so as to be a little cooler for its next trip through the plant.

On September 26, 1996, the NRC approved a 4.5% increase in the maximum power level of the Unit 3 and 4 reactors at Turkey Point. On June 15, 2012, the NRC approved a 15% increase in each reactor's maximum power level. Nuclear power reactors like those at Turkey Point are only about 33% efficient: for every three units of energy produced by the reactor core, only one unit goes out on the transmission lines as electricity while two units must be discharged as waste heat or thermal pollution. The higher the power levels at which the NRC allowed Turkey Point's reactors to operate, the more waste heat had to be released into the canal network. Global warming only compounded that situation by warming both the temperature of the water in the canals and the temperature of the air cooling the canal water.
As the NRC reviewed the plant owner's request to increase the cooling water temperature limit from 100°F to 104°F, I received several inquiries regarding safety implications of the proposed increase. My review of the paperwork the owner submitted to the NRC justifying the requested increase showed how it would have little to no impact on safety margins.

Turkey Point takes water from the canals for two purposes: (1) to cool the steam used to spin the turbine-generator to make electricity, and (2) to cool emergency equipment during an accident. For both purposes, the canal water flows through heat exchangers to cool steam or hot water that gets re-used by the plant. The canal water, warmed by several degrees, is returned to the canal network.

Shell and tube heat exchangers are commonly used to transfer waste heat to cooling water from the nearby lake, river, ocean, or canal system. In Figure 1, blue represents the water from an internal plant cooling system. For example, the component cooling water (CCW) system for each reactor at Turkey Point consists of three pumps and two heat exchangers. Only one pump and heat exchanger is needed to handle the heat loads during an accident; the others are provided for increased reliability. The CCW system cools safety equipment like the emergency diesel generators and the areas housing the emergency core cooling system pumps. The CCW system water enters to the lower right of the heat exchanger, flows leftward through its tubes, and leaves via an outlet to the upper left.

Fig. 1 (Source: Creative Commons by Oschal, modified by UCS)

Canal water is represented in red on the schematic in Figure 1. It enters the shell of the heat exchanger to the upper left and passes rightward past the tubes before leaving via an outlet to the lower right. Baffles force the canal water to weave its way back and forth across the tubes, maximizing the amount of time this water stays in contact with the tubes.
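The rate of heat transfer across those tubes scales with the log-mean temperature difference (LMTD) between the two streams, which is why a warmer canal inlet matters at all. The sketch below uses purely illustrative temperatures, not Turkey Point's actual design values:

```python
import math

def lmtd(hot_in, hot_out, cold_in, cold_out):
    """Log-mean temperature difference for a counter-flow exchanger (degrees F)."""
    d1, d2 = hot_in - cold_out, hot_out - cold_in
    return d1 if d1 == d2 else (d1 - d2) / math.log(d1 / d2)

# Same duty (hot side cooled 140 -> 120 F, canal side warmed 10 F),
# with the canal inlet at the old and new limits:
before = lmtd(140, 120, 100, 110)
after = lmtd(140, 120, 104, 114)
print(f"Driving force drops from {before:.1f} F to {after:.1f} F, "
      f"so roughly {100 * (before / after - 1):.0f}% more tube area is needed.")
```

A 4°F warmer canal shrinks the driving force, so somewhat more clean, unplugged tube surface is needed for the same duty; as long as the built-in margin covers that extra requirement, the safety function is unaffected.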
Heat conducts through the thin metal tube walls, resulting in the canal water leaving the heat exchanger a few degrees warmer while the CCW system water is a few degrees cooler. In the original design, canal water entering the plant, and this CCW heat exchanger, at 100°F was able to cool the CCW system water enough to adequately handle all the heat loads under accident conditions. Increasing the canal water's maximum temperature limit to 104°F did not alter this safety outcome for one simple reason: the heat exchangers had available margin.

The heat exchangers have many more than the seven tubes illustrated in this simplified drawing. Workers plug tubes when the thin metal walls break. And debris inside the canal water sometimes clogs other tubes. The heat exchangers have available margin built into them so that an entire heat exchanger need not be replaced every time a single tube gets plugged or clogged. The available margin also enables the heat exchanger tubes to experience some degradation and still remain functional. Scaling, rusting, biological growth, and other mechanisms can foul the tube walls. Essential heat exchangers must be tested periodically to measure performance and verify safety margins are maintained. If not, the tubes must be cleaned or other actions taken to restore the necessary margins.

The 4°F canal water temperature increase does not prevent the heat exchangers from removing the required thermal energy from the CCW and other vital plant cooling systems. The increase may mean that workers can plug fewer tubes and/or that the tubes require cleaning more frequently. But it doesn't mean that the plant's safety margins have been compromised, which is why the NRC approved the increase.

Our Takeaway

Turkey Point's owner also requested and received enforcement discretion from the NRC, allowing the plant to continue operating with canal water temperature exceeding 100°F while the requested increase was being reviewed.
The owner also asked the NRC to review the requested increase on an expedited basis, short-cutting the normal process and its opportunities for public notice and comment.

Did global warming catch them by surprise? It's been in the papers and on the radio and television. There may have even been a Simpsons episode or two about it. Or did the fact that global warming causes the temperature of earth, water, and air (i.e., the globe) to increase surprise them? These are Rhettorical questions because their answers don't mean a damn. The first two, three, and perhaps even four plant owners can claim to be surprised. But after the NRC approved cooling water temperature limit increases for Millstone Unit 2, Peach Bottom Units 2 and 3, and Millstone Unit 3 in just the past two years, the "didn't know any better" excuse is quite lame. And the NRC should not allow plant owners to use lame excuses to get express lane service.

Turkey Point's owner could have, and should have, submitted its request via the normal channels instead of making the NRC drop everything to review it ASAP and denying the public its full rights to review and comment on proposed changes. Next time an owner comes to the NRC with such a lame excuse, the NRC must channel Nancy Reagan and just say no.

Posted in: Fission Stories, Nuclear Power Safety

• dinkydave: Another well informed, informative post.

• Thank you for acknowledging that there was no safety issue. I was just taking photos of this power plant about an hour ago while standing next to a sign warning not to swim in the canal I was standing next to because of crocodiles, which are endangered but thriving in the area around the nuclear plant. See the Netflix documentary Crocpocalypse, which also mentions Turkey Point.

"Next time an owner comes to the NRC with such a lame excuse…" You failed to tell us what the "lame" excuse was.
I’m amused at your claim that it was global warming that caused the temperature limitations. It was unusually hot weather and the need to keep ever growing numbers of air conditioners running. Climate change is real but you should know better than to confuse local weather variation with climate change. You should also know that nuclear energy is by far and away our largest low carbon source of energy. The reason they wanted it expedited was to keep the air conditioners running. Public comment would likely have slowed down or even stopped the approval because it would have given anti-nuclear energy ideologues a chance to stir fear and doubt. • jharragi >>You should also know that nuclear energy is by far and away our largest low carbon source of energy. So what? The reality is that nuclear is not growing but contracting. This is because while it currently the biggest low CO2 source, it is not the best. There are many reasons for this, wind generation capacity for example is cheaper, safer & can be installed much faster. Denmark has demonstrated that 20% electric wind generation capacity can be deployed in a decade. Of course with good old American engineering we should be able to achieve more faster… A big problem with nuclear’s low carbon claims is that the industry argues that they should get subsidies for low carbon emissions on top of the huge public subsidies it already receives. To help ensure that, it seeks to minimize money flowing to other low carbon sources. If it is successful in this effort, it slows deployment of renewable sources which ultimately contributes overall to elevated CO2 emissions. I also wonder how the total carbon footprint of nuclear generation stacks up when factoring in all the carbon emission of the fuel chain, waste disposal (if it ever actually happens), several hundred employees commuting daily to plants and whatever fraction of the some 10 billion dollars of construction cost per reactor is represented by fossil fuel use. 
Also, it would be prudent to add into the equation, say, a half a percent per reactor of the eventual carbon footprint of the Fukushima accident. Thankfully, the industry seeks to minimize the amount of safety upgrades done to prevent accidents, thus at least lowering the carbon emissions of component manufacturing and delivery…
While we all aim to live long, happy and healthy lives, our overall health and wellness stretch far beyond the surface. Fitness is essential to maintaining good physical health, but let’s not forget about nurturing our main control center: the brain. Exercising the mind should be high on the list of personal health initiatives in order to protect the memory against the inevitable effects of aging brought on by Father Time. With May being Nurture Month at Travaasa, we have compiled a list of helpful tips and tricks to keep your brain active and strong.

Challenge yourself. Immersing yourself in a fresh environment or learning a new skill is a great way to challenge your mind. Always wanted to learn Italian? Interested in cultural anthropology? Enroll in a community college course or start a Rosetta Stone learning program. Challenging the brain is key to keeping your mind sharp and your memory on point.

Make new connections. Introducing yourself to new social environments can help stimulate brain activity. Try joining a book club, volunteering at your local animal shelter or attending a trivia night. Surrounding yourself with a new group of people will help maintain cognitive health and, who knows, you might make a few new friends as well.

Watch what you eat. Eating foods high in nutrients is known to have a positive effect on your overall physical health, yet many don’t know that it can also help maintain brain health. Avoiding processed foods and eating plenty of fruits and vegetables on a daily basis will quite literally feed your mind the essential vitamins it needs to stay sharp as you age.

Use your nose. While vision and hearing are our most commonly used senses, a recent study suggests that aromatherapy techniques, such as sniffing lemon oil in the morning and lavender throughout the day, can help you stay alert, focused and calm.

Stop multi-tasking.
Every time you drop what you are doing to answer a text or respond to an email, you could be doing a disservice to your brain power. While we live in a day and age that promotes multi-tasking through our smartphones, taking an hour or two a day to focus on the task at hand helps reduce stress and improve concentration.

Drink tea. Swap your morning cup of coffee for a cup of tea and your brain will thank you. Green tea is known to boost brain function and increase antioxidant levels in your body, while peppermint tea promotes alertness and focus.

Find more ways to encourage brain health during Nurture Month at Travaasa by checking out the activities schedule.
Corporate Accounting
From WikiEducator

PRIVATE COMPANY

The term privately held company refers to the ownership of a business company in two different ways: first, referring to ownership by non-governmental organizations; and second, referring to ownership of the company's stock by a relatively small number of holders who do not trade the stock publicly on the stock market. Because of these two different meanings, the use of the term should normally be avoided unless the context makes clear which definition is intended. Less ambiguous terms for a privately held company are unquoted company and unlisted company. Though less visible than their publicly traded counterparts, private companies have a major importance in the world's economy. In 2005, the 339 companies on Forbes' survey of closely held U.S. businesses sold a trillion dollars' worth of goods and services and employed 4 million people. In 2004, the Forbes' count of privately held U.S. businesses with at least $1 billion in revenue was only 305.[1] Koch Industries, Bechtel, Cargill, Chrysler, PricewaterhouseCoopers, Flying J, Ernst & Young, Publix, and Mars are among the largest privately held companies in the United States. IKEA, Victorinox, and Bosch are examples of Europe's largest privately held companies. Corporate managers are often unsure whether their company should have private or public status; the choice essentially depends on the company's requirements. Notably, many companies prefer to remain private, considering the kinds of privileges they enjoy by being private.
Here’s a brief list of concessions and privileges which favour the formation of private limited companies:

Privileges:
- Limited liability
- Simple and easy formation
- Immediate commencement of business upon incorporation
- Liberal payment of remuneration and loans to directors without any restrictions
- Easier inter-corporate loans
- Lesser disclosure requirements
- Tremendous ease in operation
- Two directors are enough
- Two shareholders are adequate
- Need not declare dividends
- Listing of shares not mandatory
- Directors need not hold qualification shares

These continue to be the dominating factors for carrying on trade and industry through the medium of private limited companies.

Limitations: Nevertheless, there are limitations too. Under the Companies Act, a private limited company is:
- prohibited from issuing any invitation to the public to subscribe to any shares in, or debentures of, the company
- required to limit the number of its members to 50
- required to restrict the right of its members to transfer shares
Traditionally, document formats were tethered to the authoring tool. But today information needs to be more fluid. Microsoft has been one company that has taken the initiative to open file formats, designing an interoperable document format named Office Open XML (OOXML). In this article, we discuss why OOXML is important not only to Microsoft, but also to the enterprise.

An Evolution

Prior to Microsoft Office 2007, Office documents were saved in proprietary binary file formats designed for Office applications such as Word, Excel, and PowerPoint. Despite this, third-party products have existed for several years that extend the capabilities of Office and allow for tighter integration between Office and other products. Additionally, other productivity suites, such as OpenOffice, have been able to read and save Office documents for years. Yet, Microsoft maintained control. Of course, there are some advantages in doing this, the most important of which is that it allowed Microsoft to have absolute authority over the file format. This means that Microsoft could ensure backward compatibility when desired, as well as easily extend the file format to allow for new features within Office. Yet, the “advantage” of having such tight control also proved to be a limitation for Microsoft and the end-user. Specifically, the ecosystem of third-party add-ons that evolved around Microsoft Office has generally had a very limited scope. Despite the reach of Office, it has always had a very limited focus within the enterprise, and most third-party extensions only gave desktop users more features rather than expanding the scope of Office—in other words, Office would always be a productivity suite and nothing more if something didn’t change.

A Shifting View

Things did begin to change several years ago as organizations began asking for more open standards.
Also, productivity suites such as OpenOffice—prevalent on Linux desktops—implemented open formats to increase interoperability between both users and applications. Enterprises noticed and wanted Microsoft to do the same. Microsoft was also seeing an internal shift in how Office was viewed. Rather than being limited to a desktop application, why not leverage the strengths of XML so that Office could become more of a service-oriented tool within the enterprise? In other words, why restrict Office to desktop use when a structured and open format could be used by enterprise applications to gain more operational efficiency? Microsoft made a decision: Support an open document format. With Microsoft Office 2007, Microsoft now stores documents in Microsoft’s OOXML. (Previous versions of Office can access these files using a free add-on.) In the short-term, this may not seem all that profound. After all, desktop users will continue doing what they’ve always done: creating and saving Office documents. But in the long-term, this could mean some very significant benefits for both Microsoft and enterprises. One obvious and immediate benefit is that the Office ecosystem has a lot more room for growth. The reach of Office can now extend beyond just the desktop and deeper into the enterprise. Essentially, with an open format, Office and third-party tools can now create documents that can be easily read, written, and adapted for use by other applications and services—perhaps even deep into the enterprise data center. Let’s speculate a bit more about the potential for an open XML-based standard such as OOXML and its role in the enterprise. Take for example a user who needs to manipulate data within an organization’s Enterprise Resource Planning (ERP) application. Right now, users have only a few options for getting at the data. One method is to access and manipulate the ERP information using a vendor-supplied interface—this option is certainly always available but not very flexible. 
Alternatively, they can get a dump of the data they need (e.g., in CSV format), import it into an application such as Excel, and manipulate it there. (To further complicate things, they may need to import the data back into the ERP.)

Open Communication

Now jump forward a few years, when OOXML may be used not only to save files but also in communication between Office and enterprise applications—including the user’s ERP. Office suddenly becomes a flexible and familiar front-end for accessing the user’s data. And instead of relying on stale information from CSV downloads, Excel (or any Office application, for that matter) dynamically retrieves and updates the ERP with little or no translation needed. Now imagine this happening across all of your enterprise applications. No funky, proprietary vendor plug-ins or ODBC configuration—each application speaks natively to Office using a single documented and open standard. This is just one example of where OOXML may be headed. There really is a lot more ground to cover about OOXML, including real-world use cases for integration and interoperability. In addition, several important comparisons can be made between OOXML and OpenDocument. So be ready to learn more in the future!
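To make the format concrete: an OOXML file (.docx, .xlsx, .pptx) is simply a ZIP archive of XML parts, which is exactly what lets non-Office applications read and write these documents. The sketch below builds a minimal, docx-like archive in memory and reads one part back. The part names and namespaces follow OOXML packaging conventions, but the document body here is a bare-bones illustration, not a fully valid Word file:

```python
import io
import zipfile

# An OOXML document is a ZIP archive of XML "parts".
# Build a tiny, illustrative archive in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("[Content_Types].xml",
               '<?xml version="1.0"?><Types '
               'xmlns="http://schemas.openxmlformats.org/package/2006/content-types"/>')
    z.writestr("word/document.xml",
               '<?xml version="1.0"?><w:document '
               'xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">'
               '<w:body><w:p><w:r><w:t>Hello</w:t></w:r></w:p></w:body></w:document>')

# Any tool with a ZIP and XML library can now open the "document":
with zipfile.ZipFile(buf) as z:
    xml = z.read("word/document.xml").decode()

print("Hello" in xml)  # True
```

This openness is what the article is pointing at: an ERP or data-center service needs no Office installation, only standard ZIP and XML handling, to generate or consume a document.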
A rectangular loop of wire, carrying current I1 = 1.3 mA, is next to a very long wire carrying a current I2 = 8.0 A, as indicated in the figure below, in which d = 1.9 cm.

(a) What is the direction of the magnetic force on each of the four sides of the rectangle due to the long wire's magnetic field?

(b) Calculate the net magnetic force on the rectangular loop due to the long wire's magnetic field. [Hint: The long wire does not produce a uniform magnetic field.]
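Since the figure (and with it the loop's dimensions) is missing, part (b) can still be sketched numerically. Only the two sides parallel to the long wire contribute a net force; the forces on the two perpendicular sides cancel by symmetry. A minimal sketch, assuming a loop length L = 5.0 cm parallel to the wire and width w = 3.0 cm (both are assumptions standing in for the missing figure):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def net_force(i1, i2, length, d, w):
    """Net force on a rectangular loop beside a long straight wire.

    The near side, at distance d, feels F = mu0*i1*i2*length / (2*pi*d);
    the far side, at distance d + w, feels a smaller force in the
    opposite direction. The perpendicular sides cancel each other.
    """
    near = MU0 * i1 * i2 * length / (2 * math.pi * d)
    far = MU0 * i1 * i2 * length / (2 * math.pi * (d + w))
    return near - far

# Assumed loop dimensions (the original figure is unavailable):
F = net_force(1.3e-3, 8.0, 0.05, 0.019, 0.03)
print(f"{F:.3g} N")
```

With these assumed dimensions the net force comes out to a few nanonewtons; it points toward the wire if the near-side current runs parallel to I2, and away from it if antiparallel.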
The tide is turning on what the general public thinks of weed, as evidenced by more marijuana legalizations in the U.S., and will only continue to change. Part of this must be credited to studies that have helped debunk marijuana myths such as weed being a gateway drug, weed being as addictive as heroin, or weed having negative effects on a user's physical and psychological health. One of the biggest myths has been that weed makes you dumb—as potheads long after Cheech and Chong have been characterized. A new UK study has disproved that. A study published in the Journal of Psychopharmacology on Jan. 6 is based on findings among participants in the Avon Longitudinal Study of Parents and Children (ALSPAC). For the ALSPAC, 2,235 teenagers born in Bristol in 1991 and 1992 had their IQ tested when they were 8 years old and again at 15. Almost 25 percent of the teenagers in the study had tried weed one or more times, while 3.3 percent reported using it 50 or more times. The study revealed teenagers who used marijuana had lower grades in school in addition to lower IQ scores. But the University College London researchers who conducted the study said there wasn't proof weed directly caused those outcomes, adding that marijuana use just made it likelier for users to also smoke cigarettes, drink alcohol, and use other drugs. "The notion that cannabis use itself is causally related to lower IQ and poorer educational performance was not supported in this large teenage sample," wrote Claire Mokrysz and her colleagues at University College London. The researchers explained how other factors, such as alcohol use, could affect school performance and IQ. "These findings therefore suggest that cannabis use at the modest levels used by this sample of teenagers is not by itself causally related to cognitive impairment.
Instead, our findings imply that previously reported associations between adolescent cannabis use and poorer intellectual and educational outcomes may be confounded to a significant degree by related factors." In a news release issued in 2014, before the study was published, Mokrysz wrote, "People often believe that using cannabis can be very damaging to intellectual ability in the long-term, but it is extremely difficult to separate the direct effects of cannabis from other potential explanations. Adolescent cannabis use often goes hand in hand with other drug use, such as alcohol and cigarette smoking, as well as other risky lifestyle choices. It's hard to know what causes what—do kids do badly at school because they are smoking weed, or do they smoke weed because they're doing badly?"
Public Release: Climatologists offer explanation for widening of Earth's tropical belt

University of California - Riverside

RIVERSIDE, Calif. -- Recent studies have shown that the Earth's tropical belt -- demarcated, roughly, by the Tropics of Cancer and Capricorn -- has progressively expanded since at least the late 1970s. Several explanations for this widening have been proposed, such as radiative forcing due to greenhouse gas increases and stratospheric ozone depletion. Study results appear March 16 in Nature Geoscience.

"Prior analyses have found that climate models underestimate the observed rate of tropical widening, leading to questions on possible model deficiencies, possible errors in the observations, and lack of confidence in future projections," said Robert J. Allen, an assistant professor of climatology in UC Riverside's Department of Earth Sciences, who led the study. "Furthermore, there has been no clear explanation for what is driving the widening."

Now Allen's team has found that the recent tropical widening is largely driven by the Pacific Decadal Oscillation (PDO). "Although the PDO is considered a 'natural' mode of climate variability, implying tropical widening is primarily driven by internal dynamics of the climate system, we also show that anthropogenic pollutants have driven trends in the PDO," Allen said. "Thus, tropical widening is related to both the PDO and anthropogenic pollutants."

Widening concerns

Belt contraction

"The reversal of the PDO, in turn, may be related to the global increase in anthropogenic pollutant emissions prior to the ~early 1980s," Allen said. "When we analyzed IPCC climate model experiments driven with the time-evolution of observed sea surface temperatures, we found much larger rates of tropical widening, in better agreement with the observed rate--particularly in the Northern Hemisphere," Allen said. "This immediately pointed to the importance of sea surface temperatures, and also suggested that models are capable of reproducing the observed rate of tropical widening -- that is, they were not 'deficient' in some way."

Encouraged by their findings, the researchers then asked the question, "What aspect of the SSTs is driving the expansion?" They found the answer in the leading pattern of sea surface temperature variability in the North Pacific: the PDO. "In this case, we found tropical widening -- particularly in the Northern Hemisphere -- is completely eliminated," Allen said. "This is true for both types of models--those driven with observed sea surface temperatures, and the coupled climate models that simulate evolution of both the atmosphere and ocean and are thus not expected to yield the real-world evolution of the PDO.

"If we stratify the rate of tropical widening in the coupled models by their respective PDO evolution," Allen added, "we find a statistically significant relationship: coupled models that simulate a larger PDO trend have larger tropical widening, and vice versa. Thus, even coupled models can simulate the observed rate of tropical widening, but only if they simulate the real-world evolution of the PDO."

Future work

"Future emissions pathways show decreased pollutant emissions through the 21st century, implying pollutants may continue to drive a positive PDO and tropical widening," Allen said.
Children who have excess body fat by age 10 may have greater odds of developing diabetes in their preteen years than their slimmer peers, a Canadian study suggests. Researchers measured height and hip size to assess body fat in about 600 children when they were 8 to 10 years old, and again two years later. They found every 1 percent of additional body fat at the start of the study was linked to a 3 percent decline in sensitivity to the hormone insulin, a shift that can allow excess sugar to build up in the blood and lead to diabetes. The study published in JAMA Pediatrics also found more exercise and less screen time were linked to better insulin sensitivity, which might reduce the risk of diabetes. Reduced body fat might explain at least part of this connection, said lead study author Dr. Melanie Henderson of the University of Montreal. "Our findings suggest that we should be encouraging children early on to be physically active, and that we should reduce their screen time, in order to favor a healthy body weight and better cardiometabolic health later on in life," Henderson said by email. Henderson and colleagues focused on what's known as insulin sensitivity, the body's ability to use this hormone to regulate levels of blood sugar, or glucose, and turn it into fuel for cells. Type 2 diabetes is associated with obesity and occurs when the body can't make or use enough insulin to prevent glucose from accumulating in the blood. Researchers assessed children once when they were 9.6 years old on average, and again two years later. The majority of the kids had not gone through puberty at the start, and by the end of the study two thirds had experienced this transition. All of the children had at least one obese parent, and 23 percent of the kids were obese themselves at the start of the study. Another 19 percent were overweight. 
The children who had more girth around their hips or bigger increases in hip size during the study were more prone to insulin insensitivity than kids with slimmer hips. Examining excess circumference around the hips, a measure known as adiposity, is thought to be better than relying on overall weight to assess fat because it can more accurately account for muscle and flab. Physical activity seemed to be the main explanation for the differences in children's adiposity. Every 10 minutes per day of moderate to vigorous physical activity was associated with 3.5 percent lower body fat at the end of the study, even after adjusting for fitness levels and the amount of screen time. This amount of exercise was also linked to a 4.8 percent increase in insulin sensitivity. At the same time, every one-hour increase in screen time at the start of the study predicted a 2.9 percent increase in body fat. This amount of screen time was also associated with a 4.5 percent reduction in insulin sensitivity. One shortcoming of the study is that 66 children, or about 10 percent, dropped out before the end, the authors note. The youth who left the study tended to be more insulin resistant and have more body fat than the children who stuck with it through the end. Even so, the findings are important because metabolic health during childhood can influence whether people later have health problems including diabetes, hypertension, obesity, sleep apnea, heart attacks and strokes, said Dr. Kim Eagle, a researcher at the University of Michigan Frankel Cardiovascular Center in Ann Arbor. "Daily decisions about activity, screen time, food and beverage consumption and overall energy balance matter and can influence profoundly what our children's future health state will look like," Eagle, who wasn't involved in the study, said by email. "We live in a toxic culture filled with empty calories, video games, endless screen time opportunities, and not enough health education," Eagle added. "Studies like this one should help us find the will to take action."
Output to a text File

import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.PrintStream;
import java.util.Vector;

public class ListOfNumbersDeclared {
  public static void main(String[] a) {
    int size = 10;
    Vector<Integer> victor = new Vector<Integer>(size);
    for (int i = 0; i < size; i++) {
      victor.addElement(Integer.valueOf(i));
    }
    try {
      // Output file name is a placeholder; the original was lost.
      PrintStream out = new PrintStream(new FileOutputStream("listofnumbers.txt"));
      for (int i = 0; i < size; i++) {
        out.println("Value at: " + i + " = " + victor.elementAt(i));
      }
      out.close();
    } catch (FileNotFoundException e) {
      e.printStackTrace();
    }
  }
}

Related examples in the same category
1. Create file
2. Create a temporary file
3. Creating a Temporary File and delete it on exit
4. Create a directory (or several directories)
5. Get file size
6. Change last modified time of a file or directory
7. Construct file path
8. Create temporary file with specified extension suffix
9. Create temporary file in specified directory
10. Create new empty file
11. Compare two file paths
12. Delete a file
13. Delete file or directory
14. Deleting a Directory (an empty directory)
15. Delete file or directory when virtual machine terminates
16. Determine File or Directory
17. Determine if a file can be read
18. Determine if a file can be written
19. Determine if file or directory exists
20. Determine if File or Directory is hidden
21. Demonstrate File
22. Moving a File or Directory to Another Directory
23. Find out the directory
24. Get all path information from java.io.File
25. Getting an Absolute Filename Path from a Relative Filename Path
26. Getting an Absolute Filename Path from a Relative Filename with Path
27. Getting an Absolute Filename Path from a Relative Filename parent Path
28. Get Absolute path of the file
29. Get File size in bytes
30. Get parent directory as a File object
31. Get a file last modification date
32. File.getCanonicalFile() converts a filename path to a unique canonical form suitable for comparisons.
33. Getting the Parents of a Filename Path
34. Get the parents of an absolute filename path
35. Getting and Setting the Modification Time of a File or Directory
36. Mark file or directory Read Only
37. List Filesystem roots
38. List drives
39. Listing the Directory Contents
40. Rename file or directory
41. Forcing Updates to a File to the Disk
42. Random File
43. Create a directory; all ancestor directories must exist
44. Create a directory; all non-existent ancestor directories are automatically created
45. Getting the Current Working Directory
46. Change a file attribute to writable
47. Data file
48. Choose a File
49. Read data from text file
50. Copy File
51. Querying a File for Information
52. Working with RandomAccessFile
53. Get a list of files, and check if any files are missing
54. Delete a file from within Java
55. Work with temporary files in Java
56. Compare File Dates
57. Sort files based on their last modified date
58. Strings -- extract printable strings from binary file
59. Get extension, path and file name
60. Read file contents to string using commons-io
61. Get all xml files by file extension
62. Get icon for file type
63. Change a file attribute to read only
64. Get file extension name
65. Search for files recursively
66. Create a human-readable file size
67. Set file attributes
68. Format file length in string
69. Return a file with the given filename creating the necessary directories if not present.
Let's explore what this rule does: ProxyPass /perl/ When a user initiates a request to, the request is picked up by mod_proxy. It issues a request for and forwards the response to the client. This reverse proxy process is mostly transparent to the client, as long as the response data does not contain absolute URLs. Since ProxyPass forwards the response unchanged to the client, the user will see in her browser's location window, instead of You have probably noticed many examples of this from real-life web sites you've visited. Free email service providers and other similar heavy online services display the login or the main page from their main server, and then when you log in you see something like, then, etc. These are the backend servers that do the actual work.
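As a concrete sketch (the hostnames and ports below are illustrative assumptions, since the original URLs did not survive extraction), a typical configuration pairs ProxyPass with ProxyPassReverse so that redirect headers issued by the backend are rewritten back to the front-end name:

```apache
# Forward requests under /perl/ to a backend server, hiding it from clients.
ProxyPass        /perl/ http://backend.example.com:8000/perl/
# Rewrite Location (and Content-Location) headers in the backend's
# responses so redirects point at the front-end, not the backend.
ProxyPassReverse /perl/ http://backend.example.com:8000/perl/
```

ProxyPassReverse addresses the absolute-URL caveat for redirect headers, though it does not rewrite absolute URLs embedded in the response body itself.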
The six radical secrets of a more productive classroom

Professor Dylan Wiliam is convinced his simple but unorthodox ideas can help children learn better. So he put them to the test for a term with real students - and achieved extraordinary results, writes Gerard Gilbert.

Hands up who knows one of the most common, time-honoured and, it is now being argued, detrimental teaching methods used in schools? That's right, person bobbing up and down excitedly, waving your digits in the air: it's the hands-up habit itself. Apparently, it is the same minority of top pupils, usually sitting at the front, who raise their hands to answer questions, while most switch off and opt out. According to education expert Professor Dylan Wiliam, this ingrained, almost sacrosanct, classroom habit is widening the achievement gap in our schools. ''Only a quarter of pupils consistently put their hands up,'' Wiliam says. ''They can't wait to take part, while others switch off completely.''

New school of thought ... the traditional ''hands-up'' method of answering questions allows many students to just opt out of taking part.

Some sort of randomisation process was required, Wiliam decided long ago, and his unorthodox solution was to write the pupils' names on lollipop sticks, the teacher then pulling them at random from a pot. No one can hide - everyone is potentially in the firing line. The lollipop sticks form just one of six radical but low-tech ideas Wiliam unleashed on a mixed-ability class of 12- to 13-year-olds at an average school in Hertfordshire in Britain - Hertswood School, Borehamwood - over the course of one summer term. The aim of the experiment, which was filmed for the BBC, was to involve every pupil in the lesson.
''Our education system was designed to cater for everyone, regardless of background or ability, but we still seem to struggle to keep everyone engaged,'' Wiliam says.

Teach ... Dylan Wiliam.

Wiliam, a former maths and science teacher turned teacher-trainer, along with Paul Black, wrote a leading book on classroom assessment in Britain, Inside the Black Box. He was also, until recently, the deputy director of the University of London's Institute of Education, so has had the ear of governments. ''The program is a crystallisation of things I've been doing for a long while,'' Wiliam says. ''When Paul Black and I worked with teachers in Oxfordshire and Kent in the late 1990s, we realised that research was presented to teachers in the form of bland platitudes … they were just too general to be applied in real classroom settings.'' Hence the lollipop sticks. And the traffic-light cups. The latter are painted red, amber and green. Students put a cup on their desk to inform the teacher whether they understand what they have just been taught (green), whether they are uncertain (amber) or whether they haven't the foggiest (red). It's all very low-tech and, importantly in these tight economic times, low-budget. ''People are always pushing new technology and expensive ways of raising students' achievement but the fact is that this is something that every school could do if it was minded to,'' Wiliam says. Another innovation - small, hand-held whiteboards for each student - came as a direct result of an unforeseen problem with the lollipop sticks. Deprived of the chance to show off their brilliance in front of the class, the regular hands-up brigade were getting frustrated and had even started to become disruptive. ''The high-achieving girls were really struggling,'' Wiliam says.
''They're used to being in the limelight and that's causing them some pain.'' And what's more, used to putting their hands up only when they knew the answer (which was, admittedly, much of the time), the random lollipop method was putting some of the high achievers into the unaccustomed position of sometimes not knowing the answer. ''It's kind of embarrassing, because I've got this reputation for being smart,'' says one, Emily, who was captured in the BBC program The Classroom Experiment furtively removing her lollipop from the pot after being caught out by a question. The idea with the mini-whiteboards is that the whole class simultaneously scribbles answers before displaying their boards to the teacher - and each other. ''I think mini-whiteboards are the greatest development in education since the slate,'' Wiliam says. ''You can get an overall view of what the whole class think.'' And once they got used to using them, the students in the film also approved - except the dissident who wrote ''can't be arsed'' on his. ''I want a classroom culture where it's OK not to know the answer,'' Wiliam says. Another innovation illustrated in the film isn't in itself radical - even if its actual implementation in state schools would be something of a revolution - and that is getting the students to come to school 15 minutes early each morning for a PE lesson. Indeed, a growing body of research in the US shows that a short burst of exercise every morning can have an impact on students' attention and learning throughout the day. But getting students into school by 8.30am, and changed into their sports clothes, is going to be a lot easier than changing another aspect of the British, US and now Australian education systems that Wiliam finds unhelpful - and that is schools' (and, therefore, pupils') obsession with grades. ''I don't get it unless I see my grade,'' says one frustrated pupil, thus illustrating Wiliam's point. The professor's own mantra is ''comments, not grades''. 
Wiliam says Britain is ''hooked on grades and national curriculum levels''. ''Kids don't work for things unless they get levels in them. It's absolutely crazy - we're like drug-pushers … we've got our kids hooked on levels and it's going to be very hard to get them off,'' he says. So much for the students. But if Wiliam's innovations are to succeed, they need to be applied with vigour, imagination and consistency by the teachers - and, as Wiliam well knows, teachers often feel they know best. ''Most teachers teach these lessons and they think the students understand it - and, therefore, they don't want to know that it didn't work,'' he says. ''It's almost ostrich-like behaviour. What many of these techniques reveal is that, despite your brilliant lesson, the kids didn't get it - and I think every teacher should want to know about that.'' One of the most interesting parts of the filmed experiment concerns a maths teacher, Miss Obi, whose somewhat inconsistent application of the new methods is leading to chaotic lessons and frustration from her and her pupils. Wiliam takes Miss Obi aside and persuades her to have her lesson assessed by two of her less high-achieving pupils, Katie and Sid, who are both visibly empowered by the process. ''She took it on board quite well,'' Sid says, while realising, perhaps for the first time, that ''it must be quite hard being a teacher''. ''One of the greatest resources is untapped - the students themselves,'' Wiliam says. ''I don't think teachers should be afraid of asking the students for advice.'' The idea is controversial, however, as some believe it undermines the professionalism and authority of teachers. Not so the head at Hertswood, who, starting this academic year, has decided to apply all Wiliam's innovations across the entire school. 
Wiliam himself thinks his simple and effective ideas - if teachers persist, that is - could be rolled out across all classrooms, and lollipop sticks and traffic-light cups could eventually be as ubiquitous as chalk and blackboards were in earlier times. ''It's scalable across the whole country and it doesn't require people like me to come in just to keep the change going,'' he says. And what about those bright things who were always the first to put their hands in the air? ''I listen more,'' says Emily, previously dismissive of her less-engaged classmates. ''People are smarter than I thought.''
Chapter 7, 8, and 9 Key Terms
Can you name the Chapter 7, 8, and 9 key terms?
An election held pursuant to a periodic schedule, in which a candidate for the office that the election concerns will become the scheduled successor to that office, if that candidate wins.
The way a political party can manipulate electoral boundaries for political gain, using techniques and strategies to create partisan, incumbent-protected and neutral districts.
A group of delegates from each state and territory that runs party affairs between national conventions.
An election in which voters select candidates for a subsequent election.
Money contributed to a political party as a whole.
A person who believes that moral rules are derived in part from an individual's beliefs and the circumstances of modern life.
During these, a sharp, lasting shift occurs in the popular coalition supporting one or both parties.
Voting for candidates of different parties for various offices in the same election.
A ballot listing all candidates for a given office under the name of that office; also called a 'Massachusetts' ballot.
A meeting of party delegates elected in state primaries or caucuses that is held every four years.
A paid, full-time manager of a party's day-to-day work who is elected by the national committee.
A party organization that recruits its members by dispensing patronage and that is characterized by a high degree of leadership control over member activity.
A group of people with an interest in a shared area.
The person currently in office.
An increase in the votes that congressional candidates usually get when they first run for reelection.
An issue dividing the electorate on which rival parties adopt different policy positions to attract voters.
Voting for a candidate because one favors his or her ideas for addressing issues after the election.
Voting for or against the candidate or party in office because one likes or dislikes how things have gone in the recent past.
Datura inoxia
Scientific classification: Kingdom Plantae; (unranked) Angiosperms, Eudicots, Asterids; Order Solanales; Family Solanaceae; Genus Datura; Species D. innoxia. Binomial name: Datura inoxia.
[Image: flower in Hyderabad, India.]
Datura inoxia (pricklyburr,[1] recurved thorn-apple,[2] downy thorn-apple, Indian-apple, lovache, moonflower, nacazcul, toloatzin, tolguache or toloache) is a species in the family Solanaceae. It is rarely called sacred datura, but that name in fact refers to the related Datura wrightii. It is native to Central and South America, and introduced in Africa, Asia, Australia and Europe. The scientific name is often cited as D. innoxia.[3] When English botanist Philip Miller first described the species in 1768, he misspelled the Latin word innoxia (inoffensive) when naming it D. inoxia. The name Datura meteloides was for some time erroneously applied to some members of the species, but that name has now been abandoned.[4]
[Image: D. innoxia with ripe, split-open fruit.]
Datura inoxia is an annual shrubby plant that typically reaches a height of 0.6 to 1.5 metres.[5][6] Its stems and leaves are covered with short, soft grayish hairs, giving the whole plant a grayish appearance. It has elliptic, entire-edged leaves with pinnate venation.[4] All parts of the plant emit a foul odor similar to rancid peanut butter when crushed or bruised, although most people find the fragrance of the flowers quite pleasant when they bloom at night.[7] The fruit is an egg-shaped spiny capsule, about 5 cm in diameter. It splits open when ripe, dispersing the seeds. The seeds can also be dispersed when the fruit's spines catch in the fur of animals, which then carry the fruit far from the mother plant. The seeds can remain dormant in the soil for years.
The seeds, as well as the rest of the plant, act as deliriants, and carry a high risk of overdose. Main article: Datura (Toxicity)
Cultivation and uses
When cultivated, the plant is usually grown from seed, but its perennial rhizomes can be kept from freezing and planted in the spring of the following year.[4]
Similar species
Datura inoxia is quite similar to Datura metel, to the point of being confused with it in early scientific literature. D. metel is a closely related Old World plant for which similar effects were described by Avicenna in eleventh-century Persia. The closely related Datura stramonium differs in having smaller flowers and tooth-edged leaves, and Datura wrightii in having wider, 5-toothed (instead of 10-toothed) flowers. Datura inoxia differs from D. stramonium, D. metel and D. fastuosa in having about 7 to 10 secondary veins on either side of the midrib of the leaf, which anastomose in arches about 1 to 3 mm from the margin. No anastomosis of the secondary veins is seen in the other four major species of Datura.
3. ^ "Jimsonweed-Nightshade Family".
4. ^ a b c d e Preissel, Ulrike; Preissel, Hans-Georg (2002). Brugmansia and Datura: Angel's Trumpets and Thorn Apples. Buffalo, New York: Firefly Books. pp. 117–119. ISBN 1-55209-598-3.
5. ^ "Datura inoxia_Plant World Seeds".
6. ^ "Datura inoxia_TrekNature".
8. ^ "Datura inoxia_Desert Thornapple_EOL".
Further reading
• A. Alon, ed. in chief, Plants and Animals of the Land of Israel, Vol. 11: Flowering Plants B., p. 92; ed. M. Raviv and D. Heler; Ministry of Defence Publications and the Society for Protection of Nature (in Hebrew), 1983.
I'tikāf
[Image: i'tikāf at the University of Tehran in Iran, April 2016.]
Iʿtikāf (Arabic: اعتكاف‎‎, also i'tikaaf or e'tikaaf) is an Islamic practice consisting of a period of staying in a mosque for a certain number of days, devoting oneself to ibadah during these days and staying away from worldly affairs.[1][2] The literal meaning of the word suggests sticking and adhering to, or being regular in, something; this 'something' often includes performing nafl prayers, reciting the Qur'an, and reading hadith. Some Muslims compare the performance of i'tikāf to a beggar sitting outside another's house, knowing that with persistence the occupant of the house will eventually give something to help. Others claim that one can more easily identify when Ramadan's night of power will occur. It is considered sunnah to perform i'tikāf for ten full days, these ten often being the last of the month of Ramadan; any ten consecutive days, however, may be chosen. I'tikāf may also be observed for three full days, often the 13th through the 15th of Ramadan, or for a full day from sunset to sunset, in each case supererogatorily. The shortest i'tikāf, also considered supererogatory, can be observed for the duration of time between two daily prayers (such as from Asr to Maghrib). Because in every case except for the last, one ought to be fasting at the same time, performing i'tikāf on Eid al-Fitr or Eid al-Adha would not be permissible. A person who wishes to take part in i'tikāf (a mu'takif) must be a sane and clean Muslim, must not cause strife within the family by taking part, and must observe it in the main masjid of the person's town or city (not any other place of congregation, though exceptions may be made in extenuating circumstances).
The person must make the appropriate intention, either on the person's own behalf or on behalf of someone else unable to do so, to commit to staying inside the masjid for the entire period chosen for i'tikāf, except for very important or unavoidable reasons (such as buying food or attending a funeral). This intention must be made at the sunset prior to the first day of i'tikāf.
Permitted activities
Most of i'tikāf consists of the persistent recitation of the Qur'an (without disturbing others if doing so loudly) and numerous du'a, whether in seeking forgiveness from Allah or otherwise, though participants may also study numerous hadith. Some who take part try to complete an entire reading of the Qur'an in the duration they have allotted for themselves. Outside of these highly regarded actions, eating (when not fasting), sleeping, changing clothes, clipping nails, trimming facial hair, and having religious discussions and lectures are generally permitted.
Prohibited activities
Much of what is prohibited in i'tikāf is often considered sacrilegious behavior in a holy time, such as speaking of worldly affairs, trying to seek pleasure in them, backbiting or otherwise telling lies, and remaining silent as a means to achieve ibadah. Beyond these, shaving facial hair, buying and selling anything, deriving pleasure from people not of one's gender, and discussions with the intent to display one's superiority in some fashion are generally prohibited. Performing any of these actions voids one's i'tikāf, and often one is required to pay kaffara to compensate. The usual actions that are prohibited in an Islamic fast are prohibited in i'tikāf.
Leaving the place of i'tikāf
A mu'takif may leave the place that the person chooses for i'tikāf for only very specific reasons.
Except when responding to another person's greeting, one who must leave the place of i'tikāf must go out and return quickly, avoiding the shade and other human contact except when absolutely necessary. • One may go out to perform wudu when the person is no longer in a state of ablution; performing it a second time (i.e. when the person is still in a state of ablution) isn't allowed. • One may take a shower, assuming one is available on the masjid's premises, only if the person has had a wet dream, a period, or something remotely similar. Though this shower may be combined with wudu, any other reason for taking such a shower isn't allowed. • One may leave for Friday prayers only right before the khutbah begins and must return right after the prayers have ended. It is highly recommended that one leave, knowing that the i'tikāf will be voided, if one is ill or must help in tackling an emergency such as a fire or a person in distress. If, where multiple people are in i'tikāf, one person must leave, then the rest are also obligated to leave in order to avoid 'usurping' the area where the first person was in i'tikāf.
Compensation for violations
If in performing i'tikāf one does something which voids it, one must
• redo it completely, if sexual relations took place;
• redo it completely, if the violation took place on any day except the last;
• redo only the time lost, if anything not involving sex organs took place, or
• redo one day's worth, if one has had a period.
See also
Ali ibn Abi Talib
1. ^ Habib Rauf. Itikaf: An Introduction. Glasgow Central Mosque. GGKEY:KDYYR0A1QE7.
2. ^ "Itikaf". Retrieved 25 April 2016.
Usenet
Usenet is a worldwide distributed discussion system available on computers. It was developed from the general-purpose UUCP dial-up network architecture. Tom Truscott and Jim Ellis conceived the idea in 1979, and it was established in 1980.[1] Users read and post messages (called articles or posts, and collectively termed news) to one or more categories, known as newsgroups. Usenet resembles a bulletin board system (BBS) in many respects and is the precursor to Internet forums that are widely used today. Usenet can be superficially regarded as a hybrid between email and web forums. Discussions are threaded, as with web forums and BBSs, though posts are stored on the server sequentially. Usenet was conceived in 1979, and publicly established in 1980, at the University of North Carolina at Chapel Hill and Duke University,[2][1] over a decade before the World Wide Web was developed and the general public received access to the Internet, making it one of the oldest computer network communications systems still in widespread use. It was originally built on the "poor man's ARPANET", employing UUCP as its transport protocol to offer mail and file transfers, as well as announcements through the newly developed news software such as A News. The name Usenet emphasized its creators' hope that the USENIX organization would take an active role in its operation.[3] In most newsgroups, the majority of the articles are responses to some other article. The set of articles that can be traced to one single non-reply article is called a thread. Most modern newsreaders display the articles arranged into threads and subthreads. When a user posts an article, it is initially only available on that user's news server. Each news server talks to one or more other servers (its "newsfeeds") and exchanges articles with them.
In this fashion, the article is copied from server to server and should eventually reach every server in the network. The later peer-to-peer networks operate on a similar principle, but for Usenet it is normally the sender, rather than the receiver, who initiates transfers. Some have noted that this seems an inefficient protocol in the era of abundant high-speed network access. Usenet was designed under conditions when networks were much slower and not always available. Many sites on the original Usenet network would connect only once or twice a day to batch-transfer messages in and out.[5] This is largely because the POTS network was typically used for transfers, and phone charges were lower at night. Usenet has significant cultural importance in the networked world, having given rise to, or popularized, many widely recognized concepts and terms such as "FAQ", "flame", and "spam".[6] The format and transmission of Usenet articles is similar to that of Internet e-mail messages. The difference between the two is that Usenet articles can be read by any user whose news server carries the group to which the message was posted, as opposed to email messages, which have one or more specific recipients.[7]
ISPs, news servers, and newsfeeds
With the rise of the World Wide Web (WWW), web front-ends (web2news) have become more common. Web front ends have lowered the barrier to entry: a reader needs only a web browser, rather than a dedicated newsreader application and an account on an NNTP server.
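Because the article format described above mirrors e-mail, standard mail-parsing tools can read Usenet headers directly. The sketch below (Python; the two articles and their message IDs are invented for illustration) parses articles with the standard-library email module and follows the References header back to the non-reply article, which is the mechanism newsreaders use to arrange posts into threads.

```python
from email import message_from_string

# Two invented articles in the e-mail-like Usenet format: headers, a blank
# line, then the body. Newsgroups, Message-ID and References are the
# Usenet-relevant headers here.
root = message_from_string(
    "From: alice@example.invalid\n"
    "Newsgroups: comp.misc\n"
    "Subject: Hello\n"
    "Message-ID: <1@example.invalid>\n"
    "\n"
    "First post in the thread.\n"
)
reply = message_from_string(
    "From: bob@example.invalid\n"
    "Newsgroups: comp.misc\n"
    "Subject: Re: Hello\n"
    "Message-ID: <2@example.invalid>\n"
    "References: <1@example.invalid>\n"
    "\n"
    "A follow-up.\n"
)

def thread_root(article, by_id):
    """Follow References back to the single non-reply article."""
    refs = article.get("References", "").split()
    # By convention, the first ID in References is the thread root.
    return by_id.get(refs[0], article) if refs else article

by_id = {a["Message-ID"]: a for a in (root, reply)}
assert thread_root(reply, by_id) is root   # the reply belongs to root's thread
assert thread_root(root, by_id) is root    # a non-reply article is its own root
```

A real newsreader does the same walk over every article it has fetched, grouping articles that share a root into one displayed thread.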
There are numerous websites now offering web-based gateways to Usenet groups, although some people have begun filtering messages made by some of the web interfaces for one reason or another.[10][11] Google Groups[12] is one such web-based front end, and some web browsers can access Google Groups via news: protocol links directly.[13]
Moderated and unmoderated newsgroups
Historically, a mod.* hierarchy existed before the Usenet reorganization.[16] Now, moderated newsgroups may appear in any hierarchy, typically with .moderated added to the group name. Unmoderated newsgroups form the majority of Usenet newsgroups, and messages submitted by readers for unmoderated newsgroups are immediately propagated for everyone to see. The tension between minimal editorial filtering and fast propagation is one crux of the Usenet community. One little-cited defense of propagation is canceling a propagated message, but few Usenet users use this command, and some news readers do not offer cancellation commands, in part because article storage expires in relatively short order anyway. Almost all unmoderated Usenet groups have become collections of spam.[18][19][20]
Technical details
The "Big Nine" hierarchies of Usenet include:
• misc.* – miscellaneous topics
• rec.* – recreation and entertainment (e.g. rec.arts.movies)
• soc.* – social discussions (e.g. soc.culture.african)
See also the Great Renaming.
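The moderation rule described above reduces to a small dispatch decision at submission time: posts to unmoderated groups propagate immediately, while posts to moderated groups are diverted to the moderator unless they already carry an Approved: header (which the moderator adds when injecting the article). The sketch below is illustrative Python, not real news-server code; the group name and return strings are invented.

```python
# Invented example: a set of group names the server knows to be moderated.
MODERATED = {"news.announce.important"}

def dispatch(newsgroup, headers):
    """Decide where a freshly submitted article goes next."""
    if newsgroup in MODERATED and "Approved" not in headers:
        # An unapproved post to a moderated group is mailed to the
        # moderator rather than injected into the news stream.
        return "forward-to-moderator"
    # Unmoderated group, or an article the moderator has approved:
    # propagate to peer servers immediately.
    return "propagate"

assert dispatch("misc.test", {}) == "propagate"
assert dispatch("news.announce.important", {}) == "forward-to-moderator"
assert dispatch("news.announce.important",
                {"Approved": "mod@example.invalid"}) == "propagate"
```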
Binary content
Binary retention time
Major NSPs have a retention time of more than 4 years.[23] This results in more than 9 petabytes (9,000 terabytes) of storage.[24]
Legal issues
[ASCII map of the early Usenet/UUCP network, linking sites such as duke, utzoo, reed, purdue, allegra, cbosgd and ucbarpa, with Berknet and ARPANET gateways. Source: The Usenet Oldnews Archive: Compilation.[34]]
Newsgroup experiments first occurred in 1979. Tom Truscott and Jim Ellis of Duke University came up with the idea as a replacement for a local announcement program, and established a link with nearby University of North Carolina using Bourne shell scripts written by Steve Bellovin. The public release of news was in the form of conventional compiled software, written by Steve Daniel and Truscott.[2][35] In 1980, Usenet was connected to ARPANET through UC Berkeley, which had connections to both Usenet and ARPANET.
Mark Horton, the graduate student who set up the connection, began "feeding mailing lists from the ARPANET into Usenet" with the "fa" ("From ARPANET"[36]) identifier.[37] Usenet gained 50 member sites in its first year, including Reed College, University of Oklahoma, and Bell Labs,[2] and the number of people using the network increased dramatically; however, it was still a while longer before Usenet users could contribute to ARPANET.[38] UUCP networks spread quickly due to the lower costs involved, and the ability to use existing leased lines, X.25 links or even ARPANET connections. By 1983, thousands of people participated from more than 500 hosts, mostly universities and Bell Labs sites but also a growing number of Unix-related companies; the number of hosts nearly doubled to 940 in 1984. More than 100 newsgroups existed, more than 20 devoted to Unix and other computer-related topics, and at least a third to recreation.[39][2] As the mesh of UUCP hosts rapidly expanded, it became desirable to distinguish the Usenet subset from the overall network. A vote was taken at the 1982 USENIX conference to choose a new name. The name Usenet was retained, but it was established that it only applied to news.[40] The name UUCPNET became the common name for the overall network. Early versions of Usenet used Duke's A News software, designed for one or two articles a day. Matt Glickman and Horton at Berkeley produced an improved version called B News that could handle the rising traffic (about 50 articles a day as of late 1983).[2] With a message format that offered compatibility with Internet mail and improved performance, it became the dominant server software. C News, developed by Geoff Collyer and Henry Spencer at the University of Toronto, was comparable to B News in features but offered considerably faster processing. 
In the early 1990s, InterNetNews by Rich Salz was developed to take advantage of the continuous message flow made possible by NNTP versus the batched store-and-forward design of UUCP. Since that time INN development has continued, and other news server software has also been developed.[42]
Public venue
Usenet was the first Internet community and the place for many of the most important public developments in the pre-commercial Internet. It was the place where Tim Berners-Lee announced the launch of the World Wide Web,[43] where Linus Torvalds announced the Linux project,[44] and where Marc Andreessen announced the creation of the Mosaic browser and the introduction of the image tag,[45] which revolutionized the World Wide Web by turning it into a graphical medium.
Internet jargon and history
Many jargon terms now in common use on the Internet originated or were popularized on Usenet.[46] Likewise, many conflicts which later spread to the rest of the Internet, such as the ongoing difficulties over spamming, began on Usenet.[47]
Sascha Segan of PC Magazine said in 2008 "Usenet has been dying for years". Segan said that some people pointed to the Eternal September in 1993 as the beginning of Usenet's decline. Segan believes that when pornographers and software crackers began putting large files on Usenet by the late 1990s, Usenet disk space and traffic increased correspondingly. Internet service providers questioned why they needed to host space for pornography and unauthorized software. When the State of New York opened an investigation on child pornographers who used Usenet, many ISPs dropped all Usenet access or access to the alt.* hierarchy.[48] In response, John Biggs of TechCrunch said "As long as there are folks who think a command line is better than a mouse, the original text-only social network will live on".[49] AOL discontinued Usenet access in 2005.
In May 2010, Duke University, whose implementation had kicked off Usenet more than 30 years earlier, decommissioned its Usenet server, citing low usage and rising costs.[50][51] After 32 years, the Usenet news service link at the University of North Carolina at Chapel Hill ( was retired on February 4, 2011.
Usenet traffic changes
Over time, the amount of Usenet traffic has steadily increased. As of 2010 the number of all text posts made in all Big-8 newsgroups averaged 1,800 new messages every hour, with an average of 25,000 messages per day.[52] However, these averages are minuscule in comparison to the traffic in the binary groups.[53] Much of this traffic increase reflects not an increase in discrete users or newsgroup discussions, but instead the combination of massive automated spamming and an increase in the use of .binaries newsgroups[52] in which large files are often posted publicly. A small sampling of the change (measured in feed size per day) follows:

Daily volume   Daily posts   Date
4.5 GB         -             Dec 1996
9 GB           -             Jul 1997
12 GB          554 k         Jan 1998
26 GB          609 k         Jan 1999
82 GB          858 k         Jan 2000
181 GB         1.24 M        Jan 2001
257 GB         1.48 M        Jan 2002
492 GB         2.09 M        Jan 2003
969 GB         3.30 M        Jan 2004
1.30 TB        -             2004-09-30
1.38 TB        -             2004-12-31
1.52 TB        5.09 M        Jan 2005
1.34 TB        -             2005-01-01
1.30 TB        -             2005-01-01
1.81 TB        -             2005-02-28
1.87 TB        -             2005-03-08
2.00 TB        -             2005-03-11 (various sources)
2.27 TB        7.54 M        Jan 2006
2.95 TB        9.84 M        Jan 2007
3.07 TB        10.13 M       Jan 2008
3.80 TB        -             2008-04-16
4.60 TB        -             2008-11-01
4.65 TB        14.64 M       Jan 2009
6.00 TB        -             Dec 2009
5.42 TB        15.66 M       Jan 2010
8.00 TB        -             Sep 2010
7.52 TB        20.12 M       Jan 2011
8.25 TB        -             Oct 2011
9.29 TB        23.91 M       Jan 2012
11.49 TB       28.14 M       Jan 2013
14.61 TB       37.56 M       Jan 2014
15.50 TB       -             Feb 2014
17.50 TB       -             Jan 2015
17.87 TB       44.19 M       Jan 2015
23.50 TB       -             Nov 2015
23.87 TB       55.59 M       Jan 2016

AOL announced that it would discontinue its integrated Usenet service in early 2005, citing the growing popularity of weblogs, chat forums and on-line conferencing.[57]
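As a rough back-of-the-envelope reading of two of the rows above (12 GB/day in January 1998, 23.87 TB/day in January 2016), the arithmetic below, which is ours and not the source's, shows the sustained growth rate of the daily feed:

```python
# Implied compound annual growth of the daily Usenet feed, from
# 12 GB (Jan 1998) to 23.87 TB (Jan 2016), treating 1 TB as 1000 GB.
start_gb = 12.0
end_gb = 23.87 * 1000
years = 2016 - 1998

factor = end_gb / start_gb          # overall growth factor over 18 years
growth = factor ** (1 / years)      # per-year multiplier
print(f"overall factor: {factor:.0f}x")
print(f"implied annual growth: {(growth - 1) * 100:.0f}% per year")
```

The feed grew by a factor of roughly two thousand over those 18 years, i.e. on the order of 50% compounded per year, which is consistent with the section's point that binary traffic, not discussion, drove the increase.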
The AOL community had a tremendous role in popularizing Usenet some 11 years earlier.[58] In August 2009, Verizon announced that it would discontinue access to Usenet on September 30, 2009.[59][60] JANET(UK) announced it will discontinue Usenet service, effective July 31, 2010, citing Google Groups as an alternative.[61] Microsoft announced that it would discontinue support for its public newsgroups ( from June 1, 2010, offering web forums as an alternative.[62] Some ISPs did not include pressure from Attorney General of New York Andrew Cuomo's aggressive campaign against child pornography as one of their reasons for dropping Usenet feeds as part of their services.[65] ISPs Cox and Atlantic Communications resisted the 2008 trend but both did eventually drop their respective Usenet feeds in 2010.[66][67][68] Public archives of Usenet articles have existed since the early days of Usenet, such as the system created by Kenneth Almquist in late 1982.[69][70] Distributed archiving of Usenet posts was suggested in November 1982 by Scott Orshan, who proposed that "Every site should keep all the articles it posted, forever."[71] Also in November of that year, Rick Adams responded to a post asking "Has anyone archived netnews, or does anyone plan to?"[72] by stating that he was, "afraid to admit it, but I started archiving most 'useful' newsgroups as of September 18."[73] In June 1982, Gregory G. Woodbury proposed an "automatic access to archives" system that consisted of "automatic answering of fixed-format messages to a special mail recipient on specified machines."[74] The huge site archives and indexes erotic and pornographic stories posted to the Usenet group Today, the archiving of Usenet has led to a fear of loss of privacy.[80] An archive simplifies ways to profile people. 
This has partly been countered with the introduction of the X-No-Archive: Yes header, which is itself controversial.[81]
Archives by Google Groups and DejaNews
Main article: Google Groups
Google Groups hosts an archive of Usenet posts dating back to May 1981. The earliest posts, which date from May 1981 to June 1991, were donated to Google by the University of Western Ontario with the help of David Wiseman and others,[83] and were originally archived by Henry Spencer at the University of Toronto's Zoology department.[84] The archives for late 1991 through early 1995 were provided by Kent Landfield from the NetNews CD series[85] and Jürgen Christoffel from GMD.[86] The archive of posts from March 1995 onward was started by the company DejaNews (later Deja), which was purchased by Google in February 2001. Google began archiving Usenet posts for itself starting in the second week of August 2000. Google has been criticized by Vice and Wired contributors as well as former employees for its stewardship of the archive and for breaking its search functionality.[87][88][89]
See also
Usenet history
Usenet administrators
Usenet celebrities
Main article: Usenet celebrity
2. ^ a b c d e Emerson, Sandra L. (October 1983). "Usenet / A Bulletin Board for Unix Users". BYTE. pp. 219–236. Retrieved 31 January 2015.
6. ^ "USENET Newsgroup Terms – SPAM". Archived from the original on 2012-09-15.
7. ^ Kozierok, Charles M. (2005). The TCP/IP Guide: A Comprehensive, Illustrated Internet Protocols Reference. No Starch Press. p. 1401. ISBN 978-1-59327-047-6.
18. ^ "The Social Machine: Designs for Living Online". "Today, Usenet still exists, but it is an unsociable morass of spam, porn, and pirated software."
19. ^ "Unraveling the Internet's oldest and weirdest mystery". "Groups filled with spam, massive fights took place against spammers and over what to do about the spam. People stopped using their email addresses in messages to avoid harvesting."
People left the net.
20. ^ "The American Way of Spam". "...many of the newsgroups have since been overrun with junk messages."
23. ^ "Giganews FAQ – How long are articles available?". Archived from the original on 2012-09-04. Retrieved October 23, 2012.
27. ^ "Cancel Messages FAQ". Archived from the original on February 15, 2008. Retrieved June 29, 2009. "...Until authenticated cancels catch on, there are no options to avoid forged cancels and allow unforged ones..."
31. ^ "Better living through forgery". Newsgroup: news.admin.misc. 1995-06-10. Archived from the original on 2012-07-24. Retrieved December 5, 2014.
35. ^ LaQuey, Tracy (1990). The User's Directory of Computer Networks. Digital Press. p. 386. ISBN 978-1555580476.
36. ^ "And So It Begins".
37. ^ "History of the Internet, Chapter Three: History of Electronic Mail".
38. ^ Hauben, Michael and Hauben, Ronda. "Netizens: On the History and Impact of Usenet and the Internet, On the Early Days of Usenet: The Roots of the Cooperative Online Culture". First Monday, vol. 3, no. 8, 3 August 1998.
39. ^ Haddadi, H. (2006). "Network Traffic Inference Using Sampled Statistics". University College London.
40. ^ Horton, Mark (December 11, 1990). "Arachnet". Archived from the original on 2012-09-21. Retrieved 4 June 2007.
43. ^ Tim Berners-Lee (August 6, 1991). "WorldWideWeb: Summary". Newsgroup: alt.hypertext. Retrieved June 4, 2007.
45. ^ Marc Andreessen (March 15, 1993). "NCSA Mosaic for X 0.10 available.". Newsgroups: comp.infosystems.wais, comp.infosystems, alt.hypertext, comp.infosystems.gopher. Retrieved 4 June 2007.
47. ^ Campbell, K. K. (October 1, 1994). "Chatting With Martha Siegel of the Internet's Infamous Canter & Siegel". Electronic Frontier Foundation. Archived from the original on November 25, 2007. Retrieved September 24, 2010.
48. ^ Sascha Segan (July 31, 2008). "R.I.P Usenet: 1980–2008".
PC Magazine. p. 2. Archived from the original on 2012-09-09. Retrieved May 8, 2011.
54. ^ Rosencrance, Lisa. "3 top ISPs to block access to sources of child porn". Computer World. June 8, 2008. Archived from the original on 2012-07-22. Retrieved April 30, 2009.
59. ^ Bode, Karl. "Verizon To Discontinue Newsgroups September 30". DSLReports. August 31, 2009. Archived from the original on 2012-07-31. Retrieved October 24, 2009.
61. ^ [dead link]
65. ^ "The Comcast Newsgroups Service Discontinued". September 16, 2008. Archived from the original on December 5, 2014. Retrieved December 5, 2014.
70. ^ "How to obtain back news items (second posting)". Newsgroup: net.general. December 21, 1982. Retrieved December 5, 2014. message-id: bnews.spanky.138
75. ^ "keepnews – A Usenet news archival system". Archived from the original on 2012-07-17. Retrieved December 14, 2010.
80. ^ Segan, Sascha (January 1, 1970). "R.I.P Usenet: 1980–2008 – Usenet's Decline – Columns by PC Magazine". Archived from the original on 2012-09-09. Retrieved December 14, 2010.
81. ^ Strawbridge, Matthew (2006). Netiquette: Internet Etiquette in the Age of the Blog. Software Reference. p. 53. ISBN 978-0955461408.
83. ^ Wiseman, David. "Magi's NetNews Archive Involvement".
86. ^ "Google Groups Archive Information". Archived from the original on 2012-07-09. (December 21, 2001)
87. ^ Poulsen, Kevin. "Google's Abandoned Library of 700 Million Titles". Wired.
88. ^ Braga, Matthew. "Google, a Search Company, Has Made Its Internet Archive Impossible to Search". Motherboard.
89. ^ Edwards, Douglas (2011). I'm Feeling Lucky: The Confessions of Google Employee Number 59. Houghton Mifflin Harcourt. pp. 209–213. ISBN 978-0-547-41699-1.
NCBI ROFL: On the purpose of the belly button. By ncbi rofl | September 8, 2011 7:00 pm Umbilicus as a fitness signal in humans. "Typically, mammalian umbilical cord forms a tiny, stable, and asymmetrical scar. In contrast, humans have a clearly visible umbilicus that changes with age and nutrients gathered. Based on this, I propose that umbilicus, together with the surrounding skin area, is an honest signal of individual vigour. More precisely, I suggest that the symmetry, shape, and position of umbilicus can be used to estimate the reproductive potential of fertile females, including risks of certain genetically and maternally inherited fetal anomalies. The idea is supported by a comparative study where symmetrical t-shaped and oval-shaped umbilici of fertile females were considered the most attractive. Further support comes from observations that abnormal velocity of umbilical cord has been associated with fetal brain development, diabetes, and other fitness-related properties with a strong genetically or maternally inherited component. In addition, umbilicus and the umbilical skin area may reveal nutrimental competitive ability, and need for social care in small children and pregnant females. The novel hypothesis explains why umbilicus has aesthetic value, and why umbilicus has had a distinctive role in different cultures. If further research confirms the signalling hypothesis, female umbilici may be routinely measured to detect risk pregnancies of several fetal abnormalities." Image: flickr/jessicafm Related content: Discoblog: NCBI ROFL: Attack of the belly button lint! Discoblog: NCBI ROFL: The nature of navel fluff. Discoblog: NCBI ROFL: An explanation for the shape of the human penis. WTF is NCBI ROFL? Read our FAQ! CATEGORIZED UNDER: analysis taken too far, NCBI ROFL
Top Definition
An individual who takes part in certain activities or attends particular meetings because they are trendy or popular, usually without regard to whether or not such activities or the subject matter of such meetings interest or pertain to them at all.
Marla Singer is such a fucking tourist for attending multiple affliction-specific support group therapy sessions when she doesn't even have any of those diseases.
by dalbort, June 21, 2004

Foreign visitors who come to see popular sites and attractions... but are often side-tracked by even simpler things... (a telephone pole, a school bus, a bird, a sandwich). Signs of their attraction towards native features: excessive photographing, pointing of fingers.
Tourist guide: Here we have an active volcano... it's called Killa wayyah...
Tourist: Umm, excuse me!
Tourist guide: Yes, ma'am?
Tourist: What's that thing you're holding in your hand?
Tourist guide: This? This is a sandwich; it would be my lunch.
Group of Tourists: OoOoooh! Ahhhh.... (snapshots)
Tourist: May I have a taste?
by james eakmon, November 17, 2003

(Btw, I live in Alaska.)
Tourist: What sea level are we at?
Local: About two feet.
Tourist: What's that big lake over there?
Local: That would be the ocean.
Tourist: How do you say this word in Alaskan?
Local: I have no idea.
Tourist: Don't you speak Alaskan or Eskimo?
by Jordan<3, August 14, 2006

Any individual who displays no regard or knowledge for the unwritten rules of a location or city. Tourists in London, for example, frequently stand on the left hand side of escalators on the tube (a serious no-no), stand directly in front of the train doors with large rucksacks (again, you just don't do this) and take photographs of themselves halfway out of a telephone box as though they were appearing in a Broadway musical (does anyone do this anywhere at all?).
"What's that? Stand on the right? No, I think I like the left hand side better. Who cares about those other jerks wanting to get by?
I'm a tourist, they should wait." - the primary cause of rioting on the London Underground.
by Mr Ben, February 10, 2005

1. An annoying sort of people who vacation (invade) someone else's living space. They are often found in tropical locations and travel in swarms. But the worst of the tourists plague Cape Codders with their presence. As soon as June rolls around, the beaches are crowded and littered upon, the roads are filled with countless accidents because of the Tourists' legendary LACK of driving capabilities, and local stores, like Cuffy's and Wings, actually have customers! Tourists are often able to be noticed by their apparent lack of fashion sense (often seen in socks & sandals, a common favorite, or better yet, a cheesy Hawaiian T-shirt paired with baggy cargo shorts. The women prefer to have fanny packs and visors attached to them, and often hold their young offspring on leashes.) Most of the Tourists on Cape Cod enjoy stopping at "interesting places" such as The Sandwich Glass Museum or many of the lighthouses that scatter the eroding shoreline. (Like the locals haven't grown bored of that since the FIRST time they were forced to appreciate them.) Also, they have habits of stopping at crowded restaurants or stores, which are filled to the max with Tourists of course, and asking how to get on 'scenic' Route 6A, which oftentimes they are already driving on. They are recognizable by their horrible speech (the word 'wicked' is not a part of their limited vocabulary). Many of the locals enjoy scouting for the hot Tourist, the few in millions, and often partake in Cape Codders' favorite pastime: Tourist Tricking. With the locals' help, the Tourists may end up stranded on a beach, in a rented car, or stuck in one of our many cranberry bogs. Tourists are often the cause of the Cape Codders' deepest summertime woes, from clogging the beaches, to clogging the streets, and clogging, well, basically everything.
But when Labor Day rolls around, and all that is left are the footprints in the sand, and the cash registers full of cash, the locals are able to withstand the summers, in hopes of surviving the tough vacant Cape Cod winters with the cash the Tourists supplied them with in the summer. In many ways, Tourists are like cicadas. They come in swarms in the summertime, the locals HATE them for eating everything and making it impossible to be outdoors without immediate frustration, but once their epic plunder is over, the locals reminisce about the times they had smacking them around.
"It's Tourist Season!"
"Clear the beaches! The Tourists are coming!"
"Route 6A? Hmm... Just take a left on this road... You say it's a dirt road? Well, that's okay, it's my little shortcut!"
"Thank god the Tourists are gone. I couldn't stand them walking around, digital cameras in hand, taking pictures of every rock, tree, lighthouse, and grain of sand on CAPE COD!"
by CapperCodder, January 16, 2009

A group of people nobody knows where they come from. Tourists love warmth, fjords, bad music, and funny hats. Tourists are people who spend much of their time in a place other than home. Tourists are always where they're not supposed to be.
See. Tourist. Run.
by The boy who picked flowers and made people sing, May 21, 2003

A person who's not completely into what they are doing, or someone who does something just to be cool.
"Jim's coming boarding with us."
"He can't even stand up on a board, he's such a fucking tourist."
by sittingbourneinnit, December 5, 2003
Definition of Brave

Babylon English
American Indian warrior
courageous; handsome
defy; face with courage; endure with courage

Brave Definition from Arts & Humanities Dictionaries & Glossaries
English-Latin Online Dictionary

Brave Definition from Language, Idioms & Slang Dictionaries & Glossaries

Webster's Revised Unabridged Dictionary (1913)
(v. t.) To encounter with courage and fortitude; to set at defiance; to defy; to dare.
(v. t.) To adorn; to make fine or showy.
Making a fine show or display.
Having any sort of superiority or excellence; -- especially such as is conspicuous.
Bold; courageous; daring; intrepid; -- opposed to cowardly; as, a brave man; a brave act.
Specifically, an Indian warrior.
A man daring beyond discretion; a bully.
A challenge; a defiance; bravado.
A brave person; one who is daring.

hEnglish - advanced version
\brave\, n.
1. A brave person; one who is daring. "The star-spangled banner, O, long may it wave / O'er the land of the free and the home of the brave." - F. S. Key.
2. Specifically, an Indian warrior.
3. A man daring beyond discretion; a bully. "Hot braves like thee may fight."
4. A challenge; a defiance; bravado. [Obs.] "Demetrius, thou dost overween in all; and so in this, to bear me down with braves."
\brave\ (brāv), a. [compar. braver; superl. bravest.] [F. brave, It. or Sp. bravo, (orig.) fierce, wild, savage, prob. from L. barbarus. See Barbarous, and cf. Bravo.]
1. Bold; courageous; daring; intrepid; -- opposed to cowardly; as, a brave man; a brave act.
2. Having any sort of superiority or excellence; -- especially such as is conspicuous. [Obs. or archaic as applied to material things.] "Iron is a brave commodity where wood aboundeth." "It being a brave day, I walked to Whitehall."
3. Making a fine show or display. [Archaic] "Wear my dagger with the braver grace." "For I have gold, and therefore will be brave. In silks I'll rattle it of every color."
"Frog and lizard in holiday coats / And turtle brave in his golden spots."

similar words(1): brave out

Concise English-Irish Dictionary v. 1.1
cróga, cródha, calma, dána
brave danger: téidhim i gconntabhairt

English Phonetics

JM Welsh <=> English Dictionary
Anlew = a. not brave; not clever
Anwych = a. not brave, infirm
Dewr = n. a brave one, a hero; a. brave, bold; stout
Ewn = a. daring, bold, brave
Galawnt = a. fair, brave, gallant
Glew = n. a resolute man; a. persevering; brave
Gorddewr = a. brave to excess, foolhardy
Gorwych = a. very brave
Gowych = a. somewhat brave
Gwech = a. brave; fine, gay
Gwych = a. gallant, brave; gaudy
Gwychr = a. valiant, brave

Shakespeare Words

Australian Slang
to have a go at something others think or find difficult

WordNet 2.0
1. a North American Indian warrior
(hypernym) warrior
2. people who are brave; "the home of the free and the brave"
(antonym) timid, cautious
(hypernym) people
(derivation) weather, endure, brave out
1. face or endure with courage; "She braved the elements"
(synonym) weather, endure, brave out
(hypernym) defy, withstand, hold, hold up
1. possessing or displaying courage; able to face and deal with danger or fear without flinching; "Familiarity with danger makes a brave man braver but less daring" - Herman Melville; "a frank courageous heart... triumphed over pain" - William Wordsworth; "set a courageous example by leading them safely into and out of enemy-held territory"
(synonym) courageous, fearless
(antonym) cowardly, fearful
(similar) desperate, heroic
(see-also) adventurous, adventuresome
(attribute) courage, courageousness, bravery
2.
(synonym) audacious, dauntless, fearless, intrepid, unfearing
(similar) bold
3.
brightly colored and showy; "girls decked out in brave new dresses"; "brave banners flying"; "`braw' is a Scottish word"; "a dress a bit too gay for her years"; "birds with gay plumage"
(synonym) braw, gay
(similar) colorful

Brave Definition from Encyclopedia Dictionaries & Glossaries

English Wikipedia - The Free Encyclopedia
Brave(s) or The Brave(s) may refer to:
Common meanings
• an adjective for one who possesses courage
• an American Indian warrior
See more at Wikipedia.org...

Brave Definition from Society & Culture Dictionaries & Glossaries
5 oz. tequila
2 1/2 oz. kahlua
Stir in a highball glass.

Brave Definition from Entertainment & Music Dictionaries & Glossaries
English - Klingon
n. yoHwI' - the brave one
v. yoH; Sub (slang)

Brave Definition from Medicine Dictionaries & Glossaries
A Basic Guide to ASL
Both '5' hands are placed palms against the chest. They move out and away, forcefully, closing and assuming the 'S' position.
Open Access

Apocalypse now?
Genome Biology 2012, 13:151
DOI: 10.1186/gb-2012-13-3-151
Published: 30 March 2012

Revelation 6:12 (King James Version)

Every religion has a myth about the end of the world. My personal favorite is Ragnarök, the end-time of the old Norse mythology. Three terrible winters, with no summer in between, will occur in succession, and then a great wolf will devour the sun, and his brother will eat the moon, bringing darkness to the earth. The inhabitants of hell, with Loki at their head, will join with the giants to attack the gods. Heimdallr, guardian of Bifrost, the rainbow bridge to Asgard, will sound his horn and call the gods to battle. On the battlefield, he and Loki, mortal enemies since the beginning of time, will kill each other. Finally, after Odin and Thor have been killed, the fire giant Surt will incinerate the universe. The myth goes on to predict that a new world will arise after this, with the rebirth of the gods, who will live in harmony with men. (That doesn't sound so bad, does it? Except for that incineration-of-the-universe part. That sounds bad.) Note that in this story, just as in the verse from Revelation, there are specific signs that indicate the end is near. That is a common feature of nearly every apocalyptic myth. These signs are occasionally good things, but most often are bad things - typically so unusual, or so terrible, that they can only be comprehended as portending the obliteration of everything. Christians, in particular, have been watching for such signs since the first century AD, because early Christianity was very much an eschatological faith, and most of its adherents were convinced that the Second Coming was imminent. (Some modern evangelical Christians believe the same thing, which can lead them to ignore problems like global warming on the grounds that the world will end before such problems become serious. Unfortunately, their numbers included a few policy makers in the recent Bush administration.)
Most Hindus believe that we are living in the Kali Yuga, the last of the four periods that make up the current age. According to this belief, each period has seen a successive degeneration in the morals and character of human beings, to the point that now, in the Kali Yuga, conflicts and hypocrisy are prevalent. (Sounds a bit like the Republican presidential debates, doesn't it?) This is taken as a sign that soon Shiva will dissolve the world. (I think we can all agree: dissolving the world would be bad.) In Islamic tradition, the Prophet Muhammad listed a number of signs of the apocalypse, including this one: "When the most wicked member of a tribe becomes its ruler, and the most worthless member of a community becomes its leader, and a man is respected through fear of the evil he may do, and leadership is given to people who are unworthy of it, expect the Day of Judgment." (I suspect that, no matter what country you live in and who's in charge, these signs may sound worryingly familiar.) And then, of course, there's the alarming 'fact' that the Mayan calendar is scheduled to run out on 21 December 2012. According to Wikipedia (reliance on which, by students and columnists, may be another sign of the apocalypse), this date is regarded as the end-date of a 5,125-year-long cycle. Various astronomical alignments and numerological formulae have been proposed as pertaining to this date, though none have been accepted by mainstream scholars (which means anyone with half a brain). An entire movie, the disaster film 2012, was based on the idea that the world will end this year. (By 'disaster film', I mean a film about disasters, not a film that is itself a disaster, although in this case both are applicable.) Mexican tourist industry officials are actually planning to exploit the idea as a way of encouraging visits to Mayan ruins. 
A recent burst of solar flare activity, which can damage some communications satellites and electronic devices, is taken by some as further evidence that a cosmic catastrophe is imminent. I'm told, however, that this whole notion is actually based on a mistake, and that scholars of ancient Mayan culture insist there is no specific 'end' to the calendar. (Of course, since there haven't been any ancient Mayans around for at least a thousand years, there's no one to ask about this who would really know. So, just in case, I intend to send my Christmas presents out early this year, and I suggest you do the same.) The disturbing thing about many of these so-called apocalyptic signs is that they can be viewed as already having taken place, which would mean that the clock is ticking, and the alarm is set to go off soon. In that spirit, I thought I would share with you a few of my own favorite recent signs of the apocalypse. Some are taken from science, including the world of genomics. Others are just taken from the world around us. 1) The Financial Times reported last year that the price of stock in Berkshire Hathaway, Warren Buffet's company, jumps every time the actress Anne Hathaway gets a lot of media play. According to the FT, this is due to so-called robotrading algorithms, automatic stock-trading computer programs that now account for about 70% of all stock transactions. Evidently some computer genius programmed these algorithms to track trends in news coverage, but forgot to tell them how to distinguish between a company with over $380 billion in assets and an actress with quite different, albeit equally impressive, assets. 2) Speaking of women, Sports Illustrated magazine, which has reported a Sign of the Apocalypse weekly for years, provides this one in the 30 January 2012 issue: "Corner Canyon High, a new school set to open in Draper, Utah, in 2013, had its request for a team nickname, the Cougars (the No. 
1 choice in a poll of future students), rejected by the school board on the grounds that it would be offensive to some middle-aged women." I presume this means that the University of Notre Dame football team, whose nickname is the Fighting Irish, may soon be asked to change it to something like the Fighting Sparrows. Or would that be considered offensive to birds? 3) The social media site Facebook has over 850 million users - that's more users than the total population of the United States, Indonesia and Brazil, put together. At present rate of growth, by 2013, if Facebook were a country, it would be China. Imagine what that statistic means: by next year, roughly one in every seven people on the planet will be a Facebook user. And of those 1,000,000,000 Facebook pages, 999,999,999 will still be mind-numbingly boring. By the way, Facebook is planning to make an Initial Public Offering (IPO, in Wall Street jargon) of its stock. Analysts - en masse, possibly the dumbest creatures on earth - estimate it will be valued at $100 billion. That would be 100 times its earnings, when the average company is trading at 12 times earnings. It should be noted that Apple and Microsoft also had IPOs with staggering multiples (Apple's was at slightly over 100 times earnings). But then, Apple and Microsoft actually make things that people buy. 4) Speaking of social media, according to a recent USA Today article, "a growing number of theaters and performing groups across the country are setting aside 'tweet seats,' in-house seats for patrons to live-tweet during performances. Jumping on this most dubious of bandwagons are, among others, the Carolina Ballet in Raleigh, NC, and the Dayton Opera in Dayton, Ohio. Rick Dildine, the executive director for Shakespeare Festival St. Louis - an outdoor theater festival that began using tweet seats two years ago - said tweet seats have 'become a national trend'." 
5) A number of scientists have proposed that we should sequence the genome of every species of organism on the planet. Of course, no one knows how many there are (the current count of known species is around 2 million), but estimates range up to 100 million. If we assume an average cost per genome of $1,000 for a good quality sequence, such a project could cost up to $100 billion, which, by coincidence (or is it a coincidence?) is the probable value of the Facebook IPO. By the way, you get one guess who those scientists are. That's right, they're genome sequencers. I guess when what you do is assembly-line science, you have to find ever more creative ways to stay in business. My other guess is that we would get a much better return on the investment if we just took the $100 billion and bought out all the shares of Facebook. 6) 41% of Americans believe Jesus Christ will return by the year 2050. Since more than 32 million Americans will be over 80 years of age by then, He should feel right at home, as He will be well past His 2000th birthday. 7) Speaking of old age, President Obama has announced a War on Alzheimer's Disease. He intends for the US to have found a treatment for this devastating, age-related neurodegenerative disorder, which currently afflicts more than 5 million Americans, by 2025. That would be good news, not a sign of the apocalypse, except for one small detail: he has requested only around $50 million in new funding for the war this coming year. Now $50 million might sound like a lot of money, and in a sense it is - right now government support of Alzheimer's research amounts to about $600 million a year, so that's an 8% increase - however, the budget for NIH-sponsored HIV/AIDS research is over $2.5 billion annually, which means that current funding for Alzheimer's research is, on a per US patient basis, about 20-30 times less than funding for AIDS research, and that imbalance won't change by much. 
So the President's announcement, welcome though it may be, is sort of like ordering, "Forward, march!" to an army of soldiers who have only one leg to stand on. 8) A number of sportswriting pundits - en masse, possibly the second dumbest creatures on earth - are predicting that the Boston Red Sox will win baseball's World Series this October. For those of you who don't follow baseball, let me simply say that is akin to predicting smooth sailing for the Titanic. 9) I once saw a mediocre 1950s B movie called It Came From Outer Space (these days considered a cult classic, which may itself be a sign of the apocalypse). Apparently, It has arrived, and It is - a quasicrystal. A press release dated 12 January 2012 from Princeton University announced that a team of scientists had established that the only known naturally occurring quasicrystal was not formed on earth, but came here as part of a meteorite found in the Koryak Mountains of Chukotka in far eastern Russia. (As an aside, have you noticed that weird stuff is always found in some inaccessible part of far eastern Russia? Why don't we ever find weird stuff in far eastern Brooklyn? I mean, besides the people who live there.) Quasicrystals are solids that are ordered but aperiodic; they lack translational symmetry in at least one dimension. That's right, the alien civilization that actually is bombarding us from outer space, as opposed to the ones that only do so in B movies, evidently takes such a dim view of our planet that it can't even be bothered to pelt us with real crystals. Maybe they heard about my next sign: 10) According to a recent study by Cornell University Professor of Government Suzanne Mettler, many beneficiaries of US government programs seem confused about what government is for. She tells us that 44% of Social Security recipients, 43% of those receiving unemployment benefits, and 40% of those on Medicare say that they 'have not used a government program'. 
Perhaps they think their benefits are being dispensed by space aliens (the same ones that are dropping quasicrystals?). In any case, this degree of cluelessness may go a long way towards explaining my final sign: 11) News on Scientists Discover Moderate Republican - Species Previously Thought Extinct The reported sighting today, in a remote region of Washington state, of a moderate Republican has provoked skeptical reactions from other scientists and locals. "We get these reports every four years or so," said Bedford County Sheriff Horace Jones. "They always turn out to be hoaxes or misidentification of something else, like a Democrat." As biologist Greg Petsko remarked, "A number of people have claimed to see one, but scientists are pretty sure the last moderate Republican died out in the Triassic era, when the giant Thesaurus roamed the earth." OK, yes, I made that up. But how about this one: Rick Santorum Declared Winner of Iowa Caucuses. Unfortunately, that one is real. That's right: on 3 January 2012, a bunch of rural right-wing nutcases actually thought that an even bigger right-wing nutcase who believes in neither evolution nor global warming, wants to make contraception illegal, and thinks women with children shouldn't work outside the home, would make a dandy President of the United States - even dandier, evidently, than Mitt Romney, who has more money than God, or Newt Gingrich, who thinks he is God. To call Santorum a dinosaur is an insult to dinosaurs. And it doesn't end there: on 7 February, former Pennsylvania Senator Santorum won three more Republican primaries, in the seemingly sane states of Minnesota, Missouri and Colorado. My friends in those states have tried to reassure me that turnout was low and, as one of them put it, "only the crazies voted" in those elections. Maybe they're right, but I take little comfort in that. 
My South African friends tell me that high turnout by 'conservative rural crazies' was the reason for the victory, in the 1948 election in that country, of the National Party. The result was the disgrace of apartheid and domination of the country's politics by right-wing religious racists for the next forty-six years. Lest you think I do the man a disservice, consider this comment Santorum made: "One of the things I will talk about, that no president has talked about before, is I think the dangers of contraception in this country. It's not okay. It's a license to do things in a sexual realm that is counter to how things are supposed to be." Makes me wonder if, were we fortunate enough to finally develop a vaccine against HIV/AIDS, he would refuse to have his children - or anyone else's - vaccinated on the grounds that to do so would give them a 'license to do things in a sexual realm'. Santorum, by the way, is among those religious conservatives who assert that the Founding Fathers intended the United States to be a Christian nation. What that suggests to me is, not only would he fail Biology 101 for his anti-evolution beliefs, he would also fail American History 101. Many of the Founding Fathers were actually atheists, and of those who weren't, a large number were Deists. Deism, whose closest modern equivalent is Unitarianism, has as one of its major tenets the denial that Jesus Christ was God. If Mr Santorum did his homework, he would probably be horrified to learn that, had the Founding Fathers intended this country to have any state religion (which they absolutely did not - try reading the Constitution, Mr Santorum), it would almost certainly have been a religion that he and his fellow Christians regard as a heresy. 
By the way, Philadelphia Inquirer science writer Faye Flam, whose column/blog on evolution should be required reading for every genome biologist, has a wonderful piece about the possible evolutionary basis for the success of the Santorums of the world. You can read it at Believe me, if Rick Santorum's popularity is a sign of the apocalypse, his election to the presidency would be the apocalypse. So for you apocalypse-watchers, let me suggest that, in addition to looking out for weird social trends and strange behavior in the scientific community, you pay close attention to the coming US presidential election. Its outcome might say a lot about whether, in Muhammadan terms, we are living in a time when "leadership is given to people who are unworthy of it". In this election, that may be a real possibility. Or you could just wait for the sun to become black as sackcloth of hair. I'm not sure what that means, but it can't be good. Authors’ Affiliations Rosenstiel Basic Medical Sciences Research Center, Brandeis University © BioMed Central Ltd. 2012
Clear Light of Day Test | Mid-Book Test - Medium
Buy the Clear Light of Day Lesson Plans
Name: _________________________ Period: ___________________

Multiple Choice Questions
1. What does Bim return with from Hyder Ali Sahib's house?
(a) Books, a dog and a car
(b) Furniture, money and some letters
(c) A dog, a servant, and the gramophone
(d) A dog, a cat, and some food
2. Why are the Misras sisters still living at the family home?
(a) They want to be with their brothers
(b) They have a home-based internet business
(c) Their husbands have abandoned them
(d) They were wounded in the war
3. What sort of birds begin their calls before daylight around the family's home?
(a) Koels
(b) Hawks
(c) Seagulls
(d) Cardinals
4. What does Bakul tell people who ask detrimental questions about India?
(a) No comment
(b) I will check for you and find out
(c) Mind your own business
(d) I don't know the answer
5. Bakul is waiting for them on the veranda. What music greets the sisters when they join Bakul?
(a) Aunt Mira playing the flute
(b) Bakul's transistor radio
(c) Country music from the Misras home
(d) Baba's old 78 records

Short Answer Questions
1. What language does Raja admire the most?
2. Who does Raja suggest to take over the family business?
3. Who did their brother Raja admire above all?
4. Why can't they visit more often?
5. What happens to the mother?

Short Essay Questions
1. Why are Muslims no longer safe in India? What has caused this problem?
2. Why does Raja choose Hindu College over his original choice?
3. Why is Raja so upset about the whereabouts of the family of Hyder Ali Sahib?
4. Why do you think Bim believes she is seeing Aunt Mira's ghost in the garden?
5. Describe the way Bim speaks to Tara, to Bakul and to Baba.
6. Describe Bim as an adult woman, including her attitude towards her family members as well as her appearance.
7. Describe the family home and the feeling you get about it.
8. What is Tara's purpose in coming to the family home?
9.
How does Bim feel about her youth? Why do you think she feels that way?
10. Why do the children think of their father as the master of the entrance and the exit?
(see the answer keys)
This section contains 584 words (approx. 2 pages at 300 words per page)
Clear Light of Day from BookRags. (c)2016 BookRags, Inc. All rights reserved.
A few months ago, Randall Munroe of the webcomic xkcd published a description of the Saturn V rocket using only the 1000 most frequent words in English. Under this restriction, the rocket was called "up-goer five," the command module was "people box," and the liquid hydrogen feed line was "thing that lets in cold wet air to burn." The comic inspired Theo Anderson, a geneticist who supports accessible science education, to build a text editor that would force the user to write with only the 1000 most frequent words. He then invited scientists to describe what they do using the editor. Geologists Anne Jefferson and Chris Rowan created the Tumblr "Ten Hundred Words of Science" to collect examples of scientific text rendered into up-goer five speak. From the site, here are examples of up-goer simplified science from 18 different fields.
Image Processing Basics in C#
16 Nov 2008, CPOL
This article demonstrates the utilization of C# for basic image processing algorithms.

Languages like C# and Java are very easy to use, and they present a lot of features for application-layer development. This means that they can easily access peripheral devices and, at the same time, make GUI (graphical user interface) development much easier. However, it is often said that they are not good for signal processing applications because of the overhead and the limited possibility of optimization. In such applications, the code usually runs much more slowly than comparable C/C++ code. We know this is a natural result. However, there are certain applications which are not very speed-demanding. Some examples include educational software, algorithm test software, or integration with existing applications. For these reasons, signal processing, or in this case image processing, in C# may be a good idea. In this article I won't describe complicated image processing algorithms, but I will describe how one can implement these algorithms in C# in an efficient way, using simple examples such as thresholding, gray scale conversion and connected component analysis. This is intended as an introduction for people who like signal processing and at the same time want to write test code in C# to make things easier.

Thresholding is binarizing the image in such a way that values bigger than a threshold will be set to 255 (the maximum pixel value in a byte) and pixels with smaller intensities will be set to 0 (black). It is a very important operation that is often used to prepare images for vectorization, further segmentation or use as guide layers in the creation of drawings.
It can be used with raster data images to set off ranges of values that may then be used for subsequent analysis or as selection masks. Some people refer to it as foreground and background separation. Here is a result of Lena thresholded:

Gray Scale Conversion

Gray scale conversion is the setting of all pixels of an image to a weighted average of the values in the different color channels. The conversion is done through the mapping:

GRAY = (byte)(.299 * R + .587 * G + .114 * B);

Connected Component Analysis

Connected component labeling is used in computer vision to detect unconnected regions in binary digital images, although color images and data with higher dimensionality can also be processed. When integrated into an image recognition system or human-computer interaction interface, connected component labeling can operate on a variety of information. (Wikipedia definition.) Connected component labeling works by scanning an image, pixel by pixel (from top to bottom and left to right), in order to identify connected pixel regions, i.e. regions of adjacent pixels which share the same set of intensity values V. (For a binary image V = {1}; however, in a gray-level image V will take on a range of values, for example V = {51, 52, 53, ..., 77, 78, 79, 80}.)

Pointer Arithmetic

Image pixels are represented as a 1D array in memory, and we access them through pointers. While reading the rest of the code, one should keep in mind that pointers are really just addresses. Even though the image is a 2D structure, for convenience, ease of processing and speed, it is a common technique to represent it as a 1D structure. In such a case, the image can be indexed as:

index = y * stride + x * channelCount

Because of the linear structure of the memory, a pointer is used linearly to index the image using the given formula. Languages like C/C++/C# and even assembler allow us to use arithmetic operations on pointers.
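The weighted-average mapping above can be turned into a minimal, self-contained routine. The following is a sketch of my own in the slow GetPixel/SetPixel style (not code from the article); the class and method names are illustrative only:

```csharp
using System.Drawing;

static class GrayScaleDemo
{
    // Slow but simple grayscale conversion using the mapping from the text:
    // GRAY = .299 * R + .587 * G + .114 * B.
    // Every GetPixel/SetPixel call pays function-call overhead, which is
    // exactly what the pointer-based approaches discussed later avoid.
    public static void ToGray(Bitmap bmp)
    {
        for (int y = 0; y < bmp.Height; y++)
        {
            for (int x = 0; x < bmp.Width; x++)
            {
                Color c = bmp.GetPixel(x, y);
                byte gray = (byte)(.299 * c.R + .587 * c.G + .114 * c.B);
                bmp.SetPixel(x, y, Color.FromArgb(gray, gray, gray));
            }
        }
    }
}
```

Because the three weights sum to 1.0, a pixel that is already gray (R = G = B) maps to itself, which makes the routine easy to sanity-check.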
For example:

int x[5];
int* p = &x[0]; // get the address of x[0]
p = p + 2;      // advance the pointer by two int elements, so *p now accesses x[2]

Using the Code

Now we will go step by step, showing how we could write a simple image processing routine in C#. The first version is very easy to write, yet undesirable for an image processing routine: the GetPixel and SetPixel methods have several drawbacks. First of all, they are functions, so every pixel access and modification pays a function-call overhead. In many cases we would rather modify the image through a pointer operation, not a function call. The next example uses the "unsafe" block in C#. Inside unsafe blocks, we have access to pointers from C#. But don't confuse them with the pointers you are familiar with from C/C++: these pointers don't address unmanaged memory directly; what they address is related to the intermediate language, which has its own (soft) instructions and its own rules. You can read more about this in many articles. The conclusion is that pointers in unsafe blocks are not as fast as native pointers. Here is how one can implement it. You see three important issues here:

1. The LockBits and UnlockBits functions
2. byte* access
3. The p[0] operation

Notice that we have a loop from 0 to Width * ChannelCount * Height, which is the total number of bytes stored in memory. We then increment the pointer with ptr += 3 so that we can process each color pixel. When using GDI+, one should be careful about two important facts:

1. Pixels are ordered BGRBGRBGR... instead of RGBRGBRGB... This is one of the strange things Microsoft has come up with.
2. The image is stored flipped (bottom-up). This is not crucial when using the techniques described in this article, but it is important when one needs to go more native.

Most people who do image processing work in C# use the coding convention described above. Unfortunately, this style has some unoptimized operations.
For example, we have a y value counting from 0 to imageSize, but it is not used inside the loop; meanwhile, the pointer is incremented by 3 each time y is incremented. To overcome this, we can go further with pointer arithmetic. Here is an example:

private unsafe void Convert2GrayScaleFast(Bitmap bmp)
{
    BitmapData bmData = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height),
                                     ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
    byte* p = (byte*)(void*)bmData.Scan0;
    byte* stopAddress = p + bmData.Stride * bmData.Height;
    while (p != stopAddress)
    {
        // Pixels are stored BGR: p[0] = blue, p[1] = green, p[2] = red
        p[0] = (byte)(.299 * p[2] + .587 * p[1] + .114 * p[0]);
        p[1] = p[0];
        p[2] = p[0];
        p += 3;
    }
    bmp.UnlockBits(bmData);
}

In this example, we calculate the start and end addresses of the image, treating it as a 1D linear array, and increment the pointer from the start address to the end address, reading the values in between. If one measures the computation time of these three routines (slow, well-known, and fast), one will see that there is a huge difference between them. If you download the code, you will see the measured times in message boxes.

Connected Component Labeling Algorithm

This algorithm is the stack implementation of the general recursive connected component labeling algorithm. You can use it for many applications and get good results. I am not sure that it's the best implementation, but the results seem satisfactory. The implementation is very similar to Tarjan's strongly connected components algorithm; however, it has been generalized for images. The pseudo-code is as follows:

Input: Graph G = (V, E), start node v0

index = 0                   // DFS node number counter
S = empty                   // An empty stack of nodes
tarjan(v0)                  // Start a DFS at the start node

procedure tarjan(v)
    v.index = index         // Set the depth index for v
    v.lowlink = index
    index = index + 1
    S.push(v)               // Push v on the stack
    forall (v, v') in E do  // Consider successors of v
        if (v'.index is undefined)      // Was successor v' visited?
            tarjan(v')      // Recurse
            v.lowlink = min(v.lowlink, v'.lowlink)
        elseif (v' in S)    // Is v' on the stack?
            v.lowlink = min(v.lowlink, v'.lowlink)
    if (v.lowlink == v.index)           // Is v the root of an SCC?
        print "SCC:"
        repeat
            v' = S.pop
            print v'
        until (v' == v)

I hope this article will help someone speed up their image processing algorithms. I don't know if this will happen or not, but if Microsoft integrated an inline assembler (native assembler, not intermediate-language assembler) into C#, we would be able to write algorithms almost as fast as C++ code, if not faster.

• 16th November, 2008: Initial post

About the Author
Tolga Birdal, CEO, Gravi Information Technologies and Consultancy Ltd, Turkey. "I admire performance in algorithms."
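The stack-based connected component labeling described in the article can be illustrated language-neutrally. Here is a sketch in Python (an assumption for illustration; the article's implementation is in C#, and the function name and 4-connectivity choice are mine):

```python
# Stack-based connected component labeling for a binary image.
# `img` is a list of rows; foreground pixels are 1, background 0.
# Uses 4-connectivity (up, down, left, right neighbors).
def label_components(img):
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] == 1 and labels[y][x] == 0:
                next_label += 1
                labels[y][x] = next_label
                stack = [(y, x)]          # explicit stack instead of recursion
                while stack:
                    cy, cx = stack.pop()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            stack.append((ny, nx))
    return labels, next_label
```

The explicit stack replaces the recursion of the naive flood-fill formulation, which avoids stack overflow on large connected regions.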
History of the Concord Grape

Growers Cooperative Grape Juice Company is headquartered in the heart of the Lake Erie Concord Grape Belt, the oldest and largest Concord grape growing region in the world! To take a virtual tour of the Concord Grape Belt, visit ww.concordgrapebelt.org and click on the tourism page.

Commercial production of grapes dates all the way back to the year 1000 B.C., but it was not until 1854 that the Concord variety was introduced to the market. The Concord grape is named after the Massachusetts village of Concord, where the first vines were originally cultivated. It is an extremely robust and aromatic grape, derived from wild native species growing throughout New England in the most rugged soils. Through experimentation with native seeds, Boston-born Ephraim Wales Bull created the Concord grape in 1849. At his farm outside the village of Concord, Bull planted some 22,000 seedlings before he produced the grape he was looking for: the Concord. Throughout his experimentation he was determined to produce a variety that ripened early and possessed a full-bodied flavor, and the Concord was a perfect fit for his expectations. In 1853, Bull felt that his new variety should be put before the public, and he ended up winning first prize at the Boston Horticultural Society exhibition. News of Bull's variety spread worldwide, and he became known as "the father of the Concord grape". He sold cuttings of his grape for $1,000 apiece yet unfortunately died a poor man. His tombstone reads, "He sowed - others reaped".

The first non-fermented Concord grape juice was processed in 1869 by a New Jersey dentist named Dr. Thomas Welch. Dr. Welch and his family gathered up 40 pounds of Concords from a trellis in front of their house. In the kitchen of their home, Dr. Welch cooked the grapes for a few minutes, squeezed the juice out with cloth bags, and poured the world's first fresh Concord grape juice into twelve one-quart bottles on the kitchen table. Dr. Welch then preserved his juice by stoppering the bottles with waxed corks and boiling them in water, killing the native yeasts that would otherwise cause the juice to ferment. This method of preservation was a success, and his use of Louis Pasteur's theory of pasteurization marks Dr. Welch as the pioneer of processed fruit juices in America. The first Concord grape juice was used on the Communion table at a local Methodist church for sacramental purposes, and most of the first orders for Concord juice came from churches for Communion. In 1896, Dr. Welch's son, Charles, transferred the juice operation to Watkins Glen, New York, and in the following year to Westfield, New York; 300 tons of Concords were processed in 1897. In the 20th century, the Concord grape industry boomed. Today, growers harvest more than 350,000 tons of Concords per year in the U.S. Washington grows the most, followed by New York, Michigan, Pennsylvania, Ohio, and Missouri.
ElGamal signature scheme

The ElGamal signature scheme is a digital signature scheme based on the difficulty of computing discrete logarithms. It was described by Taher ElGamal in 1984.[1]

The ElGamal signature algorithm is rarely used in practice. A variant developed at the NSA and known as the Digital Signature Algorithm is much more widely used. There are several other variants.[2] The ElGamal signature scheme must not be confused with ElGamal encryption, which was also invented by Taher ElGamal.

The ElGamal signature scheme allows a third party to confirm the authenticity of a message sent over an insecure channel.

System parameters
• Choose a large prime p and a generator g of the multiplicative group of integers modulo p.
• Choose a collision-resistant hash function H.
These system parameters may be shared between users.

Key generation
• Randomly choose a secret key x with 1 < x < p − 1.
• Compute y = g^x mod p.
• The public key is y.
• The secret key is x.
These steps are performed once by the signer.

Signature generation
To sign a message m the signer performs the following steps.
• Choose a random k such that 1 < k < p − 1 and gcd(k, p − 1) = 1.
• Compute r ≡ g^k (mod p).
• Compute s ≡ (H(m) − x r) k^(−1) (mod p − 1).
• If s = 0, start over again.
Then the pair (r, s) is the digital signature of m. The signer repeats these steps for every signature.

Verification
A signature (r, s) of a message m is verified as follows.
• 0 < r < p and 0 < s < p − 1.
• g^H(m) ≡ y^r r^s (mod p).
The verifier accepts a signature if all conditions are satisfied and rejects it otherwise.

Correctness
The algorithm is correct in the sense that a signature generated with the signing algorithm will always be accepted by the verifier. The signature generation implies

    H(m) ≡ x r + s k (mod p − 1).

Hence Fermat's little theorem implies

    g^H(m) ≡ g^(x r) g^(k s) ≡ (g^x)^r (g^k)^s ≡ y^r r^s (mod p).

Security
A third party can forge signatures either by finding the signer's secret key x or by finding collisions in the hash function H. Both problems are believed to be difficult. However, as of 2011 no tight reduction to a computational hardness assumption is known.
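The scheme can be exercised with a toy implementation. The following Python sketch uses deliberately tiny, insecure demo parameters (p = 467, g = 2, and the modular stand-in "hash" are assumptions for illustration only):

```python
import math
import random

# Toy ElGamal signatures (textbook form, NOT secure): tiny prime,
# no real hash function. p is prime; g generates the group mod p.
p, g = 467, 2
H = lambda m: m % (p - 1)   # stand-in for a cryptographic hash

def keygen():
    x = random.randrange(2, p - 1)     # secret key
    return x, pow(g, x, p)             # (x, public key y = g^x mod p)

def sign(m, x):
    while True:
        k = random.randrange(2, p - 1)
        if math.gcd(k, p - 1) != 1:
            continue                   # k must be invertible mod p-1
        r = pow(g, k, p)
        s = (H(m) - x * r) * pow(k, -1, p - 1) % (p - 1)
        if s != 0:
            return r, s                # retry if s == 0

def verify(m, r, s, y):
    if not (0 < r < p and 0 < s < p - 1):
        return False
    return pow(g, H(m), p) == (pow(y, r, p) * pow(r, s, p)) % p
```

Note that `pow(k, -1, p - 1)` (modular inverse via three-argument `pow`) requires Python 3.8 or newer.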
The signer must be careful to choose a different k uniformly at random for each signature and to be certain that k, or even partial information about k, is not leaked. Otherwise, an attacker may be able to deduce the secret key x with reduced difficulty, perhaps enough to allow a practical attack. In particular, if two messages are sent using the same value of k and the same key, then an attacker can compute x directly.[1]

Existential forgery
The original paper[1] did not include a hash function as a system parameter: the message m was used directly in the algorithm instead of H(m). This enables an attack called existential forgery, as described in section IV of the paper. Pointcheval and Stern generalized that case and described two levels of forgeries:[3]

1. The one-parameter forgery. Let e be a random element of Z/(p − 1)Z. If r ≡ g^e y (mod p) and s ≡ −r (mod p − 1), the tuple (r, s) is a valid signature for the message m ≡ e s (mod p − 1).
2. The two-parameter forgery. Let e and v be random elements with gcd(v, p − 1) = 1. If r ≡ g^e y^v (mod p) and s ≡ −r v^(−1) (mod p − 1), the tuple (r, s) is a valid signature for the message m ≡ e s (mod p − 1).

The improved version (with a hash) is known as the Pointcheval–Stern signature algorithm.

References
1. ^ T. ElGamal (1985). "A public key cryptosystem and a signature scheme based on discrete logarithms". IEEE Transactions on Information Theory 31 (4): 469-472. (This article appeared earlier in the proceedings of Crypto '84.)
2. ^ K. Nyberg, R. A. Rueppel (1996). "Message recovery for signature schemes based on the discrete logarithm problem". Designs, Codes and Cryptography 7 (1-2): 61-81. doi:10.1007/BF00125076.
3. ^ Pointcheval, David; Stern, Jacques (2000). "Security Arguments for Digital Signatures and Blind Signatures". Journal of Cryptology 13 (3): 361-396.
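The one-parameter forgery can be checked numerically. A hedged Python sketch (toy parameters p = 467, g = 2 are assumptions; the message is unhashed, as in the original scheme):

```python
import random

# One-parameter existential forgery against hash-less ElGamal:
# the verifier checks g^m == y^r * r^s (mod p) with the raw message m.
p, g = 467, 2
x = 127                      # some victim secret key (demo value only)
y = pow(g, x, p)             # the attacker only needs the public key y

e = random.randrange(2, p - 1)
r = (pow(g, e, p) * y) % p   # r = g^e * y  (mod p)
s = (-r) % (p - 1)           # s = -r       (mod p - 1)
m = (e * s) % (p - 1)        # the forged "message"

# The forged (r, s) passes verification for m without knowledge of x:
assert pow(g, m, p) == (pow(y, r, p) * pow(r, s, p)) % p
```

Hashing the message before signing blocks this attack, because the forger can no longer choose the message freely as e·s mod (p − 1).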
William Seward - spokesman from NY; political operator who opposed the compromise; ideals of the union were less important than the issue of abolition
Napoleon III
a Republic in southern North America
Alaska Purchase - Secretary of State William Seward bought Alaska from Russia for $7.2 million ("Seward's Folly")
new imperialism
international Darwinism - the concept of survival of the fittest applied to international relations; competition among nations was justified
Josiah Strong - a popular American minister in the late 1800s who linked Anglo-Saxonism to Christian missionary ideas
Alfred Thayer Mahan - United States naval officer and historian (1840-1914)
Pan-American Conference - an international organization that dealt with trade; organized by James Blaine; created to encourage cooperation and trust among the manufacturers
James Blaine
Richard Olney - Attorney General of the U.S.; he obtained an injunction stating that union members couldn't stop the movement of trains, and moved troops in to stop the Pullman strike
Venezuela boundary dispute - dispute between Great Britain and Venezuela over the boundary between Venezuela and British Guiana. The British had ignored American demands to arbitrate the matter, with Secretary of State Olney saying that Britain was violating the Monroe Doctrine. President Cleveland supported Venezuela and decided to determine the boundary line; if Britain resisted, the U.S. could declare war to enforce it. Britain eventually agreed to arbitration.
the largest island in the West Indies
fanatical patriotism
Valeriano Weyler - a Spanish general referred to as "Butcher" Weyler. He undertook to crush the Cuban rebellion by herding many civilians into barbed-wire reconcentration camps, where they could not give assistance to the armed insurrectionists. The civilians died in deadly pestholes. "Butcher" was removed in 1897.
yellow journalism - sensationalist journalism
Spanish-American War
DeLome Letter - a private letter written by Enrique Dupuy de Lome, Spanish minister to the U.S., criticizing President McKinley, calling him "weak" and "a bidder for the admiration of the crowd"
a state in New England
Teller Amendment
an archipelago in the southwestern Pacific including some 7000 islands
George Dewey - a United States naval officer remembered for his victory at Manila Bay in the Spanish-American War
Theodore Roosevelt - 26th President of the United States
Rough Riders - volunteer soldiers led by Theodore Roosevelt during the Spanish-American War
Hawaii
Liliuokalani - the last Hawaiian ruler to govern the islands. On January 17, 1893, pro-American forces overthrew the government and proclaimed a provisional government in Hawaii with Sanford B. Dole as president. Liliuokalani had no choice but to surrender her throne. She made a plea to the U.S. government for reinstatement, and a representative of President Grover Cleveland found the overthrow to be illegal. Dole, however, refused to accept the decision.
Puerto Rico
Guam - given to the U.S. at the conclusion of the Spanish-American War
Philippine annexation - a treaty ratified on Feb. 6, 1899 guaranteed this. The anti-imperialists fell just two votes short of defeating the treaty.
Emilio Aguinaldo - leader of the Filipino rebels
Anti-Imperialist League - objected to the annexation of the Philippines and the building of an American empire. Idealism, self-interest, racism, constitutionalism, and other reasons motivated them, but they failed to make their case; the Philippines were annexed in 1900
insular cases - determined that inhabitants of U.S. territories had some, but not all, of the rights of U.S. citizens
Platt Amendment
John Hay - Secretary of State under McKinley and Roosevelt who pioneered the Open Door policy and the Panama Canal
spheres of influence - areas in which countries have some political and economic control but do not govern directly (e.g. Europe and the U.S. in China)
Open Door policy
an irrational fear of foreigners or strangers
Boxer Rebellion
big-stick policy
Hay-Pauncefote Treaty - (TR) negotiations with Colombia; a six-mile strip of land in Panama for $10 million; the US could dig the canal without British involvement
Panama Canal
George Goethals - United States army officer and engineer who supervised the construction of the Panama Canal (1858-1928)
William Gorgas - Army physician who helped eradicate yellow fever and malaria from Panama so work on the Panama Canal could proceed
Roosevelt Corollary - Roosevelt's 1904 extension of the Monroe Doctrine, stating that the United States has the right to protect its economic interests in South and Central America by using military force
Santo Domingo - the capital and largest city of the Dominican Republic
TR won the Nobel Peace Prize in 1906 for helping to end this war
Treaty of Portsmouth - (1905) ended the Russo-Japanese War (1904-1905). It was signed in Portsmouth, New Hampshire, after negotiations brokered by Theodore Roosevelt (for which he won the Nobel Peace Prize). Japan had dominated the war and received an indemnity, the Liaodong Peninsula in Manchuria, and half of Sakhalin Island, but the treaty was widely condemned in Japan because the public had expected more.
gentlemen's agreement - agreement in which Japan agreed to curb the number of workers coming to the US; in exchange, Roosevelt agreed to allow the wives of the Japanese men already living in the US to join them
Great White Fleet - 1907-1909: Roosevelt sent the Navy on a world tour to show the world U.S. naval power, and also to pressure Japan into the "Gentlemen's Agreement"
Root-Takahira Agreement - 1908: Japan/U.S. agreement in which both nations agreed to respect each other's territories in the Pacific and to uphold the Open Door policy in China
Algeciras Conference - international conference called to deal with the Moroccan question. France got Morocco; Germany got nothing and was isolated. The result was that the U.S., Britain, France, and Russia saw Germany as a threat.
William Howard Taft
dollar diplomacy - diplomacy influenced by economic considerations
a republic in Central America
Henry Cabot Lodge - Chairman of the Senate Foreign Relations Committee; he was a leader in the fight against participation in the League of Nations
Lodge Corollary - in 1912 the Senate passed a resolution to the Monroe Doctrine stating that non-European powers (such as Japan) would be excluded from owning territory in the Western Hemisphere
Woodrow Wilson
New Freedom - Woodrow Wilson's domestic policy that promoted antitrust modification, tariff revision, and reform in banking and currency matters
moral diplomacy - foreign policy proposed by President Wilson to condemn imperialism, spread democracy, and promote peace
Jones Act - (WW) 1916: promised Philippine independence. Given freedom in 1917, their economy grew as a satellite of the U.S. Filipino independence was not realized for 30 years.
Mexican civil war - Francisco Villa, a dictator, rose to power in Mexico. The USA attempted and failed to capture him.
Victoriano Huerta
Tampico incident - in April 1914, some U.S. sailors were arrested in Tampico, Mexico. President Wilson used the incident to send U.S. troops into northern Mexico. His real intent was to unseat the Huerta government there. After the Niagara Falls Conference, Huerta abdicated and the confrontation ended.
ABC powers - the South American countries of Argentina, Brazil, and Chile, which attempted to mediate a dispute between Mexico and the United States in 1914
Venustiano Carranza - (1859-1920) Mexican revolutionary and politician; he led forces against Victoriano Huerta during the Mexican Revolution (1910-1920)
expeditionary force - Wilson ordered General Pershing to pursue Pancho Villa into Mexico. They were in northern Mexico for months without being able to capture Villa. The growing possibility of U.S. entry into World War I caused Wilson to withdraw Pershing's troops.
John J. Pershing - US general who chased Villa over 300 miles into Mexico but didn't capture him
England, United Kingdom

Scarborough, town and borough on the North Sea coast, administrative county of North Yorkshire, historic county of Yorkshire, northern England.

[Image: Scarborough, North Yorkshire, Eng. (John Thomas Edward Slade)]

Scarborough town originated from a 10th-century Viking fishing settlement in the shelter of a craggy sandstone headland, where there had earlier been a Roman signal station. In the 12th century a Norman castle was built on the headland. Spa development after 1626 and sea bathing later contributed to Scarborough’s burgeoning as a fashionable 18th-century resort. From 1845 the railways further stimulated its growth and extended the social range of its clientele. Scarborough remains the most popular seaside resort town in northeastern England. It is also a significant conference centre and retirement town.

The borough of Scarborough extends far beyond old Scarborough town. It lies almost entirely within North York Moors National Park. Eskdale in the north and the valley of the River Derwent in the south cut through heather-clad moorlands with scattered sheep farms and some replanted forestland. Coastal cliffs shelter small picturesque fishing villages. Most of the population lives in the coastal resorts of Scarborough, Whitby, and Filey. Area 315 square miles (817 square km). Pop. (2001) town, 38,364; borough, 106,233; (latest est.) town, 39,700; (2011) borough, 108,793.
Sunday, October 13, 2013 complex color wheels and guitars more to come in the next few days 1. Very cool, but I'm not understanding one thing. -did the students make both the round color wheel and the one fitted to the guitar? Or did they cut out the round color wheel to fit the guitar? Or did they just do a round color wheel OR a guitar? I love the idea of the value scale guitar neck! Terrific! 2. These are great! I've never seen value studies and color wheels done in guitar form, what a great idea! 3. Hi Phyl, students could choose to either make a complex color wheel OR the guitar. The kids who chose to make the guitar drew the guitar body and then divided it into 12 wedges and then each wedge into four sections. Glad you like them 4. Very nice idea! I love the guitar variation. I wonder if you could open that up to other shapes as well? drumset? basketballs? etc? 5. happy new year for all thanks for sharing I will try to do it with my students 6. Do you have a rubric for this assignment? 7. Why did you have them use magenta paint? 8. the blue and red tempra our school gets makes a very odd purple...very close to almost a black so we have to add some magenta to get a better purple
David Scadden, Co-Director, Harvard Stem Cell Institute
A Short History of Stem Cell Research

Co-Director of the Harvard Stem Cell Institute David Scadden reveals how 1960s research into surviving radiation stumbled upon stem cells.

David Scadden
David T. Scadden is Professor of Medicine at Harvard Medical School and Director of the Massachusetts General Hospital Center for Regenerative Medicine and Technology. He also is Co-Director of the Harvard Stem Cell Institute. Dr. Scadden’s research focuses on reconstituting immune function using the stem cells that form blood cells to fight cancer and AIDS. He is an expert in the treatment of HIV-related Kaposi's sarcoma and B-cell lymphoma and has developed a number of new therapies for them. He has received numerous honors and awards, including Alpha Omega Alpha; the Edwin C. Garvin, MD Senior Prize; the Doris Duke Innovation in Clinical Research Award; the Burroughs Wellcome Fund Clinical Scientist Award in Translational Research; and the Brain Tumor Society's Alan Goldfine Leadership Chair of Research.

Question: What are the therapeutic and diagnostic uses of stem cells?

David Scadden: The use of stem cells in medicine is still actually very limited. So we know a lot about the cells, we know that they can be extraordinarily powerful as therapy. But the uses of them still are very limited. So we learned about stem cells as a defined entity in humans about fifty years ago. And that was work that was done really in the Cold War era, in an effort to try to understand how you might be able to survive a radiation injury. That work actually led to some early studies on animals that first defined that there is such a thing as a stem cell in an adult animal. It was hypothesized that the same existed in humans, because some people had survived radiation injury in places like Hiroshima and Nagasaki.
And that very quickly turned into an effort to use those stem cells in a sort of bone marrow transplant. That work has now been going on for over four decades and has been a very powerful tool to treat people with blood diseases and cancer. So stem cell transplant is actually not a new concept. Stem cell therapy is not a new concept. What is new is that this was thought to be something that was rather a curiosity of the blood. And that clearly isn’t the case. We now know that the blood is one of many tissues that has stem cells, and so the idea is that the success that’s been achieved in the blood system might apply to these other tissues. Now it’s something that has really opened a door on the idea of stem cells as a source of therapy, of replenishment. We know the blood stem cell can completely restore the entire blood and immune system. Well, if you could do that in other tissues that are injured, that would of course be tremendously powerful. So that has really spawned this great enthusiasm and interest in pursuing these other applications of stem cells. That concept has largely been one of trying to replace damaged cells. The idea of using a stem cell as a replacement part is a very easy concept for people to grasp. It’s of course very complicated in other tissue types, and so that is actually a very narrow way to use stem cells. I think one of the other interesting aspects of stem cells is that they have an opportunity to have an impact on medicine that’s very different from this idea of just the replacement part. Recorded on: July 06, 2009
Nationalism in Europe, 1789-1945 (1st edition)
ISBN-13: 9780521598712; ISBN-10: 0521598710

Details about Nationalism in Europe, 1789-1945: An engaging range of period texts and theme books for AS and A Level history. This text analyses nationalism in Europe from the French Revolution to the Second World War. Drawing on a wide range of examples, Timothy Baycroft explains what characterises modern nations, what the theoretical roots of nationalism are, and what interaction there has been with other significant theories. The book also presents reasons for the overwhelming importance of nationalism in the development of modern European history. The result is a concise description of the ways in which nationalism spread through Europe and its consequences for European civilisation. Nationalism in Europe contains a selection of primary and secondary sources.

Rent Nationalism in Europe, 1789-1945 1st edition today, or search our site for other textbooks by Timothy Baycroft. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Cambridge University Press.
Stanford Creates Living Video Games | NBC Bay Area
Researchers down on the farm make science a little more fun.

Researchers at Stanford have developed a new take on video games, making them even more lifelike by actually adding life to the game. Ingmar Riedel-Kruse and his lab group have created games in which players actually control single-celled organisms, in real time, a press release said.

Calling them "biotic games," Riedel-Kruse explains: "That is something to figure out for the future: what are good research problems which a lay person could really be involved in and make substantial contributions. This approach is often referred to as crowd-sourcing." Riedel-Kruse and his team currently have several games: PAC-mecium, a Pac-Man-esque game where the player controls the paramecia with a mild electrical field; Biotic Pinball; POND PONG; Ciliball; and PolymerRace. What the player sees is a live feed of the organisms, with a "game board" superimposed over the images. The player then uses a controller to navigate the tiny single-celled beasts. "We would argue that modern biotechnology will influence our life at an accelerating pace, most prominently in the personal biomedical choices that we will be faced with more and more often," Riedel-Kruse said. "Therefore everyone should have sufficient knowledge about the basics of biomedicine and biotechnology. Biotic games could promote that."
Up For Discussion
Educators, Get Ready to Fight
What Policy Battle Matters Most to the Future of American Schools?

Everyone knows that schools in the United States have been struggling for years. Expenditures rise, but test scores fall. There are many competing lines of thought about problems and remedies in contemporary education, but only some of these disputes lead to all-out policy battles. And some of these policy battles are more important than others. So, in advance of Steven Brill’s visit to Zócalo to discuss the future of America’s public schools, we asked education experts what policy battle matters most to the future of American education.

The most important battle is over bringing about minimal equality of educational opportunities

If the recent furor surrounding the Occupy Wall Street movement has proven anything, it’s that Americans are finally beginning to get frustrated with the level of inequality in our nation. By now we are all familiar with the statistics: the top one percent of Americans possess more wealth than the entire bottom 90 percent. Moreover, the median wealth of white households is 20 times that of black households and 18 times that of Hispanic households. How did we let things get so out of whack? How can we restore balance? According to the great 19th-century American educator Horace Mann, education was supposed to be “the great equalizer of the conditions of men, the balance-wheel of the social machinery.” Clearly, our education system is not succeeding at this task. I do not bring this up in order to blame the schools for all of America’s problems, but rather to serve as a reminder that the inequalities plaguing our country are a reflection of the inequalities that have long defined our public schools. The frightening statistics regarding wealth disparities have equally frightening corollaries in the education system, where economic and racial achievement gaps remain cavernous.
Low-income and minority students are learning less, graduating high school at lower rates, and attending and completing college at lower rates than their more advantaged peers. These disparities have serious implications for individuals’ labor-market outcomes and quality of life. They also have serious implications for the health of our society. If we are serious about reducing inequality in this country, we need to get serious about ensuring that our children are receiving equal educational opportunities. In my opinion, this means equalizing resources, ensuring that disadvantaged students have access to experienced, high-quality teachers, and taking steps to reduce the segregation that produces schools of concentrated poverty and concentrated privilege. Miya Warner is a Ph.D. candidate in the Sociology & Education program at Teachers College, Columbia University. The most important battle is over how to produce teacher evaluations that work The policy battle that matters most for the future of education in America focuses on improving the quality and usability of teacher evaluations. Specifically, local, state, and federal policymakers are deciding the extent to which teacher evaluations should include measures of student achievement and whether those evaluations should factor into teacher placement, retention, and compensation decisions. In most schools across the country, teacher evaluations are based on principal observations of classroom instruction and an assessment of that instruction based on a rubric or some other kind of scoring sheet. While some principals may be well-suited to judge the quality of a teacher’s instruction, few are able to dedicate the time and attention needed to make these types of evaluations truly comprehensive. As a result, these evaluation systems lack nuance and allow ineffective teachers to remain in the classroom. Luckily, a new movement is afoot in teacher evaluations.
Some schools have begun to implement more thorough evaluation systems using peer reviews, self-assessment, and even measures of student achievement from test scores or other measures. Naturally, these evaluations are controversial and constantly evolving. It’s an imperfect science, but a vast improvement over the current system. And while these types of evaluations are in their infancy, they represent a new approach to measuring teacher effectiveness that will provide a better sense of how successful teachers are in their classrooms. But even if all teachers were evaluated using robust systems like those described above, these new systems will only matter if schools use them to improve the overall quality of the teaching force. Toward that end, districts, states, and schools are deciding whether the outcomes of these evaluations should be used to ensure that low-income and minority students have access to high-quality teachers, that the most effective teachers are compensated accordingly, and that struggling teachers are either provided the support they need or removed from the classroom. Ultimately, the future of education depends on what happens in the classroom, and teachers are the greatest determinant of that success. Jennifer Cohen is a senior policy analyst with the Education Policy Program at the New America Foundation. The most important battle is over how we approach math “Math is the great equalizer.” (Escalante, Stand and Deliver) Yet children in America rarely have opportunities to experience mathematics in a positive way. Their opportunities to learn mathematics are limited, because their teachers’ opportunities to learn are limited, especially with respect to developing deep understanding of K-12 math content. One common assumption is that math majors have a deep understanding of the K-12 topics. This assumption has been challenged by math educators like Deborah Ball at the University of Michigan and mathematicians like Hung-Hsi Wu at UC Berkeley.
Ball also did some empirical studies that called into question the assumption that a major in math corresponds to a deep understanding of K-12 math topics. And one of the exploratory studies I did with my graduate student researchers at UC Berkeley also shows that the assumption is questionable. The implication of these studies is that we need to provide future teachers with explicit training in the mathematics topics that they are expected to teach at the K-12 level. The mathematics content in college math courses has little to do with what educators are expected to teach down the road. To the best of my knowledge, the UC Berkeley math department is the only math department that offers math content courses specifically focusing on grades 6-12 for math majors who are interested in pursuing teaching as a career. We need policies to promote college math departments’ involvement in the training of future math teachers. Otherwise, math majors, when they become teachers one day, will resort to the methods by which they were taught as K-12 students. Xiaoxia Newton is an assistant professor at UC Berkeley Graduate School of Education. The most important battle is about standardized testing and its limitations The “No Child Left Behind” (NCLB) law, formerly known as the Elementary and Secondary Education Act, mired the public education system in a world of unsatisfactory results. It made standardized test results the primary measure by which to determine whether or not students and schools were making “Adequate Yearly Progress.” All schools, including sub-groups of students such as bilingual and special education students, were expected to achieve 100 percent proficiency on standardized reading and math tests, or they would face a series of sanctions culminating with the firing of staff, state takeover of the school, or the privatization of the school.
Nothing in NCLB accounted for individual student growth or the individual student’s social state, nor did anything in the law account for the unique socio-economic forces acting on the students or the school community. In addition, the testing narrowly emphasized only what was easily measurable in standardized tests of math, reading, and science rather than more complex critical thinking skills. The law ignored current research into multiple intelligences and the values of subject areas that are not easily tested, such as the arts and music. The legacy of NCLB continues to haunt current educational policy on both the state and federal level. Current drafts of President Obama’s “Race to the Top” (RTTT) initiative continue to emphasize standardized testing as the central means for evaluating educational progress in the United States. Despite the President’s awareness of the over-reliance on standardized testing, RTTT continues to emphasize standardized testing to the detriment of many other important factors. And while there will no longer be punitive sanctions against schools for failing to make AYP, the focus has shifted to punitive sanctions against teachers whose students do not make adequate progress on standardized reading and writing tests. These policies are currently moving forward on the national and state level, where New Jersey is piloting this method for assessing teachers in nine school districts. These initiatives are moving forward despite a lack of any known means to correlate a teacher’s classroom behaviors to outcomes on standardized tests. To be sure, RTTT is proposing to reduce the amount of standardized testing that students will face and trying to work off of a more realistic definition of student growth that will attempt to account for the variety of factors that affect student learning.
Nevertheless, current state and federal educational policy still steers schools away from studies in music, fine and practical arts, technology, vocational education, and critical-thinking projects that are at the heart of the problem-solving skills students need to face the ever-changing economic landscape of the 21st century. The standardization movement has been sold to the American people as the means for making the United States more competitive in the world economy. Ironically, countries like Finland that have de-emphasized high-stakes testing consistently outperform the United States in international comparative assessments. High-stakes testing moves schools and teachers away from concentrating on the broad array of experiences that help students to develop their creativity, critical thinking, and problem-solving skills. Leon Alirangues teaches English at Livingston High School in Livingston, New Jersey and earned his doctorate in educational theory and policy from Rutgers University. *Photo courtesy of quacktaculous.
mementos 1.2.2 Memoizing metaclass. Drop-dead simple way to create cached objects. A quick way to make Python classes automatically memoize (a.k.a. cache) their instances based on the arguments with which they are instantiated (i.e. the args to their __init__). It’s a simple way to avoid repetitively creating expensive-to-create objects, and to make sure objects that have a natural ‘identity’ are created only once. If you want to be fancy, mementos implements the Multiton software pattern. Say you have a class Thing that requires expensive computation to create, or that should be created only once. Easy peasy:

    from mementos import mementos

    class Thing(mementos):
        def __init__(self, name):
            self.name = name

Then Thing objects will be memoized:

    t1 = Thing("one")
    t2 = Thing("one")
    assert t1 is t2  # same instantiation args => same object

Under the Hood

When you define a class class Thing(mementos), it looks like you’re subclassing the mementos class. Not really. mementos is a metaclass, not a superclass. The full expression is equivalent to class Thing(with_metaclass(MementoMetaclass, object)), where with_metaclass and MementoMetaclass are also provided by the mementos module. Metaclasses are not normal superclasses; instead they define how a class is constructed. In effect, they define the mysterious __new__ method that most classes don’t bother defining. In this case, mementos says in effect, “hey, look in the cache for this object before you create another one.” If you like, you can use the longer invocation with the full with_metaclass spec, but it’s not necessary unless you define your own memoizing functions. More on that below.

Python 2 vs. Python 3

Python 2 and 3 have different forms for specifying metaclasses. In Python 2:

    from mementos import MementoMetaclass

    class Thing(object):
        __metaclass__ = MementoMetaclass  # now I'm memoized!

Whereas Python 3 uses:

    class Thing3(object, metaclass=MementoMetaclass):
        pass

mementos supports either of these.
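The look-in-the-cache-first behavior described under "Under the Hood" can be sketched with a minimal memoizing metaclass. This is a hypothetical illustration, not mementos' actual internals; names like MemoMeta and _CACHE are my own:

```python
# Minimal sketch of a memoizing metaclass, in the spirit of
# MementoMetaclass (illustrative only -- not the mementos implementation).
_CACHE = {}

class MemoMeta(type):
    def __call__(cls, *args, **kwargs):
        # Key on the class plus the exact call signature.
        key = (cls, args, tuple(sorted(kwargs.items())))
        if key not in _CACHE:
            # Only construct the instance on a cache miss.
            _CACHE[key] = super().__call__(*args, **kwargs)
        return _CACHE[key]

class Thing(metaclass=MemoMeta):
    def __init__(self, name):
        self.name = name

t1 = Thing("one")
t2 = Thing("one")
assert t1 is t2               # same instantiation args => same object
assert t1 is not Thing("two") # different args => different object
```

Because the interception happens in `type.__call__`, the cache check runs before `__init__` is ever invoked, which is what makes the technique suitable for expensive constructors.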
But Python 2 and Python 3 don’t recognize each other’s syntax for metaclass specification, so straightforward code for one won’t even compile for the other. The with_metaclass() function shown above is the way to go for cross-version compatibility. It’s very similar to that found in the six cross-version compatibility module. Careful with Call Signatures MementoMetaclass caches on call signature, which can vary greatly in Python, even for logically identical calls. This is especially true if kwargs are used. E.g. def func(a, b=2): pass can be called func(1), func(1, 2), func(a=1), func(1, b=2), or func(a=1, b=2). All of these resolve to the same logical call–and this is just for two parameters! If there is more than one keyword, they can be arbitrarily ordered, creating many logically identical permutations.

    o1 = Thing("lovely")
    o2 = Thing(name="lovely")
    assert o1 is not o2  # because the call signature is different

In most cases, this isn’t an issue, because objects tend to be instantiated with a limited number of parameters, and you can take care that you instantiate them with parallel call signatures. Since this works 99% of the time and has a simple implementation, it’s worth the price of this inelegance. Partial Signatures If you want only part of the initialization-time call signature (i.e. arguments to __init__) to define an object’s identity/cache key, there are two approaches. One is to use MementoMetaclass and design __init__ without superfluous attributes, then create one or more secondary methods to add/set useful-but-not-essential data. E.g.:

    class OtherThing(with_metaclass(MementoMetaclass, object)):
        def __init__(self, name):
            self.name = name
            self.color = None   # unset for now
            self.weight = None

        def set(self, color=None, weight=None):
            self.color = color or self.color
            self.weight = weight or self.weight
            return self

    ot1 = OtherThing("one")
    ot2 = OtherThing("one").set(color='blue')
    assert ot1 is ot2

Or you can just define your own memoizing metaclass, using the factory function described below.
Visiting the Factory The first iteration of mementos defined a single metaclass. It’s since been reimplemented as a parameterized meta-metaclass. Cool, huh? That basically means that it defines a function, memento_factory(), that, given a metaclass name and a function defining how cache keys are constructed, returns a corresponding metaclass. MementoMetaclass is the only metaclass that the module pre-defines, but it’s easy to define your own memoizing metaclass:

    from mementos import memento_factory, with_metaclass

    IdTracker = memento_factory('IdTracker',
                                lambda cls, args, kwargs: (cls, id(args[0])))

    class MyTracker(with_metaclass(IdTracker, object)):
        # object identity is the object id of first argument to __init__
        # (and there must be one, else the args[0] reference => IndexError)
        pass

The first argument to memento_factory() is the name of the metaclass being defined. The second is a callable (e.g. lambda expression or function object) that takes three arguments: a class object, an argument list, and a keyword arg dict. Note that there is no * or ** magic–args passed to the key function have already been resolved into basic data structures. The callable must return a globally-unique, hashable key for an object. This key will be stored in the _memento_cache, which is a simple dict. When various arguments are used as the cache key/object identity, you may use a tuple that includes the class and arguments you want to key off of. This can also help debugging, should you need to examine the _memento_cache cache directly. But in cases like the IdTracker above, it’s not mandatory that you keep extra information around. The raw id(args[0]) integer value would suffice, as would a constructed string or other immutable, hashable value. In cases where arguments are very flexible, or involve flexible data types, a high-powered hashing function such as that provided by SuperHash might come in handy.
E.g.:

    from superhash import superhash

    SuperHashMeta = memento_factory('SuperHashMeta',
                                    lambda cls, args, kwargs: (cls, superhash(args)))

For the 1% edge-cases where multiple call variations must be conclusively resolved to a unique canonical signature, that can be done on a custom basis (based on the specific args). Or in Python 2.7 and 3.x, the inspect module’s getcallargs() function can be used to create a generic “call fingerprint” that can be used as a key. (See the tests for example code.)
• Version 1.2 adds the mementos shorthand.
• Version 1.1.2 adds automatic measurement of test branch coverage. Starts with 95% branch coverage.
• Version 1.1 initiates automatic measurement of test coverage. Line coverage is 100%. Hooah!
• See CHANGES.rst for the extended Change Log.
• mementos is not to be confused with memento, which does something completely different.
• mementos was originally derived from an ActiveState recipe by Valentino Volonghi. While the current implementation is quite different and the scope much broader, the availability of that recipe was what enabled this module and the growing list of modules that depend on it. This is what open source evolution is all about. Thank you, Valentino!
• This implementation is not thread-safe, in and of itself. If you’re in a multi-threaded environment, consider wrapping object instantiation in a lock.
• Automated multi-version testing managed with pytest, pytest-cov, coverage and tox. Continuous integration testing with Travis-CI. Packaging linting with pyroma. Successfully packaged for, and tested against, all late-model versions of Python: 2.6, 2.7, 3.2, 3.3, 3.4, and 3.5 pre-release (3.5.0b3) as well as PyPy 2.6.0 (based on 2.7.9) and PyPy3 2.4.0 (based on 3.2.5). Test line coverage 100%.
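The "call fingerprint" idea mentioned above can be sketched with the standard library. This version uses inspect.signature/Signature.bind rather than getcallargs(), and the fingerprint() helper is my own illustration, not part of mementos:

```python
import inspect

def fingerprint(func, args, kwargs):
    # Normalize any call variation of `func` into one canonical,
    # hashable key, so that func(1), func(1, 2), func(a=1, b=2), etc.
    # would all land in the same cache slot.
    bound = inspect.signature(func).bind(*args, **kwargs)
    bound.apply_defaults()  # fill in omitted defaults, e.g. b=2
    return tuple(sorted(bound.arguments.items()))

def func(a, b=2):
    pass

# Logically identical calls produce the same key:
assert fingerprint(func, (1,), {}) == fingerprint(func, (), {"a": 1, "b": 2})
```

A key function like this could be handed to memento_factory() so that all logically identical instantiations share one cache entry, at the cost of a signature-binding step per call.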
To install or upgrade to the latest version:

    pip install -U mementos
    python3.3 -m easy_install --upgrade mementos

To run the module tests, use one of these commands:

    tox                 # normal run - speed optimized
    tox -e py27         # run for a specific version only (e.g. py27, py34)
    tox -c toxcov.ini   # run full coverage tests

File Type Py Version Uploaded on Size mementos-1.2.2-py2.py3-none-any.whl (md5) Python Wheel 2.7 2016-06-22 11KB mementos-1.2.2.tar.gz (md5) Source 2016-06-22 10KB (md5) Source 2016-06-22 22KB
This book integrates strategy, technology and economics and presents a new way of looking at twentieth-century military history and Britain's decline as a great power. G. C. Peden explores how from the Edwardian era to the 1960s warfare was transformed by a series of innovations, including dreadnoughts, submarines, aircraft, tanks, radar, nuclear weapons and guided missiles. He shows that the cost of these new weapons tended to rise more quickly than national income and argues that strategy had to be adapted to take account of both the increased potency of new weapons and the economy's diminishing ability to sustain armed forces of a given size. Prior to the development of nuclear weapons, British strategy was based on an ability to wear down an enemy through blockade, attrition (in the First World War) and strategic bombing (in the Second), and therefore power rested as much on economic strength as on armaments.
Platooning, driverless cars and ride hailing services have all been suggested as ways to reduce congestion. In this post I want to examine the use of coordination via ride hailing services as a way to reduce congestion. Assume that large numbers of riders decide to rely on ride hailing services. Because the services use Google Maps or Waze for route selection, it would be possible to coordinate their choices to reduce congestion. To think through the implications of this, it's useful to revisit an example of Arthur Pigou. There is a measure 1 of travelers all of whom wish to leave the same origin ({s}) for the same destination ({t}). There are two possible paths from {s} to {t}. The `top’ one has a travel time of 1 unit independent of the measure of travelers who use it. The `bottom’ one has a travel time that grows linearly with the measure of travelers who employ it. Thus, if fraction {x} of travelers take the bottom path, each incurs a travel time of {x} units. A central planner, say, Uber, interested in minimizing total travel time will route half of all travelers through the top and the remainder through the bottom. Total travel time will be {0.5 \times 1 + 0.5 \times 0.5 = 0.75}. The only Nash equilibrium of the path selection game is for all travelers to choose the bottom path, yielding a total travel time of {1}. Thus, if the only choice is to delegate my route selection to Uber or make it myself, there is no equilibrium where all travelers delegate to Uber. Now suppose there are two competing ride hailing services. Assume fraction {\alpha} of travelers are signed up with Uber and fraction {1-\alpha} are signed up with Lyft. To avoid annoying corner cases, {\alpha \in [1/3, 2/3]}. Each firm routes its users so as to minimize the total travel time that their users incur. Uber will choose fraction {\lambda_1} of its subscribers to use the top path and the remaining fraction will use the bottom path.
Lyft will choose a fraction {\lambda_2} of its subscribers to use the top path and the remaining fraction will use the bottom path. A straightforward calculation reveals that the only Nash equilibrium of the Uber vs. Lyft game is {\lambda_1 = 1 - \frac{1}{3 \alpha}} and {\lambda_2 = 1 - \frac{1}{3(1-\alpha)}}. An interesting case is when {\alpha = 2/3}, i.e., Uber has a dominant market share. In this case {\lambda_2 = 0}, i.e., Lyft sends none of its users through the top path. Uber, on the other hand, will send half its users via the top and the remainder by the bottom path. Assuming Uber randomly assigns its users to top and bottom with equal probability, the average travel time for an Uber user will be \displaystyle 0.5 \times 1 + 0.5 \times [0.5 \times (2/3) + 1/3] = 5/6. The travel time for a Lyft user will be \displaystyle [0.5 \times (2/3) + 1/3] = 2/3. Total travel time will be {7/9}, less than in the Nash equilibrium outcome. However, Lyft would offer travelers a lower travel time than Uber. This is because Uber, which has the bulk of travelers, must use the top path to reduce total travel times. If this were the case, travelers would switch from Uber to Lyft. This conclusion ignores prices, which at present are not part of the model. Suppose we include prices and assume that travelers now evaluate a ride hailing service based on delivered price, that is, price plus travel time. Thus, we are assuming that all travelers value time at $1 a unit of time. The volume of customers served by Uber and Lyft is no longer fixed and they will focus on minimizing average travel time per customer. A plausible guess is that there will be an equal price equilibrium where travelers divide evenly between the two services, i.e., {\alpha = 0.5}. Each service will route {1/3} of its customers through the top and the remainder through the bottom. Average travel time per customer will be {7/9}.
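The arithmetic for the {\alpha = 2/3} case above can be checked numerically. This sketch uses exact rationals; the variable names are mine, the formulas are the post's:

```python
# Numerical check of the Uber-vs-Lyft routing game with Uber market
# share a = 2/3 (names are illustrative).
from fractions import Fraction

a = Fraction(2, 3)
lam1 = 1 - Fraction(1, 3) / a        # Uber's fraction on the top path = 1/2
lam2 = 1 - Fraction(1, 3) / (1 - a)  # Lyft's fraction on the top path = 0

# Travel time on the bottom path equals the measure of travelers using it.
bottom_time = a * (1 - lam1) + (1 - a) * (1 - lam2)   # = 2/3
uber_avg = lam1 * 1 + (1 - lam1) * bottom_time        # = 5/6
lyft_avg = lam2 * 1 + (1 - lam2) * bottom_time        # = 2/3
total = a * uber_avg + (1 - a) * lyft_avg             # = 7/9

assert total == Fraction(7, 9)  # less than the all-bottom Nash outcome of 1
```

Using Fraction rather than floats keeps the check exact, so the 5/6, 2/3, and 7/9 values in the text can be verified without rounding tolerances.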
However, total travel time on the bottom will be {2/3}, giving every customer an incentive to opt out and drive their own car on the bottom path. What this simple-minded analysis highlights is that the benefits of coordination may be hard to achieve if travelers can opt out and drive themselves. To minimize congestion, the ride hailing services must limit traffic on the bottom path, which is the one that is congestible. However, doing so makes it attractive in terms of travel time, encouraging travelers to opt out. You shouldn’t swing a dead cat, but if you did, you’d hit an economist doing data. Wolfers wrote: Knee-deep usually goes with shit, while mired with bog. I’ll pick bog over shit, but suspect that that was not Wolfers’ intent. The recent paper by Chang and Li about the difficulty of replicating empirical papers does rather take the wind out of the empirical sails. One cannot help but wonder about the replicability of replicability studies. No doubt, a paper on the subject will be forthcoming. Noah Smith on his blog wrote: So the supply of both good and mediocre empirics has increased, but only the supply of mediocre theory has increased. And demand for good papers – in the form of top-journal publications – is basically constant. The natural result is that empirical papers are crowding out theory papers. Even if one accepts the last sentence, the first can only be conjecture. One might very well think that the supply of mediocre empirical papers is caused entirely by an increase in the supply of mediocre theory papers whose deficiencies are glossed over with a patina of empirics. Interestingly, when reviewers could find nothing nice to say about Piketty’s theories they praised his data instead. It’s like praising the author of a false theorem by saying that while the proof is wrong, it is long. The whole business has the feel of tulip mania.
Empirical papers as abundant as weeds. Analytics startups as plentiful as hedge funds. Analytics degree programs spreading like herpes. Positively Gradgrindian. In empirical econ classes around the world I imagine (because I’ve never been in one) Gradgrindian figures laying down the law: I have nothing against facts. I am quite partial to some. But, they do not speak for themselves without an underlying theory. Bush II: Mike Rogers, GOP chairman of the House Intelligence Committee: Putin is playing chess while Obama is playing marbles. Sarah Palin: Rudolph Giuliani: Now, compare with Trump’s slogan to make America great again.
Feb 19, 2013 • For the first time, U.S. Geological Survey scientists have mapped long-term average evapotranspiration rates across the continental United States—a crucial tool for water managers and planners because of the huge role evapotranspiration plays in water availability. Why are evapotranspiration rates so important to know? It’s because the amount of water available for people and ecosystems is the amount of annual precipitation—that is, snow or rain—minus the amount of annual evapotranspiration. Evapotranspiration itself is the amount of water lost to the atmosphere from the ground surface. Much of this loss is the result of the “transpiration” of water by plants, which is the plant equivalent of breathing. Just as people release water vapor when they breathe, plants do too. “Since evapotranspiration consumes more than half of the precipitation that happens every year, knowing the evapotranspiration rates in different regions of the country is a solid leap forward in enabling water managers and policy makers to know how much water is available for use in their specific region,” said Bill Werkheiser, associate director for water at the USGS. “Just as importantly,” he added, “this knowledge will help them better plan for the water availability challenges that will occur as our climate changes since transpiration rates vary widely depending on factors such as temperature, humidity, precipitation, soil type, and wind.” Estimated mean annual ratio of actual evapotranspiration (ET) to precipitation (P) for the conterminous U.S. for the period 1971-2000. Image credit: Sanford & Selnick Source: Ward Sanford & Ann-Berry Wade, USGS
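The water-balance arithmetic described above is simple enough to state directly. The numbers in this sketch are purely hypothetical, chosen only to illustrate the relationship, and are not USGS figures:

```python
# Water balance: available water = precipitation (P) - evapotranspiration (ET).
# Illustrative numbers only -- not USGS data.
precip_mm = 800.0   # assumed annual precipitation for some region
et_over_p = 0.6     # assumed ET/P ratio ("more than half" of P)

et_mm = precip_mm * et_over_p
available_mm = precip_mm - et_mm
assert abs(available_mm - 320.0) < 1e-9
```

This is why the mapped ET/P ratio is useful on its own: given local precipitation, it immediately yields an estimate of the water left over for people and ecosystems.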
The remains of a miniature version of the Tyrannosaurus rex have been unearthed – with claws that would rival the king of the dinosaurs. Mini Tyrannosaurus rex discovered The Linhenykus monodactylus is a miniature version of T. rex (Pic: PA) Linhenykus monodactylus was only half a metre tall and weighed about as much as a large parrot, but had a single, large claw instead of the three fingers typical of its theropod relatives. Scientists believe the mini T. rex may have used its dino-digits to dig into insect nests. It is the only known dinosaur with one finger, said researchers. The creature belonged to the alvarezsauroids, a branch of the ‘theropod’ family of carnivorous dinosaurs which gave rise to modern birds and also included the Velociraptor. Michael Pittman, from University College London, one of the scientists who describe the find in the journal Proceedings of the National Academy of Sciences, said: ‘Non-avian theropods start with five fingers but evolved to have only three fingers in later forms. ‘Tyrannosaurs were unusual in having just two fingers, but the one-fingered Linhenykus shows how extensive and complex theropod hand modifications really were.’ The tiny terror was found in a dinosaur graveyard in the Wulansuhai Formation in Inner Mongolia and lived up to 84 million years ago.
Public Release:  Study shows both a Mediterranean diet and diets low in available carbohydrates protect against type 2 diabetes People with a Mediterranean diet score (MDS) of over 6 were 12% less likely to develop diabetes than those with the lowest MDS of 3 or under. Patients with the highest available carbohydrate in their diet were 21% more likely to develop diabetes than those with the lowest. A high MDS combined with low available carbohydrate reduced the chances of developing diabetes by 20% as compared with a diet low in MDS and high in glycaemic load (GL). The authors say: "The role of the Mediterranean diet in weight control is still controversial, and in most studies from Mediterranean countries the adherence to the Mediterranean diet was unrelated to overweight. This suggests that the protection of the Mediterranean diet against diabetes is not through weight control, but through several dietary characteristics of the Mediterranean diet. However, this issue is difficult to address in cohort studies because of the lack of information on weight changes during follow-up that are rarely recorded." They point out that a particular feature of the Mediterranean diet is the use of extra virgin olive oil which leads to a high ratio of monounsaturated to saturated fatty acids. But again research here has been conflicting. One review of dietary fat and diabetes suggests that replacing saturated and trans fats with unsaturated fats has beneficial effects on insulin sensitivity and is likely to reduce the risk of type 2 diabetes. However, in a randomised trial of high-cardiovascular-risk individuals who were assigned to the Mediterranean diet supplemented with either free extra virgin olive oil or nuts and were compared with individuals on a low-fat diet (comparison group), there was no difference in diabetes occurrence between the two variants of the Mediterranean diet when compared with the comparison group.
Regarding GL, the authors say: "High GL diet leads to rapid rises in blood glucose and insulin levels. The chronically increased insulin demand may eventually result in pancreatic β cell failure and, as a consequence, impaired glucose tolerance and increased insulin resistance, which is a predictor of diabetes. A high dietary GL has also been unfavourably related to glycaemic control in individuals with diabetes." They conclude: "A low GL diet that also adequately adheres to the principles of the traditional Mediterranean diet may reduce the incidence of type 2 diabetes."
© 2016 Shmoop University, Inc. All rights reserved. The Return of the King by J.R.R. Tolkien Language and Communication Quotes How we cite our quotes: (Book.Chapter.Paragraph). Quote #10 The legends, histories, and lore to be found in the sources are very extensive. Only selections from them, in most places much abridged, are here presented. Their principal purpose is to illustrate the War of the Ring and its origins, and to fill up some of the gaps in the main story. The ancient legends of the First Age, in which Bilbo's chief interest lay, are very briefly referred to, since they concern the ancestry of Elrond and the Númenorean kings and chieftains. Actual extracts from longer annals and tales are placed within quotation marks. Insertions of later date are enclosed in brackets. Notes within quotation marks are found in the sources. Others are editorial. (Appendix A, 2) In another book series, we might think these appendices were a joke or a parody, they are so formal and academic. Come on—these aren't actual extracts. These stories aren't even real! Middle-earth doesn't exist! But maybe the fact that the entire cycle is in the guise of a factual history is meant to tell us something about the power of stories. We have said before that Tolkien is really superb at world creation, and this is part of it. It's almost as though Tolkien himself has faith that Middle-earth exists, and that he is just editing or recording true stories (rather than writing them). It makes it easier for us to believe in the rich lore of Middle-earth because Tolkien appears to believe it himself.
1. Java Session 7 2. Methods of super class Object • toString() Method : Override toString() when you want a mere mortal to be able to read something meaningful about the objects of your class. Code can call toString() on your object when it wants to read useful details about your object. Refer : • boolean equals (Object obj) : Decides whether two objects are meaningfully equivalent. The equals() method in class Object uses only the == operator for comparisons, so unless you override equals(), two objects are considered equal only if the two references refer to the same object. Ex : • void finalize() : Called by the garbage collector when it sees that the object can no longer be referenced. You cannot force a call to the garbage collector. • int hashCode() : Returns a hashcode int value for an object, so that the object can be used in Collection classes that use hashing, including Hashtable, HashMap, and HashSet. • final void notify() : Wakes up a thread that is waiting for this object’s lock. • final void notifyAll() : Wakes up all threads that are waiting for this object’s lock. • final void wait() : Causes the current thread to wait until another thread calls notify() or notifyAll() on this object. • String toString() : Returns a “text representation” of the object. 3. The equals() contract : • It is reflexive. For any reference value x, x.equals(x) should return true. • It is symmetric. For any reference values x and y, x.equals(y) should return true if and only if y.equals(x) returns true. • It is transitive. For any reference values x, y, and z, if x.equals(y) returns true and y.equals(z) returns true, then x.equals(z) must return true. • It is consistent.
For any reference values x and y, multiple invocations of x.equals(y) consistently return true or consistently return false, provided no information used in equals() comparisons on the objects is modified.
• For any non-null reference value x, x.equals(null) should return false.
Overriding hashCode() :
• Hashcodes are typically used to increase the performance of large collections of data.
• In real-life hashing, it's not uncommon to have more than one entry in a bucket. Hashing retrieval is a two-step process: 1. Find the right bucket (using hashCode()). 2. Search the bucket for the right element (using equals()).
• Our hashCode() algorithm should distribute objects evenly across buckets.
• Ex :
4. The hashCode() contract :
• Whenever it is invoked on the same object more than once during an execution of a Java application, the hashCode() method must consistently return the same integer, provided no information used in equals() comparisons on the object is modified. This integer need not remain consistent from one execution of an application to another execution of the same application.
• If two objects are equal according to the equals(Object) method, then calling the hashCode() method on each of the two objects must produce the same integer result.
• It is NOT required that if two objects are unequal according to the equals(java.lang.Object) method, then calling the hashCode() method on each of the two objects must produce distinct integer results. However, the programmer should be aware that producing distinct integer results for unequal objects may improve the performance of hashtables.
• x.equals(y) == true implies x.hashCode() == y.hashCode().
• Transient variables can really mess with your equals() and hashCode() implementations. Keep variables non-transient or, if they must be marked transient, don't use them to determine hashcodes or equality.
• Ex :
5. Collections
• What Do You Do with a Collection?
There are a few basic operations you'll normally use with collections:
1. Add objects to the collection.
2. Remove objects from the collection.
3. Find out if an object (or group of objects) is in the collection.
4. Retrieve an object from the collection (without removing it).
5. Iterate through the collection, looking at each element (object) one after another.
Core interfaces : Concrete implementations :
6. Class hierarchy of collections
7. Collections come in four basic flavors:
• Lists Lists of things (classes that implement List).
• Sets Unique things (classes that implement Set).
• Maps Things with a unique ID (classes that implement Map).
• Queues Things arranged by the order in which they are to be processed.
Ordered collections : When a collection is ordered, it means you can iterate through the collection in a specific (not-random) order.
• A Hashtable collection is not ordered. Although the Hashtable itself has internal logic to determine the order (based on hashcodes and the implementation of the collection itself), you won't find any order when you iterate through the Hashtable.
• An ArrayList, however, keeps the order established by the elements' index position (just like an array).
• LinkedHashSet keeps the order established by insertion, so the last element inserted is the last element in the LinkedHashSet (as opposed to an ArrayList, where you can insert an element at a specific index position).
Sorted collections : A sorted collection means that the order in the collection is determined according to some rule or rules, known as the sort order. A sort order has nothing to do with when an object was added to the collection, or when it was last accessed, or what "position" it was added at. Sorting is done based on properties of the objects themselves.
• A collection that keeps an order (such as any List, which uses insertion order) is not really considered sorted unless it sorts using some kind of sort order.
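The ordered-vs-sorted distinction above can be sketched in a few lines. This is a minimal illustrative example (the class name is hypothetical; only standard java.util types are used): an ArrayList preserves insertion order, while a TreeSet sorts by natural order regardless of how the elements went in.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class OrderedVsSorted {
    public static void main(String[] args) {
        // Ordered: ArrayList keeps insertion/index order, but does no sorting.
        List<String> list = new ArrayList<>(Arrays.asList("banana", "apple", "cherry"));
        System.out.println(list);      // [banana, apple, cherry]

        // Sorted: TreeSet iterates in natural (ascending) order,
        // no matter what order the elements were inserted in.
        Set<String> treeSet = new TreeSet<>(list);
        System.out.println(treeSet);   // [apple, banana, cherry]
    }
}
```

A HashSet built from the same list would fall into the third category: neither ordered nor sorted, with no iteration-order guarantee at all.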
Most commonly, the sort order used is something called the natural order.
8. List Interface
• A List cares about the index. The one thing that List has that non-lists don't have is a set of methods related to the index. Those key methods include things like get(int index), indexOf(Object o), add(int index, Object obj), and so on. All three List implementations are ordered by index position—a position that you determine either by setting an object at a specific index or by adding it without specifying position, in which case the object is added to the end.
• ArrayList : Think of this as a growable array. It gives you fast iteration and fast random access. To state the obvious: it is an ordered collection (by index), but not sorted. You might want to know that as of version 1.4, ArrayList implements the RandomAccess interface—a marker interface (meaning it has no methods) that says, "this list supports fast (generally constant time) random access." Choose this over a LinkedList when you need fast iteration but aren't as likely to be doing a lot of insertion and deletion.
• Vector : Vector is a holdover from the earliest days of Java; Vector and Hashtable were the two original collections; the rest were added with Java 2 versions 1.2 and 1.4. A Vector is basically the same as an ArrayList, but Vector methods are synchronized for thread safety. You'll normally want to use ArrayList instead of Vector because the synchronized methods add a performance hit you might not need. And if you do need thread safety, there are utility methods in class Collections that can help. Vector is the only class other than ArrayList to implement RandomAccess.
• LinkedList : A LinkedList is ordered by index position, like ArrayList, except that the elements are doubly-linked to one another.
This linkage gives you new methods (beyond what you get from the List interface) for adding and removing from the beginning or end, which makes it an easy choice for implementing a stack or queue. Keep in mind that a LinkedList may iterate more slowly than an ArrayList, but it's a good choice when you need fast insertion and deletion. As of Java 5, the LinkedList class has been enhanced to implement the java.util.Queue interface. As such, it now supports the common queue methods: peek(), poll(), and offer().
9. Set Interface
• A Set cares about uniqueness—it doesn't allow duplicates. Your good friend the equals() method determines whether two objects are identical (in which case only one can be in the set).
• HashSet : A HashSet is an unsorted, unordered Set. It uses the hashcode of the object being inserted, so the more efficient your hashCode() implementation, the better access performance you'll get. Use this class when you want a collection with no duplicates and you don't care about order when you iterate through it.
• LinkedHashSet : A LinkedHashSet is an ordered version of HashSet that maintains a doubly-linked list across all elements. Use this class instead of HashSet when you care about the iteration order. When you iterate through a HashSet the order is unpredictable, while a LinkedHashSet lets you iterate through the elements in the order in which they were inserted.
• TreeSet : The TreeSet is one of two sorted collections (the other being TreeMap). It uses a Red-Black tree structure (but you knew that), and guarantees that the elements will be in ascending order, according to natural order. Optionally, you can construct a TreeSet with a constructor that lets you give the collection your own rules for what the order should be (rather than relying on the ordering defined by the elements' class) by passing in a Comparator. As of Java 6, TreeSet implements NavigableSet.
10. Map Interface
• A Map cares about unique identifiers.
You map a unique key (the ID) to a specific value, where both the key and the value are, of course, objects. The Map implementations let you do things like search for a value based on the key, ask for a collection of just the values, or ask for a collection of just the keys. Like Sets, Maps rely on the equals() method to determine whether two keys are the same or different.
• HashMap : The HashMap gives you an unsorted, unordered Map. When you need a Map and you don't care about the order (when you iterate through it), then HashMap is the way to go; the other maps add a little more overhead. Where the keys land in the Map is based on the key's hashcode, so, like HashSet, the more efficient your hashCode() implementation, the better access performance you'll get. HashMap allows one null key and multiple null values in a collection.
• Hashtable : Like Vector, Hashtable has existed from prehistoric Java times. Just as Vector is a synchronized counterpart to the sleeker, more modern ArrayList, Hashtable is the synchronized counterpart to HashMap. Remember that you don't synchronize a class, so when we say that Vector and Hashtable are synchronized, we just mean that the key methods of the class are synchronized. Another difference, though, is that while HashMap lets you have null values as well as one null key, a Hashtable doesn't let you have anything that's null.
• LinkedHashMap : Like its Set counterpart, LinkedHashSet, the LinkedHashMap collection maintains insertion order (or, optionally, access order). Although it will be somewhat slower than HashMap for adding and removing elements, you can expect faster iteration with a LinkedHashMap.
• TreeMap : You can probably guess by now that a TreeMap is a sorted Map. And you already know that by default, this means "sorted by the natural order of the elements."
Like TreeSet, TreeMap lets you define a custom sort order (via a Comparator) when you construct a TreeMap, which specifies how the elements should be compared to one another when they're being ordered. As of Java 6, TreeMap implements NavigableMap.
11. Queue Interface
• A Queue is designed to hold a list of "to-dos," or things to be processed in some way. Although other orders are possible, queues are typically thought of as FIFO (first-in, first-out). Queues support all of the standard Collection methods and they also add methods to add, remove, and review queue elements.
• PriorityQueue : This class is new with Java 5. Since the LinkedList class has been enhanced to implement the Queue interface, basic queues can be handled with a LinkedList. The purpose of a PriorityQueue is to create a "priority-in, priority-out" queue as opposed to a typical FIFO queue. A PriorityQueue's elements are ordered either by natural ordering (in which case the elements that are sorted first will be accessed first) or according to a Comparator. In either case, the elements' ordering represents their relative priority.
12.
• ArrayList : The java.util.ArrayList class is one of the most commonly used of all the classes in the Collections Framework. It's like an array on vitamins. Some of the advantages ArrayList has over arrays are:
• It can grow dynamically.
• It provides more powerful insertion and search mechanisms than arrays.
• Autoboxing with collections : In general, collections can hold Objects but not primitives. Prior to Java 5, a very common use for the wrapper classes was to provide a way to get a primitive into a collection: you had to wrap the primitive by hand before you could put it into a collection. With Java 5, primitives still have to be wrapped, but autoboxing takes care of it for you.
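The before-and-after of autoboxing can be shown in a short sketch (hypothetical class name; standard java.util and java.lang types only):

```java
import java.util.ArrayList;
import java.util.List;

public class AutoboxingDemo {
    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>();

        // Pre-Java 5 style: wrap the primitive by hand before adding it.
        ints.add(Integer.valueOf(42));

        // Java 5+: autoboxing wraps the int into an Integer for you...
        ints.add(7);

        // ...and auto-unboxing converts back to a primitive on the way out.
        int sum = ints.get(0) + ints.get(1);
        System.out.println(sum);   // 49
    }
}
```

Either way, what the collection actually holds is Integer objects; autoboxing only removes the boilerplate, not the wrapping itself.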
Ex : Sorting Collections and Arrays :
• Using Collections.sort :
• Comparable interface : The Comparable interface is used by the Collections.sort() method and the java.util.Arrays.sort() method to sort Lists and arrays of objects, respectively. To implement Comparable, a class must implement a single method, compareTo():
int x = thisObject.compareTo(anotherObject);
The compareTo() method returns an int with the following characteristics:
negative if thisObject < anotherObject
zero if thisObject == anotherObject
positive if thisObject > anotherObject
Ex :
13.
• Remember that when you override equals() you MUST take an argument of type Object, but that when you override compareTo() you should take an argument of the type you're sorting.
Sorting with Comparator : The Comparator interface gives you the capability to sort a given collection any number of different ways. The other handy thing about the Comparator interface is that you can use it to sort instances of any class—even classes you can't modify—unlike the Comparable interface, which forces you to change the class whose instances you want to sort. The Comparator interface is also very easy to implement, having only one method, compare().
• The method returns an int whose meaning is the same as the Comparable.compareTo() method's return value.
Ex :
14.
• Sorting with the Arrays class : Sorting arrays of objects is just like sorting collections of objects. The Arrays.sort() method is overloaded in the same way the Collections.sort() method is:
• Arrays.sort(arrayToSort)
• Arrays.sort(arrayToSort, Comparator)
• Searching arrays and collections : Searches are performed using the binarySearch() method.
• Successful searches return the int index of the element being searched for.
• Unsuccessful searches return an int index that represents the insertion point. The insertion point is the place in the collection/array where the element would be inserted to keep the collection/array properly sorted.
Because positive return values and 0 indicate successful searches, the binarySearch() method uses negative numbers to indicate insertion points. Since 0 is a valid result for a successful search, the first available insertion point is -1. Therefore, the actual insertion point is represented as (-(insertion point) - 1). For instance, if the insertion point of a search is at element 2, the actual insertion point returned will be -3.
• The collection/array being searched must be sorted before you can search it.
• If you attempt to search an array or collection that has not already been sorted, the results of the search will not be predictable.
• If the collection/array you want to search was sorted in natural order, it must be searched in natural order. (Usually this is accomplished by NOT sending a Comparator as an argument to the binarySearch() method.)
• If the collection/array you want to search was sorted using a Comparator, it must be searched using the same Comparator, which is passed as the second argument to the binarySearch() method. Remember that Comparators cannot be used when searching arrays of primitives.
• Ex :
15.
• Converting Arrays to Lists to Arrays : There are a couple of methods that allow you to convert arrays to Lists, and Lists to arrays. The List and Set classes have toArray() methods, and the Arrays class has a method called asList().
• The Arrays.asList() method copies an array into a List. The API says, "Returns a fixed-size list backed by the specified array. (Changes to the returned list 'write through' to the array.)" When you use the asList() method, the array and the List become joined at the hip. When you update one of them, the other gets updated automatically.
• Ex :
• Iterator : The two Iterator methods you need to understand for the exam are:
boolean hasNext() : Returns true if there is at least one more element in the collection being traversed. Invoking hasNext() does NOT move you to the next element of the collection.
Object next() : This method returns the next object in the collection, AND moves you forward to the element after the element just returned.
Ex :
• TreeSet and TreeMap : TreeSet and TreeMap implement the interfaces java.util.NavigableSet and java.util.NavigableMap, respectively.
Ex :
• Polling : The idea of polling is that we want both to retrieve and remove an element from either the beginning or the end of a collection. In the case of TreeSet, pollFirst() returns and removes the first entry in the set, and pollLast() returns and removes the last. Similarly, TreeMap provides pollFirstEntry() and pollLastEntry() to retrieve and remove key-value pairs.
• Descending Order : These methods return a collection in the reverse order of the collection on which the method was invoked. The method calls are TreeSet.descendingSet() and TreeMap.descendingMap().
16.
• PriorityQueue : Unlike basic queue structures that are first-in, first-out by default, a PriorityQueue orders its elements using a user-defined priority. The priority can be as simple as natural ordering (in which, for instance, an entry of 1 would be a higher priority than an entry of 2). In addition, a PriorityQueue can be ordered using a Comparator, which lets you define any ordering you want. Queues have a few methods not found in other collection interfaces: peek(), poll(), and offer().
• peek() returns the highest-priority element in the queue without removing it, and poll() returns the highest-priority element AND removes it from the queue. Use the offer() method to add elements to the PriorityQueue.
• PriorityQueue doesn't allow null values, and we can't create a PriorityQueue of objects that are non-comparable, for example any custom classes we have. We use the Java Comparable and Comparator interfaces for sorting objects, and PriorityQueue uses them for priority processing of its elements.
• The head of the priority queue is the least element based on the natural ordering or Comparator-based ordering; if there are multiple objects with the same ordering, it can poll any one of them. When we poll the queue, it returns the head object from the queue.
• PriorityQueue size is unbounded, but we can specify the initial capacity at the time of its creation. When we add elements to the priority queue, its capacity grows automatically.
• PriorityQueue is not thread-safe, so Java provides the PriorityBlockingQueue class, which implements the BlockingQueue interface, for use in a multi-threaded environment.
• The PriorityQueue implementation provides O(log(n)) time for the enqueuing and dequeuing methods. Let's see an example of PriorityQueue with natural ordering as well as with a Comparator.
• Ex :
17. Generics
• Java generic methods and generic classes enable programmers to specify, with a single method declaration, a set of related methods or, with a single class declaration, a set of related types, respectively.
• Generics also provide compile-time type safety that allows programmers to catch invalid types at compile time.
• Using the Java generics concept, we might write a generic method for sorting an array of objects, then invoke the generic method with Integer arrays, Double arrays, String arrays and so on, to sort the array elements.
• The type in angle brackets is referred to as either the "parameterized type," "type parameter," or of course just old-fashioned "type."
• The JVM has no idea that your ArrayList was supposed to hold only Integers. The typing information does not exist at runtime! All your generic code is strictly for the compiler. Through a process called "type erasure," the compiler does all of its verifications on your generic code and then strips the type information out of the class bytecode. At runtime, ALL collection code—both legacy and new Java 5 code you write using generics—looks exactly like the pre-generic version of collections.
None of your typing information exists at runtime.
• Why did they do generics this way? Why is there no type information at runtime? To support legacy code. At runtime, collections are collections just like the old days. What you gain from using generics is compile-time protection that guarantees that you won't put the wrong thing into a typed collection, and it also eliminates the need for a cast when you get something out, since the compiler already knows that only an Integer is coming out of an Integer list.
• <? extends Animal> means I can be assigned a collection that is a subtype of List and typed for <Animal> or anything that extends Animal.
• <? extends Animal> also means that you can take any subtype of Animal; however, that subtype can be EITHER a subclass of a class (abstract or concrete) OR a type that implements the interface after the word extends. In other words, the keyword extends in the context of a wildcard represents BOTH subclasses and interface implementations. There is no <? implements Serializable> syntax. If you want to declare a method that takes anything that is of a type that implements Serializable, you'd still use extends like this:
void foo(List<? extends Serializable> list) // odd, but correct to use "extends"
• There is ONE wildcard keyword that represents both interface implementations and subclasses. And that keyword is extends.
18. Generics (contd.)
• There is another scenario where you can use a wildcard AND still add to the collection, but in a safe way—the keyword super.
• List<?>, which is the wildcard <?> without the keywords extends or super, simply means "any type." So that means any type of List can be assigned to the argument. That could be a List of <Dog>, <Integer>, <JButton>, <Socket>, whatever. And using the wildcard alone, without the keyword super (followed by a type), means that you cannot ADD anything to the list referred to as List<?>.
• List<Object> is completely different from List<?>.
List<Object> means that the method can take ONLY a List<Object>. Not a List<Dog>, or a List<Cat>. It does, however, mean that you can add to the list, since the compiler has already made certain that you're passing only a valid List<Object> into the method.
• List<? extends Object> and List<?> are absolutely identical!
• One way to remember this is that if you see the wildcard notation (a question mark ?), this means "many possibilities." If you do NOT see the question mark, then it means the <type> in the brackets, and absolutely NOTHING ELSE. Ex :
• It's tempting to forget that the method argument is NOT where you declare the type parameter variable T. In order to use a type variable like T, you must have declared it either as the class parameter type or in the method, before the return type.
• Generic API example :
• Generic class :
• Generic methods :
• One of the most common mistakes programmers make when creating generic classes or methods is to use a <?> in the wildcard syntax rather than a type variable <T>, <E>, and so on. This code might look right, but isn't:
public class NumberHolder<?> { ? aNum; } // NO!
public class NumberHolder<T> { T aNum; } // Yes
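The wildcard and type-variable rules above can be condensed into one small sketch (the class and method names are hypothetical, chosen only for illustration): a bounded wildcard lets a method read from any List of Numbers, while a properly declared type variable T lets a generic method both create and populate a typed list.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class WildcardDemo {
    // <? extends Number> accepts a List<Integer>, List<Double>, etc.
    // Reading is safe, but the compiler forbids adding to the list.
    static double sum(List<? extends Number> nums) {
        double total = 0;
        for (Number n : nums) {
            total += n.doubleValue();
        }
        return total;
    }

    // A generic method: the type variable T is declared BEFORE the
    // return type, not in the argument list.
    static <T> List<T> pair(T a, T b) {
        List<T> out = new ArrayList<>();
        out.add(a);   // adding is fine here: the list is typed for T
        out.add(b);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(sum(Arrays.asList(1, 2, 3)));    // 6.0
        System.out.println(sum(Arrays.asList(1.5, 2.5)));   // 4.0
        System.out.println(pair("a", "b"));                 // [a, b]
    }
}
```

Passing a List<String> to sum() would fail to compile, which is exactly the compile-time protection generics are meant to provide; at runtime, after type erasure, both lists are just plain Lists.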
Heliotrope Warning
Claim:   The flowering plant heliotrope is toxic to dogs and can cause death to those who ingest it.
Example:   [Collected via Facebook, 2013] FB friends, we got the biopsy report back and with great sorrow I must share this... Our darling girl died from the toxin in this plant that I have on my deck. It is called heliotrope and is highly toxic, causes total liver destruction. The pathologist said our angel had the worst liver damage he's ever seen. Goldie would nibble at the leaves of this plant every so often and we had no clue it was toxic. (It can come in white or purple.) Please share with any dog owners you know to hopefully prevent their dog from becoming a statistic like Goldie. We are even more heartbroken now knowing her death was preventable. Please share her story so that something positive may come of it and create awareness of toxic plants. Our own vet had no idea this was a toxic plant!!
Origins:   Heliotropium is a genus of flowering plants which includes a few hundred different species commonly known as "heliotropes," the best known being a plant that produces pink-purple flowers. Heliotropes are generally found in the eastern U.S. from Florida up to New Jersey, and sometimes as far north as northern New England. The ASPCA's Animal Poison Control Center article on heliotropes lists them as a substance which is toxic to horses and can induce liver failure in equines. The ASPCA's listing does not declare the heliotrope to be toxic to dogs, however. Likewise, other sources mention the toxic effects of heliotrope on horses, pigs, and cows but make no mention of dogs: This plant should be considered toxic as it contains the pyrrolizidine alkaloids lycopsamine, intermedine, and echiumine. Ingestion can cause severe illness and possibly death in horses, swine, and cattle.
The alkaloids are potent liver toxins that under some conditions can be carcinogenic. For horses that have ingested a potentially lethal amount of the plant and/or are suffering advanced symptoms, the illness has been termed "walking disease" or "sleepy staggers," the names being a reference to the fact that affected horses may appear blind and wander aimlessly, walking in circles or bumping into objects. Other visible symptoms that are typically associated with severe intoxication include: muscle tremors, especially of the head and neck; frequent yawning; copper-colored or red urine; difficulty or inability to swallow (horses may stop eating halfway through a mouthful of food); standing with the head held down; head pressing; dragging of the hind legs, causing the hooves to have worn tips; and random attacks of frenzy and violent, uncontrollable galloping. Once an animal begins to show signs of severe intoxication there is little that can be done to stop disease progression and inevitable liver damage. As a result, prevention is the best treatment option. Luckily the plant is not very palatable and most animals will completely ignore it unless no other forage is available. Poisonings typically occur from ingestion of the green plant or when the plant becomes a contaminant in hay. Always check hay for signs of contaminants and ensure animals are provided plenty of quality hay and feed; if animals are left to graze, ensure the pasture provides plenty of non-hazardous plants to forage upon. A good default is to assume that any plant can be toxic or cause an allergic reaction in your pet. If your dog or cat is nibbling on the leaves of something, take it away and check to find out if it's harmful. The ASPCA lists 448 plants known to be toxic to dogs, cats, and horses, and many of these are common houseplants. Last updated:   22 April 2013
Place: Jinotega, Nicaragua
Alt names: Jinotega (source: Wikipedia)
Coordinates: 13.75°N 85.583°W
Located in: Nicaragua
Sources: Getty Thesaurus of Geographic Names; Family History Library Catalog
The text in this section is copied from an article in Wikipedia.
Jinotega is the second largest department in Nicaragua. It is bordered on the north by the country of Honduras. The departments surrounding it are Matagalpa to the south, Región Autónoma del Atlántico Norte to the east, and Estelí, Madriz, and Nueva Segovia. It covers an area of 9,755 km² and has a population of 297,300 (2005 census). The capital is the city of Jinotega.
Research Tips
This page uses content from the English Wikipedia. The original content was at Jinotega (department). The list of authors can be seen in the page history. As with WeRelate, the content of Wikipedia is available under the Creative Commons Attribution/Share-Alike License.
Talk:Galaxy rotation curve
From Wikipedia, the free encyclopedia
WikiProject Physics (Rated B-class, Mid-importance)
WikiProject Astronomy (Rated C-class, High-importance)
Galaxy rotation curve is within the scope of WikiProject Astronomy, which collaborates on articles related to Astronomy on Wikipedia.
Mass Distribution Models
Models of mass distribution show rotation curves calculated entirely according to the Newtonian formula that match observation. This should be included in the article. A uniform spherical distribution model shows a rotation curve that slopes steeply up from the center. A uniform disk model shows a rotation curve that slopes less steeply. A disk with a bulge of increased mass in the middle shows the rotation curve noted in the article: rising steeply in the bulge, like the uniform spherical distribution, then flat in the disk. An all-mass-in-the-middle model, like the solar system, shows a rotation curve that slopes down, as the "predicted" line in the diagram. Since these models predict rotation curves that match observations in all cases, they should be included. These models also predict luminous and opaque matter distributions that match observation. (talk) 13:49, 24 May 2012 (UTC)
The equation that claims to relate the mass density distribution to the rotation curve is not correct --- or at least it does not follow from Kepler's third law without ascribing some unusual meaning to the "radial density profile". Assuming spherical symmetry (is this reasonable?), a correct equation relating the density to the velocity curve is surely v²(r) = G M(r)/r = (4πG/r) ∫₀^r ρ(r′) r′² dr′, where M(r) is the mass enclosed within radius r. I'd change the equation, but I'm not sure how the change would affect the rest of the section, which is confusing enough as it stands. Mike Stone (talk) 15:13, 19 March 2013 (UTC)
I can only underline that Eq. (??) is wrong. I think the derivation of the correct formula is simple enough to be stated in the Wikipedia.
It can be done in the first semester in the usual physics curriculum at universities. The first important feature of Newtonian gravity, which is sufficient for this purpose, is that the gravitational force on a body moving in an extended, radially symmetric (around a center) mass distribution that constitutes the gravitational field is given by
F(r) = −(G m M(r)/r²) r̂,   (1)
where G is Newton's gravitation constant, m is the mass of the body, r is the position vector relative to the center of the mass distribution (with r̂ the corresponding unit vector), and M(r) is the total mass contained in a sphere of radius r around the center, i.e.,
M(r) = 4π ∫₀^r dr′ r′² ρ(r′).   (2)
To get the rotation curve, we make the simplifying assumption that the body runs on a circular orbit. Then you equate the centripetal force, necessary to keep the body on this orbit, with the gravitational force, which gives
m v²(r)/r = G m M(r)/r².   (3)
From (3) and (2) we find the relation of the velocity profile to the density distribution as
v²(r) = G M(r)/r = (4πG/r) ∫₀^r dr′ r′² ρ(r′).   (4)
Vanhees71 (talk) 08:57, 19 December 2013 (UTC)
The equation is correct (e.g. equation 3.20 on p. 100 of the Sparke & Gallagher 2000 textbook). What's confusing is that the Wikipedia article expresses the equation as v(r) = √((4/3)πG ρ̄ r²); ρ̄ is the average density within a sphere of radius r, not the local density at radius r. More commonly (including in Sparke & Gallagher), this equation is expressed using M(<r), the mass enclosed within a radius r. However, it is the mass enclosed, expressed either as M(<r) or via ρ̄ as in the current version of the Wikipedia article, which is relevant for Kepler's law. The local mass density is not relevant. —Alex (ASHill | talk | contribs) 03:37, 20 December 2013 (UTC)
Refactored from Archived Discussion
Elliptical orbits
One thing that I have never seen is an explanation of the galactic rotation curve that also explicitly takes into account the theory that the spiral arms are not in fact coherent but are a construct of the elliptical orbits of the stars that make them up, as explained by Image:Spiral galaxy arms diagram.png.
The implication of this is that a star in a spiral arm is near the aphelion end of the ellipse, and so is going more slowly than a star on a circular orbit at that distance would be. If the ellipses themselves are turning and giving the illusion of the spiral arms rotating evenly, then the discrepancy could disappear. It seems unlikely that this has been overlooked, but I'd be interested to see a discussion that includes this aspect. PhilHibbs | talk 18:41, 23 Feb 2005 (UTC)
Velocity vs Speed
The vertical axis of the graph really should be "Speed". I know that "Velocity" sounds more scientific, but velocity is a vector and that's not being represented on the graph, only its magnitude, and the magnitude of velocity is "speed". -- Ch'marr 15:29, 10 October 2005 (UTC) (the pedant)
Actually it would be correct to call it angular velocity, since that's what it is. 12:51, 4 August 2007 (UTC)
Not true. If it were angular velocity then it wouldn't be flat at large radii: v(r) ~ constant, but ω(r) = v(r)/r ∝ 1/r. And angular velocity is also a vector, although the distinction isn't important for a rotating disk. Cosmo0 —Preceding signed but undated comment was added at 18:41, 20 September 2007 (UTC)
Mysterious vs. Mundane Dark Matter
I've read in the past, either in "Light at the Edge of the Universe" or perhaps "The Whole Shebang" (I can't remember which), that the amount of matter required to explain the galaxy rotation curve effect was significantly less than the amount of matter required to explain the flat curvature of the universe in the FLRW metric. I should, I suppose, find the exact quote. For the galaxy rotation curve, it was estimated that approximately 90% of the matter of the universe had to be "dark," which to me seems no great stretch of the imagination, just considering free hydrogen in a more-or-less smooth distribution. In the region of planets, of course, you wouldn't find it, because it's been swept away by the condensed masses.
But the rings of Saturn (and the asteroid belt) are suggestive of a large amount of such sweeping. Anyway, a smooth region of gas would be consistent with the smoothness of the Cosmic Background. Consider that all the galaxies could be just condensations in a universe with an almost crystalline pattern of hydrogen gas, held apart by an almost perfectly symmetrical initial big bang. Galaxies could be whirlpools in a sea of hydrogen instead of a vacuum, as is usually assumed. While nonbaryonic dark matter seems to exist, it seems to appear mainly in the course of violent explosions, with a very short half-life, and in insufficient quantities to create the Galaxy Rotation Curve result. Corrections welcome. JDoolin 00:45, 22 August 2006 (UTC)

Reverted contributions[edit]

I reverted this contribution because it represents a rather extreme minority theory for an explanation of dark matter. If and when this idea receives more notice within the community we can begin to include it, but it will probably find itself at dark matter rather than here. ScienceApologist (talk) 18:22, 11 January 2008 (UTC)

I totally agree! It seems that it is very doubtful that in the end Dark Matter will still be considered, and to include something that has never been proven is wrong! Yes, it is a topic for 'Dark Matter', because that is where this unidentifiable stuff is being discussed. Further, we MUST make up our minds what 'Dark Matter' is. This topic says that 84.5% of the mass in the universe is made of 'Dark Matter', where the words 'Dark Matter' have an attached URL. Clicking on this takes us to a page that says the mass is made up of some 26.8% of Dark Matter. The 84.5% represents the sum of both the Dark Matter and the Dark Energy. This implies that the authors are either stupid or totally disorganised. I would suggest that it is totally removed from this topic and discussed in the 'Dark Matter' page.
At most, this page could just have half a sentence mentioning the total uncertainty in this area. GilR 23:52, 19 April 2015 (UTC)

BEC or Scalar field dark matter model (SFDM) of halos is not an extreme minority theory[edit]

Dear ScienceApologist, I don't know how to contact you, so I wrote a mail here. Thank you for your efforts, but the BEC or scalar field dark matter model (SFDM) of halos is not an extremely minority theory. Actually, this model is a main alternative to the standard cold dark matter model and the most successful model in explaining the rotation curves. It has many different names, such as fuzzy, fluid, repulsive, scalar field, boson halo, and so on. (Please see a review article in Science [1].) This model solves many problems of the cold dark matter model (CDM), such as the cuspy halo and missing satellite problems. There are already hundreds of journal papers published about this idea, and also many works saying the model predicts the observed rotation curves of galaxies and dwarf galaxies very well. (Please see review [2], page 21, and [3,4,5].) No other dark matter model has succeeded at this level in explaining the curves. Note that SFDM also behaves as CDM at scales larger than a galaxy, so it also explains large-scale structures as well as CDM does. Thus, I think, for balance, this model deserves space at least as much as the modified gravity theories, which seem not even to be accepted by the majority of the physics community currently. As far as I understand, the philosophy of Wikipedia is that users themselves make content and the others review and edit it. Many experts in this field are already reading and editing this subject (I am one of them). So, if my statements are wrong or missing something, or the model is an extreme minority view that does not deserve publication, they will soon edit it. So please let the experts do it. [1] "New Light on Dark Matter" Jeremiah P. Ostriker and Paul Steinhardt, Science 20 June 2003: Vol. 300, no. 5627, pp.
1909–1913 [2] "TOPICAL REVIEW: General relativistic boson stars" Franz E. Schunck and Eckehard W. Mielke, arXiv:0801.0307v1 [astro-ph] [3] "Is dark matter a BEC or scalar field?" Jae-Weon Lee, arXiv:0801.1442v1 [astro-ph] [4] "Mini-review on Scalar Field Dark Matter" L. Arturo Ureña-López [5] "Scalar Field Dark Matter: head-on interaction between two structures" Phys. Rev. D74 (2006) 103002 —Preceding unsigned comment added by Scikid (talkcontribs) 07:12, 12 January 2008

Just to chime in with my view: it may not be an extreme minority theory, but it is, at the moment, not well established as a serious alternative to CDM (despite your exaggerated account of its successes). In any case, as ScienceApologist says, the article on dark matter is the correct place for outlining different theories on the nature of dark matter. From the point of view of galaxy rotation curves, the main debate is over whether dark matter is necessary at all, although a single sentence pointing out that different kinds of dark matter predict slightly different rotation curves may be appropriate here. Cosmo0 (talk) 13:06, 12 January 2008 (UTC)

Direction of acceleration[edit]

It may be an over-simplification to assume a radial acceleration towards the centre of mass. From the viewpoint of a star near the rim of the galaxy, the concentration of mass in the spiral arms will act as a significant gravitational attractor, in comparison with the more distant central bulge. Envisaging the space as a deformed stretched membrane, the outer reaches of the galaxy will look like a corrugation of valleys and ridges corresponding to the arms and voids respectively. A test mass would roll forwards down one of these valleys and spiral in towards the centre of mass. The acceleration would therefore have a forward as well as an inward component, speeding up the rotation of the outer parts and helping to preserve the structure of the spiral arms.
This would explain at least part of the observed phenomenon without modifying Newton or positing dark matter. Brian O'Donnell (talk) 15:47, 29 July 2008 (UTC)

I've just discovered an explanation of the above with some arithmetic at [2]. Brian O'Donnell (talk) 07:22, 30 July 2008 (UTC)

What you say is perfectly true, but I don't quite see what your point is. This is a well known fact, even if we often neglect it in practice when considering the average motions of stars in a galaxy. It has no bearing on the need for dark matter - if that's what you're implying - since it doesn't contribute anything to the average rotation curve. Cosmo0 (talk) 14:23, 30 July 2008 (UTC)

The problem here seems to be getting enough mass at the outer reaches of the galaxy. But what if the spaces between the arms were actually empty voids and all the mass was concentrated in the spirals? I.e. just as it appears in a telescope. The arms would have to be semi-stable structures, e.g. vortices, and gravity would act in a line of thrust along the axis of each arm. A test mass released at the end of an arm would curve along it into the centre instead of a direct straight line. A body in orbit would be accelerated forwards as well as inwards and would keep up with the arm to a greater extent than in a uniform disc. The need for dark matter would therefore be reduced or even eliminated for spiral galaxies. Brian O'Donnell (talk) 13:15, 5 August 2008 (UTC)

If it is assumed that the mass distribution in the galaxy is uniform then a Newtonian solution is not possible. More subtly, if the idea of a centre of mass is assumed to be accurate enough within the galaxy, the same effect ensues. Between galaxies, the centre of mass approach is a good approximation but it is less accurate for less uniform mass distributions and for distances closer to the centre. If it is proposed that the arms are real, local effects have to be considered if they are to be stable.
They would collapse into themselves along their axes unless the component stars were orbiting around each axis. At [3] a reciprocal simple harmonic motion is proposed, but isn't this a kind of flattened ellipse? The website goes on to explain gravitational attraction to the arms as well as to the centre. I am not trying to rubbish the general idea of new physics, merely to point out that a solution using the old physics is plausible in this case. Brian O'Donnell (talk) 20:26, 16 August 2008 (UTC)

Galactic Jets[edit]

How much of the anomalous rotation curve can be ascribed to gravitational attraction between stars and the long relativistic jets that the central supermassive black hole has been emitting for billions of years? It seems likely that some of the flattening of the rotation curve should be due to the long axial jets, which should extend far beyond the halo and beyond the radius ascribed to a spherical distribution of cold dark matter. I don't know how much mass is ejected into jets for either an active galactic nucleus or an old galaxy, but their mass should affect stars at large enough distances from the galactic center. Perhaps the radius of the galactic bulge is a clue to the mass of the jets? WalterU (talk) 11:17, 25 June 2009 (UTC)

Work of Cooperstock and Tieu[edit]

This should be added to the alternatives theory. Cooperstock is a decades-long researcher in GR with a long and fine publication record, and his contributions to this subject, insofar as they are a direct attack on this problem, should be pointed out. Antimatter33 (talk) 14:23, 5 September 2009 (UTC)

Radial Velocity Profiles in Globular Clusters[edit]

Analysis of globular clusters shows similar anomalous behavior in the radial velocity distribution of cluster members. Since dark matter cannot account for these, while the work of Cooperstock, in principle, can (effect of non-linearity in GR, cluster members not test particles), this should also be pointed out in the "alternatives" section.
Antimatter33 (talk) 14:23, 5 September 2009 (UTC)

If that is true the article is wrong! From the article: "It [dark matter] has been uniquely successful in ... explaining the dynamics of groups and clusters of galaxies". Aarghdvaark (talk) 06:19, 31 January 2012 (UTC)

The relevance of gravitational lensing needs to be explained[edit]

Someone significantly smarter than myself needs to explain how gravitational lens measurements can distinguish between a) alternative gravity theories, and b) dark matter. The statement is made that it does, but it feels like the article is truncated. Though the fact doesn't surprise me, I can't see why gravitational lens predictions would differ between the two kinds of rotation-anomaly explanations. --TechnoFaye Kane 12:00, 13 December 2009 (UTC)

Newtonian gravitation is wrong[edit]

IMHO, the graph is embarrassing; Newtonian gravitation is known to be wrong, so why base ANYTHING on it? The argument that relativistic effects don't matter is IMHO BS, because there is a supermassive black hole in the center of the galaxy, and the orbital speeds of stars observed in the outer reaches of other galaxies show that there is a fixed relation between the mass of the galactic supermassive hole and the orbital speed of stars. That is, THE HOLE CONTROLS THE ENTIRE CURVE. So this is NOT evidence for dark matter, it is epicycles. —Preceding unsigned comment added by (talkcontribs)

Well, there are a couple of things you can do here. If you have a reliable source that backs up your assertion that the galaxy rotation curve can be explained by relativistic corrections and "epicycles" then you can include it in the article. If, on the other hand, you believe you have a unique explanation for the galaxy rotation curve that has not yet been discovered by mainstream science then write a paper, get it published in a peer-reviewed journal and then include it in the article.
Gandalf61 (talk) 10:04, 25 March 2010 (UTC)

Rubin's inclusion in the main article text[edit]

Vera Rubin is present as a "See also" link and the first two entries in the bibliography. Should she be moved into the text itself? Her own article seems to suggest she most certainly should be. —Preceding unsigned comment added by (talk) 10:55, 14 November 2010 (UTC)

done Aarghdvaark (talk) 01:53, 1 February 2012 (UTC)

Just wondering if anyone had thought to take into account that space could have mass and repulsive forces.[edit]

all this talk of dark matter and trying to explain abnormalities in the way the universe rotates and the way light is seen is retarded. Stop trying to find out what is wrong with the equations u already have and make up some new ones. Creating a fictional material to explain that u fucked up is childish. (dark matter my ass). Signed, Keaton McMahon, 92234

Velocity: Angular or Linear?[edit]

Just one question: The diagram in the article does not have units for velocity, so it is unclear to me if this is angular (revolutions per time) or linear (distance per time) speed. And the text offers no clues for me… A physics buff probably takes one look and knows, but I have no idea. (talk) 16:27, 30 December 2011 (UTC)

The Y axis of the graph shows linear velocity, measured in km/s; in fact, it's true that in this case it should be called Speed instead of Velocity, as velocity is a vector, and therefore a sign (positive or negative) should be used in order to express the whole measurement: notice that we are working with absolute values of velocity, that is, the magnitude component of the vector, as we just want to see how it changes depending on the absolute value of the distance (which initially was a vector -displacement-, but now we just use its magnitude component, so it's scalar -distance-).
If we plotted an angular speed vs distance graph, we would see that the line would remain constant in the part that previously was a 'growing' line (where linear speed increased at a constant ratio), and it would tend to decrease in the part in which linear speed remains constant (horizontal line in the original graph). Hope it's helpful. Pichiniqui (talk) 00:55, 8 January 2012 (UTC)

hopefully clarified with edit "represented by a graph that plots the orbital speed (in km/s) of the stars". Aarghdvaark (talk) 02:02, 1 February 2012 (UTC)

"needs attention from an expert on the subject" heading[edit]

I'm removing this tag. I went over the article a few months ago and there have been only minor changes since then. So, although I'm not an expert, it seems no expert is coming along to the rescue and the article is good enough not to cause problems. Also there was no section added here in talk explaining why the tag was needed in the first place. If you put the "needs attention from an expert on the subject" tag back, please also discuss here why you think it needs attention. Aarghdvaark (talk) 03:04, 16 April 2012 (UTC)

Further investigation[edit]

The original wording in this section and my first change are given here [4]. The original is clearly wrong since there has been no observation of the distribution of dark matter. A small edit war then ensued about whether the wording should be "the simulations assume" (me) or "the simulations show" (User:Junjunone) [5], with finally an article cited by Junjunone, presumably in support of his view. I have skimmed the article and I don't understand why Junjunone cited it. The last sentences in the conclusion are "The hypothesis that the inclusion of baryons would resolve the discrepancy between theory and observation have been shown to be wrong. Worse, the disagreement actually grows larger if one utilises strong feedback physics schemes that can reproduce the observed stellar fractions in these systems".
There is more of course, but I think I'm justified on the basis of this cited paper in stating that "State-of-the-art cosmological and galaxy formation simulations of dark matter with normal baryonic matter included do not resolve the discrepancy between theory and observation". Aarghdvaark (talk) 14:42, 14 September 2012 (UTC)

It appears that Aarghdvaark is not capable of understanding the papers being cited. A standard treatment in simulations is to add baryons to dark matter. In these scenarios, the baryons follow the dark matter because gravity works that way. The cited paper shows that this happens and then asks the question whether feedback from baryonic processes can affect the dark matter, and they get a negative as the answer. The simple fact is that this is just an example of one of the hundreds of papers written that show that baryons follow the dark matter. There are no assumptions, and it is a bit strange that one would change the wording of the sentence to refer to a completely separate issue. Junjunone (talk) 16:00, 15 September 2012 (UTC)

Don't be so patronizing. It is obvious that normal matter will follow dark matter since the assumption is that dark matter is subject to gravity and that there is approximately five times as much exotic dark matter as normal matter. Have you really just used the paper by Duffy et al to try and justify the seemingly simple and innocuous-looking sentence that "state-of-the-art ... simulations ... show that baryonic matter traces dark matter structures" [6]? But this sentence is wrong:

• the presumed distribution of dark matter is in a Dark matter halo around a galaxy, and obviously this is not how baryonic matter is distributed - so baryonic matter clearly does not trace dark matter structures.

• the sentence is too emphatic as it could easily be read as implying dark matter structures had been observed, when no one has seen any dark matter structure outside of a simulation.
• the point under discussion is not that baryonic matter "traces" or follows dark matter, but why the baryonic matter tail wags the dark matter dog - why is the amount of dark matter the right amount to make the galaxy rotation curve appear to depend on the baryonic matter, as per the well-established Tully-Fisher relation?

I took it that rather than trying to support the facile truism that baryonic matter and dark matter are both affected by gravity, you had cited Duffy et al's paper to support a statement something like "state-of-the-art ... simulations ... show that the amount of dark matter is dependent on the amount of baryonic matter" or the other way round - either of which would go some way to explaining the Tully-Fisher relation. The paper is well written and is a summary of a large number of simulations. They investigate the back-reaction of baryons on the dark matter halo density profile - i.e. is the amount of dark matter dependent on the amount of baryonic matter, or can the tail wag the dog. Their conclusions were:

• "in the inner ten per cent of the virial radius the models are only successful if we allow their parameters to vary with baryonic physics, halo mass and redshift, thereby removing all predictive power" (my emphasis). What they mean is the models only show the correct result if the input parameters are adjusted to give the correct result. Or to put it another way, there is no theory to show any direct relation between the amount of dark matter and the amount of baryonic matter up to 10% of the virial radius.

• "On larger scales the profiles from dark matter only profiles consistently provide better fits", and "The most significant effects occur in galaxies at high redshift, where there is a strong anti-correlation between the baryon fraction in the halo centre and the inner slope of both the total and the dark matter density profiles" (my emphasis). i.e.
there was a hypothesis which sought to explain how the amount of dark matter could depend on the amount of baryonic matter - only that hypothesis has been shown to be utterly wrong because it actually made things worse. As I said the paper is well written and the conclusions are clear - state of the art simulations demonstrate there is no dark matter theory which can currently explain the Tully-Fisher relation. Aarghdvaark (talk) 04:41, 17 September 2012 (UTC) There is something seriously wrong with your reading comprehension. I will ask to find out what the procedure is for disciplining someone who either misrepresents sources or cannot understand very simple points of clarification. Junjunone (talk) 17:22, 17 September 2012 (UTC) Junjunone, Please avoid personal attacks against other editors. Comment on content, not personality. Thanks. Ebikeguy (talk) 01:30, 18 September 2012 (UTC) Junjunone has taken this further. This is the timeline of his actions on 17 Sep 2012: Those are the pages I'm aware of. Suggest to keep everything in one place that further discussion on content only be at Wikipedia talk:WikiProject Astronomy#Need a dark matter.2Fbaryonic matter simulations expert? Aarghdvaark (talk) 03:35, 18 September 2012 (UTC) That doesn't make sense, why would we move the discussion about the article off this page? The ANI thread and Jimbo thread have nothing directly to do with this discussion. IRWolfie- (talk) 00:01, 19 September 2012 (UTC) OK keep it here. Aarghdvaark (talk) 22:42, 19 September 2012 (UTC) You said "There has been no observation of the distribution of dark matter": [7]: "Direct measurement of a dark-matter ‘filament’ confirms its existence in a galaxy supercluster." It sounds like the basis of much of your reasoning above, and for the removal is original research. IRWolfie- (talk) 23:16, 18 September 2012 (UTC) I'm not sure I see the relevance of your reference here. 
The article is about galactic rotation curves, and hence is concerned with the dark matter distribution within galaxies. I read Aarghdvaark's statement as saying that there has been no observation of the dark matter distribution on these scales (i.e. on scales of tens of kiloparsecs). The Nature news article you link to is concerned with an observation of dark matter on the scale of a supercluster of galaxies, and tells us something about how dark matter is distributed between galaxy clusters, on a scale of greater than a megaparsec, but not about how it's distributed on smaller scales. Scog (talk) 13:19, 19 September 2012 (UTC) Hi IRWolfie. Did you actually read this reference you cite above or just go on the headline: "Direct measurement of a dark-matter ‘filament’ confirms its existence in a galaxy supercluster"[8]? Because it says there "Mark Bautz ... notes that astrophysicists do not know precisely how visible matter follows the paths laid out by dark matter", which is exactly the point I was making. Incidentally, I would also disagree with the claim of direct measurement of dark matter, because what they actually did was "the team calculated that no more than 9% of the filament's mass could be made up of hot gas. The team's computer simulations suggest that roughly another 10% of the mass could be due to visible stars and galaxies. The bulk, therefore, must be dark matter, says Dietrich". That's not direct observation, that's using simulations to predict what is happening in some observations. Sloppy writing in Nature - sigh Aarghdvaark (talk) 00:27, 20 September 2012 (UTC) After reading the rather confused commentary here, I am of the opinion that the problem is one of WP:DUE. In particular, Stacy McGaugh's reasonable pointing out that MOND works well for galaxy rotation curves should be discussed in this article, but we currently devote a bit too much to it in relation to what the literature shows. 
The Tully-Fisher relation has no scatter and MOND reproduces some interesting bumps and wiggles that are not as easily accounted for in the halo fitting picture, but that's, for better or worse, only a small part of the galaxy rotation curve story. We have little discussion here on how these plots are actually made, how the data is gathered, and what the standard way of fitting dark matter profiles is. I think that if this was added to the article with some paring back of the WP:BALL issues associated with McGaugh's work, we could resolve much of this. I am willing to begin writing this revision soon. Junjunone (talk) 03:13, 24 September 2012 (UTC) Junjunone. The statement you have put back in:[9] "though state-of-the-art cosmological and galaxy formation simulations of dark matter with normal baryonic matter included show that baryonic matter traces dark matter structures" is at the very least misleading, e.g. see above for the cite from IRWolfie- that "astrophysicists do not know precisely how visible matter follows the paths laid out by dark matter". As I said before, since ordinary matter and dark matter are both affected by gravity they will tend to clump in the same place - that's a no-brainer. So either your sentence is trivial if you meant that, or wrong if you meant that dark matter determines precisely where ordinary matter ends up. Instead of asking for me to be banned for not understanding you, try and argue your case and explain what you meant to say. Aarghdvaark (talk) 03:56, 29 September 2012 (UTC) There is something you're missing, but I'm not sure what it is. Either your opinion of what is trivial is not trivial to me, or your view of what "precisely" means is beyond what any source we have or what any statement being made is trying to say. In any case, I do not see this as being meaningful once this page is rewritten since the discussion of McGaugh's points will be diminished to the level that they are considered in reference to this subject. 
Junjunone (talk) 01:52, 30 September 2012 (UTC)

When challenged about what you have written, you are supposed to explain in more detail what you mean by what you write, not just say it is the reader's fault if they cannot understand you. You have actually done nothing apart from repeat your original sentence all through this and avoided any attempt to explain yourself; see above for a typical example - that is not constructive argument. Aarghdvaark (talk) 14:08, 1 October 2012 (UTC)

I don't understand what your challenge means and I welcome your clarification. I believe it is based primarily on something you are not understanding properly, but I am not sure exactly where your misconception lies from what you are writing, as there are two distinct possibilities I outlined. I agree that this argument is not constructive, which is why I think you should not continue here as your comments are mostly distractions from the job of retooling the article. Junjunone (talk) 14:30, 1 October 2012 (UTC)

A good start would be to write what you are trying to say in a different way, here, on the talk page. Aarghdvaark (talk) 22:33, 1 October 2012 (UTC)

I think that's what I'm doing. Junjunone (talk) 11:59, 2 October 2012 (UTC)

It would be a good idea to have actual data and modeling for this article like this image. We need to find one that is free for use. I could create new images, but it would be good if I didn't have to spend my time on that. Junjunone (talk) 14:30, 1 October 2012 (UTC)

Regarding the below image used at the top of the article: The embedded comment in the wikitext at the top of the article asks where the image came from. Visually, it appears that the creator of the .svg may have based it on the image here: As for additional images, you can find some at the Harvard citation under the image, or by a search for "spiral galaxy rotation curves" on I'll be referencing a couple of those papers in an upcoming article.
Cheers, Al'Beroya (talk) 09:46, 11 July 2014 (UTC)

Citation needed[edit]

The article that is missing here may be this one: Extended rotation curves of high-luminosity spiral galaxies. IV - Systematic dynamical properties, SA through SC, by Rubin, V. C.; Thonnard, N.; Ford, W. K., Jr. (talk) 10:13, 26 February 2014 (UTC)

Universal rotation curve merge[edit]

The concept of Salucci's group that there exists one universal rotation curve is not acknowledged by any group except their own, really. There are a few MONDians who cite their papers, but other than that there doesn't seem to be outside notice of the ideas. Therefore usable content should be merged here. jps (talk) 18:13, 29 October 2014 (UTC)

• this is not true, it is a lie. The Universal Rotation Curve is the existence of a function of the galaxy radius (in units of the disk length scale) and of galaxy magnitude which reproduces the rotation curve of any object (of known radius and magnitude). This concept was first put at page 450 of Rubin et al. 1982ApJ...261..439 [1], is implicit in Rubin et al. (1985) [2], was pioneered in [3], and then set in [4] and [5]. Many others have contributed. These papers have received more than 3000 citations. Jps is waging a solitary battle against me, protected by the immunity of anonymity. Paolosalucci (talk) 21:16, 29 October 2014 (UTC)

• You're the one with a WP:COI, so it's especially important to treat others with respect, as it is up to them to determine how to address this, not you. The sources and context you've provided are extremely helpful. Nice to see something published in the past decade. --Ronz (talk) 23:28, 29 October 2014 (UTC)

• The above mentions page 450 of [1] as supporting the URC model of Salucci et al. Having just read that page, I don't see it. Would you please clarify?
Leegrc (talk) 12:08, 30 October 2014 (UTC) • Oppose The proposing editor started an AfD that was closed as keep, "seems that there are sufficient sources to meet GNG" less than two weeks ago. This merge proposal does not seem productive. Paradoctor (talk) 10:35, 30 October 2014 (UTC) Please WP:FOC. We're now discussing the concerns in more detail than in the AfD. Given the sources offered so far, a merge may be the best way forward. --Ronz (talk) 20:05, 30 October 2014 (UTC) I don't see anyone proposing anything but me. I've merged the content that is most clearly citable to external reviewers. Can we just proceed with the merge? jps (talk) 13:11, 2 November 2014 (UTC) Go ahead with the merge, as Paradoctor has withdrawn from the topic. --Ronz (talk) 16:31, 2 November 2014 (UTC) 1. ^ a b Rubin, Vera C.; Ford, W. Kent, Jr.; Thonnard, Norbert; Burstein, David (1982). "Rotational Properties of 23 Sb Galaxies". Astrophysical Journal 261: 439–456. doi:10.1086/160355.  2. ^ Burstein, D.; Rubin, V. C. (1985). "The distribution of mass in spiral galaxies". Astrophysical Journal 297: 423–435. Bibcode:1985ApJ...297..423B. doi:10.1086/163541.  3. ^ Persic, M.; Salucci, P. (1991). "The universal galaxy rotation curve". Astrophysical Journal 368: 60–65. Bibcode:1991ApJ...368...60P. doi:10.1086/169670.  4. ^ Persic, M.; Salucci, P.; Stel, F. (1996). "The universal rotation curve of spiral galaxies - I. The dark matter connection". Monthly Notices of the Royal Astronomical Society 281: 27. arXiv:astro-ph/9506004. Bibcode:1996MNRAS.281...27P.  5. ^ Catinella, Barbara; Giovanelli, Riccardo; Haynes, Martha P. (2006). "Template Rotation Curves for Disk Galaxies". The Astrophysical Journal 640 (2): 751–761. arXiv:astro-ph/0512051. Bibcode:2006ApJ...640..751C. doi:10.1086/500171.  Inertia and Unruh radiation paper[edit] Hi. I'd like to query your deletion of my paper. 
It was published in a good peer-reviewed journal, so it fulfils Wikipedia's stated reliability criterion, and to delete it without reason makes a mockery of the correct peer review process. Since it predicts galaxy rotation as well as dark matter and MoND do, but does it without any tuning, it should be mentioned on this page. (talk) 16:05, 6 March 2015 (UTC) — Preceding unsigned comment added by (talk) 15:59, 6 March 2015 (UTC)

A few reasons. First, we certainly don't include every result from the peer-reviewed literature. So while the source certainly meets the reliability criteria, that doesn't mean we have to include it. Second, per WP:FRINGE, we don't generally include results that haven't gained some standing in the literature. That paper, from 2012, has two citations, both of which are by the author (so no third-party citations). Citation counts are in no way a definitive measure of the importance of the result, but that supports my view that it's not considered a "significant-minority" idea, which is the kind of result WP:FRINGE says we should include. Third, Wikipedia's conflict of interest guideline makes it somewhat questionable to add your own result. (It's certainly ok per the reliability guidelines, as you say, but still a bit questionable to add your own paper -- I certainly avoid adding my own results. But thank you for clarifying that it is your work that you're adding.) Lastly, I think the wording as written was way too technical for an encyclopedia audience. I have no idea what the sentence means (and I'm an astronomer, so in principle much better-versed in this than the average reader). "Unruh radiation" is undefined, the meaning of "tuning parameter" is unclear, etc. My last objection could certainly be addressed, and the third point could be addressed if someone else added it, but the first two require, at the least, time and wider acceptance evidenced in reliable sources.
—Alex (ASHill | talk | contribs) 16:31, 6 March 2015 (UTC) Thanks Alex for your well-reasoned reply. I tend to judge solely by agreement with the data & simplicity & not by community response, but I understand wp has a different slant on it. I do think my paper should be mentioned here, so hopefully time'll bring a citation. Best wishes (talk) 17:33, 6 March 2015 (UTC) When I'm writing a paper, I certainly try to judge by agreement with the data and simplicity, but Wikipedia is in no place to judge that. Since we obviously shouldn't include every peer-reviewed result, community response in the literature is the best proxy we have. —Alex (ASHill | talk | contribs) 18:06, 6 March 2015 (UTC)
How To Avoid Investing In The Wrong Stuff

BOSTON (CBS) – Most investors don't even know they are in the wrong type of investment until it's too late. Asset allocation is simply what your money is invested in: stocks, bonds, cash, real estate. Your time horizon is when you will need the money for the goal.

For example, the goal is a comfortable retirement. So you take advantage of your 401(k) plan at work and start to squirrel away money. You're getting the company match of 3%, so you are thinking you are doing well. Your time horizon is 25 years to retirement, so you invest the portfolio 100% in stocks.

Let's fast forward here. It is now 2013 and you are approaching that magic Medicare age. You are thinking retirement in a couple of years. You look at your 401(k) statement and you are still 100% invested in stocks. As you get closer to reaching your goals, your time horizon changes, and that in turn should trigger a change in your asset allocation. So if your retirement is in sight, that 100% stock portfolio is much too risky for you. You need to begin selling off some stocks 5 to 10 years before your retirement and investing those monies in something safer: CDs, money markets, short-term bonds or bond funds. How much of an asset allocation change? Depends. Could be as much as 50%.

A wedding for your daughter is a short-term goal. The money should be safe and not in the stock market: CDs, money market, Treasuries.

A college education for the little one who is now 2. You know in 16 years you will need the dollars to make the first tuition payment. If you are in a 529 plan, oftentimes the asset allocation is done for you, moving away from stocks and going into cash and bonds as the child gets older.

More from Dee Lee
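The de-risking advice above can be sketched as a simple glide-path rule. The specific numbers below (fully in stocks until 10 years out, then shifting linearly down to a 50% floor) are illustrative assumptions loosely based on the article's "5 to 10 years" and "as much as 50%" figures, not financial advice:

```python
def stock_allocation(years_to_goal, start_pct=100, floor_pct=50, derisk_window=10):
    """Illustrative glide path: hold start_pct in stocks until the goal is
    within derisk_window years, then shift linearly down to floor_pct.
    All specific numbers here are assumptions for illustration only."""
    if years_to_goal >= derisk_window:
        return start_pct
    if years_to_goal <= 0:
        return floor_pct
    # Linear interpolation inside the de-risking window.
    return floor_pct + (start_pct - floor_pct) * years_to_goal / derisk_window

# 25 years out: still fully in stocks; 5 years out: halfway de-risked.
print(stock_allocation(25))  # 100
print(stock_allocation(5))   # 75.0
print(stock_allocation(0))   # 50
```

Real 529 plans and target-date funds use more elaborate schedules, but the shape is the same: allocation is a function of time horizon, not a set-and-forget constant.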
MyPlate: A New Alternative to the Food Pyramid

Health risks of not enough sleep: Why Z's Matter!
Did you know that not getting enough sleep can cause health problems beyond just feeling tired and worn out? Recent studies have found that lack of adequate sleep is related to weight gain, sexual problems, reduced concentration, mental health problems, and even Alzheimer's disease. Continue reading

Statins lower cholesterol but will they reduce your risk of heart attacks or strokes?
The FDA issued new safety warnings for statins in February 2012 about the increased risk for diabetes, memory loss and muscle pain, symptoms that we have been warning patients about for some time. Continue reading

Obesity in America: are you part of the problem?
Despite our country's obsession with weight and appearance, most people who are medically overweight don't realize it. What we're talking about isn't "love-handles" or a body that doesn't match the supermodels we see in magazines. Instead, we're talking about a weight that affects your health, well-being, and longevity. Our collective weight problem is so bad that only cigarette smoking causes more preventable deaths in America than obesity does. Continue reading

Are Bisphenol A (BPA) plastic products safe?
Bisphenol A (BPA) is a chemical used to make plastics, and is frequently used in baby bottles, sports equipment, water bottles, medical devices, and as a coating in food and beverage cans. Continue reading

Are processed meats more dangerous than other red meats?
Yes! You have probably heard it many times already, whether from your doctor, a health magazine, or a health promotion poster: don't eat too much red meat. Red meat has been linked to health problems such as coronary heart disease and diabetes. But the latest research tells a somewhat different story. Red meat (beef, pork, and lamb) may not deserve its bad rap for those diseases.
It's possibly processed red meats, like bacon, hot dogs, and salami, that are the bigger problem. Continue reading

Is when you eat just as important as what you eat?
Wouldn't it be great if there was a simple way to lose weight? Instead of counting calories or cutting carbs, what if you could just avoid eating during certain times? A study in 2012 showed that mice that were restricted to eating only at regular times throughout an eight-hour period weighed 28% less than mice that consumed the same number of calories but ate frequently throughout the entire day. Continue reading

Do chemicals in our environment cause weight gain?

6 things you need to know about juicing your veggies

Polycystic ovary syndrome (PCOS): what is it and what are the signs?
Monthly changes in hormones affect nearly all women. Some of the symptoms are more bothersome or noticeable than others, and sometimes they signal health problems. Studies show that 4% to 18% of women of reproductive age have a condition called polycystic ovary syndrome (PCOS). It can be difficult to diagnose because it is similar to so many other conditions. What is PCOS, and what are the signs? Continue reading

Laser liposuction: weight loss tool or scam?
As American waistlines have expanded, the attraction of a quick weight-loss fix has increased. Diet and exercise are the key to safe weight loss, but for many of us, the results are discouraging. As a result, liposuction is the third most commonly performed cosmetic procedure in the United States, after breast augmentation and nose reshaping. However, the procedure can result in severe though rare complications, including infection, cardiac arrest, blood clots, excessive fluid loss, fluid accumulation, damage to the skin or nerves, seizures, bruising, swelling, and damage to vital organs. Plastic surgeons often present laser liposuction as a safer, effective alternative, which works by inserting a laser beneath the skin and liquefying fat.
But does it work and is it really safe? Continue reading

How do I get my child to eat healthier foods?
Eating habits that improve health and lower body mass index
The asteroid that hit Earth 65 million years ago and wiped out the dinosaurs was at least six miles across and left behind a crater over 110 miles across. But that's nothing compared to a possible newly discovered impact site. It's not confirmed yet, but a potential crater site in Greenland would be the oldest, biggest impact ever observed on Earth. The original asteroid would have been about 18 miles across - that's about half the length of Rhode Island, which is pretty damn huge by asteroid standards - and the crater it initially created would have been nearly 375 miles wide and over 15 miles deep. While we've seen evidence of impacts on that scale elsewhere in the solar system, we've never seen anything like this on Earth. So why are we only discovering something this big now? Part of it is its immense age - when this asteroid hit, Earth was only a third of its current age. Most of the crater has since been worn away by three billion years of erosion, meaning only the rocks from the very bottom of the original impact site are still there. Besides, part of the reason we know so much about the Chicxulub crater is because it killed the dinosaurs - it left behind plenty of clues in the fossil and geological record that there was something there worth searching for, whereas this massive asteroid smashed into a relatively empty Earth. The fact that this site is in Greenland instead of Mexico also probably doesn't help. Still, we can't say for certain that this is an impact site - it's hard to be definitive with something so ridiculously old. But after three years of careful review, researchers from the Geological Survey of Denmark and Greenland say they're pretty much ready to confirm the find. New Scientist has more: You can check out the original article for more of the evidence used to confirm the find. Top image from outside the town of Maniitsoq on Greenland's western coast, near the site of the find. Image by ilovegreenland on Flickr.
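As a rough sanity check on the two impacts described above: for large impacts, the final crater diameter is often quoted as roughly 15-20 times the impactor diameter. Treating that rule of thumb as an assumption (it is not stated in the article), the quoted sizes for Chicxulub and the Greenland candidate scale consistently with each other:

```python
# Back-of-envelope consistency check of the article's numbers.
# Rule-of-thumb assumption: final crater diameter ~15-20x impactor diameter.
chicxulub_ratio = 110 / 6    # 110-mile crater from a 6-mile asteroid
greenland_ratio = 375 / 18   # 375-mile crater from an 18-mile asteroid

print(round(chicxulub_ratio, 1))  # 18.3
print(round(greenland_ratio, 1))  # 20.8
```

Both ratios land near 20:1, so the two sets of figures at least agree with each other on scaling.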
Alcohol Impairment Out at Sea

In the U.S. Coast Guard's 2014 recreational boating statistics report, the five types of recreational vessels with the highest number of casualties are open motorboat, personal watercraft, cabin motorboat, canoe/kayak, and pontoon. The top contributing factors to fatal boating accidents, on the other hand, are failure to wear a life jacket (also known as a personal flotation device, or PFD) and consumption of any type of impairing substance, most commonly alcohol. Besides these two, the USCG has also identified the following contributing factors to accidents: operator inattention; improper lookout; operator inexperience; excessive speed; machinery failure; navigation rules violation; hazardous waters; weather; and force of wave/wake.

Though boating may seem to be the least risky type of recreational activity, the fact is, it is as dangerous as driving a car or, worse, riding a motorcycle. In 2014 and 2015, the numbers of accidents were 4,064 and 4,158, respectively. The number of fatal accidents in each of those years was more than 600, while accidents that resulted in injuries exceeded 2,600 in both years.

According to the USCG, alcohol impairment out at sea is just as dangerous as getting drunk on land and then driving a vehicle, if not more so. Alcohol, when consumed at sea, has even faster impairing effects than when it is consumed on land. This is due to the overall marine environment, which includes exposure to the sun's heat, the wind, sea water mist or spray, engine noise, and the vessel's vibration and motion. Boat operators caught operating a boat while impaired will be charged with boating under the influence (BUI). The federal BUI law enforced by the United States Coast Guard applies to all types of boats, including large ships, rowboats and canoes; it also covers foreign vessels sailing through US territories and US ships on the high seas.
As boating season begins in the U.S., it is important that boat owners and those who rent recreational vessels make sure that they are familiar with the "Rules of the Road" and the many other things concerning boating activities. To learn more about boating accidents, as well as about your legal rights and options in case of an accident, you can search the web or visit the websites of boating and watercraft accident lawyers, such as Clawson Staubes, LLC: Injury Group.

Why Memory Care Can Become Necessary

Where do memories go when they're forgotten? Forgetting is simply human; most people can barely remember what they had for breakfast that morning. But some conditions make the retention of memory much more difficult. Some people, due to an accident of one kind or another, acquire amnesia and find themselves unable to recall months or years of their lives. The onset is instant and, though sometimes irreversible, there are times when patients can recover from the trauma. Diseases that eat away at the mind, however, are different in more ways than one. For one thing, they are common in older people, whose physical state is deteriorating. According to the website, some common diseases that involve memory loss are dementia and Alzheimer's. This is more than just losing stories of the past; it could involve forgetting necessary information like one's name, where one is, or what is happening. People in this kind of situation then need round-the-clock assistance, or memory care. This is so that, in these trying last years of their life, they can be comforted and cared for by people who know how to deal with patients under the pressure of this condition: from forgetting to eat or shower, to developing clinical depression and anxiety due to the change in the mind's chemistry and the social burden of being ill at such a difficult old age.
If you or someone you know has an elderly loved one who may need memory care, it is recommended to seek the advice of experienced professionals in order to know the options at hand and what memory care can entail for the particular situation. Every person is unique and, thus, the individual care must be just as unique to fit the person who needs the assistance.

Personal Injury from a Governmental Agency

Anyone who has been a victim of negligence or reckless action (or inaction) can file a lawsuit against a government agency. However, suing a local government agency for a personal injury often involves a strict set of rules and may limit both the time to file and the amount that can be recovered. Following these rigid steps and meeting the deadlines are vital to establishing a strong injury lawsuit and securing fair compensation.

One of the things that should be considered when suing a government agency is the narrow time limit it gives to bring forth an injury claim. Typical personal injury claims have a statute of limitations of around one to six years after the accident, while the time frame for claims against government agencies typically ranges between 30 and 120 days after the injury. States can have government-specific time restrictions, so awareness of the applicable statute of limitations on injury claims is essential for the lawsuit.

Another thing to regard is the "Notice of Claim" that some states may require; otherwise the lawsuit may be rejected by the court. The Notice of Claim is typically addressed to every person or entity involved in the accident who caused the injuries. It is not filed in court but rather sent through certified mail to the government employee or agency, as well as to the government office receiving all forms of Notices of Claim. It should contain state-specific information such as the name, address, date, location, and insurance provider of the injured claimant.
Furthermore, there is a waiting period of 30 to 120 days after sending the Notice of Claim before filing a lawsuit; filing before this period expires will render the lawsuit dismissible. There are certain injury claims that the government is immune from, and though this immunity is not as extensive as it was in the past, it can still bar some personal injury claims, with the specifics varying from state to state. Additionally, states do not award punitive damages, on the rationale that punitive damages provide no deterrent effect against governmental misconduct. Visit the website for more information.

How A Criminal Record Can Affect A Divorce Case

Every divorce case is complicated, though some are more so than others. Some couples arrive at the decision mutually and remain amicable with one another. Others, however, face far more complex situations. There is something universal about a divorce case, though: the marriage just isn't working for one reason or another, and at least one person in that union wants their best chance in life with someone else or just by themselves.

There are many variables to consider regarding a divorce, especially if the couple in question owns a significant estate or has custody of a child or several children. According to the website of Holmes, Diggs & Eames, PLLC, family law prioritizes the needs of the child before anything else. However, a criminal record can significantly influence the case in favor of one person or another, even when it does not otherwise bear on the case at all. For example, say the person in question is a loving father but has a criminal record of theft or even substance abuse. Even if this hypothetical father were to do everything in his power to rehabilitate himself, there is a chance for the favor to go to the spouse in question. The criminal record will remain a stain on whoever holds it, due to the negative connotation it presents of the person's character.
A Collin County criminal defense lawyer will probably tell you that this will always be the case, and when filing for a divorce these variables need to be taken into consideration so that they can be presented properly and within an appropriate context.

Your Reconstruction, Your Recovery, Your Right

You are not what happened to you; you are all the choices that you make regarding the things that happen to you. The path to your recovery after something terrible, for example, is one that is dictated by you. If you have suffered a horrible accident that has left you disfigured or scarred, it can be quite a traumatizing experience for anyone. Bearing such scars of an accident and carrying that burden can lead to depression, anxiety, and many other problems, as well as social ostracism due to a changed physical appearance following an accident.

According to the website of the people at Bergman Folkers Plastic Surgery, the face is the most easily recognizable part of the body, and so any changes to it are easily noticed. When scarring occurs or the face's skin suffers large burns, a large part of it can never truly, naturally heal on its own. This can have devastating effects on one's self-esteem; not only that, but it can also damage the career of a person who needs to look a certain way to do their profession (e.g. models, hosts, flight attendants, et cetera). This is why facial reconstruction surgery is such an important choice available to victims who have suffered horrendous accidents at the hands of someone else's negligence. Some accidents that can cause disfiguration of this extent are animal attacks, explosion accidents, or fire accidents. Allowing yourself to look like your best self, like the you who you can and want to recognize in the mirror, is something that you have every right to pursue, if that is the path you wish to take.
If you or someone you know has suffered a terrible disfiguring accident and is considering reconstructive surgery as a means to recovery, it is recommended to get the professional advice of a plastic surgeon first.

What Factors Are Considered for Social Security Disability Benefits?

Benefits are privileges that people are often quick to jump on at a moment's notice. It's practically nature: who wouldn't want the benefits that they are entitled to? However, benefits aren't always that easy to claim, especially if there is a legal protocol to be followed before you may receive them. Sometimes, claims to benefits are denied, and this is especially true for Social Security disability benefits. It has been estimated that at least 70% of Social Security disability benefit claims are disapproved on the first go. This is usually because the paperwork or the factors concerning the benefits were not properly filed or considered. It is unfortunate and can cause hassle when things like this happen, but situations of this nature can be avoided altogether when the proper factors of Social Security disability benefits are properly thought through.

The main factors concerning these benefits are age and the capability to work. For example, according to the website of the Chris Mayo Law Firm, people aged over 50 are generally not expected to retrain themselves with new work skills and are not particularly likely to qualify for reassignment to other employment, especially if their previous job involved specialization. In terms of a person's capability, if they are unable to carry on with their work for at least a year due to the disability, then they may qualify. Though the system itself operates quite objectively, there are many factors to consider regarding Social Security disability benefits before they are granted. Security protocols are in place so that the system is not abused.
However, the advice or help of a Social Security disability benefits attorney may not only allow you to solidify your case but also smooth the process along, so that you can claim your benefits as swiftly and as stress-free as possible.

Is Car Insurance Really Necessary?

A car, in this day and age, is practically a necessity. It can be pricey to purchase, yes, and some people work for years and years before they are able to properly afford the car of their dreams. With all of that in mind, there is a question some people ask when they're finalizing everything they need for their latest investment: is car insurance really necessary?

There are some who hold the belief that car insurance is just a fancy accessory that isn't truly needed as long as you drive safely and carefully. The thing is, not everyone is going to be as careful as you think and believe you're going to be. Accidents have a tendency to happen, and they are accidents precisely because no one plans them. A car without insurance that then gets into an accident may need repairs that cost you more than insurance would have. As much as a car is an investment for you, car insurance is an investment in your investment. In medicine, there is a favored saying that goes along the lines of "an ounce of prevention is better than a pound of cure"; this follows that same basic principle. Taking safety precautions and preventative measures means you won't have to worry too much about the worst-case scenario when or if it happens, because having car insurance could save you a lot of trouble in the long run. According to the website of Insure on the Spot, car insurance can also allow you to retain your license in the event of a driving violation that would otherwise result in the confiscation of your license. A certain type of insurance would then be required to be on hand at all times.
This is because safety while driving is a serious issue and it needs to be given more serious attention. So what do you think? Is car insurance really necessary? You be the judge.

The Difference between Car Accident Laws and Truck Accident Laws

Unfortunately, accidents on the road are more than common enough in this day and age. Several of them occur on a daily basis. Every single time, they cause damage, ranging from the almost inconsequential (just a bit of a scratch on the paint, nothing too major) to devastating rollovers or collisions that can cost not only a lot of money but also lives. The thing about regular motor vehicle accidents is that sometimes they can be relatively tame. That is why truck accidents are in an altogether different class.

Trucks, whether semi-trucks or full-on eighteen-wheelers, weigh significantly more than your ordinary sports utility vehicle (SUV). According to the website of lawyers Habush Habush & Rottier S.C., truck accidents have devastating effects on the victims due to the sheer size of the vehicles involved. Can you imagine something weighing at least 80,000 pounds rolling at a few miles per hour on a Los Angeles freeway during rush hour? The damage and lives lost would be colossal.

It is the sizable difference between trucks and regular motor vehicles that explains why trucks are governed by universal federal laws that every manufacturer and driver must abide by when operating a truck. For example, as is stated on The Ausband & Dumont Law Firm's website, a driver of an eighteen-wheeler truck is only allowed to drive for a certain number of hours without rest. A truck is also only allowed a maximum capacity for how much weight it can carry and transport from state to state. Some states allow trucks to carry over 100,000 pounds without a permit.
Understanding the laws and regulations concerning truck accidents allows you to know your rights to pursue legal action, should you be injured under circumstances of this nature.

What to Do After Being Injured in an Accident?

There are many hazards and risks that have become more prominent in the world we live in today. After all, they call it a technological revolution for a reason, and new things are getting made every single day. However, as we become more and more dependent on things out of our immediate control, the room for accidents grows when those things we rely on happen to cause injury. So what are you to do after you are injured in an accident?

First things first, you need immediate medical attention for any bodily harm that has come to you from the situation. According to the website of the Abel Law Firm, some injuries from accidents can cause temporary to permanent disabilities, which could affect your and your family's lifestyle forever. One of the reasons it is recommended to seek legal action against the guilty party is to gain compensation for the medical expenses deemed necessary for the care and recovery from injuries sustained in the accident.

A Houston personal injury attorney would probably mention that a good next step is to get your story straight and in order, with supporting evidence. Though getting into medical care is the most imperative thing, documentation of the scene of the accident can close any possible loopholes that the guilty party may abuse in order to get out of the case without paying the price of the law. It is not recommended to admit fault, only to present the events of the case in a clear, concise manner.
There are many complications that come with pursuing legal action, and a lot of people are oftentimes intimidated by the scale of it all, or are afraid that they might not be able to afford it, or hesitate for some other reason. Personal injury cases are, as is stated on the website of lawyers Habush Habush & Rottier S.C., a personal affair with variables that are constantly changing.
John Nash Jr
Published: December 5, 2012

John Forbes Nash Jr. was born June 13, 1928 in Bluefield, West Virginia. Mr. Nash is an American mathematician who won the 1994 Nobel Prize for his work on game theory, done mainly in the early 1950s. Game theory is the study of strategic decision making, or, more formally, the mathematical modeling of conflict and cooperation between intelligent and rational decision makers. Game theory is mainly used in economics, political science, and psychology, as well as in logic and biology. Mr. Nash has also contributed numerous publications involving differential geometry and partial differential equations (PDEs). Differential geometry is a mathematical discipline that uses differential and integral calculus, linear algebra, and multilinear algebra to study geometry problems. A partial differential equation is a differential equation that contains unknown multivariable functions and their partial derivatives; such equations are used to formulate problems involving functions of several variables. Mr. Nash used all of these skills and is known for developing the Nash embedding theorem. The Nash embedding theorem states that every Riemannian manifold (a real smooth manifold equipped with an inner product on each tangent space that varies smoothly from point to point) can be isometrically embedded into some Euclidean space (the flat space of Euclidean geometry, as distinguished from the curved spaces of non-Euclidean geometry). An example used on Wikipedia is that bending a piece of paper without stretching or tearing it gives an isometric embedding of the page into Euclidean space, because curves drawn on the page retain the same arc length however the page is bent. John Nash also made significant contributions to parabolic partial differential equations and to singularity theory. While...
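Game theory's central object, the one Nash formalized, is the equilibrium: a strategy profile from which no player gains by deviating alone. The sketch below finds all pure-strategy Nash equilibria of a two-player game by brute force; the payoff table is the standard Prisoner's Dilemma, chosen purely as an illustration (it is not discussed in the essay):

```python
import itertools

def pure_nash_equilibria(payoffs):
    """Return all pure-strategy Nash equilibria of a two-player game.
    payoffs[(i, j)] = (row player's payoff, column player's payoff)."""
    rows = sorted({i for i, _ in payoffs})
    cols = sorted({j for _, j in payoffs})
    equilibria = []
    for i, j in itertools.product(rows, cols):
        # A Nash equilibrium: no unilateral deviation by either player pays off.
        row_ok = all(payoffs[(i, j)][0] >= payoffs[(k, j)][0] for k in rows)
        col_ok = all(payoffs[(i, j)][1] >= payoffs[(i, k)][1] for k in cols)
        if row_ok and col_ok:
            equilibria.append((i, j))
    return equilibria

# Prisoner's Dilemma: strategy 0 = cooperate, 1 = defect.
pd = {(0, 0): (3, 3), (0, 1): (0, 5),
      (1, 0): (5, 0), (1, 1): (1, 1)}
print(pure_nash_equilibria(pd))  # [(1, 1)], i.e. mutual defection
```

Note that mutual defection is the unique equilibrium even though both players would be better off cooperating, which is exactly the tension between individual rationality and collective outcome that makes the equilibrium concept interesting.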
Headspace: Economist Jeff Rubin discusses Peak Oil

Jeff Rubin spent 20 years as chief economist for CIBC World Markets. After resigning in 2009, he went on to become the best-selling author of Why Your World Is About To Get a Whole Lot Smaller, a book about the rising price of oil. Spacing sat down with Jeff to discuss the implications of "peak oil" for cities.

Spacing: What is Peak Oil?

Rubin: For me, it doesn't mean the world is running out of oil in some absolute geological sense. For example, there's over 170 billion barrels of the stuff in the Alberta Tar Sands. The real question is: can we afford to burn it? We've exhausted our supply of easy-access, conventional oil and now we're turning to unconventional sources in shale, tar sands, and deep water. It's unconventional sources of oil and the prices required to facilitate extraction that are problematic for us. Therefore, to me, "Peak Oil" means the cost of extracting oil is gradually becoming greater than what our economies can tolerate. It's going to take $150-200 per barrel oil prices to turn the Oil Sands into a 4-5 million barrel per day producer that would meet our needs. Those are simultaneously the kinds of prices that, when translated into pump prices, take millions of people off the road.

Spacing: What are the implications of Peak Oil for commuter travel?

Rubin: If we want to know what our future roadways will look like, we should look at Europe over the last decade where, as a result of taxes, they have been paying gasoline prices equivalent to peak oil. Even in Britain, which is somewhat similar to Canada, they drive smaller cars and drive less. The problem is that if we apply European rates of car ownership to Canada, it implies that one in every five drivers is about to take the exit lane. My concern is that if one in five Toronto drivers decided to take the TTC tomorrow, I very much doubt we could accommodate them.
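Rubin's link between barrel prices and pump prices can be checked with back-of-envelope arithmetic: a barrel holds about 159 litres, so the crude cost alone sets a floor under the per-litre price. Refining, distribution, and taxes (all deliberately omitted here) come on top:

```python
LITRES_PER_BARREL = 159  # one oil barrel = 42 US gallons, about 159 litres

def crude_cost_per_litre(barrel_price_usd):
    """Crude-oil cost per litre of product, before refining,
    distribution, and taxes (a deliberate simplification)."""
    return barrel_price_usd / LITRES_PER_BARREL

# At Rubin's $150-200 per barrel, crude alone runs roughly $0.94-1.26 per litre,
# which is how triple-digit oil translates into ~$2/litre pump prices.
print(round(crude_cost_per_litre(150), 2))  # 0.94
print(round(crude_cost_per_litre(200), 2))  # 1.26
```

This simple conversion is why Rubin treats extraction cost, rather than geological depletion, as the binding constraint: the pump price millions of commuters face is the barrel price divided by 159 litres, plus everything downstream.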
My belief is that instead of having given $10 billion to General Motors to stay in business while the automobile industry is in massive decline, we should have invested in public transit for cities.

Spacing: What does it mean for urban planning and city-building?

Rubin: We're going to have to accommodate a very significant migration of people from the suburbs into the inner cities over the next 20 years. People may want to live in suburbs now because in Richmond Hill you can buy a 3,000 sq. ft. home on a huge lot for the same price as a semi-detached in Riverdale. In the future, however, the suburbs aren't going to be affordable unless you happen to also work where you live. You're going to spend too much money moving yourself back and forth. The economic foundation of the suburban mentality is predicated on cheap transport fuel. We won't have that anymore. Public transit in the suburbs will also prove ineffective given their lower population densities. Whether we like it or not, we're going back to the cities.

Spacing: How does cycling fit in within the whole dynamic?

Rubin: In Copenhagen, every major street has bike lanes. But why do people in Copenhagen ride bikes? I originally thought it was because they are physically active or environmentally conscious people. I then inquired about how much it costs to buy a car. It turns out that in Denmark you pay a surcharge which, depending on horsepower, can be anywhere from 50% to 150% of a car's sticker price. It's a government-levied tax on car ownership. In Copenhagen, you can pay upwards of three times as much for a car as in Toronto. If you want to see more bikes on Toronto roads, then wait until oil prices make it that much more expensive to own a car.

It's the same with wind power. Denmark reduced its carbon emissions to less than its 1990 levels. We always hear about the 20% power generation from renewables. What we're unaware of is that the remaining 80% comes from coal.
So then how did Denmark reduce emissions while relying on coal for 80% of its power? The answer is the price of power. On average it's 30¢ per kWh, or three times what Ontarians pay. Denmark's reduced emissions have everything to do with people using less power because of prices.

Spacing: Based on your analysis, is the GTA ready to adapt to peak oil?

Rubin: I don't know if it's ready, but it will adapt because that future is staring us in the face. By the end of the year we're looking at $1.50 gasoline. The way Toronto will run with those prices will be very different from today. We're going to start having to make some changes. First we need to give up the foolish perception that we have a natural birthright to consume as much energy as we can. As an economist, I believe people respond in rational ways to prices. Triple-digit oil prices will show us how to respond, and hopefully our politicians get it. The fact is that, in the future, more people will be taking public transit and fewer people will be driving. That change won't be a result of any alleged "War on the Car." The biggest war on the car is coming from $2/litre fuel.

Why Your World Is About To Get A Whole Lot Smaller is now available in paperback in bookstores across Canada.

photo by NWF Blogs

1. I'd like to know how synthetic petroleum products factor in any Peak Oil calculations. Once oil & gas prices justify the cost of creating synthetics, won't we just get caught in a trap of producing expensive synthetic products that consume almost as much energy being made as they create? If synthetic products become viable, would there be a levelling off of energy prices based on those new technologies? Does new technology just stretch the time line for some kind of tipping point, or could it be a game changer?

2. The TTC commissioners and the Ford brothers don't listen to Jeff Rubin. If they did, then we wouldn't be cutting back bus service on 41 bus routes on May 8th.
Also, we should be building the LRT routes to replace the diesel-burning buses, the fuel of which comes from oil as well. BTW, crude oil jumped over a dollar today, to over $110 a barrel. It was $83 when Ford was elected and the war on the car was over.

3. Pingback: Jeff Rubin On How To Get People Onto Bikes | Everything Green

4. If the futures market was limited to actual users, oil would be less than $70 per barrel. Jeff might also want to reconsider his assumptions regarding the costs to extract oil from the tar sands. There are technologies in the pipeline that are going to rattle all his arguments. Jeff might want to google Daniel Dicker or do some snooping of recent patents.

5. Thanks for a super helpful overview of a complex subject.

6. Gotta love the commenter above — Glen — telling Rubin how to do research! I would encourage this Glen character to write a book or get a job as chief economist for a national bank before he mouths off.

7. If futures markets were limited to actual users of oil, then everyone would be looking for $0 oil, but of course no one would be willing to sell at that price. So limiting the futures market to only purchasers is obviously pointless. The "fair" price of oil has to include the risk premium. The chances of bad things happening in the Middle East, disrupting oil supplies, are hardly zero these days. Why should I sell you my barrel of oil for $70 today when I'm thinking that there's a good probability that you will be desperately seeking oil at any price in six months (when Saudi Arabia blows up)? Sure, I'd rather have $70 today than $70 in six months (which is what would happen if everything calmed right down), but I'd more rather have $160 in six months than $70 today.

8. By the way, it seems that Daniel Dicker claims that oil prices can be stabilized by regulating the futures market, in the same way the silver futures market was tamed in 1980 by clamping down on speculators.
Contrast and compare what would likely happen if the world encountered a sudden shortage of silver, versus what would likely happen if the world encountered a sudden shortage of oil.

9. Lisa, you might want to consider the fact that the vast majority of economists missed the US housing bubble. Today, there is no recognition of the one we are in right here at home. The few economists that I correspond with were not the ones who had their heads in the sand. Ed, if the futures market was limited to users and producers, oil would not go down to $0. All it would do is find its natural equilibrium. Stories like this demonstrate the stupidity of today's markets…

10. WKLis… we do need improved transit, especially in the northeast and northwest corners of this city. It's quite the stretch to say that improved transit should only be LRT.

11. Update to my comment of April 7th. While crude oil was $110 US a barrel on the 7th, on the 8th it went up to $113 US a barrel, a 36% increase since October. Luckily (or unluckily, depending on your point of view), the Canadian dollar also went up, cushioning the gasoline price increase a little bit, since crude oil prices are in US dollars.

12. Mass migration into cities from suburbs? Suck it up, urbanites, Brampton's moving in! How about mass conversions of suburbs into cities? Why shouldn't the properties on the corners become corner stores? Why can't the strip malls become apartment complexes with retail on the main floor and offices on the second?

13. The problem with suburbs becoming cities is ZONING. As long as zoning outlaws corner stores, outlaws basement apartments, outlaws mixed-use buildings, outlaws medium-rise buildings, outlaws just about all that cities have, suburbs are and will continue to be prevented from becoming cities.

14. While I have no problem with increasing the cost of owning a car, it is quite ridiculous to claim that the cost of owning a car is the main reason why so many Danes cycle.
Many towns in Germany have high levels of cycling and there is no such tax on cars. The Netherlands has among the highest levels of car ownership in Europe and the highest levels of cycling. People ride bicycles because they enjoy it and it can be a very effective form of transportation for shorter trips. The key is the investment in high-quality separated cycling facilities, not the price of car ownership.

15. WKLis… disagree with you regarding LRTs as the only transit solution… but agree with you that ZONING is the issue. Would go farther and state that addressing our transit/transportation problems requires that ZONING be addressed at a REGIONAL level, rather than letting each municipality set its own policies for the most part. Only the province can do this… and unfortunately, while they talk a good game about greater regional coordination, they don't seem willing to make any moves towards mandating such coordination.

Comments are closed.
True or false: 50% of HTN is classified as secondary (usually resulting from renal disease). False. (90% essential/primary, 10% secondary.)

Read the questioner's mind: HTN predisposes individuals to this disease (the one John Ritter died of). Aortic dissection.

Pathology changes associated with HTN? Hyaline thickening & atherosclerosis.

This awful term refers to a stiffening of the arteries that involves the media, particularly likely to occur at the radial & ulnar arteries. Monckeberg arteriosclerosis.

Atherosclerosis: True or false: atherosclerosis is a disease of small-sized arteries. False; it affects elastic, large & medium muscular arteries.

Atherosclerosis: Earliest sign of atherosclerotic disease? Fatty streak.

Atherosclerosis: most likely location? Abdominal aorta (then coronary artery, popliteal artery, and carotid artery).

Type of angina resulting from coronary artery spasm? Prinzmetal's variant.

This coronary artery branch is most commonly implicated in myocardial infarction. LAD (left anterior descending).

Most common cause of sudden cardiac death? (Lethal) arrhythmia.

Solid tissues like the heart, brain, kidney and spleen have only a single blood supply (not so good collaterals). Therefore infarcts are more likely to be --? Pale.

2 instances where red infarct is likely: (1) reperfusion, (2) loose tissues with good collaterals - like the lungs or intestine.

Evolution of MI: Rank the following vessels from most to least commonly occluded: RCA, LAD, circumflex. LAD > RCA > circumflex.

Evolution of MI: Histologic changes on day 1 of an MI?
Pallor of infarcted area; coagulative necrosis.

Evolution of MI: days 2-4? Dilated vessels (hyperemia); neutrophil invasion; extensive coagulative necrosis.

Evolution of MI: days 5-10? Yellow-brown softening of infarcted region; macrophages present; granulation tissue begins to grow in.

Evolution of MI: after 7 weeks? Infarct is gray-white; scar complete.

Diagnosis of MI: True or false: ECG is not diagnostic during the first 6 hours following an MI. False; it is the gold standard within this time period.

Diagnosis of MI: What is the test of choice within the first 24 hours? CK-MB.

Diagnosis of MI: This enzyme is elevated from 4 hours up to 10 days after an MI and is the most specific protein marker. Cardiac troponin I.

Diagnosis of MI: On ECG, transmural infarction causes ______. ST elevation, Q wave changes.

MI complications: Most common (90% of patients)? Arrhythmias, esp. 2 days after infarct.

MI complications: autoimmune phenomenon several weeks post-MI that results in fibrinous pericarditis? Dressler's syndrome.

MI complications: high risk of mortality? Cardiogenic shock (large infarcts).

MI complications: seen about a week after the infarction? Rupture of ventricular wall, septum, or papillary muscle.

Cardiomyopathies: Most common? Dilated (congestive) cardiomyopathy; heart looks like a balloon on X-ray.

Cardiomyopathies: True or false: substance abuse is a common cause of dilated cardiomyopathy. True; cocaine and alcohol especially.

Cardiomyopathies: These two infectious diseases are associated with dilated myopathy. Coxsackievirus B and Chagas' disease.

Cardiomyopathies: True or false: hypertrophic cardiomyopathy causes systolic dysfunction. False; dilated myopathy causes systolic dysfunction, hypertrophic causes diastolic.

Cardiomyopathies: Half of hypertrophic myopathies are inherited as an _________ trait (x-linked, dominant, etc.). Autosomal dominant; major cause of sudden death in young athletes.

Cardiomyopathies: On echo in hypertrophic disease, the LV thickens and the chamber looks how?
Like a banana.

Cardiomyopathies: These "-osis" diseases are major causes of restrictive/obliterative cardiomyopathy. Sarcoidosis, amyloidosis, hemochromatosis, endocardial fibroelastosis, endomyocardial (Loffler's) fibrosis… also scleroderma, but it's not an -osis.

Name the causes of holosystolic murmurs: 1) VSD, 2) mitral regurg, and 3) tricuspid regurg.

Widened pulse pressure is seen with this diastolic murmur. Aortic regurg.

Describe the murmur associated with the most common valvular lesion. Mitral prolapse; late systolic murmur following mid-systolic click.

True or false: aortic stenosis causes a decrescendo-crescendo murmur following an ejection click. False; the ejection click is followed by a crescendo-decrescendo systolic murmur.

Cause of a continuous murmur loudest at the time of S2? Patent ductus arteriosus.

Opening snap followed by late diastolic rumbling? Mitral stenosis.

Most common heart tumor? Metastasis.

Primary cardiac tumor in 1) adults and 2) children? Adults = myxoma (almost always in left atrium); children = rhabdomyoma.

Fun gross pathologic term for changes in liver with CHF? Nutmeg liver.

What are "heart failure cells"? Hemosiderin-laden macrophages in lung.

Dyspnea on exertion, pulmonary edema, and paroxysmal nocturnal dyspnea are symptoms of? Left heart failure.

Patient says "I have to sleep upright." The clinical term for this is? Orthopnea.

Most pulmonary emboli arise from? Deep veins of the leg.

True or false: Amniotic fluid can lead to DIC. True.

What are the components of Virchow's triad? Stasis, hypercoagulability, endothelial damage.

What is pulsus paradoxus? Greater than 10 mmHg drop in systolic pressure on inspiration.

What is electrical alternans? Characteristic of tamponade on ECG in which QRS complex height varies beat-to-beat.
Definitions for apiaceae

This page provides all possible meanings and translations of the word apiaceae.

Princeton's WordNet:
Umbelliferae, family Umbelliferae, Apiaceae, family Apiaceae, carrot family (noun)

Apiaceae: The Apiaceae, commonly known as the carrot or parsley family, are a family of mostly aromatic plants with hollow stems. The family is large, with more than 3,700 species spread across 434 genera; it is the 16th-largest family of flowering plants. Included in this family are the well-known plants: angelica, anise, arracacha, asafoetida, caraway, carrot, celery, Centella asiatica, chervil, cicely, coriander, cumin, dill, fennel, hemlock, lovage, Queen Anne's lace, parsley, parsnip, sea holly, and the now-extinct silphium.

U.S. National Library of Medicine:
Apiaceae: A large plant family in the order Apiales, also known as Umbelliferae. Most are aromatic herbs with alternate, feather-divided leaves that are sheathed at the base. The flowers often form a conspicuous flat-topped umbel. Each small individual flower is usually bisexual, with five sepals, five petals, and an enlarged disk at the base of the style. The fruits are ridged and are composed of two parts that split open at maturity.
How One Canadian College Is Building Better Health In Tanzania

By Craig and Marc Kielburger

Photo: Randy Plett via Getty Images

Row after row of mothers wait patiently, babies fussing in their laps. A nurse in this small clinic in rural Arusha, Tanzania, calls them forward, one by one, and gives the squirming infants a shot that will protect them from killer diseases, like measles and polio.

Over the past 15 years, Tanzania has made a concerted effort to immunize its children -- and has achieved a remarkable vaccination rate of almost 90 per cent. That's not good enough for the government and health organizations, though. They want to get as close to 100 per cent as possible. But figuring out which children have been missed is a huge challenge in a country where many families still live nomadic lives in remote areas.

Enter Seattle health organization PATH and Canada's own Mohawk College, in Hamilton, Ont. They're helping out, not with more vaccines or nurses, but with a database. The Better Immunization Data, or BID, initiative shows that strong health systems in developing countries aren't built on hospitals and medical staff alone. Knowledge is vital, too.

BID is an ambitious project to boost vaccination rates with an easy-to-use national electronic immunization registry. The initiative was launched in 2013 with funding from the Bill and Melinda Gates Foundation. Pilots of the system were first rolled out in Tanzania and Zambia last year. Mohawk College, which has been working on health information systems in Canada and abroad for eight years, provides the technological expertise.

"In Tanzania, we're able to accurately track needles in children's arms," says Justin Fyfe, Mohawk's senior software architect on the BID project, who explained the registry to us. "That's something even Canada struggles with. We're still carrying yellow vaccination cards."
When there's an immunization clinic in Tanzania today, nurses are equipped with digital tablets as well as needles. A nurse enters each vaccination into an app, which connects to the country's national vaccination registry. In areas where lack of power renders tablets impractical, health workers use a specially designed paper form that is later scanned into the system.

Once their children are vaccinated, parents get a card with a barcode. Any health centre connected to the registry can scan the card and get a child's full immunization record.

The registry is an epic time, labour and money saver. Before BID, it could take a team of Tanzanian nurses an entire day to prepare for an immunization clinic, says Dykki Settle, a senior technical adviser with PATH. They would have to wade through a mountain of paperwork, identifying which children still needed which vaccinations. If a family had arrived from another part of the country, the nurses would have no information to work with.

Tracking how many children are being immunized -- and in which areas -- will also help Tanzania more efficiently manage vaccine supply and distribution, Settle told us. Officials will know where the real need is, and can then ensure health centres don't run out of critical vaccines.

Perhaps the most ingenious aspect of the project is that Mohawk College is making the software expandable and open source. Tanzanian health workers will be able to use the database to support other projects, such as malaria or HIV treatment. Because it is open source, anyone can change the code that makes up the software, so other developing -- and developed -- countries will be able to adapt the system for their own use, creating their own health databases at low cost.
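The barcode workflow described above can be pictured with a toy sketch. Everything here (names, data layout) is invented for illustration and is not the actual BID software; it simply shows the core idea of a central registry keyed by the barcode on each child's card, readable from any connected clinic.

```python
# Toy illustration of a central immunization registry, keyed by the
# barcode printed on each child's card. All names are hypothetical;
# this is not the actual BID codebase.
registry = {}

def record_vaccination(barcode, vaccine, date):
    """A clinic's app appends a dose to the child's record, creating it if new."""
    registry.setdefault(barcode, []).append({"vaccine": vaccine, "date": date})

def lookup(barcode):
    """What any connected health centre does when it scans a card."""
    return registry.get(barcode, [])

# A child vaccinated at two different clinics still has one record:
record_vaccination("TZ-000123", "measles", "2015-06-01")
record_vaccination("TZ-000123", "polio", "2015-08-15")
print([d["vaccine"] for d in lookup("TZ-000123")])  # ['measles', 'polio']
```

The point of the design is that the record travels with the barcode, not with the paperwork at any one clinic, which is exactly what lets nurses skip the day of file-sorting described above.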
Overall, the system is designed to eventually become a one-stop health information source, with every Tanzanian citizen having their own electronic health record.

For many donors who support international development, building a database isn't as sexy as building a hospital, but the impact is just as powerful. Just ask nurses, who spend less time doing paperwork and more time helping people. And the moms who sleep at night, knowing their children are safe from disease.
Brush your teeth to reduce heart problems

Of these, 170 were fatal.

"Future experimental studies will be needed to confirm whether the observed association between oral health behaviour and cardiovascular disease is in fact causal or merely a risk marker."

Judy O'Sullivan, senior cardiac nurse at the British Heart Foundation, said: "If you don't brush your teeth, your mouth can become infected with bacteria which can cause inflammation.

"It is already known that there is a link between inflammation and a higher risk of developing heart disease.

"However, it is complicated by the fact that poor oral hygiene is often associated with other well-known risk factors for heart disease, such as smoking and poor diet.

"Good personal hygiene is a basic element of a healthy lifestyle.

"But if you want to help your heart, you should eat a balanced diet, avoid smoking and take part in regular physical activity."
[Diagram: endocytic pathway compartments]

In cell biology, an endosome is a membrane-bound compartment inside eukaryotic cells. It is a compartment of the endocytic membrane transport pathway originating from the trans Golgi membrane. Molecules or ligands internalized from the plasma membrane can follow this pathway all the way to lysosomes for degradation, or they can be recycled back to the plasma membrane. Molecules are also transported to endosomes from the trans-Golgi network and either continue to lysosomes or recycle back to the Golgi. Endosomes can be classified as early, sorting, or late depending on their stage post internalization.[1] Endosomes represent a major sorting compartment of the endomembrane system in cells.[2] In HeLa cells, endosomes are approximately 500 nm in diameter when fully mature.[3]

Endosomes comprise three different compartments: early endosomes, late endosomes, and recycling endosomes.[2] They are distinguished by the time it takes for endocytosed material to reach them, and by markers such as Rabs.[4] They also have different morphology. Once endocytic vesicles have uncoated, they fuse with early endosomes. Early endosomes then mature into late endosomes before fusing with lysosomes.[5][6]

Early endosomes mature in several ways to form late endosomes. They become increasingly acidic, mainly through the activity of the V-ATPase.[7] Many molecules that are recycled are removed by concentration in the tubular regions of early endosomes. Loss of these tubules to recycling pathways means that late endosomes mostly lack tubules.
They also increase in size due to the homotypic fusion of early endosomes into larger vesicles.[8] Molecules are also sorted into smaller vesicles that bud from the perimeter membrane into the endosome lumen, forming lumenal vesicles; this leads to the multivesicular appearance of late endosomes, and so they are also known as multivesicular bodies (MVBs). Removal of recycling molecules such as transferrin receptors and mannose 6-phosphate receptors continues during this period, probably via budding of vesicles out of endosomes.[5] Finally, the endosomes lose RAB5A and acquire RAB7A, making them competent for fusion with lysosomes.[8]

Some material recycles to the plasma membrane directly from early endosomes,[10] but most traffics via recycling endosomes. Phagosomes, macropinosomes and autophagosomes[13] mature in a manner similar to endosomes, and may require fusion with normal endosomes for their maturation. Some intracellular pathogens subvert this process, for example, by preventing RAB7 acquisition.[14]

Another unique identifying feature that differs between the various classes of endosomes is the lipid composition of their membranes. Phosphatidylinositol phosphates (PIPs), among the most important lipid signaling molecules, differ as endosomes mature from early to late. PI(4,5)P2 is present on plasma membranes, PI(3)P on early endosomes, PI(3,5)P2 on late endosomes and PI(4)P on the trans-Golgi network.[15] These lipids on the surface of the endosomes help in the specific recruitment of proteins from the cytosol, thus providing them an identity. The inter-conversion of these lipids is a result of the concerted action of phosphoinositide kinases and phosphatases that are strategically localized.[16]

[Diagram: animal cell endocytic pathway]

References

1. Stoorvogel, Willem; Strous, Ger J.; Geuze, Hans J.; Oorschot, Viola; Schwartz, Alan L. (1991-03-05).
"Late endosomes derive from early endosomes by maturation". Cell 65 (3): 417–427. doi:10.1016/0092-8674(91)90459-C. ISSN 0092-8674. PMID 1850321.

6. Luzio JP; Rous BA; Bright NA; Pryor PR; Mullock BM; Piper RC (2000). "Lysosome-endosome fusion and lysosome biogenesis". Journal of Cell Science 113: 1515–1524. PMID 10751143.

15. van Meer, Gerrit; Voelker, Dennis R.; Feigenson, Gerald W. (2008-02-01). "Membrane lipids: where they are and how they behave". Nature Reviews Molecular Cell Biology 9 (2): 112–124. doi:10.1038/nrm2330. ISSN 1471-0072. PMC 2642958. PMID 18216768.

16. Di Paolo, Gilbert; De Camilli, Pietro (2006-10-12). "Phosphoinositides in cell regulation and membrane dynamics". Nature 443 (7112): 651–657. doi:10.1038/nature05185. ISSN 0028-0836. PMID 17035995.
Sir Peter Mansfield

Sir Peter Mansfield (born October 9, 1933, London, England), English physicist who, with American chemist Paul Lauterbur, won the 2003 Nobel Prize for Physiology or Medicine for the development of magnetic resonance imaging (MRI), a computerized scanning technology that produces images of internal body structures, especially those comprising soft tissues.

Mansfield received a Ph.D. in physics from the University of London in 1962. Following two years as a research associate in the United States, he joined the faculty of the University of Nottingham, where he became professor in 1979 and professor emeritus in 1994. Mansfield was knighted in 1993.

Mansfield's prize-winning work expanded upon nuclear magnetic resonance (NMR), which is the selective absorption of very high-frequency radio waves by certain atomic nuclei subjected to a strong stationary magnetic field. A key tool in chemical analysis, it uses the absorption measurements to provide information about the molecular structure of various solids and liquids. In the early 1970s Lauterbur laid the foundations for MRI after realizing that if the magnetic field was deliberately made nonuniform, information contained in the signal distortions could be used to create two-dimensional images of a sample's internal structure. Mansfield transformed Lauterbur's discoveries into a practical technology in medicine by developing a way of using the nonuniformities, or gradients, introduced in the magnetic field to identify differences in the resonance signals more precisely. He also created new mathematical methods for quickly analyzing information in the signal and showed how to attain extremely rapid imaging. Because MRI does not have the harmful side effects of X-ray or computed tomography (CT) examinations and is noninvasive, the technology proved an invaluable tool in medicine.
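The gradient idea can be made concrete with a short sketch. The field and gradient values below are illustrative choices (a typical 1.5-tesla scanner), not figures from this article: with a linear gradient superimposed on the main field, protons at each position resonate at their own Larmor frequency, so a measured frequency can be mapped back to a position.

```python
# Sketch of one-dimensional frequency encoding in MRI. The numeric
# values are illustrative, not taken from the article.
GAMMA = 42.577e6   # proton gyromagnetic ratio (gamma / 2*pi), Hz per tesla
B0 = 1.5           # main (uniform) magnetic field, tesla
G = 0.01           # deliberately applied field gradient, tesla per metre

def larmor_frequency(x):
    """Resonance frequency (Hz) of protons sitting at position x (metres)."""
    return GAMMA * (B0 + G * x)

def position_from_frequency(f):
    """Invert the encoding: a measured frequency pins down a position."""
    return (f / GAMMA - B0) / G

# A voxel 5 cm from the magnet's centre resonates slightly faster than one
# at the centre, and that frequency offset reveals exactly where it sits:
x = 0.05
f = larmor_frequency(x)
print(round(position_from_frequency(f), 6))  # 0.05
```

In a real scanner the signal is a sum over all positions, and a Fourier transform separates the frequencies; this sketch only shows why a nonuniform field turns frequency into a spatial coordinate.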
Intermediate Music Theory Lesson 7 – Analysis

Often, I've been teaching a first piano lesson to a student, and they'll play something for me so I can assess their level. After finishing, for example, a beautiful Chopin piece, I'll ask the student to explain what they've just played. What chords are you playing? How does this progression work? Most of the time I'm greeted with blank looks.

For the budding composer, one of the most important things to know and understand is how to analyze a song. We've already started to learn some analysis techniques in past lessons, but now we'll try to define a set of "guides" for analyzing a song. There are the rules of "baroque" 4-part harmony analysis, but that's a whole different beast, something that may be covered in a special lesson segment.

As an example, we'll use the first few measures of the second movement of Beethoven's Piano Sonata No. 8, the "Pathétique".

[Score and audio: First 8 Measures of Pathétique, Movement 2]

Step 1 - The foundation

What is the key signature? Noting the key of the song is the most important step. If the song has a key, that gives you the base from which to analyze all other chords and melodic development. In this case, there are 4 flats, so it's either Ab major or F minor. You can usually tell whether a piece is major or minor by the first few chords. In this case, the first chord has an Ab and a C, and the next note is an Eb. These three notes make up an Ab triad, so it's pretty safe to say this part of the movement is in Ab major.

What is the form? From this excerpt, we cannot determine a form. However, it is important to look through a piece and decide on what the form could be: AA, AABA, sonata, etc. Most pop tunes don't follow a form like this, but they follow something akin to: intro, verse, verse, chorus, verse, chorus, bridge, chorus, outro - or some other variation.

What is the time signature? In this case, 2/4.

Step 2 - The Harmony

Now we can move chord by chord.
What we're looking for is not limited to which chords pass, but also includes which cadences we see, and whether there are any modulations, pedal points, strange motions, or suspensions.

Measure 1

As we said, the song starts in Ab major with an Ab major chord. The harmonic motion is moving at one chord per beat. On beat two, the chord changes to Eb7, which is the V chord in Ab major. Remember the thirds and sevenths! There is no fifth in this Eb7 chord, and the 7th appears in the bass. It's an easy leap to guess that the Eb7 will resolve to Ab major, with the third and seventh moving properly. Let's see…

Measure 2

Just as we predicted! The first chord in this measure is Ab major with C in the bass, as the seventh moved to the third, just like we thought it would. Meanwhile, the third moved to the root. In the second chord, Beethoven moves to Eb again with G in the bass, and in the second half of the beat, there's that seventh again. I wonder if that seventh will move to the third again…

Measure 3

Aha! It did. The Db moved to C. The first chord is Ab major again, this time with the root in the bass. We can also see that the harmonic rhythm has changed; it's now moving twice as fast, one chord every eighth note. The second chord is Eb with G in the bass again, but this time no seventh. Then, in the third chord, Beethoven switches it up and does what's called a "deceptive" cadence. A deceptive cadence happens when the V chord moves to the relative minor or vi chord - in this case, it resolves to F minor. Then, Beethoven does something tricky, moving to a Bb7 chord with F in the bass, or in Ab, the V/V (the five chord of the five chord). He used the F minor as a sort of ii chord to do a ii-V-I cadence to Eb…

Measure 4

And there's the Eb chord, tonicizing the V chord. This measure remains in Eb, and Beethoven has switched the harmonic rhythm back to one chord per beat.
The only wonky thing that happens here is the E natural passing tone on the second half of beat two, leading to…

Measure 5

Gb half-diminished over Db. This is the vii7 chord of Ab major, which often works as an extension of V7, which Beethoven eventually moves to in beat two, briefly touching on V7 and moving to…

Measure 6

Another tonic! Ab major over C to start the measure. Next is an out-of-place chord, F7. F7 is the V/ii (five chord of the ii chord, which is Bb). Will it move to Bb?

Measure 7

Yes! He moves to ii, then to V, and eventually, after some harmonic motion…

Measure 8

To a pedal point of Eb7 over Ab, finally to Ab to ease the tension.

For you classical theorists, if we were to write this out in "classical" Roman numerals, the first 8 measures would look like this:

[Image: Roman-numeral analysis of the first 8 measures]

Some Roman numerals have superscripts. This is just the classical way of writing inversions. The only two here are V4/2, which is a 7th chord in third inversion, and I6, which is a triad in first inversion.

For this week's homework, take one of your favorite tunes and analyze it piece by piece. Paste your analysis to the comment board!

This entry was posted in Artists in Residence, Rick Louie.

6 Responses to Intermediate Music Theory Lesson 7 – Analysis

1. I must confess I'm appalling at this with classical piano music – when I'm composing or playing guitar or producing tracks or anything I'm very conscious of what I'm doing theory-wise, and I know this stuff in theory, but in practise on the piano I just tend to learn pieces with much less attention to the chord progressions and suchlike than I would in any other situation.

2.
Also I was going to try that analysis thing on the piece I'm learning at the moment (Rachmaninov's Prelude in C# Minor) but I think it's got so much non-diatonic movement that it may take a while and fill up the entire page. Also, in a minor key, I'm never sure whether to write it as: bVI v i IV iii vi (i.e. with reference to the relative major, or just with the root note as I regardless, however lower case here as it's minor). Which would you advise? I know it gets done both ways depending on context, but the former would seem more rational to me, especially as you start moving towards more complex modes such as Phrygian Dominant or Altered Dominant or whatever; at that point it would surely be necessary to refer everything to the root note of whatever mode you're playing in, yet for a minor scale I still see people doing it the other way sometimes.

3. Rick Louie says: Romantic-type music is on the whole much more difficult to parse using traditional analysis, though you can do it (sometimes you have to take a little leap to get around things). In regards to your other question, the only time you use accidentals is when you're going out of the key. Analysis is key specific, so if you're in C minor and you're moving to VI, all you have to denote is the capitalized (since it's a major chord) VI. bVI is redundant. For more complex modes you wouldn't use Roman numeral analysis (though I guess you can) just because it gets too convoluted. Usually those only come up in jazz charts, and in that case you'd just write the lead sheet notation like "Ephryg" or "Calt" for phrygian and altered. If, however, you're using altered as a V chord, you can do a hybrid and write V7(b9#9b5#5), for example.

4. Pingback: Intermediate Music Theory Lesson 8 – Composition Techniques | Indablog

5. Pingback: Theory and Composition Hosted By Rick Louie | Indablog

6. john says: Measure 5: Gb half-diminished over Db. Supposed to be G half-diminished.
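For readers who'd like to automate the first pass of the kind of analysis taught in this lesson, the degree-to-numeral mapping can be sketched in a few lines. This is an illustrative helper of my own, not code from the lesson; it only handles diatonic roots in a major key, with lower-case numerals marking minor-quality chords as the lesson does.

```python
# Hypothetical helper: label a chord root with its Roman numeral in a major key.
MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]          # semitones above the tonic
NUMERALS = ["I", "II", "III", "IV", "V", "VI", "VII"]
NOTE_TO_PC = {"C": 0, "Db": 1, "D": 2, "Eb": 3, "E": 4, "F": 5,
              "Gb": 6, "G": 7, "Ab": 8, "A": 9, "Bb": 10, "B": 11}

def roman_numeral(key, root, minor=False):
    """Roman numeral of a chord built on `root`, in the major key `key`."""
    interval = (NOTE_TO_PC[root] - NOTE_TO_PC[key]) % 12
    degree = MAJOR_SCALE_STEPS.index(interval)      # ValueError if chromatic
    numeral = NUMERALS[degree]
    return numeral.lower() if minor else numeral

# The opening of the Pathetique's second movement, as analyzed in the lesson:
print(roman_numeral("Ab", "Ab"))                    # I
print(roman_numeral("Ab", "Eb"))                    # V
print(roman_numeral("Ab", "F", minor=True))         # vi (the deceptive cadence)
print(roman_numeral("Ab", "Bb"))                    # II, i.e. the V/V chord
```

Chromatic roots (anything outside the key's seven degrees) raise an error here on purpose; labelling those properly means secondary-dominant notation like V/V, which is a judgment call rather than a table lookup.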