Doctors Are Urging Parents Everywhere To Keep Their Kids’ Baby Teeth
Saving baby teeth could save lives
As a child, there are few things more exciting than losing a tooth and waking up with some money under your pillow from the Tooth Fairy. But doctors are telling parents to hold on to those baby teeth because they could come in handy later in life.
A study in the Proceedings of the National Academy of Sciences reveals that baby teeth, also known as deciduous teeth, store a myriad of stem cells.
There are two basic kinds of stem cells, embryonic and adult. The former can only be harvested from human embryos, while the latter are found in a variety of human tissues such as bone marrow, umbilical cord blood, fat, and, since 2003, baby teeth. The scientists who discovered that the pulp of baby teeth is rich in stem cells also noted that SHED (“stem cells from human exfoliated deciduous teeth”) have unique properties:
So if you thought getting a dollar for your tooth was cool, then getting the chance to grow any kind of tissue, from heart cells to brain cells, and repair damaged tissue is definitely even cooler!
The tricky thing, however, is that the tooth has to have an adequate blood supply before being frozen. You can’t just keep the teeth in a box and use them later for stem cells. The cells will degrade and lose their potency if not properly preserved.
Considering the uncertainties and the cost, then, parents would be well advised to seek the opinion of a knowledgeable medical professional and carefully weigh their decision before investing in a baby tooth bank account.
|
Testing Distributed Systems for Linearizability
Distributed systems are challenging to implement correctly because they must handle concurrency and failure. Networks can delay, duplicate, reorder, and drop packets, and machines can fail at any time. Even when designs are proven correct on paper, it is difficult to avoid subtle bugs in implementations.
Unless we want to use formal methods1, we have to test systems if we want assurance that implementations are correct. Testing distributed systems is challenging, too. Concurrency and nondeterminism make it difficult to catch bugs in tests, especially when the most subtle bugs surface only under scenarios that are uncommon in regular operation, such as simultaneous machine failure or extreme network delays.
Before we can discuss testing distributed systems for correctness, we need to define what we mean by “correct”. Even for seemingly simple systems, specifying exactly how the system is supposed to behave is an involved process2.
Consider a simple key-value store, similar to etcd, that maps strings to strings and supports two operations: Put(key, value) and Get(key). First, we consider how it behaves in the sequential case.
Sequential Specifications
We probably have a good intuitive understanding of how a key-value store is supposed to behave under sequential operation: Get operations must reflect the result of applying all previous Put operations. For example, if we run Put("x", "y"), then a subsequent Get("x") should return "y". If the operation returned, say, "z", that would be incorrect.
Rather than settling for an English-language description, we can write the specification for our key-value store as executable code:
class KVStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key, "")
The code is short, but it nails down all the important details: the start state, how the internal state is modified as a result of operations, and what values are returned as a result of calls on the key-value store. The spec solidifies some details like what happens when Get() is called on a nonexistent key, but in general, it lines up with our intuitive definition of a key-value store.
Next, we consider how our key-value store can behave under concurrent operation. Note that the sequential specification does not tell us what happens under concurrent operation. For example, the sequential spec doesn’t say how our key-value store is allowed to behave in this scenario:
It’s not immediately obvious what value the Get("x") operation should be allowed to return. Intuitively, we might say that because the Get("x") is concurrent with the Put("x", "y") and Put("x", "z"), it can return either value or even "". If we had a situation where another client executed a Get("x") much later, we might say that the operation must return "z", because that was the value written by the last write, and the last write operation was not concurrent with any other writes.
We formally specify correctness for concurrent operations based on a sequential specification using a consistency model known as linearizability. In a linearizable system, every operation appears to execute atomically and instantaneously at some point between the invocation and response. There are other consistency models besides linearizability, but many distributed systems provide linearizable behavior: linearizability is a strong consistency model, so it’s relatively easy to build other systems on top of linearizable systems.
Consider an example history with invocations and return values of operations on a key-value store:
This history is linearizable. We can show this by explicitly finding linearization points for all operations (drawn in blue below). The induced sequential history, Put("x", "0"), Get("x") -> "0", Put("x", "1"), Get("x") -> "1", is a correct history with respect to the sequential specification.
In contrast, this history is not linearizable:
There is no linearization of this history with respect to the sequential specification: there is no way to assign linearization points to operations in this history. We could start assigning linearization points to the operations from clients 1, 2, and 3, but then there would be no way to assign a linearization point for client 4: it would be observing a stale value. Similarly, we could start assigning linearization points to the operations from clients 1, 2, and 4, but then the linearization point of client 2’s operation would be after the start of client 4’s operation, and then we wouldn’t be able to assign a linearization point for client 3: it could legally only read a value of "" or "0".
With a solid definition of correctness, we can think about how to test distributed systems. The general approach is to test for correct operation while randomly injecting faults such as machine failures and network partitions. We could even simulate the entire network so it’s possible to do things like cause extremely long network delays. Because tests are randomized, we would want to run them a bunch of times to gain assurance that a system implementation is correct.
Ad-hoc testing
How do we actually test for correct operation? With the simplest software, we test it using input-output cases like assert(expected_output == f(input)). We could use a similar approach with distributed systems. For example, with our key-value store, we could have the following test where multiple clients are executing operations on the key-value store in parallel:
for client_id = 0..10 {
    spawn thread {
        for i = 0..1000 {
            value = rand()
            kvstore.put(client_id, value)
            assert(kvstore.get(client_id) == value)
        }
    }
}
wait for threads
It is certainly the case that if the above test fails, then the key-value store is not linearizable. However, this test is not that thorough: there are non-linearizable key-value stores that would always pass this test.
A better test would be to have parallel clients run completely random operations: e.g. repeatedly calling kvstore.put(rand(), rand()) and kvstore.get(rand()), perhaps limited to a small set of keys to increase contention. But in this case, how would we determine what is “correct” operation? With the simpler test, we had each client operating on a separate key, so we could always predict exactly what the output had to be.
When clients are operating concurrently on the same set of keys, things get more complicated: we can’t predict what the output of every operation has to be because there isn’t only one right answer. So we have to take an alternative approach: we can test for correctness by recording an entire history of operations on the system and then checking if the history is linearizable with respect to the sequential specification.
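To make this concrete, here's a minimal sketch of such a test driver in Python. It isn't from the original post: the in-process dict is a stand-in for a real key-value store client, and the event format is my own. The point is just that each client records invocation and response times along with inputs and outputs, so the full history can be checked offline afterwards.

```python
import random
import threading
import time

history = []
history_lock = threading.Lock()
store = {}  # in-process stand-in for a real distributed key-value store client

def record(kind, key, value, out, call, ret):
    """Append one timed event to the shared history."""
    with history_lock:
        history.append({"kind": kind, "key": key, "value": value,
                        "out": out, "call": call, "ret": ret})

def client(n_ops, keys):
    for _ in range(n_ops):
        key = random.choice(keys)  # small key space to increase contention
        call = time.monotonic()
        if random.random() < 0.5:
            value = str(random.randrange(100))
            store[key] = value            # kvstore.put(key, value)
            record("put", key, value, None, call, time.monotonic())
        else:
            out = store.get(key, "")      # kvstore.get(key)
            record("get", key, None, out, call, time.monotonic())

threads = [threading.Thread(target=client, args=(100, ["a", "b"]))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# `history` now holds 400 timed events, ready for a linearizability checker
```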
Linearizability Checking
A linearizability checker takes as input a sequential specification and a concurrent history, and it runs a decision procedure to check whether the history is linearizable with respect to the spec.
Unfortunately, linearizability checking is NP-complete. The proof is actually quite simple: we can show that linearizability checking is in NP, and we can show that an NP-hard problem can be reduced to linearizability checking. Clearly, linearizability checking is in NP: given a linearization, i.e. the linearization points of all operations, we can check in polynomial time if it is a valid linearization with respect to the sequential spec.
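To illustrate what the decision procedure looks like, here's a toy brute-force checker for the key-value store spec, sketched in Python. This is the naive exponential-time search (in the spirit of the Wing & Gong algorithm), not Porcupine's or Knossos's actual implementation, and the history format is made up for the example: at each step, any operation that was invoked before every outstanding operation's response is a candidate to be linearized next.

```python
def step(state, op):
    """Apply one operation to a model key-value store; return (state', output)."""
    if op["kind"] == "put":
        new_state = dict(state)
        new_state[op["key"]] = op["value"]
        return new_state, None
    return state, state.get(op["key"], "")

def linearizable(history, state=None):
    """Return True iff `history` has a valid linearization w.r.t. the spec."""
    if state is None:
        state = {}
    if not history:
        return True
    # an operation may be linearized first only if it was invoked
    # before every operation in the remaining history returned
    min_ret = min(ev["ret"] for ev in history)
    for i, ev in enumerate(history):
        if ev["call"] > min_ret:
            continue  # some other operation returned before this one began
        new_state, out = step(state, ev)
        if out == ev["out"] and linearizable(history[:i] + history[i + 1:], new_state):
            return True
    return False

ok = [
    {"kind": "put", "key": "x", "value": "0", "out": None, "call": 0, "ret": 2},
    {"kind": "get", "key": "x", "value": None, "out": "0", "call": 1, "ret": 3},
    {"kind": "put", "key": "x", "value": "1", "out": None, "call": 4, "ret": 6},
    {"kind": "get", "key": "x", "value": None, "out": "1", "call": 5, "ret": 7},
]
bad = [
    {"kind": "put", "key": "x", "value": "0", "out": None, "call": 0, "ret": 1},
    {"kind": "get", "key": "x", "value": None, "out": "",  "call": 2, "ret": 3},
]
# linearizable(ok) is True; linearizable(bad) is False (a stale read)
```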
To show that linearizability checking is NP-hard, we can reduce the subset sum problem to linearizability checking. Recall that in the subset sum problem, we are given a set S = {s_1, s_2, …, s_n} of non-negative integers and a target value t, and we have to determine whether there exists a subset of S that sums to t. We can reduce this problem to linearizability checking as follows. Consider the sequential spec:
class Adder:
    def __init__(self):
        self._total = 0

    def add(self, value):
        self._total += value

    def get(self):
        return self._total
And consider a history in which, all overlapping one another, an Add(s_i) operation is issued for each element s_i of S, along with a single Get() operation that returns t:
This history is linearizable if and only if the answer to the subset sum problem is “yes”. If the history is linearizable, then we can take all the operations Add(s_i) that have linearization points before that of the Get() operation, and those correspond to elements in a subset whose sum is t. If the set does have a subset that sums to t, then we can construct a linearization by having the operations Add(s_i) corresponding to the elements in the subset take place before the Get() operation and having the rest of the operations take place after the Get() operation.
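The reduction is easy to make concrete. Below is a Python sketch (my own notation, not from the original post) that builds such a history from a subset-sum instance; because every operation overlaps every other, choosing which Add operations linearize before the Get is exactly choosing a subset.

```python
from itertools import combinations

def subset_sum_history(values, target):
    """Build a concurrent history over the Adder spec that is linearizable
    iff some subset of `values` sums to `target`. Every operation spans the
    same interval, so each Add may land before or after the Get."""
    history = [{"op": ("add", v), "out": None, "call": 0, "ret": 2}
               for v in values]
    history.append({"op": ("get",), "out": target, "call": 0, "ret": 2})
    return history

def linearizable_adder(history):
    """Brute-force check: try every choice of Adds preceding the Get."""
    adds = [ev["op"][1] for ev in history if ev["op"][0] == "add"]
    target = next(ev["out"] for ev in history if ev["op"][0] == "get")
    # since all operations overlap, every subset of Adds is a legal prefix
    for r in range(len(adds) + 1):
        for combo in combinations(adds, r):
            if sum(combo) == target:
                return True
    return False

# linearizable_adder(subset_sum_history([3, 5, 8], 11)) is True  (3 + 8)
# linearizable_adder(subset_sum_history([3, 5, 8], 2))  is False
```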
Even though linearizability checking is NP-complete, in practice, it can work pretty well on small histories. Implementations of linearizability checkers take an executable specification along with a history, and they run a search procedure to try to construct a linearization, using tricks to constrain the size of the search space.
There are existing linearizability checkers like Knossos, which is used in the Jepsen test system. Unfortunately, when trying to test an implementation of a distributed key-value store that I had written, I couldn’t get Knossos to check my histories. It seemed to work okay on histories with a couple concurrent clients, with about a hundred history events in total, but in my tests, I had tens of clients generating histories of thousands of events.
To be able to test my key-value store, I wrote Porcupine, a fast linearizability checker implemented in Go. Porcupine checks if histories are linearizable with respect to executable specifications written in Go. Empirically, Porcupine is thousands of times faster than Knossos. I was able to use it to test my key-value store because it is capable of checking histories of thousands of events in a couple seconds.
Testing linearizable distributed systems using fault injection along with linearizability checking is an effective approach.
To compare ad-hoc testing with linearizability checking using Porcupine, I tried testing my distributed key-value store using the two approaches. I tried introducing different kinds of design bugs into the implementation of the key-value store, such as modifications that would result in stale reads, and I checked to see which tests failed. The ad-hoc tests caught some of the most egregious bugs, but the tests were incapable of catching the more subtle bugs. In contrast, I couldn’t introduce a single correctness bug that the linearizability test couldn’t catch.
1. Formal methods can provide strong guarantees about the correctness of distributed systems. For example, the UW PLSE research group has recently verified an implementation of the Raft consensus protocol using the Coq proof assistant. Unfortunately, verification requires specialized knowledge, and verifying realistic systems involves huge effort. Perhaps one day systems used in the real world will be proven correct, but for now, production systems are tested but not verified.
2. Ideally, all production systems would have formal specifications. Some systems that are being used in the real world today do have formal specs: for example, Raft has a formal spec written in TLA+. But unfortunately, the majority of real-world systems do not have formal specs.
μWWVB: A Tiny WWVB Station
μWWVB is a watch stand that automatically sets the time on atomic wristwatches where regular WWVB signal isn’t available. The system acquires the correct time via GPS and sets radio-controlled clocks by emulating the amplitude-modulated WWVB time signal.
Watch stand with watch
Atomic Clocks
Most so-called atomic clocks aren’t true atomic clocks; rather, they are radio-controlled clocks that are synchronized to true atomic clocks. Radio clocks maintain time by using an internal quartz crystal oscillator and periodically synchronizing with an atomic clock radio signal. Quartz clocks have a fractional inaccuracy of about 6 × 10⁻⁶, which means that they can gain or lose about 15 seconds every month. Official NIST US time is kept by an ensemble of cesium fountain atomic clocks — their newest clock, NIST-F2, has a fractional inaccuracy of about 1 × 10⁻¹⁶, meaning that the clock would neither gain nor lose one second in about 300 million years.
Most radio-controlled clocks in the United States are synchronized to the WWVB radio station, which continuously broadcasts official NIST US time. WWVB broadcasts from Fort Collins, Colorado, using a two-transmitter system with an effective radiated power of 70 kW. Theoretically, during good atmospheric conditions, the signal should cover the continental United States. Unfortunately, I can’t get my wristwatch to receive the 60 kHz amplitude-modulated time signal in my dorm room in Cambridge, Massachusetts.
Getting Accurate Time
Taking into account frequency uncertainty, WWVB can provide time with an accuracy of about 100 microseconds. In the absence of WWVB, there are other sources that can provide reasonably accurate time. The Network Time Protocol (NTP), which operates over the Internet, can provide time with an accuracy of about 1 millisecond. GPS can theoretically provide time with an accuracy of tens of nanoseconds. I decided to use GPS, mostly because I didn’t want to make my WWVB emulator dependent on an Internet connection.
Building a WWVB emulator involves transmitting on 60 kHz. In general, it’s not legal to broadcast on arbitrary frequencies at an arbitrary transmit power, because transmissions cause interference. Many parts of the radio spectrum are already in use, as allocated by the Federal Communications Commission (FCC).
Luckily, the FCC grants exemptions for certain unlicensed transmissions, as specified by 47 CFR 15. This is explained in some detail in “Understanding the FCC Regulations for Low-Power Non-Licensed Transmitters”.
Transmitters in the 60 kHz band are allowed, and the emission limit at that frequency is given in 47 CFR 15.209. As long as the field strength is under 2400/f(kHz) μV/m — 40 μV/m at 60 kHz — as measured at 300 meters, it’s fine. In my use case, I have the transmitter within a couple inches of the receiver in my wristwatch, so I don’t need to transmit at a high power.
I designed and fabricated a tiny custom board designed to interface with a GPS and an antenna:
Circuit board
The board is powered by a $1 ATtiny44A microcontroller. I used a 20 MHz external crystal oscillator for the microcontroller so I’d have a more accurate clock than I would with the internal RC oscillator. The board has a Mini-USB connector for power, an AVR ISP header for programming the microcontroller, and a JST-SH 6 pin connector for the GPS. I included pin headers for the antenna, making sure to connect them to a port that works with fast PWM. I also included 3 LEDs as status indicators — a red LED for power, a green LED to indicate a GPS lock, and a blue LED to show the unmodulated WWVB signal.
I designed the board using the EAGLE PCB design software and milled the board from a single-sided FR-1 circuit board blank on an Othermill v2:
Once the board was finished, I used solder paste and a hot air gun to solder my components. Hand soldering surface-mount components is pretty painful, but using solder paste, the entire soldering process took only ten minutes.
For my GPS module, I used a USGlobalSat EM-506, a high-sensitivity GPS powered by the SiRFstarIV chipset.
The 60 kHz WWVB signal has a very long wavelength: λ = c/f, so the wavelength is approximately (3 × 10⁸ m/s) / (60 kHz) ≈ 5 kilometers. It’s challenging to design good antennas for such long wavelengths — a quarter-wavelength antenna would be about 1250 meters long! WWVB uses a sophisticated antenna setup that’s automatically tuned using a computer to achieve an efficiency of about 70%. Luckily, for my use case, I didn’t need to worry about designing a really efficient antenna and doing careful impedance matching — I was transmitting over such a small distance that efficiency didn’t matter too much.
I didn’t want to build my own antenna, so I gutted a radio clock and repurposed its ferrite core loopstick antenna. Thanks to antenna reciprocity, which says that the receive and transmit properties of an antenna are identical, I knew that this should work.
Clock disassembly
I wrote software to periodically get accurate time via GPS and continuously rebroadcast the time following the WWVB protocol. The software is written in plain C and doesn’t use any libraries or anything. I used the CrossPack development environment on macOS for compiling my code and flashing my microcontroller.
Getting the software to work just right took a good amount of effort. To make it easier, I initially designed each component separately, and still, I ended up spending a lot of time debugging:
Debugging using an oscilloscope
NMEA GPS Interface
According to the datasheet, the EM-506 has a UART interface and supports both the SiRF Binary protocol and the NMEA protocol. NMEA 0183 is a standardized ASCII-based protocol, so I opted to use that over SiRF Binary.
After implementing software UART on the ATtiny44A, getting time data from the GPS was as simple as sending over a command to query for the ZDA (date and time) NMEA message.
In response, I’d get back a message with the current date and time (in UTC); for example, for 26 December 2016, 18:00:00, I’d get a ZDA sentence encoding that date and time1.
Date and Time Calculations
It was easy to parse the ZDA information to get the current date and time. However, the WWVB protocol required some extra date/time information not directly available in the ZDA data, so I had to write some date/time conversion utilities.
Leap year calculation was simple, and calculating the day of year was also straightforward.
Calculating whether daylight savings time was in effect took a little bit more effort. In the process of implementing it, I learned of a neat way to calculate the day of the week given the month, day, and year:
int day_of_week(long day, long month, long year) {
    // via https://en.wikipedia.org/wiki/Julian_day
    long a = (14 - month) / 12;
    long y = year + 4800 - a;
    long m = month + 12 * a - 3;
    long jdn = day + (153 * m + 2) / 5 + 365 * y +
               (y / 4) - (y / 100) + (y / 400) - 32045;
    return (jdn + 1) % 7;
}
int is_daylight_savings_time(int day, int month, int year) {
    // according to NIST:
    // begins at 2:00 a.m. on the second Sunday of March
    // ends at 2:00 a.m. on the first Sunday of November
    if (month <= 2 || 12 <= month) return 0;
    if (4 <= month && month <= 10) return 1;
    // only March and November left
    int dow = day_of_week(day, month, year);
    if (month == 3) {
        return (day - dow > 7);
    } else {
        // month == 11
        return (day - dow <= 0);
    }
}
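As a sanity check (my own addition, not part of the firmware), the same Julian day number formula can be transcribed into Python and compared against the standard library's notion of the weekday:

```python
import datetime

def day_of_week(day, month, year):
    """Same Julian day number formula as the C code; 0 = Sunday."""
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    jdn = (day + (153 * m + 2) // 5 + 365 * y
           + y // 4 - y // 100 + y // 400 - 32045)
    return (jdn + 1) % 7

# cross-check against the standard library around the DST boundary months
for year in (2015, 2016, 2017):
    for month in (3, 11):
        for day in range(1, 29):
            # datetime.weekday(): Monday = 0 ... Sunday = 6
            expected = (datetime.date(year, month, day).weekday() + 1) % 7
            assert day_of_week(day, month, year) == expected
```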
WWVB-format Time Signal
WWVB uses amplitude modulation of a 60 kHz carrier to transmit data at a rate of 1 bit per second, sending a full frame every minute. Every second, WWVB transmits a marker, a zero bit, or a one bit. A marker is sent by reducing the power of the carrier for 0.8 seconds and then restoring the power of the carrier for the remaining 0.2 seconds. A zero is sent by reducing the power of the carrier for 0.2 seconds, and a one is sent by reducing power for 0.5 seconds.
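The per-symbol timing can be summarized in a small lookup table. This Python sketch (my own, just restating the durations above) gives the reduced-power and full-power durations, in milliseconds, for each one-second slot:

```python
# milliseconds of reduced carrier power per one-second WWVB symbol slot
LOW_POWER_MS = {
    "marker": 800,  # 0.8 s low, 0.2 s full power
    "zero":   200,  # 0.2 s low, 0.8 s full power
    "one":    500,  # 0.5 s low, 0.5 s full power
}

def amplitude_schedule(symbol):
    """Return (low_power_ms, full_power_ms) for one WWVB symbol."""
    low = LOW_POWER_MS[symbol]
    return low, 1000 - low
```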
Here is the format of the WWVB time code, as documented by NIST:
WWVB time code format
I made use of the hardware PWM built into the ATtiny44A to generate and modulate the 60 kHz carrier for emulating WWVB. Working out exactly how to configure the microcontroller required careful reading of the section in the datasheet on fast PWM.
I used the following code to set up the 16-bit timer/counter:
// set system clock prescaler to /1 (enable the change, then write the value)
CLKPR = (1 << CLKPCE);
CLKPR = 0;
// initialize non-inverting fast PWM on OC1B (PA5)
// count from BOTTOM to ICR1 (mode 14), using /1 prescaler
TCCR1A = (1 << COM1B1) | (0 << COM1B0) | (1 << WGM11) | (0 << WGM10);
TCCR1B = (1 << WGM13) | (1 << WGM12) | (0 << CS12) | (0 << CS11) | (1 << CS10);
// fast PWM:
// f = f_clk / (N * (1 + TOP)), where N is the prescaler divider
// we have f_clk = 20 MHz
// for f = 60 kHz, we want N * (1 + TOP) = 333.3
// we're using a prescaler of 1, so we want ICR1 = TOP = 332
// this gives an f = 60.06 kHz
// we can use OCR1B to set duty cycle (a fraction of ICR1)
ICR1 = 332;
OCR1B = 0; // by default, have a low output
DDRA |= (1 << PA5); // set PA5 to an output port
After this setup, I could modulate the carrier by setting OCR1B. Setting OCR1B = 166 made a 50% duty cycle 60 kHz square wave, and setting OCR1B = 0 resulted in a reduction in power of the carrier. With this setup, for example, I could generate a zero bit as follows:
void gen_zero() {
    OCR1B = 0;      // reduce carrier power for 0.2 seconds
    _delay_ms(200);
    OCR1B = 166;    // full power for the remaining 0.8 seconds
    _delay_ms(800);
}
After I had this set up, I implemented functionality to broadcast WWVB-format data by repeatedly broadcasting the appropriate data for the current second and then incrementing the current time.
Physical Design
I wanted to keep the physical design simple, so I opted to go with a press-fit design consisting of a 3D-printed top and bottom with laser-cut sides to form a box.
3D Parts
I used OpenSCAD, a programming-based 3D modeler, to design my 3D parts:
I used a Stratasys uPrint SE to print my parts out of ABS thermoplastic:
2D Parts
I used Adobe Illustrator to design my 2D parts, and I cut them out of acrylic on a 75-watt Universal PLS 6.75:
Because it was a press-fit design, assembly took about two minutes! Here’s the final product:
Watch stand
μWWVB works really well for me, consistently synchronizing my watch in about three minutes. My watch is set up to automatically receive the WWVB signal every night, so by leaving my watch on its stand overnight, it’s automatically synchronized every day!
In the current implementation, μWWVB syncs my watch to an accuracy of about 500 milliseconds of UTC. By putting a little more effort into making the timing in my software more precise, doing things like using the milliseconds value from the ZDA NMEA message instead of ignoring it, I could probably get the error down to about 100 milliseconds. There would still be some error, mostly due to the ZDA NMEA message being sent over UART, which is an asynchronous connection.
If I wanted the system to be much more accurate, I’d probably need to switch to a pulse per second (1PPS) GPS. A 1PPS GPS outputs a signal that has a sharp edge every second precisely at the start of the second — such a signal could be used to clock the WWVB time code such that each bit starts precisely at the start of the second.
But for now, for my purposes, μWWVB works really well!
1. Actually, for my device, I was getting data in the format $GPZDA,hhmmss.sss,dd,mm,yyyy,,*CC, contrary to the SiRF NMEA reference manual. So for 26 December 2016, 18:00:00.000, I’d get the NMEA message $GPZDA,180000.000,26,12,2016,,*5D
Algorithms in the Real World: Committee Assignment
I recently had another chance to use a fancy algorithm to solve a real-world problem. These opportunities don’t come up all that often, but when they do, it’s pretty exciting!
Every year, MIT HKN has a bunch of eligible students who need to be matched to committees. As part of the assignment process, the officers decide how many spots are available on each committee, and then we have every eligible member rank the committees. In the past, officers matched people manually, looking at the data and trying to give each person one of their 1st or 2nd choices. Unfortunately, this is time-consuming and unlikely to result in an optimal assignment if we’re trying to maximize overall happiness.
To see how assignments can be suboptimal, we can go through an example.
Committee Capacity
Tutoring 2
Outreach 1
Social 2
Person Tutoring Outreach Social
Alice 1 3 2
Bob 1 3 2
Charlie 1 2 3
Dave 2 1 3
Eve 2 1 3
In the above table, 1 means first choice, and 3 means third choice. We could imagine assigning people by going down the list and assigning each person to their highest-ranked committee that has available slots. This would result in assigning Alice to Tutoring (1st choice), Bob to Tutoring (1st choice), Charlie to Outreach (2nd choice), Dave to Social (3rd choice), and Eve to Social (3rd choice).
Not only is this algorithm unfair to the people at the bottom of the list, but it’s also suboptimal. If we want to minimize the sum of the rankings for committees we placed people on, we could go with Alice–Social, Bob–Social, Charlie–Tutoring, Dave–Outreach, Eve–Tutoring. This results in a “cost” of 8, which is optimal, rather than the cost of 10 we got with the first assignment, which was constructed greedily.
In the actual data set, there were 8 committees and 57 eligible members, so it wouldn’t have been feasible to manually find an optimal assignment.
Problem Statement
More formally: we have m committees and n people. Each committee c has a capacity of k_c people, where we’re guaranteed that the capacities sum to n. We know people’s preferences, which for any given person p is a permutation π_p of the committees, mapping the highest-ranked committee to 1 and the lowest-ranked committee to m. Our goal is to find an assignment a of people to committees that solves the following optimization problem: minimize Σ_p π_p(a(p)) subject to Σ_p δ(a(p), c) = k_c for every committee c.
Above, δ is the Kronecker delta. Essentially, we want to find the assignment that minimizes cost while satisfying our constraints of having a specific number of people on each committee.
It turns out that the committee assignment problem can be transformed into an instance of the assignment problem (which can also be thought of as finding a minimum weight matching in a bipartite graph).
In the assignment problem, you have n people, n jobs, and an n × n cost matrix C, where C_ij is the cost of having person i do job j, and you want to find the minimum cost assignment such that each person is assigned to a unique job.
The committee assignment problem can be trivially transformed into an instance of the assignment problem, simply by making k_c copies of each committee c, counting each copy as a separate “job”, and constructing an appropriate cost matrix from the permutations π_p.
Luckily, the assignment problem is well-studied in computer science, and there’s a known solution — the Hungarian algorithm solves this problem. There’s even an implementation of the algorithm built into SciPy. This makes solving the committee assignment problem really easy — it only requires a little bit of code to implement the transformation described above.
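Using the example table from above, the whole transformation plus the solve fits in a few lines. This is a sketch, not our actual script; SciPy's linear_sum_assignment solves the assignment problem for a given cost matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# the example from above: capacities and preference rankings (1 = first choice)
committees = ["Tutoring", "Outreach", "Social"]
capacity = {"Tutoring": 2, "Outreach": 1, "Social": 2}
prefs = {
    "Alice":   {"Tutoring": 1, "Outreach": 3, "Social": 2},
    "Bob":     {"Tutoring": 1, "Outreach": 3, "Social": 2},
    "Charlie": {"Tutoring": 1, "Outreach": 2, "Social": 3},
    "Dave":    {"Tutoring": 2, "Outreach": 1, "Social": 3},
    "Eve":     {"Tutoring": 2, "Outreach": 1, "Social": 3},
}

# expand each committee into capacity-many "jobs" so people map to jobs 1:1
jobs = [c for c in committees for _ in range(capacity[c])]
people = list(prefs)
cost = np.array([[prefs[p][c] for c in jobs] for p in people])

rows, cols = linear_sum_assignment(cost)
assignment = {people[r]: jobs[c] for r, c in zip(rows, cols)}
total = cost[rows, cols].sum()  # optimal total cost: 8 for this example
```

To minimize the sum of squares instead, square the rankings before solving (e.g. pass `cost ** 2` to `linear_sum_assignment`).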
Using this algorithmic approach to solve the committee assignment problem worked really well for us! We made some slight modifications to the process — one of the committees hand-picked their members, and then we used the algorithm on the remaining 52 members and 7 committees. When running the program, we decided to minimize the sum of the squares of the costs rather than minimizing just the sum.
With 52 people and 7 committees, our implementation ran in less than half a second and gave us an assignment with 37 people getting their first choice, 13 getting their second choice, and 2 getting their third choice.
|
periodic sentence
Compare loose sentence.
Origin of periodic sentence
First recorded in 1895-1900
British Dictionary definitions for periodic sentence
periodic sentence
(rhetoric) a sentence in which the completion of the main clause is left to the end, thus creating an effect of suspense
Collins English Dictionary - Complete & Unabridged 2012 Digital Edition
|
This week we take a look at alcohol in the real world and what alcohol there might be in the world of The Tribe.
The legal age for drinking alcohol varies from country to country.
• Australians can purchase and drink alcohol at the age of 18 years.
• Canadians have to wait until they’re 18 or 19 years old, depending on the province.
• The French also have to be 18 years old but can drink wine with meals if they are accompanied by a parent or other adult.
• Germans have to be 16 years old to purchase and drink alcohol
• Italians do not have an age limit!
• The legal age in Japan to purchase and drink alcohol is 20.
• Those in Mexico can buy the booze and drink it when they’re 18 years old.
• The UK? Parents can buy alcohol for their child to drink with a meal, 16 year olds are allowed to buy their own wine with a meal but must be 18 years old before they can legally purchase or drink alcohol anywhere else.
• In the USA people have to wait until they are 21 years of age before they can get involved with alcohol.
Alcohol is a chemical compound whose formula is C2H5OH.
It comes from fermented fruit and vegetables combined with water, yeast and sugar. This mixture produces carbon dioxide and alcohol.
When the mixture reaches 15% alcohol content, the alcohol begins to kill off the yeast. Any drink that has a higher percentage than 15% has had extra alcohol added to it through the distillation method.
• Beers are generally 5%
• Spirits are mostly 40%
• Wines are usually 12%
• Fortified wines (like sherry) are around 18%
A standard alcoholic drink has around 70 calories.
Alcohol starts to flow to every part of the body within minutes of drinking it. The liver processes alcohol at the rate of about 1 standard drink per hour; any more alcohol consumed will result in that drunken feeling.
The alcohol races through the stomach if there isn’t any food in it to slow it down. The alcohol then hits the small intestine and gets picked up and distributed to the whole body by the blood. The build-up of alcohol slows down the reactions of the brain quite quickly.
Women are generally affected more quickly by alcohol for a number of reasons:
• Women usually have a lower proportion of body water than men do.
• Women tend to have lower body weight than men.
• Women’s bodies process alcohol at a slower rate than men’s do.
• There is a tendency for some women who are taking the contraceptive pill to get drunk more easily.
• A lot of women feel drunk faster just before their monthly period.
Alcohol has been around for a long, long time and bread and beer were the staple diet for many centuries.
It is estimated that wine (made from grapes) has been around for at least 10,000 years and that mead (honey and spiced wine) was present even before that.
Distilled spirits date back to around 800 BC in Japan and to the twelfth century in Europe.
Several different grains and fruits are used for spirits and beers, such as:
• Rice
• Molasses
• Honey
• Millet
• Barley
• Grape
• Corn
• Sweet potatoes
• Agave
• Maize
• May
• Cassava
• Persimmon
• Prehistoric nomads reputedly made beer before they learnt to make bread. Beer was used in Assyria, Egypt, China and Babylon. In the Bible, it is said that Noah had beer amongst his supplies for the Ark.
Egyptian texts from 1600 BC contain at least 100 prescriptions using beer. In 55 BC the Romans introduced beer into Northern Europe.
The great explorer Christopher Columbus discovered Indians making beer from black birch syrup and corn in the 1490s.
Queen Elizabeth I of England used to drink strong ale for her breakfast.
Crab claws, oyster shells and other such items were used as flavourings in ancient times.
King Gambrinus was the unofficial patron saint of beer, and many breweries displayed his statue in the 19th century.
There are some bottles of alcohol left that would have been hoarded, especially by Lex! And there are just enough for some bars to maintain a business of sorts. It seems like some tribes would start to make their own alcohol and might even use this as a way of trading.
It would be difficult to regulate who drinks this alcohol and it is doubtful there would be any age restrictions. Hopefully these tribal kids would use some common sense and make sure that young children stay away from it.
Lex was the one who was most affected by alcohol in The Tribe. Suffering from depression and high emotions, Lex turned to the demon drink. He had many rough times as a result of the alcohol and many of the Mall rats suffered from Lex’s fragility during this time. After being kicked out of the mall for his disruptive behaviour, Lex realised that he was acting like an idiot and managed to get his act together. There have been other times that he and some of the tribal gang have had brushes with alcohol but nobody else suffered as much as he did or got hooked on it.
We will have to wait and see if Lex manages to stay clean or if the lure of the liquor proves to be too much for him.
|
Is there any disease that has affected our society more than cancer?
Most people know at least one person who has fallen victim to cancer.
Many have bought into the claim that the only way to fight this disease is via extremely toxic drugs that kill the immune system. We have all heard the maxim “the cure is worse than the disease,” and the vast majority of people believe this is the only option. They think nothing else can treat this disease besides a toxic cocktail. However, in a recent article, we discuss the long history of natural cancer cures that proves this idea is not factual.
That article focuses on cures from decades past; however, there have been many recent discoveries that have shown an amazing ability to completely kill cancer cells. This article will focus on one such newer cure: the Thunder God Vine.
The ancient Chinese medicinal herb called Lei gong teng (Tripterygium wilfordii), or as we call it in the West, Thunder God Vine, has been used in Chinese medicine for millennia because of its wide variety of benefits. However, it was the claims about its cancer-fighting prowess that led the University of Minnesota’s Masonic Cancer Center to conduct a multi-faceted cancer study on the herb.
The results of the study were astounding to say the least. Ashok Saluja, who was the study leader and is the vice chairman of research at the University of Minnesota’s Masonic Cancer Center states “You could see that every day you looked at those mice, the tumor was decreasing and decreasing, and then just gone.”
The study found that after 40 days, the herb left the mice completely cancer free! This was because of a compound inside Thunder God Vine called triptolide. Jun O. Liu, a professor of pharmacology and molecular sciences at Johns Hopkins, explains: “triptolide has been shown to block the growth of all 60 U.S. National Cancer Institute cell lines at very low doses, and even causes some of those cell lines to die.” He believes it is so effective because of its ability to impede the cancer cell’s ability to produce new RNA.
So, great: all we have to do is extract the triptolide from Thunder God Vine and use the extract in human trials, right?
If only that was how it works.
The sad truth about the pharmaceutical industry is that its goal is not to find cures for any disease; its goal is to make the most profit (and it is really good at it). This creates an environment where you need as many return customers as possible, which makes actually trying to cure anything counterproductive. In regards to making individual drugs generate the most profit, the name of the game is getting your drugs patented. You can only patent something if you are its creator, and it is hard to argue that the pharmaceutical industry is the creator of triptolide (or any compound found in nature). So, what they do is take compounds that have curing capabilities, like triptolide, and synthesize them. During this process they change the chemical makeup or characteristics just enough to be able to say they created a new compound.
In this particular case, they synthesized triptolide and changed enough of its chemical structure to make a new compound that they called Minnelide. They are now adding this to chemo treatment and have started trials on this synthetic, slightly different alternative to a natural compound that annihilates cancer cells.
This process is great for the pharmaceutical industry because it creates a massive amount of money for them while creating return customers; because if we just add Minnelide to the extremely toxic chemo cocktail, people’s immune systems will continue to be destroyed by the treatment that is worse than the disease. Not to mention, history has shown us that when the pharmaceutical industry tries to synthesize compounds that already cure without side effects so it can make money, it always leads to a product with side effects that is less effective than its natural counterpart.
Sure, Minnelide might be the exception and not cause side effects while also being effective. However, why not just extract the compound that already works with no side effects? It’s safer and cheaper. Perhaps if more people know, more people will be helped. After all, that’s what we strive for at Unveiling Knowledge.
|
50 years later, the poverty of the “War on Poverty” is evident
The Cato Institute’s Dan Mitchell has a pretty good post at his blog about the legacy of Lyndon Johnson’s so-called “War on Poverty,” and the verdict is not good. Since the Industrial Revolution, wealth and prosperity, in terms of disposable income and access to essential and luxury goods, had been increasing by leaps and bounds.
The vast majority of those gains arose from America’s relatively high degree of economic freedom (though by no means totally laissez-faire), which drew millions and millions of immigrants from faraway lands to the country of plenty. As the market progressed and wealth increased, previously impoverished immigrants, as well as Blacks migrating north, were able to increase their incomes by underbidding and outcompeting their competitors.
However, as a nation becomes richer and richer, it tends to forget how it got that way after a few generations removed from abject poverty. So naturally people begin to ask the question, “How, in a country of such abundance as ours, is there material suffering for too many people?” Let’s leave aside the fact that even in societies with a largely free economy, governments still put up huge impediments to the poor increasing their wealth and opportunity, through such cruel anti-market mechanisms as occupational licensing, minimum wages, rent control and heavy land-use taxes/regulations, government monopoly schooling, inflating the currency, eminent domain and asset forfeiture, and many more.
A question like “why is there poverty?” is the wrong question to ask. What we should be asking is “why is there wealth?” Why is it that material prosperity came into being only relatively recently, for the West and increasingly for other countries like Chile, Estonia, and South Korea?
The reason is capitalism, or the institution of separation of economy and state, where all people are free to engage in whatever trade they want and to reap whatever rewards come from customers voluntarily paying them for their services. And forming the foundation of that system is the creation of a legal regime of protected and tradable private property rights.
The Peruvian economist Hernando de Soto has argued persuasively and forcefully that it is institutions that build wealth that we should look at, not just the wealth that happens to exist at any given time. “What makes people interested in the rule of law, the first thing that they understand… is that everybody on this earth lives on a plot of land,” he writes. This is the basic realization that all of us have the individual right to call that which is truly ours, ours.
Returning to the issue of welfare statism and the War on Poverty: since all wealth that the state redistributes is money taken by force from the earnings of others, this process necessarily reduces how much people are going to work and the quality of the work they are going to do, since their rewards no longer correspond directly to the amount of value they produce for society (their customers). So the welfare state becomes a self-fulfilling prophecy in that it creates a direct incentive to feed off the fruits of one’s neighbors as opposed to being self-reliant and self-sustaining.
The result has left us in a very real situation where the federal government has created over 120 separate anti-poverty programs, which as a 2013 Cato Institute study shows, when coupled with state and local benefits, in the majority of states a recipient of the average number of welfare benefits actually has a higher income than a starting level minimum wage job.
Compounding the problem, since welfare benefits are not taxed but wages are, this makes it even more advantageous to go on the dole than to work.
Such a system as described above has had terrible results for liberty, prosperity, and social cohesion for all Americans, but especially Blacks. Pre-LBJ, they had marriage rates higher than whites, unemployment rates relatively on par with whites, and children born out of wedlock were almost unheard of.
After the War on Poverty, we see the immense breakdown of Black America as well as the Hispanic community, too.
Ask yourselves: if the situation were reversed and we had had a War on Poverty starting in the late 19th century with poverty rates dropping every year, only to have the policy repealed in the mid-1960s and then see the calamitous effects that we’re now so sadly familiar with, wouldn’t you call that a failure?
So why do we continue this failed policy of wealth destruction, dependency, and mass community dissolution in the name of “helping the poor”?
If we want to truly help the poor, there has been no greater anti-poverty program than economic liberty and free trade. Period dot.
UPDATE: This is not to look askance at the terrible policy of the Drug War and “Law & Order” extremism, begun by the conservative administrations of Richard Nixon and Ronald Reagan, which has also had an extremely deleterious effect in aggravating human suffering and tyranny in this country.
|
Evolution of Government 1536-1547 Part 3
Who is responsible for identifying a 'Tudor revolution in government'?
G.R. Elton
1 of 61
When does Elton say that there was a Tudor revolution in government?
From 1532 to 1540 when Cromwell was Henry's chief minister.
2 of 61
What is the gist of the Tudor revolution in government?
That changes in these years marked the change from medieval to modern forms of government. Elton attached great importance to the role of Cromwell.
3 of 61
What four parts can Elton's theory of a revolution in government be broken down into?
Structure and organisation of central government, role of Parliament and the scope and authority of statute law, relationship between Church and State, extension of authority in regions
4 of 61
Tudor Revolution in Government - Central Government
'Administrative revolution' with radical change in the structure and organisation of central government, especially the reorganisation of finance and the creation of the Privy Council.
5 of 61
What was the result of changes in the structure and organisation of central government?
Government by the King was replaced by government under the King
6 of 61
Tudor Revolution in Government - Parliament and Statute law
Concept of national sovereignty: by using Parliament to enforce the Reformation, the Crown emphasised that nothing lay outside parliamentary statute.
7 of 61
What was the result of changes in Parliament?
King and parliament had been replaced by king-in-parliament
8 of 61
Tudor Revolution in Government - Church and State
By bringing Church under control of king, Royal Supremacy initiated jurisdictional revolution in relationship between Church and State. Independence of Church removed and balance of power favouring state.
9 of 61
What was the result of changes in the relationship between Church and State?
Church and State had been replaced by Church in State
10 of 61
Tudor Revolution in Government - Extension of royal authority in regions
By bringing outlying regions under control, Cromwell aimed to create nation that was jurisdictional entity. Gave more authority and purpose to Council of North and reformed government of Wales by empowering the Council of Wales and Marches.
11 of 61
What was a short lived extension of royal authority in the regions by Cromwell?
Council of the West
12 of 61
What was the result of the extension of royal authority in the regions?
A fragmented state was replaced by a unitary state
13 of 61
What does Elton argue about the developments in government under Cromwell?
That these developments were one of the two or three major turning points in the history of British politics
14 of 61
What do some historians that do not support Elton's theory say?
They protest against the term 'revolution' which suggests far-reaching, radical and innovative changes. Prefer use of evolution as changes were measured, piecemeal and conservative.
15 of 61
When was the last great marcher lord of the Welsh marches executed?
Duke of Buckingham, 1521
16 of 61
What were the Welsh marches?
A patchwork of local lordships that had been gradually taken under royal control
17 of 61
Who were the main landowners in the South West in 1536?
Duchy of Cornwall under the Crown. Main landholders were Courtenays, Earls of Devon and Marquis of Exeter
18 of 61
Who dominated East Anglia in 1536?
Howards, Dukes of Norfolk and Earls of Surrey
19 of 61
Which areas constituted the northern marches in 1536?
Counties of Northumberland, Cumberland and Westmorland along with Liberty of Redesdale.
20 of 61
What did each of the three northern marches have in 1536?
Wardens responsible for defence and law/order
21 of 61
Who traditionally controlled the East and Middle marches?
Earl of Northumberland (Percy family)
22 of 61
Who, in 1536, was traditionally the Warden of the West March?
Usually a Dacre or Neville
23 of 61
Who were the dominant family in Lancashire?
Stanleys - Earls of Derby
24 of 61
When did Henry add to the established families in the area (Earls of Derby and Earls of Shrewsbury), and how?
In 1525 created his friend Henry Clifford as Earl of Cumberland.
25 of 61
How were Cromwell's changes in the hope of a unitary state encouraged and enabled?
By changes already taking place in the structure and attitudes of the 'knightly estate' and its relationship with the crown.
26 of 61
Without a police force or a standing army, the co-operation of whom was essential in enforcing royal authority?
The nobility and gentry - but also had to be aware of threat they posed if 'over mighty'
27 of 61
Like his father, what did Henry VIII want to show to the nobility?
That the route to power and privilege lay in service to the King and obedience to him.
28 of 61
What policy of his father's did Henry VIII continue regarding service and reward?
Promoting talented servants regardless of origins, and ennobling men of gentry origins (and below - Cromwell)
29 of 61
Although he was more generous in distributing lands and titles than his father, what kind of men were his new creations?
New men like Charles Brandon. Some, such as Edward Seymour and William Parr, benefited from the king's marriages, but most were elevated for service given or expected.
30 of 61
What happened in the case of suspicion of disloyalty?
Intimidation and in some cases death
31 of 61
What was Henry's approach to service and reward supposed to demonstrate?
That any loyalty and service were more important than birth, inheritance and tradition
32 of 61
Who suppressed the northern risings of 1536-7?
New men like Henry Clifford and Charles Brandon, alongside the heads of established families who had seen the benefits of loyalty such as the Earls of Derby.
33 of 61
Why were the Stanleys created Earls of Derby, becoming a prominent family in Lancashire and Cheshire?
Due to support for Henry VII at the Battle of Bosworth
34 of 61
What is significant in 1537 regarding the men of the north?
No difficulty in finding able men willing to serve on reconstructed Council of North and most came from regional nobility and gentry.
35 of 61
What converted some supporters of the Pilgrimage of Grace and other risings such as Sir Thomas Tempest into loyal administrators?
Demise of patron (Earl of Northumberland), concerns about the danger of popular unrest and the benefits of royal service
36 of 61
What did Cromwell do in 1538 regarding magnate power?
Attack on the leading magnate family in the SW, the Courtenays, which culminated in the execution of the Marquis of Exeter (Henry Courtenay).
37 of 61
What happened to two members of the Privy Chamber who held estates in Devon and Cornwall?
Sir Edward Neville and Sir Nicholas Carew were tried and executed.
38 of 61
What had Carew and Neville done?
They had supported Aragon but when fellow members of faction rebelled in 1536 had remained loyal to King.
39 of 61
Despite the lack of action by Carew and Neville who likely ordered their execution?
The King due to their association and family links with Reginald Pole whose brother Lord Montague was also tried and executed.
40 of 61
What was the threat from men such as Exeter?
They had a territorial base from which a challenge could be mounted
41 of 61
Who were the Poles?
Yorkists with a claim to the throne
42 of 61
Who was Reginald Pole?
A Catholic who had fled to Rome and organised a propaganda campaign against Henry and all his works.
43 of 61
What happened in 1547 to Norfolk?
He had been sent to the Tower in December 1546 and only escaped with his life because Henry died before signing the death warrant; his son was tried and executed in 1547.
44 of 61
What did the future hold for the Duke of Norfolk after Henry's death?
He had been stripped of land and titles and remained in prison until released and restored by Mary in 1553.
45 of 61
How had Henry altered the balance of power?
It was tilted in favour of the crown: noble rivalries and factional actions increasingly operated within boundaries set by the king, and nobles competed for royal favour.
46 of 61
What did David Loades say about the majority of peers in 1547?
"the majority of peers were of the King's own creation, and constituted a service nobility."
47 of 61
What remained a benefit of retaining a local leader?
They had oversight and the capacity to bring different offices and sections together. In the north and the Welsh marches, Regional Councils were used for this reason; elsewhere there were no formal arrangements, so a powerful local figurehead was used.
48 of 61
In which areas was the need for a local leader most apparent?
Local defence and law and order in times of emergency
49 of 61
In 1539 when fears of French invasion were rife what did Henry order and why?
Establishment of Council for the West as counties of Devon and Cornwall particularly vulnerable to French or Spanish attack and Cornwall had reputation for rebellion.
50 of 61
What did Henry also do in 1539?
Appointed number of Lord Lieutenants in West and elsewhere with responsibility for raising local forces.
51 of 61
When had the position of Vice-Admiral with jurisdiction over private shipping in the area been established, specifically in the west?
52 of 61
What happened regarding developments in the West in 1540?
Council and positions of Lord Lieutenant lapsed in 1540 but Vice-Admiral role continued
53 of 61
Who were most of the men appointed into those short term roles in the West?
Largely of gentry status rather than the greater nobility.
54 of 61
Who replaced the Courtenays in the south west and why?
Lord John Russell (later Earl of Bedford) was President of the Council of the West in 1539, and the family's influence began to grow from this time.
55 of 61
What was the role of Lord John Russell from 1539 onwards?
In 1545 he was given a special commission of array to manage defence in the four SW counties of Cornwall, Devon, Dorset and Somerset, and after managing the suppression of the Western Rebellion in 1549 he became Lord Lieutenant of all four counties in 1551.
56 of 61
When were arrangements similar to those made with Lord John Russell adopted throughout southern England?
In 1549, due to widespread unrest and protest 'camps' culminating in the Western Rebellion and Kett's Rebellion. A series of appointments followed in 1549 and 1550.
57 of 61
What happened to some of the appointments made in 1549 and 1550?
Some of them lapsed thereafter, but a more systematic approach was gradually adopted.
58 of 61
From around 1549 what happened regarding Lord Lieutenants?
A more systematic approach, with a system of Lord Lieutenants (usually noblemen with regional connections), began to become established across the country.
59 of 61
When did the system of Lord Lieutenants develop fully?
1580s but by 1553 a start had been made in creation of effective national system of civil and military administration
60 of 61
What progress was made in Henry's reign, in the words of David Loades?
"this was the process that turned the provincial magnate of the fifteenth century into the court-based politicians of the Elizabethan period"
61 of 61
|
Russian Scientists Are Working On A Plan To Deflect Asteroids
"People's lives are at stake," Russian scientist Anatoly Perminov told the Russian radio station Golos Rossii. "We should pay several hundred million dollars and build a system that would allow us to prevent a collision, rather than sit and wait for it to happen and kill hundreds of thousands of people."
Perminov was talking about the Russian scientists' plan to spend several hundred million dollars to design and implement a system capable of deflecting large meteors out of earth's path.
According to NASA, there is a slim chance that the Apophis asteroid might hit earth in 2036.
Details of the plan still need to be worked out, but Perminov has invited NASA, the ESA and other space agencies to participate.
Matt also calculated that painting the meteor or covering it with mirrors would change the way it absorbs heat energy enough to steer it out of earth's path in 20 years.
|
The Great Japanese: 30 Stories
Product Description
The Great Japanese: 30 Stories introduces 30 stories of famous Japanese personalities that give you insight on Japanese culture, social issues, the Japanese way of thinking and values. Instead of being a regular textbook with only the purpose to improve your Japanese language ability, this book focuses on cross-cultural communication while also providing valuable resources for the JLPT. Aimed at upper intermediate and advanced learners, this book includes 256 grammar structures and 654 words necessary for the N2.
Recommended for you if:
• You are an upper intermediate or advanced learner (aiming for the JLPT N2)
• You want to learn actual Japanese, not textbook Japanese.
• You want to learn more about Japanese culture, social issues, etc.
Additional information:
• Pages: 208
ISBN/UPC: 9784874247020
VENDOR: Three A Network
|
Liberty and Property: Social history of western political thought on the Constitution of Namibia
In trying to examine and understand the history of political philosophy and its role in shaping the world political systems, Hobbes, Locke and Rousseau left an incredible work for us to ponder on, expand our thoughts and engage on moral and ethical issues of the world.
Today, the values of life, liberty and property, which Locke so passionately promoted, are deeply entrenched in the constitutions around many countries of the world. Namibia is one such country. The Preamble of the Constitution of the Republic of Namibia puts emphasis on the recognition of the “…right of the individual to life, liberty and the pursuit of happiness.” We have also entrenched the property clause in our Constitution.
The State of Nature and the State of War
For Locke, the State of Nature, the actual natural condition of mankind, is a perfect and complete liberty. Locke argued that human beings as human beings, separate from all government or society, have certain rights which should never be given up or taken away. Contrary to Thomas Hobbes (1588-1679), Locke does not believe that a person can, by consent or contract, enslave himself to someone else or place himself under the arbitrary power of another.
In Leviathan (1651), Hobbes builds his main thesis around the idea that morality is the same as the law. For him, our behaviour and actions are governed by the law and not our conscience. He essentially argued that in the state of nature, no laws exist. It is more like every man for himself and God for all. It is no secret that human beings can be selfish by nature. The desire to accumulate more, and in the process alienate others, is a reality.
The fundamental flaw of Hobbes' theory is the failure to recognise that moral obligations and duties are reciprocal. The basic dictum is much more one of reciprocity: do unto others as you would expect them to do unto you. Although Locke views the state of nature as a state of perfect and complete liberty, he cautions that this does not mean carte blanche to commit crimes. Even if there are no laws, the state of nature is not a state without morality.
So, in a nutshell, the state of nature is not the same as the state of war, as asserted by Hobbes. But the interesting dimension is that the state of war is not necessarily ruled out. A state of nature can degenerate into a state of war, specifically a state of war over property disputes.
The above could well find a meaning in Namibia’s history of the liberation struggle. Kaptein Hendrik Witbooi and Chief Samuel Maharero took up arms during the struggle for national resistance to engage the German Imperial Government over land dispossession and protection treaties. The end result was the infamous Genocide in 1904. The primary aphorism of the liberation movements, such as Swanu and Swapo was all about the land. The struggle was primarily about the illegal dispossession of the land and other concomitant natural rights. Today, the Property Clause is the thorn in the Namibian Constitution.
It is sad that, in our hastiness for independence, the founding fathers of the Constitution did not thoroughly discuss the inclusion of the property rights clause, which was influenced mainly by western philosophy as represented by its chief proponents, the Western Contact Group.
African philosophy in the context of property distribution was seen as inferior. In this instance, Swapo sold us out. The ANC was also so blind to carbon-copy our property clause in their constitution.
The rise of the Affirmative Repositioning movement in Namibia in November 2014, under the populist Job Shipululo Amupanda, lends credibility to this argument and also points to the failure to grasp the full meanings of equality and justice as a fairness principle, as advanced by John Rawls in A Theory of Justice (1971) and, to a certain extent, Robert Nozick in Anarchy, State, and Utopia (1974). Inequitable distribution of resources can also lead to protracted conflicts. This happened in the Niger Delta region in Nigeria in the early 1990s over tensions between foreign oil corporations and a number of the Niger Delta's ethnic minorities.
The critical fact is that since in a state of nature there is no civil authority to which men can appeal to, complicated by the fact that the law of nature allows them to defend their own lives at all costs necessary, once war begins it is unlikely to stop. This is one of the reasons put forward by Locke that men have to abandon the state of nature and reach out to each other and form a civil government.
*Henny H. Seibeb is the co-editor of the book, “The Politics of Apologetics” published in 2010 in Windhoek.
|
Dutch is a Germanic language that evolved in Europe, and is spoken in Africa, Europe, and the United States.
Pronoun Cases
             Singular                                             Plural
             1st    2nd    3rd Masc.    3rd Fem.    3rd Neut.     1st     2nd    3rd
Nominative   ik     du     hij          zij         het           wij     jij    zij
Accusative   mi     di     hem/hen/'n   haer/se     het/'t        ons     u      hem/hen/'n
Dative                     hem          haer
Genitive     mijn   dijn   zijn         haar        zijn          onser   uwer   haer
|
Flexor Tendon Injuries
Flexor tendons in the hand and forearm:
The muscles that bend or flex the fingers are called flexor muscles. These flexor muscles move the fingers through cord-like extensions called tendons, which connect the muscles to bone. The flexor muscles start from the elbow and forearm, turn into tendons just past the middle of the forearm, and attach onto the bones of the fingers (see Figures 1 and 2). In the finger, the tendons pass through fibrous rings called pulleys, which guide the tendons and keep them close to the bones, enabling the tendons to move the joints much more effectively.
Deep cuts on the palm side of the wrist, hand, or fingers can injure the flexor tendons and nearby nerves. The injury may appear simple on the outside, but is actually much more complex on the inside. When a flexor tendon is cut completely, it acts like a rubber band, and its cut ends pull away from each other. A tendon that has been partially cut may still allow the fingers to bend, but can cause pain or catching, and may eventually tear all the way through. When tendons are cut completely through, the finger joints cannot bend on their own.
Tendons are living tissue. If the cut ends of a tendon are brought back together, healing can occur. Because of the separation that occurs after a complete flexor tendon laceration, the tendon cannot heal without surgery.
It is important to preserve the pulleys in the finger, and there is very little space between the tendon and pulley in which to perform a repair. Nearby nerves may need to be repaired as well. After surgery, the injured area can either be protected from movement or started on a very specific limited-movement program. Your doctor will prescribe hand therapy for you after surgery.
If the repaired tendon pulls apart, this is a major problem and results in more surgery and permanent loss of finger function. After six weeks, the fingers are allowed to move slowly and without resistance. Healing takes place during the first three months after the repair.
In most cases, full or nearly full motion of the injured finger will be regained after surgery and appropriate therapy. If good motion is not obtained after appropriate surgery and therapy, it means that the repaired tendon has either pulled apart or is caught in scar tissue. It may be necessary to perform an ultrasound or MRI to evaluate the tendon to see which of these has occurred. In some cases, a scarred tendon can be freed with additional therapy. If good motion is not obtained with therapy, then surgical exploration may be necessary. If the tendon has pulled apart, it is often necessary to perform a two-stage surgery where an artificial tendon is implanted, followed later by tendon grafting. If the tendon is caught in scar tissue, this can be often freed by a surgery known as a tenolysis. Even with appropriate surgery and therapy, approximately 15% of flexor tendons will become caught in scar tissue and require a tenolysis.
Therapy After Surgery
In most cases, a program of limited motion will start seven to ten days after surgery. This will involve the use of a specialized splint to allow the fingers to be passively bent with rubber bands; a small amount of active extension of the fingers will be allowed within the splint. This program helps to minimize the chance that the tendon will become caught in scar tissue. The tendon repair can pull apart if the hand is used too soon or therapy guidelines are not followed. In addition to regaining motion of the finger after tendon injury, therapy is also important to soften scars and build grip strength.
In summary, flexor tendon repairs require surgical treatment and significant follow-up therapy. To obtain the best possible result, the patient must cooperate with the therapist and understand the goals of therapy. With good compliance, an excellent result can be obtained after flexor tendon repair.
|
Black Pain: Slavery & The Traumatic Roots Of Modern Gynecology
sad African American woman
Black people have contributed greatly to the advancement of medicine, oftentimes at the risk of our well-being. Particularly, the Black woman. Henrietta Lacks is a prime example. After being treated for a cervical tumor in 1951, cells from Lacks' cervix were taken without her permission and used for research. Her cells became known as the HeLa immortal cell line, the first able to replicate infinitely, and are the basis for medical breakthroughs like the HPV vaccine.
READ: Is Black Pain Treated Less?
This left an indelible mark on an existing open wound between Black people and healthcare in the United States. J. Marion Sims, known as the "father of modern gynecology," used enslaved women to develop treatment for vesico-vaginal fistula (an abnormal tract extending between the bladder and the vagina that allows the continuous involuntary discharge of urine into the vaginal vault). Sims didn't anesthetize these women during these excruciatingly painful surgeries.
His belief was that Black women, unlike upper class White women, could endure the pain at a higher level. During an 1857 lecture he said the surgeries were "not painful enough to justify the trouble" of anesthesia. Sims, a slave owner, set the precedent for a practice that still continues to this day.
Dr. Vanessa Northington Gamble is a physician and scholar widely regarded for her studies in medical humanities. An NPR article featuring Dr. Gamble points out these medical atrocities:
“There was a belief at the time that black people did not feel pain in the same way. They were not vulnerable to pain, especially black women. So that they had suffered pain in other parts of their lives and their pain was ignored.”
Ignored is often how Black people feel in the medical sphere. You might’ve heard your parents say that you should always dress up in your best threads when you go to the hospital. This systemic belief comes from a history of being treated as less than human in America.
READ: Longer Waits, Higher Costs: Why Are Black Men With Prostate Cancer Getting Inferior Care?
“These women were property. These women could not consent. These women also had value to the slaveholders for production and reproduction – how much work they could do in the field, how many enslaved children they could produce. And by having these fistulas, they could not continue with childbirth and also have difficulty working,” states Dr. Gamble.
While Black women aren’t enduring the same level of mistreatment that Sims’ slave patients, Lucy, Betsey and Anarcha received, there is a growing amount of evidence that Blacks are still….
|
The build and release process is the backbone infrastructure of software development. While it may not be the coolest or hottest part of the software development process, it is a very necessary one.
Why, you might ask?
Software delivery is about reducing risk and waste, ROI on new features, and the flow of value to our customers. Yes, we have heard these definitions before, but how do they really relate to our everyday lives?
I would like to provide an analogy that might help explain why a software delivery pipeline is important and why the incorrect infrastructure (bad tools and practices) can create a negative impact on our software delivery.
We can think of the build and deployment process like a power plant pipeline (powered by solar, wind, gas, oil, etc.). The source of the natural raw material (requirements) is captured from a set of smaller stations (developers) and then deposited into a centralized station (source control). It can then be packaged and regulated for distribution to other sub-stations.
The sub-stations further test and regulate the distribution and frequency (builds, and deployments) to other local-stations (QA and Staging) and then deliver to your home or office (production).
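The staged flow described above (source control feeding builds that are promoted through QA and Staging before reaching production) can be sketched as a small script. This is an illustration only; the stage names and echo commands are placeholders for whatever build and deploy tooling you actually use:

```shell
#!/bin/sh
# Minimal sketch of the staged flow described above. Stage names mirror
# the power-plant analogy; the echo commands stand in for real build and
# deploy tooling (assumptions, not a specific CI product).
set -e                               # any failing stage stops the whole flow

run_stage() {                        # run one stage, labelling its output
    stage_name=$1; shift
    echo "== $stage_name =="
    "$@"
}

run_pipeline() {
    run_stage "build (centralized station)" \
        echo "package one artifact from source control"
    run_stage "qa (sub-station)" \
        echo "deploy the artifact to QA and run integration tests"
    run_stage "staging (sub-station)" \
        echo "deploy the same artifact to Staging and run smoke tests"
    run_stage "production (home/office)" \
        echo "deliver the artifact to production"
}

run_pipeline
```

The point of the sketch is the shape, not the commands: one artifact flows through the same ordered gates every time, and a failure at any sub-station stops delivery before it reaches production.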
Now imagine that you needed to deliver power from a power station (environment) using old technology and infrastructure. As demand increases from either a heat wave or a cold snap, the pipeline and delivery system begin to fail, causing brownouts (rolling outages) and blackouts (complete outages). The stations and pipelines that connect the entire grid (requirements to production) may not be designed to handle the increase in power and demand that customers are asking for. The picture below tells the story of an ad-hoc pipeline that may not deliver the reliable, repeatable, or reusable infrastructure we need. These types of ad-hoc pipelines are also impossible to scale or to accommodate the company's and customers' requests.
As the delivery of services fail, the customer’s trust of a dependable infrastructure quickly deteriorates. The cost to fix the infrastructure increases along with the cost of damage control and non-productive efforts.
In extreme events, like a complete collapse of a delivery system, the “Firefighting” or “Triage” efforts to keep the system functional in some fashion take us all away from productive initiatives and tasks.
Personally, I can't think of a better way to spend our time and effort than on preventing failures and improving the delivery system, as opposed to taking a “patch and pray” approach and waiting for the next event to happen.
My question when I see these older build and deployment infrastructures that are either built on 10 year old technology, or have been put into place ad-hoc with no real architecture or planning is, “Why haven’t you invested the time and resources to address and really fix the software delivery infrastructure that cause constraints and cost?”
The number one answer to my question is, “We don’t have time to fix it.”
The number two answer is, “The build and deployment system is not a priority, we can build it on our laptop and copy the files over to production.”
Both of these replies are very poor excuses!
From my experience, teams and individuals soon forget the impact of the last event and its cost to customer loyalty. What's worse is that the team never gets out of these cycles of firefighting and triage mode, creating a vacuum within the team's efforts that is hard to measure.
In addition, I have been at places where the software delivery pipeline does not get the visibility, investment, and resources to create a new system or improve the current one: to handle peak loads, or to deliver the power/features to the customer in a more efficient and effective manner.
If the delivery pipeline is not developed and designed correctly, then risk, constraints, and complexity are introduced, which translate into additional cost and lost opportunities.
By having a software delivery system that is functioning at peak performance and can handle peak periods (the holiday season for example), we are able to deliver features to our customers at the speed and quality we need to.
The software delivery pipeline may not be the most impressive part of the software development cycle, but it is the critical infrastructure and grid that delivers the results and enables all of us to keep the lights on!
|
• Observation - Something that you watch and study. Perception - How you react or feel about something you see.
• Reality can be observed, touched, smelled, examined, researched, proved. Perception is a belief, you think it is there, but can’t find it.
• Observations are objective. (The event.) Perceptions are subjective (The emotion.)
• Well, observation is "seeing something." Perception is "how you react when you see something." Does that help any? It is a tough one to explain.
• Observation is either an activity of a living being (such as a human), consisting of receiving knowledge of the outside world through the senses, or the recording of data using scientific instruments. The term may also refer to any datum collected during this activity. Perception: In psychology and the cognitive sciences, perception is the process of attaining awareness or understanding of sensory information. It is a task far more complex than imagined.
• While observations are what you observe, perceptions are what your thoughts and feelings are regarding what you observe.
• Observation → WHAT is seen (e.g. The same star is seen by Jack and Jill.) Perception → HOW it is interpreted (e.g. Jack thinks he is looking at a star. Jill thinks she is looking at a light from an aircraft.) Something can be observed by many people. But each person may perceive it in a different way. Perception is how an individual's brain processes an observation.
• 6-21-2017 You observe a guy drinking water, and he twitches. You perceive that he got something besides water in his mouth. Perception is knowledge beyond what you can see.
|
Jibburs are small herbivores native to JGF-13. They are known for their high-pitched mumbling and squeals. They are also known to jump very high.
Jibburs resemble rabbits somewhat, having large hind legs and large buckteeth. However, that is where the similarities end: they are virtually hairless and have four sets of buckteeth set in each side of the mouth.
In between these giant teeth are small molars used to chew up plants. Jibburs are about the size of a small Earth dog and are extremely frail. Their fifth limb ends in a three-fingered hand and is located on the chest. They usually use this limb to grab food or to grab onto mates during mating.
They have 2 pairs of black, small eyes on the sides of their heads. Their vision is quite poor and they rely mostly on scent and hearing. They have very small ears on the back sides of their heads. They have no tails and have stumpy claws on their 4 legs. Jibburs have slit nostrils on the front of their heads.
Jibburs are quite social creatures, living in small groups in intricate burrows. These burrows are usually home to their nests. The males of the species seem to look for food to bring back to the nests while the females take care of the young. They seem to lay hard-shelled eggs and care for their young. Once the young mature, they usually replace older group members. Groups are usually commanded by the strongest male, who tells the group when to migrate, make new nests, etc. The weakest members keep watch for predators and alert the group to trouble.
Jibburs seem to have no natural defenses; however, they can kick predators their own size with their back legs. This usually just makes the predator angrier.
Jibburs usually feed on small plants and berries, and the occasional nut. They also often eat small insects.
The Jibburs mostly reside in the forests and plains of JGF-13, however some live in the deserts.
|
West Nile and Zika virus safety tips.
• Drain standing water around your home. Mosquitoes use stagnant water to lay their eggs, so there is a potential breeding ground wherever still water is found. Empty buckets, cans, pool covers, abandoned swimming pools, drains, flower pots, and old tires. Unclog gutters and keep bird baths and pet water dishes clean.
• Apply insect repellent on exposed skin, particularly below the knee, and clothing when you go outdoors. Products that contain DEET, picaridin, or oil of lemon eucalyptus will reduce your chance of attracting mosquitoes.
• Use mosquito dunks in fixed, non-drainable objects that can hold even small amounts of water.
• Wear light long sleeve shirts, pants and socks when outdoors.
• Avoid outdoor activity between dusk and dawn when West Nile-bearing mosquitoes are most active. Zika-carrying mosquitoes are active all day.
• Fix torn or ripped door and window screens to help keep mosquitoes outside.
|
There is an art to making a presentation that people will remember. And it is critical to understand
each step so that you are a worthwhile presenter, so that people will want to listen to you again.
#1) The first step is to know your material. Having an in-depth knowledge of what you are talking about
gives you a conscious and subconscious level to work from. This is so critical, because when you
are possibly caught off guard when making a presentation, you can bounce back with ease. This
working knowledge is so important… because the fact remains that speaking is still the #1 fear that
people have. And fear can control and cripple you. You probably already know this from past experience.
#2) Open your presentation in a memorable way. Open with a BOLD statement, or a question that
gets your audience engaged immediately. Open with a time or location statement so that people can
get a picture in their mind. Remember: the brain thinks in pictures! As a professor of Marketing
and Advertising, I tell this to my students all the time.
#3) Now since the brain DOES think in pictures, continue with a story that weaves in and out of
your presentation. No matter what their age, people love stories. And it is up to you to make your
story compelling.
#4) Every 4 to 7 minutes make a change to your stance, your tonality or your story. Keep your
audience engaged and intrigued. Thanks to so much media stimuli, we now have an
attention span of less than 3 minutes before our minds begin to wander!
#5) Use PowerPoint only to enhance your presentation. Remember, people are there to listen to you
and not to see a PowerPoint! Do not let it be the highlight of your presentation. Let it be a complement,
not a dominant factor. This can make or break your presentation.
#6) Practice, practice, practice your presentation. Know it so that you can extend or delete at any
point in time. Frequently you will find a last minute change being made, and you have to have the
confidence to adapt to this change with little notice. This is easier said than done. Practice at
least 4 times in front of a mirror, a pet or a person. The first two give you very little negative feedback
but also allow you to feel comfortable with your material.
#7) Finally have your presentation written in brief and highlighted format. Do not have the entire
speech written out. Use bold and concise sentences so that you can easily come back and pick up
from where you are speaking. Remember you have practiced your presentation, you know your
material and you believe in yourself!
As a presentation skills coach, I know how important it is to understand each of these 7 critical
steps. Strong Incentives is a professional presentations company that will take you from “confident
to compelling!”
And you are worth it!
Eileen Strong
Strong Incentives…Powerful Results!
|
Public Health England and the British Infection Association recommend that topical antibiotics are reserved only for treatment of very localised lesions because fusidic acid is an antibiotic that is also used systemically. There are concerns that widespread use of topical fusidic acid will lead to increased resistance, rendering systemic fusidic acid (used for severe staphylococcal infections such as osteomyelitis or systemic MRSA) ineffective. If a topical antibiotic is used, a short course (such as 5 days) reduces exposure and the risk of resistance. Since few agents are effective against MRSA, mupirocin should be reserved for such cases.
Public Health England and the British Infection Association recommend flucloxacillin for first-line treatment of impetigo because it is a narrow-spectrum antibiotic that is effective against Gram positive organisms, including beta-lactamase producing Staphylococcus aureus, and it demonstrates suitable pharmacokinetics, with good diffusion into skin and soft tissues. Clarithromycin is recommended for people with penicillin allergy because it is also active against most staphylococcal and streptococcal species.
Koning S, Verhagen AP, van Suijlekom-Smit LWA, Morris AD, Butler C, van der Wouden JC. Interventions for impetigo. Cochrane Database of Systematic Reviews. 2003. Issue 2.
http://www.mrw.interscience.wiley.com/cochrane/clsysrev/articles/CD003261/frame.html Accessed 23.09.14. RATIONALE: Many RCTs identified by this Cochrane review were of poor methodological quality. Pooled data from four RCTs found no difference in cure rates between topical mupirocin and topical fusidic acid (OR 1.22, 95% CI 0.69 to 2.16). Most RCTs that compared topical compared with oral antibiotics used mupirocin. However, mupirocin is reserved for MRSA and should not be used first-line for impetigo. Topical fusidic acid was significantly better than oral erythromycin in one study, but no difference was seen between fusidic acid and oral cefuroxime in a different arm of the same study. Topical bacitracin was significantly worse than oral cefalexin in one small study, but there was no difference between bacitracin and erythromycin or penicillin in two other studies. The results of one non-blinded RCT suggested that topical fusidic acid was more effective than topical hydrogen peroxide, but this did not quite reach statistical significance.
Public Health England and the British Infection Association recommend that topical retapamulin or polymyxin are reserved for use in areas where there are rising rates of resistance to fusidic acid. Polymyxin (combined with bacitracin) has less robust RCT evidence than fusidic acid. Although topical retapamulin has been demonstrated to be non-inferior to topical fusidic acid for the treatment of impetigo in one randomized controlled trial, it is more expensive and there are fewer safety data available (it is a black triangle drug).
Denton M, O’Connell B, Bernard P, Jarlier V, Williams Z, Santerre Henriksen A. The EPISA study: antimicrobial susceptibility of Staphylococcus aureus causing primary or secondary skin and soft tissue infections in the community in France, the UK, and Ireland. J Antimicrob Chemother 2008;61:586-588. RATIONALE: Of S. aureus isolates from the UK, only 75.6% were susceptible to fusidic acid. A diagnosis of impetigo was associated with reduced fusidic acid susceptibility.
The POCAST project is funded by the National Institute for Health Research Health Protection Research Unit (NIHR HPRU) in Healthcare Associated Infections and Antimicrobial Resistance at Imperial College London and by the Imperial College Healthcare Charity (Grant Ref No:7006/P36U).
|
Emergency Liquid Candles
PHOS in a can
PHOS in a can
From the beginning, our candles were designed for use during or after an emergency-type situation, such as a tornado, earthquake, flood, or other natural disaster where standard candles or lamps could be damaged beyond use. These candles can be used as daily candles as well. They give off enough light to comfortably read a book.
These candles are also called liquid candles; they are open-flame candles. Please remember when using one of these candles:
When using any open flame in a small closed environment you need fresh air so remember to leave a window or door open (till it becomes ajar: LOL) enough to maintain fresh air.
Our candles can be used with different types of fuel which produce a single flame.
Do Not Mix Other Fuels Together In Any Liquid Candle.
With the cap firmly in place these candles are water tight and will float. However they float upside down so if you pick one of these candles up that is filled with fuel and it has been upside down please allow some time upright before lighting.
The Candle's Body
Is a press-formed steel can with a rolled steel top (like a food can) with a threaded mouth. The threaded steel cap has a permanent non-removable seal. The empty candle weighs approx. 1 ounce and holds 6 ounces of fuel.
The Wick
Is braided fiberglass held in place with a brass tube soldered to an insert for the mouth of the can. The insert can be removed by pulling the metal ring attached. At this point the fuel can be added and/or wick replaced. This all fits under the lid and is kept water tight.
Our candles can be activated as any other candle would be; by applying flame to the wick.
Extinguishing the candles flame:
We do not recommend using the cap to cover the candle or snuff it out, as this could compromise the seal. Just blow the candle out, let the wick cool for 1 or 2 seconds, and then replace the lid.
Due to the many respiratory ailments common today, I have given some MSDS (material safety data sheet) links for these fuels so you can research them and choose the best for your use. We do not recommend the use of lamp oil, kerosene, or heating oils as these will burn with heavy soot.
Liquid Paraffin Wax / Paraffin Oil:
MSDS: http://isites.harvard.edu/fs/docs/icb.topic796497.files/paraffinoil.htm
Denatured Alcohol with a 90% or greater alcohol content – Marine Stove:
MSDS: Fuel http://www.paynesmarine.com/documents/CaptPhabMarineStoveFuel_000.pdf
Grain Alcohol: (caution invisible flame)
91% Isopropyl Alcohol – Rubbing Alcohol:
I also suggest that if you use our candles with other fuels paint your candle (paint is more permanent than a mark) for easy identification. Our favorite fuel is Liquid paraffin or some call it paraffin oil. Recently I have been keeping one candle for use with Grain Alcohol for lunches at the shop, this way I can cook faster as the flame burns hotter and cleaner although the flame is very hard to see. When warming foods in a can like soup, beans or boiling water use a lid of some kind set so that it can vent or you will never reach a boil.
Filling the candles:
Remove the lid and while holding the candle firmly on a flat surface with one hand, grasp the key ring that is inside the mouth of the candle with the other hand and give it a short, but sharp Jerk; the insert will POP out. Fill the candle with fuel, replace insert pushing it in till it snaps into place, then replace the lid or light and enjoy.
Liquid Paraffin fuel-filled candles can also be stored in many locations for long or short term. Some of my candles have 5 year old liquid paraffin and they still light at the first flame from a match. I keep one out in our storage shed. In the middle of summer, the shed can reach 100 degrees or hotter on the south wall and the candle has never swelled or oozed fuel. While in the middle of an ice storm the temperature can drop to below zero and I can still light the candle. The paraffin is almost a solid but I can take it and light it. If the flame is real low all I have to do is shake it and the flame grows, this will happen three to five times with the flame slowly getting bigger as the paraffin starts to warm, it will continue to burn from there.
We don’t keep these candles on a shelf for decorations as all our decorative candles are glass and their paraffin is five years old and still lights the first time-every time. We store these candles with our emergency kit and camping supplies. We use them all the time in the shop for small heating jobs such as shrinking shrink tubing.
You can even store a candle in the trunk of your car or camper trailer without the risk of rupture. The theory behind this is: if and when a metal container fails (they will over time if they become damaged in any way), what happens when it does rupture? In our situation, will the fuel ignite upon rupture from a spark caused by the separation of steel?
Liquid Paraffin Wax is an oil with a high flash-point (200 degrees or higher) and has a low vapor point (the vapor is what lights first then the liquid). Will the vapor mixed with the oxygen in the air ignite from a spark from the can?
This is very unlikely: because of the low vapor, the spark would have to be 200 degrees or hotter, and the vapor would need a 2% to 12% mixture with oxygen just to ignite, making the chances very low that the right set of conditions will exist at the right time.
Here are 2 ways for heating cans that I put together in less than 15 minutes. The coat hanger made a grate, while the hook was used on the tripod. The string is for tying the tripod and attaching the hook. I used a P-38 can opener to remove the can lid and to put 2 slots in the side under the rim, then used a screw to open the slots for the wire handle.
|
For just about as long as there's been a commercial market for beauty products, those products have been stored and sold in elaborate plastic packaging. Think about it: when you come home with a haul from Sephora, you typically have to peel off layers of cellophane and plastic just to get to the box, which is made up of two-to-three parts. Then, after you've used every last drop, you throw the mascara tube or night cream jar right into the trash. Little of the plastic used to house makeup and skincare products is recyclable, and most of it ends up in landfills, where it can take nearly 1,000 years to decompose. (And no, I'm not exaggerating.)
It's a grim picture. But the fact is plastic packaging makes up an enormous percentage of the trash that ends up in landfills, or elsewhere, and the beauty industry has historically been a significant contributor. Some estimates say personal care and beauty products account for a third of all landfill waste.
Beauty companies have tried to mitigate their impact on the environment for years. In 2010, a handful of companies helped make eco-friendly packaging a trend. Yet as many do, the trend passed, and overall the beauty industry has still struggled to become truly eco-friendly. However, it’s not simply that beauty companies don’t care enough to help the pollution crisis: it’s largely due to the fact that beauty products have unique packaging needs that make sourcing earth-friendly materials a serious challenge.
What makes packaging sustainable?
Before you can understand these challenges, it’s important to comprehend what it really means for packaging to be “sustainable” in the first place. There’s actually no strict definition or criteria, and there are myriad earth-friendly factors that can make a product somewhat more sustainable than average.
What a product’s packaging is made of is the most obvious factor. Whether a can of hairspray or tube of lipstick is made from renewable, recycled, or biodegradable materials (or if it is itself recyclable) can easily affect how “green” the packaging is.
The simple shape of a product can also make it more environmentally friendly, too. If packaging is designed so that when packed in a box, there is no leftover space, it is better for the environment because it’s more efficient. That’s why, for example, Kevin Murphy uses square packaging.
“Something as simple as using a square shape has a huge environmental benefit,” founder Kevin Murphy explained, “since tightly-packed, boxy bottles use up to 40 percent less resin than standard round packaging and take up less shipping space and packing materials when it leaves our LEED certified distribution facility.” More efficiently packed goods ultimately means less gas will be needed to transfer a shipment. For this same reason, lighter packaging is often more sustainable as well.
Other green factors include how much energy it takes to manufacture the packaging, whether any toxic materials are included, and what impact the manufacturing has on the planet.
Why do beauty brands have trouble going green?
Because sustainability is so dynamic, there are lots of small things companies can do to make their packaging more earth-friendly. Nevertheless, the beauty industry finds itself in particularly tough spot. Though it’s easy to take for granted, many beauty products are delicate, and packaging them properly — even without taking the pollution into account — is no easy feat. Combined with the fact that selling any product en masse comes with certain packaging rules, it makes sense why the beauty industry has struggled to become sustainable.
One of these challenges has to do with retail, or selling a product at stores like Sephora and CVS. Retailers often put restrictions on package sizing to help maximize shelf space in a store (which makes sense: if they can fit more products on the shelves, they can easily sell more). If a brand wants to sell their product at one of these locations, they have to follow the store’s guidelines when designing their packaging.
Another consideration involves the beauty products themselves. Though we don't often think of them this way, beauty products are a bit like food. That is, they can go bad (yes, you need to throw away that year-old mascara). That deterioration process goes much faster if a product is not stored correctly. The color, odor, and shelf life of a product are all affected by packaging, and many products need air-tight packaging to stay intact. Many skincare ingredients are finicky (a notorious example is vitamin C). When not properly packaged, the nutritive ingredients that promise to keep you ageless can be destabilized and rendered useless.
In other words, brands trying to make sustainable beauty packaging can’t simply dump their products into recyclable bottles. They have to consider how their products will be affected.
Of course, as with any business consideration, cost plays a huge factor as well.
“Cheap plastics are exactly that: inexpensive, mass produced and wasteful,” says Lori Leib, the creative director at Bodyography Professional Cosmetics, a company that recently overhauled its products to use half as much plastic and incorporate more recyclable cardboard.
“They do not use good quality materials therefore they are able to make the cost of goods next to nothing,” she says. “Sustainable packaging goes through a process that can make the packaging more costly, however the plants and labs that manufacture these goods use environmentally friendly water and electricity systems as well as recycle all goods. One lab we use to manufacture our skincare went totally green a few years ago, slightly raising our cost of goods, but as a conscious brand we added this to our budget.”
Hannah Choi/Allure
Why now?
With all these factors, getting the entire beauty industry to take on the sustainable challenge seems a nearly impossible task. While brands like Tata Harper, Aveda, Lush, and Juice Beauty have been committed to sustainability for years, in 2017 even more big-name brands are stepping up to the plate. Dior recently launched a line of skincare called Hydra Life, and its packaging is "designed to remove any unnecessary elements (such as the leaflet, corrugated card and cellophane), with a reduced glass weight and inks predominantly of natural origin." Meanwhile Garnier, whose packaging is now made from 50 percent post-recycled materials, has teamed up with recycling company TerraCycle and Remi Cruz to launch a campaign to increase awareness about beauty product recycling.
Why now? There are a number of reasons, but a big reason is that in 2017, consumers care more than ever about the effect the consumption is having on the planet.
“Over the years we have become aware that overpopulation and industrialization have created harmful changes on all these elements we depend on,” says Murphy. “We, as industrialized cultures have just recently become aware of this imbalance and the negative effects it has on the environment based on the dramatic changes we see in our world’s climate and in nature.”
Bodyography’s Lori Leib adds, “Just like we live our lives more clean, clothing, products and all goods follow suit.”
“Why spend money to decrease your use of gas or electricity while simultaneously purchasing products that are wasteful?” she says. “Consumers are more aware of what goes into their products these days both ingredient wise and packaging, claiming that your brand uses eco-friendly packaging is both ethical and a talking sales point.”
|
protozoa, Biology
classification sketch of protozoa
Posted Date: 10/14/2012 1:22:47 AM | Location : United States
|
Ostrich Feathers
There are 5 types of feathers: tail feathers, short body feathers, long body feathers, floss, and wing plumes.
The chick feathers are of the wing plumes category and appear soon after hatching. These plumes are ripe at 6 months and the quills 2 months later. They are brown on the top and dark grey on the bottom of the plume. The chick feathers on the lower part of the body and under the belly are white.
After about 5-6 months, the chick feathers begin to lose these characteristics, and are plucked because they are not of high quality. After 2 years, the sex of the ostrich is clear, because the male has black body feathers, while those of the hen are a dull grey.
The female feathers are a dull grey colour, ideal for brooding during the day because she is well-camouflaged. Along with the grey feathers, she has creamy-white wing, tail and ventral plumes. The white on the feathers forms a unique pattern along the shaft, like a human fingerprint.
The male feathers are black in colour, with the exception of the white wing-tips and tail-plumes, and this is perfect camouflage for brooding at night. This is the reason why the males guard and brood the eggs at night, while the females, with their dull grey feathers, brood during the day. Tail plumes are normally a brownish-orange colour because of the red Oudtshoorn dust, which can easily be washed off to restore the original white colour. The lower 30cm of the neck is covered with feathers, and the remainder with short downy feathers and hairs; the head is covered with short, straight hairs.
A prize male ostrich yields around 40-50 plumes. Between 200 and 300 wing plumes make 1kg, and female wing plumes are even lighter. The total “harvest” from one bird at one plucking weighs about 2kg including body feathers. If the feathers stay in the ostrich after they become “ripe”, they lose their lustre and become dull (hence the bird must be plucked while the quills are still “green”). The quills are pulled out to prevent irritation to the bird and damage to the new feathers starting to grow behind the quills. The ostriches are never left completely bare at any stage, to prevent sunburn and skin damage.
Feather boas are wing plumes “plucked” from the shaft of the feather and tied together with a needle and string. Approximately 40 wing plumes are used to make a 1.5m boa, which takes about a day and a half to make.
30 grams of body feathers are used to make one feather duster. Feather dusters can be successfully washed and dried because the ostrich has no oil glands and the feathers are therefore not oily. The reason why ostrich feathers work so well as feather dusters is because once they are stroked, they become charged with static electricity. This helps the dust particles to stick to the feather, unlike other feather dusters that just move the dust around.
Ostrich feather boas
Genuine natural ostrich feather dusters
Colourful ostrich feather dusters
|
Learning from the Urban Sketching Masters: Anton Pieck
Anton Pieck
AP practice sketch 1
AP practice sketch 2
Innovation, Risk, Surprise, Uncertainty
Seeing What Others Don’t
Illustration: Watercolor, gouache, ink and gesso
Where we left off, in the previous post, “Little Dancer Coincidences,” was with the notion that “discontinuous discoveries” can result in a shift in our core beliefs. This notion comes from the book Seeing What Others Don’t: The Remarkable Ways We Gain Insights, by Gary Klein who, as mentioned previously, is a research psychologist specialized in “adaptive decision-making.” Klein studied 120 cases, drawn from the media, books, and interviews, involving stories of how people “unexpectedly made radical shifts in their stories and beliefs about how things work.” From these cases, Klein was able to organize his research into five different strategies for how people gain insights: connections, coincidences, curiosities, contradictions, and creative desperation. According to Klein, all of the 120 cases he examined fit one of these strategies, but most relied on more than one.
Martin Chalfie
Image: New York Times
Klein begins with the strategy of connections and, before proceeding with several fascinating examples, recalls the story told earlier in the book of Martin Chalfie, a biologist at Columbia University who, by virtue of attending a seminar on a topic unrelated to his work, ends up getting the idea for a natural flashlight that would let researchers look inside living organisms to watch their biological processes in action. At the time he attended the seminar, Chalfie was studying the nervous system of worms. The seminar initially covered topics that didn’t interest him, according to Klein, until the speaker described how jellyfish are capable of bioluminescence and can produce visible light. This gave Chalfie an insight applicable to his own field, and it led to an invention “akin to the invention of the microscope,” writes Klein, because it enabled researchers to see what had previously been invisible. For his work, Chalfie (seen in the photo to the left above) received a Nobel Prize in 2008.
Image: Wikipedia
Like Chalfie, certain people make connections between unrelated matters that their close colleagues don’t. Klein also tells the story of how the Japanese Admiral Isoroku Yamamoto (April 4, 1884 – April 18, 1943) saw the implications of the British attack, early in World War II and before the United States had entered the conflict, on the First Squadron of the Italian Navy, then sheltered in the Bay of Taranto. Since the bay was only 40 feet deep, the Italians believed their fleet was safe from airborne torpedoes. The British, however, had devised adjustments to their torpedoes, including adding wooden fins, so that they wouldn’t dive so deeply once they entered the water. For Yamamoto, the successful British attack at Taranto produced the insight that the American naval fleet “safely” anchored at Pearl Harbor might also be a sitting duck, writes Klein. Yamamoto refined his ideas until “they became the blueprint for the Japanese attack on Pearl Harbor on December 7, 1941” (although he himself was opposed to Japan’s decision to go to war with the U.S.); ironically, his other insight was that Japan would lose the war with the United States. Yamamoto had studied in the U.S. and had two postings in Washington, D.C. as naval attache; he had insights about the U.S. that his colleagues did not, and was resented by his more militaristic colleagues for his views.
Organizations generally block the pathways of connections (and other strategies) needed for such insights to occur, according to Klein. This is because organizations are primarily concerned with avoiding errors. Ironically, this risk-aversion makes people inside organizations reluctant to speak up about their concerns, leading organizations to “miss early warning signals and a chance to head off problems.” Such problems are common in many fields, including science, according to Klein. Promoting forces that can countervail risk-aversion sometimes requires designating “insight advocates,” writes Klein, even though he admits he is dubious that any organization would sustain them or “any other attempt” to strengthen the forces for insight creation. Another method he suggests is to create an alternative reporting channel so that people can publish work that doesn’t go “through routine editing” and thus would “escape the filters.” But, he thinks this method “may work better in theory than in practice.”
A key problem for many organizations is not related to having or noticing insights, but instead it is “about acting on them.” Organizations that are less innovative because they are stifling insights, he says, “should be less successful” than they could be. The deleterious effect of the defect-exposing Six Sigma program on U.S. corporations is an example of how an all-out focus on eliminating errors gets in the way of innovation, says Klein. Clearly it is not a simple matter to balance the needs for efficiency and innovation within the same organization, particularly a “mature” organization. Klein concludes that the examples he gives are, for him, a “collective celebration of our capacity for gaining insights; a corrective to the gloomy picture offered by the heuristics-and-biases-community.” He continues: “Insights help us escape the confinements of perfection, which traps us in a compulsion to avoid errors and in a fixation on the original plan or vision.”
Klein ends up recommending “habits of mind that lead to insights” and help us spot connections and coincidences, curiosities and inconsistencies. The more successful we perceive ourselves being because of our beliefs, “the harder it is to give them {our beliefs} up.” The habits of mind Klein has covered in his book may “combat mental rigidity,” he writes. “They are forces for making discoveries that take us beyond our comfortable beliefs. They disrupt our thinking.” There is a “magic” that occurs when we have an insight, Klein concludes, and it “stems from the force for noticing connections, coincidences, and curiosities; the force for detecting contradictions; and the force of creativity unleashed by desperation.” So, while there is no blueprint for insight creation in Klein’s book, the many examples he cites are compelling reminders of the crucial role that insights play in stimulating new directions in any endeavor.
It seems, then, that insights can be both the source of surprises as well as help spur readiness for surprises. They can be the needed “black swans” to deal with inevitable “black swan events.” A take-away from this book: There may be no ten-step list to creating insights but understanding how to create favorable conditions to disrupt our thinking–so as to stimulate new connections and ideas–seems like useful knowledge in a world of inevitable surprises. Ostriches with their heads in the sand may not do as well as those who see what others don’t.
Little Dancer Coincidences
Little Dancer #1
Image: From National Gallery of Art website
Innovation, Risk, Surprise, Uncertainty
Black Elephants and the Magic of Insights
Elephant 6
Illustration: Watercolor, gouache, ink, gesso and coffee grounds by Black Elephant Blog author
If you’ve had a chance to see the new film, “The Imitation Game”, about the brilliant but sadly socially outcast British mathematician Alan Turing, you’ve probably been powerfully reminded–through its artistic rendering of a true story–of the critical roles which serendipity, hunches, and chance encounters have played in devising solutions to the most challenging problems of any age. (Spoiler alert: If you haven’t seen the movie, and wish to be surprised when you do see it, perhaps it is best not to read further.)
In the film, Turing and his teammates–a collection of unusually gifted mathematicians, including one woman– at Bletchley Park in England literally were racing against the clock to figure out how to decode German wartime communications during World War II. Their efforts centered on the invention by Turing of a decoding machine (basically a prototype computer) but, despite hours of hard work and all their smarts, the team was about to be shut down by uncomprehending bosses under pressure to deliver results. (The film has received mixed reviews–such as this one–due to its mix of imagined and actual events, and its alleged failure to convey that the Turing effort was part of a much larger effort underway at Bletchley.)
Illustration: xxx plays Alan Turing in the film, The Imitation Game (Image from xxx/The Economist)
Image: Allstar/The Economist
Without giving away the storyline (the general outline of which is, however, a matter of historical record), it is in a moment of relaxation away from their secret laboratory, bantering with friends who were supporting the war effort themselves but not privy to any of the Turing team’s information, that a chain of interactions leads to a breakthrough insight. In the film, a casual comment by someone who is not on the Turing team has an instantaneous effect. Her hunch becomes Turing’s insight and he and the rest of the team, up to then stymied in their task, had to act immediately.
This insight turns out to be what the team needed to successfully break the Enigma code. Their success is credited by historians with turning around Britain’s fortunes in the war. They also estimate that the code-breakers helped shorten the war by two years and saved approximately 14 million lives.
This film subtly highlights some of the necessary ingredients of breakthrough thinking: talent, expertise, hard work, team work, intensity, diversity, false starts, time pressures, clear purpose, and random encounters with ideas from disparate sources outside the immediate field of inquiry. While perhaps failing to give sufficient credit to Turing’s bosses (per some of the critics), the film also hints at why so many traditional organizations are so poor at facilitating this sort of thinking. Whatever the gap between the historical reality and the movie, it is worth pondering: What are some of the implications of a mismatch between the outsized global issues of our time and the incapacity of most organizations to nurture the modern equivalents of Bletchley Parks? How can talent and good judgment be assembled most effectively to deal with the important, as well as urgent, “Black Elephants” of our times?
Most of us by now have heard of the Black Swan concept but the Black Elephant concept is not well known. For this writer, it came into being when encountered in an op-ed by New York Times columnist, Thomas Friedman, in late 2014. As he explains, a “black elephant” is a “cross between a ‘black swan’ (an unlikely, unexpected event with enormous ramifications) and the ‘elephant in the room’ (a problem that is visible to everyone, yet no one still wants to address it) even though we know that one day it will have vast, black-swan-like consequences.”
At a time of mounting challenges (including but extending well beyond the environmental issues cited in the Friedman piece) that are too big to fit into anyone’s inbox, or even anyone’s organization–where speed, as in the case of Bletchley Park, is of essence and stakes are high–the concept of black elephants seems a timely one.
The focus here on the roots of surprise inquires into how insights and breakthroughs come about. The current age is no different from past ones, such as the example illustrated in The Imitation Game, in needing to aggregate, cull, and distill insights that can be acted upon in a timely way. With more challenges filled with potential for highly improbable (but, therefore, according to Dr. Hand’s “laws of improbability,” practically inevitable) outcomes, however, the need for insights may be multiplied in present circumstances.
With high stakes involved in multiple arenas, this blog’s inquiry into the roots of surprise will next explore the findings of experimental psychologist and expert in “adaptive decision-making,” Dr. Gary Klein, in his fairly new book, Seeing What Others Don’t: The Remarkable Ways We Gain Insights (2013). Klein notes that generally we know very little about how insights are formed or what blocks them. He too thinks it’s important to know more about where insights come from, so his book is meant to fill some of our knowledge gaps about the magic of insights. In an upcoming post, I’ll feature some highlights from this book, and link to related material as I come across it.
|
Effects of Water Acidification on Turkey Performance
Acidification of the drinking water has become very popular in the broiler industry as a tool for improving bird performance. However, little is known about the exact effects of water acidification on weight gains, feed conversion efficiency and livability for turkey production. In addition, little documentation exists which compares different drinking water pH adjustment products for turkeys. Therefore a trial was conducted to determine how turkeys respond to different products used to adjust the drinking water pH.
Materials and Methods
Nine hundred and sixty turkey hen poults (day-old) were randomly placed in 48 floor pens to give 20 birds/pen and six replications per treatment. Each pen was equipped with one hanging tube feeder and a water plasson. Each pen had its own water supply via a 5 gallon sealed bucket. Plassons were cleaned every day and water usage was measured for the first 28 days. This measurement involved accounting for the water added to each pen as well as the water removed each time the plassons were cleaned. Seven treatments were compared to a control (Fayetteville city water). The treatments (outlined in Table 1) included PWT (Jones-Hamilton Co., Walbridge, OH) added to the control water to an adjusted pH of 4 and 6, I.D. Russell Citric Acid (Alpharma, Fort Lee, NJ) added to the water to adjust the pH to 4 and 6, Dri Vinegar (BVS, Willmar, MN) added to the water to adjust the pH to 6, Acid Sol (BVS, Willmar, MN) added to the water to adjust the pH to 6, and Ema-Sol (Alpharma, Fort Lee, NJ) added to the water to adjust the pH to 4. Each solution was prepared in a 50 gallon container and then dispensed to the corresponding replicate pens. Each container was filled with Fayetteville city water and allowed to sit overnight to allow residual chlorine to dissipate. Prior to the preparation of each solution, a hand-held pH meter was first standardized using pH 4, 7 and 10 buffer solutions. The pH was continuously checked as each solution was slowly mixed to the desired pH. To enhance the dissolving of the dry products, PWT and citric acid, concentrated stock solutions of each were prepared in room temperature water. Each concentrated solution was slowly stirred into the appropriate treatment container until the desired pH was achieved. Fresh solutions were made at least twice weekly and more frequently during the last four weeks of the trial. The pH was verified and recorded as each batch was prepared. All water and feed added to the pens was weighed.
Birds received a commercial diet regime supplied by Cargill. Diets were changed every two weeks.
The birds were group weighed by pen at day 1 and then individually weighed on days 14, 28, 42, 56, 70 and 84. Feed consumption was measured for each period. Pens were checked twice daily for mortality. The weight of all dead and cull birds was recorded for use in determining an adjusted feed conversion rate. At weeks six and twelve, one bird per pen was weighed and sacrificed by suffocation with carbon dioxide. The pH of the crop and gizzard was measured by emptying approximately 20 grams of the contents and blending with an equal amount of distilled, de-ionized water. Results were analyzed using the GLM procedure of SAS. Pens served as the experimental unit. The mortality percentage data was transformed using a square root transformation to normalize the distribution. All means which were statistically significant at the P < .05 level were separated using the repeated t-test. The feed conversion rates were calculated as cumulative values. The mortality was calculated for each weigh period.
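Two of the calculations described above, the adjusted feed conversion rate (which credits the weight gained by dead and cull birds) and the square-root transformation of the pen mortality percentages, can be sketched in a few lines of Python. This is only an illustrative sketch, not the SAS analysis used in the trial, and the pen-level numbers below are hypothetical, not data from this study:

```python
import math
from statistics import mean

def adjusted_feed_conversion(feed_consumed_kg, live_gain_kg, dead_cull_gain_kg):
    """Feed conversion credited with the weight gained by dead and cull birds."""
    return feed_consumed_kg / (live_gain_kg + dead_cull_gain_kg)

def sqrt_transform(mortality_pct):
    """Square-root transformation used to normalize percentage mortality data."""
    return [math.sqrt(p) for p in mortality_pct]

# Hypothetical pen-level numbers (NOT data from this trial)
feed = 95.0        # kg of feed consumed by the pen
live_gain = 60.0   # kg gained by surviving birds
dead_gain = 2.5    # kg gained by dead/cull birds before removal

print(round(adjusted_feed_conversion(feed, live_gain, dead_gain), 3))  # 1.52

mortality = [0.0, 5.0, 5.0, 10.0, 0.0, 5.0]   # % mortality per replicate pen
print(round(mean(sqrt_transform(mortality)), 3))  # 1.645
```

Crediting the gain of dead and cull birds keeps a pen from being penalized for feed that was converted into weight before a bird was removed, which is why those weights were recorded.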
The average body weights of the hens are shown in Table 2. At day 14 the hens receiving the Acid Sol were significantly heavier and the hens receiving the Ema-Sol adjusted to a pH of 4 were significantly lighter than all of the birds receiving the other treatments and the control water. At this time the decision was made to raise the Ema-Sol treatment pH to 6. By day 28 there were no significant differences in body weight and this trend remained throughout the remainder of the trial. Though not significant, the hens receiving the Ema-Sol water lagged behind slightly in weight through day 56 but by day 70 the Ema-Sol birds had similar body weights to the other treatments. Again while not significant, it is interesting to note that the birds receiving the PWT 4, Citric acid 4 or Dri Vinegar 6 treatments had the highest numerical body weights at day 84. No statistical differences were seen for feed conversions for any of the periods measured (Table 3). Birds receiving the Ema-Sol treatment had a significantly higher mortality rate for the first fourteen days. However, overall mortality remained very low and after fourteen days there were no additional losses of Ema-Sol birds until day 56 (Table 4).
Water usage was measured through day 28. However, since the drinkers were plassons and were cleaned daily, this measurement can only be considered an estimation of water usage (Table 5). For the first fourteen days, water usage for the Ema-Sol birds significantly lagged behind all other treatments. This trend continued through day 28, and even after the Ema-Sol treatment pH was raised to 6, the birds receiving this treatment still lagged slightly behind in consumption. At the time the pH of the gizzard and crop contents was to be measured, only a small amount of dry material was found in these organs, so an equal weight of distilled de-ionized water (pH 6.68) was added to each sample (Table 6). While this addition probably influences the final pH, the same amount of water was added to each sample so that the effect would be the same across all treatments. As seen in the broiler trial, the pH of the gizzard was in the 3 to low 4 range, while the crop pH was higher but did not necessarily reflect the pH of the water treatments.
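The caveat that the 1:1 dilution probably influences the measured pH can be illustrated with a rough calculation. Ignoring any buffering by the digesta (a strong simplifying assumption), the pH of a mixture can be estimated by averaging hydrogen-ion concentrations; the gizzard value below is hypothetical, not a measurement from this trial:

```python
import math

def ph_to_h(ph):
    """Convert pH to hydrogen-ion concentration (mol/L)."""
    return 10 ** (-ph)

def mixture_ph(ph_a, ph_b, fraction_a=0.5):
    """Estimate the pH of a mixture of two solutions, ignoring buffering."""
    h = fraction_a * ph_to_h(ph_a) + (1 - fraction_a) * ph_to_h(ph_b)
    return -math.log10(h)

# Hypothetical gizzard contents near pH 3.5, diluted 1:1 with
# the pH 6.68 distilled de-ionized water used in the trial
print(round(mixture_ph(3.5, 6.68), 2))  # 3.8
```

Because pH is logarithmic, the acidic sample dominates the mixture and the estimated reading rises only modestly; and since every sample received the same dilution, any offset is uniform across treatments, as the authors note.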
The results of this trial indicate that lowering the pH of the drinking water with PWT, citric acid, Dri vinegar, Acid Sol and Ema-Sol resulted in turkey hen performance similar to the birds receiving the control water. Starting the poults on Ema-Sol adjusted to a pH of 4 resulted in a significantly higher mortality and reduced weights through day 14. The pH of the Ema-Sol treatment was then raised to 6 for the remainder of the trial and the birds had final weights statistically similar to the birds receiving the other treatments.
AUTHOR: Jana Cornelison, Melony Wilson and Susan Watkins, Cooperative Extension Service – University of Arkansas Cooperative Extension Service AVIAN Advice newsletter (Vol. 7 No. 2)
|
Raccoons By:Madeline Kline
Raccoons are extremely intelligent creatures who get their hands on every little interesting thing they see. They make messes by digging in garbage cans and do other damage while looking for food. Raccoons have very flexible toes, so they are capable of digging in garbage cans and getting food. They are nocturnal, so they are most active at night. Their mating season runs January through March. The article “About Raccoons” states, “Because the male raccoon shows aggressive behavior toward the baby raccoons, the mother only tolerates him being around her during mating and then raises her young alone.” This shows how the mother takes on all the responsibility for raising her young. The babies stay with their mothers for about a year after they are born. Raccoons make many different noises such as hisses, whistles, screams, growls, and snarls. They do this when they and their young are in harm's way, among other reasons. Raccoons are curious and intelligent creatures.
This video shows how intelligent and curious raccoons are. They try to get their hands on every little thing they think is food.
This is a family of raccoons: a mother and her babies. The father does not live with them because he shows aggressive behavior toward the young.
These pictures show raccoons at every age, from newborn to adult.
Raccoons are one of the primary carriers of the rabies virus.
Raccoons can run up to speeds of 15mph.
Raccoons can fall about 35 to 40 feet without injuring themselves.
These websites are about raccoon habitats, characteristics, life cycles, behavior, and many other things. They give you tons of interesting information and facts. For example, they tell you what raccoons eat: they are omnivores, so they eat almost everything. They help you to understand the raccoon's way of life.
Created with images by Will Scullin - "Raccoon" • dave and rose - "Raccoon" • ZeMoufette - "Raccoon" • kat+sam - "Raccoon"
|
Maus Essay
• Maus Essay
things" (Maus I, 38). His ability to anticipate what items to conserve saved him on several other occasions as well. After leaving Auschwitz, Vladek saved the thin blanket they had given him, and he used it to hang above the other passengers in the train that they stuffed all of the prisoners into. This saved him from suffocating and also allowed him to reach snow off of the roof of the train. The snow kept him hydrated, and he was able to trade it with other passengers for sugar (Maus II, 85-87)
Words: 1371 - Pages: 6
• Maus Essay
stereotypes.” (Source: Art Spiegelman, in “Mightier Than the Sorehead,” The Nation, January 17, 1994:45.) What are stereotypes? Are they harmful, and if so, how? What are some current examples of stereotypes? How does Spiegelman use stereotypes in Maus? Seek and select specific examples. Summarize his technique, and analyze why he uses them. Infer the artist’s attitudes, and the reason for his choices. A stereotype is a popular belief about specific types of individuals or a certain way of doing
Words: 1626 - Pages: 7
• Essay on Maus
depict the plight of Jews in Hitler’s Germany (p. 33)? Why, on page 125, is the road that Vladek and Anja travel on their way back to Sosnowiec also shaped like a swastika? What other symbolic devices does the author use in this book? Throughout Maus many symbolic devices are used, most notably, the inclusion of animal characters instead of human ones. Spiegelman places swastikas throughout the work to possibly convey the presence of the Nazis--they were inescapable for Jews in Europe. PRISONER
Words: 854 - Pages: 4
• The Perception of Self in The Last of the Just and Maus I and Maus II
and his friends only as insects. Likewise, in Art Spiegelman’s graphic novels, Maus I and Maus II, the motif of the insect and degraded animal imagery is clearly visible from the beginning of each volume. Similar to Ernie in The Last of the Just, Art Spiegelman demonstrates that “the mothers always told so,” and “they taught to their children” about the dangers of playing or associating with the Jews (Spiegelman, Maus I: My Father Bleeds History 149). From an early age, the non-Jewish parents would
Words: 1729 - Pages: 7
• Maus Essay example
contemplate suicide. According to Melisa Brymer, a director of disaster and terrorism curriculum at the UCLA Neuropsychiatric Institute, survivor's loss is often “an expression of grief and loss” (CNN, 2015). Right from the beginning of the book Maus, you can clearly tell that the relationship between Art Spiegelman and his dad was not good. The two did not see each other often although they lived in the same house. Art also admits that he did not help his father with work most of the time
Words: 1469 - Pages: 6
• Maus Essay
They needed a lot of things, and he used anything to survive. So the reason he saves everything is that even though he is free and in another country, his mindset is still in the Holocaust. He hoarded and saved even the smallest of items: the paper wrapper from a piece of cheese was used to send a note to Anja, and the cigarettes from his weekly rations were used as money in the camp. These small items took on enormous importance to Vladek, and even many years later, he feels unable to throw
Words: 1219 - Pages: 5
• Essay on Maus by Art Spiegelman
Little did they know that this intruder divulged their hiding spot to the Jewish police, and soon they were prisoners. Once taken captive, Anja, and Vladek were able to be put on the good side because of Vladek’s connections. This book accentuates and focuses mainly in what Vladek had to do to survive. After being smuggled out of one camp, Anja and Vladek managed to jump from house to house. Vladek was very agile at the time, and moved swiftly at night to seek money and food opportunities
Words: 838 - Pages: 4
• In Spiegelman’s Maus, Even the Dedications Are an Essential Part of the Text.’
This is in stark contrast to the relationship between him and his own father. In the prologue of the book, a young Artie comes crying home to his father because his friends had skated away without him; to this Vladek remarks, “If you lock them together in a room with no food for a week then you could see what it is, friends”. Being hit by this surge of reality is difficult for a ten-year-old to grasp and is evidence of how Vladek lacks empathy. Vladek’s selfish nature is seen when he feigns a heart attack
Words: 789 - Pages: 4
• Character Analysis for Maus by Art Speigleman Essay
* After his death, Vladek and Anja keep a photograph of their first child hanging on the wall of their bedroom.

Mala Spiegelman (female)
* Mala is Vladek's second wife, and a friend of his family from before the war.
* The couple does not get along.
* Mala is consumed with frustration at Vladek's inability to part with money, while Vladek views his wife with considerable distrust and accuses her of trying to steal his money.

Francoise (female)
* Art's wife.
* She is French
Words: 2016 - Pages: 9
• Comparing Dehumanization in Narrative of the Life of Frederick Douglass and Maus
I have no accurate knowledge of my age… by far the larger part of the slaves know as little of their ages as horses know of theirs, and it is the wish of most masters within my knowledge to keep their slaves thus ignorant. (Douglass, pg. 1) Douglass even compares himself to a horse to show that he is thought of as an animal. Vladek was a simple man. He went to work just like everyone else and he came home to his wife and
Words: 649 - Pages: 3
• Essay on Two Narrators Are Not Always Better Than One
seem, it’s important to remember that they were all edited and written by Art Spiegelman, not an impartial bystander. Vladek is unreliable because his memory is clouded by time and perspective. He only remembers what he wants to remember. Half of Maus is narrated by Vladek, and his part of the novel is the only “real” look into the past the reader is given. From the beginning, Vladek talked about how loving he was towards Anja, but no one can ever be sure if he exaggerated his sweetness or if he
Words: 736 - Pages: 3
• Armenian Genocide and Holocaust Comparison Essay
death in the camps. They would have to strip themselves so the Germans could take their clothes and steal or ruin the belongings inside of them. "Their rights and names were even taken away from them by the Nazis and they were given numbers instead" (Maus 186). Lastly, the way in which the Jews and Armenians were killed was almost identical. In both genocides the people were taken to camps to work, women were raped and then killed, and the unable were publicly executed (Armenian National). Most
Words: 868 - Pages: 4
Wednesday, April 1, 2015
Is Fusobacterium associated with colon cancer?
Numerous cancers have been linked to microorganisms. Warren et al., from the British Columbia Cancer Agency in Vancouver, Canada, investigated the relationship between the gut mucosal microbiome and colorectal cancer using genetic methods. The investigation revealed an association between Fusobacterium species and colorectal carcinoma in eleven patients. These investigators have extended their studies with deeper sequencing of a much larger number (n = 130) of colorectal carcinomas and matched normal control tissues.
The new report has revealed differentially abundant microbial genome sequence signatures of significance in tumor samples, including those belonging to the Fusobacterium, Campylobacter and Leptotrichia genera. These Gram-negative anaerobes are typically considered to be oral bacteria. However, tumor isolates of Fusobacterium and Campylobacter had genetically diverged from their oral counterparts and carry potential virulence genes. The authors also observed that sequence signatures from Fusobacterium co-occur with those from Leptotrichia and Campylobacter, and that Fusobacterium and Campylobacter strains isolated from tumor tissue co-adhere in culture. A non-invasive assay to detect this polymicrobial signature of colorectal carcinoma may have utility in screening and risk assessment.
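A toy sketch of the kind of comparison such a screening assay relies on: computing the relative abundance of a marker genus in tumor versus matched normal samples and checking for enrichment. All the read counts below are invented for illustration; they are not data from the study.

```python
# Toy differential-abundance comparison for one candidate genus
# (e.g. Fusobacterium). Counts are hypothetical, for illustration only.

def relative_abundance(genus_reads, total_reads):
    """Fraction of sequencing reads assigned to one genus."""
    return genus_reads / total_reads

# (genus_reads, total_reads) per sample -- invented numbers
tumour_samples = [(120, 10_000), (90, 8_000), (200, 12_000)]
normal_samples = [(5, 10_000), (2, 9_000), (8, 11_000)]

def mean_abundance(samples):
    vals = [relative_abundance(g, t) for g, t in samples]
    return sum(vals) / len(vals)

tumour_mean = mean_abundance(tumour_samples)
normal_mean = mean_abundance(normal_samples)
fold_change = tumour_mean / normal_mean

print(f"tumour mean abundance: {tumour_mean:.4f}")
print(f"normal mean abundance: {normal_mean:.4f}")
print(f"fold enrichment in tumour: {fold_change:.1f}x")
```

A real analysis would of course use many more samples and a proper statistical test for differential abundance, but the underlying quantity being compared is this kind of per-sample relative abundance.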
It remains unknown whether there is any etiological link between microorganisms and colorectal carcinoma. Any such link could provide a potential mode of intervention in the prevention of colonic cancer.
Fusobacterium necrophorum Gram stain
From Resources Available On Internet – What’s The Difference Between Distilled Water And Rainwater
While rainwater can be consumed if it is not overloaded with unwanted chemicals, distilled water isn't advised for regular consumption because it lacks minerals and salts.

What confuses me is that rain pours down from clouds when water vapor turns into droplets through condensation. To me, this looks exactly like the distillation process: evaporation followed by condensation of water.

Rainwater contains very low amounts of salts and nitrates, but it absorbs any gas present in the air; that is why acid rain occurs when the air is polluted. Because the air contains oxygen, some people consider rain consumable, claiming it carries plenty of dissolved oxygen. Distilled water, on the other hand, is not recommended for regular consumption because it contains no minerals at all. The three gases once existed in the same proportions, three equal volumes; the hydrogen combined entirely with the oxygen to form all the water now on the ground, which explains its near absence in the air.
O2 represents about 21 percent of today's air; nitrogen is known for its inertness to react, which explains its abundance.

The Earth was originally a ball of molten material, formed by accretion in a region of space where hydrogen prevailed; there it played the same role the sun plays now, a nucleosynthesis running up to the formation of oxygen nuclei. The heavier atoms took the downdrafts while the light atoms escaped into the updrafts, where atoms united into molecules. The mechanical force of these updrafts and downdrafts circling the globe allowed the two gases to combine in incessant explosive chemical reactions with a release of heat. In this way all the water currently on Earth was formed and the free hydrogen was totally consumed, as two volumes of it unite with one volume of oxygen, which explains why hydrogen is almost nonexistent in the current air. The remaining volume of oxygen formed ozone and O2, which explains its proportion. The abundance of nitrogen is due to its inertness: it reacts with hydrogen only at around 300 °C to form NH3 in the reversible reaction N2 + 3H2 ⇌ 2NH3, and the reverse reaction 2NH3 → 3H2 + N2 easily releases hydrogen, which combines with oxygen to form water again until it is exhausted. It is likely that this temperature was reached or even exceeded, given the heat from the exothermic synthesis of water.
Friday, 16 December 2011
Hannibal and Christmas
Christmas is a great time of year - even if you are not religious. But here's a thought: would it have become such a big celebration without Hannibal's victory at Lake Trasimene?
Saturnalia was a festival for the god of agriculture, Saturn, and was the Roman midwinter celebration of the Solstice* and the greatest of all the Roman annual holidays. It began on December 17th.
At first it lasted only one day, but - arguably thanks to Hannibal - it was to become the precursor of modern Christmas. In the wake of the enormous and unexpected disaster of Lake Trasimene, Rome extended the festival to lift the morale of its citizens.
During the festival the courts and schools closed and military operations were suspended so that soldiers could celebrate. It was a time of goodwill and jollity that included visiting people, banquets and the exchanging of gifts.
In the late Republic it was extended to two or three days, celebrated over three days in the Augustan Empire and in the reign of Caligula extended to four. By the end of the first century AD, it was technically a five-day holiday.
A cry of Io Saturnalia! and a sacrifice of young pigs at the temple of Saturn inaugurated the festival. They were served up the next day when masters gave their slaves - who were temporarily immune from all punishments - a day off and waited on them for dinner. After dinner there was plenty of clowning and merriment with wine as a social lubricant, sometimes degenerating into wild horseplay. Dice were used to choose one person at the dinner as Saturnalian King - it could be a slave - and everyone was forced to obey his absurd commands to sing, dance or blacken their faces and be thrown into cold water and the like for the entire period.
The dice may have been loaded in 54 AD, when Nero was so chosen. He used the opportunity to humiliate Claudius' son Britannicus, apparently a poor vocalist, by forcing him to sing. It was traditional to deck the halls with boughs of laurel and green trees as well as a number of candles and lamps. These symbols of life and light were probably meant to dispel the darkness.
It was also traditional for friends to exchange gifts and even to carry small gifts on one's person in the event of running into a friend or acquaintance in the streets or in the Forum. Originally the gifts were symbolic candles and clay dolls - sigillaria - purchased at a colonnaded market called Sigillaria which was located in the Colonnade of the Argonauts, later in one of the Colonnades of Trajan's Baths. Something similar is still practised in Rome's Piazza Navona today.
Gifts, which could also include food items such as pickled fish, sausages, beans, olives, figs, prunes, nuts and cheap wine as well as small amounts of money grew to be more extravagant over time - small silver objects were typical - as did their acquisition. How modern the first century writer Seneca sounds when he complains about the shopping season: "Decembris used to be a month; now it's a whole year." At the same time, Martialis may have been the first sage to remark "The only wealth you keep forever is that which you give away."
Nor did the fun stop there. During the entire festival, the laws against gambling were relaxed so that everyone including slaves and children could gamble at dice and other games of chance, children using nuts for wagers. Men stopped wearing their uncomfortable togas in favour of the synthesis (a tunic with a small cloak both brightly-coloured and also wearable by women) for the entire period and simply donned a felt cap, pilleum to show they were not slaves. Away from Rome, Romans still commemorated the festival. In Athens, academy students such as Aulus Gellius and his friends dined together for the occasion, much as American students in a European university may dine together on Thanksgiving Day.
* "Solstice" is a Latin word, by the way, coming to English from Old French and then Middle English, and originally derived from sol sun + status, the past participle of sistere to come to a stop, cause to stand. This makes sense if you think about the solstice as the sun's path reaching an endpoint and then turning around and going the other way. During the few days when this direction change is occurring, it will appear that there is actually no movement at all.
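The "standstill" in the footnote can be illustrated numerically. The formula below is a common rough textbook approximation for the sun's declination, and the day numbers are approximate; neither comes from the original post.

```python
import math

# The sun's declination (the angle that sets how high its daily arc
# climbs) can be approximated as:
#     delta(N) = -23.44 deg * cos(2*pi * (N + 10) / 365)
# where N is the day of the year. Near a solstice the cosine is at an
# extremum, so the day-to-day change in declination almost vanishes:
# the sun's path appears to "stand still", as the etymology says.

def declination_deg(day_of_year):
    return -23.44 * math.cos(2 * math.pi * (day_of_year + 10) / 365)

def daily_change_deg(day_of_year):
    """Absolute change in declination from one day to the next."""
    return abs(declination_deg(day_of_year + 1) - declination_deg(day_of_year))

# Approximate day numbers: Dec 21 is about day 355, Mar 21 about day 80.
at_solstice = daily_change_deg(355)
at_equinox = daily_change_deg(80)

print(f"daily change at solstice: {at_solstice:.4f} deg")
print(f"daily change at equinox:  {at_equinox:.4f} deg")
```

Around the solstice the sun's height changes by only thousandths of a degree per day, versus a few tenths of a degree near the equinox, which is why for several days the sun genuinely seems to have "come to a stop".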
Thursday, September 03, 2009
This is not playdough!
Silly putty is one of the all time favorite toys of the baby-boomer generation (and every generation after them!). Silly putty can be formed into any shape just like regular craft clay, with the added plus that it bounces. In addition, when it is pressed against any of the words or pictures in newspapers printed on standard news pulp, the image is copied onto the silly putty. Kids enjoy lifting an image of their favorite comic strip character and distorting it by stretching and squeezing the silly putty.
Silly Putty (or Dow Corning patent 3179) was invented in 1943. It was originally intended for industrial use as a synthetic rubber, but was not usable because it was not as firm as rubber. Silly Putty was scrapped as a potential product until 1949, when an unemployed advertising executive thought it might be a good idea to market it as a toy. He packaged a run of the substance in plastic eggs, and the familiar plastic egg filled with the mysterious goo has been an American toy icon ever since, with sales in the multi-millions of dollars.
Silly putty is a polymer, or more properly, an elastomer. A polymer is a substance with long, string-like, flexible molecules. An elastomer has these same long molecules, but they are cross-linked at several points along their sides, producing a sponge-like texture. Because of the molecules' natural flexibility, they can be stretched, and they absorb mechanical energy in much the same way as rubber.
Actual silly putty would be difficult to produce in a home setting due to the chemicals needed for its production, but a similar substance that has all the same qualities can be easily made with some basic ingredients found in your home.
What you need:
A bottle of white glue
Powdered Borax
Food coloring of your choice
A measuring cup
Empty Soda Bottle
A plastic zip lock bag
First, mix one tablespoon of Borax powder and one cup of water in the empty soda bottle. Replace the cap and shake the mixture until the Borax has dissolved completely. Now place one tablespoon of glue in the plastic bag along with one teaspoon of plain water. At this point you can add a drop of food coloring to make your creation more colorful. Next, add just one tablespoon of the Borax mixture to the bag and seal it. Now massage the mixture for a few minutes until it begins to set up. It will gradually take on a putty-like texture as the polymer chains grow and interconnect. When you are able to remove the putty from the bag in one piece, do so and begin rolling it between your fingers. The more you roll it the more similar it will be to silly putty.
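For anyone scaling the recipe up for a group, the proportions above can be sketched numerically. The unit conversions here (1 tbsp ≈ 15 mL, borax density ≈ 0.85 g/mL, 1 cup ≈ 237 mL) are rough kitchen assumptions, not figures from the original article.

```python
# Rough proportions for the homemade putty recipe described above.
# Conversion factors are approximate kitchen values (assumptions).
TBSP_ML = 15.0          # ~15 mL per tablespoon
BORAX_G_PER_ML = 0.85   # assumed bulk density of powdered borax
CUP_ML = 237.0          # ~237 mL per US cup; water is ~1 g/mL

def borax_solution_percent(tbsp_borax=1, cups_water=1):
    """Approximate mass percent of borax in the activator solution."""
    borax_g = tbsp_borax * TBSP_ML * BORAX_G_PER_ML
    water_g = cups_water * CUP_ML
    return 100 * borax_g / (borax_g + water_g)

def scale_recipe(batches):
    """Per-bag ingredients multiplied for several batches of putty."""
    return {
        "glue_tbsp": 1 * batches,            # 1 tbsp white glue per batch
        "plain_water_tsp": 1 * batches,      # 1 tsp water per batch
        "borax_solution_tbsp": 1 * batches,  # 1 tbsp activator per batch
    }

print(f"activator is about {borax_solution_percent():.1f}% borax by mass")
print(scale_recipe(3))
```

One bottle of the activator solution goes a long way, since each batch of putty only uses a tablespoon of it.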
Finally, as a word of caution, Borax is not for human consumption – so this putty should be made and played with under competent adult supervision. Have fun!
Design has been described as a pattern with a purpose, which is a very good definition. This raises the question: what is purpose? Purpose itself can't be derived from first principles but is something that must be assumed without proof (compare Gödel's incompleteness theorem). Our theist notion of purpose is premised on the language of Platonic opposites, which is why the Aristotelians formulate grammatically correct but meaningless sentences, a process of Nietzschean Platonic inversion. Thus any notion of purpose will first have to assume either Platonic opposites or the Nietzschean Platonic inversion. When these two world views use the same semantics, they are not communicating the same concept.
The Epicureans of Wikipedia are trying to suppress the meaningless sentences they formulated by redirecting the page; the last revision before they attempted to censor it shows that the following are meaningless sentences:
In Arturo Rosenblueth's cybernetic classification, purpose is a subclass of behavior. Behavior is active or passive, and active behavior is purposeful or random. Active purposeful behavior is then either feedback (teleological) or non-teleological. Negative feedback is important to guide the route to the goal. Purposeful teleological feedback helps guide predictive orders of behavior. Teleology is feedback-controlled purpose.[12][13]
Notes

Behaviorism was shown to be illogical by Chomsky's semantics. He showed that language functions as a composite integrity of grammar, semantics, syntax and pragmatics.
Buy an essay pre-written by the Essay Queen.
The presence of genetically modified crops in our society has raised many concerns and legal issues. Overall, genetically modified organisms are thought to be unhealthy and many people believe that they have right to know the origins of the food they eat. This is extremely problematic with genetically modified crops because genetically modified wheat, corn, and syrups are found in many packaged grocery products without any indication of their use. Therefore, it is essential to inform the public about how genetically modified crops are made, the situations where they are helpful, and the situations where they should be avoided...
Cheryl M., Essay Queen Staff - February 26, 2017
1. Select one (1) of the following biotechnology topics to explore, providing a sound rationale for your selection:
◦Genetically modified crop plants
◦Genetically modified microorganisms
◦Genetically modified animals
◦Personal genomics and / or personalized medicine for humans
◦Gene therapy
2. Organize your paper into the following three (3) sections.
Your assignment must follow these formatting requirements:
The specific course learning outcomes associated with this assignment are:
•Discuss the various applications of genomics and biotechnology.
•Use technology and information resources to research issues in biology.
•Write clearly and concisely about biology using proper writing mechanics.
Genetically Modified Plants
Touching Spirit Bear
What are the conflicts in touching spirit bear?
Conflicts in touching spirit bear
Asked by
Last updated by Aslan
Answers 1
Add Yours
Cole Matthews is what we consider a juvenile delinquent. He is 15 years old and has a lot of emotional issues. His father is an abusive, raging alcoholic and his mother is complacent. So one day Cole's pent-up rage is taken out on a boy named Peter, who Cole thinks told on him. The real conflict is Cole vs. himself. Cole embarks on a journey to understand his rage and take responsibility for his actions. With the help of two Tlingit Indian men, named Edwin and Garvey, Cole gets a chance at banishment (instead of prison). He gets a year on an island, in Alaska, by himself. On this island he faces many physical and emotional challenges. The conflicts that Cole has on the island are mostly internal, as his challenges tie into the main conflict of Cole rediscovering himself.
4 New Viruses That Will Definitely Kill You
The Black Death was no fun. Characterized by its telltale red, irritated lymph node swellings, and its ability to kill two thirds of all humans it infects within four days, bubonic plague, along with pneumonic plague and septicemic plague, its two bacterial cousins, ravaged Europe in the Middle Ages, killing an estimated 25 million people in the 14th century, or roughly 60% of Europe’s population at the time.
Viruses like flu and small pox were partly responsible for the genocide of indigenous people the world over during European colonization, and the Spanish flu of 1918 infected 500 million people worldwide, killing anywhere from 50 to 100 million people. AIDS, first recognized in 1981, has killed an estimated 36 million people.
In the 90s, pop culture caught on to Ebola virus, an extremely deadly pathogen first noticed in what was then Zaire, giving us a pandemic-themed thriller with Dustin Hoffman and a monkey.
Since then, the world has faced some scares like SARS and cute flus like seal flu and prairie dog flu melted our hearts while trying to kill us, but no disease has come forward that will definitely kill you dead. Don’t worry, though, or rather be very worried, because that will change.
Here are four new potential plagues that will definitely kill you.
1. Unknown polio-like illness, California
Over the past 18 months, 20 children in California have been stricken with fever leading to localized paralysis of one or more limbs. The symptoms mirror polio, once a worldwide pandemic that has been eradicated everywhere except Pakistan, Afghanistan, and Nigeria, thanks to vaccination. However, tests for polio have all come back negative.
California lawmakers have urged the United States Centers for Disease Control and Prevention (CDC) to investigate the outbreak (which you’d think they’d already be doing). Again, it’s an unknown disease isolated to California, seemingly rare, but if you really think about it, it will definitely kill you.
2. MERS, Middle East
MERS (Middle East respiratory syndrome) is a distinct species of the genus Betacoronavirus first discovered in 2012. First confined primarily to Saudi Arabia, it has sickened 180 people, killing 77 (that’s 43%, which is crazy). MERS causes a lung infection similar to pneumonia, eventually leading to renal failure, and although it hasn’t spread widely, it did make its way to the UK last year. Last week, scientists linked the spread of MERS to dromedary (one-hump) camels, where it has been occurring for some 20 years, according to reports. That said, two thirds of those sickened with MERS have had no contact with camels.
The takeaway? It seems innocuous, but it will kill you.
3. Avian flu, Asia
This one, hands down, will kill us all.
We all remember 2004’s H5N1 flu epidemic. Was that ever terrifying. This strain of H5N1 had a newly-evolved genotype virologists said developed from 1999 to 2002, and this new strain quickly got to work, decimating the poultry industries of Vietnam and Thailand, and spreading to 10 other Asian countries, including Japan, South Korea and China. This strain of H5N1 is panzootic, meaning it can infect both humans and animals, and be transmitted over wide areas, and that’s exactly what happened. In October of that year, disease experts found it to be far more dangerous than they had previously thought, and now rely on a containment strategy to delay avian flu instead of preventing it, because all hope is lost.
There have been 638 reported cases since 2003, with 379 deaths. The World Health Organization estimates the mortality rate for those contracting H5N1 to be 60%. If you contract avian flu, you are more than likely to die, so if you’re experiencing symptoms, you are already totally fucked.
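As a quick arithmetic check, the counts quoted above really do yield the mortality percentages the article cites. The figures here are just the ones given in the text (638 reported H5N1 cases with 379 deaths; 180 MERS cases with 77 deaths).

```python
# Sanity check of the case-fatality figures quoted in the article.

def case_fatality_rate(deaths, cases):
    """Deaths as a percentage of reported cases."""
    return 100 * deaths / cases

h5n1_cfr = case_fatality_rate(379, 638)
mers_cfr = case_fatality_rate(77, 180)

print(f"H5N1 case-fatality rate: {h5n1_cfr:.1f}%")  # ~59.4%, consistent with the WHO's ~60%
print(f"MERS case-fatality rate: {mers_cfr:.1f}%")  # ~42.8%, the "43%" quoted earlier
```

Note that a case-fatality rate computed from *reported* cases almost certainly overstates the true mortality, since mild, unreported infections never make it into the denominator.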
4. Impending Zombie Flu, a lab in the Netherlands
Everyone loves zombies! Your little sister, all your old college friends, even your grandparents are transfixed to shows like The Walking Dead, and soon they’ll even be able to live it!
That’s right. In 2013, in the journal Nature, flu researchers at the Erasmus Medical Center in the Netherlands said they would be engineering a new strain of H7N9 avian flu, making it more powerful than any other flu. The same team, led by Ron Fouchier, had previously developed a strain of flu that was communicable between humans, a capability that strain did not naturally have.
Science, huh?
Well, rest assured. No doubt, a strain of hybridized rabies-flu is sitting in the same lab that would create an airborne, virulent transmission causing uncontrollable zombie-like symptoms. Most likely a sample is sitting in a test tube in a stand which is perched perilously close to the edge of a counter, and a drunk janitor is coming through to mop up, whistling, staggering, elbows swinging.
I’d say stock up, but no amount of shotgun shells and bottled water will save you this time. Have fun!
You Might Also Like:
Sexually transmitted diseases
• Sexually transmitted diseases are infections you can get from having unprotected sex with someone who’s infected.
• If you’re pregnant and have an STD, it can cause serious problems for your baby, including premature birth and birth defects.
• If you’re pregnant, you get tested for STDs as part of prenatal care. If you have an STD, getting treatment early can help protect your baby.
• Ask your partner to get tested and treated for STDs.
• The best way to protect your baby from STDs is to protect yourself from infection.
What is a sexually transmitted disease?
A sexually transmitted disease (also called STD) is an infection that you can get from having sex with someone who is infected. You can get an STD from unprotected vaginal, anal or oral sex. And you can get an STD during pregnancy—being pregnant doesn’t protect you from getting infected.
Many people with STDs don’t know they’re infected because some STDs have no signs or symptoms. Nearly 20 million new STD infections happen each year in the United States.
Can you pass an STD to your baby during pregnancy?
Yes. You can pass some STDs to your baby during pregnancy or vaginal birth. Vaginal birth is when contractions in your uterus (womb) help your baby out through the birth canal. Getting early and regular treatment for an STD can help prevent you from passing it to your baby.
What problems can STDs cause for your baby during and after pregnancy?
STDs can cause serious problems during pregnancy, including:
• Premature birth. This is birth that happens too early, before 37 completed weeks of pregnancy. Premature babies can have serious health problems at birth and later in life
• Premature rupture of the membranes (also called PROM). This is when the amniotic sac breaks early. The amniotic sac is the sac or bag inside the uterus that holds a growing baby. The sac is filled with amniotic fluid.
• Ectopic pregnancy. This is when a fertilized egg implants itself outside of the uterus (womb) and begins to grow. It can cause serious, dangerous problems for the mom and always ends in pregnancy loss. Most of the time, ectopic pregnancies are removed by surgery.
STD infection during pregnancy can cause problems for your baby after birth, too, including problems with the eyes, lungs and liver. Some of these problems can affect your baby’s entire life. Some STDs can even cause a baby’s death.
Also, being infected with an STD makes it easier for a person to get infected with HIV. HIV stands for human immunodeficiency virus. It’s a virus that attacks the body’s immune system. In a healthy person, the immune system protects the body from infections, cancers and some diseases. Over time, HIV can destroy the cells in the immune system so that it can’t protect the body. When this happens, HIV can lead to AIDS (acquired immune deficiency syndrome).
How do you know if you have an STD?
At your first prenatal care visit, your health care provider does a blood test to check for STDs including:
If you think you may have an STD, tell your provider right away. Early testing and treatment can help protect both you and your baby.
How can you protect yourself and your baby from STDs?
Here’s what you can do:
• If you do have sex, have safe sex. Have sex with only one person who doesn’t have other sex partners. If you’re not sure if your partner has an STD, use a barrier method of birth control, like a male or female condom or a dental dam. A dental dam is a square piece of rubber that can help protect you from STDs during oral sex.
• Go to all your prenatal care checkups, even if you’re feeling fine. You may have an STD and not know it. If you think you may have an STD, tell your provider so you can get tested and treated right away.
• Ask your partner to get tested and treated. Even if you get treated for an STD, if your partner’s infected he may be able to reinfect you (give you the infection again). Ask your partner to get tested and treated to protect you from infection and reinfection.
More information
Last reviewed: February, 2017
5 Ways to Keep Your Spine Healthy
Massage Therapy
Chiropractic Care
Stop Smoking
Good Posture
Balance Your Body With Orthotics
The joints and muscles of the body function most efficiently when they are in physical balance. When foot imbalance is present, there is a negative impact on the knees, hips, pelvis, and the spine. This may lead to symptoms of knee pain, arch pain, hip pain, or lower back pain.
Your feet are the foundation of your body. They support you when you stand, walk, or run. They help protect your spine, bones and soft tissues from damaging stress as you move around.
The foot is constructed with three arches which, when properly maintained, give exceptional supportive strength. These three arches form a supporting vault that distributes the entire weight of the body.
A loss of arch height will cause a flattening and rolling in of the foot. This malpositioning of the foot is termed pronation and is seen when the ankle starts to fall inward, no longer sitting directly over the foot. Because everything is connected, the bones of the leg also inwardly rotate.
This increases pressure on the medial arch of the foot and can cause heel pain or irritation of the connective tissue along the bottom of the foot, leading to plantar fasciitis.
Excessive rotation of the bones of the leg will cause unnecessary stresses on the knee as well as twisting of the pelvis and spine. If the pronation is more prevalent on one side, there can be a resultant unleveling of the pelvis. Tilting of the pelvis places tension on the muscles and connective tissues, which can eventually lead to back problems.
To correct imbalances in the feet, we will often recommend custom molded foot orthotics. Placing orthotics in your shoes is similar to placing a shim under a wobbly table: it adds support to eliminate unwanted motion in the entire structure.
Placing the correct support under the foot helps restore the fallen arch, decrease pronation, eliminate rotation to the leg, and take stress off the joints of the lower back and pelvis. By stabilizing and balancing your feet, orthotics enhance your body’s performance and efficiency, reduce pain, and contribute to your total body wellness.
Weeding Out Back Pain
When the weather starts to warm up after months of cold and snow, many people like to get outside and start doing activities such as gardening, planting, and mowing the lawn. We often find gardening to be relaxing and healing, but all of the lifting, bending, and twisting can cause aches and pains into the neck and back, especially if you are doing them improperly.
As you start putting on your gloves and digging out your lawn equipment, it’s important to keep in mind that injuries can occur. Injuries can also occur from weeding, lifting too much, and mowing the lawn with a push mower.
Weeding the garden can seem like a never-ending job and can easily cause your back muscles to fatigue quickly. One way to prevent the muscles from tiring so quickly is to sit on a bucket or on the ground with a wide base of support. It is also important to remember to support your back when reaching for weeds. Try to keep your spine long and avoid hunching forward in the same position for too long. Remember to take frequent breaks and alternate which arm you are using to balance the muscles being used.
Lifting heavy bags of soil, tools or other materials for gardening can also cause strains on your back. Use wagons and carts with wheels to carry the weight. This will ease the stress on your back from carrying heavy items and lifting them improperly.
When you are lifting, make sure you bend at the hips and knees instead of the waist. The hips and knees are better equipped to carry a heavy load instead of the back. This will help prevent low back injuries from occurring.
Before you do any gardening or yard work, make sure you stretch and prepare your body for these activities. Always be aware of your posture and body form when you are gardening and know your limitations.
If you follow these tips and still have some aches and pains, seeing a chiropractor can help with these symptoms. Chiropractic care will help heal injuries and will get you back into the garden and the yard again. Doctors of chiropractic can help evaluate the imbalances in your spine and muscles that occur from gardening and can help lead you to a healthier life.
Chiropractic treatment can range from ice or heat, ultrasound therapy, electrical muscle stimulation, and adjustments of the spine or extremities. We may include a specific exercise program to help prevent injury, work on stretching and strengthening the muscles and/or focus on correcting and maintain proper posture.
Give us a call to schedule your chiropractic appointment if you have any spring time pain. 320-253-5650
5 Tips to Avoid Golf Injuries
The warm weather season is finally here, and hopefully to stay awhile. With this warm weather we see the golf courses open and the golfers swarming to play the game! Whether you are an avid golfer or a beginner, it is important to know how to play without any pain.
Though golf is considered a low-impact sport, it doesn’t mean it is a sport where injuries do not happen. The most common injuries associated with golf are in lower back, elbow, wrist and shoulder. To enjoy an injury free, golf-filled summer, follow these tips:
1. Use Proper Equipment– This means both your clubs as well as your shoes. Clubs that are properly fitted will help correct posture, technique, and alignment while reducing the risk of injury. Proper shoes are also important. Walking from hole to hole (or cart to hole) means you need good support for the rest of your body, starting with your feet. Not only does Minser Chiropractic Clinic offer custom fitted orthotics for work shoes and athletic shoes but they also come specifically for golfers!
2. Warm Up– Just like any other physical activity, warming up your muscles is extremely important. It reduces the risk of injury by generating blood flow and loosening the joints. Stretch out all areas of the body-arms, back, and legs to avoid injuries.
3. Stop Immediately if You are Injured– Playing while injured, no matter how small the injury, can further aggravate your muscles, pain and cause more damage. Take a few days off to rest and recover.
4. Use Proper Golfing Techniques– It’s important to watch your swing; not only to improve your game but to avoid injury. If you over-swing, you can create strain on your joints and spine causing lower back pain. Lift your clubs properly. Golf bags full of equipment are not light as a feather. Without proper mechanics, you can easily throw out your back by lifting your bag with your back instead of bending at the knees. Watch how you’re grabbing your golf bag to avoid injury.
5. Get an Adjustment!– Chiropractic is known to help prevent injury, increase athletic ability, and speed up recovery from injuries. Our doctors at Minser Chiropractic Clinic can give you more tips and at home exercises to help you improve your game and keep you out on the course.
Have more tips you want us to share? Connect with us and let us know!
7 Steps to Have Better Posture at Work
The medical definition of the word “posture” is the carriage of the body as a whole, the attitude of the body, or the position of the limbs (the arms and legs).
Why should you have better posture?
Your posture affects everything you do! How you walk, sit, move, breathe, and so much more! Better posture at work means increased productivity, better focus, less back pain, and fewer headaches.
Here are a few simple steps you can do to have better posture at work.
1. Align Your Head – A good rule of thumb: having your ears in line with your shoulders is proper posture for your head. Forward head posture can result in tight muscles in your neck and possibly headaches.
2. Stretch Your Shoulders – Hunched desk posture leads to tightened chest muscles and restricted airways. You can regularly stretch your shoulders to help relax your shoulder muscles.
3. Look up! – Keep your monitor centered in front of your body. Adjust your chair or desk so you are not looking downward at your computer, which helps prevent neck strain.
4. Don’t Slouch – Slouching also tightens your chest muscles and can reduce strength in your upper back muscles. Keeping your computer at eye level can help reduce slouching.
5. Exercise and Stretch – Long hours sitting at a desk with little to no breaks can lead to shortened hip flexor muscles. Try to get exercise outside of work and stretch your hips at home. If you can, take walking breaks throughout the day.
6. Keep Wrists Flat – Maintain a flat keyboard surface and keep your wrists above the keyboard when typing. Occasionally roll out your wrists to reduce tightness.
7. Sit Upright & Move Your Feet – Crossing your legs at your desk can lead to poor circulation and misalignments in your spine. Sit up straight with your feet flat on the floor and move your feet frequently to increase blood flow.
Have more questions about posture? Let us know!
What to do about Extremity Pain
Legs ache, arms are numb, fingers tingling, and you can’t feel your feet! You wonder what could be the cause of these symptoms and what you can do to get rid of them.
Did you know chiropractic can help treat extremity pain?
Most people think chiropractic is just a treatment for back pain, neck pain, and headaches, but chiropractors can do so much more! If it’s a joint, a chiropractor can adjust it!
That means your knee pain could be a result of a subluxation in your knee or hip instead of your low back. Your tingling fingers could mean a joint is off in your wrist, elbow, shoulder, neck or upper back!
There are cases where numbness, tingling, and lack of feeling in the extremities are a sign that there could be a subluxation within the spine that is causing interference with the nerves. Another interference could be located in the extremity itself, such as the elbow, ankle or shoulder. A simple solution is a chiropractic adjustment!
Chiropractors are highly trained doctors that know not only how to treat the spine, but also extremities of the body. They are also trained in different adjusting techniques so they can effectively treat people of all ages and sizes. No one is too young or too old for an adjustment!
Tips for a Healthy Spine
A healthy spine is often overlooked and is an essential part of a healthy lifestyle. Approximately 80-90% of the population suffers from spinal pain at some point in their lives. Because so many of us suffer from spine pain, it’s important for you to try to keep your spine as healthy as possible. The American Chiropractic Association recommends the following spinal health tips for you to do at home:
~Avoid twisting while lifting. Twisting is one of the most dangerous movements for your spine. If you must lift a heavy item, get someone to help you.
~Do NOT bend over at the waist to pick up items. Instead, kneel down on one knee, with the other foot flat on the floor, then pick up the item. You can also squat down and pick it up with an extended arm.
~Lying on your side with a pillow between your knees may reduce the pressure on your back while sleeping. Never sleep in a position that causes a portion of your spine to hurt. Sleeping on your stomach can cause a lot of rotation in your neck which can cause misalignments and pain.
~Spinal adjustments are a great way to prevent any future pain or misalignments and can help subside any current pain you are having. Chiropractic care, while mostly used for acute care, can also be used as preventative care!
Black Women Writers @ Southwestern University
An English / Feminist Studies / Race & Ethnicity Studies Course Blog
The Olinka
1 Comment
A few years ago I was a History major with an African History focus, so when reading The Color Purple I was interested in Walker’s portrayal of indigenous African cultures. A brief check on African history proved my suspicions that the Olinka people that Nettie spends extended time with are a fictional culture. I am not sure what Walker’s intentions were in creating a fictional African society to open up a dialogue between African-American and African culture. I think that, for me at least, the implications of this decision are that the Olinka become a kind of “Pan-African” culture. This seems to take away some of the power of this narrative because it loses its specificity. This is not to say that I don’t appreciate some of the tough subjects that Walker at least nods to in her discussion of the Olinka. Subjects like generation, cash crops, and ritual/tradition are a few that I thought were handled with particular complexity. I am just wondering what the implications are for this well known text to have so much power in defining what people think of when they think of Africa and yet the Africa that is described is fictionalized to this extent. Why did Walker choose not to write about the Bassa, who seem to be (geographically speaking) the closest to the Olinka? The Bassa have heavy populations around the coast of Liberia and specifically in Montserrado County, which is where Monrovia (the capital) is. What is lost or gained by creating a culture that can encompass all of the aspects of “African culture” that Walker wanted to discuss? Does a fictional culture that can act as a pan-African example have the same political relevance/force?
One thought on “The Olinka”
1. Perhaps if you look up the meaning of the name “Olinka” you can see why Ms. Walker used it, and maybe that is an assumption on my part. Sometimes we try to be so deep when the answer is right there on the surface.
December 2016
An introduction to mentoring
• By Dr Ali Burston, Organisational Psychologist, Metisphere
Fostering a mentoring relationship can have positive career benefits for both mentors and mentees
What is mentoring?
Mentoring involves a partnership between a less-experienced individual (the mentee) and a more-experienced individual (the mentor) where the purpose is the personal and/or professional growth of the mentee. Although the goals of the mentoring relationship may differ across both settings and relationships, nearly all partnerships involve the acquisition of knowledge (Allen and Eby, 2007).
Mentoring has been of interest to the business community since the 1970s after claims that it contributed to the success of very senior executives (Gibb, 2008). These early mentoring relationships developed informally through mutual interest. Since then, mentoring has undergone considerable research as organisations and industry groups have created structured mentoring systems to leverage off these advantages.
Mentoring is sometimes confused with other types of developmental activities. In particular, the distinction between coaching and mentoring is often ambiguous. Coaching is a short-term, task-focused intervention designed to teach skills and improve performance in order to take on new responsibility (Harvard Business School, 2004). In contrast, mentoring relationships are generally longer term, are focused on strategic career progression and incorporate a holistic approach to the mentee’s professional and personal development.
On a professional level, mentors provide career advice and guidance. If a mentor and mentee both work for the same organisation, the mentor may actively support their mentee by recommending them for particular assignments or introducing them to more senior members of the organisation, commonly known as ‘sponsoring’ (Ghosh and Reio, 2013). Where a mentor and mentee work for different organisations, mentors provide external objective advice by sharing the benefits of their own experiences and industry knowledge and suggesting specific career strategies to assist the mentee in developing relevant skills and knowledge (Ghosh and Reio, 2013).
On a personal level, a mentor can provide a source of psychosocial support that contributes to the mentee’s sense of professional effectiveness, identity and competence (Ghosh and Reio, 2013). Mentors provide a source of support through success and failure, are a sounding board for ideas and can act as a role model for the mentee to emulate (Ghosh and Reio, 2013). Given the right motivation, mentors can benefit greatly from sharing advice and guidance with a less-experienced person.
Types of mentoring
Mentoring in business or industry can involve different formats, including self-directed (informal) or participating in a structured program (formal). However, in both formats, the focus is on the mentee, their career and support for individual growth and maturity.
Self-directed (informal) mentoring relationships generally develop between a mentor and mentee due to mutual interest, respect and friendship (Inzer and Crawford, 2005). The quality of the mentoring relationship is related to mentee benefits, so it is not surprising that informal relationships, sustained by mutual interest from both parties, are more often associated with optimal outcomes (Gilmore, Coetzee and Schreuder, 2005).
Structured (formal) mentoring relationships involve participating in an industry-led or organisational mentoring program (Inzer and Crawford, 2005) typically designed to meet organisational or industry-wide objectives (Parise and Forret, 2008). Structured programs may include training and specific goal setting and may mandate meeting frequency and program duration. Structured mentoring programs clarify the roles of mentors and mentees and can include a training component (Allen, Eby and Lentz, 2006; Eby and Lockwood, 2005). Structured mentoring programs also control the matching process, so while it may take longer to establish a new and trusting relationship initially, structured mentoring can have very positive outcomes if it is managed in a professional manner.
How can having a mentor be beneficial throughout my career?
In a business context, a mentor is an experienced individual that offers information, advice and guidance for the mentee’s personal and professional development (Harvard Business School, 2004). The role of a mentor is likely to vary over an individual’s career in line with common developmental tasks in each career stage (Isabella, 1988). For example, the benefits of having a mentor in the following career stages can be defined as follows.
Student
The most important tasks for students are to discover their particular interests, skills and aptitudes and choose a career direction (Hess and Jepsen, 2009). Mentors may be able to assist with questions such as:
• What is a mining cycle?
• What are the advantages/disadvantages of working on-site?
• What is it really like to work as a (insert role here)?
• Are you able to provide feedback on my résumé?
By sharing their own journey and work experiences, mentors give students a realistic preview of a profession and industry and can suggest strategies to enter a particular career pathway. Mentors may also be able to alert students to vacation/apprentice opportunities or recommend them for casual work or junior roles in an organisation.
Early career stage (1-5 years into career)
In the early career stage, individuals are creating their professional identity, developing technical competence in their role and building the political skills to successfully negotiate the world of work (Isabella, 1988). At this career stage, mentors may be able to assist with questions such as:
• What are the advantages/disadvantages of working on-site and when should I transition?
• What industry groups will provide the best networking opportunities for me?
• What does my ‘future self’ look like?
• When is a good time to pursue further study?
Mentors can provide advice to young professionals on how to manage challenging situations and provide insight into the ‘unwritten rules’ of an organisation, profession or industry.
Young professional (5-10 years into career)
Once technical skills are established, individuals frequently want their expertise recognised and strive for career progression, which often has to be balanced with family and personal commitments (Isabella, 1988). Working remotely, doing long hours and being on a ‘fly-in, fly-out’ roster are common in the mining sector but may impose additional challenges on individuals and relationships (Pirotta, 2009). At this career stage, mentors may be able to assist with questions such as:
• How can I develop the best relationship with my superintendent?
• How do I work effectively with different personalities in my team?
• How do I find stability in an unsettled climate?
• When do I know if I am on the right track – am I doing what I like doing now?
A mentor can provide empathy and practical strategies to address career issues and enhance an individual’s ability to adapt and cope with the unique challenges posed by the mining industry.
Mid-point (10-20 years into career)
For individuals at the mid-point of their career, there may be a choice of becoming a technical specialist in their field or moving into more senior management or corporate roles, such as project management or business improvement. While being mentored themselves, individuals at the mid-point of their career may also be mentoring less-experienced mentees. In this career stage, mentors may be able to assist with questions such as:
• When is a good time to pursue further study, for example an MBA?
• When should I review my short- and long-term goals?
• What type of satisfaction could I receive from volunteering in my community?
• Am I making a difference?
Mentors may assist mentees in developing the strategic thinking, commercial awareness and financial knowledge necessary for senior managerial or executive board roles.
Mature (20+ years into career)
Individuals that are well-established in their profession and looking towards the end of their career may reflect on their career journey, examine their priorities and have an increased focus on developing others and giving back to the industry (Isabella, 1988). At this career stage, very senior mentors may be able to assist with questions such as:
• How has my career strategy played out? What has and what hasn’t worked?
• How have I achieved my goals and what can I pass on to young professionals?
• Am I interested in seeking new Board positions or enhancing community engagement?
• Do I have a strong social and professional network?
What are the advantages of being a mentee?
As the purpose of mentoring is to develop personal and/or professional growth in a mentee, there are numerous advantages to sourcing a mentor. Some opportunities that are made available by seeking a mentor include:
• develop strategic career planning techniques and the facilitation of career goal achievement (Gilmore, Coetzee and Schreuder, 2005)
• learn from a mentor’s industry experiences, particularly how to work through mining cycles and become adaptable in a role
• learn from a more experienced person about success, sustainability and adaptability
• learn about self-promotion and capitalise on a mentor’s networks (to enhance the mentee’s own networks)
• work through organisational topics such as conflict resolution, workplace politics and a changing workforce.
As a result of these opportunities for growth, research has confirmed that mentees receive numerous personal and professional benefits, including:
• greater job satisfaction (Allen et al, 2004)
• lower levels of work stress (Allen et al, 2004)
• higher self-esteem (Allen et al, 2004)
• increased technical and behavioural competence (Gilmore, Coetzee and Schreuder, 2005)
• increased confidence (Gilmore, Coetzee and Schreuder, 2005).
The greatest benefits of mentoring result from a high-quality relationship between mentor and mentee (Ragins, Cotton and Miller, 2000). When approaching a mentoring partnership, mentees should be prepared to listen, reflect, discuss and learn from an experienced professional. Importantly, mentees must respect and appreciate their mentor’s time and efforts (Harvard Business School 2004; Wallace and Gravells, 2007).
What are the advantages of being a mentor?
Mentors play a crucial role in the lives of their mentees, organisation, profession and industry through their development of the next generation of high-performing professionals. Becoming a mentor offers many opportunities for personal and professional growth, including:
• learn more about personal strengths and areas for development
• engage with the next generation and keep abreast of changing workplace values, culture and technology
• share experiences, tell stories and provide guidance to a less-experienced person
• create pathways for a less-experienced person that contain clarity, transparency and are sustainable
• share networks and provide new prospects.
As a result of these opportunities for growth, research has confirmed that mentors receive numerous personal and professional benefits, including:
• enhanced job satisfaction (Ghosh and Reio, 2013)
• enhanced organisational commitment (Ghosh and Reio, 2013)
• an intrinsically rewarding experience in watching others develop with the knowledge that they have contributed to a mentee’s success (Allen, 2003; Parise and Forret, 2008)
• enhanced leadership skills (Allen and Eby, 2007)
• higher work performance (Ghosh and Reio, 2013).
Mentors can be of any age, but are recommended to be experienced and influential and have a history of success in their field (Conway, 1998; Delahaye, 2011; Otto, 1994). When approaching a mentoring partnership, mentors will need to develop rapport, trust and confidence in their mentee to ensure positive outcomes for both parties.
To purchase mentoring webinars presented by Dr Ali Burston, visit
Allen T D, 2003. Mentoring others: a dispositional and motivational approach, Journal of Vocational Behavior, 62(1):134–154.
Allen T D and Eby L T, 2007. The Blackwell Handbook of Mentoring: A Multiple Perspectives Approach (Blackwell Publishing Ltd: Hoboken).
Allen T D, Eby L T and Lentz E, 2006. Mentorship behaviors and mentorship quality associated with formal mentoring programs: closing the gap between research and practice, Journal of Applied Psychology, 91:567–578.
Allen T D, Eby L T, Poteet M L, Lentz E and Lima L, 2004. Career benefits associated with mentoring for proteges: a meta-analysis, Journal of Applied Psychology, 89(1):127–136.
Conway C, 1998. Strategies for Mentoring: A Blueprint for Successful Organizational Development (John Wiley & Sons Ltd: Chichester).
Delahaye B, 2011. Human Resource Development: Managing Learning and Knowledge Capital, third edition (Tilde University Press: Melbourne).
Eby L T and Lockwood A, 2005. Protégés and mentors reactions to participating in formal mentoring programs: a qualitative investigation, Journal of Vocational Behavior, 67:441–458.
Ghosh R and Reio T G, 2013. Career benefits associated with mentoring for mentors: a meta-analysis, Journal of Vocational Behavior, 83(1): 106–116.
Gibb S, 2008. Human Resource Development: Process, Practices and Perspectives, second edition (Palgrave Macmillan: New York).
Gilmore N, Coetzee M and Schreuder D, 2005. Experiences of the mentoring relationship: a study in a mining company, SA Journal of Human Resource Management, 3(3):27–32.
Harvard Business School, 2004. Coaching and Mentoring: How to Develop Top Talent and Achieve Stronger Performance (Harvard Business School Publishing: Boston).
Hess N and Jepsen D M, 2009. Career stage and generational differences in psychological contracts, Career Development International, 14(3):261–283.
Inzer L D and Crawford C B, 2005. A review of formal and informal mentoring: processes, problems, and design, Journal of Leadership Education, 4(1):31–50.
Isabella L A, 1988. The effect of career stage on the meaning of key organizational events, Journal of Organizational Behavior, 9(4):345–358.
Otto M L, 1994. Mentoring: an adult developmental perspective, in Mentoring Revisited: Making an Impact on Individuals and Institutions, (ed: M A Wunsch) (Jossey-Bass Publishers: San Francisco).
Parise M R and Forret M L, 2008. Formal mentoring programs: the relationship of program design and support to mentors’ perceptions of benefits and costs, Journal of Vocational Behavior, 72(2):225–240.
Pirotta J, 2009. An exploration of the experiences of women who FIFO, The Australian Community Psychologist, 21(2):37–51.
Ragins B R, Cotton J L and Miller J S, 2000. Marginal mentoring: the effects of type of mentor, quality of relationship, and program design on work and career attitudes, Academy of Management Journal, 43(6):1177–1194.
Wallace S and Gravells J, 2007. Mentoring, second edition (Learning Matters: Exeter).
The Origins of the Sitar
A Plucked String Instrument of Indian Origin
A sitar is a plucked string instrument common to Classical Indian music, particularly in the Hindustani (northern Indian) classical traditions. Mechanically, the sitar is a fairly complicated musical instrument. It bears sympathetic strings — strings which are tuned, but not plucked and instead simply vibrate and hum when the strings nearby are played — as well as movable frets and over 20 strings!
Origins of the Instrument and How It's Played
The sitar is typically played by balancing the instrument between the player's opposite foot and knee. For instance, a left-handed player might hold it against his right foot and stretch it over his left knee.
The player then uses the mezrab, a metallic pick, to pluck individual strings, adjusting tone with a thumb which remains on the fretboard.
Although more adept players can employ techniques to give the performance flair, many of the frets are already preset to play microtonal notes, allowing the seamless and flowing transitions between notes the sitar is most known for.
Application in World Music
It wasn't until the rapid globalization of music from the 1950s onward that the sitar truly went global. As early as the 1950s, virtuosos like Ravi Shankar began taking the instrument on world tours, sparking a newfound interest in this popular Indian instrument.
This led to a short-lived 1960s fad of using sitars in Western pop music. The Beatles famously used a sitar on their hit songs "Norwegian Wood (This Bird Has Flown)," "Within You Without You" and "Love You To" in the mid-1960s, and the Rolling Stones used one on "Paint It Black."
The psychedelic rock community especially liked the Indian-sounding melodies the sitar could produce. The Doors famously drew on Indian scales in their albums, often using other instruments along with the sitar to provide a groovy, enchanting backing track to their brand of trippy rock.
Today, electronic musicians, pop artists, world music ensembles and even YouTube-famous guitarists use the sitar to evoke Indian melody in their performances.
Wednesday, November 2, 2011
Blood circulation discovery by Ibnu Nafis
Ibnu Nafis was perhaps one of the greatest cardiologists of the Arab-Islamic civilization and of pre-modern times.
Ala-al-Din Abu al-Hasan Ali Ibn Abi al-Hazm al-Qarshi al-Dimashqi (known as Ibn Al-Nafis) was born in 1213 A.D. in a small town near Damascus. He was educated at the Medical College Hospital (Bimaristan Al-Noori) founded by Noor al-Din Al-Zanki.
Apart from medicine, Ibn al-Nafis learned jurisprudence, literature and theology. He thus became a renowned expert on the Shafi'i School of Jurisprudence as well as a reputed physician.
In 1236 Ibn Nafis moved to Egypt and worked in Al-Nassri Hospital and then in Al-Mansouri Hospital, where he became chief of physicians and the Sultan’s personal physician. When he died in 1288 A.D. he donated his house, library and clinic to the Mansuriya Hospital.
The most voluminous of his books is Al-Shamil fi al-Tibb, which was designed to be an encyclopedia comprising 300 volumes, but was not completed as a result of his death. He managed to publish only eighty. His book on ophthalmology is largely an original contribution and is also extant.
His book that became most famous, however, was Mujaz al-Qanun (The Summary of the Canon), along with a number of commentaries that were written on this same topic.
His commentaries include one on Hippocrates' book, and several volumes on Ibn Sina's Qanun, which are still extant. Likewise he wrote a commentary on Hunayn Ibn Ishaq's book.
Another famous book embodying his original contribution was on the effects of diet on health entitled Kitab al-Mukhtar fi al-Aghdhiya.
Ibnu Al-Nafis was an Arab physician who made several important contributions to the early knowledge of pulmonary circulation.
In the third century BC the ancient Alexandrian physician Herophilus maintained that arteries and veins were attached to each other; this scientific fact was long neglected, then revived by the Arab physician Abu al-Abbas al-Majusi (died 994 AD) in his book Kamil al-Sina’a al-tibbiya, which states that if the artery is cut, venous blood is discharged through it.
The Greek physician Galen developed an early model of the circulatory system involving two types of blood. He believed that the blood was manufactured in digestive glands and then passed through the liver.
The blood then travelled to the ventricles of the heart where it was mixed with life-giving properties before being consumed in the tissues of the body.
He concluded that the veins alone carried the blood of the circulatory system, while the arteries carried the life-giving air.
Ibnu Al-Nafis was the first person to challenge the long-held contention of the Galen School that blood could pass through the cardiac interventricular septum.
He was the first to correctly describe the constitution of the lungs and gave a description of the bronchi and the interaction between the human body's vessels for air and blood. He also elaborated on the function of the coronary arteries as suppliers of blood to the cardiac musculature.
Article 4
Language and Syntax
The language used, the terms used and the sentence structures are so simple as to be almost deceptive, and a person with an advanced education, with all its sophisticated terms and jargon, is almost certain to dismiss the whole work as childish imagination. That is the magic of this work and its underlying metaphysics - one has to come down to the very basics of life and human existence, to the point that, at the very least, we realise that today, even with all the technology around, we are living a deceptive existence that is only alienating us further and further from our essence.
Despite the simplicity of language, the dramatic and romantic quality of the dialogues is unique. The presenting style and sequence all combine to create an unfolding drama of magic and intrigue that shatters all of one's preconceptions of reality (provided there is room for it), forcing one to re-evaluate all one has learned.
"As a rule, he always concluded each of our sessions on an abrupt note; thus the dramatic tone of the ending of each chapter is not a literary device of my own, it was a device proper of Don Juan's oral tradition. It seemed to be a mnemonic device that helped me retain the dramatic quality and importance of the lessons."
In the first few books, since no technical words are used, the sentence structure is very simple. So simple that for most of the time, the reader has to generate the mood and the context of the conversation.
Every word has many possible meanings behind it, so it is imperative on the part of the reader to be able to figure out the most appropriate one for the work to be useful. As an example, let's look at the usage of a deceptively simple word like 'power'.
'Power' in its most common usage means 'strong' or 'forceful' or 'influence and control over other people or situations', but don Juan's use of this word has a very different context to it. Rather the possible contexts in which this word is used are extremely wide, but relate to one's personal situation and control. In short it applies primarily to the ability to have discipline and control over one's own faculties, rather than the ability to influence others. In the later books don Juan uses the term 'personal power' rather than 'power' but also says that this power is not that which is owned by the person or belongs to the person as such, rather it is something that is only a sort of temporary beholding that actually commands the person's actions instead of the person commanding the 'power'.
Again, the usage of the word also depends upon the topic of conversation. For instance when Carlos is asking about the effects of the devil's weed, don Juan uses the word 'power' in the specific context of the 'powers' of the devil's weed itself, rather than 'power' as is generally applied. Carlos is confused......
DJ : The root gives them an effect of pleasure, which means they are strong and of violent nature - something that the weed likes. That is the way that she entices. The only bad point is that men end up as slaves to the devil's weed in return for the power that she gives them. But those are matters over which we have no control. Man lives only to learn. And if he learns, it is because that is the nature of his lot, for good or bad.
(Here Carlos fails to understand the correct context of don Juan's use of the word 'power'.)
DJ: The weed is used only for power (he finally said in a dry, stern tone). The man who wants his vigour back, the young people who seek to endure fatigue and hunger, the man who wants to kill another man, the woman who wants to be in heat - they all desire power. And the weed will give it to them. Do you feel you like her?
Now compare this with other sentences in which 'Power' is used :
Personal power is all that we have in this magnificent world.
It must therefore be kept in mind that although simple and unsophisticated language is used, it takes a peculiar mood and a particular state of mind to grasp the dramatic points that don Juan makes. A scientific, rigidly logical, or even religious framework will not lead to any comprehension or usefulness of the work.
|
Friday, 27 April 2007
Maybe I could widen the research project to encompass different types of pixel graphics produced by other media - namely early 8-bit computer games and the early Internet. By also bringing in aspects of the written/printed media I could draw up similarities and contrasts between the different methods of production.
Looking at some of the work by Jodi has inspired me to think of areas such as ASCII art, which has not been a widely used medium in teletext but follows the same principle of using a basic character set. This article talks about the quest to code a font that reproduces the teletext font on the PC. It seems the Minitel ASCII and the one used on computers are different despite containing, on the whole, much the same characters. The image to the left shows an Atom VDG (a display chip for early 1980s computers) character set. Note the strange pixel characters: these are similar to the ones used in teletext to create images.
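These mosaic 'pixel characters' work by treating each character cell as a 2×3 grid of blocks, one bit per block, so six bits select one of 64 possible shapes. A minimal sketch of the idea (the bit-to-cell layout here is my assumption for illustration, not the exact Teletext or VDG code-point mapping):

```python
# Sketch: render a 6-bit teletext-style mosaic character as a 2x3 grid of blocks.
# The bit layout below is assumed for illustration, not the real code-point map:
#   bit 0: top-left      bit 1: top-right
#   bit 2: middle-left   bit 3: middle-right
#   bit 4: bottom-left   bit 5: bottom-right

def render_mosaic(bits: int) -> str:
    rows = []
    for row in range(3):
        left = (bits >> (row * 2)) & 1       # even bits pick the left cell
        right = (bits >> (row * 2 + 1)) & 1  # odd bits pick the right cell
        rows.append(("##" if left else "..") + ("##" if right else ".."))
    return "\n".join(rows)

print(render_mosaic(0b000011))  # only the top row filled
print(render_mosaic(0b111111))  # a solid block
```

Stringing such cells together is how teletext pages build whole images out of a text-only character set.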
Some possible areas for exploration
Take the three mediums and explore the following criteria.
> Use of language. Visual and textual.
> Use of interaction. Methods of interaction - special devices. Remote controls, joypads, pucks etc.
> Format of medium itself. Teletext - televisions - compatibility. Device people are familiar with. Also don't have to purchase a new system
> Purpose and target audience. Reaching different parts of the world, individual services for regional areas. Different reasons in different areas but largely based on information provided by the TV channel.
> Longevity and popularity. Are they still around today?
> Medium as an information service
The main thing, it seems, is that the Internet has largely overtaken the purpose of teletext and, due to its advances in technology, has widened its scope, while teletext remains much the same. Whilst the 'new age' of digital set-top boxes has brought reduced loading times, the abundance of the net and its now relatively inexpensive access have contributed to teletext's decline in popularity.
The original service as we know it will be phased out in the next few years in the UK with the switch to digital (a digital teletext page from Teletext Ltd. is shown in the screenshot on the right). Interestingly, teletext, along with early video games, has gained renewed popularity through the Internet's spread of emulation. Maybe this could be my closing statement.
|
Coffee is something many people cannot start their day without. The energy provided by coffee along with the increase in brain function and alertness makes it a staple among the working class throughout the world. It is considered a drug in the health community in the same sense that nicotine or alcohol are. There is a lot of talk regarding the risks to one’s health that the beverage carries along with its energy benefits. So what exactly is your morning beverage doing for or against you?
For starters, coffee contains many things, and caffeine, although the seemingly main component, isn't exactly the "star" ingredient, at least where our health is concerned. Coffee also contains various antioxidants, nutrients, and vitamins that are essential to our overall well-being.
Here Are Several of the Nutrients You Can Find per 8 oz. Cup
Manganese: 3% of the RDA.
Magnesium: 2% of the RDA.
Potassium: 3% of the RDA.
Vitamin B1 (Thiamin): 2% of the RDA.
Vitamin B2 (Riboflavin): 11% of the RDA.
Vitamin B3 (Niacin): 2% of the RDA.
Phosphorus: 1% of the RDA.
Folate: 1% of the RDA.
|
Monday, January 24, 2011
A new ancestry for elephants
Changes in names and taxonomic classification are a common occurrence as our knowledge of living and extinct species expands. In fact, around 10% of all taxonomic names are changed every year (Nimis, 2001). These changes occur mainly in microbiology, where morphology is difficult to apply, and only rarely among charismatic megafauna, e.g. elephants.
However, there has been a long-running dispute over whether the African savanna elephant and the African forest elephant are merely subspecies of the African elephant (Loxodonta africana), or whether the genus Loxodonta needs to be reorganised.
Recent work by Rohland et al. (2010) compared genetic markers of the genomes of the iconic woolly mammoth (Mammuthus primigenius) and the American mastodon (Mammut americanum) with the modern African savanna elephant, African forest elephant, and Asian elephant.
In the authors' words: "A surprising finding from our study is that the divergence of African savanna and forest elephants—which some have argued to be two populations of the same species—is about as ancient as the divergence of Asian elephants and mammoths. Given their ancient divergence, we conclude that African savanna and forest elephants should be classified as two distinct species."
As we see, there is no certainty in taxonomy. And there goes my favourite example of a stable taxonomic name.
Nimis, P. L. (2001), A tale from Bioutopia - Could a change of nomenclature bring peace to biology's warring tribes?, Nature, 413(6851), 21, doi:10.1038/35092637.
Rohland N, Reich D, Mallick S, Meyer M, Green RE, et al. (2010) Genomic DNA Sequences from Mastodon and Woolly Mammoth Reveal Deep Speciation of Forest and Savanna Elephants. PLoS Biol 8(12): e1000564. doi:10.1371/journal.pbio.1000564
|
Economic collapse is inevitable, here’s why…
Examine the evidence outlined below, connect the dots and think for yourself.
“All truth passes through three stages.
First, it is ridiculed.
Second, it is violently opposed.
Third, it is accepted as being self-evident.”
– Arthur Schopenhauer
What does the “national debt” even mean?
Let's cover the basics first. When the government cannot cover its spending with the revenue collected from corporate and income taxes and the other fees it imposes, it goes into debt. The U.S. national debt is the sum of all outstanding debt owed by the federal government. It includes the money the government has borrowed, plus the interest it must pay on this debt.
However, since 1974 our annual deficit has gone from $4 billion to a shocking $1.33 trillion… stop and think about that for a second… this means that our current annual budget shortfall is roughly triple the size of the total U.S. debt in 1974. Our national debt in 1974 was $484 billion… it is now approaching an unprecedented $16 trillion!
How is that possible? How do you go through World War I, World War II, the Korean War and the Vietnam War with only $484 billion of debt, then skyrocket to $16 trillion in such a short time?! The answer has to do with a key event in 1971 that we'll go over in a moment… for now, let's stick with the national debt, so we can understand why it is no longer sustainable.
In a letter to Thomas Jefferson, 1787
– John Adams, Founding Father
Sixteen trillion dollars, so what?
Imagine you decided to count to one million out loud. How long do you think it would take you at a pace of one number per second? If you did it non-stop, it would take about 12 DAYS. Now, how long would it take you to count to one trillion?… The answer?… 32,000 YEARS!!!
Last one… if you had a trillion dollars in $10 bills and taped them all end to end, your ribbon of money would be so long that you could actually wrap it around planet Earth more than 380 times!!!… But even that amount of money would still not be enough to pay off the U.S. national debt.
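These illustrations are easy to sanity-check with back-of-the-envelope arithmetic; the bill length and Earth circumference below are approximate figures I'm assuming, not numbers from the article:

```python
# Sanity-check the counting and money-ribbon illustrations.
SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25

days_to_a_million = 1_000_000 / SECONDS_PER_DAY             # about 11.6 days
years_to_a_trillion = 1_000_000_000_000 / SECONDS_PER_YEAR  # about 31,700 years

BILL_LENGTH_M = 0.156               # a US bill is roughly 15.6 cm long (assumed)
EARTH_CIRCUMFERENCE_M = 40_075_000  # roughly 40,075 km (assumed)

bills = 1_000_000_000_000 / 10      # $1 trillion in $10 bills
wraps = bills * BILL_LENGTH_M / EARTH_CIRCUMFERENCE_M  # about 389 wraps

print(round(days_to_a_million, 1), round(years_to_a_trillion), round(wraps))
```

The results land close to the article's 12 days, 32,000 years, and "more than 380 times".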
Are you getting the picture yet?
On the right is an illustration of our federal debt that might help you get a better idea visually. You can click on that image to see a larger size.
Keep in mind that what you are looking at are pallets of $100 bills stacked on top of each other. To give you an idea of the size and height of these pallets, the Statue of Liberty is standing in the center, in proper scale relative to the money towers. The cash surrounding and dwarfing the Statue of Liberty constitutes $16.394 trillion. This represents our current debt ceiling, which we're scheduled to hit in September of 2012.
It’s interesting to note that when we hit this debt ceiling this year, our government will once again move the ceiling up to allow for the debt to grow. Now ask yourself, what is the point of a movable ceiling? A movable ceiling is an oxymoron. If you can move your debt limit on demand, why bother pretending that you have a debt limit in the first place?
– Abraham Lincoln, 16th President of the United States
Statistics the government would rather you didn’t know
• In 2011, the government borrowed $41,000 every second.
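That per-second figure squares with the annual deficit quoted earlier: spreading a $1.33 trillion shortfall evenly over a year works out to roughly $42,000 borrowed every second:

```python
# Deficit per second, assuming the $1.33 trillion annual deficit cited above.
deficit = 1.33e12                    # dollars per year
seconds_per_year = 86_400 * 365.25
per_second = deficit / seconds_per_year
print(round(per_second))  # roughly 42,000
```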
Another $54 trillion excluded from the national debt figures
The short video on the left was broadcast by CNN in 2007 featuring the head government accountant David Walker.
According to David Walker, who served as United States Comptroller General in the Government Accountability Office from 1998 to 2008, the U.S. government's real financial burden is close to 70 trillion dollars.
These liabilities are ticking time bombs, primed to explode with each new wave of retiring baby boomers. On top of this, medical costs continue to rise across the board driving medicare expenses through the roof.
Keep in mind that at the time this video was broadcast our national debt was “only” around 9 trillion dollars and it is now close to 16 trillion. The catastrophic economic problems predicted by our government’s head accountant are playing themselves out right now.
What's most disheartening is that David Walker was eventually forced to accept that warning Washington about its unsustainable debt was a waste of effort. His warnings of the impending financial collapse fell on deaf ears, as both administrations simply ignored him. In desperation, Mr. Walker quit his job as the federal government's chief auditor to travel around the country and find ways to deliver his message directly to the public.
Father of the Constitution and The Bill of Rights, James Madison is quoted saying:
How did we get in so much debt?
To outline all the events that lead us to this mess would take a separate article, but here’s a quick summary.
In 1913 Congress passed the "Federal Reserve Act," relinquishing the power to create and control money to the Federal Reserve Corporation, a private company owned and controlled by bankers. Over time, more and more legislation was passed to expand the Federal Reserve's functions. The Fed (short for Federal Reserve) was granted two extremely critical powers: the ability to purchase U.S. treasury securities and the ability to manipulate interest rates. Interest rate manipulation and quantitative easing (pumping money into the economy) by the Fed are the two driving forces behind the boom/bust cycles and economic bubbles.
The Fed was supposed to be the guardian of the U.S. currency; in reality, it turned out to be a debt and bubble machine, run for profit by greedy bankers.
Our founding fathers understood the danger of putting the power to control the currency of a nation in the hands of a few individuals in the form of a monopolistic central bank and were vehemently opposed to such a system.
In 1944, as World War II was drawing to a close, representatives of 44 allied nations met in Bretton Woods, New Hampshire, where the dollar (backed by gold at $35 per ounce) was accepted as the world reserve currency.
America was granted unprecedented benefits as the issuer of the dollar. However, the gold standard restricted the Federal Reserve from printing money unless it had the gold to back up the new currency. Even though this ensured the stability of the dollar and a strong economy, such restrictions would not be tolerated by the Fed for very long.
In 1971, under President Nixon, the U.S. moved away from a gold-backed monetary system to a fiat paper, debt-based monetary system, which allowed the Federal Reserve to print dollars out of thin air.
This opened the door for unrestricted spending and borrowing. Once we moved away from a “gold standard” to a “debt-currency system” it was only a matter of time before America transformed from the world’s biggest creditor to the world’s biggest debtor.
If you look at the national debt chart by scrolling up, you can see a direct parallel between the explosion of debt and U.S. switching to fiat currency in 1971. Once the Fed could create dollars out of nothing, it took only a few years for the government debt to gain an exponential climb rate.
Now on the surface, Federal Reserve’s ability to print money with no restrictions might sound great since you can just create new currency on demand… but it carries with it two very grave consequences. Consequences that we’re paying for now.
The first consequence is inflation. Each time the Fed issues new dollars, it increases the money supply, which in turn diminishes the value of the rest of the dollars already in circulation. Basically, that means the more dollars are printed, the less they are worth. As inflation rises, so do prices and the cost of living. Inflation also encourages spending and debt, and discourages saving and capital formation. In the long run, currency inflation wipes out the wealth of the middle class and wrecks the economy. By the way, the dollar has lost 95% of its value since the Federal Reserve took over in 1913.
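The 95% figure corresponds to a surprisingly modest-sounding average rate: a value that loses 95% of its purchasing power over the roughly 99 years from 1913 to 2012 implies only about 3% inflation per year, which shows how the compounding does the damage. A quick check (the 99-year span is my reading of the article's timeframe):

```python
# Average annual inflation implied by a 95% loss of purchasing power
# over roughly 99 years (1913 to 2012).
remaining_value = 0.05   # 5% of the original purchasing power left
years = 99

annual_factor = remaining_value ** (1 / years)   # yearly purchasing-power multiplier
implied_inflation = 1 / annual_factor - 1        # about 3.1% per year

print(f"{implied_inflation:.1%}")
```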
The second consequence is that we (the people) go into debt every time new money is created. When the government needs extra money, beyond what it collects in taxes, it issues U.S. treasury bonds, which are interest-bearing IOUs guaranteed by the government. These bonds are exchanged with the Federal Reserve for currency. This process is called "monetizing the debt", hence the term "debt-currency" system. The Federal Reserve collects the interest and the taxpayers collect the debt. The bankers prosper and the people get enslaved.
Besides debasing the dollar and binding America into debt, the Fed manipulates the interest rates overriding market self regulation. These manipulations create bubbles resulting in devastating consequences for the economy and the average American.
President Andrew Jackson refused to renew the charter (a grant of monopoly) of the Second Bank of the United States. In 1836 Jackson said to the bankers trying to persuade him to renew their charter (so they could continue their harmful monopoly):
-– Andrew Jackson, 7th President of the United States
How is the U.S. government going to finance 70 trillion in liabilities?
If you have been paying attention so far, you should be able to guess correctly… by borrowing. The U.S. government is planning to finance 70 trillion in obligations by selling treasury securities (interest bearing IOU’s) putting America into even more debt.
Since our national debt is exploding and our annual deficit keeps growing every year, we're forced to admit an obvious fact: our government cannot pay its debt without taking on more debt.
This is, by definition, a Ponzi scheme. To keep a Ponzi scheme going you must have a constant and ever-expanding flow of investors. If the flow stops or even slows down, the whole thing starts to collapse. This is why the government must continuously raise the official debt ceiling.
All Ponzi schemes eventually collapse and our debt-currency system has the same fatal flaw by design.
The video on the left was broadcast on CNBC, May 24, 2012:
Peter Schiff, CEO of Euro Pacific Capital, who not only famously predicted the 2008 housing bubble, but also predicted the specific banks that would go under, as well as the government’s exact response to the 2008 crisis, makes the following statements about U.S. treasuries (short for U.S. treasury securities… again these are interest bearing IOU’s the government must sell to pay for obligations):
There’s no safety in U.S. treasuries. When interests rates go up, we’ve got to default on those treasuries. We can’t pay a market rate of interest, let alone retire the principal. Most of the treasuries that are being bought have very short maturities. We have 5 or 6 trillions coming due in the next year, we can’t pay that back. We’re counting on our creditors to loan us back the money to repay the debt. This is a Ponzi scheme.
It’s the same situation as I said Greece was in. They had no problem selling their bonds when the rates were low. But the minute people figured out that the Greeks couldn’t repay the debt, they didn’t want to buy them anymore. The same thing is going to happen. You have a false perception of safety in the Treasury market. It’s not safe at all. It’s a trap. And it’s being set by Central Banks, the Fed is the biggest buyer, they’re buying like 90% of long term treasuries…
How long can we keep borrowing?
Some economists like to imagine that we can just grow our debt endlessly because we have the ability to print dollars out of thin air. These "experts" allege that the treasuries market is as strong as ever and we can just keep borrowing endlessly. These are the same "experts" that insisted real estate prices would continue to rise perpetually, right up to the 2008 crash. They argue: just raise the debt ceiling and keep growing that debt evermore.
But even though we can raise our debt ceiling time after time, there is still a natural debt limit we can not cross. The notion that our government can keep growing our debt without end is preposterous.
First, it's based on the foolish assumption that the rest of the world is willing to lend us money that they know we can't pay back. Second, it ignores a mathematical consequence: exponential growth due to interest alone.
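The "exponential growth due to interest alone" is easy to illustrate with a toy projection. The starting debt, constant deficit, and interest rate below are assumed round numbers for the sketch, not official projections:

```python
# Toy projection: each year the debt grows by interest on the balance
# plus a new deficit.
debt = 16.0      # starting debt, in trillions of dollars (assumed)
deficit = 1.3    # annual deficit, in trillions (assumed constant)
rate = 0.03      # assumed average interest rate on the debt

for year in range(10):
    debt = debt * (1 + rate) + deficit

print(round(debt, 1))  # roughly 36 trillion after 10 years
```

Even with these flat assumptions the balance more than doubles in a decade, and any rise in the interest rate compounds the climb further.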
We’ve been able to get away with borrowing so much up until now because the dollar is the world reserve currency, but this privilege has its limits. It’s also a privilege we’re going to lose because we have been shamelessly abusing it.
The Federal Reserve has been keeping interest rates artificially low to help the government keep borrowing. Of course, this is no favor on the Fed's part, because the end result is debt enslavement. Since whatever the government owes is inherited by the people, it's the people who get screwed in the end. If interest rates were allowed to return to market levels, it would help prevent the government from borrowing beyond its means.
However, at this point our lenders are realizing that our debt has long passed a sustainable level. If you have ever applied for a loan, you should be familiar with this universal rule: when the borrower is in too much debt, the loan becomes high-risk, and so the lender demands higher interest to make the reward worthy of the risk. With every passing day the U.S. plunges into a deeper debt pit, and this makes lending to the U.S. (by buying treasury securities) an increasingly risky investment.
To make things worse, the Fed is devaluing the dollar at an increasing pace by issuing bailouts, stimulus packages, quantitative easing, etc… and our lenders are realizing this too. This means that the dollars that our creditors are loaning to us now, are worth less when they get them back.
For these two reasons, the U.S. treasury securities (government IOU’s) are now high-risk, low-return investments. What was once considered the safest investment is now a Ponzi scheme at the point of collapse.
Who will bail out America when it runs out of lenders?
Our pool of willing lenders is starting to shrink as our creditors are waking up to the fact that treasuries are now a high-risk, low-return investment. To compensate for this the Fed is forced to buy up all the long term U.S. treasuries in an effort to artificially stimulate demand, to keep up the smokescreen. Of course this only inflates the U.S. bond bubble even more.
When the pool of willing lenders dries up, the scheme will reach its end and the final bubble will explode. Without lenders, the U.S. government has only two appalling choices: default on debt or hyper-inflate the dollar.
Option one is to default on all debt, essentially declaring bankruptcy to renegotiate all obligations. This would create a severe financial shock as the dollar collapses and loses its status as reserve currency. This would lead to a sharp increase in the cost of nearly everything, as more US dollars would be needed to pay for imports, resulting in a catastrophic economic impact for every American. The government will be forced to cut spending dramatically. A broad range of government payments would have to be stopped, including military salaries, Social Security and Medicare payments, unemployment benefits, tax refunds, etc. Companies would be crushed by a US consumer that would no longer have any buying power. In addition, credit would dry up virtually overnight, which would force untold numbers of companies to shut their doors. Unemployment in the country would spike to obscene levels. Interest rates would rise significantly forcing millions of families with adjustable mortgages to go into foreclosures.
Option two is to have the Federal Reserve create trillions upon trillions of dollars out of thin air. This creates an illusion that the debt is being paid back, but in reality the dollars issued to pay the debt would become increasingly worthless, turning rapid inflation into hyperinflation. This would actually create a much worse scenario than the first option, as hyperinflation would be even more economically destructive for the average American. Prices would soar to unimaginable levels and unemployment would skyrocket. The average American would be forced to work overtime just to put food on the table, that is, if he or she is lucky enough to still have a job.
It's worth mentioning that it is highly unlikely that the U.S. will choose default (option one). Even though hyperinflation is by far more destructive for the American people in the long term, the government will most likely try to print its way out.
Either way the economy will collapse. Economically, the first option would feel like a heart attack and the second option like a terminal cancer.
The ripple effects of either scenario would be unprecedented. It would not be the end of the world, but you can expect massive social unrest, protests, riots, arson, etc. Supply disruptions at all levels. Basic utility failures and infrastructure decay. Rampant violent crime, especially in metropolitan areas. Eventually followed by a long and very painful readjustment of living standards for most Americans.
What if we cut spending, raise taxes and balance the budget?
It's amazing that even now you hear the same old catchphrases thrown around by politicians on all the major news shows, like "recovering economy", "budget cuts" and "responsible spending". But anyone out there who insists that this crisis can be fixed under our current system is lying.
The spending cuts and tax increases that Congress is talking about are absolutely meaningless when compared to how rapidly our debt is exploding.
Calling those cuts and taxes “pocket change” would be an insult to pocket change.
No bailout, stimulus package or manipulation by Federal Reserve is going to avoid the massive financial pain that’s coming our way.
So what can our government do to fix the current financial crisis and avoid the dollar crash? What would it take?
It would take the kind of measures that our government considers too extreme to even discuss, so there's no chance of them being approved. For starters, we would need to abolish the Federal Reserve, go back to the gold standard, shut down overseas military bases, completely reform the tax code, restructure entitlement programs, etc.
Unfortunately, proposing such changes is the fastest way to lose your political funding, become the laughing stock of Washington and be ignored or ridiculed by the mainstream media. Just ask Ron Paul.
Our Congress knows full well that fighting against the system is political suicide. And so no meaningful change that would help lessen the impact of the coming crash will be approved.
As far as the Oval Office and Congress are concerned, postponing the crash by issuing bailouts and stimulus packages is a more politically favorable approach, even though this ensures an even bigger catastrophe at the end.
The bottom line is this: we're on a path to an inevitable dollar crash. The ones that run our monetary system and hold the keys to our economy are actually part of the problem instead of the solution. The ones in power who could make the desperately needed changes dare not.
Rather than risk their careers, they will continue to shamelessly distribute our hard-earned money among their friends on Wall Street. The handful of honest politicians who are actually brave enough to stand up for the people are shut out by the system.
At this point, we're on a runaway train without brakes, so you had better brace yourself. The good news is, there is still time for you to prepare for what's ahead. Most people will be completely unprepared when the whole thing comes crashing down.
Don’t be part of that group.
How do I prepare for the coming crisis?
Whether you are broke or wealthy, whether you live in an apartment or a mansion, no matter what your current situation, there are specific things you can do to prepare for the impending dollar crash.
The next article we publish will focus on a step-by-step action plan that you can follow to minimize the impact of the financial meltdown on you and your family. It will include practical but critical actions you should take to protect your loved ones from the ensuing chaos, along with financial advice to safeguard whatever savings you might have.
To be notified when we publish the article on “How to prepare for the coming crisis” please subscribe to our notification list on the right. We hate spam just as much as you do, so we don’t sell or share your email or ever send out spam, your info is kept absolutely private and safe.
Lastly, please share this with your family, friends and coworkers. Warn the ones you care about by emailing them the link to this page. We need to wake our people up from their entertainment-induced comas.
– The memo that proves the bankers caused the GFC
– JPMorgan: fined $13b, stole $ trillions and 2 million people lost homes
– Exposed: JP Morgan’s gold and silver price manipulation
– The world according to the global power brokers
– Swiss to investigate manipulation of currency markets
– The biggest price-fixing scandal ever
– The coming derivatives panic that will destroy global financial markets
– World Bank whistleblower reveals how the global elite rule the world
|
High Alkaline Foods - The Secret to a Strong Immune System
Did you know that high alkaline foods can improve your health condition and strengthen your immune system? Our body is made up of both acid and alkaline substances, thus our body ph is measured based on the acid-alkaline ratio.
The normal pH of human blood is about 7.4. This ratio can either increase or decrease depending on our lifestyle. To maintain good health, we need to keep our body's acidity at a much lower level than our level of alkalinity. Unfortunately, our body's pH balance can be destroyed by an unhealthy lifestyle and bad eating habits.
Yes, the food we eat can be either alkalizing or acidic. Alkaline foods leave an alkaline residue in our system, while the exact opposite happens when we consume acidic foods: instead of leaving alkaline ash, these leave acid ash, or acid residue, in our system. If what you frequently eat is acidic, it follows that your body will be acidic as well.
Nature has provided us with so many choices of high alkaline foods. Examples of these are papaya, melons, watermelons, celery, broccoli, watercress, green beans - just to name a few. We can be glad that practicing a healthy diet doesn't have to be boring. We can choose from a wide variety of fruits and vegetables to sustain our pH balance. As long as we eat more alkalizing foods, we can get rid of excess acid and toxins in our system.
Along with the acid-alkaline diet, we can strengthen our immune system by avoiding highly acidic products such as artificial sweeteners, soft drinks, liquor, artificial fruit juices and carbonated drinks. We must also protect ourselves from exposure to toxins in our environment such as gasoline fumes, air fresheners, insect sprays, and smoking.
Take note that second-hand smoke is even more toxic than first-hand smoke. Even if you have quit the habit, if you're frequently exposed to people who smoke, you are also in danger. Last but not least, engaging in regular exercise or physical activity is just as important in boosting our immune system.
|
Wednesday, 9 November 2011
Honduran White Bat
Image: Wikimedia
They come from parts of Central America and are absolutely tiny! Just 4 or 5 cm (1.5 or 2 in) long. Their nose and ears are yellow and the rest of the body is covered in fuzzy, white fur. They're nocturnal, spending the nights eating fruit and resting through the day.
But these little sugar dumplings aren't content with some cave or the rotten innards of a gothic tree to sleep in. These fellows have bright, white fur to live up to. And shining brightly against the gloom of a shadowy roost is not a great idea for such a tiny creature. So they build themselves a tent instead.
First, they select a great, big leaf a few feet off the floor. Then they cut through the veins that branch out either side of the main central vein, called the midrib. This causes the whole leaf to almost fold in half under its own weight.
The Honduran White Bats now have a nice roof over their head, although they roost upside down so I guess it's a roof over their feet. There are no walls so hopefully the neighbours aren't too nosy, especially since each tent will house 1 male and his harem of around 5 females.
But why are they white? The answer may well be because sunlight filters through the leaf and casts the bats in a green hue, which provides them with some great camouflage. So they're white because it helps them to become green! I guess the only alternative was green paint?
6 comments:

said...
Hahaha..."The white *isn't* there to make them endearing to us humans."
Comment1 said...
:D I guess it's always worth pointing that kind of thing out!
TexWisGirl said...
they're so cute in that first photo - they look like little mites clustered together.
Comment1 said...
Yes! Sort of weird how you don't like "louse" but "mite" is kinda cute. Still, they definitely look adorable all huddled together like that!
Crunchy said...
They look like little cotton balls! With... incredibly freaky noses.
Comment1 said...
Bats almost always have a freaky nose!
|
Literary Essay: "Animal Farm" by George Orwell, The Ultimate Corruption of Napoleon
Essay by Natvin, Junior High, 9th grade, A+, January 2007
"Power tends to corrupt and absolute power tends to corrupt absolutely."
-Lord Acton
Throughout time there have been many rulers, and for many of them the desire for power overtook their ability to govern wisely and compassionately. Numerous rulers began with ethical beliefs and true concern for their people, but as their personal need to control others grew, so did their level of destructive behavior. Once they became figures of authority and refused to heed any advisor's warning of caution, they were drawn into a deteriorating position from which they could no longer escape, slowly acquiring a quality of total self-absorption coupled with a merciless sense of cruelty towards others. Everyone must realize and always keep in mind that a sincere leader is one who works for his people rather than above his people. In the novel Animal Farm, the absolute ruler of the animals, Napoleon, portrays precisely this image of power and its tendency to corrupt its possessor.
It illustrates his gradual rise to power and the ensuing attitude of supreme superiority towards the other animals.
Napoleon first begins ruling alongside Snowball, another pig on the farm. The two constantly disagree with each other, and Napoleon takes little interest in Snowball's committees. Snowball comes up with the brilliant idea to build a windmill, but building it would entail much hard work and great challenges, and Napoleon contends that the animals should attend to their current needs rather than plan for a distant future. Of course, Napoleon knows the windmill will be extremely productive for the animals but cannot allow himself to acknowledge Snowball's valuable input.
Throughout his initial steps of authority, Napoleon never shows interest in the strength of Animal Farm itself, only in the strength of his power over it. Thus, the only project he...
|
Posted by: robotnews | March 24, 2006
Why Do We Need Homeland Robotic Security Systems?
u0307999 ZHAI NING
Do you still remember the scene of the 9/11 attacks? It was an unexpected attack that demonstrated how a small group of people can wield enormous destructive power against the U.S., a country once thought invulnerable to large-scale terrorist attacks. These events unveiled the limitless possibilities for more to come if we do not secure ourselves well.
The introduction of weapons of mass destruction furthers the ability of a small group with relatively limited military assets to wreak havoc through asymmetrical warfare or terror. The principal defense against surprise attacks of this or any other nature is advance warning, which inherently depends upon the timely and accurate collection and assessment of appropriate information.
This is where robotic security systems come in. We need advanced detection schemes to detect what goes undetected by human beings, and advanced assessment technology to identify different scenarios. Because the parameters of the threat are constantly changing, a homeland security robotic system must be highly adaptive and able to learn from the past. For this new kind of homeland robotic security system, it is therefore relatively hard to achieve results as satisfactory as those of other robotic systems.
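The adaptive, learning-from-the-past behavior described above can be illustrated with a toy example: a detector that learns a running baseline from past sensor readings and flags any reading that falls far outside it. The thresholds and data below are illustrative assumptions, not drawn from any real security system:

```python
# Minimal adaptive anomaly detector: maintains a running mean and variance
# of past sensor readings (Welford's online algorithm) and flags readings
# that fall more than `k` standard deviations from the learned baseline.
import math

class AdaptiveDetector:
    def __init__(self, k: float = 3.0):
        self.k = k        # sensitivity: how many std-devs counts as anomalous
        self.n = 0        # number of readings seen so far
        self.mean = 0.0   # running mean of readings
        self.m2 = 0.0     # running sum of squared deviations

    def observe(self, x: float) -> bool:
        """Return True if x looks anomalous, then learn from it."""
        anomalous = False
        if self.n >= 10:  # wait for a minimal baseline before judging
            std = math.sqrt(self.m2 / (self.n - 1))
            anomalous = std > 0 and abs(x - self.mean) > self.k * std
        # Welford's update keeps the baseline adaptive to new data
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

detector = AdaptiveDetector()
for reading in [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 9.0]:
    if detector.observe(reading):
        print(f"alert: reading {reading} deviates from learned baseline")
```

A real system would of course need far richer models, but the same principle applies: the baseline is learned from past observations rather than fixed in advance.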
|
Can we predict which infants will grow up to offend?
1. Introduction
2. Control theory
3. Involvement in crime
4. Family as the most important source of social control
5. Nye's concept of social control
6. Internal (personal) beliefs, commitment attitudes and perceptions
7. Discussion on `bonds' of social control. through Travis Hirschi's book `Causes of delinquency'
8. Gottfredson ideologies
9. Individuals who have low self-control are drawn to criminal activity
10. Family - the main vehicle of socialization
11. Robert Agnew's arguments
12. Conclusion
13. Bibliography
Positivist criminology's vision was to become so advanced that criminologists could identify a criminal before any crime was committed. Positivism emerged in the late 19th century and endeavored to utilize scientific methodology to explain crime and criminals. Early positivist thinkers such as Cesare Lombroso (1835-1909), who originated the theory of the `criminal type', suggested that criminals had physical attributes by which they could be identified. This crude evaluation of criminals has since evolved: positivist scientific approaches now center on the root causes of crime, suggesting that while an individual may commit offenses, the causes lie in social conditions. This paper will discuss control theory, focusing on social control, informal controls, and why it is that some individuals do not commit crime, in order to answer the question: can we predict which infants will grow up to offend?
[...] This paper will focus on the argument that, while individuals may commit offenses, the ingrained causes of crime lie in social conditions. Social conditions may have enough influence on a child that he or she grows up to offend, but is it possible to predict which infant will be affected by social conditions, and thereby predict its future criminal habits? Very unlikely; but understanding the characteristics of an infant's social conditions is a plausible position from which to begin. [...]
[...] So finally, in answer to the question `Can we predict which infants will grow up to offend', my answer would be no. Although I am in agreement with many theorists that there are many factors that contribute to children growing into offenders, I believe we are all individuals; we act and react to situations uniquely, so to identify one area and suggest that it could predict which infant will grow up to offend is implausible. Reckless stated `that if a boy is really rotten down to his biology, then there is little that either outer or inner containment can do to prevent the beast from rising' (Reckless, 1956). [...]
[...] To go against one's conscience can result in feelings of guilt, shame, and remorse. Juveniles, as well as adults, want to avoid such negative feelings. Therefore, according to the theory, those who have strong internal controls will be less likely to commit delinquent acts, in order to avoid feeling guilty. Indirect controls relate to identification with parents and other non-criminal persons. In relation to delinquent behaviors, children who are strongly attached to their parents tend to care about what they think and therefore avoid behaviors that would upset or offend them. [...]
|
Are you Feeding Your Anxiety?
Losing weight is one of the most common struggles among Americans. According to a 2012 Gallup poll, 36.1% of Americans are overweight and 4% are considered morbidly obese. Weight gain is not simply a matter of calories in and calories out. Hormones, stress levels, metabolism, detoxification pathways, and behavioral health all play vital roles in weight management. Ultimately, how and why we consume food, in addition to the calories it contains, drives the number on the scale.
The question to ponder is why do humans eat? Animals eat for survival but humans also eat for pleasure. We all have rituals of preparing our food to develop flavors and then sharing the prepared food with others. The complexity of the rituals surrounding food can stem deep from our minds. Humans, with our large brains and established frontal lobes, have more complex emotions than our animal counterparts. Dealing with these emotions can be difficult, causing us to resort to external sources in order to cope.
Many find enjoyment in preparing and consuming food. But when consuming food is a reaction to external or internal stress, emotional eating can occur. Emotional eating is a common behavior that includes binging, purging, or constant snacking due to some type of stress. Humans consume food in various ways in order to improve energy levels, to improve or change a mood, or simply to relieve boredom.
A study published in the Journal of the American Dietetic Association investigated the meaning of food in a population of parenteral nutrition-dependent adults. The goal was to obtain a deeper understanding of how stress related to food intake and how eating influenced the perception of quality of life. Interviews with the patient population in this study revealed three main themes in food consumption: eating for survival, eating for health benefits, and eating for socialization.
The researchers noted that being able to eat and enjoy food was regarded as an important aspect for quality of life. Patients who were merely eating to consume calories rated themselves as having poorer quality of life. In addition, the social and emotional context of food and mealtime was found to be an important component for the quality of life.[i] Food is a way to feel connected with other people. As a stress response, for people who have an inadequate support network, food can be a viable option. Many times, helping a patient find alternate ways to react to stress can replace the negative association they make with food.
On the other hand, the study also reinforced what we learned as toddlers: if we don’t like it, we won’t eat it. We must like what we eat. Developing a taste for unusual or new foods takes time. Food largely has a cultural basis, and people often eat what is easy to find. Our first palate is developed largely from what others feed us, namely parents and school lunch programs. Prepared foods offer ease and time-efficiency in cooking, and thus have become popular in many cultures, especially in the United States. Fast food has inundated our culture. McDonald’s has come to be known as a universal word associated with American globalization.
McDonald’s spends more money on marketing and advertising than any other corporation in the United States. A survey found that 96% of American children could positively identify Ronald McDonald. The only fictional character with a higher degree of recognition was Santa Claus. It is clear that the behavioral shaping for developing a taste for fast food begins long before people have any awareness of health concerns.
For working families, the consumption of fast food is the result of the perception of saving time, saving money, and the perception that fast food tastes good. There are several studies that report a positive correlation between fast food consumption and rates of obesity in populations. Specifically, in a 2004 study researchers found “a correlational relationship between both the number of residents per fast food restaurant and the square miles per fast food restaurants with state-level obesity prevalence.”
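The "correlational relationship" the researchers describe can be computed with Pearson's r; the sketch below does so from scratch on invented state-level numbers (the data are illustrative placeholders, not figures from the 2004 study):

```python
# Pearson correlation between fast-food restaurant density and obesity
# prevalence, computed from scratch. All numbers are invented for illustration.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical: restaurants per 10,000 residents vs. % obese, by state
density = [2.1, 3.4, 1.8, 4.0, 2.9, 3.7]
obesity = [22.0, 27.5, 20.1, 29.3, 25.0, 28.1]
print(f"r = {pearson_r(density, obesity):.2f}")  # a value near +1 = strong positive correlation
```

A high r says the two quantities move together across states; as always, correlation alone cannot establish that fast-food density causes obesity.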
That leads us to another important question, how can we change negative eating behaviors associated with a poor diet? Guiding a patient to better health means assessing their own fears surrounding changing their current eating rituals. Instead of just looking at what foods to eliminate, we must look at the ritual surrounding the consumption of that particular food. We must help the individual rate the joy surrounding food or understand what purpose the food may be giving that person.
When managing a patient’s diet, their biggest fear may be giving up the food or the food ritual that they most enjoy. Behavior change may not be successful if your patient feels deprived. It is important when making dietary changes to make sure that the change equally provides what the original food or behavior provided. Only then can an individual enjoy alternate foods or alternate food rituals in a beneficial way. Feelings of deprivation may sabotage attempts at breaking unhealthy habits. The focus becomes less on breaking unhealthy habits and more on adding beneficial replacements. The goal is to teach the patient to readily embrace the newer, healthier behavior.
Addressing the feelings surrounding eating patterns takes a little bit of soul searching. If the patient can dig down deep and find that connection between food and feelings, they may find the key to change. When a patient finds the balance of food, behaviors, and emotions, they may find a way to achieve emotional balance in a healthier way. Feeding the soul through hobbies, living your dreams, and non-food-oriented social contexts are a few ways to change a diet without relying on willpower alone. So go ahead, work up an appetite doing something you love; don’t let your appetite work you!
|
Heed the lesson from Disneyland measles outbreak
Several Disneyland employees diagnosed with measles
Story highlights
• A measles outbreak resulted after exposure at Disneyland; 36 became ill
• Cynthia Leifer: Fewer people getting vaccinated, raising risks for all of us
Cynthia Leifer is an associate professor of immunology at Cornell University and a 2015 Public Voices Fellow at the Op-Ed Project. The opinions expressed in this commentary are hers.
(CNN)For many kids, a trip to Disneyland is a dream come true. But for some of those kids and their families who visited "the happiest place on Earth" a few weeks ago, that dream has become a nightmare. In the past month, 36 people have come down with measles traced to an exposure at the theme park, including five employees of Disneyland.
Prior to the introduction of a vaccine in 1967, nearly every American got the measles; but since 2000, the disease has effectively been eliminated in this country, with the only sources of exposure being foreign visitors or Americans who traveled abroad and brought it back. The threat is growing, however, because not enough people are getting vaccinated, and even for those who have been vaccinated, the overall trend is a problem for all of us.
Cynthia Leifer
Vaccination rates in the United States have been in steady decline since the late '90s. Seventeen states now have fewer than 90% of children vaccinated for measles. Oftentimes, like-minded people who have unvaccinated children cluster in the same communities, thus creating pockets with very low vaccination rates.
The reduction in vaccination rates reveals one of the quirks of vaccines; they only protect a population if nearly everyone is vaccinated. This is the concept of herd immunity. The magic number for measles is 95, which means if at least 95% of people in a community are vaccinated, everyone is protected. This is because the chance of the virus finding the individuals in the group who have little or no protection is very low. If a vaccinated, and protected, person gets measles they may not even know it, but the virus will be stopped in its tracks by their immune response before they can make more people sick. However, the more poorly protected, or unprotected, people there are, the easier it is for the virus to find them, make them sick and spread.
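The 95% figure follows from simple arithmetic: if one case infects R0 others in a fully susceptible population, then immunizing a fraction p of the population cuts transmission to R0 × (1 − p), and an outbreak dies out once that product falls below one. A minimal sketch (the R0 range of 12-18 for measles is a commonly cited estimate, not a figure from this article):

```python
# Herd immunity threshold: the fraction of a population that must be immune
# so that each case infects, on average, fewer than one new person.
def herd_immunity_threshold(r0: float) -> float:
    """Solve r0 * (1 - p) < 1 for p: the threshold is 1 - 1/r0."""
    return 1 - 1 / r0

# Measles is among the most contagious known diseases; estimates of its
# basic reproduction number R0 commonly fall in the 12-18 range.
for r0 in (12, 18):
    print(f"R0 = {r0}: {herd_immunity_threshold(r0):.0%} of the population must be immune")
```

With R0 in that range the threshold lands at roughly 92-94%, which is why public health guidance rounds up to 95%.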
For those of you who are vaccinated and might be thinking you're protected from infection even in communities with vaccination rates lower than 95%, you still need to worry. Adults can be affected, too. In fact, 13 of the 18 confirmed cases of measles in Orange County were adults. While the measles vaccine is extremely effective -- the best we have -- its effectiveness can wane over time, leaving both children and adults vulnerable even if they were vaccinated. Therefore, the risk of any one of us coming down with a completely preventable childhood disease like measles will increase if vaccination rates continue to decline.
Disney officials have recognized the importance of vaccination. After learning of the exposure, they offered vaccination and immunity tests to their employees, according to a statement issued by the Walt Disney Parks and Resorts chief medical officer, Dr. Pamela Hymel. Of the five employees affected, three have been medically cleared to return to work, a spokesman said, and others are on paid leave until medically cleared.
To be sure, there are valid medical reasons that some children can't be vaccinated, such as allergies to components of the vaccine, underlying diseases that compromise the immune system, or because they are simply too young. But these children vitally depend on herd immunity to protect them from childhood diseases.
The unfortunate reality, however, is that more and more parents are choosing not to vaccinate their children for nonmedical reasons. Some refuse vaccines on the grounds of religious beliefs; others refuse on the repeatedly disproved argument that vaccines contribute to autism. Their high-risk decision not to vaccinate endangers not only their children, but also those who can't receive the vaccines, and even those of us who have had the vaccine a long time ago, and depend on herd immunity.
Regardless of the reason why parents choose not to vaccinate their children, it is important for the rest of us to realize they are making the choice for all of us, too. By not vaccinating their own children, they increase everyone else's chance of getting a preventable childhood disease like measles, whooping cough or even polio. Just last week, a 25-day-old baby died of whooping cough, which, like measles, is also spreading unnecessarily in the United States due to the decrease in vaccine rates.
Just like the drunk driver who makes a socially irresponsible decision that can endanger not only his life, but also the lives of the other drivers and passengers on the road, parents who choose not to vaccinate their children put everyone else at risk.
We can each play a part in protecting children by making sure parents understand their responsibility to vaccinate their children and the potential consequences on all of society if they don't. It's a small world after all, and the actions of the few can, and do, affect the many.
|
(redirected from howls)
howl at someone or something
1. Lit. [for a canine] to bay at someone or something. The dog howls at me when I play the trumpet. The wolves howled at the moon and created a terrible uproar.
2. and hoot at someone or something Fig. to yell out at someone or something. The audience howled at the actors and upset them greatly. We hooted at the singer until he stopped.
3. Fig. to laugh very hard at someone or something. Everyone just howled at Tom's joke. I howled at the story Alice told.
See also: howl
howl someone down
and hoot someone down
Fig. to yell at or boo someone's performance; to force someone to stop talking by yelling or booing. The audience howled the inept magician down. They howled down the musician.
See also: down, howl
howl with something
See also: howl
1. n. something funny. What a howl the surprise party turned out to be when the guest of honor didn’t show up.
2. in. to laugh very hard. Everybody howled at my mistake.
|
Metroids are not pets. Metroids are not for target practice.
Space Pirate logs from Metroid Prime 2: Echoes
Physiology and morphologyEdit
Possibly the first Metroid ever created by the Chozo.
The Metroids on SR388 possess a multitude of different body shapes and sizes, a result of their complex life cycle that drastically alters their physiology as they grow. They begin life as jellyfish-like creatures before molting into a form akin to an arthropod's, until eventually reaching adulthood with the features of a theropod. Metroids are highly aggressive creatures with no natural predators and are thus at the top of the food chain on SR388. The organisms live off the mysterious "life energy" that they drain from their victims. What this energy is remains unknown, as the victim loses no bodily fluids but perishes nonetheless. Metroids can also feed on energy used by both artificially-created beings and technology such as Samus Aran's Power Suit. In addition to draining energy, they can heal other life forms by injecting stored life energy, as demonstrated in Super Metroid. This stored energy can also be drained from the Metroid, allowing the creature to be used as a living rechargeable power cell.
All Metroids on the planet are sterile with the exception of a single individual whose sole purpose is to propagate the population of the species: the Queen. While this may suggest at first that the creatures form a society devoted to their progenitor, in a similar fashion to an ant or bee colony, this is not the case, as older members of the species are found scattered across SR388 living independently in order to grow. In fact, only Larva Metroids are seen in groups near their birthing site. Interestingly, fully grown adults are found close to the Queen's location, but still at a sufficient distance to suggest an independent lifestyle.
Despite their various forms there is a specific, physical feature shared by all Metroids inhabiting SR388: the presence of a translucent membrane enclosing a red nucleus. This nucleus has numerous neuron-like connections extending from it which connect to the inner surface of the membrane. Its function is unknown, but it is of vital importance to the Metroid, as destroying the membrane kills the organism. Strangely, Metroids in the larval phase possess numerous nuclei within their membranes and are invulnerable to damage unless frozen, while those in more advanced stages possess a single nucleus and can be readily wounded without the need to freeze it first. This suggests that the defensive strength of a membrane is determined by the number of nuclei present inside. An even greater mystery of their biology is the Metroids' ability to perpetually float in midair, despite lacking any visible means of propulsion.
A Metroid Egg.
It has been stated in numerous in-game sources, in particular Metroid Fusion, that the Metroids' true weakness is cold temperatures, yet a strange contradiction is found in older individuals inhabiting SR388 whose carapaces protect them from the effects of the Ice Beam. In any case, SR388 possesses no such sub-zero climate, thus the species are capable of roaming every corner of the planet virtually unchallenged. It is unknown if this vulnerability was intentionally implemented by their Chozo creators.
The Metroids were designed with the intention to control, perhaps even to the point of rendering extinct, the extremely lethal, rampant and once-dominant X Parasite population. As such, the Metroids are immune to the X's infection, allowing them to feed directly on the parasites' pure gelatinous forms. In addition, the Chozo implemented a metamorphosis that allows Metroids to acquire new abilities as they grow, such as generating electricity and launching toxic and fire-based projectiles. These powers are undoubtedly used in their battles against X Parasite hosts and their mimicries, which viciously defend themselves using the abilities of prior victims. Indeed, the Metroids were successful in lowering the X population to the point that Samus was able to safely explore SR388 without encountering a single one. Even in the absence of their main diet, Metroids are more than capable of feeding on other lifeforms as substitutes.
Life CycleEdit
The first encounter between the Galactic Federation and the Metroid species consisted of a horde of Metroid larvae attacking the crew of a research vessel visiting SR388. This incident prevented adequate data from being collected on the planet's ecosystem, and consequently it was erroneously believed for years that the Larva Metroid was the standard form of the entire species. Supporting this false belief was the fact that Metroids cannot metamorphose into the natural advanced stages seen in Metroid II, Samus Returns and Fusion unless they are exposed to SR388's atmosphere and environmental stimuli.
It is only during Samus' mission on SR388 to exterminate the species that the full life cycle seen below would become known.
Metroid EggEdit
Eggs are laid by the Queen Metroid. They are usually oval in shape and are covered in a secretion that seemingly hardens into root-like structures that keep the eggs firmly connected to the ground. Inside the egg is a developing Infant Metroid that eventually bursts violently from it.
Infant MetroidEdit
The infant has a simple body shape similar to a jellyfish, with no visible sensory organs. More than half its mass is composed of the species' characteristic membrane and four red nuclei. Underneath these is a mass of flesh and/or muscle with four tiny fangs. The organism appears and behaves relatively docile, though it is known to be highly unpredictable at times and can attack without warning.
Larva MetroidEdit
Commonly seen as the face of the entire species due to it being the first Metroid form discovered by the Federation. Its physiology is nearly identical to an Infant's, except every aspect is much larger and more vicious. Larvae can only be harmed with concussive weaponry after being frozen with the Ice Beam. Power Bombs can also harm them, though their level of effectiveness varies greatly. Like a moth or butterfly's caterpillar form, the Larva Metroid has a voracious appetite. Seemingly unique to this stage and the previous Infant form is their responsiveness to Beta-Rays, which cause them to split and multiply within 24 hours. This trait, along with its hunger and its high susceptibility to being frozen, makes the larva a preferable choice for controlled bioweapon research. The Larva Metroid and Infant Metroid can be grouped together as the larval phase of the Metroid species.
Alpha MetroidEdit
An Alpha Metroid molts out of a larva's massive membrane. The creature has developed a carapace, four minuscule claw-tipped legs, a mouth with three tusks and a pair of eyes, altogether giving it an arthropod-like appearance. The membrane has been relocated to the underside of its body and encloses only a single nucleus; although never confirmed, it would seem that the other nuclei were re-purposed into the creation of the Metroid's new body. Starting at this stage, the Metroid organism is no longer susceptible to the effects of the Ice Beam and will carry its only remaining nucleus for the rest of its life cycle.
Gamma MetroidEdit
This stage is an expansion of the prior form. The four legs have reached their full length, the carapace has developed further, four additional eyes are present and, most notably, the tusks have not only enlarged but can now generate electricity. The Metroid can use these medium-ranged projectiles as both an offensive and defensive option. The Gamma and Alpha stages can be categorized as a Metroid's pupal phase, although they remain mobile, similar to a mosquito's pupa.
Zeta MetroidEdit
The Zeta Metroid molts from the back of the Gamma Metroid. Its body shape shares many traits with that of a reptilian, bipedal theropod dinosaur. The creature has grown two arms and two legs (each equipped with two claws), a tail, and a head possessing a leech-like mouth and eight individual eyes. The membrane is now seen on its torso, and its back is fully covered with a carapace. The organism can now spit globs of hazardous liquid.
Omega MetroidEdit
The final and most powerful stage of a standard Metroid. The creature is taller and more muscular than the previous stage, and it now unleashes fireballs from its mouth. The membrane is now covered in a rib cage and is extremely resilient to damage. The Omegas and Zetas are together the adult phase of the species.
Queen MetroidEdit
The only natural reproductive stage of the entire species. Only Metroids with specific genes can become a Queen; however, it is unknown if they follow the same life cycle seen above. The Queen is quadrupedal and is the largest Metroid on SR388, surpassing the size and strength of even Omega Metroids. Her membrane and nucleus have vastly expanded to cover her entire underside. She is highly protective of her eggs and will defend them to the death.
Unnatural Metroid strainsEdit
Throughout the years of extensive research and experiments by Space Pirates, it was discovered that not only are Larva Metroids highly adaptive to alien planets, their biology is also easily influenced by radiation[2]. As a result, the larvae are far more promising and exploitable as bioweapons than the other stages in the species' life cycle. This led to the creation of many unnatural breeds of Metroid, and while some are weaker than their natural counterpart, many others became powerful aberrations. Interestingly, most of these unnatural strains were discovered long before the SR388 life cycle itself became known.
The list below covers every single breed of Metroid in the entire series which possess sufficiently distinctive traits that set them apart from the original SR388 strain.
Tallon MetroidEdit
Hunter MetroidEdit
Fission MetroidEdit
Metroid Prime (Dark Samus)Edit
Infant Tallon Metroid (Metroid Cocoon)Edit
Dark Tallon MetroidEdit
Phazon MetroidEdit
Hopping MetroidEdit
Metroid HatcherEdit
A tentacled, armored Metroid mutant that branches from the Phazon Metroid strain. It is also capable of reproduction.
Big MetroidEdit
Unfreezable MetroidEdit
Additional strainsEdit
• Zebesian Metroid: Larvae that have adapted to Zebes. They are identical to the Larva Metroids from SR388 in terms of attack and resilience to damage. A scan in Metroid Prime 3 suggests that the Zebesian breed is potentially more aggressive[3].
• Talvanian Metroid: Although all Metroid Eggs and Larvae on planet Talvania have been artificially enlarged with the Amplification Beam by the Space Pirates, they do not seem to have acquired any noteworthy biological changes otherwise.
Several of the breeds observed throughout the series reveal new information about the species as a whole. For instance, the scan of a Tallon Metroid in Metroid Prime 2: Echoes states that repeatedly draining the stored energy from a Metroid's body (when a specimen is used as a power cell) can cause cellular breakdown to occur in the organism. It is also revealed through the Pirate Homeworld scans in Metroid Prime 3: Corruption that Metroids have a sensitivity to certain sonic frequencies, which the Pirates exploited with tank barriers to subdue several Phazon Metroids in the Processing chamber.
Metroid Eggs on the Pirate homeworld.
— Samus Aran, Super Metroid's Samus Aran Profile
Official dataEdit
Metroid manualEdit
"This protoplasm in suspended animation was discovered on the planet SR388. It clings onto Samus' body and sucks his[4] energy. It can't be destroyed directly with the normal beam. Freeze it with the ice beam, and then fire 5 missile blasts at it."
Virtual Console retranslationEdit
Official Nintendo Player's GuideEdit
Metroid II manualEdit
Super Metroid manualEdit
Super Metroid Nintendo Player's GuideEdit
A Larva Metroid in Super Metroid.
"Freeze them, then blast them with Super Missiles."
Official Metroid Fusion websiteEdit
Metroid: Zero Mission manualEdit
Official Metroid: Zero Mission websiteEdit
• "Gelatinous exterior"
• "Multiple brain stems"
• "Gripping claws" (Outer)
• "Energy-sapping fangs" (Inner)
Logbook entry
Metroid Prime
Temporary scan
Morphology: Metroid
Energy-based parasitic predator.
Logbook entry
Official Metroid Prime Website
Metroid Prime Pinball manual
"An energy based, highly dangerous parasite."
Smash Tips
Smash Tour (SSB4 Official Game Guide)
"Bumping into this enemy costs you several stat boosts! Reclaim lost stat boosts by bumping into it a second time."
Appearances in other media
Super Smash Bros. series
Super Smash Bros.
Super Smash Bros. Melee
Super Smash Bros. Melee website.
(Metroid 08/89).
Super Smash Bros. Brawl
Metroid (1987)
Metroid II: Return of Samus (1991)
Smash Bros. DOJO!! data on the Assist Trophy
Super Smash Bros. for Nintendo 3DS and Wii U
Super Smash Bros. for Nintendo 3DS and Wii U Trophy
Nintendo Land
Non-canon warning: Non-canonical information ends here.
Original Metroid Origins
The original origin of the Metroids.
• Metroids are the Metroid series equivalent to the titular Aliens of the Alien series, featuring a life cycle, a Queen producing eggs and being a menace to the female protagonist.
• Coincidentally, the spinoff prequel film Prometheus, created well after the Metroid franchise, implies that the Xenomorphs were created by an advanced alien race, the Engineers, for use as a bioweapon, similar to the Chozo developing the Metroids to stop the spread of the X Parasites.
• Metroid: Other M artwork: note the four nuclei visible within the blue inset.
• NES Remix includes a Metroid Miiverse stamp.
For official artwork, see Metroids' Gallery.
Notes and References
3. ^ Data decoded. Zebes specimen. Highly aggressive subject has been transferred to an off-site laboratory.
|
Friday, August 7, 2009
An Aesop fable and learning
"The Crow and the Pitcher"
The following is from Wired and certainly illustrates some form of "reasoning". I remember a similar instance where a crow had a black walnut [whose shell is extremely hard and easily breached by a squirrel's incisors] and figured that the delicious nut meat could be removed and enjoyed by placing the black walnut in traffic and letting the vehicles crack the hard shell.
"Clever Crows Prove Aesop's Fable Is More Than Fiction"
Hadley Leggett
August 6th, 2009
Aesop's fables are full of talking frogs and mice who wear clothes, but it turns out at least one of the classic tales is scientifically accurate.
Researchers presented four crows with a challenge from Aesop's fable "The Crow and the Pitcher": a container of water not quite full enough for the birds to reach with their beaks. Just like Aesop's crow, all four birds figured out how to raise the water level by dropping stones into the glass. The crows also selectively chose large pebbles over small ones, and quickly realized that dropping rocks into a container of sawdust didn't have the same effect.
"The results of these experiments provide the first empirical evidence that a species of corvid is capable of the remarkable problem-solving ability described more than two thousand years ago by Aesop," wrote the researchers in the paper published Thursday in Current Biology. "What was once thought to be a fictional account of the solution by a bird appears to have been based on a cognitive reality."
The researchers took four adult rooks, a type of intelligent crow, and tempted them with a tasty worm floating on top of a glass of water, just out of reach. Then they placed a pile of small rocks next to the crows. After they assessed the height of the water from the top and sides of the glass, the crows dropped stones into the glass until the water level rose enough for them to grab their prize.
Once they'd caught the worm, the birds didn't keep putting stones in the glass, and they didn't try to grab the worm until they'd dropped in a certain number of stones. "This number was strongly correlated to the number of stones needed to raise the water level to the correct height," the researchers wrote, "suggesting that, having assessed the starting level of the water, rooks translated this into an estimate of the number of stones needed."
Before this experiment, the birds had never been exposed to a glass with water in it, and they'd never used stones as tools. According to the researchers, the only other animal known to perform this kind of task is the orangutan, which has been recorded spitting into a tube to bring a peanut into reach.
"Corvids are remarkably intelligent, and in many ways rival the great apes in their physical intelligence and ability to solve problems," said biologist Christopher Bird of the University of Cambridge in a press release. "This is remarkable considering their brain is so different to the great apes'."
The antics of the four birds - Cook, Fry, Connelly and Monroe - can be seen in the videos below. Cook and Fry snagged the floating worm after just one try, while Connelly and Monroe succeeded after two attempts. Unfortunately, Fry had a bad reaction to one of the worms and gave up in the middle of the experiment.
Was the rook aware of the fundamentals of Archimedes' principle: "Any object, wholly or partly immersed in a fluid, is buoyed up by a force equal to the weight of the fluid displaced by the object."
Drop an "x" quantity of pebbles into the water filled cylinder and retrieve the food.
Now note that the rook employs an extended cognitive feature...not just any stone--size does matter.
Change the medium [sand for water] and the rook quickly discovers the medium that will yield the food.
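The physics behind the fable is Archimedes' principle run in reverse: each fully submerged stone raises the water level by its own volume divided by the tube's cross-sectional area. A minimal sketch of the arithmetic, with purely illustrative dimensions (none of these numbers come from the study):

```python
import math

def stones_needed(start_level_cm, target_level_cm, tube_radius_cm, stone_volume_cm3):
    """Number of stones required to raise the water to the target level."""
    area = math.pi * tube_radius_cm ** 2      # cross-sectional area of the tube
    rise_per_stone = stone_volume_cm3 / area  # Archimedes: level rise per submerged stone
    deficit = target_level_cm - start_level_cm
    return math.ceil(deficit / rise_per_stone)

# e.g. a 3 cm gap to the worm, a tube of radius 2 cm, and 8 cm^3 stones
print(stones_needed(12.0, 15.0, 2.0, 8.0))
```

With these assumed values the worm comes within reach after five stones; doubling the stone volume brings it down to three, which is why the rooks' preference for large pebbles pays off.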
|
The story of the Kakapo
by Sayraphim on January 21, 2014
Douglas Adams famously described the New Zealand native kakapo as the ‘world’s largest, fattest and least-able-to-fly parrot’. The males can grow up to a whopping 60cm and both males and females are a beautiful mottled green and yellow with black flecks. It may also be the world’s longest living bird with a suspected life expectancy of about 90 years, although the jury is still out on that one. However there is no doubt it’s now one of the rarest parrots, with only 124 known individuals left, specially sequestered on three islands off the coast of NZ. The kakapo has a commonly used ‘Skraark’ sound, which you can hear here and a ‘Ching’ which can be heard here. It also has an amazing mating call, a sub-sonic ‘BooOOoooom’ which can travel for kilometers. You can hear it booming here. When the male kakapo is ready for mating (on average once every 2 – 4 years), he climbs up the side of a hill and then scratches out a little shallow pit, a ‘track and bowl system’, which is great for further amplifying his booming call. The male kakapo can boom all night in the hope of attracting a female.
The kakapo are unusual for a variety of reasons, one of the most heartbreaking is that it freezes when disturbed. Although I assume the evolutionary reason for that is so it blends in with its surroundings, it only makes it easy pickings for any hungry predator that comes across it. Another is its very strong smell. This may be for alerting other kakapo to its position, but it also clearly works for predators looking for a tasty meal.
The Māori have strong cultural, spiritual and traditional associations with the kakapo, even naming it the word for ‘night parrot’. They hunted it for food, told stories about it and made garments and cloaks out of its skin and feathers. A cloak from kakapo feathers would have only been for people of high status. At the right you can see a kakapo cloak from the early 1800s with an estimated 11,000 kakapo feathers.
Beginning in the 1840s, European settlers cleared vast tracts of land for farming and grazing, further reducing Kakapo habitat. They brought more dogs and other predators, including cats, rats and stoats. Europeans knew little of the Kakapo until George Gray of the British Museum described it from a skin in 1845. As the Māori had done, early European explorers and their dogs ate Kakapo. It was said you could go to any tree and shake it and kakapo would fall out like apples.
From the 1870s, the people of New Zealand knew that the kakapo were in danger of extinction. From 1891 to the 1920s there were three attempts to move the kakapo to the relative safety of islands but each time the populations were decimated or wiped out completely by introduced predators.
In the 1950s, the New Zealand Wildlife Service was created, a government agency charged with caring for New Zealand’s wildlife. From 1949-73, the Wildlife Service made more than 60 expeditions to find kakapo; six were caught, but all were males and all but one died within a few months in captivity. By the early 1970s, the situation had become critical. A new initiative was launched in 1974 at which time no birds were known to exist. By 1977, 18 males had been found in Fiordland but with no females known to exist, the species seemed doomed.
The amazing turning point came later the same year, when a population of about 200 kakapo was found living in southern Stewart Island. That discovery breathed new life into the kakapo programme after it was confirmed the population included female birds. However, even these kakapo were in rapid decline, due to predation by feral cats, and so in 1987 the decision was made to evacuate the surviving population to offshore island sanctuaries. I didn’t really understand just how difficult a task this was until I read this:
In 1989, a Kakapo Recovery Plan was developed … The first action of the plan was to relocate all the remaining Kakapo to suitable islands for them to breed. None of the New Zealand islands were ideal to establish Kakapo without rehabilitation by extensive revegetation and the eradication of introduced mammalian predators and competitors. Four islands were finally chosen: Maud, Hauturu/Little Barrier, Codfish and Mana. Sixty-five Kakapo (43 males, 22 females) were successfully transferred onto the four islands in five translocations. Some islands had to be rehabilitated several times when feral cats, stoats and weka kept appearing. Little Barrier Island was eventually viewed as unsuitable due to the rugged landscape, the thick forest and the continued presence of rats, and its birds were evacuated in 1998. Along with Mana Island, it was replaced with two new Kakapo sanctuaries, Chalky Island (Te Kakahu) and Anchor Island. The entire Kakapo population of Codfish Island was temporarily relocated in 1999 to Pearl Island in Port Pegasus while rats were being eliminated from Codfish. All Kakapo on Pearl and Chalky Islands were moved to Anchor Island in 2005. (Wikipedia)
Our dog gets disorientated when we take her visiting to friends houses. I cannot fathom the difficulties and dangers associated with moving an entire endangered bird around various islands over two decades!
To monitor the population continuously, each kakapo is fitted with a radio transmitter and each is given a name. You can see the list of kakapo and their family tree here.
At this time there are only 124 known birds left on all three islands. While that’s a heartbreaking number, if you consider the fact that 26 years ago there were only 18 known individuals, things are starting to look up. But it’s still a long hard road even to take the species from its current rating of ‘critically endangered’ (faces an extremely high risk of extinction in the immediate future) to an ‘endangered’ rating.
The kakapo are a beautiful bird that deserve to not only live but thrive. They’re on the journey of recovery but it takes an entire community of dedicated people to ensure their survival.
Journey – The Kakapo of Christchurch looks at the parallels of the recovery of the kakapo and Christchurch. It aims to use one of New Zealand’s most beloved birds as handmade gifts for the people of Christchurch and to remind them that there is still joy to be found, even in the rubble. It also helps raise awareness of this dreadfully endearing flightless parrot and its plight, only millimetres away from extinction.
Can you help me spread a little guerrilla kindness?
Image credits:
Illustration of a Kakapo from the 1873 book “A History of the Birds of New Zealand” by Walter Lawry Buller.
|
Meiosis Chapter 10.
Presentation on theme: "Meiosis Chapter 10."— Presentation transcript:
1 Meiosis Chapter 10
2 Chromosomes Genes are located on chromosomes inside the cell nucleus
When offspring are formed, 1 set of chromosomes from each parent is passed on
3 Chromosome Number Homologous: chromosomes that are passed on from parents 1 from mom/1 from dad Diploid: cell that contains both sets of homologous chromosomes 2 complete sets of genes # chromosomes in diploid cells written as “2N” (Human 2N = 46)
4 Haploid: cell that contains only 1 set of chromosomes (gamete)
Written as “N” (Human N=23) Gametes: reproductive cells (sperm & eggs) When gametes are formed, contain only 1 set of genes
5 Meiosis Def: The process by which the # chromosomes in a cell is cut in half, and homologous chromosomes are separated 2 Stages: Meiosis I Meiosis II Creates 4 haploid cells (gametes), all genetically different
6 Meiosis I Interphase I: chromosomes replicate, making 2 new chromatids (copies) connected by a centromere Occurs in 4 steps: 1) Prophase I: Each chromosome pairs with homologous chromosome & forms a tetrad
7 Meiosis I Sometimes when tetrads form, crossing-over can occur
Crossing Over: when homologous chromosomes exchange portions of themselves Results in new combinations of genes
8 Meiosis I 2) Metaphase I: Spindle fibers attach and chromosomes line up 3) Anaphase I: fibers pull homologous chromosomes apart 4) Telophase I: cytokinesis occurs; results in 2 new diploid (2N) “daughter” cells
9 Meiosis II Chromosomes DO NOT DUPLICATE again (no Interphase)
Occurs in 4 steps: 1) Prophase II: From Meiosis I, have 2 diploid (2N) daughter cells 2) Metaphase II: Chromosomes line up in center of cell (just like Metaphase I)
10 3) Anaphase II: fibers pull chromatids apart
4) Telophase II: cytokinesis occurs; results in 4 haploid (N) daughter cells
11 Gamete Formation Male Animals: 4 haploid gametes all become sperm
Female Animals: 4 haploid gametes become 1 egg and 3 polar bodies (not used)
12 Mitosis vs. Meiosis Mitosis → 2 genetically identical diploid (2N) cells Used for cell growth and replacement Meiosis → 4 genetically different haploid (N) cells Used for reproduction & formation of gametes
13 Linkage and Gene Maps Thomas Morgan (1910): discovered many traits inherited together i.e.: fruit fly’s red eyes and mini wings Genes can be placed in “linkage groups” “Linkage Group” = chromosome Mendel just happened to study genes on different chromosomes
14 T. Morgan: genes on same chromosome should be inherited together
Chromosomes = group of linked genes Chromosomes assort independently; not individual genes
15 However, crossing-over may “unlink” some genes
Farther apart = more likely to separate This action creates new combinations of alleles → more genetic diversity
16 Gene Mapping Rate which genes become separated and recombine is measured Describes the relative location of each known gene on a chromosome Created by “mapping” genes relative locations
|
Web Domain Names
A web domain name is the part of the address that points to your website, for example the akstrategic.com part of http://www.akstrategic.com/index.php. The http:// prefix stands for Hyper Text Transfer Protocol, the protocol used to transfer web pages. The www part is simply a conventional hostname for the web server of the domain akstrategic.com. It is worth noting that servers can be set up quite simply so that the www part need not be entered. For example http://akstrategic.com/index.php will take you to the same page.
The domain itself consists of two parts in this case. The first part which is the domain name itself (akstrategic) and the second part (.com) which is called the "Top Level Domain" or TLD. The TLDs come in many formats, for example .co.uk which means company in the UK, or .com.au which means company in Australia. '.com' TLDs are the most common and easiest to remember. If you wanted to go to Ford's website, statistics show you would enter Ford.com in a browser before trying any alternative. When selecting a TLD, it is advisable to go for a .com domain over anything else. That does have some pitfalls to consider.
• Some locally based search engines and directories won't list names unless they have the appropriate domain extension.
• Many of the best names have been either used or at least bought with the .com TLD. This means you may be forced to adjust your domain name to something more awkward in order to get the .com TLD.
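The anatomy described above (protocol, hostname, domain name, and TLD) can be pulled apart programmatically. A quick sketch using Python's standard library; note that the naive last-two-labels split shown here mishandles multi-part TLDs like .co.uk, so production code should consult the Public Suffix List:

```python
from urllib.parse import urlparse

url = "http://www.akstrategic.com/index.php"
parts = urlparse(url)

print(parts.scheme)    # "http" - the transfer protocol
print(parts.hostname)  # "www.akstrategic.com"

# Naive split: last label is the TLD, second-to-last is the domain name.
labels = parts.hostname.split(".")
name, tld = labels[-2], labels[-1]
print(name, tld)       # "akstrategic" "com"
```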
To get the right formula in the search for a good domain name, it is important to understand the need to have a catchy or easy to remember domain name. Using the Ford example above, ford.com is ideal. However, using a domain name like fordmotorvehicles.com will alienate anyone actively searching for Ford. If ford.com were already taken, Ford might have been better off using an alternative TLD like ford.biz.
The other end of this consideration is companies with long names like our own, A-K strategic business solutions. It is unlikely people will remember us for the business solutions part of our business name, but they are likely to remember the A-K Strategic part. This means that anyone looking for us is likely to type either akstrategic.com or a-kstrategic.com in their browser. It is therefore worth considering dropping part of the business name to keep a catchy domain name.
Geographic consideration should be taken when choosing a TLD, if your company's business activities are restricted to a country, then it would be best to use a TLD that is geographically assigned, for example Ford Australia might need to restrict their business activities to Australia and would want to be identified easily as Ford Australia, then it would make sense to register the domain www.ford.com.au
If you are considering registering a domain name with a specific country TLD, each individual country might have rules, conditions and laws relating to the ownership and registration of a domain name. For example to register a com.au TLD, you must have a valid Registered Australian Business, and the domain must be similar to your business name.
A word of advice prior to registering coke.com.it or ford.biz.uk: if you are not licensed to trade under that name, then you are probably breaking the law and can be sued. Be careful before you decide to use a trademark.
Choosing a TLD can be as strategic as choosing a company name. Before deciding on one, consider the length, other domain names which might be similar, and whether you will be breaking any laws.
Next, we look at how to find and register your domain name
|
What does scoreboard mean?
Definitions for scoreboard (ˈskɔrˌbɔrd, ˈskoʊrˌboʊrd)
Here are all the possible meanings and translations of the word scoreboard.
Princeton's WordNet
1. Scoreboard
A scoreboard is a large board for publicly displaying the score in a game or match. Most levels of sport from high school and above use at least one scoreboard for keeping score, measuring time, and displaying statistics. Scoreboards in the past used a mechanical clock and numeral cards to display the score. When a point was made, a person would put the appropriate digits on a hook. Most modern scoreboards use electromechanical or electronic means of displaying the score. In these, digits are often composed of large dot-matrix or seven-segment displays made of incandescent bulbs, light-emitting diodes, or electromechanical flip segments. An official or neutral person will operate the scoreboard, using a control panel.
1. Chaldean Numerology
The numerical value of scoreboard in Chaldean Numerology is: 9
2. Pythagorean Numerology
The numerical value of scoreboard in Pythagorean Numerology is: 1
Sample Sentences & Example Usage
1. Jordan Spieth:
No scoreboard watching, just keep my head down and set a goal for myself.
2. Jordan Spieth:
Eight under here is nothing to complain about, just in the zone and hitting some shots. I saw the scoreboard and tried to make a push.
3. Paul Dunne:
I had about a 20-footer for birdie on 15 and there's a scoreboard there and I knew if I made that I'd get on the first page of the leaderboard, it didn't make me nervous, it just kind of made me excited.
4. Nadia Comaneci:
The fact that the scoreboard could not show the 10 added to the drama, it made it bigger, scoring the first 10 in history was a big deal but the fact that even an electronic scoreboard could not figure out how to put out a score, it made the story more historic.
5. Jim Mahoney:
I was on the tee at Winged Foot, me and a friend of mine, phil got out his driver and was bouncing the ball off the face. Phil looked over to the 17th green and there's a scoreboard. It showed that (Colin) Montgomerie had just double bogeyed the 18th. His whole demeanor changed. But he hits this horrendous slice.
Use the citation below to add this definition to your bibliography:
"scoreboard." Definitions.net. STANDS4 LLC, 2017. Web. 26 Jun 2017. <http://www.definitions.net/definition/scoreboard>.
|
Fun Interactive Decimal Math Activities
These games focus on the concept of the decimal and offer a vast array of different exercises for several specialized subjects, all presented in a fun and meaningful online learning environment.
They combine lessons from basic arithmetic skills, such as adding, subtracting, dividing, and multiplying decimals. Children can also learn about place value and rounding of decimals, and comparing and ordering different numbers as well.
They can also work on converting fractions to decimals and vice versa.
Most of the games focus on one specific subject, so students and teachers can choose games that meet the individual learning needs of each child.
The games here are great for 5th grade students and provide just the right level of challenge for students of that age group - fostering learning by challenging their skills, but not frustrating them by asking them to solve equations beyond their ability.
Decimal Place Value & Rounding
Town Creator - Decimals
Start by choosing whether to practice place value or rounding of decimals.
Then, in every trial, a problem will appear at the top of screen. If you click the correct answer - you get to add a new structure to your town.
Give 10 correct answers to advance to the next level. Every level unlocks exciting features.
Rounding & Place Value of Decimals
Sea Life - Decimal Numbers
This is a beautiful game to practice and improve your decimal skills!
There are 2 topics to practice: rounding and place value.
In the rounding option, you will be asked to round decimals to the nearest tenths or hundredths.
In the place value option, you will be asked to identify the digits of the decimals given.
Place Value Decimal Game With 3 Decimal Places
Decimal Ducks
A decimal number will appear on the screen.
You will be asked to click its tenths, hundredths or thousandths digit.
Click its correct decimal digit according to the sentence at the top, to collect all 10 ducks.
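The question these duck and ship games pose, "which digit is in the tenths, hundredths, or thousandths place?", reduces to shifting the decimal point and taking a remainder. A small sketch (the helper name and sample numbers are our own; going through `Decimal(str(...))` avoids binary floating-point surprises such as `0.29 * 100` evaluating to `28.999...`):

```python
from decimal import Decimal

def decimal_digit(number, place):
    """Digit in the given decimal place: 1 = tenths, 2 = hundredths, 3 = thousandths."""
    shifted = Decimal(str(number)).copy_abs() * 10 ** place  # move wanted digit to the ones place
    return int(shifted) % 10

print(decimal_digit(3.5746, 1))  # 5 (tenths)
print(decimal_digit(3.5746, 3))  # 4 (thousandths)
```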
Place Value Decimal Game With 4 Decimal Places
Decimal Ships
Click on the correct digit of the decimal number, according to the sentence that appears above.
Every correct click will send in a boat or a ship.
Win the game by sending all 20 boats and ships.
Place Value Decimal With up to 5 Decimal Places
Football Math
In every trial you will have to throw the ball successfully into your friend's arms, and then answer a decimal place value question, such as:
"In 3.5746 which digit is in the thousandths place?".
Try to pass successfully maximum trials in the time given.
Starts with 2 decimal places and increments with every stage
Place Value Pirates
A sentence will appear on the top of the screen, describing a certain digit, for example "8 in the tenths place".
Find the pirate that stands on a number that has that digit and click him.
The game starts with numbers of 2 decimal places, and advances to more decimal places with every stage.
Place Value & Estimation With up to 3 Decimal Places
Decimal Detective
In this game you play a detective who is looking for the suspect which is hiding behind a secret decimal number. You look for the number by making guesses between a given range. (for example: The number is between 7 and 8)
After every guess, you will be told if the secret number is higher or lower than your number.
It goes by stages, first you have to find the tenths digit, then the hundredths, and if you choose the hardest level, the thousandths too.
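The detective's higher/lower hints make this a binary-search game: each guess at the midpoint halves the interval the secret number can hide in. A sketch of that strategy (the secret value, bounds, and precision here are illustrative):

```python
def find_secret(secret, low, high, places=3):
    """Binary-search a secret number in [low, high] to the given decimal precision."""
    tolerance = 10 ** -places / 2
    guesses = 0
    while high - low > tolerance:
        guess = (low + high) / 2  # always probe the midpoint
        guesses += 1
        if guess < secret:        # "higher" hint
            low = guess
        else:                     # "lower or equal" hint
            high = guess
    return round((low + high) / 2, places), guesses

print(find_secret(7.386, 7.0, 8.0))
```

Halving a unit interval reaches three-decimal precision in about eleven guesses, far fewer than stepping through the thousand candidates one by one.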
Place Value up to 5 Decimal Places
Click The Decimal Point
In this game the place value problems are presented in sentences. A row with digits appears above.
Answer by clicking on the correct location to set the decimal point in the row of digits; this way you create the number that forms the right answer.
After every answer you will get feedback whether you were correct or not.
Rounding decimal numbers to the nearest tenths
Rounding Decimal Spaceships
Round to the nearest tenths the decimal number that appears below.
Click the spaceship that has the correct answer, and it will be sent on its way.
Rounding decimals to the nearest hundredths
Rounding Decimal Sharks
Round to the nearest hundredths the decimal number given below.
Find the shark with the correct answer and shoot it.
Try to act quick - don't let the sharks eat the goldfish!
Ordering Decimals From Lowest to Highest
Decimal Gallery
Drag the decimals from the bottom of the screen, and place them on the wall in ascending order, from left to right.
It is possible to change the location of the decimals even after they were set on the wall, just drag them to the desired location.
When you are finished, click the "Done" button.
Adding and Subtracting Decimals
Magic Square
In the magic square, one number is missing.
Type the missing number so each row, each column and each diagonal all add up to the same sum.
Hint: use addition to sum rows or columns that the missing number is not a part of them. Then, use subtraction in a row or a column that the missing number is a part of.
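The hint above is itself a two-step algorithm: sum any complete line to learn the magic constant, then subtract the known entries of the incomplete line. A sketch with a made-up decimal square (`None` marks the gap; the rounding guard smooths out floating-point noise):

```python
square = [
    [0.2, 0.7, 0.6],
    [0.9, 0.5, 0.1],
    [0.4, 0.3, None],   # missing number
]

magic_sum = sum(square[0])  # any row with no gap gives the magic constant
row = next(r for r in square if None in r)
missing = magic_sum - sum(x for x in row if x is not None)
print(round(missing, 1))    # 0.8 completes every row, column and diagonal
```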
Adding Decimals (2 Decimal Places)
Soccer Math
Every trial starts with a decimal addition problem.
After you choose on the correct answer, you will be given a chance to score a goal:
Click once to set the ball's location, and then another click to set the strength of your kick.
Long Addition & Subtraction with Decimal Numbers
Decimal Mania
University of Waterloo, Ontario, Canada
Start by clicking the "Start" button, and an addition problem will appear.
Solve the problem one step at a time, by typing the missing numbers in the highlighted boxes. When finished, click the "Check Answer" button, to receive feedback and get your score.
After solving the addition problem, a subtraction problem will appear.
Convert between fractions and decimal numbers
Decimal Tank
In this game you will be given a fraction and be asked to write it as a decimal number, or vice versa: you will be given a decimal number, and be asked to write it as a fraction.
After a correct answer, you will be given a chance to shoot with the tank and hit a target.
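Both directions of the conversion can be checked with Python's exact-rational `Fraction` type (the sample values are our own). Passing the decimal as a string keeps the conversion exact, since `Fraction(0.35)` would otherwise capture the binary approximation of 0.35:

```python
from fractions import Fraction

# Fraction -> decimal
print(float(Fraction(3, 4)))   # 0.75

# Decimal -> fraction, automatically reduced to lowest terms
print(Fraction("0.35"))        # 7/20
```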
Many decimal subjects: Addition & subtraction, comparing, rounding, place value
Decimal Jeopardy
This game can be played alone (one player game) or by competing with a friend.
In every trial you choose a math decimal problem by clicking a number on the grid.
The number represents how many points you will get for answering correctly.
Addition and Subtraction of Decimal Numbers
Rags to Riches
In this interactive decimal game you will get an increasing amount of money for every correct decimal problem you solve.
You have a total of three hints you can use when you get stuck with a difficult problem.
Try achieving the one million dollar goal. Just note that if you make a mistake - the game is over, so choose your answers carefully...
Comparing Simple Fractions And Decimals
Death To Decimals
This game is similar to "space invaders" but with fractions and decimals.
In every trial a simple fraction will appear on the lower left corner of the screen.
Decimal aliens will be descending slowly from above.
Find the alien with the decimal equivalent to the fraction given, and get your hero under it using the keyboard arrows. When you are in place, press the spacebar, to shoot a calculator and eliminate that alien.
Don't let the aliens reach the ground.
Pay attention, this is not an easy game, it starts with easy fractions, but as you progress through the levels, you get harder fractions (such as 7/9) and you need to act fast.
Matching Simple Fractions And Decimals
Match Fractions With Decimals
This is similar to a matching memory game.
Click two cards to reveal the fraction or decimal number that is hidden underneath it.
If you find a matching pair - they will disappear and a part of the hidden picture will be revealed.
Try to reveal the full hidden picture by finding all the matches.
Multiplying and Dividing Decimals
Spy Guys
Help the secret agent gather his materials to build a tree house for his son, and calculate their costs.
In every step you will get a decimal word problem, click the correct answer.
(Note that there is no need to click any of the action buttons on the black strip on the bottom, the game advances on its own)
Multiplying Decimals with one digit numbers
University of Waterloo, Canada
In every trial you will get a math problem of a number with one decimal place multiplied with a one digit number (for example: 6.3 x 4).
Answer the problem by typing it in the empty boxes, one digit at a time.
You can get help by checking the "Check this box for help". It will assist you with solving the current step of the problem.
Multiplication, Division, Addition & Subtraction of Decimals
Decimal Santa
Choose the right answers to the decimal math problems.
The emphasis in these problems is on putting the decimal point in the exact location.
If you make a mistake, an animated Santa will appear and give you a hint of how to solve the problem.
|
Exercise during pregnancy recommended
Verity Stockdale - Nov 2013
It has been suggested that it may be beneficial for their unborn child if women exercise when they are pregnant.
There are many things that women can do while pregnant that can help to safeguard the health of their unborn child - for example, not smoke, get plenty of sleep, drink lots of water and generally look after themselves.
Now, new research published in the journal Experimental Physiology has suggested that pregnant women should exercise when they are pregnant in order to boost the vascular function of their unborn child into adulthood.
"Our study was the first to demonstrate that maternal exercise during pregnancy significantly impacts vascular function in adult offspring," concluded the authors.
Specifically, by exercising, an expectant mother could improve the health of her child's vascular smooth muscle in particular. This in turn could help to lower the risk factor of a woman's offspring developing cardiovascular disease later in life.
Officially, pregnant women should already be carrying out around half an hour of moderate-intensity exercise on nearly every day of the week - if not all of them. Yet, there are some medical professionals who are not yet convinced of the benefits.
This research hopes to dispel any doubt that it is a good idea for pregnant women to get their heart rates up, perhaps by going for a light jog or by carrying out muscle-toning stretches.
"A second important aspect of the findings in our study is that previous research identified the endothelium, which is the single-cell layer lining all blood vessels, to be susceptible to foetal-programming interventions," commented Dr Sean Newcomer from California State University San Marcos, US and Dr Bahls of Universitatsmedizin Greifswald, Germany.
"Contrarily, we show that the vascular smooth muscle was significantly altered in adult offspring from exercise trained mothers," they added.
Now, they said, it will be essential that future research looks at coronary circulations and substantiates any claims about cardiovascular disease susceptibility.
Official guidelines outline which kinds of exercises are and aren't appropriate for pregnant women. For example, those which involve the woman lying flat on her back aren't recommended after 16 weeks, in particular, as the weight of the baby bump could press on the main blood vessel which is transporting blood back to the heart. This could make the woman feel faint.
Other forms of exercise that are advised against include contact sports, scuba diving (as the baby has no protection from the decompression), exercise at high altitude and activities that come with a high risk of falling, like horse riding.
Swimming could be a good form of exercise, as the water will help support the extra weight of the pregnancy. Some pools may even host aquanatal classes with qualified instructors, which could be perfect.
Expectant mums looking for other ways to safeguard their overall health while pregnant may be interested in taking healthfood supplements as part of their diets, to ensure they are getting the recommended daily amounts of various essential vitamins and minerals.
A complex such as KBG Algae, available from the Really Healthy Company, could be a particularly suitable formula, as it contains a wealth of valuable nutrients, making it one of the most popular choices on the market at the moment.
As both a vegan and vegetarian source of essential nutrients, the supplement is an ideal broad-spectrum complex for future mums to take in order to ensure they are giving their growing baby all the vital building blocks that it needs for the best start in life.
|
What Is The Place Of Ragtime in Blues Guitar Music?
In regular guitar tuning, a specific fingerpicking style called 'monotonic bass' was established, and it was typical across all regions. The fingers typically picked the melody on the treble strings, while the thumb struck one or two bass notes to keep the beat.
The king of the genre was a man called Blind Blake, but there were many others who were very slick players, such as Blind Boy Fuller, Reverend Gary Davis and the influential Willie Walker. During the folk blues revival in the 60s, young guitarists like Stefan Grossman sought out the surviving blues players to learn various styles from them, as well as to fuse their techniques into contemporary blues compositions. A system of notation called guitar tablature was created, which allowed players to write down the precise positions of the fingers of each hand at any point in a piece of music.
In the early part of the century piano music became popular, but it was certainly a bit limiting for aspiring musicians, as pianos are much more costly than guitars. Some guitar players recognized that the piano sound could be simulated by using the thumb to alternate between the bass strings of a guitar, and this is essentially how ragtime blues, or the Piedmont style of fingerstyle guitar, was born.
Artists in the city would have been extremely pleased with the style of his finger picking - it was blues, but of a complex structure that made you want to dance.
The very earliest guitars and fiddles were do-it-yourself affairs and not high-quality instruments. This fact, and the damp weather, were significant factors in the development of bottleneck styles of playing blues, where the guitar was commonly tuned down to open G or D.
It wasn't too long before a young guitar player realized that the left-hand part of a piano rag could be approximated on the guitar. Ragtime blues was born, and it became a fascinating and challenging technique to master.
The brand-new blues music being produced in the black neighborhoods after the ending of slavery in the deep south of America was gradually turning to dance, and artists frequently stated you had to 'drag' the pace a little, or 'rag' it. It's extremely possible that ragtime music came from these sources.
Historians think that the first music that might properly be called blues appeared in the Mississippi Delta in the Deep South of America in the late 1890s. African-American laborers would sing or shout so-called 'field cries', which were often in the call-and-response format, and these songs helped set the work rate. This kind of song was closely related to the spirituals sung in the churches at that time and was rarely accompanied by any instrument.
In 1910 a young whorehouse piano player called Scott Joplin created a brand-new style of playing piano that was extremely complex. He applied the classical understanding he had from his musical training to honky-tonk barrelhouse piano, which developed into ragtime. The use of guitar tablature greatly sped up the learning process, and it wasn't too long before the best players were transcribing the original Scott Joplin piano tunes for guitar, and the circle was complete.
Many guitarists took the style even further and used the thumb to pick any string at all. The thumb is truly king when it comes to fingerpicking the blues, whatever style is played, and it needn't just stick to the bass strings!
Often it's the small things guitarists add that make all the difference. Many of us have played Candy Man by Gary Davis, for a long time, with varying degrees of success. I've performed it for years, and then I decided to take a closer look to see what's happening in the picking patterns.
Of course, we understand that the Reverend employed just one finger and his thumb for his right-hand picking, but that's simply the start of his genius for blues music. Loosen up those fingers and we'll make a start...

One Of The Last Great Blues Guitar Giants
Reverend Gary Davis was unusual in quite a few respects. The complexity and musical richness of his music is very well known, and we might count ourselves quite fortunate that his prowess remained undiminished in his later years. Unlike many blues men, who stopped performing and only restarted after they had been 'rediscovered', Gary Davis never stopped playing.
He still made a habit of playing the blues in the streets around Harlem until he came into vogue once again, after which he started to make records and play live gigs again. He was also quite willing to give blues guitar lessons to pretty much anyone who asked, it seems, and so the magic was handed on to young guitarists such as Stefan Grossman and many others.
First of all, Davis used the thumb and index finger of his right hand to create all of those extremely complicated sounds. His finger could move rapidly and seemed to move independently of his thumb. He also used picks, which helped him play with much more accuracy.
He was really proficient in any key, major or minor, but it wasn't that fact that exemplified his technique (for me). The timing of his thumb beats was rock solid, as you would expect, and he could break out of the alternating bass pattern at will, either to double time and generate syncopated rhythms, or to create lightning-fast single-string runs. For the latter, he would pick a string alternately with his thumb and finger, as if he were using a plectrum. This was remarkable enough, but he frequently sang at the same time, which is a great trick - try it sometime!
His thumb would also leap across to the treble strings if needed, to finish a run or a phrase, giving the impression that another finger was being used. The result was a unique brand of ragtime guitar playing that has never been equaled.
Blind Blake - The King Of Ragtime Blues Guitar
Arthur Blake didn't hold this title for nothing. He's generally accepted as the fastest and most precise ragtime blues finger picker there ever was, except perhaps for Willie Walker and Reverend Gary Davis.
When playing ragtime, the thumb rotates in between 2 or 3 bass strings, however can likewise move over to the treble strings if required to make a quick single string run with thumb and fore finger. It's a really hard style to master, not least since the thumb roll hurts the thumb, and it's essential to practice in order to establish a thick callous at the contact point.
With his hard drinking and neglectful personal habits (by all accounts), he went to Chicago from the Deep South and brought with him the rapid Geechee guitar playing style he had learned in his youth. Guitarists in the city would have loved his style of finger picking - it was blues, but with a very different feel from the monotonic bass style prevalent in those times. His voice simply didn't have the tone or character to sing Delta blues - his real power and expertise lay in his fingerstyle technique, and especially his thumb work.
Unlike many blues guitarists, Blake was never captured on film, which is a big, big pity. When I first heard West Coast Blues, his first record, I was amazed that it was played on just one guitar - it was just Blake!
Out of all the genres in the broad category of 'blues guitar', ragtime, or Piedmont, is perhaps the most difficult both to work out what's going on and to perform with confidence. Modern Travis finger-picking techniques and patterns owe a big debt to the early blues men who tried to copy the syncopated melodies of the popular piano rags (Scott Joplin) and make the same enticing sound on just six guitar strings!
The style is also called 'Piedmont', after the plateau region between the Atlantic Coastal Plain and the main Appalachian Mountains, and guitar players from many states played this style in one form or another. The King of Ragtime, Blind Blake, came from Florida and settled in the Chicago metropolitan area at about the same time as Big Bill Broonzy was swinging the blues up there, so it genuinely was an America-wide phenomenon.
Stella Harmony Guitars- A Significant Factor In Spreading The Appeal Of The Blues
In those days, mass-produced guitars were just appearing, courtesy of the Sears company, and inexpensive instruments could be bought for the measly sum of $1. Everything came together at the right time, bringing the possibility of making dance music and happy 'blues' tunes with inexpensive instruments - people wanted to escape daily drudgery, and a brand-new style of playing was born, just right for dancing: the ragtime guitar style.
Just a Few of The Outstanding Ragtime Blues Guitarists
Perhaps one of the most popular one-finger guitar players was Reverend Gary Davis, who was a complete master of any style of guitar picking. He could play ragtime as well as Delta-style blues, but in his later years he preferred to sing his gospel tunes to the passers-by in Harlem, where he lived.
It's quite extraordinary that a lot of the excellent players simply used one finger - guitarists such as Doc Watson, Floyd Council and many others. Although Broonzy didn't play exactly what we call 'ragtime', he showed an incredible ability to create syncopation without alternating his bass strings, and he was rather an exception. Merle Travis adapted the old ragtime styles he heard as a kid and produced some timeless pieces, and yes, you guessed it - he used just one finger.
The Rest Of The Best Piedmont and Ragtime Style Guitar Pickers
The tunes played by traveling jug bands were mainly spirited ragtime pieces, and the repertoire was filled out with other tunes popular at the time, simply to please the rural audiences that gathered as the bands journeyed from town to town. Apart from the big names that we have all heard of, like Blind Boy Fuller (who was taught by Gary Davis), there's a wide variety of so-called 'minor' blues guitarists who also performed powerful ragtime guitar songs and tunes.
Jim Bruce Acoustic Blues Guitar Lesson Videos
Privacy TOS
|
July 14, 2009
Small UN Branch Handles 'Earthly' Space Affairs
An often-overshadowed branch of the United Nations known as the Office for Outer Space Affairs (UNOOSA) mostly has earthly concerns on its agenda, AFP reported.
The tiny UN office and its 27 employees sit almost forgotten in the vast hallways of the United Nations headquarters in Vienna, where it mostly coordinates help for poor countries to develop crops and manage natural disasters.
"It would be the secretary general of the United Nations... that's why we're here," he said.
The Office for Outer Space Affairs has its roots in the 1957 establishment of the Committee on the Peaceful Uses of Outer Space, which was created when the Soviet Union launched the first satellite, Sputnik, that same year.
In order to prevent an aggressive space race, five major treaties and agreements were drawn up in the following decades, regulating members' activities in space and advocating equal rights and access for all states.
However, today the UN office is keen to emphasize the peaceful aspect of its work, setting up programs to help poor countries gain access to space technology for developmental and aid purposes.
Othman said such programs are what they're most excited about at the UN because that is part of the development agenda of the United Nations.
Modern space technology and satellite imagery can facilitate communications, disaster mitigation, natural resource management, and the study of climate change and tele-epidemiology -- the study of how diseases spread.
Othman believes all states should have access to it, which is why UNOOSA set up a Space Applications Program to train and advise developing countries, and SPIDER (Space-based Information for Disaster Management and Emergency Response) to put them in contact with aid agencies, NGOs and satellite imagery providers.
Of the 69 member states involved in the committee, about two-thirds are without a space program.
Othman explained that just because a country cannot have access to space does not mean they have no say in matters relating to space.
UNOOSA would also be involved in coordinating responses to an asteroid potentially crashing into Earth.
Jamshid Gaziyev from UNOOSA's Committee Services and Research Section said no single country currently has the capability to deflect or destroy a "near-earth object" large enough to do any damage.
That means forward planning is essential: member states must have a strategy in place and avoid disputes should a crisis situation arise - for example, over whether to use nuclear weapons, or where to divert the asteroid so that it does the least damage on impact.
The Outer Space Treaty of 1967 and the Moon Agreement of 1979 also forbid "the establishment of military bases, installations and fortifications... and the conduct of military maneuvers on celestial bodies," while describing astronauts as "envoys of mankind in outer space."
The UN committee may even one day be responsible for regulating space burials or space tourism.
Gaziyev said that space is more present in everyday life than we think; therefore the "people in outer space" at UNOOSA will carry on with business as usual.
On the Net:
|
In statistics, there is the concept of the sum of squares, which is a way of finding out, so to speak, how widely your data is dispersed around the average. In short, you take your data set, calculate the average, subtract the average from each item in the data set, square each difference, and add them up.
For example, the data set 3, 5, 7 obviously has the average of 5. The sum of squares is (3 – 5)^2 + (5 – 5)^2 + (7 – 5)^2, which equals (-2)^2 + 0^2 + 2^2, or 4 + 0 + 4, or 8.
Now take the data set 2, 5, 8. Both data sets have the same average, but the second data set has a much larger sum of squares (18), meaning the data is more dispersed.
Why do we always square the numbers, and not just take the result of the subtraction? So that the answer is always positive. If we didn't square the subtractions, the first data set would have a sum of (-2) + 0 + 2, or 0. Likewise, the second set would have a sum of (-3) + 0 + 3, or 0. By squaring, we prevent them from canceling out.
In short, the sum of squares is never negative. Here endeth the lesson.
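The recipe above takes only a few lines of code. Here is a minimal sketch in Python (the function name is my own, not from any particular statistics library):

```python
def sum_of_squares(data):
    """Sum of squared differences between each item and the average."""
    average = sum(data) / len(data)
    return sum((x - average) ** 2 for x in data)

print(sum_of_squares([3, 5, 7]))  # 8.0
print(sum_of_squares([2, 5, 8]))  # 18.0
```

Nothing exotic here: Python's standard `statistics.pvariance` computes this same quantity divided by the number of items.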
So, with this brief primer on the sum of squares in hand, pretend you are a statistics programmer, charged with writing small programs to compute statistics of various data sets - and your sum of squares occasionally gives a result that is very, very negative. What's the problem?
If you answered “arithmetic overflow“, you are correct, and are clearly over-qualified to work in the field of statistics, programming, or both.
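To see how the impossible negative appears, here is a sketch that simulates a language with 32-bit signed integer arithmetic. Python's own integers never overflow, so the wraparound is faked with a helper function, and the data values are invented purely for illustration:

```python
INT32_MAX = 2**31 - 1

def wrap32(x):
    """Truncate x to a signed 32-bit integer, the way C's int arithmetic wraps."""
    x &= 0xFFFFFFFF
    return x - 2**32 if x > INT32_MAX else x

def sum_of_squares_32bit(data):
    """Sum of squares where every intermediate result wraps at 32 bits."""
    average = sum(data) // len(data)  # integer average, as a naive C program might use
    total = 0
    for x in data:
        diff = x - average
        total = wrap32(total + wrap32(diff * diff))
    return total

# Deviations in the tens of thousands square to values near 2**31,
# so the running total silently wraps into negative territory.
print(sum_of_squares_32bit([0, 100000, 0]))  # -1923267925
```

The usual fix in a fixed-width-integer language is simply to accumulate the total in a wider type, such as a 64-bit integer or a double.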
|
Chemical flavorings found in e-cigarettes linked to lung disease by Bloomsey in science
[–]shavera 3200 points
Boy, if only we had some kind of.... government agency to verify the compounds in the foods we eat and drugs we take... a... Food and Drug Administration, if you will. Perhaps this government agency could prevent harmful compounds from appearing in products people put into their bodies, or at least warn when they are present.
Can two lazers that both emit light outside of the visible spectrum interfere with each other to create a visible pixel at their intersection? by foolandhismoney in askscience
[–]shavera 1261 points
1. Light doesn't self-interact. Meaning a photon never directly affects another photon.
2. This boils down to what we mean by light coming in "particles" really. Each beam of light is composed of many particles of the same energy. Even if you add the energies of each beam together, all you get is an overall increase in intensity (number of particles per second) not an increase in momentum of any one particle. - similarly, see the "photoelectric effect" for more similar kinds of experiments.
3. There are ways to convert invisible light to visible. If light is below visible (like IR), then some crystals can cause "frequency doubling" (or tripling, or more), where photons give energy to the crystal which then spits out half as many photons, but at twice the frequency. Green laser pointers are actually an IR laser beam passed through such a crystal. (This is one of the reasons why you should not disassemble green laser pointers: since your eyes can't see IR, your pupils won't contract to reduce the amount of light entering, and you have an intense IR laser beam entering the eye. Very dangerous. Really.)
Another way, if the light is more energetic than visible (like UV) is to act through fluorescence. Fluorescent lighting is so named. The gas in a fluorescent tube actually predominantly glows in the UV. The frosty coating on the surface of the tube is made of materials that absorb the UV light, and then re-radiate it down spectrum into blues and reds and greens.
But no, 2 lasers crossed will not change color at their crossing point.
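As a side note on point 3 above, the frequency-doubling arithmetic is easy to check: twice the frequency means half the wavelength, which is how the common 1064 nm IR laser line comes out as 532 nm green (the specific 1064 nm figure is my assumption, not stated in the comment). A toy calculation, with function names of my own invention:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def frequency_hz(wavelength_nm):
    """f = c / wavelength, with the wavelength given in nanometres."""
    return C / (wavelength_nm * 1e-9)

def doubled_wavelength_nm(wavelength_nm):
    """Frequency doubling: the output photon has twice the frequency, so half the wavelength."""
    return C / (2 * frequency_hz(wavelength_nm)) * 1e9

print(doubled_wavelength_nm(1064))  # ~532 nm: IR in, green out
```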
Edit: a couple comments below have made me second guess my initial particle-physics based assumption. Does anyone know what would happen if we mixed light ala beat frequencies? Suppose you shone two UV lasers at 2 PHz and 2.5 PHz at a spot. Would that spot glow in the visible at 500 THz? I'm not good at classical EM/optics to answer this
Edit 2: A lot of people are questioning my phrasing about light self-interaction. Let me address what I mean here:
My background is the strong force. Gluons are the mediating particle, like photons are in EM. Gluons, however, have (strong force) charge. Therefore, a gluon can be attracted to another gluon. A gluon will exchange a gluon with another gluon to attract or repel and generally exchange momentum directly.
A photon is not a charged particle. Photons only interact with charged particles. A photon leaves a charged particle and travels to some other charged particle. Therefore, a photon never is attracted to, nor repelled by other photons. A photon does not exchange photons with other photons to exchange momentum.
There are of course, indirect ways in which photons can interact. Two photons can collide to create particle-antiparticle pairs. Those pairs can annihilate to create two new photons (the originals were destroyed in the collision). Photons do occasionally act like 'virtual' particle-antiparticle pairs, and in that behaviour can, indirectly, exchange a virtual photon from one of their virtual particles to some other photon, and exchange momentum indirectly. But such interactions are always mediated by charged-particles at some step. Never directly.
Finally, there are effects of photons as well. Classically speaking, their waves may interfere constructively or destructively. One photon may push a particle one way, while another photon pulls it the other at the exact same time. These photons aren't interacting with each other... they're interacting with that "receiver" particle. Pushing and pulling at the same time is like destructive interference. Pushing together or pulling together is like constructive interference.
So that's what I mean by light never self-interacts.
Edit the third: Sorry, maybe I misinterpreted OP's question re: laser interference. I understand (as mentioned in point 3 above) that there are ways to convert invisible light to visible. I had interpreted the question to mean a medium-free way of mixing lasers. Like shine two lasers through air/vacuum on a single point to create a single spot of visible light.
Edit 4: How do materials work to change the frequency of light? So, the easiest thing, start with this post from our FAQ. Essentially, the relevant thing to note is that within material, the effective electromagnetic field behaves differently than it does in vacuum. All the charged particles around interact with each other, so we can come up with a modified electromagnetism to describe the material.
When a photon "enters" a material (assuming, for the moment it can), The photon, on its own, ceases to exist really. A new propagation through the effective EM field is set up. Well now, since there's a material around, this effective EM field may be able to do different things than the vacuum one can. In the case of certain crystals, two of these propagations can add their momenta together, forming a new propagation with new momentum.
And when a propagation hits the end of the material it can trade its momentum into a "bare" photon that exits the material. (It can also reflect internally and do other things, but we're ignoring that for now). So the new, summed momentum excitation hits the other end of the material and out pops a new photon that has the sum of the momenta of the photons entering the crystal.
When I mention above that photons don't self-interact, what I mean specifically is that photons have no direct way of giving each other their momentum. They can destroy themselves and become something else and that something else can create new photon(s) with new momentum values. But photons, on their own, in vacuum, don't, generally speaking, do this on their own.
How can we be sure the Speed of Light and other constants are indeed consistently uniform throughout the universe? Could light be faster/slower in other parts of our universe? by -Gabe in askscience
[–]shavera 1180 points
the speed of light plays a factor in a lot of physics beyond just how fast light moves. So if you want to propose a "variable" speed of light, you have to produce the set of measurements that will show your proposal to be better than the existing assumption. Several attempts have been made in the past to derive a variable speed of light, but none of them have panned out experimentally, as far as I know.
As a rough example, let's say your theory predicts that electrons will have different orbits because obviously the speed of light factors into the electromagnetic force that governs how electrons are bound to the nucleus. So you would predict that, as you look out across the universe, the spectral lines of atoms should shift by <some function>. Then you take spectroscopic measurements of distant stars and galaxies. If the spectra differ by your prediction, and can't be explained by other competing ideas, including the current models, then it supports your theory.
What we haven't seen are those kinds of measurements. Obviously we can't go out with a meter stick and stop watch and measure how long light takes to go from a to b. So we have to use indirect measures.
[–]shavera 920 points
Right, and the FDA is seeking approval to be able to study the effects of e-cigarettes as well.
CERN People! Is this real data? by [deleted] in askscience
[–]shavera 893 points
it's more fancy eye candy than data. Granted, we all love how it looks, and displays like this help us debug our software, but usually we're running through millions or billions of collisions, and we can't sit down with a picture and a ruler and make measurements. So all the data is stored in some file, then passed over with some analysis software, and then the physics pops out as some histogram or something from the analysis software. But the display is accurate: it is a representation of the tracks and energy passing through the detector (the tracks are the things that look like tracks, and the energy is the big towery lookin things).
To explain the following deleted statements: a long conversation occurred which was significantly off topic. Further statements on the matter will continue to be deleted, so do not comment on it; I hate to have to do more work than necessary.
There is no Higgs?? CERN scientists say the Higgs boson is excluded as a possibility with a 95% probability. by rdhatt in science
[–]shavera 857 points
Misleading title: It's only excluded from a certain mass region. the 114-145 GeV region still could have it.
Is eating grains bad for us? Is a "paleo" diet better for our health? by kalei50 in askscience
[–]shavera[M] 712 points
please no personal anecdote. That is not a scientific answer to the question. Scientific answers only please.
Edit: my reply to bib4tuna, since apparently that's already downvoted to oblivion too:
attempted to fix the below.
IF YOU DO NOT HAVE A SCIENTIFIC SOURCE DO NOT REPLY TO THIS QUESTION (with an answer, followup questions are always allowed, of course). ಠ_ಠ
Books are notoriously unreliable where the author cherry picks data to support their hypothesis, so no, a book is not a peer-reviewed scientific source. It does not count. Nor do documentary films. Peer-reviewed science only.
I am proud of everyone at least that nearly everything was downvoted to just about zero. Thank you guys for helping to maintain quality.
edit 2: Textbooks are okay.
Why exactly can nothing go faster than the speed of light? by purpsicle27 in askscience
[–]shavera 512 points
Please write a textbook. Publish it anonymously. I promise to force my students to buy it and use it.
General relativity tells us that gravity is a result of the curvature of spacetime, so why are we searching for a graviton? by the_future_is_wild in askscience
[–]shavera 440 points
Okay so you may be familiar with electromagnetism. Electromagnetism is a field, and for most day to day things, we can treat it classically and be just fine. But on occasion, when dealing with quantum objects, we have to realize that the electromagnetic field has a structure on its finest levels. And that structure is comprised of individual fundamental excitations that we call "photons."
So now GR. GR tells us that a way of describing the energy and other stuff in a space, the "Stress-Energy Tensor Field" is equivalent to a "Curvature Tensor Field," a description of how space and time "curve." And when we deal with big things, just like E&M, the classical field solution is fine. I didn't go into detail above, but generally when we say a classical field, we mean it can take on any excitation it likes.
But we wonder, what's the curvature field of a single electron? We know that the electrons of a star contribute to its mass and thus to its overall curvature field, but what's the curvature of a single electron? Well we don't have a good answer there. You see, in order to do that, we'd need an accurate description of both position and momentum of the electron to plug into our stress-energy tensor. But we don't know that, we can't according to Heisenberg Uncertainty principle. For big objects, it doesn't matter, the difference is too small to notice, but for quantum particles it matters quite a lot. So essentially, we don't know how to do the maths.
Well some people thought, maybe the curvature field is "quantized" like the electromagnetic field is (and all other fundamental fields we know of). They called those fundamental "quantizations" gravitons, and tried to solve the equations that way. Still no luck. The solutions are "non-renormalizable." Renormalizability was a cute mathematical trick invented in quantum field theory that let an infinite sum of integrals asymptotically approach some value, so one could truncate the sum after so many terms, depending on the level of precision desired. To date, no such trick has been found for graviton fields. So they could exist, they could not, we don't have a working mathematical model in either case yet.
Edit: sorry I wrote this pretty late last night/early this morning and I totally screwed up renormalizability. Please see bataboon and sittinggiant's great responses to get a more accurate picture there.
Does the force of gravity extend over an infinite distance (only becoming infitesimilally small with large distances) or is there a distance after gravity has no influence? by oniony in askscience
[–]shavera 420 points
Not... quite. See, the deal is that gravity stems from General Relativity. We solve the equations of GR around a massive body (like the sun) but we do so by assuming the sun is the only thing in the entire universe. And we get a curvature of space-time; this curved space-time gives rise to the effects we commonly call "gravity," as in Newtonian Gravitational Attraction (with some slight modifications that are very important, but that's a different discussion).
But let's go back to that initial "sun is the only thing" assumption. Clearly it's not. But the same kinds of solution seems to work on the scale of galaxies, and even clusters of galaxies. This is why the Andromeda and Milky Way galaxies are gravitationally attracted to each other and going to collide eventually.
But what happens when we look on the very very large scales, cosmological scales; scales where galaxies are just little point masses (approximately). Well on these scales the universe is approximately uniformly dense (at a very very low mass density). And we solve GR here, and we find not gravitation, but the metric expansion of space.
So on local (galaxy clusters and smaller) scales, GR generally solves for gravitation. And on Cosmological scales, GR solves for metric expansion. Does this mean that gravity doesn't "stretch" across the voids between galaxy clusters? Not exactly, it's just not the gravity you're familiar with; it's highly modified by the vast integrated volumes of dark energy between them.
If the particle discovered at CERN is proven correct, what does this mean to the scientific community and Einstein's Theory of Relativity? by r0ckaway in askscience
[–]shavera 419 points420 points (0 children)
First, to reiterate what's been stated here, yes other experiments will need to find similar results. What bothers me about the result is SN1987a. This supernova is 168,000 light years away from Earth. So if neutrinos gain 60 nanoseconds for every 730 kilometers, they should gain about 4 years of time for this supernova. But the neutrinos arrived only 3 hours before the light, and that's due to the fact that the supernova is largely transparent to neutrinos but delayed the emission of light (the neutrinos got a head start, but traveled slower).
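A quick back-of-the-envelope check of that arithmetic (the constants here are the usual textbook values, not taken from the comment itself):

```python
# If neutrinos gained 60 ns over the 730 km CERN-to-Gran-Sasso baseline,
# how much would they gain over SN1987a's ~168,000 light-year distance?

GAIN_S_PER_M = 60e-9 / 730e3      # seconds gained per metre travelled
LY_IN_M = 9.4607e15               # metres in one light year
distance_m = 168_000 * LY_IN_M

gain_s = GAIN_S_PER_M * distance_m
gain_years = gain_s / (365.25 * 24 * 3600)
print(round(gain_years, 1))       # roughly 4 years, matching the claim above
```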
If photons are massless, could an infinite amount of them fit into an infinitely small space? by LiteraryGoat in askscience
[–]shavera 420 points421 points (0 children)
To kind of correct some of the very poor explanations below:
1) photons are massless, but this has nothing to do with how you can pack them.
2) Photons are Bosons, and hence not subject to Fermi-Dirac Statistics, and thus, not subject to the Pauli-Exclusion principle
2b) Corollary: in fact, part of the interesting thing about Bose statistics is that bosons prefer to be clumped together, almost acting as "one" particle. These are called "Bose-Einstein Condensates." We made a Bose-Einstein Condensate (BEC) of photons in 2010, in fact. This is as close to your original question as an answer is likely to get.
3) "infinitely small space" is a problem. Due to the Heisenberg uncertainty principle, the more tightly confined in space particles are, the less precisely known is their momentum. So if photons were in a very very small box, they'd have such a wide range in momenta that they'd never condense into a BEC. However, a BEC of photons may exist within a very small, but not infinitesimally small, space.
4) also, I feel it's worth pointing out: any system of photons in which the photons are not precisely moving in the same direction has a mass, even though all the photons are themselves massless. I actually don't know what it means for a BEC of photons, because since they're all in the same quantum state, they probably all have the same momentum vector, and thus would remain massless still, but I'm not sufficiently strong in the area to say for sure.
How is time an illusion? by cjhoser in askscience
[–]shavera 396 points397 points (0 children)
So let's start with space-like dimensions, since they're more intuitive. What are they? Well they're measurements one can make with a ruler, right? I can point in a direction and say the tv is 3 meters over there, and point in another direction and say the light is 2 meters up there, and so forth. It turns out that all of this pointing and measuring can be simplified to 3 measurements, a measurement up/down, a measurement left/right, and a measurement front/back. 3 rulers, mutually perpendicular will tell me the location of every object in the universe.
But, they only tell us the location relative to our starting position, where the zeros of the rulers are, our "origin" of the coordinate system. And they depend on our choice of what is up and down and left and right and forward and backward in that region. There are some rules about how to define these things of course, they must always be perpendicular, and once you've defined two axes, the third is fixed (ie defining up and right fixes forward). So what happens when we change our coordinate system, by say, rotating it?
Well we start with noting that the distance from the origin is d = sqrt(x^2 + y^2 + z^2). Now I rotate my axes in some way, and I get new measures of x and y and z. The rotation takes some of the measurement in x and turns it into some distance in y and z, and y into x and z, and z into x and y. But of course if I calculate d again I will get the exact same answer. Because my rotation didn't change the distance from the origin.
So now let's consider time. Time has some special properties, in that it has a(n apparent?) unidirectional 'flow'. The exact nature of this is the matter of much philosophical debate over the ages, but let's talk physics not philosophy. Physically we notice one important fact about our universe. All observers measure light to travel at c regardless of their relative velocity. And more specifically as observers move relative to each other the way in which they measure distances and times change, they disagree on length along direction of travel, and they disagree with the rates their clocks tick, and they disagree about what events are simultaneous or not. But for this discussion what is most important is that they disagree in a very specific way.
Let's combine measurements on a clock and measurements on a ruler and discuss "events", things that happen at one place at one time. I can denote the location of an event by saying it's at (ct, x, y, z). You can, in all reality, think of c as just a "conversion factor" to get space and time in the same units. Many physicists just work in the convention that c=1 and choose how they measure distance and time appropriately; eg, one could measure time in years, and distances in light-years.
Now let's look at what happens when we measure events between relative observers. Alice is stationary and Bob flies by at some fraction of the speed of light, usually called beta (beta=v/c), but I'll just use b (since I don't feel like looking up how to type a beta right now). We find that there's an important factor called the Lorentz gamma factor and it's defined to be (1 - b^2)^(-1/2) and I'll just call it g for now. Let's further fix Alice's coordinate system such that Bob flies by in the +x direction. Well if we represent an event Alice measures as (ct, x, y, z) we will find Bob measures the event to be (g*ct-g*b*x, g*x-g*b*ct, y, z). This is called the Lorentz transformation. Essentially, you can look at it as a little bit of space acting like some time, and some time acting like some space. You see, the Lorentz transformation is much like a rotation, by taking some space measurement and turning it into a time measurement and time into space, just like a regular rotation turns some position in x into some position in y and z.
But if the Lorentz transformation is a rotation, what distance does it preserve? This is the really true beauty of relativity: s = sqrt(-(ct)^2 + x^2 + y^2 + z^2). You can choose your sign convention to be the other way if you'd like, but what's important to see is the difference in sign between space and time. You can represent all the physics of special relativity by the above convention and saying that total space-time length is preserved between different observers.
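That invariance is easy to verify numerically. A minimal sketch, using the transformation and symbols as written above (units with c = 1; the event coordinates are arbitrary):

```python
import math

def boost(event, b):
    """Lorentz-boost an event (t, x, y, z) along +x; units with c = 1."""
    g = 1.0 / math.sqrt(1.0 - b * b)   # Lorentz gamma factor
    t, x, y, z = event
    return (g * t - g * b * x, g * x - g * b * t, y, z)

def interval_sq(event):
    """Space-time interval squared: -(t^2) + x^2 + y^2 + z^2."""
    t, x, y, z = event
    return -t * t + x * x + y * y + z * z

alice = (2.0, 1.0, 0.5, -0.3)          # some event Alice measures
bob = boost(alice, 0.6)                # Bob moves at 0.6c in +x
# t and x both change, but the space-time interval is preserved:
print(abs(interval_sq(alice) - interval_sq(bob)) < 1e-9)   # True
```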
So, what's a time-like dimension? It's the thing with the opposite sign from the space-like dimensions when you calculate length in space-time. We live in a universe with 3 space-like dimensions and 1 time-like dimension. To be more specific we call these "extended dimensions" as in they extend to very long distances. There are some ideas of "compact" dimensions within our extended ones such that the total distance you can move along any one of those dimensions is some very very tiny amount (10^-34 m or so).
from here
No one can explain my experiencing a "super rainbow" about ten years ago. Help Reddit? by Themerryjenkster in askscience
[–]shavera[M] 325 points326 points (0 children)
haha LSD jokes, drug jokes etc. (/sarc) Pretend they've all been made and deleted already. Please stop and focus on a scientific discussion of the question.
[–]shavera 317 points318 points (0 children)
more beam time at the LHC. That's what's pissing me off about these headlines. Particle searches take time. The press should respect that, back off, and let the physicists do their work and either confirm or reject the Higgs boson.
5 years of software development, 3 months running on 8000 processors, and we have a remarkably accurate model of our universe and its evolution by shavera in science
[–]shavera[S] 317 points318 points (0 children)
Also worth noting: In their future directions section of their site, they address where their models don't match observations and what work needs to be done to improve the simulation.
The "everything you need to know about the Higgs boson" thread. by Ruiner in askscience
[–]shavera 314 points315 points * (0 children)
My tl;dw of the ATLAS talk: everything but 115-131 GeV/c^2 has been excluded to 95% confidence level. About 2.3 sigma result with a Higgs mass of 126 GeV/c^2. Next year's data should get 5 sigma results on a Higgs with this mass, and 3 sigma in each of the detection channels. (on ATLAS data alone)
Update: my tl;dw of the CMS talk: they find a 95% confidence level exclusion of the 127 GeV/c^2 - 600 GeV/c^2 region. They find a modest excess of signals in the "allowed" region of 114-127 GeV/c^2 that is consistent with either a fluctuation in the data or a standard model Higgs boson. Their results are about 1.9 sigma excess at about 124 GeV/c^2 that appears across 5 separate Higgs decay/detection channels.
|
Get the keyboard input language
When my application starts, I need to switch the keyboard language to Greek. Currently I use the statement ActivateKeyboardLayout(0, 0). When I need to switch to English (when the application terminates) I execute the same statement one more time. This works fine, but only if the language before the application's execution is English. So, before the call of the statement, I need to know if the language is Greek or English. How can I do this?
I usually use the following cycle:
{ ... }
var
  t: array[0..KL_NAMELENGTH - 1] of Char;
  x, y: string;
begin
  GetKeyboardLayoutName(t);
  y := string(t);
  repeat
    ActivateKeyboardLayout(HKL_NEXT, 0);
    GetKeyboardLayoutName(t);
    x := string(t);
  until ((x = y) or (x = '00000408'));
end;
{ ... }
Using this, the English keyboard will give the KeyboardLayoutName '00000409' and the Greek one '00000408'. These are standard language identifiers. They're the same on any Windows machine.
To display the information, you could use this little trick:
{ ... }
var
  kbd: array[0..KL_NAMELENGTH - 1] of Char;
begin
  GetLocaleInfo(LoWord(GetKeyboardLayout(0)), LOCALE_SENGLANGUAGE, kbd, SizeOf(kbd));
  Form1.Caption := kbd;
end;
{ ... }
Author: Lou Adler
Product: Delphi 7.x (or higher)
|
Thursday, June 21, 2007
Prepositional Confusion
I've heard in several contexts that English prepositions are difficult. Indeed, my Eastern European boyfriend and his sister frequently misuse prepositions. Eugene Volokh raises this same point today:
Things on paper are written in pencil, but on or with a typewriter. I'm sure there are lots of other similar examples. Oy. People who have to learn English as adults must find it nightmarish, in a Kafkaesque way.
So is English just unruly, or is something else afoot? The latter, I think. The trouble arises because prepositions are common in English: we hear examples of their use continually, so our knowledge of them is informal. We develop rules governing their use, but lacking careful thought we draw the wrong conclusions.
Eugene's example actually hinges upon ambiguous word usage, not on the prepositions. Consider: pencil is both a tool and a pigment (graphite). When people say something is written in pencil they refer to the composition of the substance marking the page, just as they might say written in type. But to say written with a typewriter is to designate the tool used, just as we might say written with a pencil. Also, written is used in two different ways.
So this example compares apples to oranges.
Thursday, June 14, 2007
Laffer et al
Reading Fred Thompson's Tax Op-Ed Today led me to these thoughts:
Progressives and even many Republicans often mention that tax-cuts do not pay for themselves, and indeed broad-based tax-cuts do not. This applies to all of the major tax-cuts from Kemp-Roth in 1981 through Bush II. What does this mean? It means that the stimulus of the tax-cut does not expand the economy to at least the extent necessary for the tax policy to be revenue neutral.
This idea of revenue-neutral tax-cuts comes to us from Art Laffer. The so-called Laffer curve describes how, above some tax rate, further revenue cannot be had, because taxes retard productivity and growth and because of evasion. In such a situation, cutting taxes leads to tax-revenue increases.
Several tax-reduction programs have been sold on the basis of their stimulating effects, but it is plain that they never seem to pay for themselves. Ergo, we must be below the Laffer point. Or not?
The trouble with aggregate analysis is that it obscures our tiered tax structure. As it happens, all of these tax-cuts have affected all of the tax brackets. This ends up impeding revenue neutrality. Tax-cuts against the lower brackets tend to be very costly to revenue, both because the middle-income bracket is very broad and because there seems to be very little growth stimulus from cutting mid-range taxes.
The picture changes dramatically, though, if we look only at the top brackets. Top-bracket cuts pay for themselves. This is the Laffer effect, only it applies to the top brackets, not the middle brackets.
Wednesday, June 13, 2007
Debating Climate Change
Garth Paltridge, the retired Director of the Institute of Antarctic and Southern Ocean Studies, writes in a recent essay.
Saturday, June 9, 2007
The Paris Affair
The scuffle over Paris Hilton's imprisonment is by now old news. It even made its way to the pages of the Volokh Conspiracy, where Prof. Kerr couldn't resist a tongue-in-cheek "call to arms", although he meant it as a foil against the calls to pardon Libby.
But seriously: was it within the power of the courts to pull Paris back into prison? I don't think so--as much as she deserves to be treated as no more than another citizen. I'm shocked that a judge would overrule the Sheriff's office in this way. Shouldn't it be the case that the continued confinement of an individual requires the concurrence of the court and a suitable executive officer?
I hope this gets slapped down on appeal, albeit with a heavy dose of dicta criticizing the sheriff's office.
|
Online Learning for Long Term Recall
December 5th, 2016 | Categories: Learning
The study examined two things in relation to web-based versus non-web-based teaching.
The two things examined were problem-solving ability and biology achievement. Tests given to two groups of students indicated what was happening with each group: one group received online learning only, while the other learned in an offline environment.
Tests applied to the results showed that the web-based students did better: they recalled the information longer after having learned it.
They also remained better at problem solving longer after the end of the experiment. This suggests that online learning could be the way of the future.
|
Wednesday, May 28, 2014
Hail to the hall - Environmental Acoustics
Showcasing a few different acoustic environments
Software mixer
Writing a software mixer is quite rewarding – a small, well-defined task with a handful of operations performed on a large chunk of data, thus very suitable for SIMD optimization. A software mixer is also one of those few subsystems that has, or can have, a real-world analogue - the physical audio mixer. I chose a conventional, physical abstraction, so my interface classes are named Mixer, Channel, Effect, etc, but there might be better ways to structure it.
The biggest hurdle when writing a software mixer turned out to be the actual mixing. Two samples playing at the same time are added together, but what happens if they both play at maximum volume? The intuitive implementation, and what also happens in the real world, is clipping. This is what most real audio programs do. Clipping is a form of distortion, where minimum and maximum audio levels are simply clamped above or below the physical threshold, effectively destroying or reshaping the waveform. In audio software, you would typically adjust the levels manually in order to avoid clipping, but in games, where audio is interactive, this can be really tricky. Say for instance you have a click sound for buttons. If there are no other sounds playing, you want the click played back at maximum volume, but if there is music in the background the volume level needs to be lowered. If there is an explosion nearby it needs to be adjusted even further to avoid clipping.
One way to reduce clipping is to transform the output signal in a non-linear fashion, so that it never really reaches the maximum level. This still has the problem that it will affect the result when there is only one sample playing. Hence, when the click is played back in isolation it won't be at maximum volume.
Some people suggest that the output should be averaged in the case of multiple channels. So if there are three sounds, A B C playing at once. You mix them as (A+B+C)/3. This is not a good way to do it, because the formula doesn't know anything about the content of each channel (B and C can for instance be silent, still resulting in A played back at a third of the volume).
What we need is some form of audio compression - an algorithm that compress audio dynamically, based on the current levels. Real audio compressors are pretty advanced, with a sliding window to analyze the current audio content and adjust the levels accordingly. Fortunately there is a "magic formula" that sounds good enough in most cases. I found this solution by Viktor T. Toth: mix=A+B-A*B, but when adapting it to floating point math I realized that a slight modification into: mix=A+B-abs(A)*B is more suitable to better deal with negative numbers. Each channel is added to the mix separately, one at a time, using the following pseudo code:
mix = 0
for each channel C
mix = mix+C - abs(mix)*C
This means that if there is only one channel playing, it will pass unmodified through the mixer. The same applies if there are two channels but one is completely silent. If both channels have the maximum value (1.0), the result will be 1.0, and anything in between will be compressed dynamically. It is definitely not the best or most accurate way to do it, but considering how cheap it is, it sounds amazingly good. I use this for all mixing in Smash Hit, and there are typically 10-20 channels playing simultaneously, so it does handle complex scenarios quite well.
I use three separate mixers in Smash Hit – the HUD mixer, which is used for all button clicks and menu sounds, the gameplay mixer, which represent all 3D sounds, and the music mixer which is used for streaming music. The gameplay mixer has a series of audio effects attached to it to emulate the acoustics of different room types.
Given how useful a reverb effect is in game development, it's quite surprising to me how difficult it was to find any implementations or even an explanation online. At a first glance, the reverb effect seems much like a long series of small echoes, but when trying it, the result sounds exactly like that – a long series of small echoes, not the warm, rich acoustics of a big church. If one tries to make the echoes shorter, it turns more and more metallic, like being inside a sewage pipe.
There is a great series of blog posts about digital reverberation by Christian Floisand that contains a lot of the theory and also a practical implementation: Digital reverberation and Algorithmic Reverbs: The Moorer Design.
It uses a series of parallel comb filters passed through all-pass filters in series. A comb filter is basically a short delay line with feedback, representing reflected sounds, while the all-pass filters are used to thicken and diffuse the reflected sound by altering the phase. I don't know enough signal theory to fully understand the all-pass filter, but it works great and the implementation is fairly easy.
In addition to the comb filters and all-pass filters I also added a couple of tap-delays (delay lines without feedback), representing early reflections on hard surfaces, as well as low-pass filters in each comb filter, allowing a great way to control the room characteristics. Christian's article suggests the use of six comb filters, but for performance reasons I cut it down to four. I'm using four tap-delays and two all-pass filters, plus a pre-delay on the entire late-reflection network.
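As a toy illustration of the comb-filter-with-low-pass building block described above (parameter names and values are mine, not Smash Hit's):

```python
class CombFilter:
    """Delay line with feedback and a one-pole low-pass in the loop."""
    def __init__(self, delay_samples, feedback=0.7, damp=0.2):
        self.buf = [0.0] * delay_samples
        self.pos = 0
        self.feedback = feedback
        self.damp = damp       # low-pass amount: higher = darker tail
        self.lp = 0.0          # low-pass filter state

    def process(self, x):
        out = self.buf[self.pos]
        # one-pole low-pass on the fed-back signal
        self.lp = out * (1.0 - self.damp) + self.lp * self.damp
        self.buf[self.pos] = x + self.lp * self.feedback
        self.pos = (self.pos + 1) % len(self.buf)
        return out

comb = CombFilter(delay_samples=4, feedback=0.5, damp=0.0)
impulse = [1.0] + [0.0] * 11
response = [comb.process(s) for s in impulse]
# The impulse emerges delayed and decaying: response[4] == 1.0, response[8] == 0.5
print(response)
```

A real reverb runs several of these in parallel (with slightly different, mutually prime delay lengths) and diffuses the sum through all-pass filters.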
All audio is processed in stereo in Smash Hit, so the reverb needs to be processed separately on the left and right channel. I slightly randomize the loop time in the comb filters differently for the left and right channel, which gives the final mix a very nice stereo spread and a much better sense of presence.
In addition to reverb I also implemented a regular echo as well as a low-pass filter. The parameters of these three filters are used to give each room its unique acoustics.
Tuesday, May 13, 2014
Cracking destruction
Procedural breakage
Physically based breakage is hard. Really hard. I remember NovodeX had some kind of FEM approximation for breakage in the early days of PhysX, but it never worked well and didn't look very convincing. I think for most game scenarios it is also completely overkill, especially for brittle fracture, like glass. I designed the breakage in Smash Hit around one rough simplification – objects always break where they get hit. This is not true in the real world, where tension builds up in the material, and objects tend to break at their weakest spot, but hey, we're not really after accurate breakage here. What we want is breakage that feels right, and looks cool.
In Smash Hit, when a breakable object gets hit, and the exerted impulse is above a certain threshold, I carve out a small volume around the point of impact and break that up into several smaller pieces that become new dynamic objects.
It sounds simple, but looks quite convincing. However, there are a few obstacles to overcome in the implementation:
• Carving out a piece from a generic object in a robust way
• Slicing that volume
• Checking for topology changes in the original object, since carving out a piece may cause it to fall apart
Plane splitting
Carving out a volume from a generic mesh in a robust way is a really complex geometric problem. Furthermore, since every object in Smash Hit is physically simulated we want the resulting objects to be well-behaved in a physics-friendly format. From a physics point of view, I represent all objects as compounds of convex shapes, so the resulting broken object must also be a collection of convex shapes. Luckily there is one safe way to split up a convex object into two new objects that are also guaranteed to be convex – slicing it with a plane.
So to carve out a piece from the original object, I slice all the convex shapes with five bounding planes, slightly randomized around the point of impact. It will increase the total number of convex shapes used in the original object, but that is inevitable due to the fact we are making the object more concave. It takes a bit of bookkeeping to get the splitting code right, but as long as the plane splitting is robust, so will the overall breakage algorithm.
The carved out piece can be split up into smaller pieces using the same plane-splitting, so all we need is a really fast, reliable plane-splitting method for convex objects.
Robust splitting
The classic problem with plane-splitting is not handling degenerate cases. Say for instance one of the faces coincide with the splitting plane, so that only one edge crosses the plane due to floating point precision. That can easily result in degenerate geometry, non-closed polyhedra or even broken data structures and hard crashes. To avoid that, I start by determining a side for each vertex (above or below the plane) and then base all edge-splitting on that information, hence only split an edge if the corresponding two vertices are on opposite sides of the plane. The split point for each edge is capped to be a certain distance away from both vertices, so that the resulting two shapes are always closed, well-defined and all edges are above a certain length.
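To illustrate the side-classification idea, here is a 2D sketch: a convex polygon split by a line, with the side of each vertex decided first and an edge split only when its endpoints land on strictly opposite sides. (The real code works on convex polyhedra in 3D and also caps the split point a minimum distance from the vertices; both are omitted here, and all names are mine.)

```python
EPS = 1e-6

def split_convex_polygon(verts, plane_n, plane_d):
    """Split a convex 2D polygon with the line n.p = d.
    Returns (below, above) vertex lists; either may be empty."""
    def side(p):
        s = p[0] * plane_n[0] + p[1] * plane_n[1] - plane_d
        return 0 if abs(s) < EPS else (1 if s > 0 else -1)

    sides = [side(p) for p in verts]
    below, above = [], []
    n = len(verts)
    for i in range(n):
        p, sp = verts[i], sides[i]
        q, sq = verts[(i + 1) % n], sides[(i + 1) % n]
        if sp <= 0: below.append(p)
        if sp >= 0: above.append(p)
        if sp * sq < 0:  # endpoints on strictly opposite sides: split edge
            dp = p[0] * plane_n[0] + p[1] * plane_n[1] - plane_d
            dq = q[0] * plane_n[0] + q[1] * plane_n[1] - plane_d
            t = dp / (dp - dq)
            cut = (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
            below.append(cut)
            above.append(cut)
    return below, above

# Split the unit square with the vertical line x = 0.5:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
lo, hi = split_convex_polygon(square, (1, 0), 0.5)
print(len(lo), len(hi))  # 4 4 -- two rectangles
```

Because a vertex lying on the plane gets side 0, it joins both halves and no degenerate edge split is attempted, which is the robustness property the paragraph above is after.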
I tried quite a few data structures for doing the plane-splitting before finding one that works well in practice. The biggest hurdle was that I needed a way to track vertices through multiple splits in order to keep vertex normals consistent. It might not be visible at a first glance, but vertex normals are used extensively to make nice gradients in the glass rendering and create a certain softness to the material. Recomputing those normals after splitting an object would create a deviation in the material that is very visible to the player. In the end, the half-edge data structure turned out perfect for the job, offering both high performance and very small memory footprint. I even use 16 bit integers instead of pointers to keep the size down.
Tracking topology
After carving out a hole in the object, it needs to be checked for topology changes. Depending on the shape, it might have been split into two or more separate objects. Finding connected components in a series of inter-connected nodes is classic graph theory. What makes out a connection in this case is whether two shapes are touching face to face. The most elegant way to do this would be to track split faces between shapes and remember the connections, but it takes a lot of bookkeeping. I'm using a more pragmatic approach, where the distance between shapes are measured geometrically. I have earlier on this blog written about the GJK algorithm and how immensely useful it is for all kinds of geometric operations. In this case, we only want to know if two object are within a certain distance of each other. This can be done as a simple overlap test in configuration space, which is extremely fast.
The algorithm for finding connected components goes like this:
Assign each object a unique id
For each object A
For each object B
If id[A] and id[B] are different and they are touching
Replace all id[A] and id[B] with min(id[A], id[B])
After we're done, all shapes with the same id represent the same rigid body. This is obviously not a very fast general algorithm due to cubic complexity, but since the number of shapes in a rigid body is usually within a few dozen, it works well in practice. The nice thing about it is the complete lack of state, which simplifies further breakage.
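The pseudo code above maps almost one-to-one onto, say, Python; the `touching` predicate stands in for the GJK-based distance query (all names here are illustrative):

```python
def merge_bodies(shapes, touching):
    """Find connected components by repeatedly propagating the minimum id
    across touching pairs, as in the pseudo code above."""
    ids = list(range(len(shapes)))
    changed = True
    while changed:                 # iterate until no id changes
        changed = False
        for a in range(len(shapes)):
            for b in range(len(shapes)):
                if ids[a] != ids[b] and touching(shapes[a], shapes[b]):
                    lo, hi = min(ids[a], ids[b]), max(ids[a], ids[b])
                    ids = [lo if i == hi else i for i in ids]
                    changed = True
    return ids

# Toy stand-in: shapes are intervals on a line, touching = overlapping.
shapes = [(0, 1), (0.9, 2), (5, 6)]
overlap = lambda s, t: s[0] <= t[1] and t[0] <= s[1]
print(merge_bodies(shapes, overlap))  # [0, 0, 2] -- first two form one body
```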
Simulation pipeline
A final word on breakage is the importance of getting the simulation pipeline right. With an off-the-shelf physics engine, contact impulses are typically analyzed after the simulation step to determine if something breaks. This gives the impression of unbreakable objects colliding and then being artificially split up afterwards. The key to natural-looking breakage is to limit the contact forces of a breakable object during the simulation and break them up while preserving some of the motion. There are tricks to mix in some of the pre-break velocity on broken pieces, but I went all in and actually do multiple solves during breakage, so a simulation step typically involves running the following steps:
1. General collision detection
2. Run solver with capped impulses on breakable objects
3. Check if some contacts reached the cap and in that case break objects
4. Collision detection for new objects
5. Run the solver again on affected contacts with capped impulses
6. Repeat from 3
7. Integrate
This means that objects are created and broken several times during a simulation step before proceeding to integration. I have capped the number of solves per simulation step to three in order to limit performance impact.
This is the first game where I'm giving my low level physics engine a proper work-out. It has been in development on and off for almost four years so it's great to finally see it in action. Good destruction ties very deeply into the simulation pipeline, so it's not ideal to plaster it on top of an existing physics engine.
Wednesday, April 2, 2014
Smashing tech
Destruction is the core game mechanic and had to be fully procedural, very robust and with predictable performance. The engine supports compounds of convex shapes, like most physics engines. These shapes are then split with planes and glued back together when shattered. Though most objects in the game are flat, the breakage actually support full 3D objects with no limitations. The breaking mechanic is built into the core solver, so that objects can break in multiple steps during the same iteration. This is essential for good breakage of this magnitude.
Due to the highly dynamic environment where there can be hundreds of moving objects at the same time, one draw call per object was not an option. Instead, all objects are gathered into dynamic vertex buffers. So there is basically only one draw call per material. Vertex transformation is done on the CPU to offload the GPU and allow culling before vertices and triangles are even sent to the GPU. CPU transformation also opens up a few other tricks not available with conventional rendering. The camera is facing the same direction all the time, which allows the use of billboards to approximate geometry. You can see this in a few instances for round shapes in particular throughout the game.
The static soft shadows are precomputed vertex lighting based on ambient occlusion. Lighting is quadratically interpolated in the fragment shader for a natural falloff. The dynamic soft-shadows are gaussian blobs rendered with one quad per rigid body. The size and orientation of the shadow need to be determined in run-time since an object can break arbitrarily. I'm using the inertia tensor of the rigid body to figure this out, and the shadow is then projected down on a plane using a downward raycast. This is of course an enormous simplification, but it looks great in 99% of all cases!
Music and sound
I wrote my own software mixing layer for this game, which enables custom sound effects processing for environmental acoustics. I use a reverb, echoes and low-pass filters with different settings for each environment in the game. The music is made out of about 30 different patterns, each with an intro and an outro, which are sample-correct mixed together during the transitions. The camera motion is synchronized to the music progression, so the music always switches to the next pattern exactly when entering a new room. This was pretty hard to get right, since this had to be done independent of the main time stepping in order to support slower devices. Hence, camera motion and physics simulation had to be completely decoupled in order to have both predictable simulation and music synchronization on all devices.
Scripting has been a very useful tool during the development of this game. Each obstacle in the game is built and animated using a separate lua script. Since each obstacle is procedurally generated, it allows us to make many different variations of the same obstacle. For instance configuring width, height and color, or the number of blades in a fan, etc. Each obstacle runs within its very own lua context, so it is a completely safe sandbox environment. I've configured lua to minimize memory consumption, and implemented an efficient custom memory allocator, so each context only requires a single memory block of about 40 kb, and there are a few dozen of them active at the same time at most. Garbage collection is amortized to only run for one context each frame, so performance impact is minimal.
The game is designed for multicore devices and uses a fork-and-merge approach for both physics and graphics. I was considering putting the rendering on a separate background thread, but this would incur an extra frame of latency, which is really bad for an action game. The audio mixing and sound decoding are done on separate threads.
If there is any area you find particularly interesting, let me know!
Tuesday, January 7, 2014
GDC Physics Tutorial
Spain unemployment
Unemployment has led to younger people migrating abroad in search of job opportunities. (REUTERS/Andrea Comas)
Spain's population is dwindling, with records showing more deaths than births in the first half of 2015. According to the National Statistics Institute (INE), deaths outnumbered births by more than 19,000.
The current population of Spain is approximately 46 million, but the INE predicts a decrease over the next half century. "If the current demographic trends continue, Spain will lose one million inhabitants in the next 15 years and 5.6 million in the next 50 years," the institute said.
This fluctuation in population is being partly blamed on migration of younger people to countries with better prospects, as Spain continues to suffer from recession and high unemployment. The lack of financial stability is also deterring women from having children, as seen by the rise in the average child-bearing age from 31.7 to 33 years.
The drop in birth rates has been attributed to the "reduction of women at child-bearing age". The rise in life expectancy has also affected the previous balance of births and deaths.
Spain's birth rate has fallen dramatically since Franco's death
"We have seen an incredible decline in the birth rate, which has been cut by half since 1975, and this trend is here to stay," said Jesús María Andrés, of the University of Palencia, which conducted a study on the consequences on the lack of demographic policies.
"We are witnessing a rapid decline in births and it seems that nobody cares. In the short term it is a relief because it means less spending for families and for the state, and nobody is complaining because no one stops to think about the future consequences," Julio Vinuesa, a demographer at Madrid's Autónoma University said of the current situation.
Also, the migration of Spain's younger population from smaller villages has left a large part of the countryside either abandoned or populated mainly by older people.
As of 2014, the net migration rate was -2 per 1,000 inhabitants: while 332,522 immigrants entered the country that year, 417,191 residents migrated abroad.
Current topics in human evolutionary genetics
Primate genomic data have become essential for the understanding of a number of topics related to our evolutionary history. For instance, they have provided new models for the actual speciation process that led to the divergence of the chimpanzee clade from our own evolutionary branch, a process that may have required millions of years and entailed extensive hybridization. New nuclear genetic data have also raised the possibility that some gene flow actually occurred between our species and other hominin groups as our ancestors colonized Eurasia. Hundreds of regions of our genome show the effects of natural selection over the last 50,000 years as people adapted to new environments during their global trek. Finally, genetic studies of the biological basis of language are accumulating rapidly and hold promise for identifying the ensemble of genetic changes responsible for what many linguists consider to be our chief behavioral apomorphy as a species: spoken language.
Voices for liberty in the ancient world
The earliest recorded voices for liberty were in ancient Greece.
It didn’t seem like a place where important things were likely to happen because as historian Herodotus (c.484-c.430 B.C.) remarked, “Greece and Poverty have always been bedfellows.” Greece is rocky and without rain for long periods, often unable to provide more than grazing for goats. But the people had an independent spirit.
The great Greek scholar Gilbert Murray reflected, “In Greece alone men’s consciences were troubled by slavery, and right down through the centuries of decadence, when the industrial slave-system ruled everywhere, her philosophers never entirely ceased protesting against what must have seemed an accepted and inevitable wrong.” Murray added, “The Greeks were not characteristically subjectors of women. They are the first nation that realized and protested against the subjection of women.”
To be sure, we don’t know much about ancient Greek authors because so much has been lost over the centuries. “Between us and them,” Murray explained, “there has passed age upon age of men...who sought in the books that they read other things than truth and imaginative beauty, or who did not care to read books at all. Of the literature produced by the Greeks in the fifth century B.C., we possess about a twentieth part; of that produced in the seventh, sixth, fourth and third, not nearly so large a proportion. All that has reached us has passed a severe test and far from discriminating ordeal. It has secured its life by never going out of fashion for long at a time.”
Fortunately, surviving Greek literature does include some wonderful voices for liberty. A fear of slavery and passion for liberty were expressed in the 6th book of Homer’s Iliad about 750 B.C. Hector was about to go to war, and his wife Andromache expressed terror that he would be killed. Hector replied:
“That is nothing, nothing beside your agony
when some brazen Argive hales you off in tears,
wrenching away your day of light and freedom!
Then far off in the land of Argos you must live,
Laboring at a loom, at another woman’s beck and call.”
Thucydides, in his History of the Peloponnesian War, written around 400 B.C., reported the text of a funeral oration by Pericles, leader of Athens. Pericles referred to personal freedom in Athens, at least for those who weren’t slaves. Pericles said, “Not only in our public life are we liberal, but also as regards our freedom from suspicion of one another in the pursuits of everyday life; for we do not feel resentment at our neighbor if he does what he likes.”
Aeschylus (c. 525-455 B.C.) was the pioneering playwright of tragedies. Aeschylus is believed to have been born in Eleusis, a city near Athens. He was a soldier who fought against the Persians at Marathon (490 B.C.) and Salamis (480 B.C.). At the time, the Persians controlled the biggest empire in Asia, and so these were great victories for the Greeks, and their confidence was expressed in their plays.
Until Aeschylus came along, plays involved a single actor who portrayed various characters (using masks) while a chorus danced. Aeschylus introduced a second actor and dialogue, and he made the chorus part of the dramatic action. As was customary, Aeschylus performed in his own plays. Of the more than 90 plays he is believed to have written, only seven survive.
Like other Greek playwrights, Aeschylus didn’t write about contemporary situations. Rather, he drew on mythology, portraying struggles among gods and ancient Greek heroes. Perhaps this was a politically safer way to offer commentary. Shakespeare and the great German playwright Friedrich Schiller did the same thing, making their plays about other people and different eras than their own.
Prometheus Bound, believed to date from Aeschylus’ later years, is the work of greatest interest from the standpoint of liberty. It isn’t much of a play, in terms of the story, but it does express quite a protest against tyranny. Because Prometheus gave human beings the gift of fire, defying the will of Zeus, he had Prometheus chained to a remote mountain. Prometheus was helpless before Zeus, yet he boldly predicted that Zeus was doomed: “in his crashing fall shall Zeus discover how different are rule and slavery.”
Scholar David Grene wrote, “Prometheus is, politically, the symbol of the rebel against the tyrant who has overthrown the traditional role of Justice and Law. He is the symbol of Knowledge against Force. He is symbolically the champion of man, raising him through the gift of intelligence, against the would-be destroyer of man.”
Sophocles (496-406 B.C.) was the most successful Greek playwright. A younger contemporary of Aeschylus, he was born in Colonus, a village near Athens, the son of a factory owner who by some accounts made armour. Apparently he was able, well-educated and well-liked, and he became an associate of the great Greek political leader Pericles. In 468, he emerged as a name to reckon with when one of his tragedies won top prizes at the Dionysia, the most famous Greek drama festival, celebrating the god of the countryside. At each festival, three dramatists were selected to present their plays, and judges awarded first, second and third prizes. The ruins of the theatre building suggest audiences as large as 15,000 people. Initially, Sophocles performed in his own plays, but reportedly stopped because he had a weak voice. Altogether, he is believed to have written 123 plays, of which seven survive, and he won first prizes at 18 Dionysia festivals, more than Aeschylus (12) or Euripides (4).
In Antigone, which Sophocles wrote when he was past 50, the heroine defied King Creon, her uncle, saying:
“Your edict, King was strong.
But all your strength is weakness itself against
The immortal unrecorded laws of God.
They are not merely now: they were and shall be,
Operative for ever, beyond man utterly.”
Euripides (c.480-406 B.C.) was one of the greatest Greek tragic dramatists. He is believed to have come from the island of Salamis. The year after his birth Athens won the Persian war, and although much of the city was in ruins, the people had preserved their independence. Athens led the Delian League of Greek city-states resisting further Persian threats, but by 440 Athens had emerged as an empire which threatened the independence of others. Sparta, Corinth and Thebes began to resist Athenian rule, which led to the Peloponnesian War (431-404 B.C.). This proved devastating for Athens. Four years before Athens was defeated, Euripides left the city for exile in Macedon, where he died. Fifteen of the 17 surviving plays date from the Peloponnesian War era, and they deal with issues of war.
Presumably because war prisoners were enslaved, and many of the enslaved were women (male prisoners commonly being put to death), Euripides wrote much about slavery and women. He protested the bad treatment of women. For instance, in Medea the heroine said:
“Surely, of all creatures that have life and will, we women
Are the most wretched. When, for an extravagant sum,
We have bought a husband, we must then accept him as
Possessor of our body. This is to aggravate
Wrong with worse wrong. Then the great question: will the man
We get be bad or good? For women, divorce is not
Respectable; to repel the man, not possible.”
Euripides’ play Hecabe showed how the pressures of war drive people to barbaric cruelty. Hecabe and other women prisoners subdued Polymestor, who had killed her son during the Trojan War, and they put out his eyes and killed his two sons.
During the Peloponnesian War, Athenian forces captured Melos, killed all the men and sold the women and children as slaves. Apparently to protest these atrocities, Euripides wrote Trojan Women, about women who, after the Trojan War, awaited their fate as prisoners of the victorious Greeks. This must have been a shocking play for Greek audiences, because Greek warriors won their most famous victory in the Trojan War. Homer's great epic The Iliad told how Agamemnon's armies captured Troy. Yet Euripides portrayed the Trojan War as unrelieved misery. The messenger Talthybius announced that Troy's former queen Hecuba would be the slave of Odysseus, Ithaca's king; Cassandra would be a slave of Agamemnon; and Andromache would be the slave of Pyrrhus, Achilles' son. These women would be sex slaves of the men who had killed their loved ones.
Hecuba wailed:
“A lying man and a pitiless
Shall be lord of me, a heart full-flown
With scorn of righteousness.”
And Andromache:
“Forth to the Greek I go,
Driven as a beast is driven.”
Cassandra vowed:
“Go I to Agamemnon, Lord most high
Of Hellas! I shall kill him, mother;
I shall kill him, and lay waste his house with fire
As he laid ours.”
The most poignant moment in the play was when the Greeks decided they must kill Andromache's little boy Astyanax, because he was the son of the brave warrior Hector. It was feared that if he lived, he would grow up and lead a counter-attack against the Greeks.
Euripides’ protest was ignored. Athens persisted with the war, sending a big expedition to Sicily, and the expedition was wiped out. Later, of course, Sparta defeated Athens and sacked the city.
The only surviving complete texts of Old Greek Comedy are those by Aristophanes (c.448-385 B.C.). He was born too late to have known the glory days of Athens, and he grew up amidst the crises of the Peloponnesian War. He was a masterful satirist who ridiculed the intellectuals and politicians who brought all the trouble. Altogether, he is believed to have written more than 40 plays of which 11 have survived.
In Lysistrata, the heroine gathers together the wives of soldiers on both sides of the war and proposes a radical way to stop it: "we must give up -- sex...the men are all like ramrods and can't wait to leap into bed, and then we absolutely refuse -- that'll make them make peace soon enough, you'll see."
One woman asked, "And if they hit us and force us to let go?" Lysistrata replied: "Why, in that case you've got to be as damned unresponsive as possible. There's no pleasure in it if they have to use force and give pain. They'll give up trying soon enough. And no man is ever happy if he can't please his woman."
Later challenged by a magistrate, Lysistrata explained: "we women got together and decided we were going to save Greece. What was the point of waiting any longer, we asked ourselves. Well now, we'll make a deal. You listen to us -- and we'll talk sense, not like you used to -- listen to us and keep quiet, as we've had to do up to now, and we'll clear up the mess you've made." And they did.
After the golden age of Greek theater, some of the texts were translated into Latin and preserved, but they were largely forgotten for more than a thousand years. The process of rediscovering them began in Italy during the 15th century. The great Dutch-born scholar Desiderius Erasmus produced fresh Latin translations of a number of Euripides' plays. Later, there were Spanish, French and English translations of plays by Sophocles as well as Euripides. Eventually Aeschylus and Aristophanes were rediscovered, too. The plays were performed, and many of the greatest names of European literature, including Jean Racine, Pierre Corneille, Voltaire, Johann Wolfgang von Goethe, Eugene O'Neill and T.S. Eliot, wrote works pursuing themes in Greek plays. Greek themes also inspired composers like Christoph Willibald Gluck, Hector Berlioz, Richard Wagner, Richard Strauss and Igor Stravinsky.
Here we see the extraordinary power of ideas to transcend their times, take a life of their own and influence future generations.
P.E. Easterling, The Cambridge Companion to Greek Tragedy (Cambridge: Cambridge University Press, 1997).
James C. Hogan, A Commentary on the Complete Greek Plays: Aeschylus (Chicago: University of Chicago Press, 1984).
James C. Hogan, A Commentary on the Plays of Sophocles (Carbondale, Illinois: Southern Illinois University Press, 1991).
Gilbert Murray, Aeschylus, The Creator of Tragedy (Oxford: Clarendon Press, 1940).
Gilbert Murray, Aristophanes, A Study (New York: Oxford University Press, 1933).
Gilbert Murray, Euripides and His Age (New York: Henry Holt, 1913).
Gilbert Murray, The Literature of Ancient Greece (Chicago: University of Chicago Press, 1956).
Gilbert Murray, The Rise of the Greek Epic (New York: Galaxy, 1960).
Rex Warner, Men of Athens (London: Bodley Head, 1972).
Moonshine History
The story of moonshine is, in many ways, the story of America.
While many Americans are just learning about the history of moonshine today through the increasing popularity of craft distilling or TV shows like Discovery Channel’s “Moonshiners,” moonshine holds a rich and proud history in America.
The skills and traditions of moonshiners continue today, passed down from father to son, from generation to generation. Moonshine is, and always will be, a unique part of America’s proud history.
Whiskey and Colonial America
Moonshining in America dates back to the early 1600’s. Moonshine legend has it that American colonist and Englishman George Thorpe was the first to distill corn whiskey in the United States in the fall of 1620 in what is now Gloucester County, Virginia. Thorpe is said to have brewed a simple beer from corn he obtained from the native Powhatan Indians. Thorpe then distilled this mash, creating the first whiskey from corn, the base of which forms moonshine and, when aged in American oak, bourbon.
Thorpe was not the first person to make whiskey, of course. Whiskey enjoys an even longer history than moonshine, dating back many hundreds of years, and early American settlers were likely versed in the Scotch-Irish traditions of whiskey making. But what they found here was an ingredient they had not had available at home – corn – which would come to launch liquor distillation, and moonshine, in America.
American Revolution
The taxation of distilled spirits has played a substantial role in the history of moonshine, and it continues today. In the early 1760's, after a series of victories by the British Empire that protected the American Colonies from the French military threat of the French and Indian War, Britain determined that America should contribute to the costs of its defense and began levying onerous taxes on the American Colonies, including taxes on distilled whiskey. This of course would ultimately lead to the Boston Tea Party, "no taxation without representation," and the American Declaration of Independence from British rule.
Distilled whiskey played a prominent role in the new America. Beer, cider, and whiskey were consumed in greater quantities than water, as the fermentation process made them a safer drinking source than contaminated water.
Whiskey Rebellion
Struggling to pay the expenses of defeating the British Empire in the American Revolution, Treasury Secretary Alexander Hamilton and President George Washington soon turned to the taxation of whiskey as a means to fund government programs. Beginning in 1791, farmers who had turned their extra corn and grain into profitable whiskey suddenly faced new excise taxes on their whiskey products.
This new tax went over about as well as you would expect, with farmers continuing to distill whiskey, while evading the federal tax collectors, or “Revenuers.” By 1794 the “Whiskey Rebellion” reached its climax when 500 armed rebels attacked the home of the tax inspector general in protest of the whiskey taxes. President Washington responded by quashing the rebellion with 13,000 militia collected from several of the largest early states.
Though Washington won the battle, collection of the whiskey tax remained problematic. The whiskey tax was finally repealed when Thomas Jefferson’s new Republican Party defeated former treasury secretary Alexander Hamilton’s Federalist Party in 1801.
Moonshine History
In 1920, the National Prohibition Act, also known as the Volstead Act, took effect, having been passed by Congress over President Woodrow Wilson's veto. The law was the climax of many years of temperance movements, culminating in the enactment of the 18th Amendment to the United States Constitution, which established the national prohibition of alcohol in the United States. The Act's purposes were:
1. to prohibit intoxicating beverages (any beverage containing more than 0.5% alcohol by volume; moonshine can run 40% to as high as 80% ABV);
2. to regulate the manufacture, sale, or transport of intoxicating liquor (but not consumption); and
3. to ensure an ample supply of alcohol for scientific research and other lawful industrial uses.
With their freedom to distill and consume whiskey again threatened as it had been by the British Empire and then their own American government, whiskey drinking and moonshining Americans again rebelled against the outlaw of “intoxicating liquors.” The Roaring Twenties, speakeasies, and classic era of mobs and gangsters soon followed. Ironically, Prohibition actually helped to increase moonshine production – though Prohibition could change the law, it could not change a person’s proclivity toward drink.
It is also during this time that many of the common stereotypes of moonshine and whiskey were solidified. Pushing the production of whiskey underground led to a general decrease in quality and sanitation practices that often produced substandard, and downright dangerous, moonshine. Many of these stereotypes still survive today.
Prohibition is a fascinating period of our American history. Ordinary, otherwise law-abiding, American citizens driven underground merely because they enjoy the fermented and distilled fruits of corn. The creation of a whole subculture of citizens giving money to bootlegging gangsters like Al Capone to help fund their criminal activities. The formation of speakeasies, special secret clubs intricately designed to hide the alcohol consumption taking place within, often with ordinary folks drinking next to high-powered politicians that helped keep the law in place. Just an amazing time in American history.
The years 1920-1933 saw a rapid growth in bootlegging and the creation of large, intricate moonshining networks. Prohibition forced moonshiners to the hills to produce liquor, and whiskey drinkers underground to consume it.
It was during the lead up to and enactment of Prohibition that the culture and history of moonshining took hold in the United States, particularly in the South and Appalachia. Prohibition served to create an increased demand for moonshiners’ supplies, driven by the growth of underground drinking in major metropolitan cities like New York and Chicago. Though Prohibition was finally repealed on December 5, 1933 by the 21st Amendment, it helped cement the traditions and folklore of America and moonshiners.
It is said that in 1941 Lloyd Seay won the National Stock Car Championship in a Ford coupe he had driven just twelve hours before on a moonshine bootlegging run. A day later, Lloyd Seay was shot and killed by his cousin in an argument over sugar – a primary ingredient of moonshine.
Car culture had begun to take hold in America by the 1940’s, and with it came the American muscle car. Moonshiners of the 1940’s had every bit the need to outrun the Revenuers as did their early American predecessors, but had a little more horsepower available to them.
In order to evade the tax collectors, and the law, bootleggers souped up the engines and suspensions of their cars while leaving the exteriors unchanged as a means to evade, and outrun, police should they happen upon a moonshine run. Moonshine runners became skilled drivers, valued on their abilities to outrun and outsmart the law. Bootleggers began to hold informal races of their moonshine running cars which, moonshine legend has it, led to the organization of these races into auto racing and, eventually, stock car racing.
One famous moonshine runner named Junior Johnson is one of the legends of early NASCAR. It is said that he quit illegal moonshining in 1960 after winning the Daytona 500. But today, ol’ Junior has gone legal, selling Junior Johnson’s Midnight Moon Carolina Moonshine 80 proof corn-based liquor.
Popcorn Sutton
No history of moonshine is complete without mentioning immortal moonshiner Marvin “Popcorn” Sutton, the most infamous of modern moonshiners.
Popcorn came from a long line of moonshiners and made a lifelong career of his trade. Sutton's legend grew in the 2000's with several television and documentary appearances, including the 2002 documentary The Last One, as well as recognition from his self-published autobiography and moonshine production guide Me and My Likker.
Popcorn’s legendary run came to an end in 2009 when, facing 18 months in federal prison, Sutton chose to end his life and his moonshining career. His self-authored tombstone reads “Popcorn Said Fuck You.”
Moonshining: An American Tradition
Moonshine Still
Illegal moonshining has waned somewhat from its peak in the 1960's and 1970's. But today a growing number of legitimate, legal moonshines are coming onto the market, many from well-known legendary moonshiners like Tim Smith of the Discovery Channel's "Moonshiners" and even a 'shine supposedly based on Popcorn Sutton's own recipe and affiliated with Hank Williams Jr.
The rise in legal moonshine has come despite heavy government taxation of moonshine whiskey, which runs as much as $15.50 in tax alone for a single gallon. The rapid rise of craft breweries and homebrewing has also given rise to a new generation of micro-distillers. It has also paved the way for a growing interest in home distilling, despite the illegality of distilling liquor, even for personal consumption, without a federal license.
Though it is illegal to distill liquor without a permit, it is still legal to own a whiskey still or moonshine still. You can easily find a moonshine still for sale online via marketplaces like Amazon and through a number of still makers providing everything from copper stills to pot stills to stainless steel stills to whiskey still kits.
The Future of Moonshine
Though technology has likely ended the golden era of moonshiners making ‘shine under the cover of the hills by moonlight for good, the traditions of moonshining will live on forever as a fundamental metaphor of American culture.
The recent rise in popularity and interest in moonshine and craft distilling will increase the recognition and appreciation of moonshine, both as a spirit and a uniquely American cultural phenomenon. Tens of thousands of Americans will continue to practice the craft of distilling and efforts to remove penalties for home distilling have already made their way through several state legislatures. If the history of moonshine shows us anything, though, it’s that whiskey will always find a way.
Until then, drink what you think is right. Cheers.
Let’s Hear from You
Have a personal story about your own history with moonshine? Let’s hear it – leave a comment below.
Leave a Reply
For ancient people, aquamarine was a symbol of hope, happiness, good health, eternal youth and lasting love. In the Middle Ages it was used to decorate statues of the Virgin Mary, and at that time it was seen as a symbol of chastity and pure love.
aquamarine ring
Newlyweds in France exchanged silver rings set with aquamarine, believing it would bring them a long and happy marriage.
This stone is a mineral form of beryl. Its name derives from the Latin aqua marina – sea water – because the color of the stone recalls warm tropical seas. Its characteristic color is caused by the presence of iron and ranges from sea blue and greenish light blue to dark blue. The mineral shows dichroism, which means it looks blue from one angle and transparent from another. To remove yellow shades, aquamarine is often heat-treated, which makes the blue purer. This effect is permanent, and there is no danger of the stones fading with time. Unlike emeralds, aquamarines are almost free of flaws. The more intense the color of the stone, the more expensive the gem. The finest aquamarines are found in Brazil; there are deposits also in Russia, India and Africa.
aquamarine crystals
aquamarine monocrystals
Aquamarines from several deposits are notable for their intensely greenish-blue color. The stones mined there have earned their own names:
"Santa Maria" is quality designation for particularly valuable aquamarines, named after the mine with the same name in Ceará state in Brazil.
"Santa Maria Africana" - designation of quality for valuable aquamarines from Mozambique.
"Aquamarine Maxixe" – this stone has a distinctive deep blue color and was discovered in the Maxixe mine in Minas Gerais state in Brazil. However, the color of the crystals was unstable: in daylight it turned yellow and reddish-brown.
aquamarine colours
White inclusions called "chrysanthemums" and "snow signs" are sometimes observed in aquamarine. They are layers of fine crystals or thin, needle-like inclusions.
In sunlight the stone gradually fades until almost colorless, while under indoor light it becomes brighter. It is also believed that this stone changes color depending on the weather and the mood of its owner; perhaps that is why in ancient times it was used as a barometer. If the gemstone became cloudy and green, a storm was expected; if the color changed in clear weather, it was a sign that the owner was in a bad mood.
Natural crystals of this mineral can reach record sizes. Perhaps the largest crystal in the world was found in 1910 in Brazil: a beautiful hexagonal stone 48.3 cm long and 41 cm in diameter, weighing 110.2 kg. It has an azure hue that shades to clear green at the sides and to yellow in the transition zone.
aquamarine monocrystal
aquamarine cabochon
In jewelry it is very difficult to distinguish natural aquamarine from blue topaz, synthetic spinel and quartz, as well as from glass. Doublets are also used as imitations. But growing synthetic crystals is difficult and not cost-effective.
The majority of currently produced aquamarine is treated by thermal processing. Temperatures up to 375 degrees can reduce or completely remove the green tint, and a more aggressive heat-treatment scheme can sometimes significantly enhance the blue color. In this way even a low-valued light green beryl can often be turned into a decent aquamarine.
And some final words. This gem is extremely beautiful – almost as popular as, but more expensive than, traditional rubies, sapphires and emeralds. It doesn't suit yellow gold; however, set in white gold or silver, the precious stone creates incredibly soft and irresistibly romantic jewelry. Women around the world love it – its amazing blue shades suit almost all colors of skin and eyes.
aquamarine gems
Why Are We So Allergic?
The Hygiene Hypothesis
We are too clean. A growing body of research shows that children exposed to lots of germs early in life are less likely to develop allergies, asthma or autoimmune disorders as they grow up. When these microbes enter the gut, they keep an over-reactive part of the immune system reined in. In other words, exposure to common germs keeps the immune system properly functioning, busy, and able to avoid over-reacting when encountering nasty bugs and other biological material later in life. We are meant to encounter some microbes and dirt when we are young; it's how our immune system grows strong.
Today, we have developed a cleaner lifestyle, and our bodies no longer need to fight germs as much as they did in the past. As a result, the immune system has shifted away from fighting infection toward more allergic tendencies. There is also our love of antibacterial everything, and now evidence suggests an association between exposure to triclosan (an ingredient in antibacterial soaps) and allergies.
What You Can Do: Ditch the antibacterial soaps, disinfectants and toxic chemical cleaners. Use natural cleaning products instead, such as vinegar, baking soda, essential oils, and castile soap. Simple hand washing with soap and water remains one of the most effective ways to decrease the risk of spreading infections after preparing food, using the toilet, or after coughing or blowing your nose. Let your children play and get dirty, and remember: you don't have to wash or sanitize everything.
The American Diet
Our diet has increasingly become more processed, with less of us eating anything that resembles real food. We’ve also seen the introduction of genetically modified organisms (GMOs) into our food supply. These additives and pesticides may be changing our gut flora so that our bodies can no longer handle certain foods and creating sensitivity. Researchers are discovering that these foods impact the immune system, which influences allergic reactions. Your immune system is the barrier that protects your body against impurities and when it’s broken down the body is less able to protect against allergens and infections.
In addition, there are foods that exaggerate inflammation because they themselves are irritants. These include sugar, alcohol, grains, and processed foods – staples in many of our diets. We ingest far too many foods rich in omega-6 fatty acids – found in processed and fast foods – and far too few rich in omega-3 fatty acids, such as those found in cold-water fish.
What You Can Do: Avoid histamine containing foods (foods that are aged and fermented – beer, alcohol, cheeses, pickles, sausage, etc). Eat more anti-inflammatory foods – fresh produce, fish and nuts (less sugar, grains, and processed foods), and drink more water.
The Rise of C-Sections
Babies born through c-sections may have different immune systems. According to the National Institutes of Health, researchers evaluated more than 1,200 newborns when they were 1 month, 6 months, 1 year and 2 years old. By age 2, babies born by cesarean section were five times more likely to have allergies than those born naturally when exposed to high levels of common household allergens such as pet dander and dust mites. Babies who are born by cesarean and never make that trip through the birth canal apparently never receive some key bugs from their mothers.
What You Can Do: There’s no question that cesarean surgery is both necessary and unavoidable at times, and can save both mothers’ and babies’ lives when performed appropriately. However, it’s important to remember that you have the right to refuse or consent to any procedure and you certainly have the right to make an informed decision when it comes to your health and the health of your baby. If you did have a cesarean for any reason, I recommend using a high-quality infant probiotic to help populate your baby’s gut with beneficial flora.
The Chiropractic Connection
It is vital for your health that your immune system and inflammation response are well balanced. The immune system destroys not only foreign tissue but also unwanted parts of our own tissues. Inflammation is your body's effort to deal with damaged tissue and begin repair. Upset in these systems can lead to your body wreaking havoc on itself. Your body's inability to properly adapt to a changing environment is what we label a seasonal allergy. After all, everyone breathes the same ragweed pollen, yet not everyone has seasonal allergies.
A healthy spine is essential for a healthy nerve system, which coordinates ALL of the other systems in your body. Only recently have researchers uncovered the molecular connections between the nerve system, the immune system, and inflammation. As chiropractors, we have a direct influence over the nervous system. We now know through research that chiropractic care has beneficial effects on immunoglobulins, B-lymphocytes (white blood cells), pulmonary function and other immune system processes. Besides the growing research, there are countless case studies of patients (including myself) who have seen drastic improvements in their allergies from a balanced immune system from regular chiropractic care.
|
Tuesday, September 25, 2012
astro picture for the day
Nasa Hubble Space Telescope image
This picture combines data gathered over a ten-year period. Telescopes before the Hubble Space Telescope could view the universe back to about seven billion light years; Hubble refined the age of the universe since the Big Bang to 13.7 billion years, and has imaged objects here as distant as 13.2 billion light years.
1 comment:
1. http://www.dailymail.co.uk/sciencetech/article-2148201/Sci-fi-reality-DNA-turned-living-drive-able-store-read-erase-data.html
and a black hole laser,
These devices simulate certain features of black holes, just not the gravitational attraction of an actual piece of mass crushed to densities that would create a real black hole. Still, this metamaterial idea is fascinating!
The DNA-nanotechnology article linked first is, in my opinion, proof that we are in a nanomanufacturing era. Granted, it is not one that will turn the world upside down and let just about everyone live for free in perfect health for as long as they please; but it is capable of making things and of progressing to a much more capable nanomanufacturing system.
|
Rhodes Scholarship, do black people understand it’s history?
In 1890, Rhodes became Prime Minister of the Cape Colony and implemented laws that would benefit mine and industry owners. He introduced the Glen Grey Act to push black people from their lands and make way for industrial development. The growing number of enfranchised black people in the Cape led him to raise the franchise requirements in 1892 to counter this trend, with drastic effects on the traditional Cape Qualified Franchise.
The big question is: how many black Africans did this man assist in killing during his presence in Africa?
Hitler is branded a murderer for killing six million Jews.
Cecil Rhodes killed millions of black people, yet he is remembered as a hero, with the "famous" Rhodes Scholarship named after him.
Rather interesting, isn't it?
Still want that Rhodes Scholarship ?
|
The Copenhagen Wheel: Turn Any Bike into an Electric Assist Bicycle
Here at BEVI we are interested not only in the electric car, but in all electric vehicles available on the market. This summer the BEVI interns have focused their research and attention on smaller and more cost-effective EVs, such as the electric assist bicycle. An electric assist bicycle is just like any other bike you pedal, except there is a battery-operated motor powering the back wheel. The motor is there only when you need it: to help with long distances, biking up hills, or even just making it to work without breaking a sweat. Electric assist bicycles promote physical movement, are an affordable alternative to cars, and help reduce traffic. We see the electric assist bicycle as the "last mile" solution, the vehicle that connects people to other modes of transit on their daily commute.
Developed at MIT, The Copenhagen Wheel is a rear wheel that transforms your preexisting bike into an electric assist bike. The wheel itself contains a motor, batteries, sensors, bluetooth technology, and an embedded control system. According to their website, the wheel “learns how you pedal and integrates seamlessly with your motion, multiplying your pedal power 3-10x.” The wheel comes with an app that enables you to control your speed, track your usage, and allows you to unlock or lock your wheel. Additionally, every customer has access to a software development kit that encourages the developers among them to create their own innovative applications. What’s most compelling to me is that anyone can turn their bike into an electric assist bike, making electric vehicles accessible to more people.
|
Goldfish Respiration Essay
Custom Student Mr. Teacher ENG 1001-04 8 April 2016
Goldfish Respiration
The purpose of this experiment is to test the effects of temperature on the respiration (breathing) rate of goldfish. To measure the change in respiration, small amounts of crushed ice were added to the water and the fish's behavior was noted; the goldfish then underwent the same procedure four more times and was observed again. The experiment was conducted by four students using one goldfish, a 250 mL and a 150 mL beaker, a thermometer, crushed ice, aquarium water, a stirring rod, and a stopwatch. Our results showed that the goldfish's respiration initially quickened when the temperature dropped and the fish showed signs of stress; however, by the third evaluation the respiration rate had decreased and the fish showed signs of distress. Introduction
Changes in water conditions affect the respiration rate of goldfish. Through homeostasis, an organism responds to changes in its environment to maintain a constant internal state; if an organism cannot adjust, severe changes can result and may even kill it (index/science_of_biology_files/Fish%20Respiration.pdf). Organisms have specialized structures to carry out respiration, which takes in oxygen from the environment and releases carbon dioxide as a waste product (13). In aquatic animals such as the goldfish, the gills carry out respiration, and the gill filaments allow exposure to the oxygen-laden water (13). When the goldfish's operculum closes, the mouth opens; when the mouth opens, water passes over the gill filaments as the mouth closes and the pharynx contracts (13). In our experiment, we observed the respiration rate of the goldfish as crushed ice was added. The goldfish started at a temperature of 25°C with a respiratory rate of 73 breaths per minute. In the first half of the experiment, after crushed ice was added, the respiratory rate increased to 93 breaths per minute as the temperature dropped to 20°C, and the stressed goldfish moved very quickly in the beaker. When more crushed ice dropped the temperature to 17°C, the respiration rate fell to 64 breaths per minute and the goldfish appeared distressed. Materials and Methods
For our experiment, we used a 250 mL beaker containing aquarium water and a goldfish, and placed a thermometer in the beaker to take the temperature. After the goldfish adjusted to the new environment for three minutes, one person set the stopwatch for one minute while another counted the goldfish's breaths and recorded the data. The group then added crushed ice to lower the temperature by approximately 2°C, waited one minute for the goldfish to adjust, set the stopwatch for one minute, counted the breaths, and recorded the data. The same procedure was repeated five more times and the data recorded. The group also observed the behavior of the goldfish and the stress it experienced. Results
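The readings reported in the introduction lend themselves to a quick tabulation. A minimal Python sketch, using only the three data points stated above, makes the trend easy to see:

```python
# The three readings reported above:
# (water temperature in deg C, breaths per minute)
observations = [(25, 73), (20, 93), (17, 64)]

for temp, breaths in observations:
    print(f"{temp:>2} deg C -> {breaths} breaths/min")

# Change in respiration rate between successive readings
changes = [b2 - b1 for (_, b1), (_, b2) in zip(observations, observations[1:])]
print(changes)  # [20, -29]: the rate first rose under stress, then fell
```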
|
Geometry All Around Us!
By Aala Nasir
Question #1
Given the measurements above, can you find the measurement of angle x? Can you find the measurement of angle y?
Answer: x=88 degrees; y=42 degrees, due to the Exterior Angle Theorem, which states that the sum of the measures of the remote interior angles equals the measure of the exterior angle.
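The Exterior Angle Theorem invoked above is easy to check numerically. The sketch below is illustrative only; since the figure is not reproduced here, the second remote interior angle (46 degrees) is a hypothetical value chosen so the arithmetic works out:

```python
def exterior_angle(remote_a, remote_b):
    """Exterior Angle Theorem: an exterior angle of a triangle equals
    the sum of the measures of the two remote interior angles."""
    return remote_a + remote_b

# Hypothetical remote interior angles of 42 and 46 degrees
# give an exterior angle of 88 degrees:
assert exterior_angle(42, 46) == 88

# The interior angle adjacent to that exterior angle is its supplement:
assert 180 - exterior_angle(42, 46) == 92
```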
Question #2
Given that the fences are parallel and that the one in the middle is the transversal, find the measurement of angle x. What type of angles is represented above?
Answer: 108 degrees; alternate interior angles (In order for two lines to be parallel, the measure of the alternate interior angles must be equal.)
Question #3
Given the measurement above, find the measurement of angle x and y. What type of angles are represented above?
Answer: x=84 degrees; y=96 degrees; vertical angles. (To solve, first subtract 96 from 180 to get 84, so x=84 degrees. Because the measures of vertical angles are equal, y=96 degrees.)
Question #4
Given the measurements of all three angles, list the sides in order from longest to shortest.
Answer: Line b, Line c, Line a. (The largest angle is opposite the longest side, and the smallest angle is opposite the shortest side.)
Question #5
Find the sum of all the exterior angles in the polygon above.
Answer: 360 Degrees
Question #6
What is represented on the ceiling of the building above?
Answer: Tessellation (using triangles).
Question #7
What type of lines do the signs above represent? (parallel, oblique, skew, perpendicular)
Answer: Skew lines! This is because the signs are neither intersecting nor parallel.
Question #8
In order to prove the triangles above congruent, what postulate would you use?
Answer: Side Angle Side Postulate. (SAS)
Question #9
What is the sum of all the interior angles above? What is the measure of one interior angle?
Answer: 1080 degrees; 135 degrees
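The answer above follows from the standard interior-angle formulas; a total of 1080 degrees with 135 degrees per angle corresponds to a regular octagon. A short Python check:

```python
def interior_angle_sum(n):
    """Sum of the interior angles of an n-sided polygon, in degrees."""
    return (n - 2) * 180

def one_interior_angle(n):
    """One interior angle of a REGULAR n-sided polygon, in degrees."""
    return interior_angle_sum(n) / n

# 1080 degrees total and 135 degrees each match a regular octagon:
assert interior_angle_sum(8) == 1080
assert one_interior_angle(8) == 135
```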
Question #10
What type of triangle is shown above?
Answer: Obtuse, Isosceles
|
Optic Flow Tutorial
In Optic Flow you will take the role of a driver. While driving down the road various hazards or other vehicles will obstruct your path. Your task is to assess if you are going to collide with a hazard, while also spotting other objects you are supposed to be aware of and looking for.
Here is how Optic Flow works:
1. When you click START you will begin moving along the road. You need to be aware of two things:
   a. Road signs that appear in different colours and shapes on the overhead sign, such as the yellow triangle below:
   b. Hazards on the road that you may collide with, such as:
2. Once you click START, quickly note the shape and colour displayed on the overhead sign and click/touch the object with the same shape and colour. It could be placed on another car approaching you or on the roadside signs, as in the example below.
3. In parallel, you have to keep an eye on hazards on the road.
4. If you come across a hazard that is about to hit you, press the Spacebar to get rid of it. If the hazard will not hit you, just ignore it and don't press the Spacebar. As a driver, it's important to train yourself to assess this correctly.
More useful information:
• Shapes on vehicles become increasingly alike.
• Driving conditions change, making it increasingly harder to avoid hazards and identify target shapes. As with real-world driving, the speed driven must be reduced to successfully navigate the changing daylight and weather conditions.
• Type and number of hazards differ.
• Exercise adapts to your performance by reducing the time given to track down the target shape.
• Moving from the desert to the suburbs, and on to the city scene, the backgrounds become more of a distraction and complex.
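The core judgment the exercise trains, deciding whether a hazard is actually on a collision course, can be thought of as a time-to-collision estimate. The sketch below is purely illustrative (it is not the game's actual logic) and assumes straight-line motion along the lane:

```python
def will_collide(gap_m, closing_speed_mps, same_lane, horizon_s=5.0):
    """Rough hazard check: a collision is expected within horizon_s
    seconds if the hazard shares our lane and the gap is closing."""
    if not same_lane or closing_speed_mps <= 0:
        return False
    return gap_m / closing_speed_mps <= horizon_s

# A hazard 40 m ahead, closing at 10 m/s in our lane: press the Spacebar.
assert will_collide(40, 10, same_lane=True)

# A hazard in another lane can be ignored.
assert not will_collide(40, 10, same_lane=False)
```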
|
Submitted by: Alankrita Singh (Roll No. 08, Computer Science)
Today's microprocessors sport a general-purpose design which has its own advantages and disadvantages.
Advantage: one chip can run a range of programs, which is why you don't need separate computers for different jobs, such as crunching spreadsheets or editing digital photos.
Disadvantage: for any one application, much of the chip's circuitry isn't needed, and the presence of those "wasted" circuits slows things down.
Hardware (Application-Specific Integrated Circuits):
• Advantages: very high performance and efficiency
• Disadvantages: not flexible (can't be altered after fabrication); expensive
Chameleon (reconfigurable) computing:
• Advantages: fills the gap between hardware and software; much higher performance than software; a higher level of flexibility than ASICs
Software-programmed processors:
• Advantages: software is very flexible to change
• Disadvantages: performance can suffer if the clock is not fast; the instruction set is fixed by the hardware
An application-specific integrated circuit (ASIC) is an integrated circuit (IC) customized for a particular use; for example, a chip designed solely to run a cell phone is an ASIC. Designers of digital ASICs use a hardware description language (HDL), such as Verilog or VHDL, to describe the functionality of ASICs.
A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by the customer or designer after manufacturing. FPGAs contain programmable logic components called "logic blocks" and a hierarchy of reconfigurable interconnects that allow the blocks to be wired together.
A chameleon processor is a reconfigurable microprocessor with erasable hardware that can rewire itself dynamically: when a particular piece of software is loaded, the present hardware design is erased and a new hardware design is generated by making a particular set of connections active while leaving others idle. This allows the chip to adapt effectively to the processing tasks demanded by the particular software it is interfacing with at any given time. A reconfigurable processor usually contains several parallel processing computational units known as functional blocks. While the chip is being reconfigured, the connections inside the functional blocks and the connections between the functional blocks change.
Reconfigurable processors are currently available from Chameleon Systems, Billions of Operations (BOPS), and PACT (Parallel Array Computing Technology). Among those, only Chameleon provides a design environment that allows customers to convert their algorithms to a hardware configuration themselves. This defines the optimum hardware configuration for that particular software. It takes just 20 microseconds to reconfigure the entire processing array.
In a conventional ASIC or FPGA, multiple algorithms are implemented as separate hardware modules; four algorithms would divide the chip into four functional areas. With reconfigurable technology, many algorithms are loaded into the entire reconfigurable fabric one at a time. The end result is much higher performance, lower cost and lower power consumption.
A new chip must determine internally the set of function blocks (FB), the rules of their interconnection and the ways the inputs and outputs are connected. The machine design supposes that some pins are treated as configuration inputs and others as data or control inputs and outputs. The various possible connections between functional blocks are encoded as bits known as configuration bits. The resulting configuration stream is downloaded into configuration memory through the configuration inputs. The most important parts are the logic circuits, which configure the function blocks according to the data in the configuration memory and are used to construct the circuit. Thus a new reconfigurable machine is established.
The chip contains:
• a 32-bit RISC processor
• a 64-bit memory controller
• a 32-bit PCI controller
• the reconfigurable processing fabric (RPF)
• a high-speed system bus
• programmable I/O (160 pins)
• a DMA (Direct Memory Access) subsystem
• a configuration subsystem
The Fabric provides the Chameleon chip's unmatched algorithmic computation power. The fabric is divided into Slices, the basic unit of reconfiguration; the CS2112 has 4 Slices with 3 Tiles in each, and each tile can be reconfigured at runtime. Tiles contain Datapath Units, Local Store Memories, 16x24-bit multipliers and a Control Logic Unit. In total the fabric consists of 84 32-bit Datapath Units and 24 16x24-bit multipliers operating at 125 MHz, providing up to 24,000 million 16-bit operations per second and 3,000 million 16-bit multiply-accumulates per second.
These chips include banks of Programmable I/O (PIO) pins which provide tremendous bandwidth: each Programmable I/O bank (i.e. each slice) of 40 pins delivers 0.5 GBytes/sec of I/O bandwidth.
with eConfigurable Technology. . each Slice can be configured independently.eCONFIGURABLE™ TECHNOLOGY: eConfigurable™ Technology is instantaneous reconfiguration. Swapping the Background Plane into the Active Plane requires just one clock cycle. this operation does not interfere with active processing on the Fabric. This used for technology reconfigures fabric in one clock cycle and increases voice/data/video channels per chip. Loading the Background Plane from external memory requires just 3 µsec per Slice. As mentioned earlier.
C~SIDE Development Tools: The Chameleon Systems Integrated Development Environment (C~SIDE) is a complete toolkit for designing, debugging and verifying RCP designs. C~SIDE uses a combined C-language and Verilog flow to map algorithms into the chip's reconfigurable processing fabric (RPF). With this software development tool, Chameleon Systems gives customers the ability to do the programming themselves, thus keeping their algorithms secret.
eBIOS (eConfigurable Basic I/O Services): eBIOS provides an interface between the Embedded Processor System and the Fabric, supplying resource allocation, configuration management and DMA services. The eBIOS calls are automatically generated at compile time, but can be edited for precise control of any function.
Today's system architects have at their disposal an arsenal of highly integrated, high-performance semiconductor technologies, such as application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), digital signal processors (DSPs), and field-programmable gate arrays (FPGAs). However, system architects continue to struggle with the requirement that communication systems deliver both performance and flexibility. Enter the reconfigurable processor, an entirely new category of semiconductor solution that serves as a system-level platform for a broad range of applications.
Advantages:
• early and fast design
• reduced development cost
• quicker adaptation to new requirements and standards
• increased bandwidth
• reduced power
• reduced manufacturing cost
Disadvantages:
• Inertia: engineers are slow to change, and inertia is the worst problem facing reconfigurable computing
• RCP design requires a comprehensive set of tools
• a learning curve for designers unfamiliar with reconfigurable logic
Software-Defined Radio (SDR): the SDR concept is applied in cell-phone technology.
High-Performance DSL (Digital Subscriber Line): DSL technology brings high bandwidth to home users.
Wireless base stations: reconfigurable technology mainly focuses on base stations and their unpredictable combination of voice and data traffic. Base-station infrastructure must be adaptive enough to accommodate those requirements; with a fixed processor, the channels must be able to support both simple voice calls and high-bandwidth data connections.
Wireless Local Loop (WLL): reconfigurable technology is also widely applied in wireless local loops because of its high processing power, bandwidth and reconfigurable nature.
These new chips, called chameleon chips, are able to rewire themselves on the fly to create the exact hardware needed to run a piece of software at the utmost speed. Their advantages are that they can create customized communications signal processors, adapt more quickly to new requirements and standards, and lower development costs and risk. Applications include wireless base stations, fixed wireless local loop, software-defined radio, xDSL concentrators, voice compression, multichannel voice compression, multiprotocol packet and cell processing, DSP, and high-performance, data-intensive embedded telecom and datacom applications with increased performance and channel count.
Thank you.
|
Written by Michael B. Strauss, MD, FACS, AAOS
As the population gets older, coupled with increased awareness of good health practices and the recognition that fitness allows participation in activities generally associated with younger individuals, decisions need to be made about which activities are appropriate for the older aged scuba diver. It is essential to appreciate the distinction between chronological and physiological age. Three factors, namely fitness, comorbidities, and mobility and strength, are fundamental when making decisions about participation in activities in general and in scuba diving in particular for older adults. There is almost always a time to "call it quits" for everything, including scuba diving. This article discusses the factors critical for deciding whether or not the older aged diver should continue scuba diving activities.
It is noteworthy that when Social Security was enacted in 1935, the average age at death was 65. During that time only one in 23 workers who paid into Social Security lived long enough to collect benefits. Today, one in three workers collects Social Security benefits. Besides better attention to good health practices and more attention than ever to fitness, one-third to one-half of the medical or surgical treatments now available did not exist in 1965. The message is our society is getting older; we’re living longer and want to continue to do things formerly consider inappropriate in former times. Scuba diving is no exception.
Physiological versus Chronological Age
To define “old” requires amplification. It requires differentiation between chronological and physiological age which are not always the same (Figure 1).
Chronological age is a person’s actual age and does not consider cognitive function, comorbidities, experience, physical fitness, judgment, mobility and strength. In contrast, physiological age is highly subjective, often evoking comments that a person appears younger (or older) than his/her stated age would suggest. We consider physical fitness, health status, and cognitive function as the three main considerations for defining apparent age.
Measures used to judge physiological age include the ability to do activities of daily living, capacity for exercise and sport pursuits, participation in social activities, and work status. Other, less objective considerations include cognitive function, creativity, capacity to recover from illness or injury, tolerance of sleep deprivation, and quickness in regaining physical fitness (i.e. getting into "shape") after periods of inactivity. We designed a quick, easy-to-use tool to quantify a person's health status (Table 1).
The tool consists of summating five assessments each graded from 2 (best) to 0 (worst) to generate a 0 to 10 score. The assessments include: 1) ability to do activities of daily living, 2) ambulation, 3) comorbidities, 4) smoking/steroid history (whichever gives the lower score) and 5) neurological deficits. Scores of 8 to 10 points quantify the person as “healthy,” 4 to 7 points as “impaired,” and 0 to 3 points as “decompensated.” Information quickly obtained from this tool helps assess the general health of a SCUBA diver and provides guidelines as to who should and should not dive.
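The scoring rules described above translate directly into code. A small Python sketch follows; the parameter names are our own shorthand for the five assessments, not terminology from the article:

```python
def health_score(adl, ambulation, comorbidities, smoking_steroids, neuro):
    """Sum five assessments, each graded 2 (best) to 0 (worst), and
    classify the 0-10 total per the screening tool described above."""
    parts = (adl, ambulation, comorbidities, smoking_steroids, neuro)
    if any(p not in (0, 1, 2) for p in parts):
        raise ValueError("each assessment must be graded 0, 1, or 2")
    score = sum(parts)
    if score >= 8:
        category = "healthy"
    elif score >= 4:
        category = "impaired"
    else:
        category = "decompensated"
    return score, category

assert health_score(2, 2, 2, 2, 1) == (9, "healthy")
assert health_score(1, 1, 1, 1, 1) == (5, "impaired")
assert health_score(1, 0, 0, 1, 0) == (2, "decompensated")
```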
As we get older there are changes in the function of our body organs and organ systems (Table 2 and Elements A-E).
The appreciation of changes that occur with aging in different body systems help in decision making for participation, modifying or discontinuation of an activity. For example, contrast the time an athlete can actively play football versus golfing or swimming. Changes occur in our bodies when getting older and almost all hinder performance. The one change that may improve with aging (and experience) is judgment. Improved judgment can help compensate for deterioration in the other items that occur in the body which impose limitations for SCUBA diving. For example, the older diver may limit his/her diving to easily accessible sites that have optimal diving conditions. Consequently, improved judgment as a favorable change has to be balanced with changes which can deteriorate function when making decisions about SCUBA diving as one gets older (Figure 2).
While the internet is replete with information defining aging (older age, seniors, elderly, older adults, geriatrics, silver years, mandatory retirement, etc.), the distinction between old and not old is not clear-cut when considering humans. Some gerontologists distinguish between the young-old (55-74) and the old-old (75 and older).1 The question of "old" is clouded by societal norms. One example is becoming eligible for full Social Security benefits in the USA at 66 years of age; even this criterion is expected to rise as the productivity and longevity of our population beyond this arbitrary age increase.
Another way to look at aging is through physical and cognitive changes such as graying of hair, loss of near vision, decreased libido, impaired hearing, slower and/or reduced cognition, reduced ability to recover from injury, thinning of bones, muscle atrophy, decreased food tolerance and intake, difficulty integrating new technology (i.e. Future Shock), and so on (see Table 2). However, these changes clearly occur over a wide range of ages and with extremely variable degrees of seriousness.
A third way of looking at aging is through quotations such as “old age is when you feel old,” “you’re old when you’re no longer able to do things that you able to do when younger,” “aging is when it is hard to get out of bed in the morning and you are stiff all over,” “old age is when you no longer have a youthful outlook,” “old age occurs when managing ailments (i.e. with medications, restricted activities, etc.) takes precedence over doing activities.”
For our purposes with respect to SCUBA diving in older ages, the three crucial considerations are fitness, comorbidities and mobility/strength. These three criteria are the focus of this article.
Fitness as a consideration for SCUBA Diving in Older People
We define physical fitness as the readiness or ability, especially in cardiovascular, respiratory and musculoskeletal systems, to perform tasks requiring increased energy expenditure such as extrication in a diving emergency. However, no standards for diving exist for the recreational SCUBA diver. Bove recommends that the ability to do 13 METS (metabolic equivalents) of exertion on a treadmill test is an essential consideration for SCUBA diving.3 The Divers Alert Network (DAN) study shows that 7-8 METS of exercise capacity is adequate for non-emergency SCUBA diving activities.4
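For context, MET levels can be converted to oxygen uptake using the standard exercise-physiology definition of 1 MET = 3.5 mL of oxygen per kg of body mass per minute (a textbook convention, not a figure from this article; the 80 kg body mass is a hypothetical example):

```python
def mets_to_vo2_ml_per_min(mets, body_mass_kg):
    """Oxygen uptake in mL/min, using 1 MET = 3.5 mL O2/kg/min."""
    return mets * 3.5 * body_mass_kg

# Bove's 13-MET treadmill criterion, for a hypothetical 80 kg diver:
vo2 = mets_to_vo2_ml_per_min(13, 80)
print(vo2 / 1000, "L of oxygen per minute")  # 3.64 L/min

# DAN's 7-8 MET level for routine diving is a much smaller demand:
assert mets_to_vo2_ml_per_min(7, 80) < 0.6 * vo2
```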
Some associations have their own exercise standards such as the IADRDS (International Association of Dive Rescue Specialists) and their Watermanship Testing standards (Table 3). While being able to meet the highest scoring levels is not practical for most recreational SCUBA divers, the testing does provide physical fitness criteria for the diver to judge himself/herself.
We feel that the diving activity should be paired with the level of physical fitness anticipated to be necessary to safely do it. Pre-dive planning and dive site selection are two essential practical considerations. High versus low energy demands go hand-in-hand with the dive site (Figure 3). Ocean currents, drop and pick-up (i.e. free boat) diving, visibility, water temperature, and number of dives all contribute to the level of exertion required for the dive.
The older aged diver (as well as all other divers) should select activities that are commensurate with his/her levels of fitness, mobility/strength, and anticipated swimming needs for the dive. "Soft" criteria for these decisions can be based on limiting the depth of a dive to the distance that can be easily swum underwater in a swimming pool after a single breath, or equating expected swimming distances on the dive to the distance the diver can comfortably swim in a pool with fins.
In summary, fitness to dive should be based on physiological age and the ability to sustain aerobic activity rather than on chronological age. The older aged diver should assiduously adhere to safe, conservative diving practices. For example, the older aged diver should use the most conservative option on the dive computer, always use slow ascent rates and do the 15-foot/3-minute rest stops, carry safety/signaling equipment, SCUBA dive only under optimal diving conditions, and take a break from diving after several days of continuous, multiple dives per day. With these recommendations, SCUBA diving need only be discontinued when the older aged adult's fitness criteria (including underwater and surface swimming capacity) are no longer met.
While use of a wetsuit may not seem a significant fitness consideration, consider the following scenario. While diving in cool, current-laden, offshore island waters, travel luggage weight restrictions made it necessary to use “production” 7 mm thick neoprene wetsuits provided by the dive operator. The buoyancy from the wetsuit alone required about 35 pounds of weight to become neutral in the salt water. The stiffness of the wetsuit and the difficulty of maintaining the weight belt at waist level made for a less than pleasant dive experience and considerably increased the exertion required to execute the dive. At least one older diver surfaced “ill” from the dive with nausea, vomiting, fatigue, and malaise; after careful evaluation it was ascertained that the symptoms were due to the exertion of the dive.
Comorbidities as a Consideration for SCUBA Diving in Older Aged People
Comorbidities are other medical conditions that co-exist with a primary condition. For our purposes, the primary condition is “older” age which is, as described before, defined by the beholder. There is general agreement as to what medical conditions constitute absolute, relative, and temporary contraindications for SCUBA diving.3,6 Medical comorbidities that impose absolute or temporary limitations for SCUBA diving are generally independent of age such as seizure disorder, decompensated cardiac conditions, neurological conditions that severely interfere with mobility, impaired pulmonary function, and narcotic addiction. The older the diver, the more likely medical contraindications to continue recreational SCUBA diving will develop or already co-exist (Table 4).
The significant questions regarding comorbidities and SCUBA diving (regardless of age) concern those conditions that are relative contraindications to diving. Examples include asthma, impaired but not decompensated cardiac function, diabetes mellitus, kidney disease, blindness, residuals of strokes, paraplegia, Raynaud’s disease, cerebral palsy, extremity amputations, myopathies, cognitive function deficits, residual impairments from a previous episode of decompression illness, and similar serious conditions. With all of these conditions, individuals can safely SCUBA dive with appropriate guidance and dive buddies. Deciding whether or not a “wanna be” diver can safely dive with a relative contraindication requires evaluation and medical clearance by a physician knowledgeable in diving medicine. Often, ancillary testing is required, such as stress electrocardiograms to evaluate impaired cardiac function, pulmonary function tests for lung disease, etc. The decision to dive with a relative contraindication is difficult to make and relates appreciably to the motivation of the individual and his/her support systems for SCUBA diving rather than just the age of the diver.
Mobility and Strength as the Third Consideration for SCUBA Diving in Older Aged People
Mobility and strength in SCUBA diving relates to the ability to move from one place to another and to move joints through a functional range of motion sufficient to don and doff gear, enter and exit the water and to be mobile while in the water. Strength with reference to SCUBA diving is a matter of being able to lift dive equipment, carry the equipment to the entry site, and make safe entries and exits from the water. For open water dives this may require shimmying onto a small craft. Mobility and strength are the most poorly defined of the three age-related considerations for SCUBA diving. Although many of the age-related changes in performance and medical contraindications for diving have mobility and strength ramifications, only in their most extreme manifestations are they a contraindication to SCUBA diving.
Of the three considerations for continuing or discontinuing SCUBA diving, mobility and strength are probably the most important for making age-related decisions to terminate diving. With experience, medications, and control of the diving environment, fitness and comorbidity concerns can be mitigated. Mobility and strength deficits can be somewhat mitigated in the older aged diver with non-diving exercise activities that help maintain joint flexibility and muscle strength, in addition to aerobic exercises to maintain fitness.
The older aged diver will probably self-terminate SCUBA diving when it becomes too hard to do, is no longer worth the effort, or is no longer fun. These reasons are significantly related to impaired mobility and muscle strength.
Older Aged SCUBA Diver Case Scenarios
Fitness Considerations
A 65-year-old male diver maintains an avid interest in SCUBA diving. Typically he dives four to five times a year at exotic dive locations, doing as many as five dives a day. Although not an aerobic exercise fanatic, he maintains good health practices, weight control, and exercises almost daily. He carefully selects dive locations requiring minimal fitness demands, such as reef and lagoon dive sites with warm water and excellent visibility, and uses anchor/buoy lines for ascents and descents whenever possible. He adheres to recommended ascent rates and the 15-foot/3-minute rest stops at all times. He uses a dive computer set to the most conservative mode, does not dive deeper than 70 feet, and never dives to the extent that his computer goes into the yellow zone.
This older aged diver exemplifies one who follows our age-related fitness recommendations. His diving fitness is substantiated by his 50-plus dives per year. His selection of diving sites, and in particular the avoidance of open ocean drift dives and cold water diving, obviates age-related fitness contraindications to diving.
Comorbidity Considerations
A 67-year-old male diver who exercises almost daily with kayaking, bicycling, and/or swimming required coronary artery stenting for angina. Post-stenting he was placed on Plavix® for its prophylactic antiplatelet activity. He resumed his pre-stenting exercise activities with no restrictions, including SCUBA diving. The major concern with SCUBA diving was felt to be bleeding from anticoagulation should a traumatic injury occur. However, he felt that there was no more risk of this occurring with recreational SCUBA diving than with his other exercise activities.
This older aged diver’s comorbidity, i.e., compensated coronary artery disease, is a relative contraindication to SCUBA diving. His superior conditioning makes fitness and mobility/strength considerations essentially “non-concerns” with respect to being an older aged SCUBA diver. Nonetheless, he dives conservatively and limits his dives (but not necessarily depths) to two to three a day, often with a day’s break after three to four days of consecutive diving.
Mobility and Strength Considerations
A 68-year-old male developed increasingly severe degenerative joint disease pain symptoms in his left hip. A cane became necessary for all walking activities. In the water, his swimming ability and conditioning, plus the offloading effect of buoyancy, compensated for the decreased mobility of his hip joint. While in the water he had almost total relief of his hip pain symptoms.
With each successive year of diving activities, it became increasingly difficult to don and doff diving gear, carry the gear when suited up, and make water entries and exits. A beach entry dive was very challenging and required the assistance of two companion divers to enter the water from a sandy beach. After a total hip replacement, the hip pain, strength, and mobility problems were resolved, which greatly facilitated the diving-related activities when not actually in the water.
The need for a total joint arthroplasty (i.e. total hip joint replacement) is a relative contraindication for SCUBA diving. When fitness and comorbidity considerations are absent or suitably managed, a functional total joint replacement is not a contraindication for diving.
SCUBA diving for older aged individuals raises many questions. One is defining what older age is. For the purposes of SCUBA diving, the physiological age is a far more important consideration than the chronological age. Another question is what criteria need to be used when making decisions about whether or not to SCUBA dive. The three important considerations of fitness, comorbidities, and mobility and strength offer criteria for making the decision. When comorbidities present relative contraindications to diving, the decision whether or not to SCUBA dive requires evaluation by, and recommendations from, a physician knowledgeable in diving medicine. Safety becomes the primary concern when making recommendations about diving with relative contraindications. Finally, the decision for when an older aged SCUBA diver (whose fitness, comorbidities, and mobility and strength are not mitigating factors against diving) should stop diving largely rests with the diver. In this situation, the decision to stop SCUBA diving becomes a matter of when it is “no longer fun.”
Reprinted with permission from the publisher
1. Ohio State University Extension. When Does Someone Attain Old Age? http://ohioline.osu.edu/ss-fact/0101.html
2. Boss GR, Seegmiller JE. Age-Related Physiological Changes and Their Clinical Significance. West J Med. 1981;135:434-440.
3. Bove AA. Cardiovascular Disorders and Diving. In: Bove and Davis' Diving Medicine. Elsevier Inc.; 2004:493.
4. Pollock NW, Natoli MJ. Aerobic Fitness of Certified Divers Volunteering for Laboratory Diving Research Studies. Undersea & Hyperbaric Medicine. 2009;Jul/Aug:303-304.
5. I.A.D.R.S. Annual Watermanship Test. http://www.iadrs.org/media/IADRS_Watermanship_Test.pdf
6. Strauss MB, Aksenov IV. Medical Preparation for Diving: Fitness and Nutrition. In: Diving Science. Human Kinetics; 2004:185-205.
About the Author:
Michael B. Strauss, MD, FACS, AAOS, is well known to the readers of Wound Care and Hyperbaric Medicine, having contributed six Featured Articles in recent editions. Among his interests in hyperbaric medicine is understanding the mechanisms of hyperbaric oxygen. At the 1988 UHMS ASM, Dr. Strauss first presented on hyperoxygenation, vasoconstriction, and host factor mechanisms. Since then he has continually refined and updated his hyperbaric oxygen (HBO) mechanisms presentations. In his text MasterMinding Wounds, he discusses the mechanisms that especially pertain to wound healing while differentiating HBO mechanisms as primary and secondary.
Best Publishing Company is the world’s foremost publisher of books on diving and hyperbaric medicine. For more information and resources, please visit the Best Publishing Company website www.BestPub.com. You can reach them at [email protected] or (561) 776-6066.
|
Supply and Demand
Supply and demand are the primary factors that drive market prices up or down, and the stock market is no exception, according to the New York Stock Exchange. If there are more stockholders who want to sell their stock than there are investors willing to buy, the price per share drops, driving the stock market down. Plenty of factors can influence supply and demand, including company performance, positive or negative news about specific companies or industries, world events and political changes.
Down Market Potential
It is possible to make greater returns during a down market than in an up market, for the simple reason that stocks have the potential to move higher from a lower starting point. For example, a $1,000 investment at the stock market's peak in 1929, just prior to the start of the Great Depression, would have been worth only around $170 by the time the market bottomed out in 1932. But if you had held on to your stocks until 1959, around 30 years, your original investment would be worth more than $9,500, for a total annualized return of around 7.8 percent. If you had waited and invested that $1,000 at the bottom of the market in 1932, your total annualized return by 1959 would have been 16 percent, according to the CNN Money website.
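The annualized figures above follow from the standard compound annual growth rate (CAGR) formula. A quick sketch, using the article's rounded dollar amounts, so the results are approximate:

```python
def cagr(start, end, years):
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end / start) ** (1 / years) - 1

# $1,000 invested at the 1929 peak, worth roughly $9,500 by 1959 (30 years)
peak_to_1959 = cagr(1000, 9500, 30)

# The same holdings were worth about $170 at the 1932 bottom; 1932-1959 is 27 years
bottom_to_1959 = cagr(170, 9500, 27)

print(f"{peak_to_1959:.1%}")    # ~7.8%
print(f"{bottom_to_1959:.1%}")  # ~16.1%
```

Starting from the lower 1932 base roughly doubles the annualized return, which is exactly the article's point about down-market potential.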
Buy and Hold
|
Global Economic
Trend Analysis
Tuesday, September 30, 2014 12:35 PM
Reader Mike wonders how interest can ever be repaid in a credit-based economy.
Hi Mish,
I wonder if you would be able to comment on this from Bill Gross in For Wonks Only:
This seems to correlate to reality 100% but the implications are stunning. It means that assets must increase in value at the rate of the original loan plus all interest payments ever made. It also means there will be a very major reversal at some point as there will be a moment when the last loan that someone will actually pay gets written and the system will not be able to expand. I always assumed that debt levels would just reach a very high plateau and stay there but Gross is saying that is not possible.
If the system we have requires the interest to be created every year (in the form of new loans) to survive that seems like the very definition of a ponzi scheme.
Do you know the mechanical reason why the interest payments need to be created by issuing new debt? It is possible, of course, that you disagree with Bill Gross but he probably knows more about how debt works than any man alive so my assumption is that you agree with his viewpoint.
I'm sure you get endless requests for articles but this is such a fundamental question I would be extremely grateful (as I'm sure would many other people) if you are able to write a reply as an article.
Exponential Math
We are in this mess precisely because of fractional reserve lending and never-ending policy of inflation by central banks that do not seem to understand the long-term ramifications of exponential math.
I have covered the exponential math aspect before. For details, please see Money as Communication: A Purposely "Non-Educational" Fallacious Video by the Atlanta Fed.
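The mechanics are ordinary compound growth: if each year's interest is financed by new borrowing rather than paid out of production, total debt compounds geometrically. A minimal sketch (the rate and amounts are illustrative only):

```python
import math

def debt_after(principal, rate, years):
    """Debt when interest is capitalized (rolled into new loans) each year."""
    return principal * (1 + rate) ** years

rate = 0.06  # illustrative interest rate

# At 6%, capitalized debt roughly doubles every 12 years
print(round(debt_after(100, rate, 12), 1))         # 201.2
print(round(math.log(2) / math.log(1 + rate), 1))  # 11.9 -> exact doubling time in years
```

This is why a plateau is unstable in such a system: holding total debt constant requires interest to be paid out of real income rather than out of new credit.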
Credit in a Gold-Backed World
There is nothing wrong with credit expansion used for productive purposes. If we had a 100% gold-backed dollar without FDIC, bad debts would be extinguished automatically.
Interest rates would be low for low-risk ventures and high for high-risk ventures, with lenders (depositors willing to lend money) taking the risk.
On high risk ventures, some projects would lose and some win, as it should be.
Importantly, no money held for safekeeping (checking deposits) would ever be at risk in a 100% gold-backed system. Nor would there be any mathematical need for credit to expand exponentially forever and ever without end.
30-year mortgages might not even exist, but that would not cause any problems.
Deflation (a natural state of affairs because of rising productivity) would provide the price stability central bankers now claim they want.
But that is not the world we live in.
Fiat World Math
Unfortunately, we live in a fiat world, not a 100% gold-backed dollar world. We have fractional reserve lending, and a huge mismatch in duration. Banks borrow short and lend long. It's a recipe for disaster.
Thanks to central bank encouragement and unnaturally low rates for a fiat scheme, credit is out of hand. Loans that have been made cannot possibly be paid back. Unproductive zombie companies survive only because they can roll over debt while expanding it. Covenant-lite debt now accounts for 50% of new debt issuance.
Worse yet, real wages are falling because of central bank inflationary policies in a productivity-driven world increasingly dependent on robots, not human labor.
Minimum wage laws, Obamacare, Congressional fiscal policies, Fed interest rate policies, public unions, and inflationary policies in every phase of government make it likely that companies use robots at a far faster pace than they would otherwise.
Something has to give and it will.
Debtberg Malinvestments and the Zero-Bound Problem
I asked my friend Pater Tenebrarum at the Acting Man blog to chime in on this situation. Pater writes ...
Interest is basically nothing but the discount of future goods vs. present goods. At its root, interest is actually a non-monetary phenomenon. In the modern-day fractionally reserved fiat money system, it has become possible to expand money and credit at immense rates. The reason why the debtberg was able to grow to such immense proportions is that interest rates fell for over 30 years. But now we have arrived at a critical juncture, because interest rates can no longer go any lower. The possibility to refinance existing debt again and again to lower its cost has come to an end.
The size of a debt is immaterial if the debt has been used for productive purposes and is so to speak 'self-liquidating'. Imagine you are a company that borrows $200 million at 3%. If you employ this money to produce goods that have a net profit margin of 6%, the repayment of the debt plus interest poses no problem.
But a lot of debt in the system today is a "dead weight" that will produce nothing. All extant government debt is only a reflection of past spending, and the funds have been 100% consumed. The same obviously holds for consumer debt, but consumers at least have an income based on production (i.e., their work will create value in the future). The government's income relies on the production of others, which is coercively appropriated.
In the realm of corporate debt, which may be considered productive in principle, there is the problem that many of the investments that have been undertaken are really malinvestments, as economic calculation has been falsified by monetary pumping. Debt that has funded capital malinvestment is a dead weight as well, although this may only become obvious at a later stage.
So the situation is now this: debt service will now grow with every new addition to existing debt, as interest rates have arrived at their absolute lows. Given total credit market debt of $60 trillion in the US alone, it will become more and more difficult to actually produce the added value required to service this debt. There is indeed an incentive for many to play a kind of Ponzi scheme that is very similar to the government debt Ponzi. Many companies, especially the junk credits, can only survive by rolling over their debt when it comes due.
Nevertheless, Gross' calculation may be a bit too simplistic. After all, if you are a creditor and get paid interest and principal, money is only moving from A to B. It is still there, only its ownership has changed hands. The problem is that central banks believe in inflating debt away and keeping prices "stable".
In a free market economy, prices would not be stable, they would in fact decline. Thus, interest would be quite low to begin with, and every dollar would be doing more work over time (i.e., could be exchanged for more real wealth/goods/services as time passes).
So we currently have a systemic bias toward more and more debt expansion. Obviously, debt service costs in this system are slated to rise, while an offsetting creation of wealth is no longer guaranteed. You can see this from the fact that more and more new debt is added per dollar of GDP growth. So Gross is quite correct that there is a problem - the problem is the ongoing bubble. Such a bubble does indeed require a constant acceleration in debt and money supply to keep going.
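The observation that "more and more new debt is added per dollar of GDP growth" can be expressed as the marginal productivity of debt: the GDP gained per dollar borrowed. The numbers below are hypothetical, purely to show the shape of the decline:

```python
def marginal_productivity_of_debt(gdp_growth, new_debt):
    """GDP gained per dollar of new credit; a falling ratio means each
    new dollar of debt buys less growth."""
    return gdp_growth / new_debt

# Hypothetical figures: the same $400bn of GDP growth requiring ever more credit
print(marginal_productivity_of_debt(400, 800))   # 0.5 -> $0.50 of GDP per $1 of new debt
print(marginal_productivity_of_debt(400, 2000))  # 0.2 -> later, far more debt per unit of growth
```

A falling ratio is the signature of a credit bubble: debt service grows faster than the wealth available to service it.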
Seems to me that it is a system that is coming ever closer to a cliff.
On the Edge of a Cliff
• Japan is on the edge of a cliff.
• Europe is on the edge of a cliff.
• China is approaching the cliff, if not already on the edge.
• US is approaching the cliff.
No one can be sure when some country is going to fall off that cliff, but exponential finance, Ponzi financing schemes, and zero-bound interest limitations suggest the outcome is sooner rather than later. As I have stated before, a global currency crisis awaits.
Mike "Mish" Shedlock
Copyright 2009 Mike Shedlock. All Rights Reserved.
|
Frogs’ evolution tracks rise of Himalayas and rearrangement of Southeast Asia
Web Editor: Zhang
21 Aug, 2010
From Wikipedia, the free encyclopedia
A view down the Whitechuck Glacier in Glacier Peak Wilderness in 1973
The same view as seen in 2006, after this branch of the glacier retreated 1.9 kilometres (1.2 mi)
Glacier mass balance
This map of mountain glacier mass balance changes since 1970 shows thinning in yellow and red, and thickening in blue.
Global glacial mass balance in the last fifty years, reported to the WGMS and NSIDC. The increasing downward trend in the late 1980s is symptomatic of the increased rate and number of retreating glaciers.
Crucial to the survival of a glacier is its mass balance, the difference between accumulation and ablation (melting and sublimation). (Mote and Kaser) Climate change may cause variations in both temperature and snowfall, causing changes in mass balance. A glacier with a sustained negative balance is out of equilibrium and will retreat. A glacier with sustained positive balance is also out of equilibrium, and will advance to reestablish equilibrium. Currently, there are a few advancing glaciers, although their modest growth rates suggest that they are not far from equilibrium.(Trabant)
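The bookkeeping in the paragraph above is simple: the annual balance is accumulation minus ablation, and the cumulative balance over a run of years determines whether a glacier advances or retreats. A minimal sketch with hypothetical values in metres of water equivalent:

```python
def mass_balance(accumulation, ablation):
    """Annual glacier mass balance in metres water equivalent (m w.e.)."""
    return accumulation - ablation

# Hypothetical (accumulation, ablation) pairs for four successive years
years = [(2.1, 1.8), (2.0, 2.3), (1.9, 2.4), (2.2, 2.5)]

cumulative = 0.0
for accum, abl in years:
    cumulative += mass_balance(accum, abl)

print(round(cumulative, 1))  # -0.8 -> sustained negative balance, so retreat is expected
```

A single positive year (the first) does not offset a sustained deficit; it is the cumulative trend that the WGMS mass balance chart above tracks.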
Glacier retreat results in the loss of the low-elevation region of the glacier. Since higher elevations are cooler, the disappearance of the lowest portion of the glacier reduces overall ablation, thereby increasing mass balance and potentially reestablishing equilibrium. If the mass balance of a significant portion of the accumulation zone of the glacier is negative, it is in disequilibrium with the climate and will melt away without a colder climate and/or an increase in frozen precipitation.
The key symptom of a glacier in disequilibrium is thinning along the entire length of the glacier. This indicates thinning in the accumulation zone. The result is marginal recession of the accumulation zone margin, not just of the terminus. In effect, the glacier no longer has a consistent accumulation zone.(Pelto)(Pelto and Hartzell) For example, Easton Glacier (see below) will likely shrink to half its size, but at a slowing rate of reduction, and stabilize at that size, despite the warmer temperature, over a few decades. However, the Grinnell Glacier (pictured above) will shrink at an increasing rate until it disappears. The difference is that the upper section of Easton Glacier remains healthy and snow covered, while even the upper section of the Grinnell Glacier is bare, is melting and has thinned. Small glaciers with minimal altitude range are most likely to fall into disequilibrium with the climate.
Methods for measuring glacier retreat include staking terminus location, global positioning mapping, aerial mapping, and laser altimetry.
Mid-latitude glaciers
Mid-latitude glaciers are located either between the Tropic of Cancer and the Arctic Circle, or between the Tropic of Capricorn and the Antarctic Circle. These two regions support glacier ice from mountain glaciers, valley glaciers and even smaller icecaps, which are usually located in higher mountainous regions. All of these glaciers are located in mountain ranges, notably the Himalayas; the Alps; the Pyrenees; Rocky Mountains and Pacific Coast Ranges of North America; the Patagonian Andes in South America; and mountain ranges in New Zealand. Glaciers in these latitudes are more widespread and tend to be more massive the closer they are located to the polar regions. These glaciers are the most widely studied over the past 150 years. As is true with the glaciers located in the tropical zone, virtually all the glaciers in the mid-latitudes are in a state of negative mass balance and are retreating.
Eastern hemisphere
This map from the annual Glacier Commission surveys in Italy and Switzerland shows the percentage of advancing glaciers in the Alps. Mid-20th century saw strong retreating trends, but not as extreme as the present; current retreats represent additional reductions of already smaller glaciers.
The World Glacier Monitoring Service reports on changes in the terminus, or lower elevation end, of glaciers from around the world every five years.(WGMS) In their 2000-2005 edition, they noted the terminal point variations of glaciers across the Alps. Over the five-year period from 2000-2005, 115 of 115 glaciers examined in Switzerland retreated, 115 of 115 glaciers in Austria retreated, in Italy during 2005, 50 glaciers were retreating and 3 were stationary, and all 7 glaciers observed in France were in retreat. French glaciers experienced a sharp retreat in the years 1942–53, followed by advances up to 1980, and then further retreat beginning in 1982. As an example, since 1870 the Argentière Glacier and Mont Blanc Glacier have receded by 1,150 m (3,800 ft) and 1,400 m (4,600 ft), respectively. The largest glacier in France, the Mer de Glace, which is 11 km (6.8 mi) long and 400 m (1,300 ft) thick, has lost 8.3% of its length, or 1 km (0.62 mi), in 130 years, and thinned by 27%, or 150 m (490 ft), in the midsection of the glacier since 1907. The Bossons Glacier in Chamonix, France, has retreated 1,200 m (3,900 ft) from extents observed in the early 20th century. In 2005, of 91 Swiss glaciers studied, 84 retreated from where their terminal points had been in 2004 while the remaining 7 showed no change.(MSNBC)
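The Mer de Glace figures quoted above are internally consistent, as a quick arithmetic check shows (a 1 km loss on an originally ~12 km glacier over 130 years):

```python
original_km = 12.0  # approximate pre-retreat length implied by the quoted 8.3%
lost_km = 1.0
years = 130

print(round(lost_km / original_km * 100, 1))  # 8.3 -> percent of length lost
print(round(lost_km * 1000 / years, 1))       # 7.7 -> mean retreat in metres per year
```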
Other researchers have found that glaciers across the Alps appear to be retreating at a faster rate than a few decades ago. In 2008, the Swiss Glacier survey of 85 glaciers found 78 retreating, 2 stationary and 5 advancing. The Trift Glacier had retreated over 500 m (1,600 ft) in just the three years from 2003 to 2005, which is 10% of its total length. The Grosser Aletsch Glacier, the largest glacier in Switzerland, has retreated 2,600 m (8,500 ft) since 1880. This rate of retreat has also increased since 1980, with 30%, or 800 m (2,600 ft), of the total retreat occurring in the last 20% of the time period.(SFIoTZ) Similarly, of the glaciers in the Italian Alps, only about a third were in retreat in 1980, while by 1999, 89% of these glaciers were retreating. In 2005, the Italian Glacier Commission found that 123 glaciers were retreating, 1 advancing and 6 stationary.(IGC) Repeat photography of glaciers in the Alps provides clear evidence that glaciers in this region have retreated significantly in the past several decades.(Alean) Morteratsch Glacier, Switzerland is one key example. The yearly measurements of length changes started in 1878. The overall retreat from 1878 to 1998 has been 2 km (1.2 mi), a mean annual retreat rate of approximately 17 m (56 ft) per year. This long-term average was markedly surpassed in recent years, with the glacier receding 30 m (98 ft) per year during the period 1999–2005.(SFIoTZ)

One major concern, which has in the past had great impact on lives and property, is death and destruction from a Glacial Lake Outburst Flood (GLOF). Glaciers stockpile rock and soil that has been carved from mountainsides at their terminal end. These debris piles often form dams that impound water behind them and form glacial lakes as the glaciers melt and retreat from their maximum extents. These terminal moraines are frequently unstable and have been known to burst if overfilled or displaced by earthquakes, landslides or avalanches.
If a glacier has a rapid melting cycle during warmer months, the terminal moraine may not be strong enough to hold the rising water behind it, leading to a massive localized flood. This is an increasing risk due to the creation and expansion of glacial lakes resulting from glacier retreat. Past floods have been deadly and have resulted in enormous property damage. Towns and villages in steep, narrow valleys that are downstream from glacial lakes are at the greatest risk. In 1892 a GLOF released some 200,000 cubic metres (260,000 cu yd) of water from the lake of the Glacier de Tête Rousse, resulting in the deaths of 200 people in the French town of Saint Gervais.(Pelto5) GLOFs have been known to occur in every region of the world where glaciers are located. Continued glacier retreat is expected to create and expand glacial lakes, increasing the danger of future GLOFs.
Though the glaciers of the Alps have received more attention from glaciologists than those in other areas of Europe, research indicates that throughout most of Europe, glaciers are rapidly retreating. In the Kebnekaise Mountains of northern Sweden, a study of 16 glaciers between 1990 and 2001 found that 14 glaciers were retreating, one was advancing and one was stable.(GSU) During the 20th century, glaciers in Norway retreated overall, with brief periods of advance around 1910, 1925 and in the 1990s. In the 1990s, 11 of 25 Norwegian glaciers observed had advanced due to several consecutive winters with above normal precipitation. However, following several consecutive years of little winter precipitation since 2000, and record warmth during the summers of 2002 and 2003, Norwegian glaciers have decreased significantly since the 1990s. By 2005 only 1 of the 25 glaciers monitored in Norway was advancing, two were stationary and 22 were retreating. In 2009, 18 glaciers retreated, three were stationary (less than 2 meters of change) and two advanced. In 2006 glacier mass balances were very negative in Norway, and of the 26 glaciers examined, 24 were retreating, one was stationary and one advancing.(Elverhoy) The Norwegian Engabreen Glacier has retreated 185 m (610 ft) since 1999, while the Brenndalsbreen and Rembesdalsskåka glaciers have retreated 276 m (910 ft) and 250 m (820 ft), respectively, since 2000. The Briksdalsbreen glacier retreated 96 m (310 ft) in 2004 alone, the largest annual retreat recorded for this glacier since monitoring began in 1900. This figure was exceeded in 2006, with five glaciers retreating over 100 m (330 ft) from the fall of 2005 to the fall of 2006. Four outlets from the Jostedalsbreen ice cap (Kjenndalsbreen, Brenndalsbreen, Briksdalsbreen and Bergsetbreen) had a frontal retreat of more than 100 metres. Gråfjellsbrea, an outlet from Folgefonna, had a retreat of almost 100 m (330 ft).
Overall, from 1999 to 2005, Briksdalsbreen retreated 336 metres (1,100 ft).(Elverhoy).
In the Spanish Pyrenees, recent studies have shown important losses in extent and volume of the glaciers of the Maladeta massif during the period 1981-2005. These include a reduction in area of 35.7%, from 2.41 km2 (600 acres) to 0.627 km2 (155 acres), a loss in total ice volume of 0.0137 km3 (0.0033 cu mi) and an increase in the mean altitude of the glacial termini of 43.5 m (143 ft).(Chueca et al.) For the Pyrenees as a whole, 50-60% of the glaciated area has been lost since 1991. At least three glaciers (Balaitus, Perdiguero and La Munia) have disappeared in this period. The Perdido Glacier has shrunk from 90 hectares to 40 hectares.(Serrano and Martinez)
Siberia and the Russian Far East, although typically classified as polar regions, have glaciers only in the high Altai Mountains, Verkhoyansk Range and Cherskiy Range, owing to the dryness of the winter climate. Kamchatka, exposed to moisture from the Sea of Okhotsk, has much more extensive glaciation, totaling around 2,500 square kilometres (970 square miles).
Because the collapse of Communism caused a large reduction in the number of monitoring stations[1], details on the retreat of Siberian glaciers are much poorer than in most other regions of the world. Nonetheless, available records do indicate a general retreat of all glaciers in the Altai Mountains and (with the exception of volcanic glaciers) in Kamchatka. Sakha's glaciers, totaling seventy square kilometers, have shrunk by around 28 percent since 1945[2], whilst in moister regions of Siberia and on the Pacific coast, the shrinkage is considerably larger[3], reaching several percent annually in some places.
This NASA image shows the formation of numerous glacial lakes at the termini of receding glaciers in the Bhutan Himalaya.
These glaciers in New Zealand have continued to retreat rapidly in recent years. Notice the larger terminal lakes, the retreat of the white ice (ice free of moraine cover), and the higher moraine walls due to ice thinning.
In New Zealand the mountain glaciers have been in general retreat since 1890, with an acceleration of this retreat since 1920. Most of the glaciers have thinned measurably and have reduced in size, and the snow accumulation zones have risen in elevation as the 20th century progressed. During the period 1971–75, Ivory Glacier receded 30 m (98 ft) from the glacial terminus, and about 26% of the surface area of the glacier was lost over the same period. Since 1980 numerous small glacial lakes were created behind the new terminal moraines of several of these glaciers. Glaciers such as Classen, Godley and Douglas now all have new glacial lakes below their terminal locations due to the glacial retreat over the past 20 years. Satellite imagery indicates that these lakes are continuing to expand. There has been significant and ongoing ice volume losses on the largest New Zealand glaciers, including the Tasman, Ivory, Classen, Mueller, Maud, Hooker, Grey, Godley, Ramsay, Murchison, Therma, Volta and Douglas Glaciers. The retreat of these glaciers has been marked by expanding proglacial lakes and terminus region thinning. The loss in volume from 1975-2005 is 11% of the total.(Salinger)
Several glaciers, notably the much visited Fox and Franz Josef Glaciers on New Zealand's West Coast, have periodically advanced, especially during the 1990s, but the scale of these advances is small when compared to 20th-century retreat. Both glaciers are currently more than 2.5 km (1.6 mi) shorter than a century ago. These large, rapidly flowing glaciers situated on steep slopes have been very reactive to small mass-balance changes. A few years of conditions favorable to glacier advance, such as more westerly winds and a resulting increase in snowfall, are rapidly echoed in a corresponding advance, followed by equally rapid retreat when those favorable conditions end.(USGS3) The glaciers that have been advancing in a few locations in New Zealand have been doing so due to a temporary weather change associated with El Niño, which has brought more precipitation and cloudier, cooler summers since 2002.(Goodenough)
Western hemisphere
The Lewis Glacier, North Cascades National Park, after melting away in 1990
North American glaciers are primarily located along the spine of the Rocky Mountains in the United States and Canada, and the Pacific Coast Ranges extending from northern California to Alaska. While Greenland is geologically associated with North America, it is also a part of the Arctic region. Apart from the few tidewater glaciers such as Taku Glacier, that are in the advance stage of their tidewater glacier cycle prevalent along the coast of Alaska, virtually all the glaciers of North America are in a state of retreat. The observed retreat rate has increased rapidly since approximately 1980, and overall each decade since has seen greater rates of retreat than the preceding one. There are also small remnant glaciers scattered throughout the Sierra Nevada mountains of California and Nevada.
The Cascade Range of western North America extends from southern British Columbia in Canada to northern California. Excepting Alaska, about half of the glacial area in the U.S. is contained in the more than 700 glaciers of the North Cascades, a portion of the range between the Canadian border and I-90 in central Washington. These glaciers store as much water as that contained in all the lakes and reservoirs in the rest of the state, and provide much of the stream and river flow in the dry summer months, approximating some 870,000 m3 (1,140,000 cu yd).
The Boulder Glacier retreated 450 m (1,500 ft) from 1987 to 2005.
The Easton Glacier retreated 255 m (840 ft) from 1990 to 2005.
As recently as 1975, many North Cascade glaciers were advancing due to cooler weather and increased precipitation that occurred from 1944 to 1976. However, by 1987 all the North Cascade glaciers were retreating, and the pace of the glacier retreat has increased each decade since the mid-1970s. Between 1984 and 2005, the North Cascade glaciers lost an average of more than 12.5 m in thickness and between 20% and 40% of their volume.(Pelto)
Glaciologists researching the North Cascades glaciers have found that all 47 monitored glaciers are receding and that four glaciers—Spider Glacier, Lewis Glacier (pictured), Milk Lake Glacier, and David Glacier—have disappeared completely since 1985. The White Chuck Glacier (near Glacier Peak) is a particularly dramatic example. The glacier area shrank from 3.1 km2 (1.2 sq mi) in 1958 to 0.9 km2 (0.35 sq mi) by 2002. Between 1850 and 1950, the Boulder Glacier on the southeast flank of Mount Baker retreated 8,700 feet (2,650 m). William Long of the United States Forest Service observed the glacier beginning to advance due to cooler/wetter weather in 1953. This was followed by a 2,438-foot (743 m) advance by 1979.(Pelto3) The glacier again retreated 450 m (1,500 ft) from 1987 to 2005, leaving barren terrain behind. This retreat has occurred during a period of reduced winter snowfall and higher summer temperatures. In this region of the Cascades, winter snowpack has declined 25% since 1946, and summer temperatures have risen 0.7 °C (1.2 °F) during the same period. The reduced snowpack has occurred despite a small increase in winter precipitation; thus, it reflects warmer winter temperatures leading to rainfall and melting on glaciers even during the winter. As of 2005, 67% of the North Cascade glaciers observed are in disequilibrium and will not survive the continuation of the present climate. These glaciers will eventually disappear unless temperatures fall and frozen precipitation increases. The remaining glaciers are expected to stabilize, unless the climate continues to warm, but will be much reduced in size.(Pelto3)(Pelto4)
US Rocky Mountains
On the sheltered slopes of the highest peaks of Glacier National Park in Montana, its eponymous glaciers are diminishing rapidly. The area of each glacier has been mapped by the National Park Service and the U.S. Geological Survey for decades. Comparing photographs taken in the mid-19th century with contemporary images provides ample evidence that the glaciers in the park have retreated notably since 1850. Repeat photography over the decades since clearly show that glaciers throughout the park such as Grinnell Glacier are all retreating. The larger glaciers are now approximately a third of their former size when first studied in 1850, and numerous smaller glaciers have disappeared completely. Only 27% of the 99 km2 (38 sq mi) area of Glacier National Park covered by glaciers in 1850 remained covered by 1993. (USGS) Researchers believe that by the year 2030, the vast majority of glacial ice in Glacier National Park will be gone unless current climate patterns reverse their course.(USGS5) Grinnell Glacier is just one of many glaciers in Glacier National Park that have been well documented by photographs for many decades. The photographs below clearly demonstrate the retreat of this glacier since 1938.
The semiarid climate of Wyoming still manages to support about a dozen small glaciers within Grand Teton National Park, all of which show evidence of retreat over the past 50 years. Schoolroom Glacier, located slightly southwest of Grand Teton and one of the more easily reached glaciers in the park, is expected to disappear by 2025. Research between 1950 and 1999 demonstrated that the glaciers in Bridger-Teton National Forest and Shoshone National Forest in the Wind River Range shrank by over a third of their size during that period. Photographs indicate that the glaciers today are only half the size they were when first photographed in the late 1890s. Research also indicates that the glacial retreat was proportionately greater in the 1990s than in any other decade of the last 100 years. Gannett Glacier, on the northeast slope of Gannett Peak, is the largest single glacier in the Rocky Mountains south of Canada. It has reportedly lost over 50% of its volume since 1920, with almost half of that loss occurring since 1980. Glaciologists believe the remaining glaciers in Wyoming will disappear by the middle of the 21st century if the current climate patterns continue.(WWRDSL)
Canadian Rockies and British Columbia Coast Range
Fast-melting toe of the Athabasca Glacier, 2005
The Athabasca Glacier in the Columbia Icefield of the Canadian Rockies has retreated 1,500 m in the last century.
Valdez Glacier has thinned 90 m (300 ft) over the last century, and the barren ground near the glacial margins has been exposed as the glacier thinned and retreated over the last two decades of the 20th century.(Pelto5)
In the Canadian Rockies, the glaciers are generally larger and more widespread than they are to the south in the United States Rocky Mountains. One of the more accessible glaciers in the Canadian Rockies is the Athabasca Glacier, which is an outlet glacier of the 325 km2 (125 sq mi) Columbia Icefield. The Athabasca Glacier has retreated 1,500 m (4,900 ft) since the late 19th century. The rate of retreat for this glacier has increased since 1980, following a period of slow retreat from 1950 to 1980. The Peyto Glacier in Alberta covers an area of about 12 km2 (4.6 sq mi), and retreated rapidly during the first half of the 20th century, stabilized by 1966, and resumed shrinking in 1976.(CCIN) Illecillewaet Glacier in British Columbia's Glacier National Park (Canada) has retreated 2 km (1.2 mi) since first photographed in 1887.
In Garibaldi Provincial Park in southwestern British Columbia, over 505 km2, or 26%, of the park was covered by glacier ice at the beginning of the 18th century. Ice cover decreased to 297 km2 by 1987–1988 and to 245 km2 by 2005, 50% of the 1850 area. The 50 km2 loss in the last 20 years coincides with negative mass balance in the region. During this period all nine glaciers examined have retreated significantly.(Koch)
There are thousands of glaciers in Alaska, though only a relative few of them have been named. The Columbia Glacier near Valdez in Prince William Sound has retreated 15 km (9.3 mi) in the last 25 years. Icebergs calved off this glacier were a partial cause of the Exxon Valdez oil spill, as the oil tanker had changed course to avoid the icebergs. The Valdez Glacier is in the same area, and though it does not calve, it has also retreated significantly. "A 2005 aerial survey of Alaskan coastal glaciers identified more than a dozen glaciers, many former tidewater and calving glaciers, including Grand Plateau, Alsek, Bear, and Excelsior Glaciers that are rapidly retreating. Of 2,000 glaciers observed, 99% are retreating." (Molnia2) Icy Bay in Alaska is fed by three large glaciers—Guyot, Yahtse, and Tyndall Glaciers—all of which have experienced a loss in length and thickness and, consequently, a loss in area. Tyndall Glacier became separated from the retreating Guyot Glacier in the 1960s and has retreated 24 km (15 mi) since, averaging more than 500 m (1,600 ft) per year.(Molnia)
The Juneau Icefield Research Program has monitored the outlet glaciers of the Juneau Icefield since 1946. On the west side of the ice field, the terminus of the Mendenhall Glacier, which flows into suburban Juneau, Alaska, has retreated 580 m (1,900 ft). Of the nineteen glaciers of the Juneau Icefield, eighteen are retreating, and one, the Taku Glacier, is advancing. Eleven of the glaciers have retreated more than 1 km (0.62 mi) since 1948 — Antler Glacier, 5.4 km (3.4 mi); Gilkey Glacier, 3.5 km (2.2 mi); Norris Glacier, 1.1 km (0.68 mi) and Lemon Creek Glacier, 1.5 km (0.93 mi).(Pelto6) Taku Glacier has been advancing since at least 1890, when naturalist John Muir observed a large iceberg calving front. By 1948 the adjacent fjord had filled in, and the glacier no longer calved and was able to continue its advance. By 2005 the glacier was only 1.5 km (0.93 mi) from reaching Taku Point and blocking Taku Inlet. The advance of Taku Glacier averaged 17 m (56 ft) per year between 1988 and 2005. The mass balance was very positive for the 1946–88 period fueling the advance; however, since 1988 the mass balance has been slightly negative, which should in the future slow the advance of this mighty glacier.(Pelto and Miller)
Long-term mass balance records from Lemon Creek Glacier in Alaska show a slightly declining mass balance with time.(Miller and Pelto) The mean annual balance for this glacier was −0.23 m (0.75 ft) per year during the period 1957 to 1976. Mean annual balance has since become increasingly negative, averaging −1.04 m (3.4 ft) per year from 1990 to 2005. Repeat glacier altimetry, or altitude measuring, for 67 Alaska glaciers finds that rates of thinning have increased by more than a factor of two when comparing the periods from 1950 to 1995 (0.7 m (2.3 ft) per year) and 1995 to 2001 (1.8 m (5.9 ft) per year).(Arendt, et alia) This is a systemic trend with loss in mass equating to loss in thickness, which leads to increasing retreat—the glaciers are not only retreating, but they are also becoming much thinner. In Denali National Park, all glaciers monitored are retreating, with an average retreat of 20 m (66 ft) per year. The terminus of the Toklat Glacier has been retreating 26 m (85 ft) per year and the Muldrow Glacier has thinned 20 m (66 ft) since 1979.(Adema) Well documented in Alaska are surging glaciers that have been known to rapidly advance, even as much as 100 m (330 ft) per day. Variegated, Black Rapids, Muldrow, Susitna and Yanert are examples of surging glaciers in Alaska that have made rapid advances in the past. These glaciers are all retreating overall, punctuated by short periods of advance.
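As a quick illustration of what the Lemon Creek balance figures imply, the cumulative thickness change over each interval, and the factor by which Alaska-wide thinning rates increased, can be computed directly from the numbers quoted above (a sketch; the function name is mine, not from any cited study):

```python
def cumulative_loss(rate_m_per_yr, start_yr, end_yr):
    """Cumulative thickness change (m) implied by a constant annual rate."""
    return rate_m_per_yr * (end_yr - start_yr)

# Lemon Creek Glacier mean annual balances quoted above (m per year)
early = cumulative_loss(-0.23, 1957, 1976)   # about -4.4 m over 19 years
late = cumulative_loss(-1.04, 1990, 2005)    # about -15.6 m over 15 years

# Repeat-altimetry thinning rates for 67 Alaska glaciers:
# 0.7 m/yr for 1950-1995 versus 1.8 m/yr for 1995-2001
factor = 1.8 / 0.7

print(round(early, 1), round(late, 1), round(factor, 1))  # -4.4 -15.6 2.6
```

The ratio of roughly 2.6 is consistent with the text's statement that thinning rates "increased by more than a factor of two" between the two survey periods.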
Andes and Tierra del Fuego
A large population surrounding the central and southern Andes of Argentina and Chile resides in arid areas that depend on water supplies from melting glaciers. The water from the glaciers also supplies rivers that have in some cases been dammed for hydroelectric power. Some researchers believe that by 2030, many of the large ice caps on the highest Andes will be gone if current climate trends continue. In Patagonia on the southern tip of the continent, the large ice caps have retreated 1 km (0.62 mi) since the early 1990s and 10 km (6.2 mi) since the late 1800s. It has also been observed that Patagonian glaciers are receding at a faster rate than in any other world region.(BBC2) The Northern Patagonian Ice Field lost 93 km2 (36 sq mi) of glacier area between 1945 and 1975, and 174 km2 (67 sq mi) from 1975 to 1996, which indicates that the rate of retreat is increasing. This represents a loss of 8% of the ice field, with all glaciers experiencing significant retreat. The Southern Patagonian Ice Field exhibited a general trend of retreat on 42 glaciers, while four glaciers were in equilibrium and two advanced, during the years between 1944 and 1986. The largest retreat was on O'Higgins Glacier, which during the period 1896–1995 retreated 14.6 km (9.1 mi). The Perito Moreno Glacier is 30 km (19 mi) long and is a major outflow glacier of the Patagonian ice sheet, as well as the most visited glacier in Patagonia. Perito Moreno Glacier is presently in equilibrium, but has undergone frequent oscillations in the period 1947–96, with a net gain of 4.1 km (2.5 mi). This glacier has advanced since 1947, and has been essentially stable since 1992. Perito Moreno Glacier is one of three glaciers in Patagonia known to have advanced, compared to several hundred others in retreat.(Skvarca and Naruse)(Cassasa)
The two major glaciers of the Southern Patagonian Ice Field to the north of Moreno, the Upsala and Viedma Glaciers, have retreated 4.6 km (2.9 mi) in 21 years and 1 km (0.62 mi) in 13 years, respectively.(EORC) In the Aconcagua River Basin, glacier retreat has resulted in a 20% loss of glacier area, declining from 151 km2 (58 sq mi) to 121 km2 (47 sq mi).(Brown, Rivera and Acuna) The Marinelli Glacier in Tierra del Fuego has been in retreat from at least 1960 through 2008.
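The claim above that the rate of retreat of the Northern Patagonian Ice Field is increasing follows directly from the quoted area losses: dividing each loss by the length of its interval gives the average annual rate (an illustrative back-of-the-envelope check; the helper name is mine, not from any cited source):

```python
def annual_rate(area_lost_km2, start_yr, end_yr):
    """Average glacier area lost per year over an interval (km2/yr)."""
    return area_lost_km2 / (end_yr - start_yr)

early = annual_rate(93, 1945, 1975)    # 93 km2 over 30 years ≈ 3.1 km2/yr
late = annual_rate(174, 1975, 1996)    # 174 km2 over 21 years ≈ 8.3 km2/yr

print(round(early, 1), round(late, 1))  # 3.1 8.3
```

The later interval's loss rate is more than two and a half times the earlier one, which is why the text describes the retreat as accelerating.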
Tropical glaciers
Tropical glaciers are located between the Tropic of Cancer and the Tropic of Capricorn, in the region that lies 23° 26′ 22″ north or south of the equator. Tropical glaciers are the most uncommon of all glaciers for a variety of reasons. Firstly, the tropics are the warmest part of the planet. Secondly, seasonal change is minimal, with temperatures warm year round, so there is no colder winter season in which snow and ice can accumulate. Thirdly, few mountains in these regions rise high enough to reach the cold air needed for glaciers to form. All of the glaciers located in the tropics are on isolated high mountain peaks. Overall, tropical glaciers are smaller than those found elsewhere and are the most likely glaciers to show rapid response to changing climate patterns. A small temperature increase of only a few degrees can have an almost immediate and adverse impact on tropical glaciers.(Jankowski)
Furtwängler Glacier atop Kilimanjaro in the foreground and snowfields and the Northern Icefields beyond.
With almost the entire continent of Africa located in the tropical and subtropical climate zones, glaciers are restricted to two isolated peaks and the Ruwenzori Range. Kilimanjaro, at 5,895 m (19,340 ft), is the highest peak on the continent. Since 1912 the glacier cover on the summit of Kilimanjaro has apparently retreated 75%, and the volume of glacial ice is now 80% less than it was a century ago due to both retreat and thinning.(Thompson) In the 14-year period from 1984 to 1998, one section of the glacier atop the mountain receded 300 m (980 ft).(Wielochowski) A 2002 study determined that if current conditions continue, the glaciers atop Kilimanjaro will disappear sometime between 2015 and 2020.(Thompson, et alia)(OSU) A March 2005 report indicated that there is almost no remaining glacial ice on the mountain, and it is the first time in 11,000 years that barren ground has been exposed on portions of the summit.(Guardian)(Tyson) Researchers reported that Kilimanjaro's glacier retreat was due to a combination of increased sublimation and decreased snowfall.(Mote and Kaser)
The Furtwängler Glacier is located near the summit of Kilimanjaro. Between 1976 and 2000, the area of Furtwängler Glacier was cut almost in half, from 113,000 m2 (1,220,000 sq ft) to 60,000 m2 (650,000 sq ft).(Thompson2) During fieldwork conducted early in 2006, scientists discovered a large hole near the center of the glacier. This hole, extending through the 6 m (20 ft) remaining thickness of the glacier to the underlying rock, is expected to grow and split the glacier in two by 2007.(Thompson)
To the north of Kilimanjaro lies Mount Kenya, which at 5,199 m (17,060 ft) is the second tallest mountain on the African continent. Mount Kenya has a number of small glaciers that have lost at least 45% of their mass since the middle of the 20th century. According to research compiled by the U.S. Geological Survey (USGS), there were eighteen glaciers atop Mount Kenya in 1900, and by 1986 only eleven remained. The total area covered by glaciers was 1.6 km2 (0.62 sq mi) in 1900, however by the year 2000 only about 25%, or 0.4 km2 (0.15 sq mi) remained.(USGS2) To the west of Mounts Kilimanjaro and Kenya, the Ruwenzori Range rises to 5,109 m (16,760 ft). Photographic evidence of this mountain range indicates a marked reduction in glacially covered areas over the past century. In the 35-year period between 1955 and 1990, glaciers on the Ruwenzori Range receded about 40%. It is expected that due to their proximity to the heavy moisture of the Congo region, the glaciers in the Ruwenzori Range may recede at a slower rate than those on Kilimanjaro or in Kenya.(Wielochowski2)
South America
A study by glaciologists of two small glaciers in South America reveals another retreat. More than 80% of all glacial ice in the northern Andes is concentrated on the highest peaks in small glaciers of approximately 1 km2 (0.39 sq mi) in size. Observations of the Chacaltaya Glacier in Bolivia and the Antizana Glacier in Ecuador from 1992 to 1998 indicated that between 0.6 m (2.0 ft) and 1.9 m (6.2 ft) of ice was lost per year on each glacier. Figures for Chacaltaya Glacier show a loss of 67% of its volume and 40% of its thickness over the same period. Chacaltaya Glacier has lost 90% of its mass since 1940 and is expected to disappear altogether sometime between 2010 and 2015. Research also indicates that since the mid-1980s, the rate of retreat for both of these glaciers has been increasing.(Francou) In Colombia, the glaciers atop Nevado del Ruiz have lost more than half their area in the last 40 years.(Huggel) Further south in Peru, the Andes are at a higher altitude overall, and there are approximately 722 glaciers covering an area of 723 km2 (279 sq mi). Research in this region of the Andes is less extensive but indicates an overall glacial retreat of 7% between 1977 and 1983.(USGS4) The Quelccaya Ice Cap is the largest tropical ice cap in the world, and all of the outlet glaciers from the ice cap are retreating. In the case of Qori Kalis Glacier, Quelccaya's main outlet glacier, the rate of retreat reached 155 m (510 ft) per year during the three-year period from 1995 to 1998. The melting ice has formed a large lake at the front of the glacier since 1983, and bare ground has been exposed for the first time in thousands of years.(Byrd)
Puncak Jaya icecap 1936 USGS
Puncak Jaya glaciers 1972. Left to right: Northwall Firn, Meren Glacier, and Carstensz Glacier. USGS.
On the large island of New Guinea, there is photographic evidence of massive glacial retreat since the region was first extensively explored by airplane in the early 1930s. Due to the location of the island within the tropical zone, there is little to no seasonal variation in temperature. The tropical location has a predictably steady level of rain and snowfall, as well as cloud cover year round, and there has been no noticeable change in the amount of moisture which has fallen during the 20th century. The 7 km2 (2.7 sq mi) ice cap on Puncak Jaya is the largest on the island, and has retreated from one larger mass into several smaller glacial bodies since 1936. Of these smaller glaciers, research between 1973 and 1976 showed glacier retreat for the Meren Glacier of 200 m (660 ft) while the Carstensz Glacier lost 50 m (160 ft). The Northwall Firn, another large remnant of the icecap that once was atop Puncak Jaya, has itself split into several separate glaciers since 1936. Research presented in 2004 of IKONOS satellite imagery of the New Guinean glaciers provided a dramatic update. The imagery indicated that in the two years from 2000 to 2002, the East Northwall Firn had lost 4.5%, the West Northwall Firn 19.4% and the Carstensz 6.8% of their glacial mass. Researchers also discovered that, sometime between 1994 and 2000, the Meren Glacier disappeared altogether.(Kincaid and Klein) Separate from the glaciers of Puncak Jaya, another small icecap known to have existed on the summit of Puncak Trikora completely disappeared sometime between 1939 and 1962.(Allison and Peterson)
Polar regions
Despite their proximity and importance to human populations, the mountain and valley glaciers of tropical and mid-latitude regions amount to only a small fraction of glacial ice on the Earth. About 99% of all freshwater ice is in the great ice sheets of polar and subpolar Antarctica and Greenland. These continuous continental-scale ice sheets, 3 km (1.9 mi) or more in thickness, cap much of the polar and subpolar land masses. Like rivers flowing from an enormous lake, numerous outlet glaciers transport ice from the margins of the ice sheet to the ocean.
The northern Atlantic island nation of Iceland is home to the Vatnajökull, which is the largest ice cap in Europe. The Breiðamerkurjökull Glacier is one of the Vatnajökull outlet glaciers, and had receded by as much as 2 km (1.2 mi) between 1973 and 2004. In the early 20th century, Breiðamerkurjökull extended to within 250 m (820 ft) of the ocean, but by 2004 Breiðamerkurjökull's terminus had retreated 3 km (1.9 mi) further inland. This glacier retreat exposed a rapidly expanding lagoon that is filled with icebergs calved from its front. The lagoon is 110 m (360 ft) deep and nearly doubled its size between 1994 and 2004. Mass-balance measurements of Iceland's glaciers show alternating positive and negative mass balance of glaciers during the period 1987–95, but the mass balance has been predominantly negative since. On Hofsjokull ice cap, mass balance has been negative each year from 1995-2005.
Most of the Icelandic glaciers retreated rapidly during the warm decades from 1930 to 1960, slowing down as the climate cooled during the following decade, and started to advance after 1970. The rate of advance peaked in the 1980s, after which it slowed down as a consequence of rapid warming of the climate that has taken place since the mid-1980s. Most glaciers in Iceland began to retreat after 1990, and by 2000 all monitored non-surge type glaciers in Iceland were retreating. An average of 45 non-surging termini were monitored each year by the Icelandic Glaciological Society from 2000-2005.(Sigurdsson)
Bylot Ice Cap on Bylot Island, one of the Canadian Arctic islands, August 14, 1975 (USGS)
The Canadian Arctic islands have a number of substantial ice caps, including the Penny and Barnes Ice Caps on Baffin Island, the Bylot Ice Cap on Bylot Island, and the Devon Ice Cap on Devon Island. All of these ice caps have been thinning and receding slowly. The Barnes and Penny ice caps on Baffin Island have been thinning at over 1 m (3.3 ft) per year in the lower elevations from 1995 to 2000. Overall, between 1995 and 2000, ice caps in the Canadian Arctic lost 25 km2 (9.7 sq mi) of ice per year.(Abdalati) Between 1960 and 1999, the Devon Ice Cap lost 67 km3 (16 cu mi) of ice, mainly through thinning. All major outlet glaciers along the eastern Devon Ice Cap margin have retreated from 1 km (0.62 mi) to 3 km (1.9 mi) since 1960.(Burgess and Sharpa) On the Hazen Plateau of Ellesmere Island, the Simmon Ice Cap has lost 47% of its area since 1959.(Braun, et alia) If the current climatic conditions continue, the remaining glacial ice on the Hazen Plateau will be gone around 2050. On August 13, 2005, the Ayles Ice Shelf broke free from the north coast of Ellesmere Island, and the 66 km2 (25 sq mi) ice shelf drifted into the Arctic Ocean.(National Geographic) This followed the splitting of the Ward Hunt Ice Shelf in 2002. The Ward Hunt has lost 90% of its area in the last century.(Mueller, Vincent and Jeffries)
Northern Europe
Arctic islands north of Norway, Finland and Russia have all shown evidence of glacier retreat. In the Svalbard archipelago, the island of Spitsbergen has numerous glaciers. Research indicates that Hansbreen (Hans Glacier) on Spitsbergen retreated 1.4 km (0.87 mi) from 1936 to 1982 and another 400 m (1,300 ft) during the 16-year period from 1982 to 1998.(Glowacki) Blomstrandbreen, a glacier in the King's Bay area of Spitsbergen, has retreated approximately 2 km (1.2 mi) in the past 80 years. Since 1960 the average retreat of Blomstrandbreen has been about 35 m (110 ft) a year, and this average was enhanced due to an accelerated rate of retreat since 1995.(Greenpeace) Similarly, Midre Lovenbreen retreated 200 m (656 ft) between 1977 and 1995.(Rippin, et alia) In the Novaya Zemlya archipelago north of Russia, research indicates that in 1952 there was 208 km (129 mi) of glacier ice along the coast. By 1993 this had been reduced by 8% to 198 km (123 mi) of glacier coastline.(Aleksey)
Retreat of the Helheim Glacier, Greenland
In Greenland, glacier retreat has been observed in outlet glaciers, resulting in an increase of the ice flow rate and destabilization of the mass balance of the ice sheet that is their source. The net loss in volume, and hence sea level contribution, of the Greenland Ice Sheet (GIS) has doubled in recent years from 90 km3 (22 cu mi) to 220 km3 (53 cu mi) per year.(Rignot) Researchers also noted that the acceleration was widespread, affecting almost all glaciers south of 70°N by 2005. The period since 2000 has brought retreat to several very large glaciers that had long been stable. Three glaciers that have been researched—Helheim Glacier, Kangerdlugssuaq Glacier, and Jakobshavn Isbræ—jointly drain more than 16% of the Greenland Ice Sheet. In the case of Helheim Glacier, researchers used satellite images to determine the movement and retreat of the glacier. Satellite images and aerial photographs from the 1950s and 1970s show that the front of the glacier had remained in the same place for decades. In 2001 the glacier began retreating rapidly, and by 2005 it had retreated a total of 7.2 km (4.5 mi), accelerating from 20 m (66 ft) per day to 35 m (110 ft) per day during that period.(Howat)
Jakobshavn Isbræ in west Greenland, a major outlet glacier of the Greenland Ice Sheet, has been the fastest moving glacier in the world over the past half century. It had been moving continuously at speeds of over 24 m (79 ft) per day with a stable terminus since at least 1950. In 2002 the 12 km (7.5 mi) long floating terminus of the glacier entered a phase of rapid retreat, with the ice front breaking up and the floating terminus disintegrating and accelerating to a retreat rate of over 30 m (98 ft) per day. On a shorter timescale, portions of the main trunk of Kangerdlugssuaq Glacier that were flowing at 15 m (49 ft) per day from 1988 to 2001 were measured to be flowing at 40 m (130 ft) per day in the summer of 2005. Not only has Kangerdlugssuaq retreated, it has also thinned by more than 100 m (330 ft).(Truffer)
The rapid thinning, acceleration and retreat of the Helheim, Jakobshavn and Kangerdlugssuaq glaciers in Greenland, all in close association with one another, suggests a common triggering mechanism, such as enhanced surface melting due to regional climate warming or a change in forces at the glacier front. Enhanced melting leading to lubrication of the glacier base has been observed to cause only a small seasonal velocity increase, and the release of meltwater lakes has likewise led to only small short-term accelerations.(Das) The significant accelerations noted on the three largest glaciers began at the calving front and propagated inland, and are not seasonal in nature.(Pelto) Thus, the primary source of outlet glacier acceleration widely observed on small and large calving glaciers in Greenland is changes in dynamic forces at the glacier front, not enhanced meltwater lubrication.(Pelto) This was termed the Jakobshavns Effect by Terence Hughes at the University of Maine in 1986.(Hughes)
The collapsing Larsen B Ice Shelf in Antarctica is similar in area to the U.S. state of Rhode Island.
The climate of Antarctica is one of intense cold and great aridity. Most of the world's freshwater ice is contained in the great ice sheets that cover the continent of Antarctica. The most dramatic example of glacier retreat on the continent is the loss of large sections of the Larsen Ice Shelf on the Antarctic Peninsula. Ice shelves are not stable when surface melting occurs, and the collapse of the Larsen Ice Shelf was caused by warmer melt season temperatures that led to surface melting and the formation of shallow ponds of water on the ice shelf. The Larsen Ice Shelf lost 2,500 km2 (970 sq mi) of its area from 1995 to 2001. In a 35-day period beginning on January 31, 2002, about 3,250 km2 (1,250 sq mi) of shelf area disintegrated. The ice shelf is now 40% the size of its previous minimum stable extent.(NSIDC2) The recent collapse of the Wordie, Prince Gustav, Mueller, Jones, Larsen-A and Larsen-B ice shelves on the Antarctic Peninsula has raised awareness of how dynamic ice shelf systems are. Jones Ice Shelf had an area of 35 km2 (14 sq mi) in the 1970s, but by 2008 it had disappeared.(Cook and Vaughan) Wordie Ice Shelf has gone from an area of 1,500 km2 (580 sq mi) in 1950 to 140 km2 (54 sq mi) in 2000.(Cook and Vaughan) Prince Gustav Ice Shelf has gone from an area of 1,600 km2 (620 sq mi) to 11 km2 (4.2 sq mi) in 2008.(Cook and Vaughan) After their loss, the reduced buttressing of feeder glaciers has allowed the expected speed-up of inland ice masses following shelf-ice break-up.(Rignot) The Wilkins Ice Shelf is another ice shelf that has suffered substantial retreat. The ice shelf had an area of 16,000 km2 (6,200 sq mi) in 1998, when 1,000 km2 (390 sq mi) was lost.(Humbert) In 2007 and 2008 significant rifting developed and led to the loss of another 1,400 km2 (540 sq mi) of area. Some of the calving occurred in the Austral winter.
The calving seems to have resulted from preconditioning, such as thinning possibly due to basal melt, as surface melt was not as evident, leading to a reduction in the strength of the pinning point connections. The thinner ice then experienced spreading rifts and breakup.(Pelto7) This period culminated in the collapse of an ice bridge connecting the main ice shelf to Charcot Island, leading to the loss of an additional 700 km2 (270 sq mi) between February and June 2009.(ESA)
Pine Island Glacier, an Antarctic outflow glacier that flows into the Amundsen Sea, thinned 3.5 m (11 ft) ± 0.9 m (3.0 ft) per year and retreated a total of 5 km (3.1 mi) in 3.8 years. The terminus of the Pine Island Glacier is a floating ice shelf, and the point at which it starts to float retreated 1.2 km (0.75 mi) per year from 1992 to 1996. This glacier drains a substantial portion of the West Antarctic Ice Sheet and, along with the neighboring Thwaites Glacier, which has also shown evidence of thinning, has been referred to as the weak underbelly of this ice sheet.(Rignot) Additionally, the Dakshin Gangotri Glacier, a small outlet glacier of the Antarctic ice sheet, receded at an average rate of 0.7 m (2.3 ft) per year from 1983 to 2002. On the Antarctic Peninsula, which is the only section of Antarctica that extends well north of the Antarctic Circle, there are hundreds of retreating glaciers. In one study of 244 glaciers on the peninsula, 212 have retreated an average of 600 m (2,000 ft) from where they were when first measured in 1953.(AAAS) The greatest retreat was seen in Sjogren Glacier, which is now 13 km (8.1 mi) further inland than where it was in 1953. There are 32 glaciers that were measured to have advanced; however, these glaciers showed only a modest advance averaging 300 m (980 ft) per glacier, which is significantly smaller than the massive retreat observed.(BBC3)
Impacts of glacier retreat
Some of this retreat has resulted in efforts to slow down the loss of glaciers in the Alps. To retard melting of the glaciers used by certain Austrian ski resorts, portions of the Stubai and Pitztal Glaciers were partially covered with plastic (Olefs). In Switzerland plastic sheeting is also used to reduce the melt of glacial ice used as ski slopes.(ENN) While covering glaciers with plastic sheeting may prove advantageous to ski resorts on a small scale, this practice is not expected to be economically practical on a much larger scale.
Many species of freshwater and saltwater plants and animals are dependent on glacier-fed waters to ensure the cold water habitat to which they have adapted. Some species of freshwater fish need cold water to survive and to reproduce, and this is especially true with salmon and cutthroat trout. Reduced glacial runoff can lead to insufficient stream flow to allow these species to thrive. Alterations to the ocean currents, due to increased freshwater inputs from glacier melt, and the potential alterations to thermohaline circulation of the world's oceans, may impact existing fisheries upon which humans depend as well.
The potential for major sea level rise depends mostly on a significant melting of the polar ice caps of Greenland and Antarctica, as this is where the vast majority of glacial ice is located. If all the ice on the polar ice caps were to melt away, the oceans of the world would rise an estimated 70 m (230 ft). However, with little major melt expected in Antarctica, sea level rise of not more than 0.5 m (1.6 ft) is expected through the 21st century, with an average annual rise of 0.004 m (0.013 ft) per year. Thermal expansion of the world's oceans will contribute, independent of glacial melt, enough to double those figures.(NSIDC2)
Copyright 1998-2005, Tibet Environmental Watch (TEW)
Friday, April 14, 2017
April 15 (A Triple)
1865 - President Abraham Lincoln died, and V.P. Andrew Johnson became the nation's 17th president.
The great leader was dead...He was replaced by a good man, but a below-average leader - and a poor president.
The nation suffered as a result of this change, especially the South, which was so happy to see Lincoln die. They would have been much better off with Lincoln in charge of Reconstruction, because he may have been able to keep the 'Radical Reconstructionists' in Congress from being as harsh as they were...Maybe.
But we weren't able to find out, and it wasn’t to be.
1923 - Insulin became generally available for diabetics.
There is no way to quantify the value of this event...In America alone there are over 18 million diabetics, and who knows how many either are or will become insulin dependent.
Thankfully insulin was discovered in an age of sanity instead of now, because it would likely never make it past the FDA, would be taken off the market due to side effects, or have an insane cost.
1947 - Jackie Robinson became the first black player to play Major League Baseball - for the Brooklyn Dodgers.
JACKIE IS ONE OF THE MOST IMPORTANT PEOPLE IN AMERICAN HISTORY!!! And is the second most important black American - second only to President Obama.
This is no slight on MLK, George Washington Carver, Booker T. Washington, etc. It is what it is, and sports plays an enormous role in American culture - Jackie Robinson should be acknowledged as such...Also, I hope you understand, even though I'm not a fan of Obama's politics there is no way to deny his place in American history.
1861 - President Lincoln sent Congress a message recognizing a state of war with the Southern states and calling for 75,000 volunteer soldiers: U.S. Civil War.
He knew it was coming, but couldn’t believe it when the war started...Bush was stunned for 30 minutes when 9/11 hit, but Lincoln was stunned for two days before he went into action. Can you imagine if the Liberal Lapdog Media had been around during the time of the Civil War?
And by the way, 75,000 proved to be a terribly shortsighted number.
1945 - British and Canadian troops liberated the Nazi concentration camp Bergen-Belsen. The troops discovered 28,000 women, 12,000 men and another 13,000 unburied bodies: WWII.
The Holocaust didn’t happen!! How can any human being believe this crap?
It's a good thing the Allies took many pictures, and that the Nazis kept such grand records of their crimes.
1998 - Pol Pot died at age 73, evading prosecution for the deaths of two million Cambodians.
Pol Pot was a butcher, but only a minor member of the 20th Century Mega-Murderer's Club, though his total is almost more impressive because his were 'earned' using much less sophisticated killing tools (clubs, knives, etc.):
USSR = 61,900,000
Communist China = 35,200,000 (may actually be around 80,000,000)
Nazi Germany = 20,900,000
Nationalist China = 10,000,000
Imperial Japan = 5,900,000
Mao’s Pre-takeover Soviets in China = 3,400,000
Cambodia (Pol Pot) = 2,000,000
It's important to note, the USSR and Germany kept pretty good records of their killing stats, but China, Japan and Cambodia's numbers may be much higher than listed.
I highly recommend you read about all the 20th Century's Democidal Nuts.
Wikis For Teachers
Hey, I'm Brandon A. And This Is My Wiki. The answer to the question you see above is Fort Sumter
Brandon A.'s Helpful Link
The Beginning Of The Civil War
On April 10, 1861, General Beauregard (the leader of the Confederate forces in Charleston Harbor) requested that the fort be surrendered. Major Anderson (the leader of the Union forces in the fort) refused. On April 12, 1861, because of Major Anderson's refusal to surrender the fort, the Confederate batteries under the command of General Beauregard opened fire on Fort Sumter. The Union forces inside the fort were unable to return fire effectively. At 2:30 P.M. on April 13th, Major Anderson surrendered the fort to the Confederate forces. A large part of why he surrendered was that he was running out of food. Lincoln had sent a relief party, but it failed to arrive soon enough to be of any help. While no one was killed in the bombardment itself, one Union artillerist was killed and three were mortally wounded when a cannon exploded prematurely afterward. This battle occurred at Charleston Harbor, South Carolina, and is considered the beginning of the Civil War. From 1863 to 1865 the Confederates were able to hold the fort despite a Union siege. By the end of the siege, however, the fort was little more than a smoldering heap of rubble.
Above: A picture of the confederates bombarding Fort Sumter.
Fort Sumter was located on an island.
Above: A picture of the Confederate general, General Beauregard, who was in charge of bombarding the fort. He was born near New Orleans, Louisiana on the 28th of May in 1818. He graduated 2nd in his class from the prestigious West Point military academy.
Left: A Union Flag At The Time Of The Battle - Right: A Confederate Flag At The Time Of The Battle
Below: A telegram sent by Major Anderson to the War Office of the time explaining the conditions at the fort and why the Union forces had no choice but to surrender.
My information and the majority of my pictures came from these websites:
|
Early Political Experience
After returning home shattered from his experience in the military, the young adult Roosevelt took on responsibilities as a millwright working under his father. It was there, under those profoundly unstimulating conditions, that his fiery desire to enter the political ring was rekindled. He began to work his way up to his ultimate, long-held goal: to be President of the United States. On the advice of a close friend of the family, G. Wilcox Ubernachten, he took up a collection and entered the Halifax mayoral race. As Ubernachten had predicted, the citizens of Halifax had become bored with the old crowd of politicians in the village, and they were galvanized by Roosevelt's idealistic dogmatism. Roosevelt won, and he held the office for five years. After he felt that his tenure was served, he bid farewell to his beloved hometown and went off to Washington, D.C. to enter his name on the ballot as a candidate for Senator of New Hampshire. To make a long story short, Roosevelt lost this election. His hopes and dreams seemed a shambles, and a Presidential campaign now seemed much farther off than before. He took on legal work for a local House Representative and aided his staff in writing legislation. His deft skill of simplicity and his ability to convey the spirit of the law through the letter earned him acclaim all around Washington. By the age of thirty-two, he had made many powerful friends. He would now find it much easier to win recognition in his next race: for the New Hampshire governorship.
Putting in overtime to beat the incumbent establishment, Roosevelt threw together a grassroots campaign that won over the simple folk and the city folk alike. The final straw was his sound defeat, in the debate rounds, of incumbent governor Heathcliffe von Willowsby, a supercilious old nobleman who had held power for upwards of twenty years. Roosevelt won the election in a landslide, partially due to the deft campaign strategy that his old friend G. Wilcox had helped him formulate. A common campaign theme song during the race was:
Wigglesby or Willowsby,
Whichever fop our eyes can see
Roosevelt's our man,
For we believe he can
Roosevelt's service as Governor was untarnished, and he received much of the credit for the economic growth caused by the Industrial Revolution. His newfound popularity and stronghold in the Northeast would aid him in the race that would take place six years later. He would, he thought, make a great President after all.
Group Suggests Regionalizing Some Special Education Services
Mar 5, 2015
Fifteen years ago, roughly one in 150 children was identified as being on the autism spectrum. In 2010, that number jumped to one in 68.
Should all kids, regardless of their individual abilities, be taught in the same classroom?
It's a controversial topic, and the laws around it are a little contradictory. For example, federal law requires disabled students to be taught in what's called the "least restrictive environment." In Connecticut, this is defined by time spent with non-disabled peers. But, for some kids, being around non-disabled peers could actually be considered restrictive.
"The least restrictive environment, that term, is commonly misunderstood and misused," said Julie Swanson, a special education advocate. Swanson is a member of a working group that's looking for ways to save money on special education and provide better services. One of the group's ideas is to create separate regional schools to house some students with disabilities.
The need for these schools, according to Swanson, is partially due to the dramatic rise in autism spectrum disorders. Fifteen years ago, roughly one in 150 children was identified as being on the spectrum, according to the Centers for Disease Control.
In 2010, that number jumped to one in 68. This number could be higher, since the CDC estimates prevalence by looking at children who are already receiving services for their autism. A study in South Korea, for example, found a prevalence rate of one in 38, with two-thirds of those kids having been previously undiagnosed.
The rise in autism is generally attributed to an improved ability to detect the disorder, although others believe that prevalence is increasing. The reasons for this increase have not been pinpointed, though many theories have been posited.
Swanson said that some children on the autism spectrum would benefit from a special school, but certainly not all of them.
"Where the concern comes in, is when you have those kids, who may be less impacted by their autism, yet they still require some pretty intensive services, sending those kids to a regionalized center -- a separate building -- where there isn't any access to typically developing peers, is a concern," Swanson said.
State Representative Terrie Wood co-chairs the working group. She said the regional centers would model other successful special education schools -- such as Windward School in White Plains, New York -- and that parental preference would still matter.
"Parents have the option, as well they should, and the choice, to determine where they want their child educated," Wood said.
The working group consists of 21 members, and is a part of the Municipal Opportunities and Regional Efficiencies (MORE) Commission, which was set up in 2010 to find ways to streamline services among the state's towns and cities.
The General Assembly is set to explore the working group's suggestions.
AY McDonald
Every water system has to cope with non-revenue water. Main leaks, theft, tank overflows and unmeasured flow through water meters all contribute to a system's non-revenue water problem. According to the American Water Works Association (AWWA), 14% of indoor household usage in North America can be attributed to leaks. As residential water meters aren't designed to register low-flows such as leaks and drips, much of this usage goes un-metered and un-billed. This type of non-revenue water is called "apparent loss" and is valued at the retail water rate. For systems in which waste water is also billed based on water consumption, this apparent loss is valued at the retail rate for water and waste water combined. Apparent losses through residential water meters can add up to millions of dollars annually in non-revenue water and sewer.
The UFR captures this low-flow water and forces it through the meter in a way that causes nearly every drop to be registered. Apparent losses are reduced and customers are held accountable for their actual usage.
UFR installations can increase the measurement of billable water by as much as five to ten percent. Customers are held accountable for their actual usage, and the system's apparent losses are reduced considerably. Customers who are held accountable for their usage are more likely to fix leaks and conserve our most precious resource: clean water.
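The scale of the revenue at stake can be illustrated with some simple arithmetic. The sketch below is our own illustration, not a calculation from AY McDonald; every number in the example (household count, leak rate, billing rates) is a hypothetical placeholder:

```python
def annual_apparent_loss(households, leak_gal_per_day, water_rate, sewer_rate=0.0):
    """Estimate annual revenue lost to un-metered low flows.

    Rates are in dollars per 1,000 gallons; leak_gal_per_day is the
    average un-registered flow per household. Where waste water is
    billed on metered consumption, include sewer_rate as well.
    """
    gallons_per_year = households * leak_gal_per_day * 365
    return gallons_per_year / 1000 * (water_rate + sewer_rate)

# Hypothetical system: 100,000 households, 10 gal/day un-registered per
# household, $4 water + $6 sewer per 1,000 gallons.
loss = annual_apparent_loss(100_000, 10, 4.0, 6.0)
print(f"${loss:,.0f} per year")  # $3,650,000 per year
```

Even modest per-household leakage compounds to millions annually once sewer billing is included, which is why recovering even a few percent of low-flow usage can pay for meter upgrades.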
One of Utah's most unusual tourist attractions is dying. A rare type of geyser that operates on cold water — sometimes called the "soda-pop geyser" — has apparently been plugged up by visitors dropping rocks into it.
GREEN RIVER, Emery County — One of Utah's most unusual tourist attractions is dying.
A rare type of geyser that operates on cold water, it's pretty much a fizzle these days.
A typical eruption these days is just a bit of fizzing at the mouth of the geyser, sometimes with a water column rising just a few feet high. The tiny eruptions are notably feeble compared to the impressive eruptions of the past, which used to soar high above the desert.
"Better than Yellowstone!" Chandler said of those good old days. "It would go up and stay!"
Rick Lyons, a University of Utah geo-engineering student who is writing his thesis on Crystal Geyser, agrees.
"They used to have eruptions up to 100 feet," Lyons said. "It's gotten where most of the eruptions are less than 20 feet. Most of them I would say between 3 and 8 feet."
Although Crystal Geyser looks like a spectacular work of nature — it has dazzling terraces of colorful travertine deposited by gushing water — it was actually created by a man-made accident. Eighty years ago, an oil-drilling project went awry. Instead of oil, the drill rig hit a deep deposit of carbon dioxide.
The fizzing commenced.
"It's exactly the same as opening a bottle of coke, or anything that has carbonation in it," Lyons said. "Same exact process."
Big eruptions have been well-documented over the years. Color film footage of a spectacular eruption was shot by French kayakers traveling down the nearby Green River in 1938. Chandler has assembled a collection of many photographs of eruptions 80 to 100 feet high during the decades when it became a well-known destination for people who lived in the area.
"There'd be 15 or 20 kids here and everybody running underneath the geyser just having a blast," Chandler said.
There are many theories about what caused the recent decline in the geyser's performance: years of drought, a nearby drilling project, even attempted sabotage by a rival geyser owner who allegedly tried to destroy Crystal Geyser with dynamite.
Scientists, though, believe the most likely villain is tourism. Visitors get tired of waiting for an eruption and drop rocks down the old drilling pipe.
"Every time I've been there," Lyons said, "I've talked to people who say, 'Oh, we can get this to trigger. All we have to do is throw some rocks down into the well.'"
Scientists have been studying the geyser for decades, sometimes lowering instruments down its throat. As recently as 2010, they were able to lower the instruments 52 feet down into the hole. But now they hit an obstruction 19 feet down.
The rocks dropped by visitors have plugged it up, just like someone putting the cap back on the soda-pop bottle.
Geyser lovers like Chandler believe someone ought to spend money to re-drill the hole and unshackle the geyser's true potential.
The dying geyser is still of considerable interest to scientists. On a recent day, University of Utah biologist William Brazelton was there to take water samples from the huge terraces of minerals deposited by the geyser.
"This formation is just beautiful, intricate," he said.
Brazelton's studies involve the ecological interactions of microbes.
"We know nothing about how that works," he said. "This might be a weird spot to look for that sort of thing. But weird environments can help you understand how those basic processes work."
Article | Open
• Scientific Reports 5, Article number: 10203 (2015)
• doi:10.1038/srep10203
Species evolution is indirectly registered in their genomic structure. The emergence and advances in sequencing technology provided a way to access genome information, namely to identify and study evolutionary macro-events, as well as chromosome alterations for clinical purposes. This paper describes a completely alignment-free computational method, based on a blind unsupervised approach, to detect large-scale and small-scale genomic rearrangements between pairs of DNA sequences. To illustrate the power and usefulness of the method we give complete chromosomal information maps for the pairs human-chimpanzee and human-orangutan. The tool by means of which these results were obtained has been made publicly available and is described in detail.
Structural genomic rearrangements are a major source of intra- and inter-species variation. Chromosomal inversions, translocations, fissions and fusions, are part of the naturally occurring genetic diversity of individuals, are selectable and can confer environment-dependent advantages1. Chromosome rearrangements are also associated with disease, namely, developmental disorders and cancer. For example, many leukaemia patients present a reciprocal translocation between chromosomes 9 and 22, also known as the Philadelphia chromosome. This produces BCR-ABL fusion proteins that are constitutively active tyrosine kinases, contributing to tumour growth and proliferation2. Another striking example is the human inversion polymorphism in the 17q21 region, which contains the neurodegenerative disorder-associated gene MAPT (microtubule associated protein Tau). The direct oriented H1 haplotype is common and relates with increased Alzheimer’s and Parkinson’s disease risk, while the inverted H2 haplotype has higher frequencies in Southwest Asia and Southern Europe populations, particularly around the Mediterranean3. Recurrent inversions are found in the primate lineage, where the H2 haplotype is the ancestral state, and recent work evidences that Neanderthals and Denisovans also carried the H1 allele5.
How genome architecture changes contribute to speciation and which macroevolutionary events occurred through time are fundamental to understand the dynamics of chromosome evolution, and hence, the origins of species. In addition, chromosome alterations are hallmarks of cancer genomes with diagnosis and prognosis value6, and are also used in prenatal and postnatal clinical settings. Several insights into chromosome structure and evolution have been traditionally achieved by cytogenetic procedures such as G-banding, or molecular karyotyping approaches like fluorescence in situ hybridisation (FISH) and, more recently, array-based methods7. However, in some groups, such as the great apes, access to samples is often difficult, e.g. due to ethical reasons. Also, these approaches can be time-consuming, expensive, or lack resolution, as opposed to computational solutions8.
The advent of sequencing technology enabled the analysis of genomic sequences at nucleotide resolution. Nowadays, next-generation sequencing is bringing a substantial increase in the speed, quality and reliability of the results at much lower cost, although there is still promising room for improvement. The availability of sequenced genomes boosted computational methods into a new era, allowing some expensive and/or lengthy wet lab processes to be complemented by computational approaches9.
Derived scientific insights from genomic sequences, including the conserved distribution of genes on the chromosomes of different species or synteny, have been mostly explored using sequence alignments10,11,12,13,14,15,16,17,18,19, while for visualisation, a wide variety of strategies have been proposed20,21,22,23,24. Specifically, at a macro level the most popular are Mauve13, Cinteny25, Apollo24, MEDEA, MizBee26 and Circos27, which are discussed in a recent review28. Although circle-based visualisation is becoming very popular, for detecting block alignments and rearrangements across very similar species, such as primates, an ideogram still seems to be the best approach.
We propose a computational method to detect signatures of chromosome evolution. The method is completely alignment-free and is based on the information content of the sequences being compared. The information content itself is estimated using data compression techniques. The resulting stand-alone algorithm depends only on two parameters.
We developed a tool by means of which the proposed method can be tested in practice. The tool has been made publicly available and is described in detail. It is capable of producing an SVG image that shows the correspondence of regions between two sequences. Its performance is demonstrated with the help of several examples. Those involving synthetic sequences are intended to illustrate the underlying principles. More realistic case studies, involving prokaryotic and eukaryotic genomes, are also discussed. In particular, we obtain human/chimpanzee and human/orangutan chromosome maps.
For clarity, the potential and limitations of the tool and some of its design tradeoffs are discussed separately, following the description of the method. This separates limitations that are inherent to the method from those that are by-products of the current implementation, and that as such might be removed in future implementations.
Creating models of the data
The immediate goal of a data compression method is to describe data as compactly as possible. The usefulness of data compression as a tool to find structure in data is perhaps less well-known29,30.
Nevertheless, this ability is a direct consequence of how data compression works. Compression methods usually rely on statistical data models that estimate the probability of the data symbols along the sequence. Better (i.e., more accurate) statistical models tend to lead to better compressors (i.e., higher compression ratios).
Ultimately, the size of the compressed data can be seen as an estimate of the Kolmogorov (algorithmic) complexity of the original data, a fundamental yet noncomputable complexity measure closely related to information theory31.
Genomic data compression, now more than twenty years old32,33,34,35,36,37,38,39,40,41,42,43,44, has been the subject of recent review articles45,46,47. Typically, the compression methods rely on a combination of models that explore the redundancy found in DNA sequences, usually with models developed to handle high information content (i.e., hard to compress) regions and distinct models to handle low information content (i.e., very compressible) regions.
The method proposed in this paper identifies small-scale or large-scale rearrangements between pairs of sequences called the reference and the target. The method applies to arbitrary sequences, and therefore the reference and the target can be as large as an entire chromosome or genome. The goal of the method is to automatically detect regions in the target sequence that have information content similar to regions found in the reference. The method yields a set of segments of the target sequence and, for each of these, the corresponding segment found in the reference sequence.
Both sequences are preprocessed such that their alphabet is Σ = {A, C, G, T}. Symbols originally not belonging to Σ (for example, N's) are substituted by uniformly distributed symbols from Σ, in order to keep the original length of the sequence. These randomly generated segments are high information content regions and, therefore, will not share information with any other sequence, hence will not interfere with the matching process.
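The preprocessing step described above can be sketched in a few lines of Python. This is an illustrative sketch of the idea, not the published tool's implementation; the function name and the fixed seed are our own choices:

```python
import random

def preprocess(sequence, alphabet="ACGT", seed=42):
    """Replace any symbol outside the alphabet (e.g. 'N') with a uniformly
    drawn symbol, preserving the original length of the sequence."""
    rng = random.Random(seed)
    return "".join(s if s in alphabet else rng.choice(alphabet)
                   for s in sequence)
```

Because the replacements are drawn uniformly at random, the filled-in segments carry maximal information content: they compress against no reference and therefore never produce spurious matches.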
The core of the method involves the estimation of the amount of conditional information that is required to represent a certain region of the target, using exclusively information from the reference. Basically, if x and y are, respectively, the target and reference sequences, we compute a numerical information sequence I = I_1, I_2, ..., I_n, where n = |x| is the size of the target sequence. For a position i in the target sequence, I_i measures the number of bits required to represent the symbol located in that position, according to the aforementioned interpretation of conditional information.
To properly estimate I_i, it is crucial to have a good model of the reference sequence y. We have chosen finite-context models (FCMs) for this purpose. FCMs are probabilistic models based on the assumption that the information source is Markovian, i.e., that the probability of the next outcome depends only on some finite number of (recent) past outcomes referred to as the context.
The estimated probability distribution at position i, according to the order-k context c_i = x_{i-k} ... x_{i-1}, is calculated with the symbol counts previously computed on the reference sequence y, using the estimator

P(x_i = s | c_i) = (N_y(s | c_i) + α) / (N_y(c_i) + 4α),

where N_y(s | c_i) represents the number of times that symbol s was found in sequence y having c_i as context, and where

N_y(c_i) = Σ_{s ∈ Σ} N_y(s | c_i)

is the total number of events that occurred in y in association with context c_i. The parameter α is set to 0.001, forcing the estimator to behave approximately as a maximum likelihood estimator. In practice, this makes the segmentation process easier (see below). The number of bits that is required to represent the symbol x_i using exclusively information from the reference sequence is given by

I_i = -log2 P(x_i | c_i).
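The order-k finite-context model and the per-symbol information values can be sketched as follows. This is a minimal Python illustration of the estimator above (with |Σ| = 4 and α = 0.001), not the authors' actual implementation; function names are our own:

```python
from collections import defaultdict
from math import log2

ALPHABET = "ACGT"
ALPHA = 0.001  # small pseudo-count: close to maximum-likelihood behaviour

def train_fcm(reference, k):
    """Count order-k (context, symbol) occurrences in the reference sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(k, len(reference)):
        counts[reference[i - k:i]][reference[i]] += 1
    return counts

def symbol_bits(symbol, context, counts):
    """I_i = -log2 P(symbol | context) under the smoothed estimator."""
    ctx = counts[context]
    total = sum(ctx.values())
    p = (ctx[symbol] + ALPHA) / (total + len(ALPHABET) * ALPHA)
    return -log2(p)

def information_sequence(target, counts, k):
    """Bits needed for each target symbol, using only the reference model."""
    return [symbol_bits(target[i], target[i - k:i], counts)
            for i in range(k, len(target))]
```

A target region that follows the reference's statistics yields I_i values near zero, while a region whose contexts were never seen in the reference falls back to a near-uniform distribution, i.e. roughly 2 bits per symbol.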
Finding information-similar regions
As explained before, the core idea of the method is to compute, along the target sequence x, the amount of information required to represent x using exclusively information from the reference sequence y. Therefore, at a first stage, we end up with a numerical information sequence I of size n. Fig. 1 illustrates how the method operates, using synthetic data generated with an appropriate tool48. The target was created by manipulating some parts of the reference, as described in the figure. Additional examples are provided in the Supplementary Material file.
Figure 1: Similarity discovery, step by step.
(A) Scan the target to identify those of its regions that significantly share information content with the reference. (B) Scan the reference to find those of its regions associated with each region identified at step A. Steps (C), (D), (E) and (F) repeat step B for each region identified at step A.
Regions where the information content is small indicate a high level of information sharing with $y$. To mark them, we compare a smoothed version of the information sequence with a threshold $T$. The result is the set of regions of interest of the target $x$, for the given reference $y$.
It remains to find the regions of the reference which are strongly associated with each region of interest. To do this we invert the roles of the reference and the target. More precisely, each region of interest is now regarded as a reference, and $y$ is taken as the target. We thus compute, for each region of interest, an information sequence over $y$, from which the regions of $y$ associated with it can be found.
The described procedure can find pairs of regions that are similar in the sense of information sharing, but it does not take possible inversions into account. For this purpose, the reference sequence is reversed, complemented and loaded into the FCM model. Then, steps entirely similar to those described above are taken. Having done this, both inversions and direct homologies can be segmented in the target sequence.
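Producing the reverse complement of the reference for this second pass is straightforward; a minimal sketch:

```python
def reverse_complement(seq):
    """Reverse and complement a DNA sequence so that inverted
    homologies in the target become direct matches."""
    comp = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(comp[base] for base in reversed(seq))
```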
If both the inverted and direct instances of a region are found to have high information content, then the region shares no information with the rest of the data and therefore it is left unmarked. This is the case with regions that are essentially unique and with unsequenced regions (those that originally contained N’s, that have been replaced with random data).
The tool
An implementation of the method (Smash) is freely available under a GPL-2 license. Smash is a tool that computes chromosome information maps, with an ideogram output architecture. The colours for each block are automatically calculated using the HSV (Hue, Saturation, Value) colour space, in which only the hue varies. For more information about Smash, see the Supplementary Material, Section “The Smash tool”.
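The hue-only colouring scheme can be illustrated with Python's standard `colorsys` module. The saturation and value fixed here are arbitrary illustrative choices, not the values used by Smash:

```python
import colorsys

def block_colours(n_blocks, saturation=0.7, value=0.9):
    """Assign each matched block an RGB hex colour by spreading the
    hue evenly around the HSV wheel while keeping S and V fixed."""
    colours = []
    for i in range(n_blocks):
        hue = i / max(n_blocks, 1)
        r, g, b = colorsys.hsv_to_rgb(hue, saturation, value)
        colours.append("#%02x%02x%02x"
                       % (round(r * 255), round(g * 255), round(b * 255)))
    return colours
```

Because only the hue varies, the blocks are visually distinct while keeping comparable brightness.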
The threshold T
Smash has a command-line option by means of which the threshold $T$ can be varied (see the Supplementary Material). The threshold can be regarded as a parameter and, in general, the best $T$ is data-dependent. The guiding principle is to choose $T$ so that it selects regions of complexity sufficiently below the average. In practice, this is not difficult to achieve, but some experimentation may be required to obtain the best results.
As a rule, $T$ should be smaller when working with similar species than when working with more distant species; for example, we used a smaller threshold for the human/chimpanzee pair than for the chicken/turkey pair. When working with entire chromosomes, the threshold can be adjusted to match the degree of divergence encountered.
Model depth
The model depth, described by the parameter $k$, must be an integer in the range [1, 28] (as described in the Subsection “Parameters, Options”, option -c). The default value works well for sequences longer than, say, 1 Mb (1,000,000 symbols). The default also works well for smaller sequences, although in this case the actual performance may depend on how repetitive they are. We have found that there is often little practical need to tune $k$.
The relation between the model depth $k$ and the estimated probabilities (which are directly related to the symbol counters), as well as the capabilities of Markov models in the context of DNA sequence modelling, have been treated in detail elsewhere44.
The proposed method is fully commutative, that is, it has the potential to lead to the same results when the reference and the target are swapped. Smash can easily be made commutative as well. However, in most usage scenarios, there is a natural reference sequence. Furthermore, the assumption that one of the two sequences is the reference simplifies the algorithm and leads to time savings. For these two reasons, the current implementation of Smash is approximately commutative, but not exactly so.
To illustrate this, we performed additional experiments using both prokaryotic and eukaryotic genomes. For the prokaryotes, we have used Shigella flexneri (NC_017328) and Escherichia coli (NC_017638). As can be seen in Supplementary Fig. 2, the maps are very similar (apart from some differences in colour and reversed pattern assignment, due to the automatic colouring method used). Nevertheless, it is possible to spot small differences, mainly because we have discarded matched regions smaller than 20 kb. Supplementary Fig. 3, which illustrates the human/chimp pair, shows that at a larger scale these small differences tend to disappear.
Working with distant genomes
Smash does work for more distant genomes than, say, the human/chimpanzee pair studied in detail next. This is shown e.g. by the chicken/turkey map of chromosome 1 included as Supplementary Fig. 1. According to TimeTree62, Gallus gallus and Meleagris gallopavo have an estimated divergence time of 44.6 million years (MY), while between Homo sapiens and Pan troglodytes or Pongo abelii the divergence times are estimated as 6.3 MY and 15.7 MY, respectively.
We emphasise, however, that Smash can be applied to pairs of sequences that are even more distant. Regardless of the exact nature of the reference and target, Smash will find the rearrangements present, even if one or both sequences are synthetic (computer generated). This can be useful to develop a better understanding of how Smash works, or for testing purposes. Examples are presented in Supplementary Figs. 4 and 5, where synthetic sequences containing different rearrangements were processed with Smash. For comparison purposes, the output of widely used tools such as Mauve13 and VISTA15 is also provided. In Supplementary Figs. 6 and 7, the methods are compared in real prokaryotic and eukaryotic sequences, respectively.
Working with unassembled sequences or assembling errors
One of the advantages of Smash is that it works even when the reference is not assembled. Therefore, it can be used with references composed of non-assembled reads obtained directly from the NGS sequencers. In fact, although next-generation sequencing made low-cost, high-speed sequencing possible, it also decreased the size of sequencing reads61. On the other hand, most of the primate assembled sequences use the human genome as a reference. This might be problematic, because of the assumption that humans and the other primates exhibit a high degree of homology, which might not always be true53. Hence, it might be important to measure similarity against non-aligned references.
Figure 2 depicts the results of Smash on chromosome 18 of human and chimpanzee, using random permutations of blocks of different sizes, showing its robustness when fragmented references are used. Smash spent less than 8 minutes on each computation.
Figure 2: Smash computation over P. troglodytes chromosome 18, using as reference permuted blocks of different sizes from H. sapiens chromosome 18.
Colours are only consistent within each run of the tool and, therefore, may differ from one run to another when the sequences or the parameters change. (A) Smash was executed using a fixed threshold. (B) Smash was executed using a variable threshold (upper value).
Smash is able to identify regions containing shared information even when one of the sequences is block-permuted, a capability that may be of interest to measure sequence similarity, e.g. when one of the sequences is not assembled, or when there are assembly errors. Obviously, the identification of the precise genomic rearrangements that took place will have to be deferred until final assembly takes place.
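The block-permutation experiment of Fig. 2 can be reproduced in outline as follows. This is a sketch; the block handling of the actual experiment may differ:

```python
import random

def permute_blocks(seq, block_size, seed=0):
    """Split a sequence into consecutive fixed-size blocks and shuffle
    their order, mimicking a fragmented, unassembled reference."""
    blocks = [seq[i:i + block_size] for i in range(0, len(seq), block_size)]
    random.Random(seed).shuffle(blocks)
    return "".join(blocks)
```

The permuted sequence has exactly the same symbol content as the original, only the block order changes, which is why information-based matching still succeeds where positional alignment would struggle.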
Results and Discussion
To illustrate the potential of the proposed method, we show the complete chromosomal information maps for the pairs human-chimpanzee and human-orangutan. Additional examples can be found in the Supplementary Material. The Homo sapiens, Pan troglodytes and Pongo abelii reference assembled chromosomes were downloaded from the NCBI. In order to create the human-chimpanzee map, we have concatenated chromosomes 2A and 2B of the chimpanzee, ran Smash once per chromosome (totalling 23 runs), then manually corrected the associated picture regarding the hypothetical centromere between 2A and 2B, and finally grouped all the maps in one global picture (the one shown in Fig. 3). A similar process was done for the human/orangutan map, shown in Fig. 4. The results obtained confirm and extend previous work based on orthologous gene distribution, array comparative genomic hybridisation (array CGH) and FISH approaches49,50,51.
Figure 3: Human chimpanzee chromosomal map, obtained from chromosome pairwise comparison.
Inversions can be observed in chromosomes 1, 4, 5, 7, 12, 15, 17, 18, and Y. Chromosomes 2A and 2B of chimpanzee have been fused for a more concise representation.
Figure 4: Human orangutan chromosomal map, obtained from chromosome pairwise comparison.
Inversions are present in chromosomes 2, 3, 4, 7, 8, 9, 10, 11, 16, 17, 18 and 20. Chromosomes 2A and 2B of orangutan have been fused for a more concise representation.
Figure 3 shows the complete information maps between the human and chimpanzee genomes, using chromosome pairwise comparisons, which are characterised by several inversions, in chromosomes 1, 4, 5, 7, 12, 15, 17, 18, and Y. All known pericentric inversions were detected by our method, with the exception of the inversions in chromosomes 9 and 16, which are located in regions with limited available sequence information52. The structural rearrangements observed in the chimpanzee Y chromosome agree with previous reports53, where variable copy number and position of Y-specific genes were found among chimpanzees (Pan troglodytes) but not among bonobo (P. paniscus), gorilla (Gorilla gorilla gorilla and G. beringei graueri) or orangutan (Pongo pygmaeus and P. abelii) lineages54. In addition, we identified inversions in chromosome 7 (Fig. 5) that were only partially described before50. Despite their importance, inversions are traditionally difficult to detect, and new experimental approaches have recently been developed to improve the available tools55. These two inversions are located in 7p14.1 and 7q11.23, around the GLI3 and ELN genes, respectively, and both are associated with human disorders. Namely, the Greig cephalopolysyndactyly syndrome is caused by mutations, deletion or rearrangements in the region containing the GLI3 transcription factor that affect the development of the limbs, head and face, and is characterised by the presence of extra fingers or toes56. The Williams-Beuren syndrome (WBS) is a neurodevelopmental disease with distinctive facial and behavioural features, as well as several degrees of intellectual disability, caused by deletions of genes including ELN57. Curiously, inversion polymorphisms are present in a significant proportion of parents of WBS patients57,60, which is also observed in the 17q21.31 region59, suggesting that structural variants enhance some microdeletion syndromes.
Given the structural differences observed in these chromosomal regions, one might speculate that they have contributed to evolutionary innovation and the emergence of lineage-specific phenotypes.
Figure 5: Progressive human and chimpanzee chromosome 7 information maps.
For each chromosome, two regions were extracted (35 Mb to 45 Mb and 70 Mb to 80 Mb). The progressive maps for these sub-regions show the genes involved in the detected paracentric inversions.
Figure 4 depicts the complete information maps between human and orangutan. It shows that orangutan chromosome 1 is oriented in the opposite direction compared with its human counterpart. Moreover, there are large inversions in chromosomes 2, 3, 4, 7, 8, 9, 10, 11, 16, 17, 18 and 20. Although fewer data are available, the results are consistent with previous cytogenetic approaches that identified new rearrangements in the orangutan genome, specifically, a pericentric inversion on chromosome 1, complex rearrangements on chromosome 2 and a subtelomeric deletion on chromosome 19 60. Also, recent evidence suggests that the orangutan genome maintains the ancestral chromosomal state, with observable differences in most chromosomes when compared with humans, including chromosomes 1, 2, 3, 7, 10, 11 and 18 49.
The method and the implementation here described allows the detection of large-scale and small-scale genomic rearrangements, including balanced translocations and inversions that are not detected by array-CGH or chromosome alterations that are below the limits of microscopy, thus, extending the possibilities of genome-wide structure characterisation with a single tool.
In Supplementary Figs. 8 and 9 we provide an example of a translocation between chromosomes 5 and 17 of human and gorilla. As can be seen, after concatenating the sequences, Smash was able to detect a well-known translocation that is one of the foundations of gorilla speciation63.
Smash compares pairs of sequences. These pairs can be built using single chromosomes, as shown in Figs. 3 and 4, or sets of chromosomes concatenated in a single sequence, as in the example of the translocation shown in Supplementary Figs. 8 and 9. In either case, Smash looks for and reports the position of regions that are similar, from the point of view of information content. Hence, in the examples provided in Figs. 3 and 4, only the regions that are similar in each pair of chromosomes are reported. To have a full view, it would be required either to run Smash on each possible pair of chromosomes (i.e., all possible pairs formed between the set of human chromosomes and the set of chimpanzee chromosomes), or to concatenate the whole genome of each species into a single sequence. Naturally, when very large sequences are involved (for example, entire genomes concatenated), the visualisation granularity is reduced and the computational resources increase. A more detailed discussion can be found in Section 2 of the Supplementary Material.
Chromosome rearrangements can drive adaptation and evolution of novel traits, but they can be deleterious as well. Here, we show that compression-based models are remarkably capable of detecting signatures of genomic chromosomal evolution, namely to determine how information flows between sequences. The method is alignment-free and universal, in the sense that it can accept any input pair of genomic sequences, and depends only on two parameters.
A tool that implements the method has been made available for download. General guidelines have been given on how to select the values of its two parameters, which do not affect its performance in an overly sensitive way. Its advantages and limitations have been discussed.
The tool and the ideas that underlie its design may lead to new insights about important genomic questions, since it allows blind unsupervised detection of rearrangements and similarities between genomic sequences. An obvious example is the detection of evolutionary patterns across species, as demonstrated in the examples, but the tool has similar potential for diagnosis and genetic counselling. The detection of rearrangements in cancer genomes at high resolution levels is also considered important, in connection with risk stratification and personalised therapeutics.
Additional Information
How to cite this article: Pratas, D. et al. An alignment-free method to find and visualise rearrangements between pairs of DNA sequences. Sci. Rep. 5, 10203; doi: 10.1038/srep10203 (2015).
1. Genome architecture is a selectable trait that can be maintained by antagonistic pleiotropy. Nat. Commun. 4, 10.1038/ncomms3235 (2013).
2. Philadelphia chromosome-positive acute lymphoblastic leukemia. Cancer 117, 1583–1594 (2011).
3. Evolutionary toggling of the MAPT 17q21.31 inversion region. Nat. Genet. 40, 1076–1083 (2008).
4. The distribution and most recent common ancestor of the 17q21 inversion in humans. Am. J. Hum. Gen. 86, 161–171 (2010).
5. Using the Neanderthal and Denisova genetic data to understand the common MAPT 17q21 inversion in modern humans. Hum. Biol. 84, 1 (2013).
6. Advances in understanding cancer genomes through second-generation sequencing. Nat. Rev. Genet. 11, 685–696 (2010).
7. Molecular cytogenetics: recent developments and applications in cancer. Clin. Genet. 84, 315–325 (2013).
8. Digital karyotyping. Proc. Natl. Acad. Sci. USA 99, 16156–16161 (2002).
9. Analysis of high-throughput ancient DNA sequencing data. Methods Mol. Biol. 840, 197–228 (2012).
10. Glocal alignment: finding rearrangements during alignment. Bioinformatics 19, i54–i62 (2003).
11. Human-mouse alignments with BLASTZ. Genome Res. 13, 103–107 (2003).
12. Aligning multiple whole genomes with Mercator and MAVID. In Comparative Genomics, 221–235 (Springer, 2008).
13. progressiveMauve: multiple genome alignment with gene gain, loss and rearrangement. PLOS ONE 5, e11147 (2010).
14. Multiple whole-genome alignments without a reference organism. Genome Res. 19, 682–689 (2009).
15. VISTA: computational tools for comparative genomics. Nucleic Acids Res. 32, W273–W279 (2004).
16. Evolutionarily conserved elements in vertebrate, insect, worm, and yeast genomes. Genome Res. 15, 1034–1050 (2005).
17. Comparative genomic analysis using the UCSC Genome Browser. In Comparative Genomics, 17–33 (Springer, 2008).
18. Close sequence comparisons are sufficient to identify human cis-regulatory elements. Genome Res. 16, 855–863 (2006).
19. A physical map of the mouse genome. Nature 418, 743–750 (2002).
20. DAGchainer: a tool for mining segmental genome duplications and synteny. Bioinformatics 20, 3643–3646 (2004).
21. Versatile and open software for comparing large genomes. Genome Biol. 5, R12 (2004).
22. GenomeMatcher: a graphical user interface for DNA sequence comparison. BMC Bioinformatics 9, 376 (2008).
23.
24. Apollo: a sequence annotation editor. Genome Biol. 3, 1–14 (2002).
25. Cinteny: flexible analysis and visualization of synteny and genome rearrangements in multiple organisms. BMC Bioinformatics 8, 82 (2007).
26. MizBee: a multiscale synteny browser. IEEE Trans. Vis. Comput. Graphics 15, 897–904 (2009).
27. Circos: an information aesthetic for comparative genomics. Genome Res. 19, 1639–1645 (2009).
28. Visualizing genomes: techniques and challenges. Nat. Methods 7, S5–S15 (2010).
29. Comparative analysis of long DNA sequences by per element information content using different contexts. BMC Bioinformatics 8, S10 (2007).
30. DNA sequences at a glance. PLOS ONE 8, e79922 (2013).
31. An Introduction to Kolmogorov Complexity and Its Applications (Springer, 2008).
32. Compression of DNA sequences. In Proc. of the DCC, 340–350 (Snowbird, Utah, 1993).
33. A guaranteed compression scheme for repetitive DNA sequences. In Proc. of the DCC, 453 (Snowbird, Utah, 1996).
34. Significantly lower entropy estimates for natural DNA sequences. In Proc. of the DCC, 151–160 (Snowbird, Utah, 1997).
35. Biological sequence compression algorithms. Genome Inform. Ser., 43–52 (Tokyo, Japan, 2000).
36. DNACompress: fast and effective DNA sequence compression. Bioinformatics 18, 1696–1698 (2002).
37. A simple and fast DNA compressor. Software: Practice and Experience 34, 1397–1411 (2004).
38. An efficient normalized maximum likelihood algorithm for DNA sequence compression. ACM Trans. on Information Systems 23, 3–34 (2005).
39. DNA compression challenge revisited. In Combinatorial Pattern Matching: Proc. of CPM-2005, vol. 3537 of LNCS, 190–200 (Springer-Verlag, 2005).
40. Normalized maximum likelihood model of order-1 for the compression of DNA sequences. In Proc. of the DCC, 33–42 (Snowbird, Utah, 2007).
41. A simple statistical algorithm for biological sequence compression. In Proc. of the DCC, 43–52 (Snowbird, Utah, 2007).
42. DNA sequence compression using adaptive particle swarm optimization-based memetic algorithm. IEEE Trans. Evol. Comput. 15, 643–658 (2011).
43. Bacteria DNA sequence compression using a mixture of finite-context models. In Proc. of the SSP (Nice, France, 2011).
44. On the representability of complete genomes by multiple competing finite-context (Markov) models. PLoS ONE 6, e21588 (2011).
45. Computational solutions for omics data. Nat. Rev. Genet. 14, 333–346 (2013).
46. Data compression for sequencing data. Algorithms Mol. Biol. 8, 25 (2013).
47. Trends in genome compression. Curr. Bioinform. 9, 315–326 (2013).
48. XS: a FASTQ read simulator. BMC Res. Notes 7, 40 (2014).
49. TimeTree: a public knowledge-base of divergence times among organisms. Bioinformatics 22, 2971–2972 (2006).
50. How genomes are sequenced and why it matters: implications for studies in comparative genomics of humans and chimpanzees. Answers Res. Journal 4, 81–88 (2011).
51.
52. Recombination rates and genomic shuffling in human and chimpanzee—a new twist in the chromosomal speciation theory. Mol. Biol. Evol. 30, 853–864 (2013).
53. Discovery of human inversion polymorphisms by comparative analysis of human and chimpanzee DNA sequence assemblies. PLOS Genet. 1, e56 (2005).
54. Large-scale variation among human and great ape genomes determined by array comparative genomic hybridization. Genome Res. 13, 347–357 (2003).
55. Modernizing reference genome assemblies. PLOS Biol. 9, e1001091 (2011).
56. Y-chromosome variation in hominids: intraspecific variation is limited to the polygamous chimpanzee. PLOS ONE 6, e29311 (2011).
57. Directional genomic hybridization for chromosomal inversion discovery and detection. Chromosome Res. 21, 165–174 (2013).
58. The Greig cephalopolysyndactyly syndrome. Orphanet J. Rare Dis. 3, 238 (2008).
59. Copy number variation at the 7q11.23 segmental duplications is a susceptibility factor for the Williams-Beuren syndrome deletion. Genome Res. 18, 683–694 (2008).
60. A 1.5 million-base pair inversion polymorphism in families with Williams-Beuren syndrome. Nat. Genet. 29, 321–325 (2001).
61. Discovery of previously unidentified genomic disorders from the duplication architecture of the human genome. Nat. Genet. 38, 1038–1042 (2006).
62. New aspects of chromosomal evolution in the gorilla and the orangutan. Int. J. Mol. Med. 19, 437–443 (2007).
63. Segmental duplications and the evolution of the primate genome. Nat. Rev. Genet. 3, 65–72 (2002).
Supported by the European Fund for Regional Development (FEDER) through the Operational Program Competitiveness Factors (COMPETE) and by the Portuguese Foundation for Science and Technology (FCT), in the context of projects PEst-OE/EEI/UI0127/2014 and Incentivo/EEI/UI0127/2014. DP is supported by the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 305444 “RD-Connect: An integrated platform connecting registries, biobanks and clinical bioinformatics for rare disease research”. RMS is supported by the project Neuropath (CENTRO-07-ST24-FEDER-002034), co-funded by QREN Mais Centro program and the EU.
Author information
1. IEETA/DETI, University of Aveiro, Portugal
• Diogo Pratas
• Raquel M. Silva
• Armando J. Pinho
• Paulo J.S.G. Ferreira
D.P., A.P. and P.F. designed the algorithms. D.P. implemented and tested the software. D.P., R.S., A.P. and P.F. designed the experiments and interpreted the results. D.P., R.S., A.P. and P.F. wrote the manuscript. All authors reviewed the manuscript.
Competing interests
The authors declare no competing financial interests.
Corresponding author
Correspondence to Diogo Pratas.
Supplementary information
|
From Wikipedia, the free encyclopedia
Valleytronics is a portmanteau combining the terms valley and electronics. The term refers to the technology of control over the valley degree of freedom (a local maximum/minimum on the valence/conduction band) of certain semiconductors that present multiple valleys inside the first Brillouin zone—known as multivalley semiconductors.[1][2] The term was coined in analogy to the blooming field of spintronics. While in spintronics the internal degree of freedom of spin is harnessed to store, manipulate and read out bits of information, the proposal for valleytronics is to perform similar tasks using the multiple extrema of the band structure, so that the information of 0s and 1s would be stored as different discrete values of the crystal momentum.
The term is often used as an umbrella term for other forms of quantum manipulation of valleys in semiconductors, including quantum computation with valley-based qubits,[3][4][5][6] valley blockade and other forms of quantum electronics. The first experimental evidence of the valley blockade predicted in Ref. [7] (which completes the set formed by Coulomb charge blockade and Pauli spin blockade) was observed in a single-atom doped silicon transistor.[8] Several theoretical proposals and experiments were carried out in a variety of systems, such as graphene,[9] some transition metal dichalcogenide monolayers,[10] diamond,[11] bismuth,[12] silicon,[4][13][14] carbon nanotubes,[6] aluminium arsenide[15] and silicene.[16]
1. ^ "Condensed-matter physics: Polarized light boosts valleytronics". Kamran Behnia, Nature Nanotechnology 7, 488–489 (2012).
2. ^ "Valleytronics: Electrons dance in diamond". Christoph E. Nebel. Nature Materials 12, 690–691 (2013). doi:10.1038/nmat3724
3. ^ Gunawan, O.; Habib, B.; De Poortere, E. P.; Shayegan, M. (2006-10-30). "Quantized conductance in an AlAs two-dimensional electron system quantum point contact". Physical Review B. 74 (15): 155436. doi:10.1103/PhysRevB.74.155436.
4. ^ a b "Valley-Based Noise-Resistant Quantum Computation Using Si Quantum Dots". Dimitrie Culcer, A. L. Saraiva, Belita Koiller, Xuedong Hu, and S. Das Sarma. Phys. Rev. Lett. 108, 126804 (2012).
5. ^ "Universal quantum computing with spin and valley states". Niklas Rohling and Guido Burkard. New J. Phys. 14, 083008(2012).
6. ^ a b "A valley–spin qubit in a carbon nanotube". E. A. Laird, F. Pei & L. P. Kouwenhoven. Nature Nanotechnology 8, 565–568 (2013).
7. ^ Prati E (2011). "Valley blockade quantum switching in Silicon nanostructures". J Nanosc Nanotech. 11 (10): 8522–8526. arXiv:1203.5368. doi:10.1166/jnn.2011.4957.
8. ^ Crippa A; et al. (2015). "Valley blockade and multielectron spin-valley Kondo effect in silicon". Physical Review B. 92: 035424. arXiv:1501.02665. doi:10.1103/PhysRevB.92.035424.
9. ^ "Valley filter and valley valve in graphene". A. Rycerz, J. Tworzydło and C. W. J. Beenakker. Nature Physics 3, 172 - 175 (2007).
10. ^ "Valley polarization in MoS2 monolayers by optical pumping". Hualing Zeng, Junfeng Dai, Wang Yao, Di Xiao and Xiaodong Cui. Nature Nanotechnology 7, 490–493 (2012).
11. ^ "Generation, transport and detection of valley-polarized electrons in diamond". Jan Isberg, Markus Gabrysch, Johan Hammersberg, Saman Majdi, Kiran Kumar Kovi and Daniel J. Twitchen. Nature Materials 12, 760–764 (2013). doi:10.1038/nmat3694
12. ^ "Field-induced polarization of Dirac valleys in bismuth". Zengwei Zhu, Aurélie Collaudin, Benoît Fauqué, Woun Kang and Kamran Behnia. Nature Physics 8, 89-94 (2011).
13. ^ "Valley polarization in Si(100) at zero magnetic field". K. Takashina, Y. Ono, A. Fujiwara, Y. Takahashi and Y. Hirayama. Physical Review Letters 96,236801 (2006).
14. ^ "Spin-valley lifetimes in a silicon quantum dot with tunable valley splitting". C.H. Yang, A. Rossi, R. Ruskov, N.S. Lai, F.A. Mohiyaddin, S. Lee, C. Tahan, G. Klimeck, A. Morello and A.S. Dzurak. Nature Communications 4, 2069 (2013).
15. ^ "AlAs two-dimensional electrons in an antidot lattice: Electron pinball with elliptical Fermi contours". O. Gunawan, E. P. De Poortere, and M. Shayegan. Phys. Rev. B 75, 081304(R)(2007).
16. ^ "Spin valleytronics in silicene: Quantum spin Hall–quantum anomalous Hall insulators and single-valley semimetals". Motohiko Ezawa, Phys. Rev. B 87, 155415 (2013)
|
In Paradise: Music and Wine
In the rolling hills of Tuscany, within the province of Siena, is the town of Montalcino. Formerly fallen on hard economic times, Montalcino’s decline was reversed by the popularity of the town’s famous wine, Brunello di Montalcino, made from the sangiovese grapes grown in the area. The number of wine producers in this area grew from 11 in the 1960s to more than 200 today.
But one of the vineyards, Il Paradiso di Frassini, is decidedly different. The grapes are serenaded all day, every day, by classical music. The experiment is the brainchild of Giancarlo Cignozzi, the owner. In 1999, while he was practicing law in Milan, he discovered a crumbling estate for sale just south of the town of Montalcino. It was a humble place—with a house, farm buildings and vines all in disrepair. But it was near some of the top Brunello wineries, and Cignozzi fell in love with it.
Cignozzi farms the vineyards organically. But he is also a trained musician, and he was inspired to combine both passions—playing music and creating an organic vineyard. He asked himself, “if music can improve the lives of humans and animals, why not plants?” So he started playing Mozart in his vineyard. And he got results. According to his son, Ulisse, they divide the vineyard into 25 areas and monitor the quality of the grapes at harvest time. The plants near the music are more robust. The grapes that grow closer to the speakers have a higher sugar content.
It wasn’t long before the idea caught the attention of scientists. Stefano Mancuso, a plant neurobiologist from the University of Florence, who has been studying the vineyard since 2003, says, “It’s very difficult to say that plants like classical music…but they can perceive sounds and specific frequencies.” He speculates that Cignozzi’s vines may grow toward the speakers because the music frequencies resemble those of running water. He also believes that the sound reduces insect attacks dramatically. Music may confuse harmful bugs, making them unable to breed. The music may also scare away birds and other creatures that feed on grapes.
While Cignozzi is proud of the research, he is also an incurable romantic. He prefers the Mozart vision over the theory of vibrations. He’s been serenading his grapes for over a decade and stands by his decision to play Mozart. Mancuso says that they could play many other types of music—even heavy metal if it has enough bass. The vines are affected by low frequencies.
And how does the wine taste? According to my Santa Barbara friend, fellow Italian language student, and wine expert, Joel Garbarino, “Brunello is a great wine with hints of cherry, spices and mineral, and even mushrooms…and as my Italian friends say, ‘a touch of lead dust’”. Joel sent me this map, which shows the seven zones of wine producers around Montalcino. Paradiso di Frassini is in the Montalcino Nord zone.
Brunello was the first wine to be awarded the Denominazione di Origine Controllata e Garantita (DOCG) status. Brunello di Montalcino must be aged five years prior to release. But what about the Brunello wines from Paradiso di Frassini? According to the web site, there is a special wine called “The Magic Flute”: “This is a unique and inimitable Brunello. From 2008, thanks to Bose (the consumer electronics company), we were able to play music to one of our Brunello vineyards (comprising 1 hectare or 1.47 acres), using 50 loud-speakers, which we placed along each vine row. We then made sure that the Brunello grapes from this “Mozart vineyard” were fermented separately in our winery. We decided to leave the wine from these fortunate Brunello grapes in the barrel for 8 months longer than normal. This is how Paradiso di Frassini’s first “cru” was born. So immersed in musical harmony, we called it ‘The Magic Flute.’”
How Casting Lots Came to Be Written
Now, I would like to tell you how I came to write Casting Lots. I was in church listening to a sermon, when I felt compelled or called to write about how the Gospels came to be written. Having been called, how does one accomplish the task?
For years, I’ve been intrigued by early Christianity and by the Roman Empire. My love of Rome grew out of my second year of Latin class, where my professor would dazzle us by reciting the Gallic Wars of Julius Caesar from memory, in either Latin or English. His enthusiasm for Julius Caesar was infectious. I can still see him standing on his desk, lifting a miniature legion’s Eagle while quoting the passage about the centurion who leapt into the sea’s waves to lead Caesar’s forces in the invasion of Britain.
So a story about a centurion came to me naturally. Of course, there is a centurion described by St. Luke at the crucifixion of Jesus. What prompted that centurion to utter, “Surely, this was a righteous man”? I wanted to explore this question.
I also wanted my Centurion to be related to Caesar’s Centurion. To fit the timeline properly, Centurion Cornelius had to be the grandson of Caesar’s Centurion. It seemed to be a fitting way to honor my Latin teacher.
In Acts, St. Luke related the story of the Centurion who was converted by St. Peter. In my mind, it was clear that this was one and the same man: the Centurion present at the crucifixion, who later became the first Roman official to be converted to Christianity.
In later blogs, I will relate how other elements of the story of Casting Lots came about.
Monday, October 16, 2006
The most important part of being a normal weight isn't looking a certain way - it's feeling good and staying healthy. Having too much body fat can be harmful to the body in many ways.
The good news is that it's never too late to make changes in eating and exercise habits to control your weight, and those changes don't have to be as big as you might think. So if you or someone you know is obese or overweight, this article can give you information and tips for dealing with the problem by adopting a healthier lifestyle.
What Is Obesity?
Once the doctor has calculated a child's or teen's BMI (body mass index), he or she will plot this number on a specific chart to see how it compares to other people of the same age and gender. A person with a BMI above the 95th percentile (meaning the BMI is greater than that of 95% of people of the same age and gender) is generally considered overweight. A person with a BMI between the 85th and 95th percentiles typically is considered at risk for overweight. Obesity is the term used for extreme overweight. There are some exceptions to this formula, though. For instance, someone who is very muscular (like a bodybuilder) may have a high BMI without being obese because the excess weight is from extra muscle, not fat.
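The percentile cutoffs described above are easy to express in code. Here is a minimal sketch (the function name is hypothetical) that assumes the BMI-for-age percentile has already been looked up on the growth chart:

```python
# Sketch of the classification described above. Assumes the percentile
# has already been read off an age/gender growth chart; remember the
# muscular-bodybuilder caveat - BMI alone can mislead.

def classify_bmi_percentile(percentile):
    """Map a BMI-for-age percentile to the categories used above."""
    if percentile >= 95:
        return "overweight"               # above the 95th percentile
    elif percentile >= 85:
        return "at risk for overweight"   # 85th to 95th percentile
    else:
        return "healthy range"

print(classify_bmi_percentile(97))  # overweight
```

Note this is only a rough screening rule, not a diagnosis; the article's point is that a doctor interprets the number in context.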
What Causes Obesity?
Who Is at Risk for Becoming Obese?
The number of people who are obese is rising. About 1.2 billion people in the world are overweight and at least 300 million of them are obese, even though obesity is one of the 10 most preventable health risks, according to the World Health Organization. In the United States, more than 97 million adults - that's more than half - are overweight and almost one in five adults is obese. Among teenagers and kids 6 years and older, more than 15% are overweight - that's more than three times the number of young people who were overweight in the 1970s. At least 300,000 deaths every year in the United States can be linked to obesity.
In the United States, women are slightly more at risk for becoming obese than men. Race and ethnicity also can be factors - in adolescents, obesity is more common among Mexican Americans and African Americans.
How Can Obesity Affect Your Health?
Obesity is bad news for both body and mind. Not only does it make a person feel tired and uncomfortable, it can wear down joints and put extra stress on other parts of the body. When a person is carrying extra weight, it's harder to keep up with friends, play sports, or just walk between classes at school. It is also associated with breathing problems such as asthma and sleep apnea and problems with hips and knee joints that may require surgery.
There can be more serious consequences as well. Obesity in young people can cause illnesses that once were thought to be problems only for adults, such as hypertension (high blood pressure), high cholesterol levels, liver disease, and type 2 diabetes, a disease in which the body has trouble converting food to energy, resulting in high blood sugar levels. As they get older, people who are obese are more likely to develop heart disease, congestive heart failure, bladder problems, and, in women, problems with the reproductive system. Obesity also can lead to stroke, greater risk for certain cancers such as breast or colon cancer, and even death.
In addition to other potential problems, people who are obese are more likely to be depressed. That can start a vicious cycle: When people are overweight, they may feel sad or even angry and eat to make themselves feel better. Then they feel worse for eating again. And when someone's feeling depressed, that person is less likely to go out and exercise.
How Can You Avoid Becoming Overweight or Obese?
To stay active, try to exercise 30 to 60 minutes every day. Your exercise doesn't have to be hard core, either. Walking, swimming, and stretching are all good ways to burn calories and help you stay fit. Try these activities to get moving:
• Go outside for a walk.
• Take the stairs instead of the elevator.
• Walk or bike to places (such as school or a friend's house) instead of driving.
• Tackle those household chores, such as vacuuming, washing the car, or cleaning the bathroom - they all burn calories.
• Alternate activities so you don't get bored: Try running, biking, skating - the possibilities are endless.
• Go dancing - it can burn more than 300 calories an hour!
• Avoid fast-food restaurants. If you can't, try to pick healthier choices like grilled chicken or salads, and stick to regular servings - don't supersize!
• Eat a healthy breakfast every day.
• Pay attention to the portion sizes of what you eat.
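Calorie figures like the "300 calories an hour" for dancing can be ballparked with the widely used MET (metabolic equivalent) formula: kcal per minute is roughly MET × 3.5 × body weight in kg / 200. The MET values below are illustrative assumptions, not measured figures:

```python
# Rough calorie estimates for some of the activities listed above,
# using the standard MET formula. MET values here are assumptions
# chosen for illustration; published tables vary by intensity.

MET = {"walking": 3.5, "dancing": 5.0, "vacuuming": 3.3, "biking": 6.0}

def calories_burned(activity, minutes, weight_kg):
    """Estimated kcal burned: MET x 3.5 x weight(kg) / 200 per minute."""
    return MET[activity] * 3.5 * weight_kg / 200 * minutes

# A 60 kg person dancing for an hour: about 315 kcal, in line with
# the "more than 300 calories an hour" figure above.
print(round(calories_burned("dancing", 60, 60)))
```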
What Can You Do If You Are Overweight or Obese?
Before you start trying to lose weight, talk to a doctor, a parent, or a registered dietitian. With their help, you can come up with a safe plan, based on eating well and exercising. Remember that teenagers need to keep eating regularly. Don't starve yourself because you won't get the nutrients you need to grow and develop normally.
You may also want to keep a food and activity journal. Keep track of what you eat, when you exercise, and how you feel. Changes can take time, but seeing your progress in writing will help you stick to your plan. You might also want to consider attending a support group; check your local hospital or the health section of a newspaper for groups that meet near you. Above all, surround yourself with friends and family who will be there for you and help you tackle these important changes in your life.
Updated and reviewed by: Barbara P. Homeier, MD
Date reviewed: January 2005
Originally reviewed by: Sandra G. Hassink, MD
Source: Obesity
AskwikiTech Blog
The basics of Wi-Fi and WiMAX technology
Wireless Internet access is growing at a furious pace in developing economies such as India and China, across South America, and in many other parts of the world. The basic standard for this wireless technology is Wi-Fi. Wi-Fi is primarily used to create a Local Area Network (LAN), which allows users within the network to connect wirelessly. Its most common use is Internet connectivity, but Wi-Fi is also used for closed-circuit business networking and for connecting consumer electronics, such as TVs and DVD players.
Wi-Fi makes connecting to the Internet within a home or business cheap and easy. While Wi-Fi has proved largely successful at providing inexpensive wireless Internet service close to the access point, a newer technology, WiMAX, could expand both wireless reach and connection quality. WiMAX provides wireless reception over significantly greater distances and at higher broadband speeds, but the technology behind it is significantly different from Wi-Fi, as well as more costly. WiMAX is an acronym for Worldwide Interoperability for Microwave Access.
To Sum it up...
# WiMAX is a long-range (many kilometers) system that uses licensed or unlicensed spectrum to deliver a point-to-point connection to the Internet from an ISP to an end user. Different 802.16 standards provide different types of access, from mobile (analogous to access via a cell phone) to fixed (an alternative to wired access, where the end user's wireless termination point is fixed in location).
# Wi-Fi is a shorter-range system (range is typically measured in hundreds of meters) that uses unlicensed spectrum to provide access to a network, typically covering only the network operator's own property. Wi-Fi is usually used by an end user to access their own network, which may or may not be connected to the Internet. If WiMAX provides services analogous to a cell phone, Wi-Fi is more analogous to a cordless phone.
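One way to get a feel for the range gap is free-space path loss, which grows with both distance and frequency: FSPL(dB) = 20·log10(d in km) + 20·log10(f in MHz) + 32.44. The frequencies and distances in this sketch are typical assumptions (2.4 GHz Wi-Fi over 100 m; 3.5 GHz WiMAX over 10 km), not figures from the standards themselves:

```python
# Back-of-the-envelope comparison of Wi-Fi vs. WiMAX link budgets
# using the free-space path loss formula:
#   FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB for a given distance and frequency."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

wifi = fspl_db(0.1, 2400)    # assumed: Wi-Fi, 100 m at 2.4 GHz
wimax = fspl_db(10, 3500)    # assumed: WiMAX, 10 km at 3.5 GHz
print(f"Wi-Fi  (100 m, 2.4 GHz): {wifi:.1f} dB")
print(f"WiMAX (10 km, 3.5 GHz): {wimax:.1f} dB")
```

The WiMAX link in this example must overcome roughly 40 dB more path loss, which hints at why it needs licensed spectrum, higher transmit power, and more expensive radios than a home Wi-Fi router.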
Monday, July 14, 2008
Planet: A self-gravitating body that is nearly round
The other day I was looking at the Sloan Digital Sky Survey (SDSS) database table on photometric classification. I needed to recall which type code was a star and which was a galaxy. So I was interested to see the following two listings buried deep in the SDSS documentation:
Galaxy: An extended object composed of many stars and other matter.
Star: A self-luminous gaseous celestial body.
I guess I was just surprised at how pedantic it was.
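For anyone who, like the author, keeps forgetting which code is which: in the SDSS photometric tables the `type` column encodes the classification (if memory serves, 3 is GALAXY and 6 is STAR; treat these values as an assumption to verify against the SDSS schema browser). A tiny sketch:

```python
# Lookup for SDSS photometric 'type' codes, paired with the survey's
# own (pedantic) definitions. The code values are assumptions to be
# checked against the SDSS schema documentation.

SDSS_TYPE = {
    3: "Galaxy: an extended object composed of many stars and other matter",
    6: "Star: a self-luminous gaseous celestial body",
}

def describe(type_code):
    """Return the SDSS-style definition for a photometric type code."""
    return SDSS_TYPE.get(type_code, "unknown photometric type")

print(describe(6))
```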
Kayhan Gultekin said...
Looks like a brown dwarf is a star by their definition.
Becky said...
I guess they don't care why it's self-luminous. That makes life so much simpler.
Are planetary nebulae stars? Are globular clusters just many stars? This would make teaching intro astro so much easier.
Sarah said...
Guess that's solved. Next problem!