global_05_local_4_shard_00000656_processed.jsonl/83704
|
Grampian Hills
(used with a plural verb) a range of low mountains in central Scotland, separating the Highlands from the Lowlands. Highest peak, Ben Nevis, 4406 feet (1343 meters).
Also called Grampian Hills.
|
global_05_local_4_shard_00000656_processed.jsonl/83706
|
belonging or pertaining to the Lepidoptera, an order of insects comprising the butterflies, moths, and skippers, that in the adult state have four membranous wings more or less covered with small scales.
Also, lepidopteral.
1790–1800; Lepidopter(a) + -ous
nonlepidopteral, adjective
nonlepidopterous, adjective
Example sentences
They may be reared by collecting lepidopterous and dipterous pupae.
In addition, there is a potential direct risk to non-target lepidopterous species, including endangered species.
Fossil lepidopterous leaf mines demonstrate the age of some insect-plant
The use sites are all agricultural for use on growing plants to control lepidopterous pests.
|
global_05_local_4_shard_00000656_processed.jsonl/83709
|
What causes malabsorption?
The causes of malabsorption include cystic fibrosis (from lack of pancreatic enzymes to digest food), lactose intolerance, celiac disease (gluten-induced enteropathy, sprue), Whipple disease, acrodermatitis enteropathica (zinc malabsorption), biliary atresia, pernicious anemia, and the parasites Giardia lamblia (giardiasis), Strongyloides stercoralis (threadworm), and Necator americanus (hookworm).
Protein, fats, and carbohydrates (macronutrients) normally are absorbed in the small intestine; the small bowel also absorbs about 80% of the eight to ten liters of fluid ingested daily. There are many different conditions that affect fluid and nutrient absorption by the intestine. A fault in the digestive process may result from failure of the body to produce the enzymes needed to digest certain foods. Congenital structural defects or diseases of the pancreas, gall bladder, or liver may alter the digestive process. Inflammation, infection, injury, or surgical removal of portions of the intestine may also result in absorption problems; reduced length or surface area of intestine available for fluid and nutrient absorption can result in malabsorption. Radiation therapy may injure the mucosal lining of the intestine, resulting in diarrhea that may not become evident until several years later. The use of some antibiotics can also affect the bacteria that normally live in the intestine and affect intestinal function.
More information on malabsorption (celiac disease, lactose intolerance, Whipple's disease)
What is malabsorption? - Malabsorption is the inability to absorb nutrients through the gut lining into the bloodstream. Malabsorption is the failure of the GI tract to absorb one or more substances from the diet.
What causes malabsorption? - The causes of malabsorption include cystic fibrosis, lactose intolerance, celiac disease, Whipple disease, acrodermatitis enteropathica, biliary atresia, and pernicious anemia.
What are the symptoms of malabsorption? - The signs and symptoms of malabsorption may include failure to thrive, diarrhea, cramping, frequent bulky stools, bloating, flatulence, and abdominal distention.
How is malabsorption diagnosed? - The diagnosis of malabsorption syndrome and identification of the underlying cause can require extensive diagnostic testing.
What's the treatment for malabsorption? - Treatment of malabsorption is treatment of the underlying disease. Fluid and nutrient monitoring and replacement are essential for individuals with malabsorption syndrome.
What is celiac disease? - Celiac disease is a sensitivity to gluten, a wheat protein. Individuals with this disease must avoid gluten-containing grains, which include all forms of wheat, oats, barley, and rye.
What is gluten? - Gluten is a protein in grains such as wheat, oats, rye, and barley. Gluten is responsible for the elasticity of kneaded dough which allows it to be leavened.
What causes celiac disease? - The exact cause of celiac disease is not known. The principal cause of the disorder is an immunologic reaction to components of certain dietary glutens.
What are the symptoms of celiac disease? - The symptoms of celiac disease (CD) vary so widely among patients that there is no such thing as a typical celiac. Symptoms may or may not occur in the digestive system.
How is celiac disease diagnosed? - Celiac disease may be diagnosed by observing the symptoms after an infant begins eating cereals. The diagnosis is suspected when a person has the above-mentioned symptoms.
What are the treatments for celiac disease? - Many of the effects of celiac disease can be treated and minimized with a special diet. People with celiac disease learn to avoid the proteins in cereal.
Celiac disease and gluten-free diet - A gluten-free diet is a diet completely free of ingredients derived from gluten-containing cereals: wheat, oats, barley, and rye.
What is lactose intolerance? - Lactose intolerance is a set of symptoms resulting from the body's inability to digest the milk sugar called lactose. Lactose is a sugar occurring naturally in milk and is also called milk sugar.
What causes lactose intolerance? - Primary lactase deficiency is genetically inherited. Secondary lactase deficiency is a transient state of lactase deficiency due to damage to the lining of the intestine.
What are the symptoms of lactose intolerance? - People with lactose intolerance usually cannot tolerate milk and other dairy products. The symptoms of lactose intolerance are dose-dependent.
How is lactose intolerance diagnosed? - Lactose intolerance is widely regarded as a medical condition. The most common test for lactose intolerance is the hydrogen breath test.
What's the treatment for lactose intolerance? - Lactose intolerance can be controlled and treated through diet by avoiding foods containing lactose, primarily dairy products.
Manage lactose intolerance with lactose-free diet - People who are very sensitive to lactose should be aware that lactose is widely used as an ingredient in many ready-made meals and other food products.
What is Whipple's disease? - Whipple's disease is a malabsorption disease. It interferes with the body's ability to absorb certain nutrients.
What causes Whipple's disease? - Whipple's disease is caused by the organism Tropheryma whippelii. The disease causes lesions on the wall of the small intestine and thickening of the tissue.
What are the symptoms of Whipple's disease? - Whipple's disease causes weight loss, irregular breakdown of carbohydrates and fats, resistance to insulin, and malfunctions of the immune system.
How is Whipple's disease diagnosed? - Whipple's disease is diagnosed through a tissue sample (biopsy) of the small intestine, or of an enlarged lymph node.
What is the treatment for Whipple's disease? - Whipple's disease is treated with antibiotics to destroy the bacteria that cause the disease; treatment may also include fluid and electrolyte replacement.
|
global_05_local_4_shard_00000656_processed.jsonl/83732
|
System requirements: BlackBerry Administration API development computers
Windows® operating system
This is the supported operating system.
Microsoft Visual Studio 2005 or 2008
This is the IDE and supporting frameworks that you use to develop and build your application.
|
global_05_local_4_shard_00000656_processed.jsonl/83735
|
This illustration shows a Data Guard configuration during a switchover operation. The San Francisco database (originally the primary database) has changed to the standby role, but the Boston database has not yet changed to the primary role. At this point in time, both the San Francisco and Boston databases are operating in the standby role. Applications that were previously sending read/write transactions to the San Francisco database are preparing to send read/write transactions to the Boston database. On the Boston standby database, the standby database online redo logs and local archived redo logs are still being generated. However, no redo logs are being sent or received over the Oracle Net network. Both of the standby databases are capable of operating in read-only mode.
|
global_05_local_4_shard_00000656_processed.jsonl/83753
|
Cocktail 101
All the basics of the bar.
3 Great New(ish) Cocktail Bitters
Then about eight years ago, the bartender and booze writer Gary Regan formulated the newest and greatest recipe of his orange bitters, sensing a need in the marketplace, and so it came to pass that Regan's Orange Bitters No. 6 became available to bartenders and cocktail nerds.
These days, you kids are spoiled for choice. I decided one day to count the number of upstart companies producing bitters, and I had to stop when I got to 30 because I can't count much higher than that.
So I'm going to start an occasional series here, looking at the world of craft bitters. Each go-round, I'll look at three new(ish) companies that are making bitters, talk about what makes each company unique, and describe the bitters they're brewing up.
But first, perhaps, a brief primer on bitters is in order. For a fuller description of what bitters are, check out my two-parter on bitters, here and here.
What the Heck Are Bitters?
Cocktail bitters are, as I like to say, the spice of the cocktail world. I think this description is apt for two reasons. First, they serve the same role in cocktails that spices do in food: they complement the main ingredient while adding nuances of flavor and complexity. Second, cocktail bitters are quite literally made of spices: roots such as gentian, barks such as cinnamon or angostura, seeds or pods such as cardamom, fruit peels, and so on. If you can cook with a spice, you can probably make bitters with it as well.
Let's get to the bitters, then, shall we?
Hella Bitter
Photo: Hella Bitter
Based in Brooklyn and Queens, Hella Bitter started with a Kickstarter campaign, which enabled the founders to move bitters production out of an apartment kitchen and into a commercial space in an industrial neighborhood of Queens. Hella offers just two products at present, an aromatic and a citrus. The latter features five types of citrus peel, plus ginger and other spices. The aromatic is on the model of Angostura, but emphasizes the use of baking spices such as cinnamon. Hella uses no artificial colorings or flavorings in its products.
I love their dedication to the classic forms. After all, aromatic and citrus were the first bitters available, and it makes sense to focus on those classics. Having tasted both, it's hard to pick a favorite, but I'm inclined toward the citrus. Most bitters makers are doing orange bitters, if they're doing anything citrusy at all, and I find that Hella's citrus has a complexity that a lot of orange bitters lack.
Hella's bitters are available at Whole Foods, West Elm Market, and Boston Shaker, or online at their website or Amazon.
Scrappy's Bitters
Photo: Scrappy's Bitters
Started by a bartender in Seattle, Scrappy's is one of the oldest of the upstart bitters producers. Scrappy's ingredients are almost entirely organic, and they include no artificial colorings or flavorings. Eight flavors are available: lavender, grapefruit, orange, cardamom, chocolate, celery, lime, and aromatic.
Some bitters makers hew to the classics, like the aforementioned Hella. Others are more esoteric. Scrappy's is in the middle: aromatic, orange, and celery are classic bitters, and Scrappy's does them well. Of the esoteric flavors, I particularly like the cardamom. Try it in a martini or a gin and tonic. You'll be amazed.
Scrappy's Bitters are available at Cocktail Kingdom, Boston Shaker, Cask, Keg Works, and Anthropologie, as well as on Amazon.
Urban Moonshine
Photo: Urban Moonshine
Hailing from the Northeast Kingdom of Vermont, Urban Moonshine is unique among bitters producers. They're herbalists. Their organic bitters and herbal tonics are intended to be used in the way bitters were first used, as digestive bitters. However, since they contain the traditional array of ingredients you find in cocktail bitters, they're suitable for mixing, as well. Urban offers three flavors: original (akin to an aromatic bitters), citrus, and maple. The bitters come in dropper bottles and, uniquely, spray bottles.
I find that a mist of maple or citrus bitters sprayed onto the surface of a cocktail is a lovely touch. Something else I love to do is to make an Old Fashioned or a Manhattan, and then just before I stir the cocktail, add a couple of drops of maple bitters. The maple complements those drinks very well.
Urban Moonshine's bitters are available at natural food markets, Kalustyan's, Boston Shaker, and online at the Urban Moonshine site as well as on Amazon.
Your Turn, You Bittered Sling, You
So, what's your favorite bitters company?
|
global_05_local_4_shard_00000656_processed.jsonl/83754
|
A Beginner's Guide to Spanish Wine
Essential info on Spanish wine regions and grapes. [Photograph: Nicole Lerner]
When I asked my mother recently what kind of Spanish wine she enjoyed, she enthusiastically exclaimed, "sangria!" Of course, Spain has much more to offer in wine than just that tasty pitcher drink. You can find so many great values in Spanish wine—delicious (and cheap!) bottles for any night of the week. But you will also be rewarded if you decide to spend a little more and explore the classic wines of Spain. If you mostly drink wines from the New World—say, South America, California, or Australia—lush Spanish wines are a great introduction to the Old World.
Facing a new section of your local wine store can be daunting. Today, we'll help you get to know some major Spanish wine regions and grapes so you can confidently choose a few bottles to try.
What You'll See On the Bottle
[Photograph: Imamon on Flickr]
One of the things that makes Spanish wine special is that many Spanish wineries age the wine for you, in oak barrels and in the bottle. This means you get a chance to taste cellared wines that have aged to the point of tasting their best without investing in storage space at home. When you look at a Spanish wine and see the terms Joven, Crianza, Reserva, or Gran Reserva, they're telling you about how long the aging was: those Gran Reservas have been cellared the longest, and a bottle with 'Joven' on the label didn't spend nearly as much time resting at the winery.
Because Spain is part of the European Union, the wine labeling system is pretty similar to those of France and Italy. The category you will most often see at your local shop is Denominación de Origen (DO), which is the equivalent of an Appellation d'Origine Contrôlée (AOC) in France. Each individual DO (for example, Ribera del Duero or Rías Baixas) has its own rules for the wines, such as which grapes can be planted. If for some reason you can't find the DO on the bottle, the "logo" of the DO should be on a sticker on the back or on the capsule over the cork.
The top of the Spanish wine quality pyramid is Denominación de Origen Calificada (it has several abbreviations because of regional dialects: DOCa, DOC or DOQ). There are only two DOCs: Rioja and Priorat. Spain also has a unique category, called DO Pago, which is for single estates.
When you're looking at bottles of Spanish wine, you'll often see the primary grape front and center on the label, or otherwise, on the back. One thing you will notice is that because of regional language differences, sometimes grapes or areas may look just a little different. Garnacha in Catalonia, for example, will appear as Garnatxa.
Weather Shapes the Wine
vineyard in spain
[Photograph: Randi Hausken on Flickr]
Since Spain is a peninsula, the climate varies widely from region to region. Most of central Spain sizzles under the summer sun and gets very cold in the winter. In the northwestern part, called Galicia, the cool ocean breezes and many rivers lead to the moniker "Green Spain." In the south, the brutal, arid land and howling winds can prove too much for most grapes. The Mediterranean to the east contributes warm temperatures and cooling breezes, while the Pyrenees on the border with France block rain clouds from making their way to the north central area.
Ready to start drinking?
[Photograph: Robyn Lee]
Cava is the famous sparkling wine of Spain. You'll mostly find Cava production in Catalonia in the northeast by Barcelona. Cava goes through the traditional method of secondary fermentation in the bottle to get its bubbles—like Champagne in France and Franciacorta in Italy. Cava can be white or rosé and is usually a blend of Xarel-lo, Macabéo, and Parellada grapes, but a few other varieties are also allowed in the blend. Because of extended aging with the spent yeast, most Cavas have a richness that complements crisp appley flavors. Cavas are usually dry, but like with Champagne, the amount of sugar from the dosage will be indicated on the label with such terms as Brut or Semi-Seco. If you're looking for not-too-pricey sparkling wine for a special occasion (or a weeknight dinner), Cava can be a great choice.
Spanish White Wines
Fresh and Salty
Txakoli has a bit of light fizz. [Photograph: Jon Oropeza]
On the Northern coast of Spain near San Sebastian is Basque country. This is where you will find Txakoli (pronounced CHALK-oh-lee), a citrusy wine with low alcohol and some spritz made from the Hondarribi Zuri grape. Ameztoi and Txomin Etxaniz are two producers that are easy to find, but many more have been imported into the US recently and you should be able to find this perfect sunny afternoon sipper wherever you live. The area makes a tiny bit of red wine from the Hondarribi Beltza grape, which also allows them to make rosé. Txakoli rosé is truly one of the great joys in life. It is fun and fresh and tastes like salted watermelon.
On the western coast, north of Portugal, lies Rías Baixas. The star of this area is Albariño, with Loureira and Treixadura being the backup dancers. True to its coastal nature, you can find a briny, ocean touch to this wine, which also has hints of white flowers and stone fruit. Take a hint from the locals and enjoy a glass with seafood. A big bowl of steamed mussels, perhaps?
Rich and Textured
The tiny region of Valdeorras, just a few hours inland from Rías Baixas, makes several styles of wine. Start with the white wines, based on the Godello grape. Godello combines lemon and cantaloupe flavors with a crisp minerality. These wines have enough body to carry you through a meal from a braised octopus appetizer to roasted halibut.
Southeast of Valdeorras is Rueda, which sits on the Duero River in the Castilla y León region. A small amount of red wine is made, but the true gems are white wines made from Verdejo. If the wine is mostly Verdejo, it will say 'Rueda Verdejo' on the bottle. Otherwise, it likely has a significant portion of Viura and Sauvignon Blanc blended with it. The wines are wonderfully aromatic, reminiscent of meyer lemon and almond.
While also planted around Galicia and in Catalonia for use in Cava (under the name Macabéo), Viura is famously known as the white grape of Rioja. It can be bottled on its own or blended with other grapes, such as Garnacha Blanca or even Chardonnay. Lopez de Heredia, one of the greatest wineries in Spain, makes an aged Viura called 'Viña Gravonia' that really is in a class by itself. They cellar it in American oak barrels for years and then it doesn't hit shelves until nearly a decade after the grapes were picked. It is tannic, full-bodied and has an amazingly complex aroma of bruised apple, curry, and coconut. Not all white Rioja is made this way, though. Many that you will find, especially if they are young, will be fresh but still full-bodied, with waxy apple and pear flavors.
Spanish Red Wines
The cellars at Muga in Rioja. [Photograph: Bodegas Muga]
If you've started exploring Spanish wine, you've likely had a bottle or two of Tempranillo. Tempranillo is the most planted red grape in Spain, and it appears under a few names, including Tinto Fino, Tinto de Toro, Cencibel, Ull de Llebre, and Tinto del Pais. The two most famous regions for Tempranillo are Rioja and Ribera del Duero.
Rioja is in north-central Spain on the Ebro River. Wines of Rioja are a great blend of ripe fruit and earthy flavors—they have one foot in the New World and one foot in the Old World. In Rioja, Tempranillo grapes can be blended with Mazuelo, Graciano, Garnacha, and Maturana Tinta. The law also leaves a little room for winemakers to add non-traditional grapes like Cabernet Sauvignon in small proportions. Classic examples will combine ripe plum and dried prune flavors with hints of leather and sweet-and-sour sauce.
Rioja went above and beyond Spanish laws and added some time to their minimum aging requirements. And often, winemakers allow the wines to age for years beyond what is required by Rioja. For red wines, Crianzas are aged at least 2 years total (including 1 year in oak barrels.) Reserva wines are aged at least 3 years total, including 1 year in barrels. Gran Reservas spend at least 2 years in barrels and then three more years in bottles before they're sold.
You might hear people calling wines from Rioja either 'traditional' or 'modern' in style. What does this mean? 'Traditional' wines of Rioja are aged in American oak barrels, which impart hints of coconut and dill to the wine. 'Modern' winemakers tend to use French oak barrels, which add a little vanilla and baking spice flavor. While some winemakers are squarely in one camp or another, many use methods that are somewhere in between. You might find some wines that have been aged in a mixture of American or French oak barrels or even in barrels that are themselves made of both types of oak.
Want to try some great Rioja? Producers to seek out include Muga, Lopez de Heredia, and CVNE.
Ribera del Duero
Ribera del Duero is the other Spanish wine region known for top-quality Tempranillo, and here, the wines are usually entirely Tempranillo, rather than a blend. Like Rioja, most wine labels from Ribera del Duero will let you know how long the wine has been aged by using the terms Crianza, Reserva, and Gran Reserva on the labels. The winemaker's use of oak has a major influence on the finished wine here, too. While you'll see mostly American oak in traditional Rioja bottlings, winemakers in Ribera del Duero often opt for more French oak, so you're more likely to taste vanilla, cinnamon, and clove. Overall, Ribera del Duero is more opulent and polished than the rustic, earthy Rioja. I think of Ribera del Duero as my shiny black pumps and Rioja as my best-fitting pair of soft leather loafers.
Tempranillo isn't just limited to Rioja and Ribera del Duero, though. It's grown across the country, and regions such as La Mancha and Valdepeñas offer affordable versions that are lightly oaked and ready to drink right away.
Steep vineyards in Priorat [Photograph: Agricultura Generalitat de Catalunya]
Wines from Priorat are intense and muscular. If you love sun-kissed, full bodied California wines but are looking for an earthier touch, this is a great region to explore. Many of the vineyards in Priorat are so steep they necessitate building terraces—it's like making the hill into a large staircase with rows of vines on each step. Priorat's unique slate soil—called llicorella—looks like broken chalkboard strewn around the hillside. This rough terrain requires vines to dig deep in the earth in search of water and nutrients.
Most of Priorat's red wines are made from a blend of Garnacha and Cariñena with Cabernet Sauvignon, Syrah, and others. Alvaro Palacios was a pioneer in this region and while prices of Priorat in general have skyrocketed over the years, his "Camins del Priorat" bottling is still one of the best values around.
If you're curious about wines like this, but can't swing the price tag, try seeking out wines from Montsant, a region that is like a horseshoe around Priorat. The wines are full-bodied with intense red and black fruit, dried tobacco, and earth.
More Red Wine Values in Spain
If you want to try Spanish wine on a budget, it's worth getting friendly with a few more grapes beyond Tempranillo.
I've already mentioned Garnacha a few times—it appears as part of the blend in Priorat and in Rioja. Known as Grenache in France, this is the third most planted grape in Spain. Garnacha thrives in warm climates, especially in the north-central part of Spain. It is often used to make rosé, but can also make wonderfully ripe, cherry-fruited weeknight wines, such as Borsao's 'Tres Picos' from Campo de Borja.
Monastrell, the Spanish name for Southern France's Mourvèdre, can be found across southern Spain. It needs a lot of sunshine to ripen; it definitely finds that warmth on the sunny Mediterranean coast near Valencia. Often the wines will be full-bodied with aromas of ripe, juicy red fruit, pepper, and meat.
The grape Mencía makes medium- to full-bodied wines with hints of blackberry, anise, and a distinct herbal aroma that often reminds me of Cabernet Franc. While the grape is grown throughout Galicia and northwestern Spain, Bierzo is a good region to seek out.
Sherry and olives are delicious together. [Photograph: Krista on Flickr]
The first time I ever stuck my nose in a glass of sherry was in a wine class. All the students popped their heads up and looked around. Was there something wrong with it? We had never smelled anything like that stuff. The weird wine in the glass was Fino Sherry...and not only was it not flawed, but it was totally delicious. Sherry captures you with its intense aromatics and electrifying acidity.
Most sherry is a fortified wine that goes through a solera, a system of blending where wines from different years are mixed into each other over time. In some sherry barrels, a layer of yeast called flor will form over the top of the wine, protecting it from oxygen while imparting a distinct flavor. The freshest styles are Fino and Manzanilla (that second one is a Fino sherry made in the town of Sanlúcar de Barrameda.) If these styles are exposed to oxygen later on in their aging, combining the taste of flor with nutty, oxidative characteristics, they become the Amontillado and Palo Cortado styles. Oloroso sherry is made without flor to protect the wine from oxygen. This gives the wine rich walnut and toffee notes. (Want to read more about sherry? We have a whole guide here.)
Dry sherries can be such a surprisingly perfect pairing for food. A glass of Manzanilla with almonds and boquerones is classic and delicious. A bottle of Palo Cortado with a crispy-skinned roast chicken will blow you away.
Time for dessert? Sweet styles of sherry, such as Pedro Ximénez and Pale Cream sherries, can be a rich, syrupy delight. They go perfectly with ice cream or chocolate cake, or served as a sweet counterpoint to a cheese plate.
|
global_05_local_4_shard_00000656_processed.jsonl/83765
|
I am so happy to give you the possibility to play the latest version of the Earn to Die game. Earn to Die part 2 is a continuation of the Earn to Die saga. It already has thousands of players who enjoy killing zombies, just as they did in the previous versions. I had played Earn to Die 2 before the new version was released, and the new one has a lot of new opportunities and new graphic motion. I am a big fan of part 2, because this version has many more features and details to install. Also there are new cars, which have more strength and can kill more zombies than the old version's cars could.
The main goal, of course, is the same. You should choose one of the cars; at first you can choose only a small car, and after some tries you will get money to add new features to the car and to fill the tank with fuel. This gives you an opportunity to kill more zombies than you could before. When you gather enough money to buy a new type of car, you can do it and improve your chances of achieving your aim: kill all the zombies, go through all the levels, and finish the game.
As I mentioned above, Earn to Die part 2 is the newest and, in my opinion, the best version of the Earn to Die game. So let's start the game, do your best, buy new cars, install awesome features, and destroy hordes of zombies.
Come on guys and good luck.
|
global_05_local_4_shard_00000656_processed.jsonl/83766
|
Earthquake Glossary - G or g
• G or g
g is the acceleration of gravity, 9.8 m/s², or the strength of the gravitational field in N/kg (which it turns out is equivalent).
When acceleration acts on a physical body, the body experiences the acceleration as a force. The force we are most experienced with is the force of gravity, which causes us to have weight.
The equation for the force of gravity is F = mg at the surface of the earth, or F = GMm/r² at a distance r from the center of the earth (where r is greater than the radius of the earth). G is the proportionality constant 6.67×10⁻¹¹ N·m²/kg² in Newton's law of gravity.
When there is an earthquake, the forces caused by the shaking can be measured as a percentage of gravity, or percent g.
For example: The shaking at a particular location is measured as an acceleration of 11 feet per second per second, or 11 × 12 × 2.54 cm/sec/sec = 335 cm/sec/sec. The acceleration due to gravity is 980 cm/sec/sec, so the measured shaking is 335/980, or 0.34 g. As a percentage, this is 34% g.
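As a minimal sketch of the same arithmetic (a standalone C++ example added for illustration, not part of the glossary; the 335 cm/sec/sec figure is taken from the example above):

    #include <iostream>

    int main() {
        const double shaking_cm_s2 = 335.0;  // measured shaking, cm/sec/sec (about 11 ft/sec/sec)
        const double g_cm_s2 = 980.0;        // acceleration due to gravity, cm/sec/sec

        const double fraction_of_g = shaking_cm_s2 / g_cm_s2;
        std::cout << fraction_of_g << " g, or "
                  << fraction_of_g * 100.0 << "% g\n";  // prints about 0.34 g, or 34% g
    }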
|
global_05_local_4_shard_00000656_processed.jsonl/83778
|
Bryan Caplan
My Two Favorite Graphs From Coming Apart
I have a predictably optimistic take on Charles Murray's Coming Apart. But these two graphs did indeed shock me. The first contrasts divorce rates for working class ("Fishtown") and professional ("Belmont") whites:
Notice: Among professionals, divorce plateaued over three decades ago at roughly 8%. Working class divorce rates started higher, rose more quickly, and never stopped rising.
Murray's second shocking graph shows the fraction of working class and professional whites who say they're happily married:
Professionals have always been more likely to be happily married, and both groups saw a decline. But for professionals, happiness bottomed out in the mid-'90s, then rebounded. For the working class, again, there's been a linear decline, leaving only a quarter happily married.
Still, as Kahneman reminds us, "Nothing in life is as important as you think it is when you're thinking about it." If you double-check in the GSS, you'll find that overall happiness has been virtually constant since the survey began in 1972. On a 3-point scale, happiness has decreased by .001 per year. Current trends could continue for a century before we'd see a tenth of a point decline in average happiness. So quit yer mopin'.
COMMENTS (5 to date)
MS writes:
How much of this is explained by professional whites marrying and having children later in life, whereas the working class still marries and becomes parents relatively early (in line with the dreams of a life à la 1950 among social conservatives)?
In my (European) capital, mean age at first childbirth among professional women has increased significantly during the last 20-30 years (and is now well above 30).
Badger writes:
MS makes a very good point. We're talking about people that marry at different moments in their lives, and their life dynamics have changed unequally across the decades. I know people that had their Fishtown moment earlier in their lives and now live a Belmont moment. It's hard to make inferences based on measurements that are so aggregated, broad, and open-ended. You'd need real panel data for reliable conclusions.
The way it's been presented, it's nothing more than wild speculation.
Chris Koresko writes:
These data are indeed interesting. They appear to be showing a bifurcation of American social structure along lines that more or less match the bifurcation in family incomes.
I propose as an hypothesis that these effects have a common root cause, namely the creation of a poverty trap via the implementation of the Great Society programs which came into effect at the end of the 1960s. Although there is an unfortunate gap in the data between around 1961 and 1970, it looks as if there was a positive inflection in the divorce rate in "Fishtown" right around that time. That might be explained by the buffering of the impact of divorce on women and children as social subsidies became more important.
This also seems to correlate with a decline in the industriousness of working-class men, and with the end of a steady decline in the poverty rate (which has been bobbling around 5% since 1970, with the bobbles being strongly anticorrelated with GDP growth).
So the narrative is that the Great Society disincentivized virtuous behavior (hard work, promise keeping) for the working class by increasing their effective marginal tax rate to around 1, while the effect on people with upper-middle class and higher incomes is smaller.
Bryan Willman writes:
There's a trap, and an "it's even worse than it looks" aspect.
The trap is that "happiness" is a kind of regulatory mechanism, and barring something like war, is probably relatively constant. In short, "happiness" is NOT a substitute for "well being" or even "quality of life". That matters because changes in "happiness" over time aren't necessarily very informative (either way).
The "it's even worse" bit is that I know (many of us do) a fair number of couples in "Belmont" who are not married, but are as stable or more stable than some large number of middle-class married couples, let alone Fishtown couples. When 2 people who made millions working in the software industry cohabitate and raise children, it's really a very, very different thing from when 2 poor people shack up. But they will show up in the statistics as "unmarried" (and sometimes as "never married").
Tom writes:
Christian Religion and a focus on morality, as was the case in the 30s-early 60s, resulted in a better social situation for the poor.
Comparing the negatives of a society promoting Christian belief with those of a society specifically NOT promoting Christian or other religious beliefs, the data seem to show the non-religious negatives as much higher.
I think believers make fewer economically mistaken decisions.
|
global_05_local_4_shard_00000656_processed.jsonl/83783
|
Knot On the Back of the Neck
I have this knot on the back of my neck, right under my hairline and centered on my spine. It causes me pain in the neck, shoulders, and back. It's about the size of a baseball. Can you help me? I'm only 18, and it has been there since I was 12.
replied November 27th, 2007
Has it changed in its appearance for that period of six years?
When did you start to feel pain for the first time?
|
global_05_local_4_shard_00000656_processed.jsonl/83790
|
George W. Bush
|
global_05_local_4_shard_00000656_processed.jsonl/83791
|
Mention Bjarne Stroustrup's (Fig. 1) name to programmers, and they think of C++. That's not surprising since he came up with the object-oriented programming language and continues to be involved in the C++ standard, the latest being C++11.
Stroustrup has master’s degrees in mathematics and computer science from Aarhus University in Denmark. His PhD in computer science is from the University of Cambridge. He has also taught and written about C++ with books like Programming: Principles and Practice Using C++. So how did he get started designing C++?
“I needed a tool to help me on a project where I needed hardware access, high performance for systems programming tasks, and help with handling complexity. The project was to ‘split’ a Unix kernel into parts that could run on a multi-processor or a high-performance local network,” Stroustrup said.
“At the time (1979/80), no language could meet all three requirements, so I added Simula-like classes to C. The earliest designs added function argument checking and conversion (what later became C function prototypes), constructors and destructors, and simple inheritance,” he said.
"My earliest paper on 'C with Classes,' as it was called in the early years, used macros to implement a simple form of generic programming. Later I found that didn't scale and I had to add templates," he added (see "Interview: Bjarne Stroustrup Discusses C++").
C++ started in 1979 when Bjarne was working on his PhD thesis. “The C++ Programming Language” was published in 1985. In 1998, the C++ standards committee published the first standard for C++ ISO/IEC 14882:1998. Now known as C++98, most C++ compilers support it. C++03 and C++11 then followed. The next revision will be C++14.
C++ shares quite a bit with C, but it was not a proper superset. C11 and C++11 are closer and share most of C's enhancements. Lambdas, or anonymous functions, are part of C++11. C++ also supports namespaces for grouping of entities like classes, objects, and functions.
Namespaces can be mixed together without conflicts that could be encountered with C when mixing libraries. The syntax of C++ is based on C. However, it adds quite a bit more including features like operators, operator overloading, templates, and, of course, object classes.
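As a minimal sketch of both ideas (my own illustration, not code from the article; the geometry namespace and Vec2 type are invented for the example):

    #include <iostream>

    namespace geometry {                    // a namespace groups related declarations
        struct Vec2 { double x, y; };
        Vec2 operator+(Vec2 a, Vec2 b) {    // operator overloading: '+' defined for a user-defined type
            return {a.x + b.x, a.y + b.y};
        }
    }

    int main() {
        geometry::Vec2 a{1, 2}, b{3, 4};
        geometry::Vec2 c = a + b;           // the overloaded operator is found via the arguments' namespace
        std::cout << c.x << ", " << c.y << '\n';   // prints 4, 6
    }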
C++ supports static and dynamic polymorphism. It can also handle single-object and multiple-object inheritance. Java provides single inheritance support but allows multiple interfaces per class. Virtual functions enable C++ to provide dynamic polymorphism. Objects can have constructor and destructor functions. C++ memory management includes static, automatic, and dynamic memory allocation. Libraries can support garbage collection.
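A minimal sketch of dynamic polymorphism through virtual functions (an invented example, not from the article; it needs C++14 for std::make_unique):

    #include <iostream>
    #include <memory>
    #include <vector>

    struct Shape {
        virtual double area() const = 0;    // virtual function: the call is resolved at run time
        virtual ~Shape() = default;         // virtual destructor so deletion through Shape* is safe
    };

    struct Circle : Shape {
        explicit Circle(double r) : radius(r) {}
        double area() const override { return 3.14159265 * radius * radius; }
        double radius;
    };

    struct Square : Shape {
        explicit Square(double s) : side(s) {}
        double area() const override { return side * side; }
        double side;
    };

    int main() {
        std::vector<std::unique_ptr<Shape>> shapes;
        shapes.push_back(std::make_unique<Circle>(1.0));
        shapes.push_back(std::make_unique<Square>(2.0));
        for (const auto& s : shapes)
            std::cout << s->area() << '\n'; // dispatches to Circle::area or Square::area
    }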
Templates provide generic function support via parameterized types. Class and function templates are supported. Templates allow classes and functions to be instantiated at compile time. Compilers generate code as necessary when templates are utilized based on the defined types.
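A minimal function-template sketch (illustrative only; the largest() function is invented for the example):

    #include <iostream>
    #include <string>

    // The compiler generates a separate instantiation for each type the template is used with.
    template <typename T>
    T largest(const T& a, const T& b) {
        return (a < b) ? b : a;
    }

    int main() {
        std::cout << largest(3, 7) << '\n';        // instantiated for int
        std::cout << largest(2.5, 1.5) << '\n';    // instantiated for double
        std::cout << largest(std::string("pear"),
                             std::string("plum")) << '\n';  // instantiated for std::string
    }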
Part of the C++ standard is the C++ Standard Library. It provides features that C++ programmers have come to expect including smart pointers. It includes multithreading support, although C++11 includes native thread support within the language.
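A minimal sketch combining a standard-library smart pointer with the native thread support mentioned above (again an invented example, not code from the article):

    #include <iostream>
    #include <memory>
    #include <thread>

    int main() {
        auto value = std::make_shared<int>(42);   // smart pointer from the standard library

        // std::thread gives native thread support as of C++11.
        std::thread worker([value] { std::cout << "worker sees " << *value << '\n'; });
        worker.join();                            // wait for the worker thread to finish
    }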
C++ is significantly more complex than C but there are significant advantages to using it. You don’t have to learn all of those features to effectively use C++, though, so if you’re looking to learn C or C++, I recommend C++.
C still dominates the embedded programming space, but C++ is overtaking it as more programmers learn C++ and more compilers support it. C++ has had a major impact in the consumer and enterprise space.
C++'s Awful Textbooks — In Stroustrup's Own Words
When I first was going to teach programming, I looked at the textbooks using C++, and I was furious! There were (and are) books teaching every little obscure detail of C before getting to the far easier to use C++ alternatives and deeming those alternatives “advanced” to scare off all but the most determined student.
Seriously, how could a standard-library vector be as hard to use well as a built-in array? How could using qsort() be simpler than using the more general and efficient sort()? C++ provides better notational support and stronger type checking than C does. This can lead to faster object code.
Other books presented (and present) C++ as a somewhat failed attempt to be a “pure object oriented programming language” and force most every operation into class hierarchies (a la Java) with lots of inheritance and virtual functions. The result is verbose code with unnatural couplings and lots of casting. To add insult to injury, such code also tends to be slow.
As I said, if that’s C++, I don’t like it either! I responded by writing Programming: Principles and Practice Using C++. It does not assume previous programming experience, though it has been popular with programmers wanting to know what C++ is about.
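To make the vector/sort() point above concrete, here is a small sketch of my own (not Stroustrup's code) contrasting the two styles:

    #include <algorithm>
    #include <cstdlib>
    #include <iostream>
    #include <vector>

    // Comparator in the form qsort() requires: untyped pointers and manual casts.
    int compare_ints(const void* a, const void* b) {
        const int x = *static_cast<const int*>(a);
        const int y = *static_cast<const int*>(b);
        return (x > y) - (x < y);
    }

    int main() {
        int arr[] = {3, 1, 2};
        std::qsort(arr, 3, sizeof(int), compare_ints);  // C style: the comparison is not type-checked

        std::vector<int> v = {3, 1, 2};
        std::sort(v.begin(), v.end());                  // C++ style: type-checked, and easily inlined

        for (int x : v) std::cout << x << ' ';          // prints 1 2 3
        std::cout << '\n';
    }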
Beyond The Code
Stroustrup headed up the Large-scale Programming Research department at AT&T Labs, formerly Bell Labs, until 2002. He is both an IEEE and ACM Fellow. He is also a member of the National Academy of Engineering. These days he has little time to write a lot of C++ programs because he is teaching. He is now a Distinguished Professor at Texas A&M University, where he holds the College of Engineering Endowed Chair in Computer Science.
He still finds programming to be great fun. His advice to new programmers is to learn to communicate well verbally and in print. Of course, they also should learn programming fundamentals as well as several programming languages.
|
global_05_local_4_shard_00000656_processed.jsonl/83812
|
A-level Chemistry/Edexcel
The Pattern of the Periodic Table - Periodicity
In the periodic table, the elements are arranged in order of increasing atomic (proton) number, and elements with similar properties recur at regular intervals. The vertical columns of similar elements are called groups and the horizontal rows are called periods. If there is one outermost electron, the element belongs to Group I. If there are seven, it belongs to Group VII.
In Group 0, the outer electron structure is ns² (helium) or ns²p⁶ (the other noble gases). Because of their stable electron arrangement, noble gases have large first ionisation energies and exist as monatomic molecules. They have very low melting points and very low boiling points. The melting point increases down the group: the melting point of helium is the lowest (about 3 K), and the highest among the noble gases is about 211 K.
Group I and Group II - Alkali and Alkaline-Earth Metals
The chemical properties of Group I and Group II metals are very similar. There are one or two outer s-electrons, which are held weakly by the positive nucleus. The atoms readily lose the outermost electrons and form positively charged ions.
e.g. Na → Na⁺ + e⁻
e.g. Mg → Mg²⁺ + 2e⁻
The first ionisation energy is lower than the second because, once one electron has been removed, the remaining electrons are held more strongly by the same nuclear charge. Down each group the ionisation energy decreases: as the number of shells increases, the distance between the nucleus and the outer electrons increases, so the force holding them decreases. These metals can form ionic bonds with Group VI / VII elements, and in the solid state they have a metallic structure. In a metallic structure, a 'sea' of delocalised electrons between the atoms holds them together. This arrangement allows the atoms to 'slide' over one another without breaking the bonding, which makes the metals malleable, and the mobile electrons explain the electrical and thermal conductivity of Group I and II metals.
|
global_05_local_4_shard_00000656_processed.jsonl/83813
|
Front of Celsus Library with aediculae in Ephesus.
The Aedicula where, according to Christian religious tradition, the body of Jesus was buried.
Gothic facade of Exeter Cathedral, with rows of figures in aedicular or tabernacle frames above the door, and two above the crenellations
In ancient Roman religion, an aedicula (plural aediculae) is a small shrine. The word aedicula is the diminutive of the Latin aedes, a temple building or house.
Many aediculae were household shrines that held small altars or statues of the Lares and Penates.[1] The Lares were Roman deities protecting the house and the family household gods. The Penates were originally patron gods (really genii) of the storeroom, later becoming household gods guarding the entire house.
Other aediculae were small shrines within larger temples, usually set on a base, surmounted by a pediment and surrounded by columns. In Roman architecture the aedicula also served this representative function in society: aediculae were installed in public buildings such as the triumphal arch, the city gate, and the thermae (baths). The Celsus Library in Ephesus (2nd century AD) is a good example. From the 4th-century Christianization of the Roman Empire onwards such shrines, or the framework enclosing them, are often called by the Biblical term tabernacle, which becomes extended to any elaborated framework for a niche, window or picture.
Gothic aediculae
As in Classical architecture, in Gothic architecture, too, an aedicule or tabernacle frame is a structural framing device that gives importance to its contents, whether an inscribed plaque, a cult object, a bust or the like, by assuming the tectonic vocabulary of a little building that sets it apart from the wall against which it is placed. A tabernacle frame on a wall serves similar hieratic functions as a free-standing, three-dimensional architectural baldaquin or a ciborium over an altar.
In Late Gothic settings, altarpieces and devotional images were customarily crowned with gables and canopies supported by clustered-column piers, echoing in small the architecture of Gothic churches. Painted ædicules frame figures from sacred history in initial letters of Illuminated manuscripts.
Renaissance aediculae
Two "tabernacle windows" in the Palazzo Medici Riccardi in Florence. These are of the type known as "inginocchiata", "kneeling" on two brackets.
Classicizing architectonic structure and decor all'antica, in the "ancient [Roman] mode", became a fashionable way to frame a painted or bas-relief portrait, or protect an expensive and precious mirror[2] during the High Renaissance; Italian precedents were imitated in France, then in Spain, England and Germany during the later 16th century.[3]
Post-Renaissance classicism
Aedicular door surrounds that are architecturally treated, with pilasters or columns flanking the doorway and an entablature (even with a pediment) over it, came into use in the 16th century. In the neo-Palladian revival in Britain, architectonic aedicular or tabernacle frames, carved and gilded, are favourite schemes for English Palladian mirror frames of the late 1720s through the 1740s, by such designers as William Kent.
Other aedicula
Similar small shrines, called naiskoi, are found in Greek religion, but their use was strictly religious.
Aediculae exist today in Roman cemeteries as a part of funeral architecture.
Presently the most famous Aedicule is situated inside the Church of the Holy Sepulchre in the city of Jerusalem.
1. ^ One or more of the preceding sentences incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Aedicula". Encyclopædia Britannica 1 (11th ed.). Cambridge University Press.
2. ^ Metropolitan Museum: tabernacle frame, Florence, ca 1510
3. ^ National Gallery of Art: Tabernacle frames from the Samuel H. Kress collection
See also
• Adkins, Lesley & Adkins, Roy A. (1996). Dictionary of Roman Religion. Facts on File, inc. ISBN 0-8160-3005-7.
|
global_05_local_4_shard_00000656_processed.jsonl/83815
|
Balfour Declaration
Not to be confused with Balfour Declaration of 1926.
Balfour Declaration
Image: Balfour portrait and declaration.JPG (an image of Balfour and the Declaration)
Created: 2 November 1917
Signatories: Arthur James Balfour
Purpose: Confirming support from the British government for the establishment in Palestine of a homeland for the Jewish people
World War I
Further information: Timeline of World War I
In 1914, war broke out in Europe between the Triple Entente (Britain, France and the Russian Empire) and the Central Powers (Germany, Austria-Hungary and later that year, the Ottoman Empire). The war on the Western Front developed into a stalemate. Jonathan Schneer writes:
Further information: Zionism
In 1896, Theodor Herzl, a Jewish journalist living in Austria-Hungary, published Der Judenstaat ("The Jews' State" or "The State of the Jews" – sometimes erroneously translated as "The Jewish State" although Herzl did not mean it as such. He meant a state for the Jews; not a Jewish state), in which he asserted that the only solution to the "Jewish Question" in Europe, including growing antisemitism, was through the establishment of a state for the Jews. Political Zionism had just been born.[4] A year later, Herzl founded the Zionist Organization (ZO), which at its first congress, "called for the establishment of a home for the Jewish people in Palestine secured under public law". Serviceable means to attain that goal included the promotion of Jewish settlement there, the organisation of Jews in the diaspora, the strengthening of Jewish feeling and consciousness, and preparatory steps to attain those necessary governmental grants.[5] Herzl passed away in 1904 without the political standing that was required to carry out his agenda of a Jewish home in Palestine.[6]
During the first meeting between Chaim Weizmann and Balfour in 1906, Balfour asked what Weizmann's objections were to the idea of a Jewish homeland in Uganda, (the Uganda Protectorate in East Africa in the British Uganda Programme), rather than in Palestine. According to Weizmann's memoir, the conversation went as follows:
Two months after Britain's declaration of war on the Ottoman Empire in November 1914, Zionist British cabinet member Herbert Samuel circulated a memorandum entitled The Future of Palestine to his cabinet colleagues. The memorandum stated that "I am assured that the solution of the problem of Palestine which would be much the most welcome to the leaders and supporters of the Zionist movement throughout the world would be the annexation of the country to the British Empire".
The McMahon–Hussein Correspondence
Henry McMahon had exchanged letters with Hussein bin Ali, Sharif of Mecca in 1915, in which he had promised Hussein control of Arab lands with the exception of "portions of Syria" lying to the west of "the districts of Damascus, Homs, Hama and Aleppo". Palestine lay to the southwest of the Vilayet of Damascus and wasn't explicitly mentioned. That modern-day Lebanese region of the Mediterranean coast was set aside as part of a future French Mandate. After the war the extent of the coastal exclusion was hotly disputed. Hussein had protested that the Arabs of Beirut would greatly oppose isolation from the Arab state or states, but did not bring up the matter of Jerusalem or Palestine. Dr. Chaim Weizmann wrote in his autobiography Trial and Error that Palestine had been excluded from the areas that should have been Arab and independent. This interpretation was supported explicitly by the British government in the 1922 White Paper.
On the basis of McMahon's assurances the Arab Revolt began on 5 June 1916. However, the British and French also secretly concluded the Sykes–Picot Agreement on 16 May 1916.[8] This agreement divided many Arab territories into British- and French-administered areas and allowed for the internationalisation of Palestine.[8] Hussein learned of the agreement when it was leaked by the new Russian government in December 1917, but was satisfied by two disingenuous telegrams from Sir Reginald Wingate, High Commissioner of Egypt, assuring him that the British government's commitments to the Arabs were still valid and that the Sykes-Picot Agreement was not a formal treaty.[8]
He called on the Arab population in Palestine to welcome the Jews as brethren and co-operate with them for the common welfare.[9] Following the publication of the Declaration the British had dispatched Commander David George Hogarth to see Hussein in January 1918 bearing the message that the "political and economic freedom" of the Palestinian population was not in question.[8] Hogarth reported that Hussein "would not accept an independent Jewish State in Palestine, nor was I instructed to warn him that such a state was contemplated by Great Britain".[10] Continuing Arab disquiet over Allied intentions also led during 1918 to the British Declaration to the Seven and the Anglo-French Declaration, the latter promising "the complete and final liberation of the peoples who have for so long been oppressed by the Turks, and the setting up of national governments and administrations deriving their authority from the free exercise of the initiative and choice of the indigenous populations."[8][11]
Lord Grey had been the Foreign Secretary during the McMahon-Hussein negotiations. Speaking in the House of Lords on 27 March 1923, he made it clear that he entertained serious doubts as to the validity of the British government's interpretation of the pledges which he, as foreign secretary, had caused to be given to Hussein in 1915. He called for all of the secret engagements regarding Palestine to be made public.[12] Many of the relevant documents in the National Archives were later declassified and published. Among them were the minutes of a Cabinet Eastern Committee meeting, chaired by Lord Curzon, which was held on 5 December 1918. Balfour was in attendance. The minutes revealed that in laying out the government's position Curzon had explained that: "Palestine was included in the areas as to which Great Britain pledged itself that they should be Arab and independent in the future".[13]
Sykes–Picot Agreement
Further information: Sykes–Picot Agreement
In May 1916 the governments of the United Kingdom, France and Russia signed the Sykes–Picot Agreement, which defined their proposed spheres of influence and control in Western Asia should the Triple Entente succeed in defeating the Ottoman Empire during World War I. The agreement effectively divided the Arab provinces of the Ottoman Empire outside the Arabian peninsula into areas of future British and French control or influence.
The agreement proposed that an "international administration" would be established in an area shaded brown on the agreement's map, which was later to become Palestine, and that the form of the administration would be "decided upon after consultation with Russia, and subsequently in consultation with the other allies, and the representatives of the Sherif of Mecca". Zionists believed their aspirations had been passed over. William Reginald Hall, British Director of Naval Intelligence criticised the agreement on the basis that "the Jews have a strong material, and a very strong political, interest in the future of the country" and that "in the Brown area the question of Zionism, and also of British control of all Palestine railways, in the interest of Egypt, have to be considered".
Motivation for the Declaration[edit]
British Government[edit]
James Gelvin, a Middle East history professor, cites at least three reasons why the British government chose to support Zionist aspirations. One was that issuing the Balfour Declaration would appeal to Woodrow Wilson's two closest advisors, who were avid Zionists.
"The British did not know quite what to make of President Woodrow Wilson and his conviction (before America's entrance into the war) that the way to end hostilities was for both sides to accept "peace without victory." Two of Wilson's closest advisors, Louis Brandeis and Felix Frankfurter, were avid Zionists. How better to shore up an uncertain ally than by endorsing Zionist aims? The British adopted similar thinking when it came to the Russians, who were in the midst of their revolution. Several of the most prominent revolutionaries, including Leon Trotsky, were of Jewish descent. Why not see if they could be persuaded to keep Russia in the war by appealing to their latent Jewishness and giving them another reason to continue the fight?" ... Gelvin's reasons include not only those already mentioned but also Britain's desire to attract Jewish financial resources.[14]
At that time the British were busy making promises. At a War Cabinet meeting, held on 31 October 1917, Balfour suggested that a declaration favourable to Zionist aspirations would allow Great Britain "to carry on extremely useful propaganda both in Russia and America".[15]
The cabinet believed that expressing support would appeal to Jews in Germany and America, and help the war effort.[16] It was also hoped to encourage support from the large Jewish population in Russia. Britain promoted the idea of a national home for the Jewish people in the hope that Britain would implement it and exercise political control over Palestine, effectively "freeze out France (and anyone else) from any post-war presence in Palestine."[17] According to James Renton, Senior Lecturer at Edge Hill University, an Honorary Research Fellow at University College London, and author of The Zionist Masquerade: the Birth of the Anglo-Zionist Alliance: 1914–1918 (2007), Prime Minister David Lloyd George of the United Kingdom supported the creation of a Jewish homeland in Palestine because "it would help secure post-war British control of Palestine, which was strategically important as a buffer to Egypt and the Suez Canal".[18] In addition, Palestine was later to serve as a terminus for the flow of petroleum from Iraq via Jordan, three former Ottoman Turkish provinces that became British League of Nations mandates in the aftermath of the First World War. The oil officially flowed along the Mosul–Haifa oil pipeline from 1935 to 1948, and unofficially up until 1954.
David Lloyd George, who was Prime Minister at the time of the Balfour Declaration, told the Palestine Royal Commission in 1937 that the Declaration was made "due to propagandist reasons".[19] Citing the position of the Allied and Associated Powers in the ongoing war, Lloyd George said that (in the Report's words) "In this critical situation it was believed that Jewish sympathy or the reverse would make a substantial difference one way or the other to the Allied cause. In particular Jewish sympathy would confirm the support of American Jewry, and would make it more difficult for Germany to reduce her military commitments and improve her economic position on the eastern front." Lloyd George then said
Regarding the intended future of Palestine, Lloyd George testified:
Weizmann-Balfour relationship[edit]
Lord Balfour's desk, in the Museum of the Jewish Diaspora, in Tel Aviv
One of the main proponents of a Jewish homeland in Palestine was Chaim Weizmann, the leading spokesperson in Britain for organised Zionism. Weizmann was a chemist who had developed a process to synthesize acetone via fermentation. Acetone is required for the production of cordite, a powerful propellant explosive needed to fire ammunition without generating tell-tale smoke. Germany had cornered supplies of calcium acetate, a major source of acetone. Other pre-war processes in Britain were inadequate to meet the increased demand in World War I, and a shortage of cordite would have severely hampered Britain's war effort. Lloyd George, then minister for munitions, was grateful to Weizmann and so supported his Zionist aspirations. In his War Memoirs, Lloyd George wrote of meeting Weizmann in 1916 that Weizmann:
... explained his aspirations as to the repatriation of the Jews to the sacred land they had made famous. That was the fount and origin of the famous declaration about the National Home for the Jews in Palestine .... As soon as I became Prime Minister I talked the whole matter over with Mr Balfour, who was then Foreign Secretary.
This may, however, have been only a part of a longer series of discussions about Britain and Zionism held between Weizmann and Balfour which had begun at least a decade earlier. In late 1905 Balfour had requested of Charles Dreyfus, his Jewish constituency representative, that he arrange a meeting with Weizmann, during which Weizmann asked for official British support for Zionism; they were to meet again on this issue in 1914.[20]
Jewish national home vs. Jewish state[edit]
Further information: Homeland for the Jewish people
Explication of the wording of the Balfour Declaration is found in the correspondence leading to the final version of the declaration. The phrase "national home" was intentionally used instead of "state" because of opposition to the Zionist program within the British Cabinet. Following discussion of the initial draft the Cabinet Secretary, Mark Sykes, met with the Zionist negotiators to clarify their aims. His official report back to the Cabinet categorically stated that the Zionists did not want "to set up a Jewish Republic or any other form of state in Palestine immediately"[21] but rather preferred some form of protectorate as provided in the Palestine Mandate. In approving the Balfour Declaration, Leopold Amery, one of the Secretaries to the British War Cabinet of 1917–18, testified under oath to the Anglo-American Committee of Inquiry in January 1946 from his personal knowledge that:
"The phrase 'the establishment in Palestine of a National Home for the Jewish people' was intended and understood by all concerned to mean at the time of the Balfour Declaration that Palestine would ultimately become a 'Jewish Commonwealth' or a 'Jewish State', if only Jews came and settled there in sufficient numbers."[22]
Both the Zionist Organization and the British government devoted efforts over the following decades, including Winston Churchill's 1922 White Paper, to denying that a state was the intention.[a] However, in private, many British officials agreed with the interpretation of the Zionists that a state would be established when a Jewish majority was achieved.[23]
The initial draft of the declaration, contained in a letter sent by Rothschild to Balfour, referred to the principle "that Palestine should be reconstituted as the National Home of the Jewish people."[24] In the final text, "reconstituted as the National Home" became "the establishment in Palestine of a national home", to avoid committing the entirety of Palestine to this purpose. Similarly, an early draft did not include the commitment that nothing should be done which might prejudice the rights of the non-Jewish communities. These changes came about partly as the result of the urgings of Edwin Samuel Montagu, an influential anti-Zionist Jew and Secretary of State for India, who was concerned that the declaration without those changes could result in increased anti-Semitic persecution. The draft was circulated and during October the government received replies from various representatives of the Jewish community. Lord Rothschild took exception to the new proviso on the basis that it presupposed the possibility of a danger to non-Zionists, which he denied.[25] At San Remo, as shown in the transcript of the meeting on the evening of 24 April, the French proposed adding to the savings clause so that it would save for non-Jewish communities their "political rights" as well as their civil and religious rights. The French proposal was rejected.
Sir John Evelyn Shuckburgh of the new Middle East Department of the Colonial Office discovered that the correspondence prior to the declaration was not available in the Colonial Office, 'although Foreign Office papers were understood to have been lengthy and to have covered a considerable period'. The 'most comprehensive explanation' of the origin of the Balfour Declaration the Foreign Office was able to provide was contained in a small 'unofficial' note of January 1923 affirming that:
little is known of how the policy represented by the Declaration was first given form. Four, or perhaps five men were chiefly concerned in the labour – the Earl of Balfour, the late Sir Mark Sykes, and Messrs. Weizmann and Sokolow, with perhaps Lord Rothschild as a figure in the background. Negotiations seem to have been mainly oral and by means of private notes and memoranda of which only the scantiest records seem to be available.[b]
In his posthumously published 1981 book The Anglo-American Establishment, Georgetown University history professor Carroll Quigley explained that the Balfour Declaration was actually drafted by Lord Alfred Milner. Quigley wrote:
More recently, William D. Rubinstein, Professor of Modern History at Aberystwyth University, Wales, wrote that Conservative politician and pro-Zionist Leo Amery, as Assistant Secretary to the British war cabinet in 1917, was the main author of the Balfour Declaration.[28]
Reaction to the Declaration[edit]
Balfour Declaration as published in The Times 9 November 1917
Arab opposition[edit]
The Arabs expressed disapproval in November 1918 at the parade marking the first anniversary of the Balfour Declaration. The Muslim-Christian Association protested the carrying of new "white and blue banners with two inverted triangles in the middle". They drew the attention of the authorities to the serious consequences of any political implications in raising the banners.[29]
Later that month, on the first anniversary of the occupation of Jaffa by the British, the Muslim-Christian Association sent a lengthy memorandum and petition to the military governor protesting once more any formation of a Jewish state.[30]
Zionist reaction[edit]
Chaim Weizmann and Nahum Sokolow, the principal Zionist leaders based in London, had asked for the reconstitution of Palestine as "the" Jewish national home. As such, the declaration fell short of Zionist expectations.[32]
British opinion[edit]
British public and government opinion grew increasingly unfavourable towards the commitment that had been made to Zionist policy. In February 1922, Winston Churchill telegraphed Herbert Samuel asking for cuts in expenditure and noting:
In both Houses of Parliament there is growing movement of hostility, against Zionist policy in Palestine, which will be stimulated by recent Northcliffe articles.[33] I do not attach undue importance to this movement, but it is increasingly difficult to meet the argument that it is unfair to ask the British taxpayer, already overwhelmed with taxation, to bear the cost of imposing on Palestine an unpopular policy.[34]
Response by Central Powers[edit]
Immediately following the publication of the declaration Germany entered negotiations with Turkey to put forward counter proposals. A German-Jewish Society was formed: Vereinigung jüdischer Organisationen Deutschlands zur Wahrung der Rechte der Juden des Ostens (V.J.O.D.) and in January 1918 the Turkish Grand Vizier, Talaat, issued a statement which promised legislation by which "all justifiable wishes of the Jews in Palestine would be able to find their fulfilment".[35]
See also[edit]
2. ^ Full text of note included CO 733/58, Secret Cabinet Paper CP 60 (23), 'Palestine and the Balfour Declaration, January 1923. FO unofficial note added 'little referring to the Balfour Declaration among such papers as have been preserved'. Shuckburgh's memo asserts that 'as the official records are silent, it can only be assumed that such discussions as had taken place were of an informal and private character'.[26]
1. ^ Yapp, M.E. (1 September 1987). The Making of the Modern Near East 1792–1923. Harlow, England: Longman. p. 290. ISBN 978-0-582-49380-3.
2. ^ Schneer, Jonathan. The Balfour Declaration: The Origins of the Arab-Israeli Conflict. p. 342.
3. ^ Schneer, Jonathan. The Balfour Declaration: The Origins of the Arab-Israeli Conflict. p. 152.
4. ^ Friedman, Isaiah. "Herzl, Theodor." Encyclopaedia Judaica. Ed. Michael Berenbaum and Fred Skolnik. 2nd ed. Vol. 9. Detroit: Macmillan Reference USA, 2007. 54–66. Gale Virtual Reference Library. Web. 15 April 2010.
5. ^ Avish, Shimon. "Herzl, Theodor [1860–1904]." Encyclopedia of the Modern Middle East and North Africa. Ed. Philip Mattar. 2nd ed. Vol. 2. New York: Macmillan Reference USA, 2004. 1021–1022. Gale Virtual Reference Library. Web. 15 April 2010.
6. ^ [Cleveland, William L. A History of the Modern Middle East. Boulder, CO: Westview, 2004. Print. p.224]
7. ^ Weizmann, Trial and Error, p.111, as quoted in W. Lacquer, The History of Zionism", 2003, ISBN 978-1-86064-932-5. p.188
8. ^ a b c d e Khouri, Fred John (1985). The Arab-Israeli Dilemma. Syracuse University Press. ISBN 978-0-8156-2340-3, pp. 8–10.
9. ^ Palestine, a Twice-promised Land?: The British, the Arabs & Zionism, 1915–1920 By Isaiah Friedman, page 171
10. ^ Huneidi, Sahar (2000). A Broken Trust: Herbert Samuel, Zionism and the Palestinians, 1920–1925. IB Tauris. ISBN 978-1-86064-172-5, p. 66.
11. ^ Report of a Committee Set up to Consider Certain Correspondence Between Sir Henry McMahon and the Sharif of Mecca in 1915 and 1916, UNISPAL, Annex A, paragraph 19.
12. ^ Report of a Committee Set Up To Consider Certain Correspondence Between Sir Henry McMahon and The Sharif of Mecca[dead link]
13. ^ cited in Palestine Papers, 1917–1922, Doreen Ingrams, page 48 from the UK Archive files PRO CAB 27/24.
14. ^ Gelvin, James (2005). The Israel-Palestine Conflict: One Hundred Years of War. New York: Cambridge. pp. 82 and 83.
15. ^ Palestine Papers 1917–1922, Doreen Ingrams, page 16
16. ^ Wall Street Journal review of Jonathan Shneer, Balfour Declaration "As Mr. Schneer documents, the declaration was, among much else, part of a campaign to foster world-wide Jewish support for the Allied war effort, not least in the U.S."
17. ^ Grainger, John D. (2006). The Battle for Palestine, 1917. Woodbridge: Boydell Press. ISBN 978-1-84383-263-8. p. 178
18. ^ James Renton, The Balfour Declaration: its origins and consequences, Jewish Quarterly, Spring 2008, Number 209,
19. ^ a b c Palestine Royal Commission Report, Cmd 5479, 1937, pp23–24.
20. ^ Harry Defries, Conservative Party Attitudes to Jews 1900–1950, Routledge, 2001, ISBN 978-0-7146-5221-4. pp.50–51.
21. ^ Strawson, John (2010). Partitioning Palestine: Legal Fundamentalism in the Palestinian–Israeli Conflict. London: Penguin Books. p. 33. ISBN 9780745323244.
22. ^ The Palestine Yearbook of International Law 1984. Martinus Nijhoff. 1997. p. 48. ISBN 9789041103383.
23. ^ Mansfield, Peter (1992). The Arabs. London: Penguin Books. pp. 176–77.
25. ^ Ingrams, Doreen (2009). Palestine Papers 1917–1922: Seeds of Conflict. Eland Books. p. 13. ISBN 9781906011383.
26. ^ Huneidi, Sahar (2001). "A Broken Trust: Sir Herbert Samuel, Zionism and the Palestinians". I.B.Tauris. p. 256. ISBN 9781860641725. Retrieved 2014-07-12.
27. ^ Quigley, Carroll (June 1981). The Anglo-American Establishment. New York: Books in Focus. p. 169. ISBN 978-0-945001-01-0. [dead link]
28. ^ William D. Rubinstein (2000). "The Secret of Leopold Amery". Historical Research (Institute of Historical Research) 73 (181, June 2000): 175–196. doi:10.1111/1468-2281.00102.
29. ^ Zu'aytir, Akram, Watha'iq al-haraka a-wataniyya al-filastiniyya (1918–1939), ed. Bayan Nuwayhid al-Hut. Beirut 1948. Papers, p. 5. Cited by Huneidi, Sahar "A Broken Trust, Herbert Samuel, Zionism and the Palestinians". ISBN 978-1-86064-172-5 p.32.
30. ^ 'Petition from the Moslem-Christian Association in Jaffa, to the Military Governor, on the occasion of the First Anniversary of British Entry into Jaffa', 16 November 1918, Zu'aytir papers pp. 7–8. Cited by Huneidi p.32.
31. ^ Benny Morris. The Righteous Victims. 2001 ISBN 0-679-74475-4 p.76
32. ^ Balfour Declaration. (2007). In Encyclopædia Britannica. Retrieved 12 August 2007, from Encyclopædia Britannica Online.
33. ^ Defries, Harry (2001). "Conservative Party Attitudes to Jews, 1900-1950". Psychology Press. p. 103. ISBN 9780714652214. Retrieved 2014-07-12.
34. ^ CO 733/18, Churchill to Samuel, Telegram, Private and Personal, 25 February 1922. Cited Huneidi, Sahar "A Broken Trust, Herbert Samuel, Zionism and the Palestinians" 2001, ISBN 978-1-86064-172-5, p.57.
35. ^ MacMunn, Lieut.-General Sir George (1928) Military Operations. Egypt and Palestine. From the outbreak of war with Germany to June 1917. HMSO. Pages 219,220.
Further reading[edit]
• Schneer, Jonathan. The Balfour Declaration: The Origins of the Arab-Israeli Conflict (Random House, 2010) 464pp; ISBN 978-1-4000-6532-5
• Smith, Charles. Palestine and the Arab-Israeli conflict. Boston: Bedford/St. Martin's, 2007.
External links[edit]
For the taxonomic method, see DNA barcoding.
A UPC-A barcode symbol
An early use of one type of barcode in an industrial context was sponsored by the Association of American Railroads in the late 1960s. Developed by General Telephone and Electronics (GTE) and called KarTrak ACI (Automatic Car Identification), this scheme involved placing colored stripes in various combinations on steel plates which were affixed to the sides of railroad rolling stock. Two plates were used per car, one on each side, with the arrangement of the colored stripes representing things such as ownership, type of equipment, and identification number.[1] The plates were "read" by a trackside scanner located, for instance, at the entrance to a classification yard while the car was moving past.[2] The project was abandoned after about ten years because the system proved unreliable after long-term use in the field.[1]
Other systems have made inroads in the AIDC market, but the simplicity, universality and low cost of barcodes have limited the role of these other systems until the 2000s, over 40 years after the introduction of the commercial barcode, with the introduction of technologies such as radio frequency identification, or RFID.
In 1948 Bernard Silver, a graduate student at Drexel Institute of Technology in Philadelphia, Pennsylvania, US, overheard the president of the local food chain, Food Fair, asking one of the deans to research a system to automatically read product information during checkout.[4] Silver told his friend Norman Joseph Woodland about the request, and they started working on a variety of systems. Their first working system used ultraviolet ink, but the ink faded too easily and was rather expensive.[5]
IBM offered to buy the patent, but its offer was not high enough. Philco purchased their patent in 1962 and then sold it to RCA sometime later.[5]
Collins at Sylvania[edit]
During his time as an undergraduate, David Collins worked at the Pennsylvania Railroad and became aware of the need to automatically identify railroad cars. Immediately after receiving his master's degree from MIT in 1959, he started work at GTE Sylvania and began addressing the problem. He developed a system called KarTrak using blue and red reflective stripes attached to the side of the cars, encoding a six-digit company identifier and a four-digit car number.[5] Light reflected off the stripes was fed into one of two photomultipliers, filtered for blue or red.[citation needed]
The Boston and Maine Railroad tested the KarTrak system on their gravel cars in 1961. The tests continued until 1967, when the Association of American Railroads (AAR) selected it as a standard, Automatic Car Identification, across the entire North American fleet. The installations began on 10 October 1967. However, the economic downturn and rash of bankruptcies in the industry in the early 1970s greatly slowed the rollout, and it was not until 1974 that 95% of the fleet was labeled. To add to its woes, the system was found to be easily fooled by dirt in certain applications, which greatly affected accuracy. The AAR abandoned the system in the late 1970s, and it was not until the mid-1980s that they introduced a similar system, this time based on radio tags.[6]
The railway project had failed, but a toll bridge in New Jersey requested a similar system so that it could quickly scan for cars that had purchased a monthly pass. Then the U.S. Post Office requested a system to track trucks entering and leaving their facilities. These applications required special retroreflector labels. Finally, Kal Kan asked the Sylvania team for a simpler (and cheaper) version which they could put on cases of pet food for inventory control.
Computer Identics Corporation[edit]
In 1967, with the railway system maturing, Collins went to management looking for funding for a project to develop a black-and-white version of the code for other industries. They declined, saying that the railway project was large enough and they saw no need to branch out so quickly.
Collins then quit Sylvania and formed Computer Identics Corporation.[5] Computer Identics started working with helium–neon lasers in place of light bulbs, scanning with a mirror to locate the barcode anywhere up to several feet in front of the scanner. This made the entire process much simpler and more reliable, as well as allowing it to deal with damaged labels by reading the intact portions.
Computer Identics Corporation installed one of its first two scanning systems in the spring of 1969 at a General Motors (Buick) factory in Flint, Michigan.[5] The system was used to identify a dozen types of transmissions moving on an overhead conveyor from production to shipping. The other scanning system was installed at General Trading Company's distribution center in Carlstadt, New Jersey to direct shipments to the proper loading bay.
Universal Product Code[edit]
In 1966 the National Association of Food Chains (NAFC) held a meeting at which it discussed the idea of automated checkout systems. RCA, which had purchased the rights to the original Woodland patent, attended the meeting and initiated an internal project to develop a system based on the bullseye code. The Kroger grocery chain volunteered to test it.
In mid-1970, the NAFC established the U.S. Supermarket Ad Hoc Committee on a Uniform Grocery Product Code, which set guidelines for barcode development and created a symbol selection subcommittee to help standardize the approach. In cooperation with consulting firm McKinsey & Co., they developed a standardized 11-digit code to identify any product. The committee then sent out a contract tender to develop a barcode system to print and read the code. The request went to Singer, National Cash Register (NCR), Litton Industries, RCA, Pitney-Bowes, IBM and many others.[7] A wide variety of barcode approaches were studied, including linear codes, RCA's bullseye concentric circle code, starburst patterns and others.
In the spring of 1971 RCA demonstrated their bullseye code at another industry meeting. IBM executives at the meeting noticed the crowds at the RCA booth and immediately developed their own system. IBM marketing specialist Alec Jablonover remembered that the company still employed Woodland, and he established a new facility in North Carolina to lead development.
In July 1972 RCA began an eighteen-month test in a Kroger store in Cincinnati. Barcodes were printed on small pieces of adhesive paper, and attached by hand by store employees when they were adding price tags. The code proved to have a serious problem: during printing, presses sometimes smeared ink in the direction the paper was running, rendering the code unreadable in most orientations. A linear code, like the one being developed by Woodland at IBM, was printed in the direction of the stripes, so extra ink simply made the code "taller" while it remained readable. On 3 April 1973 the IBM UPC was selected by the NAFC as its standard. IBM had designed five versions of the UPC symbology for future industry requirements: UPC A, B, C, D, and E.[8]
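The UPC-A symbol that emerged from this work encodes the committee's eleven-digit product code plus a final check digit computed with a modulo-10 weighting of the digit positions. The following minimal Python sketch illustrates that standard calculation only; the function name and example payload are invented for this illustration and are not taken from the original standardisation documents.

```python
def upc_a_check_digit(payload: str) -> int:
    """Return the UPC-A check digit for an 11-digit payload.

    Digits in odd positions (1st, 3rd, ...) carry weight 3 and digits in
    even positions carry weight 1; the check digit tops the weighted sum
    up to the next multiple of 10.
    """
    if len(payload) != 11 or not payload.isdigit():
        raise ValueError("expected exactly 11 decimal digits")
    total = sum(int(d) * (3 if i % 2 == 0 else 1) for i, d in enumerate(payload))
    return (10 - total % 10) % 10

# Example: an arbitrary 11-digit payload and its computed check digit.
print(upc_a_check_digit("03600029145"))  # -> 2, giving the 12-digit symbol 036000291452
```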
NCR installed a testbed system at Marsh's Supermarket in Troy, Ohio, near the factory that was producing the equipment. On 26 June 1974, Clyde Dawson pulled a 10-pack of Wrigley's Juicy Fruit gum out of his basket and it was scanned by Sharon Buchanan at 8:01 am. The pack of gum and the receipt are now on display in the Smithsonian Institution. It was the first commercial appearance of the UPC.[9]
In 1971 IBM assembled a team for an intensive planning session, meeting day after day for 12 to 18 hours, to thrash out how the whole system might operate and to schedule a roll-out plan. By 1973 they were meeting with grocery manufacturers to introduce the symbol that would need to be printed on the packaging or labels of all of their products. There were no cost savings for a grocery to use it unless at least 70% of the grocery's products had the barcode printed on the product by the manufacturer. IBM was projecting that 75% would be needed in 1975. Even though that was achieved, there were still scanning machines in fewer than 200 grocery stores by 1977.[10]
Economic studies conducted for the grocery industry committee projected over $40 million in savings to the industry from scanning by the mid-1970s. Those numbers were not achieved in that time-frame and some predicted the demise of barcode scanning.[who?] The usefulness of the barcode required the adoption of expensive scanners by a critical mass of retailers while manufacturers simultaneously adopted barcode labels. Neither wanted to move first and results were not promising for the first couple of years, with Business Week proclaiming "The Supermarket Scanner That Failed."[9]
Experience with barcode scanning in those stores revealed additional benefits. The detailed sales information acquired by the new systems allowed greater responsiveness to customer needs. About five weeks after installing barcode scanners, sales in grocery stores typically started climbing and eventually leveled off at a 10–12% increase that never dropped off. There was also a 1–2% decrease in operating cost for those stores, which enabled them to lower prices to increase market share. Field results showed that the return on investment for a barcode scanner was 41.5%. By 1980, 8,000 stores per year were converting.[10]
The global public launch of the barcode[when?] was greeted with minor skepticism from conspiracy theorists, who considered barcodes to be an intrusive surveillance technology, and from some Christians[when?] who thought the codes hid the number 666, representing the number of the beast.[11] Television host Phil Donahue described barcodes as a "corporate plot against consumers".[12]
Industrial adoption[edit]
In 1981, the United States Department of Defense adopted the use of Code 39 for marking all products sold to the United States military. This system, Logistics Applications of Automated Marking and Reading Symbols (LOGMARS), is still used by DoD and is widely viewed as the catalyst for widespread adoption of barcoding in industrial uses.[13]
Barcodes such as the UPC have become a ubiquitous element of modern civilization, as evidenced by their enthusiastic adoption by stores around the world; most items other than fresh produce from a grocery store now have UPC barcodes.[citation needed] This helps track items and also reduces instances of shoplifting involving price tag swapping, although shoplifters can now print their own barcodes.[14] In addition, retail chain membership cards (issued mostly by grocery stores and specialty "big box" retail stores such as sporting equipment, office supply, or pet stores) use barcodes to uniquely identify consumers, allowing for customized marketing and greater understanding of individual consumer shopping patterns. At the point of sale, shoppers can get product discounts or special marketing offers through the address or e-mail address provided at registration.
Example of barcode on a patient identification wristband
Barcodes can allow for the organization of large amounts of data. They are widely used in the healthcare and hospital settings, ranging from patient identification (to access patient data, including medical history, drug allergies, etc.) to creating SOAP Notes[15] with barcodes to medication management. They are also used to facilitate the separation and indexing of documents that have been imaged in batch scanning applications, track the organization of species in biology,[16] and integrate with in-motion checkweighers to identify the item being weighed in a conveyor line for data collection.
Barcoded parcel
Some applications for barcodes have fallen out of use. In the 1970s and 1980s, software source code was occasionally encoded in a barcode and printed on paper (Cauzin Softstrip and Paperbyte[20] are barcode symbologies specifically designed for this application), and the 1991 Barcode Battler computer game system used any standard barcode to generate combat statistics.
In the 21st century, many artists have started using barcodes in art, such as Scott Blake's Barcode Jesus, as part of the post-modernism movement.
Linear symbologies can be classified mainly by two properties:
• Continuous vs. discrete: Characters in continuous symbologies usually abut, with one character ending with a space and the next beginning with a bar, or vice versa. Characters in discrete symbologies begin and end with bars; the intercharacter space is ignored, as long as it is not wide enough to look like the code ends.
• Two-width vs. many-width: Bars and spaces in two-width symbologies are wide or narrow; the exact width of a wide bar has no significance as long as the symbology requirements for wide bars are adhered to (usually two to three times wider than a narrow bar). Bars and spaces in many-width symbologies are all multiples of a basic width called the module; most such codes use four widths of 1, 2, 3 and 4 modules.
Some symbologies use interleaving. The first character is encoded using black bars of varying width. The second character is then encoded, by varying the width of the white spaces between these bars. Thus characters are encoded in pairs over the same section of the barcode. Interleaved 2 of 5 is an example of this.
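To make the interleaving idea concrete, the short Python sketch below pairs up digits so that the first digit of each pair is expressed in the bar widths and the second in the space widths. It is a simplified illustration only, not a production encoder; the narrow/wide patterns follow the usual two-of-five weighting and the start/stop patterns shown are the conventional ones.

```python
# Width patterns for the digits 0-9 in two-of-five codes:
# N = narrow element, W = wide element (exactly two wide elements per digit).
PATTERNS = {
    "0": "NNWWN", "1": "WNNNW", "2": "NWNNW", "3": "WWNNN", "4": "NNWNW",
    "5": "WNWNN", "6": "NWWNN", "7": "NNNWW", "8": "WNNWN", "9": "NWNWN",
}

def interleaved_2_of_5(digits: str) -> list:
    """Return an alternating bar/space width sequence for an even-length digit string."""
    if len(digits) % 2 or not digits.isdigit():
        raise ValueError("Interleaved 2 of 5 needs an even number of digits")
    elements = ["bar:N", "space:N", "bar:N", "space:N"]        # start pattern
    for i in range(0, len(digits), 2):
        bars, spaces = PATTERNS[digits[i]], PATTERNS[digits[i + 1]]
        for b, s in zip(bars, spaces):                         # interleave the digit pair
            elements += ["bar:" + b, "space:" + s]
    elements += ["bar:W", "space:N", "bar:N"]                  # stop pattern
    return elements

print(interleaved_2_of_5("1234")[:8])  # first few elements of the encoded sequence
```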
Stacked symbologies repeat a given linear symbology vertically.
The most common among the many 2D symbologies are matrix codes, which feature square or dot-shaped modules arranged on a grid pattern. 2D symbologies also come in circular and other patterns and may employ steganography, hiding modules within an image (for example, DataGlyphs).
Linear symbologies are optimized for laser scanners, which sweep a light beam across the barcode in a straight line, reading a slice of the barcode light-dark patterns. Stacked symbologies are also optimized for laser scanning, with the laser making multiple passes across the barcode.
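The slice that a laser or imager reads can be pictured as a sampled reflectance profile, which a decoder first turns into a run-length list of dark and light element widths. The sketch below is a minimal, hypothetical illustration of that first step only; real readers add filtering, quiet-zone detection and classification of the widths into narrow/wide elements or module counts.

```python
def scanline_to_runs(samples, threshold=0.5):
    """Run-length encode one scan line of reflectance samples.

    Values below the threshold are treated as dark (bar), values above as
    light (space); the result is a list of (is_bar, width_in_samples) pairs.
    """
    runs = []
    for value in samples:
        is_bar = value < threshold
        if runs and runs[-1][0] == is_bar:
            runs[-1][1] += 1                # extend the current run
        else:
            runs.append([is_bar, 1])        # start a new run
    return [(is_bar, width) for is_bar, width in runs]

# Hypothetical scan line: alternating bars and spaces of varying width.
line = [0.9, 0.9, 0.1, 0.1, 0.8, 0.1, 0.1, 0.1, 0.9, 0.9, 0.1, 0.95]
print(scanline_to_runs(line))
```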
In the 1990s development of charge coupled device (CCD) imagers to read barcodes was pioneered by Welch Allyn. Imaging does not require moving parts, as a laser scanner does. By 2007, linear imaging had begun to supplant laser scanning as the preferred scan engine for its performance and durability.
2D symbologies cannot be read by a laser as there is typically no sweep pattern that can encompass the entire symbol. They must be scanned by an image-based scanner employing a CCD or other digital camera sensor technology.
Scanners (barcode readers)[edit]
Main article: Barcode reader
Barcode scanners can be classified into three categories based on their connection to the computer. The older type is the RS-232 barcode scanner. This type requires special programming for transferring the input data to the application program.
"Keyboard interface scanners" connect to a computer using a PS/2 or AT keyboard–compatible adaptor cable (a "keyboard wedge"). The barcode's data is sent to the computer as if it had been typed on the keyboard.
Like the keyboard interface scanner, USB scanners are easy to install and do not need custom code for transferring input data to the application program. On PCs running Windows the HID interface emulates the data merging action of a hardware "keyboard wedge", and the scanner automatically behaves like an additional keyboard.
Many phones are able to decode barcodes using their built-in camera as well. Google's Android operating system supports both its own Google Goggles application and third-party barcode scanners such as Scan.[21] Nokia's Symbian operating system features a barcode scanner,[22] while mbarcode[23] is a QR code reader for the Maemo operating system. Apple's iOS does not include a native barcode reader, but more than fifty paid and free apps are available with both scanning capabilities and hard-linking to URIs. On BlackBerry devices, the App World application can natively scan barcodes and load any recognized Web URLs in the device's Web browser. Windows Phone 7.5 is able to scan barcodes through the Bing search app. However, these devices are not designed specifically for capturing barcodes. As a result, they do not decode nearly as quickly or accurately as a dedicated barcode scanner or PDT.
Quality control and verification[edit]
Barcode verification examines scannability and the quality of the barcode in comparison to industry standards and specifications. Barcode verifiers are primarily used by businesses that print and use barcodes. Any trading partner in the supply chain can test barcode quality. It is important to verify a barcode to ensure that any reader in the supply chain can successfully interpret a barcode with a low error rate. Retailers levy large penalties for non-compliant barcodes. These chargebacks can reduce a manufacturer's revenue by 2% to 10%.[24]
A barcode verifier works the way a reader does, but instead of simply decoding a barcode, a verifier performs a series of tests. For linear barcodes these tests are:
• Edge Determination
• Minimum Reflectance
• Symbol Contrast
• Minimum Edge Contrast
• Modulation
• Defects
• Decode
• Decodability
For 2D matrix symbols, the tests examine the following parameters:
• Symbol Contrast
• Modulation
• Decode
• Unused Error Correction
• Fixed (finder) Pattern Damage
• Grid Non-uniformity
• Axial Non-uniformity[25]
Depending on the parameter, each ANSI test is graded from 0.0 to 4.0 (F to A), or given a pass or fail mark. Each grade is determined by analyzing the scan reflectance profile (SRP), an analog graph of a single scan line across the entire symbol. The lowest of the 8 grades is the scan grade and the overall ISO symbol grade is the average of the individual scan grades. For most applications a 2.5 (C) is the minimum acceptable symbol grade.[26]
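As a rough illustration of the grading arithmetic described above, the Python sketch below derives an overall symbol grade from per-scan parameter grades. The numeric values are invented for the example and are not taken from any real verification report.

```python
def scan_grade(parameter_grades):
    """A scan's grade is the lowest of its individual parameter grades."""
    return min(parameter_grades)

def symbol_grade(scans):
    """The overall symbol grade is the average of the individual scan grades."""
    return sum(scan_grade(s) for s in scans) / len(scans)

# Hypothetical verifier output: each inner list holds the 8 parameter grades
# (0.0-4.0) measured on one scan line across the symbol.
scans = [
    [4.0, 3.0, 3.5, 4.0, 3.0, 4.0, 4.0, 3.0],
    [4.0, 2.5, 3.0, 4.0, 3.0, 4.0, 4.0, 3.0],
    [4.0, 3.0, 2.0, 4.0, 3.0, 4.0, 4.0, 3.0],
]
print(f"overall symbol grade: {symbol_grade(scans):.1f}")  # 2.5, i.e. roughly a 'C'
```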
Compared with a reader, a verifier measures a barcode's optical characteristics to international and industry standards. The measurement must be repeatable and consistent. Doing so requires constant conditions such as distance, illumination angle, sensor angle and verifier aperture. Based on the verification results, the production process can be adjusted to print higher quality barcodes that will scan down the supply chain.
Barcode verifier standards[edit]
• Barcode verifiers should comply with ISO/IEC 15426-1 (linear) or ISO/IEC 15426-2 (2D).
This standard defines the measuring accuracy of a barcode verifier.
• The current international barcode quality specification is ISO/IEC 15416 (linear) and ISO/IEC 15415 (2D). The European Standard EN 1635 has been withdrawn and replaced by ISO/IEC 15416. The original U.S. barcode quality specification was ANSI X3.182. (UPCs used in the US – ANSI/UCC5).
This standard defines the quality requirements for barcodes and Matrix Codes (also called Optical Codes).
International standards are available from the International Organization for Standardization (ISO).[28]
These standards are also available from local/national standardization organizations, such as ANSI, BSI, DIN, NEN and others.
In point-of-sale management, barcode systems can provide detailed up-to-date information on the business, accelerating decisions and increasing confidence in them. For example:
• Fast-selling items can be identified quickly and automatically reordered.
• This technology also enables the profiling of individual consumers, typically through a voluntary registration of discount cards. While pitched as a benefit to the consumer, this practice is considered to be potentially dangerous by privacy advocates.
Besides sales and inventory tracking, barcodes are very useful in logistics and supply chain management.
• When a manufacturer packs a box for shipment, a Unique Identifying Number (UID) can be assigned to the box.
• A database can link the UID to relevant information about the box; such as order number, items packed, quantity packed, destination, etc.
• The information can be transmitted through a communication system such as Electronic Data Interchange (EDI) so the retailer has the information about a shipment before it arrives.
• Shipments that are sent to a Distribution Center (DC) are tracked before forwarding. When the shipment reaches its final destination, the UID gets scanned, so the store knows the shipment's source, contents, and cost.
Barcode scanners are relatively low cost and extremely accurate compared to key-entry, with only about 1 substitution error in 15,000 to 36 trillion characters entered.[29][unreliable source?] The exact error rate depends on the type of barcode.
Types of barcodes[edit]
Linear barcodes[edit]
A first-generation, "one-dimensional" barcode is made up of lines and spaces of various widths that create specific patterns.
Symbology | Continuous or discrete | Bar widths | Uses
Codabar | Discrete | Two | Old format used in libraries and blood banks and on airbills (out of date)
Code 25 – Non-interleaved 2 of 5 | Continuous | Two | Industrial
Code 25 – Interleaved 2 of 5 | Continuous | Two | Wholesale, libraries; international standard ISO/IEC 16390
Code 11 | Discrete | Two | Telephones (out of date)
Code 39 | Discrete | Two | Various – international standard ISO/IEC 16388
Code 93 | Continuous | Many | Various
Code 128 | Continuous | Many | Various – international standard ISO/IEC 15417
CPC Binary | Discrete | Two |
DUN 14 | Continuous | Many | Various
EAN 2 | Continuous | Many | Add-on code (magazines), GS1-approved – not a standalone symbology – to be used only with an EAN/UPC according to ISO/IEC 15420
EAN 5 | Continuous | Many | Add-on code (books), GS1-approved – not a standalone symbology – to be used only with an EAN/UPC according to ISO/IEC 15420
EAN-8, EAN-13 | Continuous | Many | Worldwide retail, GS1-approved – international standard ISO/IEC 15420
Facing Identification Mark | Continuous | One | USPS business reply mail
GS1-128 (formerly named UCC/EAN-128), incorrectly referenced as EAN 128 and UCC 128 | Continuous | Many | Various, GS1-approved – an application of Code 128 (ISO/IEC 15417) using the ANS MH10.8.2 AI data structures; not a standalone symbology
GS1 DataBar, formerly Reduced Space Symbology (RSS) | Continuous | Many | Various, GS1-approved
HIBC (HIBCC Health Industry Bar Code) | Discrete | Two | Healthcare[30] – a data structure to be used with Code 128, Code 39 or Data Matrix
Intelligent Mail barcode | Continuous | Tall/short | United States Postal Service, replaces both POSTNET and PLANET symbols (formerly named OneCode)
ITF-14 | Continuous | Two | Non-retail packaging levels, GS1-approved – an Interleaved 2 of 5 code (ISO/IEC 16390) with a few additional specifications, according to the GS1 General Specifications
JAN | Continuous | Many | Used in Japan, similar and compatible with EAN-13 (ISO/IEC 15420)
KarTrak ACI | Discrete | Coloured bars | Used in North America on railroad rolling equipment
Latent image barcode | Neither | Tall/short | Color print film
MSI | Continuous | Two | Used for warehouse shelves and inventory
Pharmacode | Neither | Two | Pharmaceutical packaging (no international standard available)
PLANET | Continuous | Tall/short | United States Postal Service (no international standard available)
Plessey | Continuous | Two | Catalogs, store shelves, inventory (no international standard available)
PostBar | Discrete | Many | Canadian Post office
POSTNET | Continuous | Tall/short | United States Postal Service (no international standard available)
RM4SCC / KIX | Continuous | Tall/short | Royal Mail / Royal TPG Post
Telepen | Continuous | Two | Libraries (UK)
U.P.C. | Continuous | Many | Worldwide retail, GS1-approved – international standard ISO/IEC 15420
Matrix (2D) barcodes[edit]
A matrix code, also termed a 2D barcode or simply a 2D code, is a two-dimensional way to represent information. It is similar to a linear (1-dimensional) barcode, but can represent more data per unit area.
Name | Notes
Aztec Code | Designed by Andrew Longacre at Welch Allyn (now Honeywell Scanning and Mobility). Public domain. – International standard ISO/IEC 24778
Code 1 | Public domain. Code 1 is currently used in the health care industry for medicine labels and the recycling industry to encode container content for sorting.[31]
ColorCode | ColorZip[32] developed colour barcodes that can be read by camera phones from TV screens; mainly used in Korea.[33]
Color Construct Code | Color Construct Code is one of the few barcode symbologies designed to take advantage of multiple colors.[34][35]
CyberCode | From Sony.
d-touch | Readable when printed on deformable gloves and stretched and distorted.[36]
DataGlyphs | From Palo Alto Research Center (also termed Xerox PARC).[37] Patented.[38] DataGlyphs can be embedded into a half-tone image or background shading pattern in a way that is almost perceptually invisible, similar to steganography.[39][40]
Data Matrix | From Microscan Systems, formerly RVSI Acuity CiMatrix/Siemens. Public domain. Increasingly used throughout the United States. Single segment Data Matrix is also termed Semacode. – International standard ISO/IEC 16022
Datastrip Code | From Datastrip, Inc.
Digital paper | Patterned paper used in conjunction with a digital pen to create handwritten digital documents. The printed dot pattern uniquely identifies the position coordinates on the paper.
EZcode | Designed for decoding by cameraphones;[41] from ScanLife.[42]
High Capacity Color Barcode | Developed by Microsoft; licensed by ISAN-IA.
HueCode | From Robot Design Associates. Uses greyscale or colour.[43]
InterCode | From Iconlab, Inc. The standard 2D barcode in South Korea. All three South Korean mobile carriers put the scanner program of this code into their handsets to access mobile internet, as a default embedded program.
MaxiCode | Used by United Parcel Service. Now public domain.
MMCC | Designed to disseminate high-capacity mobile phone content via existing colour print and electronic media, without the need for network connectivity.
NexCode | NexCode is developed and patented by S5 Systems.
Nintendo e-Reader Dot code | Developed by Olympus Corporation to store songs, images, and mini-games for Game Boy Advance on Pokémon trading cards.
PDF417 | Originated by Symbol Technologies. Public domain.
Qode | American proprietary and patented 2D barcode from NeoMedia Technologies, Inc.[42]
QR code | Initially developed, patented and owned by Toyota subsidiary Denso Wave for car parts management; Denso Wave has chosen not to exercise its patent rights. Can encode Japanese Kanji and Kana characters, music, images, URLs and emails. De facto standard for Japanese cell phones. Also used with BlackBerry Messenger to pick up contacts rather than using a PIN code. These codes are also the most frequently scanned type on smartphones. – International standard ISO/IEC 18004
ShotCode | Circular barcodes for camera phones. Originally from High Energy Magic Ltd under the name Spotcode. Before that probably termed TRIPCode.
SPARQCode | QR code encoding standard from MSKYNET, Inc.
In popular culture[edit]
In architecture, a building in Lingang New City by German architects Gerkan, Marg and Partners incorporates a barcode design,[45] as does a shopping mall called Shtrikh-kod (the Russian for barcode) in Narodnaya ulitsa ("People's Street") in the Nevskiy district of St. Petersburg, Russia.[46]
In media, the National Film Board of Canada and ARTE France launched a web documentary entitled Bar Code, which allows users to view films about everyday objects by scanning the product's barcode with their iPhone camera.[47][48]
In professional wrestling, the WWE stable D-Generation X incorporated a barcode into their entrance video, as well as on a t-shirt.[49][50]
In video games, the protagonist of the Hitman video game series has a barcode tattoo on the back of his head.
In the films Back to the Future Part II and The Handmaid's Tale, cars in the future are depicted with barcode licence plates.
See also[edit]
1. ^ a b Cranstone, Ian. "A guide to ACI (Automatic Car Identification)/KarTrak". CANADIAN FREIGHT CARS A resource page for the Canadian Freight Car Enthusiast. Ian Cranstone. Retrieved 26 May 2013.
2. ^ Keyes, John (August 22, 2003). "KarTrak". John Keyes Boston photoblogger. Images from Boston, New England, and beyond. John Keyes. Retrieved 26 May 2013.
3. ^ Fox, Margalit (15 June 2011), "Alan Haberman, Who Ushered in the Bar Code, Dies at 81", The New York Times
4. ^ Fishman, Charles (1 August 2001). "The Killer App – Bar None". American Way. Retrieved 2010-04-19.
5. ^ a b c d e f Seideman, Tony, "Barcodes Sweep the World", Wonders of Modern Technology
6. ^ Graham-White, Sean (August 1999). "Do You Know Where Your Boxcar Is?". Trains (Kalmbach Publishing) 59 (8): 48–53.
7. ^ George Laurer, "Development of the U.P.C. Symbol",
8. ^ Nelson, Benjamin (1997). From Punched Cards To Bar Codes.
9. ^ a b Varchaver, Nicholas (31 May 2004). "Scanning the Globe". Fortune. Archived from the original on 14 November 2006. Retrieved 2006-11-27.
10. ^ a b Selmeier, Bill (2008). Spreading the Barcode. pp. 26, 214, 236, 238, 244, 245, 236, 238, 244, 245. ISBN 978-0-578-02417-2.
11. ^ "What about barcodes and 666: The Mark of the Beast?". 1999. Retrieved 2014-03-14.
12. ^ Bishop, Tricia (5 July 2004). "UPC bar code has been in use 30 years". Archived from the original on 2004-08-23. Retrieved 22 December 2009.
13. ^ "". Retrieved 2011-11-28.
14. ^ "Retrieved November 17, 2011". 2 May 2011. Retrieved 2011-11-28.
15. ^ Oberfield, Craig. "QNotes Barcode System". US Patented #5296688. Quick Notes Inc. Retrieved 15 December 2012.
16. ^ National Geographic, May 2010, page 30
17. ^ David L. Hecht. "Printed Embedded Data Graphical User Interfaces". Xerox Palo Alto Research Center. IEEE Computer March 2001.
18. ^ Jon Howell and Keith Kotay. "Landmarks for absolute localization". Dartmouth Computer Science Technical Report TR2000-364, March 2000.
19. ^ "". 21 November 2011. Retrieved 2011-11-28.
20. ^ "Paperbyte Bar Codes for Waduzitdo" Byte magazine, 1978 September p. 172
21. ^ "Scan".
22. ^ "Nokia Europe – Nokia N80 – Support".
23. ^ "package overview for mbarcode". Archived from the original on 14 August 2010. Retrieved 28 July 2010.
24. ^ Zieger, Anne (October 2003). "Retailer chargebacks: is there an upside? Retailer compliance initiatives can lead to efficiency". Frontline Solutions. Retrieved 2 August 2011.
25. ^ Bar Code Verification Best Practice work team (May 2010). "GS1 DataMatrix: An introduction and technical overview of the most advanced GS1 Application Identifiers compliant symbology". Global Standards 1 1.17: 34–36. Archived from the original on 20 July 2011. Retrieved 2 August 2011.
26. ^ GS1 Bar Code Verification Best Practice work team (May 2009). "GS1 Bar Code Verification for Linear Symbols". Global Standards 1 (4.3): 23–32. Retrieved 2 August 2011.
27. ^ "Technical committees – JTC 1/SC 31 – Automatic identification and data capture techniques". ISO. Retrieved 2011-11-28.
28. ^ "ISO web site". Retrieved 2011-11-28.
29. ^ Harmon and Adams(1989). Reading Between The Lines, p.13. Helmers Publishing, Inc, Peterborough, New Hampshire, USA. ISBN 0-911261-00-1.
30. ^, Health Industry Bar Code (HIBC) supplier labeling standard
31. ^ Russ Adams (15 June 2009). "2-Dimensional Bar Code Page". Archived from the original on 7 July 2011. Retrieved 2011-06-06.
32. ^ "". Retrieved 2011-11-28.
33. ^ "Barcodes for TV Commercials". 31 January 2006. Retrieved 2009-06-10.
34. ^ "Colour Code Technologies Co., Ltd". Retrieved 2012-11-04.
35. ^ "Frequently Asked Questions". Retrieved 2012-11-04.
36. ^ d-touch topological fiducial recognition; "d-touch markers are applied to deformable gloves",
37. ^ See for details.
38. ^ "DataGlyphs: Embedding Digital Data". 2006-05-03. Retrieved 2014-03-10.
39. ^ "DataGlyph Embedded Digital Data". Retrieved 2014-03-10.
40. ^ "DataGlyphs". Retrieved 2014-03-10.
41. ^ "". Retrieved 2011-11-28.
42. ^ a b Steeman, Jeroen. "Online QR Code Decoder". Retrieved 9 January 2014.
43. ^ "BarCode-1 2-Dimensional Bar Code Page". Retrieved 2009-06-10.
44. ^ (株)デンソーウェーブ, (Japanese) Copyright
45. ^ Barcode Halls – gmp[dead link]
46. ^ "image". Retrieved 2011-11-28.
47. ^ Lavigne, Anne-Marie. "Introducing, a new interactive doc about the objects that surround us". NFB Blog. National Film Board of Canada. Retrieved 7 October 2011.
48. ^ Anderson, Kelly (6 October 2011). "NFB, ARTE France launch ‘Bar Code’". Reelscreen. Retrieved 7 October 2011.
49. ^ [1][dead link]
50. ^ "Dx theme song 2009-2010". YouTube. 2009-12-19. Retrieved 2014-03-10.
• Automating Management Information Systems: Barcode Engineering and Implementation – Harry E. Burke, Thomson Learning, ISBN 0-442-20712-3
• Automating Management Information Systems: Principles of Barcode Applications – Harry E. Burke, Thomson Learning, ISBN 0-442-20667-4
• The Bar Code Book – Roger C. Palmer, Helmers Publishing, ISBN 0-911261-09-5, 386 pages
• The Bar Code Manual – Eugene F. Brighan, Thompson Learning, ISBN 0-03-016173-8
• Handbook of Bar Coding Systems – Harry E. Burke, Van Nostrand Reinhold Company, ISBN 978-0-442-21430-2, 219 pages
• Information Technology for Retail:Automatic Identification & Data Capture Systems – Girdhar Joshi, Oxford University Press, ISBN 0-19-569796-0, 416 pages
• Lines of Communication – Craig K. Harmon, Helmers Publishing, ISBN 0-911261-07-9, 425 pages
• Punched Cards to Bar Codes – Benjamin Nelson, Helmers Publishing, ISBN 0-911261-12-5, 434 pages
• Revolution at the Checkout Counter: The Explosion of the Bar Code – Stephen A. Brown, Harvard University Press, ISBN 0-674-76720-9
• Reading Between The Lines – Craig K. Harmon and Russ Adams, Helmers Publishing, ISBN 0-911261-00-1, 297 pages
• The Black and White Solution: Bar Code and the IBM PC – Russ Adams and Joyce Lane, Helmers Publishing, ISBN 0-911261-01-X, 169 pages
• Sourcebook of Automatic Identification and Data Collection – Russ Adams, Van Nostrand Reinhold, ISBN 0-442-31850-2, 298 pages
External links[edit]
Not to be confused with Barrett or Barette (sport).
Various types of hair slides
A hair barrette on the back of a woman's head
A barrette (American English), also known as a hair clip, hair-slide or clasp (British English), is a clasp for holding hair in place. These clasps are often made from metal and/or plastic, and sometimes feature decorative fabric. In one type of barrette, a clasp is used to secure the barrette in place; the clasp opens when the two metal pieces at either side are pressed together.
Barrettes are worn in different ways partly according to their size, with small ones often used at the front and large ones in the back to hold more hair. In some cases they are used to keep hair out of the eyes, or to secure a hairstyle such as a ponytail. Short metal "clip" barrettes are sometimes used to pull back front pieces of hair. Barrettes are also sometimes used purely for decorative purposes.
Larger barrettes (some can be as long as 3–4 in (8–10 cm)) are designed to pull back longer hair or a large amount of hair, and are usually worn at the back of the head. If the intent is to pull back hair, the length of the barrette is not the only consideration; the width of the barrette also indicates approximately how much hair can be secured by it.
Many different kinds of hair clips were invented in the 20th century. Among the more famous are the elongated hair clip (seen at the top of the "Various types of hair slides" image), which was invented in 1972,[1] and the simple "clips" hair-clip, which works by snapping the clip from a concave to a convex position, springing it into a locked position, or opening it. Several of these are seen in the image.[2]
External links[edit]
• The dictionary definition of barrette at Wiktionary
Wikimedia Commons has media related to Barrettes.
Bridgewater Hall
The Bridgewater Hall (also known as The Bridgewater)
Photograph: Bridgewater Hall in 2008
Address: The Bridgewater Hall, Lower Mosley Street, M2 3WS
Location: Manchester, United Kingdom
Type: Concert hall
Built: 1993–1996
Opened: 11 September 1996
Owner: SMG Europe[1]
Capacity: 2,400
The Bridgewater Hall is an international concert venue in Manchester city centre, England. It cost around £42 million[2] to build and currently hosts over 250 performances a year.
The hall is home to The Hallé orchestra, the UK's oldest extant symphony orchestra, and is the primary concert venue for the BBC Philharmonic Orchestra. The building sits on a bed of 280 springs, which help reduce external noise.
The venue is named after the Third Duke of Bridgewater who commissioned the eponymous Bridgewater Canal that crosses Manchester, although the hall is situated on a specially constructed arm of the Rochdale Canal.
Proposals to replace the concert venue in the Free Trade Hall had existed since it was damaged in the Second World War,[3] but the hall, which was home to The Hallé orchestra, was repaired and renovated. Despite being a popular venue, the Free Trade Hall, built in the 1850s, had poor acoustics. Throughout the 1970s and 80s several schemes to replace it were considered, but the project became more likely in 1988 after the creation of the Central Manchester Development Corporation.[3]
In the 1990s, land east of Lower Mosley Street and north of Great Bridgewater Street adjacent to the G-Mex exhibition centre (now Manchester Central Convention Complex), occupied by a former bus station and car park near the Rochdale Canal, was identified as the site for a new hall. A competition inviting architects to present designs for the new concert hall was launched and a proposal by Renton Howard Wood Levin (RHWL) architects was chosen.[3] The development included the construction of a basin on a specially built short arm of the Rochdale Canal and part of the Manchester & Salford Junction Canal providing a waterfront setting for the hall.[4][5]
The Bridgewater Hall held its first concert on 11 September 1996 and was officially opened on 4 December by Queen Elizabeth II, alongside the Duke of Edinburgh. The Bridgewater Hall was one of a number of structures built in the 1990s that symbolised the transition to a new and modern Manchester.[3]
The Bridgewater was well received and won a number of awards. In November 1996, only months after opening, the concert hall won the RIBA North West award.[6] In 1998 the Hall won the Civic Trust Special Award,[7] which is given to a building which enhanced the appearance of a city centre.
Bridgewater Hall overlooks the Rochdale Canal
Construction of the hall was a joint venture between Manchester City Council and the Central Manchester Development Corporation, which obtained funding from the European Regional Development Fund.[4] The architects were RHWL and the builders were John Laing Construction, later Laing O'Rourke. The acoustics were designed by Rob Harris of Arup Acoustics; his colleagues at Arup were the building engineers.[6] The Bridgewater Hall can seat 2,341 people over four tiers in the auditorium: the stalls, choir circle, circle, and gallery.[8]
The main auditorium sits on a foundation of earthquake-proof isolation bearings that insulate it from noise and vibration from the adjacent road and Metrolink line.[9] The hall's 26,500 tonne superstructure rests on 280 GERB isolation bearings consisting of rows of steel springs between concrete piers. Bridgewater Hall is the first concert hall built with this technology.[4]
The structure is mostly formed from solid, reinforced concrete, moulded and cast like a vast sculpture.[3] The auditorium has a double-skinned roof with a stainless steel outer shell.[4] The lower part of the hall is built of deep red sandstone from Corsehill Quarry in Annan; the upper walls are clad in aluminium and glass. The interior uses Jura limestone.
Inside the hall, the focal point is a £1.2 million[1] pipe organ (with 5500 pipes) built by Marcussen & Son, which dominates the auditorium, covering the rear wall with wood and burnished metal. At the time of construction, the organ was the largest instrument to be installed in the UK for a century.[10]
Barbirolli Square
On the plaza outside is the "Ishinki Touchstone", a sculpture by Kan Yasuda made of polished Italian Carrara marble which is white streaked with bluish-grey. The stone weighs 18 tonnes and was installed in August 1996. Its £200,000 cost was financed by the Arts Council, Lottery Fund, Manchester Airport and Manchester City Council. To prevent vandalism, the stone is coated with an anti-graffiti solution.[11][12][13]
Beside the main entrance is a sculpture of Sir John Barbirolli by Byron Howard (2000).[14]
Since its opening on 11 September 1996, it has been the home of the Hallé Orchestra, the Hallé Choir and the Manchester Boys Choir, and is a regular venue for concerts by the BBC Philharmonic and Manchester Camerata. From September 2002 it has been home to the Hallé Youth Orchestra and Youth Choir, founded for musicians under the age of nineteen who are not in full-time musical education.
As well as concerts, the Bridgewater Hall hosts conferences and events for external parties such as annual presentation evenings. Manchester Metropolitan University has held its graduation ceremony in the hall in July each year since the early 2000s.
1. ^ a b "The Bridgewater Hall". SMG Europe. Retrieved 2011-10-11.
2. ^ "Bridgewater Hall: 10 Years, Page 1". BBC News. Retrieved 2011-10-11.
3. ^ a b c d e "The Bridgewater Hall – History and Architecture". Bridgewater Hall. Retrieved 2011-10-11.
4. ^ a b c d Bridgewater Hall, Manchester University, retrieved 2011-10-13
5. ^ Hartwell, Clare (2001). Manchester. Pevsner Architectural Guides. London: Penguin. pp. 145, 147. ISBN 0 14 071131 7.
6. ^ a b "Arup – Bridgewater Hall". Arup. Retrieved 2011-10-11.
7. ^ "RHWL architectsBridgewater Hall", Bridgewater Hall, retrieved 2011-10-13
8. ^ "Bridgewater Hall: 10 Years – Page 2". BBC. Retrieved 2011-10-11.
9. ^ "The quietest room in the world". BBC News. 27 August 2009. Retrieved 2011-10-11.
10. ^ "Bridgewater Hall: 10 Years – Page 5". BBC. Retrieved 2011-10-11.
11. ^ Bridgewater Hall, Manchester Metropolitan University, retrieved 12 December 2011
12. ^ Ishinki Touchstone, Manchester Art Gallery, retrieved 4 January 2012
13. ^ Ishinki-Touchstone, Public Monument and Sculpture Association, retrieved 4 January 2012
14. ^ John Barbirolli, Manchester Art Gallery, retrieved 28 April 2012
Coordinates: 53°28′31″N 2°14′45″W / 53.47528°N 2.24583°W / 53.47528; -2.24583
Clive James
Clive James, 2012
Born Vivian Leopold James
7 October 1939 (age 74)
Kogarah, Sydney, Australia
Occupation Essayist, poet, broadcaster
Nationality Australian
Notable work(s)
Cultural Amnesia
Unreliable Memoirs
Notable award(s) Philip Hodgins Memorial Medal for Literature
Spouse(s) Prue Shaw
Children Claerwen James
Lucinda James
Visions Before Midnight, by Clive James
Clive James AO CBE (born Vivian Leopold James on 7 October 1939) is an Australian author, critic, broadcaster, poet, translator and memoirist, best known for his autobiographical series Unreliable Memoirs, for his chat shows and documentaries on British television and for his prolific journalism. He has lived and worked in the United Kingdom since the early 1960s.
Early life
James was born in Kogarah, a southern suburb of Sydney. He was allowed to change his name as a child because "after Vivien Leigh played Scarlett O'Hara the name became irrevocably a girl's name no matter how you spelled it".[1]
James' father was taken prisoner by the Japanese during World War II. Although he survived the prisoner of war camp, he died when the aeroplane returning him to Australia crashed in Manila Bay; he was buried in Hong Kong. James, who was an only child, was brought up by his mother, a factory worker,[2] in the Sydney suburb of Kogarah, after living some years with his English maternal grandfather.[1]
In Unreliable Memoirs, James says an IQ test taken in childhood put his IQ at 140.[3] He was educated at Sydney Technical High School (despite winning a bursary award to Sydney Boys High School) and the University of Sydney, where he studied Psychology and became associated with the Sydney Push, a libertarian, intellectual subculture. At the university, he edited the student newspaper, Honi Soit, and directed the annual Union Revue. After graduating, James worked for a year as an assistant editor for The Sydney Morning Herald.
In early 1962, James moved to England, where he made his home. During his first three years in London, he shared a flat with the Australian film director Bruce Beresford (disguised as "Dave Dalziel" in the first three volumes of James' memoirs), was a neighbour of Australian artist Brett Whiteley, became acquainted with Barry Humphries (disguised as "Bruce Jennings") and had a variety of occasionally disastrous short-term jobs (sheet metal worker, library assistant, photo archivist, market researcher).
James later gained a place at Pembroke College, Cambridge, to read English literature. While there, he contributed to all the undergraduate periodicals, was a member and later President of the Cambridge Footlights, and appeared on University Challenge as captain of the Pembroke team, beating St Hilda's, Oxford but losing to Balliol on the last question in a tied game. During one summer vacation, he worked as a circus roustabout to save enough money to travel to Italy.[4] His contemporaries at Cambridge included Germaine Greer (known as "Romaine Rand" in the first three volumes of his memoirs), Simon Schama and Eric Idle. Having, he claims, scrupulously avoided reading any of the course material (but having read widely otherwise in English and foreign literature), James graduated with a 2:1—better than he had expected—and began a D.Phil. thesis on Percy Bysshe Shelley.
Critic and essayist
James became the television critic of The Observer in 1972,[2] remaining in the job until 1982. Selections from the column were published in three books—Visions Before Midnight, The Crystal Bucket and Glued to the Box—and finally in a compendium, On Television.
He has written literary criticism extensively for newspapers, magazines and periodicals in Britain, Australia and the United States, including, among many others, The Australian Book Review, The Monthly, The Atlantic Monthly, the New York Review of Books, The Liberal and the Times Literary Supplement. John Gross included James's essay 'A Blizzard of Tiny Kisses' in the Oxford Book of Essays (1992, 1999).
The Metropolitan Critic (1974), his first collection of literary criticism, was followed by At the Pillars of Hercules (1979), From the Land of Shadows (1982), Snakecharmers in Texas (1988), The Dreaming Swimmer (1992), Even As We Speak (2004), The Meaning of Recognition (2005) and Cultural Amnesia (2007), a collection of miniature intellectual biographies of over 100 significant figures in modern culture, history and politics. A defence of humanism, liberal democracy and literary clarity, the book was listed among the best of 2007 by The Village Voice.
Another volume of essays, The Revolt of the Pendulum, was published in June 2009.
He has also published Flying Visits, a collection of travel writing for The Observer.
Poet and lyricist
James has published poetry in periodicals all over the English-speaking world. He has published several books of poetry, including Poem of the Year (1983), a verse-diary, Other Passports: Poems 1958–1985, a first collection, and The Book of My Enemy (2003), a volume that takes its title from his poem "The Book of My Enemy Has Been Remaindered."[5]
He has published four mock-heroic poems: The Fate of Felicity Fark in the Land of the Media: a moral poem (1975), Peregrine Prykke's Pilgrimage Through the London Literary World (1976), Britannia Bright's Bewilderment in the Wilderness of Westminster (1976) and Charles Charming's Challenges on the Pathway to the Throne (1981).
During the seventies he also collaborated on six albums of songs with Pete Atkin:
• Beware of the Beautiful Stranger (1970),
• Driving Through Mythical America (1971),
• A King at Nightfall (1973),
• The Road of Silk (1974),
• Secret Drinker (1974), and
• Live Libel (1975).
A revival of interest in the songs in the late 1990s, triggered largely by the creation by Steve Birkill of an Internet mailing list "Midnight Voices" in 1997, led to the reissue of the six albums on CD between 1997 and 2001, as well as live performances by the pair. A double album of previously unrecorded songs written in the seventies and entitled The Lakeside Sessions: Volumes 1 and 2 was released in 2002, and "Winter Spring", an album of new material written by James and Atkin, was released in 2003.[citation needed]
James acknowledged the importance of the "Midnight Voices" group in bringing to wider attention the lyric-writing aspect of his career. He wrote in November 1997 that, "one of the midnight voices of my own fate should be [that] the music of Pete Atkin continues to rank high among the blessings of my life, and on my behalf as well as his I bless you all for your attention".[citation needed]
In 2013, he issued his translation of Dante's Divine Comedy. The work, adopting quatrains to translate the original's terza rima, was well received by Australian critics.[6][7] Writing for the New York Times,[8] Joseph Luzzi thought it often fails to capture the more dramatic moments of the Inferno, but that it is more successful where Dante slows down, in the more theological and deliberative cantos of the Purgatorio and Paradiso.
Novelist and memoirist
In 1980 James published his first book of autobiography, Unreliable Memoirs, which recounted his early life in Australia and extended to over a hundred reprintings. It was followed by four other volumes of autobiography: Falling Towards England (1985), which covered his London years; May Week Was in June (1990), which dealt with his time at Cambridge; North Face of Soho (2006), and The Blaze of Obscurity (2009), concerning his subsequent career. An omnibus edition of the first three volumes was published under the generic title of Always Unreliable.
James has also written four novels: Brilliant Creatures (1983), The Remake (1987), Brrm! Brrm! (1991), published in the United States as The Man from Japan, and The Silver Castle (1996).
In 1999, John Gross included an excerpt from Unreliable Memoirs in The New Oxford Book of English Prose. John Carey chose Unreliable Memoirs as one of the fifty most enjoyable books of the twentieth century in his book Pure Pleasure (2000).
James developed his television career as a guest commentator on various shows, including as an occasional co-presenter with Tony Wilson on the first series of So It Goes, the Granada Television pop music show. On the show when the Sex Pistols made their TV debut, James commented: "During the recording, the task of keeping the little bastards under control was given to me. With the aid of a radio microphone, I was able to shout them down, but it was a near thing...they attacked everything around them and had difficulty in being polite even to each other".[9]
James subsequently hosted the ITV show Clive James on Television, in which he showcased unusual or (often unintentionally) amusing television programmes from around the world, notably the Japanese TV show Endurance. After his defection to the BBC in 1989, he hosted a similarly-formatted programme called Saturday Night Clive (1988–1990) which initially screened on Saturday evening, returning as Saturday Night Clive on Sunday in its second series when it changed screening day and then Sunday Night Clive in its third and final series. In 1995 he set up Watchmaker Productions to produce The Clive James Show for ITV, and a subsequent series launched the British career of singer and comedienne Margarita Pracatan. James hosted one of the early chat shows on Channel 4 and fronted the BBC's Review of the Year programmes in the late 1980s (Clive James on the '80s) and 1990s (Clive James on the '90s), which formed part of the channel's New Year's Eve celebrations.
In the mid-1980s, James featured in a travel programme called Clive James in... (beginning with Clive James in Las Vegas) for LWT (now ITV) and later switched to BBC, where he continued producing travel programmes, this time called Clive James' Postcard from... (beginning with Clive James' Postcard from Miami) - these also eventually transferred to ITV. He was also one of the original team of presenters of the BBC's The Late Show, hosting a round-table discussion on Friday nights.
His major documentary series Fame in the 20th Century (1993) was broadcast in the United Kingdom by the BBC, in Australia by the ABC and in the United States by the PBS network. This series dealt with the concept of "fame" in the 20th century, following over a course of eight episodes (each one chronologically and roughly devoted to one decade of the century, from the 1900s to the 1980s) discussions about world famous people of the 20th century. Through the use of film footage, James presented a history of "fame" which explored its growth to today's global proportions. In his closing monologue he remarked, "Achievement without fame can be a rewarding life, while fame without achievement is no life at all."
A well known fan of motor racing, James presented the 1982, 1984 and 1986 official Formula One season review videos produced by the Formula One Constructors Association, more commonly known as FOCA. James, who attended most F1 races during the 1980s and is a friend of FOCA boss Bernie Ecclestone, added his own humour to the reviews which became popular with fans of the sport. He also presented The Clive James Formula 1 Show for ITV to coincide with their Formula One coverage in 1997.
Summing up the medium, he has said: "Anyone afraid of what he thinks television does to the world is probably just afraid of the world".
In 2007, James started presenting the BBC Radio 4 series A Point of View, with transcripts appearing in the "Magazine" section of BBC News Online. In this programme James discussed various issues with a slightly humorous slant. Topics covered included media portrayal of torture,[10] young black role models[11] and corporate rebranding.[12] Three of James's broadcasts in 2007 were shortlisted for the 2008 Orwell Prize.[13]
In October 2009 James read a radio version of his book The Blaze of Obscurity, on BBC Radio 4's Book of the Week programme.[14]
In December 2009 James talked about the P-51 Mustang and other American fighter aircraft of World War II in The Museum of Curiosity on BBC Radio 4.[15]
In late 2009, James returned to presenting A Point of View for BBC Radio 4 with a series of thirteen talks.
In May 2011 the BBC published a new podcast, A Point of View: Clive James, which features all sixty A Point of View programmes presented by James between 2007 and 2009.
He has posted vlog conversations from his internet show Talking in the Library, including conversations with Ian McEwan, Cate Blanchett, Julian Barnes, Jonathan Miller and Terry Gilliam. In addition to the poetry and prose of James himself, the site features the works of other literary figures such as Les Murray and Michael Frayn, as well as the works of painters, sculptors and photographers such as John Olsen and Jeffrey Smart.
In 2008 James performed in two self-titled shows at the Edinburgh Comedy Festival: Clive James in Conversation and Clive James in the Evening. He took the latter show on a limited tour of the UK in 2009.
Honours and awards
In 1992, he was made a Member of the Order of Australia (AM). This was upgraded to Officer level (AO) in the 2013 Australia Day Honours.
James was appointed Commander of the Order of the British Empire (CBE) in the 2012 New Year Honours for services to literature and the media.[16]
In 2003 he was awarded the Philip Hodgins Memorial Medal for Literature. He has received honorary doctorates from the Universities of Sydney and East Anglia. In April 2008, James was awarded a Special Award for Writing and Broadcasting by the judges of the Orwell Prize.[17]
He was elected a Fellow of the Royal Society of Literature in 2010.[18]
Personal life
James is married to Prue Shaw, his wife of 45 years,[19] an emeritus reader in Italian studies at University College London and the author of Reading Dante: From Here to Eternity. James and Shaw have two daughters. In April 2012, the Australian Channel Nine programme A Current Affair ran an item in which the former model Leanne Edelsten admitted to an eight-year affair with James beginning in 2004.[20] Shaw threw her husband out of the family home as a result of the revelation.[19] Prior to this, for most of his working life, James divided his time between a converted warehouse flat in London and the family home in Cambridge. He maintained a general policy of not talking about his family publicly, although he has made occasional self-deprecating comments in his various memoirs about some of his experiences of living in a house with three women.
After the death of his friend Diana, Princess of Wales, James wrote a piece for The New Yorker entitled "I Wish I'd Never Met Her", recording his overwhelming grief.[21] Since then he has mainly declined to comment about their friendship, apart from some remarks in his fifth volume of memoirs Blaze of Obscurity.
James' political views have been prominent in much of his later writing. While a detractor of communism and socialism for their tendency towards totalitarianism, he still identifies himself with the left. In a 2006 interview in The Sunday Times,[22] James said of himself: "I was brought up on the proletarian left, and I remain there. The fair go for the workers is fundamental, and I don't believe the free market has a mind." In a speech given in 1991, he criticised privatisation: "The idea that Britain's broadcasting system—for all its drawbacks one of the country's greatest institutions—was bound to be improved by being subjected to the conditions of a free market: there was no difficulty in recognising that notion as politically illiterate. But for some reason people did have difficulty in realising that it was economically illiterate too."[23]
Overall, James identifies as a liberal social democrat.[24] He strongly supported the 2003 invasion of Iraq, saying in 2007 that "the war only lasted a few days" and that the continuing conflict in Iraq was "the Iraq peace".[25] He has also written that it was "official policy to rape a woman in front of her family" during Saddam Hussein's regime and that women have enjoyed more rights since the invasion.[26] He is also currently a Patron of the Burma Campaign UK, an organisation that campaigns for human rights and democracy in Burma.[27]
James has been noted for expressing views sympathetic to climate change scepticism.[28][29]
Describing religions as "advertising agencies for a product that doesn't exist", James is an atheist and sees this as the default and obvious position.[30][31]
James is able to read, with varying fluency, French, German, Italian, Spanish, Russian and Japanese.[32] A tango enthusiast, he has travelled to Buenos Aires for dance lessons and has a dance floor in his house which allows him to practice.[30]
For much of his life, James was a heavy drinker and smoker. He recorded in North Face of Soho his habit of filling a hubcap ashtray daily. At various times he wrote of attempts - intermittently successful - to give up drinking and smoking.[33] He admitted smoking 80 cigarettes a day for a number of years.[34] In April 2011, after media speculation that he had suffered kidney failure,[35] James confirmed that he was suffering from B-cell chronic lymphocytic leukemia and had been in treatment for 15 months at Addenbrooke's Hospital.[36] In an interview with BBC Radio 4 in June 2012, James admitted that the disease "had beaten him" and that he was "near the end".[37] He said that he was also diagnosed with emphysema and kidney failure in early 2010.[38]
On 3 September 2013, a television interview, Clive James: The Kid from Kogarah, was broadcast by the Australian Broadcasting Corporation (ABC) with James interviewed by journalist Kerry O'Brien. The interview was filmed at Cambridge University.
Year Genre Title Notes
1974 Non-fiction The Metropolitan Critic
1975 Poetry The Fate of Felicity Fark in the Land of the Media: a moral poem
1976 Poetry Peregrine Prykke's Pilgrimage Through the London Literary World
1976 Poetry Britannia Bright's Bewilderment in the Wilderness of Westminster
1977 Poetry Fan-mail: seven verse letters
1977 Non-fiction Visions Before Midnight: television criticism from the Observer 1972-76
1979 Non-fiction At the Pillars of Hercules
1980 Autobiography Unreliable Memoirs
1981 Poetry Charles Charming's Challenges on the Pathway to the Throne
1981 Non-fiction The Crystal Bucket: television criticism from the Observer 1976-79
1982 Non-fiction From the Land of Shadows
1983 Fiction Brilliant Creatures
1983 Poetry Poem of the Year
1983 Non-fiction Glued to the Box: television criticism from the Observer 1979–82
1984 Non-fiction Flying Visits: Postcards from the Observer, 1976–83
1985 Autobiography Falling Towards England
1986 Poetry Other Passports: poems 1958–1985
1987 Fiction The Remake
1988 Non-fiction Snakecharmers in Texas: essays 1980–87
1990 Autobiography May Week Was in June
1991 Non-fiction Clive James On Television A one-volume edition of the television criticism books
1991 Fiction Brrm! Brrm! Released in the United States as The Man From Japan (1993)
1992 Non-fiction The Dreaming Swimmer: non-fiction, 1987–1992
1993 Non-fiction Fame in the 20th Century
1996 Fiction The Silver Castle
2003 Poetry The Book of My Enemy Poetry and lyrics
2004 Non-fiction Even as We Speak: New Essays 1993–2001
2005 Non-fiction The Meaning of Recognition: New Essays 2001–2005
2006 Autobiography North Face of Soho
2007 Non-fiction Cultural Amnesia: Necessary Memories from History and the Arts
2009 Autobiography The Blaze of Obscurity
2009 Poetry Opal Sunset: Selected Poems 1958–2009
2009 Non-fiction The Revolt of the Pendulum: Essays 2005–2009
2011 Non-fiction A Point of View Reproductions of sixty BBC Radio 4 10-minute segments from 2007 to 2009
2013 Translation (Epic Poem) Dante's Divine Comedy Translation in quatrains, of Dante Alighieri's Divine Comedy
1. ^ a b James, C., Unreliable Memoirs, Pan Books, 1981, p.29.
2. ^ a b Decca Aitkenhead "Clive James: 'I would have been an obvious first choice for cocaine death. I could use up a lifetime's supply of anything in two weeks'", The Guardian, 25 May 2009
3. ^ James, C., 'Unreliable Memoirs', Pan Books, 1981, p.59 .
4. ^ James, C.,'May Week Was In June', Jonathan Cape, 1990, p.49 .
5. ^ Garner, Dwight (24 July 2007). "The Book of My Enemy". The New York Times.
6. ^ Peter Craven, 'Master craftsman's crowning glory,' at The Sydney Morning Herald, 1 June 2013.
7. ^ Peter Goldsworthy, 'Clive James's Dante is simply divine,' at The Australian, 1 June 2013.
8. ^ Joseph Luzzi,'This Could Be ‘Heaven,’ or This Could Be ‘Hell’, at New York Times, 19 April 2013.
9. ^ "The Observer, November 1976". Retrieved 24 December 2007.
10. ^ James, Clive (30 March 2007). "The clock's ticking on torture". BBC News Magazine. Retrieved 24 December 2007.
11. ^ "Young, gifted and black". BBC News Magazine. 23 March 2007. Retrieved 24 December 2007.
12. ^ James, Clive (16 February 2007). "The name-changing fidgets". BBC News Magazine. Retrieved 24 December 2007.
13. ^ "Shortlist 2008", The Orwell Prize
14. ^ "Book of the Week – The Blaze of Obscurity". BBC. 19 October 2009. Retrieved 19 October 2009.
15. ^ "Museum of Curiosity on Radio 4 web site". BBC. 25 December 2009. Retrieved 25 December 2009.
17. ^ Stephen Brook (25 April 2008). "Hari and James take Orwell prizes". London: The Guardian. Retrieved 25 April 2008.
19. ^ a b Robert McCrum "Clive James – a life in writing", The Guardian, 5 July 2013
20. ^ "Star’s secret affair". ninemsn: A Current Affair. 23 April 2012. Retrieved 26 June 2012.
21. ^ Clive James on Diana
22. ^ Appleyard, Bryan (12 November 2006). "Interview Clive James". The Times (London). Retrieved 30 April 2010.
23. ^ "On the Eve of Disaster"
24. ^ Arts Today with Michael Cathcart 12/12/2001
25. ^ "Bill Moyers talks with Cultural Critic, Clive James.". Retrieved 7 May 2009.
26. ^ "Still looking for the western feminists". BBC News. 22 May 2009. Retrieved 23 May 2009.
27. ^ "The Burma Campaign UK: AboutUs". Retrieved 24 December 2007.
28. ^ "Programme 1: On Climate Change". Retrieved 5 November 2011.
29. ^ Monbiot, George (2 November 2009). "Clive James isn't a climate change sceptic, he's a sucker - but this may be the reason". London: Retrieved 5 November 2011.
30. ^ a b "Enough Rope with Andrew Denton – episode 84: Clive James (04/07/2005)". Retrieved 16 September 2008.
31. ^ "Discussion between Richard Dawkins and Clive James at the Edinburgh Book Festival". Retrieved 27 August 2010.
32. ^ Haynes, Deborah (12 May 2007). "Culture vulture". The Times (London). Retrieved 30 April 2010.
33. ^ Smoking the Memory | In A Point of View he notes that this account of giving up smoking needed updating as he had gone back to it.
34. ^ "Smoking, my lost love". BBC News. 3 August 2007.
35. ^ "Clive James battles leukaemia"
36. ^ "I'm battling leukaemia, reveals broadcaster Clive James". London: Daily Mail. 30 April 2011. Retrieved 30 April 2011.
37. ^ "Clive James tells BBC "I am dying, I am near the end"". Belfast Telegraph. 21 June 2012. Retrieved 21 June 2012.
38. ^ "Clive James: 'I'm getting near the end'". BBC News: Entertainment and Arts. 21 June 2012. Retrieved 26 June 2012.
Cultural offices
Footlights President: preceded by Andrew Mayer; succeeded by Jonathan James-Moore
Deep Web
Deep Web (also called the Deepnet,[1] Invisible Web,[2] or Hidden Web[3]) is World Wide Web content that is not part of the Surface Web, which is indexed by standard search engines. It should not be confused with the dark Internet, the computers that can no longer be reached via the Internet, or with a Darknet distributed filesharing network, which could be classified as a smaller part of the Deep Web. Some prosecutors and government agencies think that the Deep Web is a haven for serious criminality.[4]
Bright Planet, a web-services company, describes the size of the Deep Web in this way:
It is impossible to measure or put estimates onto the size of the deep web because the majority of the information is hidden or locked inside databases. Early estimates suggested that the deep web is 4,000 to 5,000 times larger than the surface web. However, since more information and sites are always being added, it can be assumed that the deep web is growing exponentially at a rate that cannot be quantified. [8]
Estimates based on extrapolations from a study done at University of California, Berkeley in 2001[7] speculate that the deep web consists of about 7.5 petabytes. More accurate estimates are available for the number of resources in the deep Web: research of He et al. detected around 300,000 deep web sites in the entire Web in 2004,[9] and, according to Shestakov, around 14,000 deep web sites existed in the Russian part of the Web in 2006.[10]
Bergman, in a seminal paper on the deep Web published in The Journal of Electronic Publishing, mentioned that Jill Ellsworth used the term invisible Web in 1994 to refer to websites that were not registered with any search engine.[7] Bergman cited a January 1996 article by Frank Garcia:[11]
It would be a site that's possibly reasonably designed, but they didn't bother to register it with any of the search engines. So, no one can find them! You're hidden. I call that the invisible Web.
Another early use of the term Invisible Web was by Bruce Mount and Matthew B. Koll of Personal Library Software, in a description of the @1 deep Web tool found in a December 1996 press release.[12]
The first use of the specific term Deep Web, now generally accepted, occurred in the aforementioned 2001 Bergman study.[7]
Deep resources
Deep Web resources may be classified into one or more of the following categories:
• Dynamic content: dynamic pages which are returned in response to a submitted query or accessed only through a form, especially if open-domain input elements (such as text fields) are used; such fields are hard to navigate without domain knowledge.
• Unlinked content: pages which are not linked to by other pages, which may prevent Web crawling programs from accessing the content. This content is referred to as pages without backlinks (or inlinks).
• Private Web: sites that require registration and login (password-protected resources).
• Contextual Web: pages with content varying for different access contexts (e.g., ranges of client IP addresses or previous navigation sequence).
• Limited access content: sites that limit access to their pages in a technical way (e.g., using the Robots Exclusion Standard, CAPTCHAs, or no-cache Pragma HTTP headers which prohibit search engines from browsing them and creating cached copies.[13])
• Scripted content: pages that are only accessible through links produced by JavaScript as well as content dynamically downloaded from Web servers via Flash or Ajax solutions.
• Non-HTML/text content: textual content encoded in multimedia (image or video) files or specific file formats not handled by search engines.
Accessing the Deep Web
While it is not always possible to discover a specific web server's external IP address, theoretically almost any site can be accessed via its IP address, regardless of whether or not it has been indexed.
Certain content is intentionally hidden from the regular internet, accessible only with special software, such as Tor. Tor allows users to access websites using the .onion host suffix anonymously, hiding their IP address. Other such software includes I2P and Freenet.
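As an illustration (not drawn from the article's sources), the sketch below shows how a hidden service might be fetched programmatically through a local Tor daemon. It assumes Tor is already running on its default SOCKS port 9050 and that the Python requests library is installed with SOCKS support (requests[socks]); the .onion address is a placeholder, not a real site.

```python
# Sketch only: fetch a Tor hidden service through a local SOCKS proxy.
# Assumes a Tor daemon on 127.0.0.1:9050 and requests installed with
# SOCKS support (pip install requests[socks]).
import requests

TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: let Tor resolve the name,
    "https": "socks5h://127.0.0.1:9050",  # so .onion addresses never touch local DNS
}

def fetch_onion(url: str, timeout: float = 60.0) -> str:
    """Return the body of a hidden-service page fetched via Tor."""
    response = requests.get(url, proxies=TOR_PROXY, timeout=timeout)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    # Placeholder address; substitute a real hidden-service URL.
    print(fetch_onion("http://exampleonionaddress.onion/")[:300])
```

The socks5h scheme matters here: it asks the proxy, rather than the local resolver, to resolve the hostname, which is what allows .onion names to be reached at all.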
In 2008, in order to facilitate user access and search engine indexing of hidden services using the .onion suffix, Aaron Swartz designed Tor2web a proxy software able to provide access to Tor hidden services by means of common web browsers.[14]
To discover content on the Web, search engines use web crawlers that follow hyperlinks through known protocol virtual port numbers. This technique is ideal for discovering resources on the surface Web but is often ineffective at finding Deep Web resources. For example, these crawlers do not attempt to find dynamic pages that are the result of database queries due to the indeterminate number of queries that are possible.[5] It has been noted that this can be (partially) overcome by providing links to query results, but this could unintentionally inflate the popularity for a member of the deep Web.
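To make that limitation concrete, here is a minimal, standard-library-only sketch of the kind of link-following crawler described above. It is a toy under obvious assumptions (single-threaded, no politeness rules, no robots.txt handling); the point is simply that it only discovers pages reachable by hyperlink, so anything sitting behind a query form never enters the frontier.

```python
# Toy surface-Web crawler: breadth-first over <a href> links only.
# Standard library only; no politeness or robots.txt handling.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects href values from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed: str, max_pages: int = 20) -> set:
    seen, frontier = set(), [seed]
    while frontier and len(seen) < max_pages:
        url = frontier.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except Exception:
            continue  # unreachable, non-HTML, or otherwise unusable
        parser = LinkExtractor()
        parser.feed(html)
        # Only hyperlinks are enqueued; <form> targets are ignored, which is
        # exactly why database-backed result pages stay invisible.
        frontier.extend(urljoin(url, link) for link in parser.links)
    return seen

if __name__ == "__main__":
    print(crawl("https://example.com/"))
```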
DeepPeep, Intute, Deep Web Technologies, and Scirus are a few search engines that have accessed the Deep Web. Intute ran out of funding and is now a temporary static archive as of July 2011.[15] Scirus retired near the end of January 2013.[16]
Crawling the Deep Web
Researchers have been exploring how the Deep Web can be crawled in an automatic fashion. In 2001, Sriram Raghavan and Hector Garcia-Molina (Stanford Computer Science Department, Stanford University)[17][18] presented an architectural model for a hidden-Web crawler that used key terms provided by users or collected from the query interfaces to query a Web form and crawl the Deep Web resources. Alexandros Ntoulas, Petros Zerfos, and Junghoo Cho of UCLA created a hidden-Web crawler that automatically generated meaningful queries to issue against search forms.[19] Several form query languages (e.g., DEQUEL[20]) have been proposed that, besides issuing a query, also allow structured data to be extracted from result pages. Another effort is DeepPeep, a project of the University of Utah sponsored by the National Science Foundation, which gathered hidden-Web sources (Web forms) in different domains based on novel focused crawler techniques.[21][22]
Commercial search engines have begun exploring alternative methods to crawl the deep Web. The Sitemap Protocol (first developed and introduced by Google in 2005) and mod_oai are mechanisms that allow search engines and other interested parties to discover deep Web resources on particular Web servers. Both mechanisms allow Web servers to advertise the URLs that are accessible on them, thereby allowing automatic discovery of resources that are not directly linked to the surface Web. Google's deep Web surfacing system pre-computes submissions for each HTML form and adds the resulting HTML pages into the Google search engine index. The surfaced results account for a thousand queries per second to deep Web content.[23] In this system, the pre-computation of submissions is done using three algorithms:
1. selecting input values for text search inputs that accept keywords,
2. identifying inputs which accept only values of a specific type (e.g., date), and
3. selecting a small number of input combinations that generate URLs suitable for inclusion into the Web search index.
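The three pre-computation steps above can be sketched, in hedged form, against a toy in-memory description of a hypothetical search form. The keyword list, the typed inputs, and the combination limit below are all invented stand-ins for what the production system derives from the site itself.

```python
# Toy stand-in for the three pre-computation steps; the form description,
# seed keywords and combination limit are invented for illustration only.
from itertools import islice, product

form = {
    "q":    {"kind": "text"},                            # free-text keyword box
    "year": {"kind": "select", "values": ["2007", "2008"]},
    "lang": {"kind": "select", "values": ["en", "fr", "de"]},
}

seed_keywords = ["jazz", "physics", "recipes"]           # step 1: keyword values

def candidate_values(spec):
    if spec["kind"] == "text":
        return seed_keywords                             # step 1: text inputs
    return spec["values"]                                # step 2: typed inputs

def candidate_submissions(form, limit=10):
    """Step 3: keep only a small number of input combinations."""
    names = list(form)
    combos = product(*(candidate_values(form[name]) for name in names))
    for combo in islice(combos, limit):
        yield dict(zip(names, combo))

for submission in candidate_submissions(form):
    print(submission)  # each dict would become one crawlable result URL
```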
Classifying resources
Automatically determining if a Web resource is a member of the surface Web or the deep Web is difficult. If a resource is indexed by a search engine, it is not necessarily a member of the surface Web, because the resource could have been found using another method (e.g., the Sitemap Protocol, mod_oai, OAIster) instead of traditional crawling. If a search engine provides a backlink for a resource, one may assume that the resource is in the surface Web. Unfortunately, search engines do not always provide all backlinks to resources. Furthermore, a resource may reside in the surface Web even though it has yet to be found by a search engine.
Most of the work of classifying search results has been in categorizing the surface Web by topic. For classification of deep Web resources, Ipeirotis et al.[24] presented an algorithm that classifies a deep Web site into the category that generates the largest number of hits for some carefully selected, topically-focused queries. Deep Web directories under development include OAIster at the University of Michigan, Intute at the University of Manchester, Infomine[25] at the University of California at Riverside, and DirectSearch (by Gary Price). This classification poses a challenge while searching the deep Web whereby two levels of categorization are required. The first level is to categorize sites into vertical topics (e.g., health, travel, automobiles) and sub-topics according to the nature of the content underlying their databases.
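The probe-and-count idea behind that algorithm can be sketched as follows. The probe vocabularies and the canned hit counts are invented for illustration; a real implementation would submit each probe to the site's own search form and parse the reported result count.

```python
# Probe-count-classify sketch: categories, probe terms and hit counts are
# all invented; a real system queries the site itself.
PROBES = {
    "health": ["diabetes", "cardiology", "vaccine"],
    "travel": ["itinerary", "hotel", "visa"],
    "automobiles": ["sedan", "horsepower", "dealership"],
}

_TOY_HITS = {"diabetes": 120, "cardiology": 45, "vaccine": 60,
             "itinerary": 2, "hotel": 9, "visa": 1,
             "sedan": 0, "horsepower": 3, "dealership": 0}

def count_hits(site: str, query: str) -> int:
    # Stand-in: a real implementation submits `query` to the site's search
    # form and parses the "N results found" figure from the response page.
    return _TOY_HITS.get(query, 0)

def classify(site: str) -> str:
    """Assign the category whose probe queries return the most hits."""
    scores = {
        category: sum(count_hits(site, probe) for probe in probes)
        for category, probes in PROBES.items()
    }
    return max(scores, key=scores.get)

print(classify("https://example-medical-archive.example/"))  # -> "health"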
The more difficult challenge is to categorize and map the information extracted from multiple deep Web sources according to end-user needs. Deep Web search reports cannot display URLs like traditional search reports. End users expect their search tools not only to find what they are looking for, but also to be intuitive and user-friendly. In order to be meaningful, the search reports have to offer some depth to the nature of content that underlies the sources, or else the end-user will be lost in the sea of URLs that do not indicate what content lies beneath them. The format in which search results are to be presented varies widely by the particular topic of the search and the type of content being exposed. The challenge is to find and map similar data elements from multiple disparate sources so that search results may be exposed in a unified format on the search report irrespective of their source.
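One minimal way to picture that mapping step is to keep a per-source field map and normalize every record into a single unified schema before it reaches the search report. The source names, field names, and records below are all hypothetical.

```python
# Hypothetical field maps for two invented sources, normalized into one
# unified record schema for a single combined search report.
FIELD_MAPS = {
    "journal_index":     {"ttl": "title", "auth": "creator", "yr": "year"},
    "library_catalogue": {"Title": "title", "Author": "creator", "Date": "year"},
}

def normalize(source: str, record: dict) -> dict:
    mapping = FIELD_MAPS[source]
    unified = {target: record.get(raw) for raw, target in mapping.items()}
    unified["source"] = source  # keep provenance for the report
    return unified

rows = [
    ("journal_index",     {"ttl": "Accessing the Deep Web", "auth": "He et al.", "yr": "2007"}),
    ("library_catalogue", {"Title": "Crawling the Hidden Web", "Author": "Raghavan", "Date": "2001"}),
]
for source, record in rows:
    print(normalize(source, record))
```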
1. ^ Hamilton, Nigel. The Mechanics of a Deep Net Metasearch Engine. CiteSeerX:
2. ^ Devine, Jane; Egger-Sider, Francine (July 2004). "Beyond google: the invisible web in the academic library". The Journal of Academic Librarianship 30 (4): 265–269. Retrieved 2014-02-06.
3. ^ Raghavan, Sriram; Garcia-Molina, Hector (11–14 September 2001). "Crawling the Hidden Web". 27th International Conference on Very Large Data Bases (Rome, Italy).
4. ^ The Secret Web: Where Drugs, Porn and Murder Live Online
5. ^ a b Wright, Alex (2009-02-22). "Exploring a 'Deep Web' That Google Can’t Grasp". The New York Times. Retrieved 2009-02-23.
6. ^ Bergman, Michael K (July 2000). The Deep Web: Surfacing Hidden Value. BrightPlanet LLC.
7. ^ a b c d Bergman, Michael K (August 2001). "The Deep Web: Surfacing Hidden Value". The Journal of Electronic Publishing 7 (1). doi:10.3998/3336451.0007.104.
8. ^ "Deep Web: A Primer". BrightPlanet. Retrieved June 7, 2014.
9. ^ He, Bin; Patel, Mitesh; Zhang, Zhen; Chang, Kevin Chen-Chuan (May 2007). "Accessing the Deep Web: A Survey". Communications of the ACM (CACM) 50 (2): 94–101. doi:10.1145/1230819.1241670.
10. ^ Denis Shestakov (2011). "Sampling the National Deep Web" (PDF). Proceedings of the 22nd International Conference on Database and Expert Systems Applications (DEXA) (in Russian). pp. 331–340. Archived from the original on September 2, 2011. Retrieved 2011-10-06.
11. ^ Garcia, Frank (January 1996). "Business and Marketing on the Internet". Masthead 15 (1). Archived from the original on 1996-12-05. Retrieved 2009-02-24.
12. ^ @1 started with 5.7 terabytes of content, estimated to be 30 times the size of the nascent World Wide Web; PLS was acquired by AOL in 1998 and @1 was abandoned. "PLS introduces AT1, the first 'second generation' Internet search service" (Press release). Personal Library Software. December 1996. Retrieved 2009-02-24.
13. ^ "HTTP 1.1: Header Field Definitions (14.32 Pragma)". HTTP — Hypertext Transfer Protocol. World Wide Web Consortium. 1999. Retrieved 2009-02-24.
14. ^ Aaron, Swartz. "In Defense of Anonymity". Retrieved 4 February 2014.
15. ^ "Intute FAQ". Retrieved October 13, 2012.
16. ^
17. ^ Sriram Raghavan; Garcia-Molina, Hector (2000). Crawling the Hidden Web (PDF). Stanford Digital Libraries Technical Report. Retrieved 2008-12-27.
18. ^ Raghavan, Sriram; Garcia-Molina, Hector (2001). "Crawling the Hidden Web" (PDF). Proceedings of the 27th International Conference on Very Large Data Bases (VLDB). pp. 129–38.
19. ^ Alexandros, Ntoulas; Zerfos, Petros; Cho, Junghoo (2005). Downloading Hidden Web Content (PDF). UCLA Computer Science. Retrieved 2009-02-24.
21. ^ Barbosa, Luciano; Freire, Juliana (2007). An Adaptive Crawler for Locating Hidden-Web Entry Points (PDF). WWW Conference 2007. Retrieved 2009-03-20.
22. ^ Barbosa, Luciano; Freire, Juliana (2005). Searching for Hidden-Web Databases.. WebDB 2005. Retrieved 2009-03-20.
23. ^ Madhavan, Jayant; Ko, David; Kot, Łucja; Ganapathy, Vignesh; Rasmussen, Alex; Halevy, Alon (2008). Google’s Deep-Web Crawl (PDF). VLDB Endowment, ACM. Retrieved 2009-04-17.
24. ^ Ipeirotis, Panagiotis G.; Gravano, Luis; Sahami, Mehran (2001). "Probe, Count, and Classify: Categorizing Hidden-Web Databases" (PDF). Proceedings of the 2001 ACM SIGMOD International Conference on Management of Data. pp. 67–78.
25. ^
Diversity (business)
For financial strategy, see Diversification (finance).
The "business case for diversity" stem from the progression of the models of diversity within the workplace since the 1960s. The original model for diversity was situated around affirmative action drawing strength from the law and a need to comply with equal opportunity employment objectives. This compliance-based model gave rise to the idea that tokenism was the reason an individual was hired into a company when they differed from the dominant group.
The social justice model evolved next and extended the idea that individuals outside of the dominant group should be given opportunities within the workplace, not only because it was the law, but because it was the right thing to do. This model still revolved around the idea of tokenism, but it also brought in the notion of hiring based on a "good fit".
Beyond having a workforce that mirrors the changing demographics of the global consumer market, and being better able to understand consumers' desires and preferences, productivity and costs can be analyzed to assist in building the business case for diversity. In the deficit model, organizations that do not have a strong diversity inclusion culture will invite lower productivity, higher absenteeism, and higher turnover, which will result in higher costs to the company.[1]
Classification of workplaces
In a journal article entitled "The multicultural organization" by Taylor Cox, Jr., Cox talks about three organization types that focus on the development of cultural diversity. The three types are: the monolithic organization, the plural organization, and the multicultural organization. In the monolithic organization, the amount of structural integration (the presence of persons from different cultural groups in a single organization) is minimal. This type of organization may have minority members within the workforce, but not in positions of leadership and power.[2]
The plural organization has a more heterogeneous membership than the monolithic organization and takes steps to be more inclusive of persons from cultural backgrounds that differ from the dominant group. This type of organization seeks to empower those from a marginalized standpoint to encourage opportunities for promotion and positions of leadership.[2]
The multicultural organization not only contains many different cultural groups, but it values this diversity. It encourages healthy conflict as a source of avoiding groupthink.[3]
Role of leadership
A study of successful multicultural organizations, as opposed to monolithic and plural organizations, can be understood by applying theories of leadership which have evolved over time. Trait leadership theory suggests that leadership is dependent on physical and social attributes of the individual and is greatly based on European cultures.[4] Situational leadership, which balances managing relationship behavior against the tasks at hand,[5] underscores multicultural organizations.
The combination of "transformational leadership" and "discursive leadership" allows and encourages mid-level managers to use diversity as an influential resource in order to enhance organizational effectiveness. In the Journal of Applied Behavioral Science, C.L. Walck defines managing diversity in the workplace as "Negotiating interaction across culturally diverse groups, and contriving to get along in an environment characterized by cultural diversity".[6]
On one hand, there is a lack of documented evidence for the claimed benefits to the organisation and the individual.
On the other hand, diversity is claimed to bring substantial potential benefits such as better decision making and improved problem solving, greater creativity and innovation, which leads to enhanced product development, and more successful marketing to different types of customers.[7][2] Diversity provides organizations with the ability to compete in global markets.[8] Simply recognizing diversity in a corporation helps link the variety of talents within the organization.[9] The act of recognizing diversity also allows for those employees with these talents to feel needed and have a sense of belonging, which in turn increases their commitment to the company and allows each of them to contribute in a unique way.[10]
Standpoint theory suggests that marginalized groups bring a different perspective to an organization that challenges the status quo, since their socially constructed world view will differ from that of the dominant group.[11] Although the standpoint of the dominant group will often carry more weight, encouraging conflicting standpoints to coexist within an organization creates a forum for sanctioned conflict to ensue. Standpoint theory gives a voice to those in a position to see patterns of behavior that those immersed in the culture have difficulty acknowledging.[12] From this perspective, these unique and varying standpoints help to eradicate groupthink which can develop within a homogenous group.[7] Scott Page's (2007)[13] mathematical modeling research of teamwork reflects this view. His models demonstrated that heterogeneous teams consistently out-performed homogeneous teams on a variety of tasks. Page points out, however, that diversity in teamwork is not always simple and that there are many challenges to fostering an inclusive environment in the workplace for diversity of thought and ideas.
One of the greatest challenges an organization has when trying to adopt a more inclusive environment is assimilation for any member outside of the dominant group. The interplay between power, ideology, and discursive acts which reinforce the hegemonic structure of organizations is the subject of much study.[14] Everything from organizational symbols, rituals, and stories serves to maintain the position of power held by the dominant group.[14]
When organizations hire or promote individuals that are not part of this dominant group into management positions, a tension develops between the socially constructed organizational norm and acceptance of cultural diversity. Often these individuals are mentored and coached to adopt the necessary traits for inclusion into the privileged group as opposed to being embraced for their differences.[7][11] According to the journal article "Cultural Diversity in the Workplace: The State of the Field", Marlene G. Fine explains that "those who assimilate are denied the ability to express their genuine selves in the workplace; they are forced to repress significant parts of their lives within a social context that frames a large part of their daily encounters with other people". Fine goes on to mention that "People who spend significant amounts of energy coping with an alien environment have less energy left to do their jobs. Assimilation does not just create a situation in which people who are different are likely to fail, it also decreases the productivity of organizations".[8] That is, with a diverse workforce, management may have to work harder to reach the same level of productivity as with a less diverse workforce.
Another challenge faced by organizations striving to foster a more diverse workforce is the management of a diverse population. Managing diversity is more than simply acknowledging differences in people.[15] A number of organizational theorists have suggested that work-teams which are highly diverse can be difficult to motivate and manage for a variety of reasons. A major challenge is miscommunication within an organization. Fine reported a study of "work groups that were culturally diverse and found that cross-cultural differences led to miscommunication."[16] That is, a diverse workforce led to challenges for management. The meaning of a message can never be completely shared because no two individuals experience events in exactly the same way. Even when native and non-native speakers are exposed to the same messages, they may interpret the information differently.[17] There are competencies, however, which help to develop effective communication in diverse organizational environments. These skills include self-monitoring, empathy, and strategic decision-making.
Maintaining a culture which supports the idea of employee voice (especially for marginalized group members) is another challenge for diverse organisations. When the organizational environment is not supportive of dissenting viewpoints, employees may choose to remain silent for fear of repercussions,[18] or they may seek alternative safe avenues to express their concerns and frustrations such as on-line forums and affinity group meetings.[19] By finding opportunities such as these to express dissent, individuals can begin to gather collective support and generate collective sense-making which creates a voice for the marginalized members so they can have a collective voice to trigger change.[18]
Strategies to achieve diversity
Three approaches towards corporate diversity management can be distinguished: Liberal Change, Radical Change, and Transformational Change.[20]
Liberal change
The liberal concept recognizes equality of opportunity in practice when all individuals are enabled freely and equally to compete for social rewards. The aim of the liberal change model is to have a fair labor market from which the best person is chosen for a job based solely on performance. To support this concept, a framework of formal rules has been created, and policymakers are responsible for ensuring that these rules are enforced for all so that no one is discriminated against. The liberal-change approach centers on law, compliance, and legal penalties for non-compliance.
One weakness of the liberal view is that the formal rules cannot cover every aspect of work life, as there is almost always an informal aspect to work such as affinity groups, hidden transcripts, and alternative informal communication channels.[21][22]
Radical changes
In contrast to the liberal approach, radical change seeks to intervene directly in workplace practices in order to achieve balanced workforces, as well as a fair distribution of rewards among employees. The radical approach is thus more outcome-focused than focused on forming the rules to ensure equal treatment.[22] One major tool of radical change is quotas, which are set by companies or national institutions with the aim of regulating the diversity of the workforce and equal opportunities.
Arguments for and against quota systems in companies or public institutions include contrasting ideas such as quotas
• compensating for actual barriers that prevent marginalized members from attaining their fair share of managerial positions
• being against equal opportunity for all and implying that a marginalized member only got the position to fill the quota.[23]
Sweden's quota system for parliamentary positions is a positive case for radical change through quota setting.[24]
A quota system was introduced at the Swedish parliament with the aim of ensuring that women constitute at least a 'critical minority' of 30 or 40 percent of all parliament seats. Since the introduction of the system, women's representation in parliament has risen dramatically, even above the defined quota. Today, 47% of parliamentary representatives are women, a number which stands out compared to the global average of 19%.[citation needed]
Transformational change
Transformational change covers an equal opportunity agenda for both the immediate need and long-term solutions.[25] For the short term it implements new measures to minimize bias in procedures such as recruitment or promotion. The long term, however, is seen as a project of transformation for organizations. This approach acknowledges the existence of power systems and seeks to challenge the existing hegemony through implementation of equality values.
One illustrative case for transformational change is ageing management.[26] Younger employees are seen as more innovative and flexible, while older employees are associated with higher costs of salary, benefits, and healthcare needs.[27] Therefore, companies may prefer young workers to older staff. Through application of the transformational concept, an immediate intervention provides needed relief while a longer-term culture shift occurs.
For the short term, an organization can set up legislation preventing discrimination based on age (e.g., the Age Discrimination in Employment Act). For the long-term solution, however, negative stereotypes of older employees need to be replaced with the positive realization that older employees can add value to the workplace through their experience and knowledge base.[28] To balance this idea with the benefit of innovation and flexibility that comes with youth, a mixture of ages in the workforce is ideal.[29] Through transformational change, the short-term solution affords the organization the time necessary to enact deep-rooted culture changes leading to a more inclusive environment.
Intentional "diversity programs" can assist organisations facing rapid demographic changes in their local consumer market and labor pool by helping people work and understand one other better.[7] Resources exist through best practice cases of organizations that have successfully created inclusive environments supporting and championing diversity. An example such resources is MentorNet,[30] a nonprofit online mentoring organization that focuses on women and under-represented minorities in the science, technology, engineering and mathematics fields.
Implementing diversity inclusion initiatives must start with commitment from the top. With a commitment from top leaders in an organization to change the existing culture to one of diversity inclusion, the diversity change management process can succeed. This process includes analyzing where the organization currently stands through a diversity audit, creating a strategic action plan, gaining support by seeking stakeholder input, and holding individuals accountable through measurable results.[7]
1. ^ Harvey, Carol P. (2012). Understanding and Managing Diversity. New Jersey: Pearson Education, Inc. pp. 51–55. ISBN 0-13-255311-2.
2. ^ a b c Cox, Jr., Taylor (1991). "The Multicultural Organization". Academy of Management Executive, 5(2), 34-47.
3. ^ Harvey, Carol P. (2012). Understanding and Managing Diversity. New Jersey: Pearson Education, Inc. pp. 41–47. ISBN 0-13-255311-2.
4. ^ Eisenberg, Eric M.; H.L. Goodall, Jr. & Angela Trethewey (2010). Organizational Communication (6th ed.). St. Martin's: Bedford. pp. 250–58. ISBN 978-0-312-57486-4.
5. ^ "Center for Leadership Studies, Inc.". Retrieved 2006.
6. ^ Walck, C.L. (1995). Editor's introduction: "Diverse approaches to managing diversity". Journal of Applied Behavioral Science, 31, 119-23).
8. ^ a b Fine, Marlene G. (1996). "Cultural Diversity in the Workplace: The State of the Field". Journal of Business Communication, 33(4), 485-502.
9. ^ De Pree, Max. Leadership is an Art. New York: Doubleday Business, 1989. print
11. ^ a b Allen, Brenda J. (September 1995). "Diversity and Organizational Communication". Journal of Applied Communication Research 23: 143–55. doi:10.1080/00909889509365420.
12. ^ Allen, Brenda J. (December 1996). "Feminist Standpoint Theory: a Black Woman's Review of Organizational Socialization". Communication Studies 47 (4): 257–71. doi:10.1080/10510979609368482.
14. ^ a b Mumby, Dennis (1988). Communication and Power in Organizations. New York: Ablex Publishing. pp. 1–210. ISBN 978-1-56750-160-5.
16. ^ p. 494. Fine, Marlene G. (1996). "Cultural Diversity in the Workplace: The State of the Field". Journal of Business Communication, 33(4), 485-502.
17. ^ Brownell, Judi (2003). "Developing Receiver-Centered Communication in Diverse Organizations". Listening Professional, 2(1), 5-25
18. ^ a b Milliken, Frances J.; Elizabeth W. Morrison and Patricia F. Hewlin (September 2003). "An Exploratory Study of Employee Silence: Issues that Employees Don't Communicate Upward and Why". Journal of Management Studies 40 (6): 1453–76. doi:10.1111/1467-6486.00387.
19. ^ Gossett, Loril M.; Julian Kilker (August 2006). "My Job Sucks: Examining Counterinstitutional Web Sites as Locations for Organizational Member Voice, Dissent, and Resistance". Management Communication Quarterly 20 (1): 63–90. doi:10.1177/0893318906291729.
20. ^ Tatli, Ahu; M. Ozbilgin (22 July 2009). "Understanding Diversity Managers' Role in Organizational Change: Towards a Conceptual Framework". Canadian Journal of Administrative Sciences 26: 244–58. doi:10.1002/CJAS.107.
21. ^ Jewson, Nick; Mason, David. Sociological Review, May 1986, Vol. 34 Issue 2, p307-34, 28p
22. ^ a b Cynthia Cockburn, 1989, "Equal Opportunities: the short and long agenda", Industrial Relations Journal, 20 (3): 213-25
23. ^ N. Jewson and D. Mason, 1986, "The theory of equal opportunity policies: liberal and radical approaches", Sociological Review, 34 (2)
24. ^ "Increasing Women’s Political Representation: New Trends in Gender Quotas", in Ballington and Karam, eds. International IDEA. 2005: Women in Parliament. Beyond Numbers (revised edition) and Drude Dahlerup, ed., Women, Quotas and Politics. Routledge 2006 7Drude Dahlerup & Lenita Freidenvall, "Gender Quotas in politics — A Constitutional Challenge", in Susan H. Williams, ed., Constituting Equality. Gender Equality and Comparative Constitutional Law. Cambridge University Press 2009.
25. ^ C. Cockburn, 1989, "Equal Opportunities: the short and long agenda", Industrial Relations Journal, 20 (3): 213-25
26. ^ V. Pahl, "Altern und Arbeit – Chancengleichheit für alle Altersgruppen", in C. von Rothkirch, Altern und Arbeit: Herausforderung für Wirtschaft und Gesellschaft, Sigma Rainer Bohn Verlag, 2000
27. ^ L. Brooke, "Human resource costs and benefits of maintaining a mature-age workforce", International Journal of Manpower, 24 (3): 260-83
28. ^ J. Ilmarinen, "Die Arbeitsfähigkeit kann mit dem Alter steigen", in C. von Rothkirch, Altern und Arbeit: Herausforderungen für Wirtschaft und Gesellschaft, Sigma Rainer Bohn Verlag, 2000
29. ^ R. Guest & K. Shacklock, "The impending shift to an older mix of workers: perspectives from the management and economics literature", International Journal of Organisational Behaviour, 10 (3): 713-728
30. ^ http://www.mentornet.net
Jane Curtin
Jane Curtin
Jane Curtin at the 41st Emmy Awards, September 1989
Born Jane Therese Curtin
September 6, 1947
Cambridge, Massachusetts, U.S.
Occupation Actress/Comedian
Years active 1968–present
Spouse(s) Patrick Lynch (m. 1975)
Children Tess Lynch (b. 1983)
Jane Therese Curtin (born September 6, 1947) is an American actress and comedian. She is sometimes referred to as "Queen of the Deadpan"; The Philadelphia Inquirer once called her a "refreshing drop of acid".[1] She was included on a 1986 list of the "Top Prime Time Actors and Actresses of All Time".[2]
First coming to prominence as an original cast member on the hit TV comedy series Saturday Night Live in 1975, she went on to win back-to-back Emmy Awards for Best Lead Actress in a Comedy Series on the 1980s sitcom Kate & Allie portraying the role of Allison "Allie" Lowell. Curtin later starred in the hit series 3rd Rock from the Sun (1996–2001), playing the role of Dr. Mary Albright.
Curtin has also appeared in many movie roles, including Charlene in The Librarian series of movies (2004–2008). She also reprised one of her Saturday Night Live characters, Prymaat (Clorhone) Conehead, in the 1993 film Coneheads.
Personal life
Curtin was born in Cambridge, Massachusetts, the daughter of Mary Constance (née Farrell) and John Joseph Curtin, who owned an insurance agency.[3] She has a brother, Larry Curtin, who lives in South Florida;[4] their older brother, John J. "Jack" Curtin, died in 2008.[5] She was raised Roman Catholic. She grew up in Wellesley, Massachusetts, and graduated from Convent of the Sacred Heart, Elmhurst Academy in Portsmouth, Rhode Island, in 1965.[6] Curtin is a cousin of actress and writer Valerie Curtin. She attended Northeastern University in Boston.
She married Patrick Francis Lynch (a television producer) on April 2, 1975; they have one daughter, Tess Curtin Lynch.[7] They live in Sharon, Connecticut.
Curtin holds an associate degree from Elizabeth Seton Junior College in New York City. She has served as a U.S. Committee National Ambassador for UNICEF. In 1968, Curtin decided to pursue comedy as a career and dropped out of college. She joined a comedy group, "The Proposition", and performed with them until 1972. She starred in Pretzels, an off-Broadway play written by Curtin, John Forster, Judith Kahan and Fred Grandy, in 1974.
Saturday Night Live
One of the original "Not Ready For Prime Time Players" for NBC's Saturday Night Live (1975), Curtin remained on the show through the 1979–1980 season. Guest host Eric Idle said that Curtin was "very much a 'Let's come in, let's know our lines, let's do it properly, and go' ... She was very sensible, very focused", and disliked the drug culture that many of the cast participated in. Show writer Al Franken stated that she "was so steady. Had a really strong moral center, and as such was disgusted by much of the show and the people around it".[8]
On this show, and mirroring her own low-key real life, she frequently played straight-woman characters, often as a foil to John Belushi and Gilda Radner. Curtin anchored SNL's "Weekend Update" segment in 1976–77, and was paired with Dan Aykroyd in 1977–78 and Bill Murray in 1978–80.
As a TV anchorwoman, Curtin played as a foil to John Belushi, who often gave a rambling and out-of-control "commentary" on events of the day. During these sketches, she timidly tried to get Belushi to come to the point, which would only make him angrier. In a well-known sketch, Belushi gave a rambling account of his Irish friend's troubles to demonstrate there was no such thing as "the luck of the Irish".
Curtin's newscaster also introduced baseball expert Chico Escuela (Garrett Morris), a heavily accented Dominican, who started his sketches by saying, "Thank you, Hane", before repeating his famous catchphrase, "Baseball been bery, bery good to me!"
In a parody of the "Point-Counterpoint" segment of the news program 60 Minutes, Curtin portrayed a controlled liberal viewpoint (à la Shana Alexander) vs. Dan Aykroyd, who (in the manner of James J. Kilpatrick) epitomized the right-wing view, albeit with an over-the-top "attack" journalist slant. Curtin presented the liberal "Point" portion first. Then Aykroyd presented the "Counterpoint" portion, sometimes beginning with the statement, "Jane, you ignorant slut," to which she replied, "Dan, you pompous ass." The recurring segment has been discussed in an article on "How to Respectfully Disagree" in The Chronicle of Higher Education.[9]
Curtin is also well known for her role in the Conehead sketches as "Prymaat Conehead" (mother of the Conehead family), and as "Enid Loopner" (in sketches with Gilda Radner and Bill Murray). The movie Miracle makes a reference to the Coneheads; its opening scene shows brief footage of the Conehead sketch from SNL.
She is one of many cast members who appear in the retrospective compilation DVD The Women of SNL (2010, 97 minutes).[10]
Later television work
Unlike many of her fellow SNL cast members who ventured successfully into film, Curtin chose to stay in television. Her film appearances have been sporadic. To date, she has starred in two long-running television sitcoms. First, in Kate & Allie (1984–89), with Susan Saint James, she played a single mother named "Allie Lowell" and twice won the Emmy Award for Best Lead Actress in a Comedy Series.
She later joined the cast of 3rd Rock from the Sun (1996–2001) playing a human, Dr. Mary Albright, opposite the alien family, composed of John Lithgow, Kristen Johnston, French Stewart, and Joseph Gordon-Levitt. As with SNL, her mostly straight-laced character was often confounded by the zany and whimsical antics of the Solomon family.
In 1997, Curtin narrated two episodes of the documentary television series Understanding.[11]
Curtin also starred with Fred Savage in the ABC sitcom Crumbs, which debuted in January 2006 and was canceled in May of that year. She also guest-starred on Gary Unmarried as Connie, Allison's mother.[12] In 2012, she joined Unforgettable as Dr. Joanne Webster, a gifted but crusty medical examiner.
In 1980, Curtin starred with Susan Saint James and Jessica Lange in the moderate hit How to Beat the High Cost of Living. In 1993, Curtin and Dan Aykroyd were reunited in Coneheads, a full-length motion picture based on their popular SNL characters. They also appeared together as the voices of a pair of wasps in the film Antz. In 2009, she played Paul Rudd and Andy Samberg's mother in I Love You, Man.
Other work
Curtin has also performed on Broadway on occasion. She first appeared on the Great White Way as Miss Proserpine Garnett in the play Candida in 1981. She later went on to be a replacement actress in two other plays, Love Letters and Noises Off, and was in the 2002 revival of Our Town, which received huge press attention as Paul Newman returned to the Broadway stage after several decades away.
She also has narrated several audio books, including Carl Hiaasen's novel Nature Girl.
On May 7, 2010, Curtin placed second in the Jeopardy! Million Dollar Celebrity Invitational, winning $250,000 for the U.S. Fund for UNICEF. Michael McKean won the tournament, while Cheech Marin came in third.
She presented the Emmy Awards in 1984, 1987, and 1998; the 11th Annual American Comedy Awards in 1997; and the 54th Annual Golden Globe Awards in 1997.[7][13]
Year Title Role Notes
1979 Mr. Mike's Mondo Video Herself/Cameo
1980 How to Beat the High Co$t of Living Elaine
1985 O.C. and Stiggs Elinore Schwab
1993 Coneheads Prymatt Conehead / Mary Margaret DeCicco
1998 Antz Muffy (voice)
2003 Recess: All Growed Down Additional Voices Video
2004 Geraldine's Fortune Geraldine Liddle
2005 Brooklyn Lobster Maureen Giorgio
2006 The Shaggy Dog Judge Claire Whittaker
2009 I Love You, Man Joyce Klaven
2011 I Don't Know How She Does It Marla Reddy
2013 Flicker Box Sister Margaret Pre-production
2013 The Heat Mrs. Mullins
Year Title Role Notes
1975-1980 Saturday Night Live Various 107 episodes
1977 The Love Boat Regina Parker 1 episode
1977 What Really Happened to the Class of '65? Ivy Episode: "Class Hustler"
1981 Bob & Ray, Jane, Laraine, & Gilda Herself TV Movie
1982 Candida Prossie TV movie
1982 Divorce Wars: A Love Story Vickey Sturgess TV movie
1983 The Coneheads Prymaat (voice) TV short
1984 Bedrooms Laura TV movie
1984-1989 Kate & Allie Allison 'Allie' Lowell 122 episodes
Primetime Emmy Award for Outstanding Lead Actress in a Comedy Series (1984-1985)
Nominated—People's Choice Award for Favorite TV Performer (1984-1985)
1988 American Playhouse Lina McLaidlaw Episode: "Suspicion"
1988 Maybe Baby Julia Gilbert TV movie
1990 Common Ground Alice McGoff TV movie
1990 Working It Out Sarah Marshall 13 episodes
1994 Dave's World Anne Episode: "Lost Weekend"
1995 Tad Mary Todd Lincoln TV movie
1995 Mystery Dance Susan Baker Episode: "1.1"
1996-2001 3rd Rock from the Sun Dr. Mary Albright 137 episodes
Satellite Award for Best Actress – Television Series Musical or Comedy
1998 Hercules Hippolyte (voice) Episode: "Hercules and the Girdle of Hippolyte"
1998 Recess Mrs. Clemperer (voice) Episode: "Wild Child"
2000 Catch a Falling Star Fran TV movie
2003 Cyberchase Lady Ada Byron Lovelace (voice) Episode: "Hugs and Witches"
2003 Our Town Mrs. Webb TV movie
2004 The Librarian: Quest for the Spear Charlene TV movie
2006 Crumbs Suzanne Crumb 13 episodes
2006 The Librarian: Return to King Solomon's Mines Charlene TV movie
2007 Nice Girls Don't Get the Corner Office Joy TV movie
2008 In the Motherhood Mom Episode: "Mother Dearest"
2008 The Librarian: Curse of the Judas Chalice Charlene TV movie
2008-2009 Gary Unmarried Connie 2 episodes
2009 Sherri Margo / Paula's Mom Episode: "Birth"
2010 The Women of SNL Various/Prymaat Conehead/Weekend Update TV Movie; Archive Footage
2010 Rex Is Not Your Lawyer Unknown Episode: "Pilot"
2011 Oprah Guest Episode: "Saturday Night Live Class Reunion"
2012–present Unforgettable Joanne Webster 27 episodes
1. ^ Collins, William B. (October 17, 1981). "Midwestern Shaw - Why, Oh, Why Didn't They Leave Out Ohio?". The Philadelphia Inquirer. p. B11. Retrieved April 14, 2013.
2. ^ Du Brow, Rick (August 8, 1986). "Who Are the Top Prime Time Actors and Actresses of All Time?". Times Union (Albany, NY). p. 15A. Retrieved April 14, 2013. "A quietly devastating performer amid all the scene-stealers on Saturday Night Live, Curtin was most memorable as the deadpan, long-suffering anchor on the show's "news updates." In Kate and Allie, she is demonstrating another hugely appealing facet of her remarkably versatile repertoire."
3. ^ Jane Curtin Biography (1947–)
4. ^ Talley, Jim (August 18, 1986). "Investors Star In Film Financing". Sun-Sentinel (Fort Lauderdale, Florida). Retrieved April 14, 2013.
5. ^ "John J. Curtin [obituary]". Boston Herald. September 24, 2008. Retrieved April 14, 2013.
6. ^ "Jane Curtin". Yahoo! Movies. Retrieved May 13, 2010.
7. ^ a b Thomas Riggs, ed. (2012). "Jane Curtin". Contemporary Theatre, Film and Television 118. Detroit: Gale. ISBN 9781414482026. OCLC 781178307.
9. ^ Polk, Bryan, and Mel Seesholtz (Oct 25, 2009). "Two Professors, One Valuable Lesson: How to Respectfully Disagree". The Chronicle of Higher Education (Washington, D.C.) 56 (10). Retrieved April 14, 2013.
10. ^ "The Women of SNL". WorldCat. 2012. Retrieved April 14, 2013.
11. ^ "Understanding (1994–2004)". Internet Movie Database. Retrieved April 14, 2013.
12. ^ Listings – GARY UNMARRIED on CBS | TheFutonCritic.com
13. ^ "Biography for Jane Curtin". Internet Movie Database. Retrieved April 14, 2013.
External links
Media offices
Preceded by
Chevy Chase
Weekend Update Anchor
Succeeded by
Dan Aykroyd and Jane Curtin
Media offices
Preceded by
Jane Curtin
Weekend Update Anchor (with Dan Aykroyd)
Succeeded by
Jane Curtin and Bill Murray
Media offices
Preceded by
Jane Curtin and Dan Aykroyd
Weekend Update Anchor (with Bill Murray)
Succeeded by
Charles Rocket
Kovilpatti Veeralakshmi
Kovilpatti Veeralakshmi
Directed by K. Rajeshwar
Produced by Filmajik Productions
Written by K. Rajeshwar
Starring Simran Bagga
Sonu Sood
Music by Adithyan
Cinematography Ashok Kumar
Edited by V.T.Vijayan
Release date(s)
4 July 2003
Running time 192 minutes
Country India
Language Tamil
Kovilpatti Veeralakshmi is a South Indian Tamil film released in 2003 and directed by K. Rajeshwar, who earlier directed Amaran. The film was a huge flop at the box office. Simran Bagga dubbed in her own voice for the first time in this film, with some dialogues dubbed by Deepa Venkat, and the film earned her a reputation as a versatile actress. The film was initially planned to start in 1996 with Shweta Menon in the lead role, but was delayed.[1] Notably, it was the last film of music composer Adithyan, who went on to host a show on Jaya TV.
The film is about a brave woman battling untouchability in a village. Casteism is a curse for Veeralakshmi (Simran) and her people, who undergo unbearable torture on account of being Dalits. This time, however, the oppression comes not from the upper castes but from a brutally inhuman police force. In this village the inspector of police is the feudal lord: he beats up the poor men and rapes their women, and the rest of his underlings in the force enjoy the leftovers. Subjugated to an unbearable extent, Veeralakshmi rises in revolt.
A thread of sincerity runs through the entire film that makes it different from the action flicks one is used to.[2]
Filmfare Award for Best Tamil Actress - Simran - Nominated
1. ^ "Tamil Movie News-Pudhu Edition 2 - soc.culture.tamil | Google Groups". Groups.google.com. 1996-10-22. Retrieved 2013-06-11.
2. ^ "Kovilpatti Veeralakshmi". The Hindu. 2003-07-11. Retrieved 2013-06-11.
Lodewijk Bruckman
Lodewijk Bruckman
Video still with portrait of Lodewijk Bruckman
Lodewijk Bruckman in 1993
Born Lodewijk Karel Bruckman
14 August 1903
The Hague, Netherlands
Died 24 April 1995 (aged 91)
Leeuwarden, Netherlands
Nationality Dutch
Education Royal Academy of Art
Known for Painting
Movement Magic realism
Lodewijk Karel "Loki" Bruckman (Dutch pronunciation: [ˈloːdəˌʋɛik ˈkaːrəl ˈbrʏkmɑn]; 14 August 1903 – 24 April 1995) was a Dutch magic realist painter. He lived and worked in the Netherlands, the United States, and Mexico. Museum de Oude Wolden in the village of Bellingwolde has a permanent exhibition of his paintings.
Lodewijk Karel Bruckman was born on 14 August 1903 in The Hague in the Netherlands.[1] He was the son of house painter Karel Lodewijk Bruckman (1868–1952) and Wilhelmina Frederika Hamel (1869–1930).[2] He had two sisters and two brothers,[2] one of whom was his twin brother Karel Lodewijk Bruckman (1903–1982).[3][4]
Bruckman studied with his twin brother at the Royal Academy of Art in The Hague, where he was a student of Henk Meijer. He worked first as a set painter, later as a drawing teacher, and eventually as a fine artist.[5]
In 1949, Bruckman and his life partner Evert Zeeven, who was also his manager, moved to the United States. They lived in New York City and Provincetown in the U.S. and also in Morelia in Mexico. Bruckman had several gallery exhibitions in New York.[3]
In 1968, they returned to the Netherlands, where they lived in Wemeldinge, Haarlem, and Bellingwedde. Bruckman stopped oil painting in 1986, but continued with pencil drawing. In 1989, they moved to Leeuwarden. Zeeven died on 30 November 1993. A year and a half later, on 24 April 1995, Bruckman died in Leeuwarden at the age of 91.[1][3]
Painting by Bruckman in the permanent exhibition of his work in Museum de Oude Wolden in Bellingwolde, the Netherlands
Bruckman painted figuratively, forest views, and other still lifes with fruit, flowers, shells, eggs, feathers and towels. His style can be described as realistic, surrealistic or magic realistic.[5] In 1958, Frank Crotty described Bruckman's painting with the following words:
His paintings are both realistic and surrealistic. His technique is reminiscent of Salvatore Dali's. All objects are painted with exceptional exactitude and have an additional faculty which is often described as three-dimensional. His feathers, for example – and he paints many of them – are so realistic that you feel a gust of wind would blow them away. They are the trick-of-eye kind of painting called 'trompe l'oeil'.[6]
Bruckman gained popularity in the United States, while he remained relatively unknown in the Netherlands. His painting Composition With Peaches won the popular vote at the Boston Arts Festival in 1953, and he placed second in 1957.[6] He won the J. Porter Brinton Prize in 1954.[5] The Metropolitan Museum of Art in New York City has a still life called Mobile (1955) by Bruckman in its collection[7] and the Cape Cod Museum of Art in Dennis holds a painting named Rancho Style (1960).[8]
In the 1980s, Bruckman donated fifteen paintings to the municipality of Goes and 21 paintings, with an estimated value of 363,000 euro, to the municipality of Bellingwedde.[9] Museum de Oude Wolden in Bellingwolde in the Netherlands has a permanent exhibition with his paintings from the municipal collection of Bellingwedde.[10]
1. ^ a b (Dutch) Bruckman, Lodewijk, Rijksbureau voor Kunsthistorische Documentatie. Retrieved on 2013-08-08.
2. ^ a b (Dutch) Bruckman, K.L., Digitale Stamboom Den Haag, 2012. Retrieved on 2013-08-08.
3. ^ a b c (Dutch) Fijnschilder Lodewijk Bruckman overleden, Leeuwarder Courant, 1995. Retrieved on 2013-08-08.
4. ^ (Dutch) Bruckman, Karel, Rijksbureau voor Kunsthistorische Documentatie. Retrieved on 2013-08-08.
5. ^ a b c (Dutch) Lodewijk Bruckman (1903 - 1995), Studio 2000. Retrieved on 2013-08-08.
6. ^ a b Frank Crotty, Provincetown profiles and others on Cape Cod, 1958. Retrieved on 2013-08-09.
7. ^ Mobile, Lodewijk Karel Bruckman, Metropolitan Museum of Art. Retrieved on 2013-08-08.
8. ^ Bruckman, Lodewijk, Cape Cod Museum of Art. Retrieved on 2013-08-09.
9. ^ (Dutch) Magisch-realist Lodewijk Bruckman, Leeuwarder Courant, 1994. Retrieved on 2013-08-08.
10. ^ (Dutch) Programma, Museum de Oude Wolden. Retrieved on 2013-08-08.
For other uses, see Mopac (disambiguation).
MOPAC is a popular computer program used in computational chemistry. It is designed to implement semi-empirical quantum chemistry algorithms, and it runs on Windows, Mac, and Linux.[1]
MOPAC2012 is the current version. MOPAC2009 is able to perform calculations on small molecules and enzymes using PM6, PM3, AM1, MNDO, and RM1. The Sparkle model (for lanthanide chemistry)[2] is also available. The program is available for Windows, Linux, and Macintosh. Academic users can use it for free, whereas government and commercial users must purchase the software.[3]
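As a rough illustration of how such a program is typically driven, the sketch below writes a minimal MOPAC-style input deck (a keyword line such as PM6, two title/comment lines, then one line per atom with Cartesian coordinates and optimization flags) and launches the executable. The general file layout follows the classic MOPAC input convention, but the helper name, the keyword choices, the example coordinates, and the executable name "mopac" are illustrative assumptions; the actual binary name and installation details vary between versions and platforms.

    import subprocess
    from pathlib import Path

    def write_mopac_input(path, keywords, title, atoms):
        """Write a minimal MOPAC-style input deck.

        atoms: iterable of (element, x, y, z) tuples, coordinates in angstroms.
        """
        # Classic layout: keyword line, then two title/comment lines.
        lines = [keywords, title, ""]
        for element, x, y, z in atoms:
            # The "1" after each coordinate flags it for optimization.
            lines.append("%-2s %12.6f 1 %12.6f 1 %12.6f 1" % (element, x, y, z))
        Path(path).write_text("\n".join(lines) + "\n")

    if __name__ == "__main__":
        water = [("O",  0.000000, 0.000000, 0.000000),
                 ("H",  0.957200, 0.000000, 0.000000),
                 ("H", -0.239988, 0.926627, 0.000000)]
        write_mopac_input("water.mop", "PM6 CHARGE=0 SINGLET",
                          "Water geometry optimization (illustrative example)", water)
        # Assumes an executable named "mopac" is on the PATH; adjust to your installation.
        subprocess.run(["mopac", "water.mop"], check=False)

In practice the output is written to companion files next to the input; the keywords on the first line select the semi-empirical model (for example PM6 or AM1), the charge, and the spin state.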
MOPAC was largely written by Michael Dewar's research group at the University of Texas at Austin.[4] Its name is derived from Molecular Orbital PACkage, and it is also a pun on the Mopac Expressway that runs around Austin.[5]
MOPAC2007 included the new Sparkle/AM1, Sparkle/PM3, RM1 and PM6 models, with an increased emphasis on solid-state capabilities. However, it did not yet include MINDO/3, PM5, analytical derivatives, the Tomasi solvation model, or intersystem crossing. MOPAC2007 was followed by the release of MOPAC2009 in 2008, which introduced many improved features.[6]
Unlike earlier versions such as MOPAC6 and MOPAC7, the latest versions are no longer public domain software. However, there are recent efforts to keep MOPAC7 working as open source software, and an open source version of MOPAC7 for Linux is available.[7] In 2006, the author of MOPAC, James Stewart, released a public domain version of MOPAC7 entirely written in Fortran 90, called MOPAC7.1.
See also
1. ^ "MOPAC". Stewart Computational Chemistry.
2. ^ "Lanthanide Complexes Computational Chemistry".
3. ^ "MOPAC2009 Brochure". Stewart Computational Chemistry.
4. ^ Computational Chemistry, David Young, Wiley-Interscience, 2001. Appendix A. A.3.2 pg 342, MOPAC
5. ^ J. J. P. Stewart. "General Description of MOPAC". Stewart Computational Chemistry.
6. ^ [1]
7. ^ MOPAC7 Open Source Version
External links
Mitchell Higginbotham
Mitchell Higginbotham
Born March 2, 1921
Amherst, Virginia
Nationality United States of America
Ethnicity African American
Occupation U.S. Army Air Force
Years active 1942-1946 (active), 1946-1962 (reserve)
Known for Tuskegee Airmen
Relatives Robert Higginbotham (brother)
Awards Congressional Gold Medal
Mitchell Higginbotham (born March 2, 1921) is a retired U.S. Army Air Force officer who was a member of the famed African American World War II fighter group known as the Tuskegee Airmen.[1]
Early life
Higginbotham was born in Amherst, Virginia on March 2, 1921.[1] He has a younger brother, Robert, who also became a member of the U.S. military.[2]
Military career
Higginbotham joined the U.S. military in the summer of 1942.[1] He subsequently was accepted into the Tuskegee Army Airfield Class TE-44-K, from which he graduated on February 1, 1945 with a commission as a Second Lieutenant.[1] Higginbotham became one of the original members of the Tuskegee Airmen when he was assigned to the 477th Bombardment Group.[1] He served on active duty through the end of World War II; in 1946, he left active duty but continued as a member of the U.S. Army Air Force Reserves.[1] He initially flew fighter aircraft but eventually moved up to flying B-52s.[2]
Higginbotham's younger brother Robert also joined the military during World War II two years after his older brother; however, Robert Higginbotham became a pilot for the Navy Air Corps.[2]
Higginbotham was one of 100 black servicemen who were arrested for attempting to enter an officers club reserved for white officers.[2] This event became known as the Freeman Field Mutiny;[2] it is widely seen as a key moment in the path towards full integration of the U.S. Armed Services.[3]
Civilian career
Following his years of active duty, Higginbotham went to work for the Los Angeles Airport Advisory Committee, working as a registrar at the Pittsburg Airport.[1] He also served as a probation officer for nearly thirty years.[1]
Higginbotham and his brother Robert both attended the ceremony in 2007 where the Congressional Gold Medal was collectively awarded to the Tuskegee Airmen for their contributions during World War II.[2] He also received the "Man of the Year" Award from the Los Angeles Chapter of the Tuskegee Airmen, Inc. in 1996.[1]
See also
Further reading
Archival resources
1. ^ a b c d e f g h i "Guide to the Mitchell Higginbotham Papers". Regents of the University of California. Retrieved 27 November 2013.
2. ^ a b c d e f "Tuskegee Airman from Sewickley reflects on obstacles". Trib Total Media, Inc. Retrieved 27 November 2013.
3. ^ Francis, Charles E. (1997). Adolph Caso, ed. The Tuskegee airmen : the men who changed a nation (4th ed.). Boston: Branden. pp. 231–255. ISBN 9780828320290.
External links
Shuri, Okinawa
Shuri Castle
A cobblestone road in Shuri-kinjocho
Shuri (首里, Okinawan: Sui or Shui) is a district of the city of Naha, Okinawa. It was formerly a separate city unto itself, and the royal capital of the Ryūkyū Kingdom. A number of famous historical sites are located in Shuri, including Shuri Castle, the Shureimon gate, Sunuhyan-utaki (a sacred space of the native Ryukyuan religion), and the royal mausoleum Tamaudun, all of which are designated World Heritage Sites by UNESCO.
Originally established as a castle town surrounding the royal palace, Shuri ceased to be the capital when the kingdom was abolished and incorporated into Japan as Okinawa prefecture. In 1896, Shuri was made a ward (区 ku) of the new prefectural capital, Naha, though it was made a separate city again in 1921. Shuri was merged into Naha in 1954.[1]
Medieval and early modern periods
Shuri Castle was first built during the reign of Shunbajunki (r. 1237-1248), who ruled from nearby Urasoe Castle.[2] This was nearly a century before Okinawa Island would become divided into the three kingdoms of Hokuzan, Nanzan, and Chūzan; nearly two centuries before the unification of those kingdoms and establishment of the Ryūkyū Kingdom. The island was not yet an organized or unified kingdom, but rather a collection of local chieftains (anji) loyal to the chief chieftain in Urasoe.[3]
Historian George H. Kerr describes Shuri Castle as "one of the most magnificent castle sites to be found anywhere in the world, for it commands the countryside below for miles around and looks toward distant sea horizons on every side."[2]
By 1266, Okinawa was collecting tribute from the communities of the nearby islands of Iheya, Kumejima, and Kerama, as well as the more distant Amami Islands; new governmental offices to manage this tribute were established at the port of Tomari, which lay just below the castle, to the north.[4]
Shō Hashi (r. 1422-1439), first king of the unified Ryūkyū Kingdom, made Shuri his capital, and oversaw expansion of the castle and the city.[5] Shuri would remain the royal capital for roughly 550 years. The castle was burned to the ground during succession disputes in the 1450s,[6] but was rebuilt, and the castle and city were further embellished and expanded during the reign of King Shō Shin (r. 1477-1526). In addition to the construction of stone dragon pillars and other embellishments upon the palace itself, the Buddhist temple Enkaku-ji was built on the castle grounds in 1492, the Sōgen-ji temple on the road to Naha was expanded, and in 1501 construction was completed on Tamaudun, which would be used as the royal mausoleum from thence forward.[7]
Throughout the medieval and early modern periods,[8] the residents of Shuri were primarily those associated with the royal court in some way. While Naha was the economic center of the kingdom, Shuri was the political center. Residence at Shuri was prestigious into the 20th century.[9]
Samurai forces from the Japanese feudal domain of Satsuma seized Shuri Castle on 5 April 1609.[10] The samurai withdrew soon afterwards, returning King Shō Nei to his throne, and the castle and city to the Okinawans, though the kingdom was now a vassal state under Satsuma's suzerainty and would remain so for roughly 250 years. The American Commodore Perry, when he came to Okinawa in the 1850s, forced his way into Shuri Castle on two separate occasions, but was denied an audience with the king both times.[11]
Under Imperial Japan
Shuri in Taishō period
The kingdom was formally abolished when, on 27 March 1879, Japanese Imperial forces led by Matsuda Michiyuki proceeded to the castle and presented Prince Nakijin with formal papers expressing Tokyo's decision. King Shō Tai and his court were removed from the castle, which was occupied by a Japanese garrison, and the main gates of which were sealed.[12] The castle, along with the nearby mansions of former court nobles, fell into disrepair and decay over the ensuing years, and the ways of life of the aristocrats of Shuri were shattered. Royal pensions were shrunk or abolished, and income from nobles' nominal domains in the countryside likewise dried up. Servants were dismissed, and the aristocratic population of the city scattered, seeking employment in Naha, the countryside, or the Japanese Home Islands.[13]
Census figures from 1875-79 show that roughly half of the population of Okinawa Island were living in the greater Naha-Shuri area. Shuri had fewer households than Naha, but each household consisted of more people. Roughly 95,000 people in 22,500 households were of the aristocracy at this time, out of a total population of 330,000 royal subjects throughout the Ryūkyū Islands, with most of the aristocracy living in and around Shuri. Over the following years, however, Shuri shrank in both population and importance, as Naha grew.[13]
Pressure to restore, conserve, and protect the historical sites of Shuri began in earnest in the 1910s, and in 1928 Shuri Castle was declared a National Treasure. A four-year plan was laid out for the restoration of the structure. Other historical monuments came under protection soon afterward.[14]
Though the Japanese garrison which had originally occupied Shuri Castle in 1879 withdrew in 1896,[15] the castle, and a series of tunnels and caverns below it, were made to serve as general headquarters for Japanese military forces on Okinawa during World War II. The city first suffered Allied air attack in October 1944. Civilian response preparations and organization were extremely inadequate. Bureaucrats, almost all of them native to other prefectures and tied up in obligations to military orders, made little effort to protect civilians, their homes, schools, or historical monuments. Civilians were left to their own devices to rescue and protect themselves, their families, and their family treasures.[16]
The official Custodian of the Family Treasures of the Okinawan royal family returned to the family's mansions in Shuri in March 1945 and sought to rescue a great number of treasures, ranging from crowns granted the kings by the Chinese Imperial Court to formal royal portraits. Some of these objects were sealed away in vaults, but others were simply buried in the earth or amongst the greenery here and there around Shuri. The mansions were destroyed by fire on 6 April, and the Okinawan guards appointed by the Custodian were sent away when the Japanese military occupied the grounds afterward.[16]
As Shuri was the center of the Japanese defense, it was the prime target of American assault in the battle of Okinawa which was fought from March to June 1945. Shuri Castle was leveled by the USS Mississippi, and much of the city was burned and destroyed in the course of the battle.[17]
The city was rebuilt over the course of the post-war years. The University of the Ryukyus was established on the site of the ruins of Shuri Castle in 1950, though later moved and today has campuses in Ginowan and Nakagusuku. The castle walls were restored shortly after the war's end, and reconstruction of the palace's main hall (Seiden) was completed in 1992, on the 20th anniversary of the end of the American Occupation in Okinawa.[18]
A number of primary, middle, and secondary schools are located in Shuri, along with one university. The Okinawa Prefectural University of Arts is located just outside the grounds of Shuri Castle. One of the university's buildings sits on the site of the former Office of the Magistrate of Mother of Pearl (貝摺奉行所, kaizuri bugyōsho),[19] an office of the royal administration which oversaw the kingdom's official craftsmen, chiefly lacquerers.[20]
The village of Tobari in Shuri was the home of Masami Chinen, who founded and taught the martial art Yamani ryu specialising in Bōjutsu.
Gibo and Shuri Stations on the Okinawa Monorail lie within the boundaries of Shuri. Shuri Castle Park, Tamaudun, and other major sites are within easy walking distance of Shuri Station, which is currently the terminus of the monorail line, though there are plans to extend it in the future.[21]
1. ^ "Shuri." Okinawa konpakuto jiten (沖縄コンパクト事典, "Okinawa Compact Encyclopedia"). Ryukyu Shimpo (琉球新報). 1 March 2003. Accessed 8 January 2009.
2. ^ a b Kerr, George H. (2000). Okinawa: the History of an Island People. (revised ed.) Boston: Tuttle Publishing. p50.
3. ^ Kerr. p52.
4. ^ Kerr. p51.
5. ^ Kerr. p85.
6. ^ Kerr. p97.
7. ^ Kerr. p109.
8. ^ c. 1314-1609 and 1609-1879 respectively.
9. ^ Kerr. p114.
10. ^ Kerr. p159.
11. ^ Kerr. pp315-317, 328.
12. ^ Kerr. p381.
13. ^ a b Kerr. pp394-395.
14. ^ Kerr. pp455-456.
15. ^ Kerr. p460.
16. ^ a b Kerr. pp467-468.
17. ^ Kerr. pp469-470.
18. ^ Kadekawa, Manabu (ed.). Okinawa Chanpuru Jiten (沖縄チャンプルー事典, "Okinawa Champloo Encyclopedia"). Tokyo: Yamakei Publishers, 2003. p54.
19. ^ Explanatory plaque onsite at the site of the former kaizuri bugyōsho.
21. ^ "Route disagreements postpone monorail extension decisions." Weekly Japan Update. 9 November 2007. Accessed 8 January 2009.
External links
Coordinates: 26°13′01″N 127°43′10″E / 26.217007°N 127.719423°E / 26.217007; 127.719423
Stephen J. Solarz
Stephen J. Solarz
Member of the U.S. House of Representatives
from New York's 13th district
In office
January 3, 1975 – January 3, 1993
Preceded by Bertram L. Podell
Succeeded by Susan Molinari
Personal details
Born September 12, 1940
New York City
Died November 29, 2010 (aged 70)
Washington, D.C.
Political party Democratic
Spouse(s) Nina Koldin
Stephen Joshua Solarz (/ˈslɑrz/; September 12, 1940 – November 29, 2010) was a United States Congressional Representative from New York. Solarz was both an outspoken critic of President Ronald Reagan's deployment of Marines to Lebanon in 1982 and a cosponsor of the 1991 Gulf War Authorization Act during the Presidency of George H. W. Bush.[1]
Early life and education
Born in Manhattan, New York City, Solarz attended public schools in New York City. He graduated from Midwood High School in Brooklyn in 1958, and later received a B.A. from Brandeis University in 1962 and an M.A. in public law and government from Columbia University in 1967.[2] Solarz taught political science at Brooklyn College from 1967 to 1968.[3]
New York Assembly
In 1966, Solarz managed an anti-war campaign for a U.S. House seat. He used that experience to make a successful run two years later, in November 1968, for a seat in the New York State Assembly.[4] He was re-elected in 1970 and 1972.
In the 1973 Democratic primary, Solarz ran against Sebastian Leone for Brooklyn borough president, and lost. That was not unexpected; Solarz had run mostly for improved name recognition and to make political and fund-raising contacts.[4] In 1974, he was a delegate to the Democratic National Mid-term Convention.
Career in Congress
Election and re-elections
In September 1974, Solarz defeated incumbent Democrat Bertram L. Podell in the Democratic primary for New York's 13th District. At the time, Podell was under federal indictment; he was later convicted.[4] In November 1974, Solarz was elected to the U.S. House of Representatives, to the 94th Congress, beginning January 3, 1975. He was re-elected eight more times, serving until January 3, 1993.
Involvement in foreign policy
On July 18, 1980, Solarz became the first American public official to visit North Korea since the end of the Korean War, and the first to meet with Kim Il-sung.[5] In the 1980s, he chaired the Asian and Pacific Affairs Subcommittee of the House Foreign Affairs Committee, an area of growing interest to the American people in that decade. He is remembered for his leadership on the Philippines. He left Manila just as Benigno S. Aquino, Jr. was coming home to challenge President Ferdinand E. Marcos. Following Aquino's assassination, Solarz returned to Manila for the funeral and proceeded to push the Reagan administration to distance itself from the Marcos government. Shortly after Marcos left for exile in Hawaii, Solarz visited one of the opulent palaces and publicized Imelda's massive shoe collection. He worked closely with Aquino's widow, Corazon, who became president, and who dubbed him the "Lafayette of the Philippines."[6]
Solarz had strong ties to India and was held in high esteem by Indian leaders across the political spectrum. His motivations were partly driven by the presence of prosperous Indian Americans in his district. He visited India dozens of times, during and after his term in Congress, and once received a standing ovation on the floor of the Indian Parliament, as has happened to only a few Westerners, among them Bill Clinton and John F. Kennedy. He received bipartisan credit for having helped set the stage for substantial improvements in U.S.-India relations since the 1990s.[7]
In 1982 and 1986, Solarz met with Iraqi President Saddam Hussein.[8] In 1998 he led a group of neoconservatives urging President Bill Clinton to overthrow him.[9]
Loss in primary, 1992
The round of redistricting following the 1990 Census divided his district into six pieces, reflecting his cold relations with many state lawmakers in Albany. After conducting extensive polling, Solarz decided that rather than challenge Democratic incumbent Ted Weiss or Republican incumbent S. William Green, he would seek election to the open seat in the heavily Hispanic 12th Congressional District. Solarz entered the race damaged by the House banking scandal, having written 743 overdrafts. Solarz was defeated in the Democratic primary by Nydia Velazquez.[10] Ironically, neither Weiss nor Green was re-elected, as Weiss died before the election and was replaced on the ballot by Jerrold Nadler, while Green was defeated by Democrat Carolyn Maloney.
Post-Congressional career
Solarz was appointed by President Bill Clinton as chairman of the U.S. government-funded Central Asian-American Enterprise Fund to bring private sector development to central Asia and served from 1993 to 1998.[11]
In 1994, Solarz was a leading candidate to be nominated as the United States Ambassador to India; however, he was forced to withdraw from consideration after scrutiny of his efforts to obtain a visa for a Hong Kong businessman with a criminal record. Solarz's poor relations with members of the foreign service and the New York state political establishment were also identified as reasons for the failure of his nomination.[12] The post instead went to Frank G. Wisner.
Since 1994, Solarz had remained active with the National Democratic Institute for International Affairs. He was also a member of the Intellibridge Expert Network and of the executive committee of the International Crisis Group. Solarz was also co-chairman of the American Committee for Peace in the Caucasus, along with Zbigniew Brzezinski.
Solarz served on the Board of Directors of the National Endowment for Democracy from 1992 to 2001,[13] and was awarded its Democracy Service Medal on retirement.[14]
Solarz died of esophageal cancer on November 29, 2010 in Washington, D.C. at the age of 70.[2]
1. ^ Steve Solarz (1940-2010) and the Making of Senator Schumer, Capital New York (Nov. 30, 2010)
2. ^ a b Martin, Douglas (November 30, 2010). "Stephen J. Solarz, Former N.Y. Congressman, Dies at 70". The New York Times. p. B10.
3. ^ "SOLARZ, Stephen Joshua, (1940 - )". Biographical Directory of the United States Congress. United States Congress. Retrieved November 29, 2010.
4. ^ a b c Steve Kornacki (November 30, 2010). "Steve Solarz (1940-2010) and the making of Senator Schumer". Capital (New York). Retrieved 2010-02-11.
5. ^ Facts on File 1980 Yearbook p 547
6. ^ Carandang, Ricky (August 5, 2009). "Ex-US Rep. Solarz pays respects to Cory". ABS-CBN News. Retrieved November 30, 2010.
7. ^
8. ^ Hellman, Peter (February 18, 1991), "The Hawk: On the battlefront in Brooklyn with ex-antiwar activist Congressman Stephen Solarz", New York 24 (7): 44
9. ^ [1]
10. ^ Gruson, Linsey (August 21, 1992). "The Selling of Stephen J. Solarz". The New York Times. Retrieved November 30, 2010.
11. ^ Statement by the Press Secretary: Central Asian-American Enterprise Fund, The White House Office of the Press Secretary, July 15, 1994
12. ^ Purdum, Todd S. (March 20, 1994). "Solarz, Who Made Enemies, Pays the Price in a Lost Job". The New York Times. Retrieved November 30, 2010.
13. ^ National Endowment for Democracy, 30 November 2010, NED Mourns the loss of former Congressman and Board Member Stephen J. Solarz
14. ^ National Endowment for Democracy, Jan 18, 2001, 2001 Democracy Service Medal
External links
New York Assembly
Preceded by
Max Turshen
New York State Assembly, 45th District
Succeeded by
Charles E. Schumer
United States House of Representatives
Preceded by
Bertram L. Podell
Member of the U.S. House of Representatives
from New York's 13th congressional district
Succeeded by
Susan Molinari
Thomas Wedgwood (photographer)
Thomas Wedgwood
Thomas Wedgwood (14 May 1771 – 10 July 1805), son of Josiah Wedgwood, the potter, is most widely known as an early experimenter in the field of photography.
He is the first person known to have thought of creating permanent pictures by capturing camera images on material coated with a light-sensitive chemical. His practical experiments yielded only shadow image photograms that were not light-fast, but his conceptual breakthrough and partial success have led some historians to call him "the first photographer".[1][2]
Thomas Wedgwood was born in Etruria, Staffordshire, now part of the city of Stoke-on-Trent in England.
Wedgwood was born into a long line of pottery manufacturers, grew up and was educated at Etruria and was instilled from his youth with a love for art. He also spent much of his short life associating with painters, sculptors, and poets, to whom he was able to be a patron after he inherited his father's wealth in 1795.
As a young adult, Wedgwood became interested in the best method of educating children, and spent time studying infants. From his observations, he concluded that most of the information that young brains absorbed came through the eyes, and were thus related to light and images.
Wedgwood never married and had no children. His biographer notes that "neither his extant letters nor family tradition tell us of his caring for any woman outside the circle of his relations" and that he was "strongly attracted" to musical and sensitive young men.
In imperfect health as a child and a chronic invalid as an adult, he died in the county of Dorset at the age of 34.
A pioneer of photography
The date of his first experiments in photography is unknown, but he is believed to have indirectly advised James Watt (1736–1819) on the practical details prior to 1800. In a letter that has been variously dated to 1790, 1791 and 1799, Watt wrote to Josiah Wedgwood:
"Dear Sir, I thank you for your instructions as to the Silver Pictures, about which, when at home, I will make some experiments..."
In his many experiments, possibly with advice on chemistry from his tutor Alexander Chisholm and members of the Lunar Society, Wedgwood used paper and white leather coated with silver nitrate. The leather proved to be more light-sensitive. His primary objective had been to capture real-world scenes with a camera obscura, but those attempts were unsuccessful. He did succeed in using exposure to direct sunlight to capture silhouette images of objects in contact with the treated surface, as well as the shadow images cast by sunlight passing through paintings on glass. In both cases, the sunlit areas rapidly darkened while the areas in shadow did not.
Wedgwood met a young chemist named Humphry Davy (1778–1829) at the Pneumatic Clinic in Bristol, while Wedgwood was there being treated for his ailments. Davy wrote up his friend's work for publication in London’s Journal of the Royal Institution (1802), titling it “An Account of a Method of Copying Paintings upon Glass, and of Making Profiles, by the Agency of Light upon Nitrate of Silver. Invented by T. Wedgwood, Esq.” The paper was published and detailed Wedgwood’s procedures and accomplishments, as well as Davy's own variations of them. In 1802 the Royal Institution was not the venerable force it is today and its Journal was:
"a little paper printed from time to time to let the subscribers to the infant institution know what was being done ...the 'Journal' did not live beyond a first volume. There is nothing to show that Davy's account was ever read at any meeting; and the print of it would have been read, apparently, if read at all, only by the small circle of members and subscribers to the institution, of whom, we may be pretty sure, only a small minority can have been scientific people."[3]
Nevertheless, the paper of 1802 and Wedgwood's work directly influenced other chemists and scientists delving into the craft of photography, since subsequent research (Batchen, p. 228) has shown it was actually quite widely known about and was mentioned in chemistry textbooks as early as 1803. David Brewster, later a close friend of photography pioneer Henry Fox Talbot, published an account of the paper in the Edinburgh Magazine (Dec 1802). The paper was translated into French, and also printed in Germany in 1811. J. B. Reade's work in 1839 was directly influenced by reading of Wedgwood's more rapid results when using leather. Reade tried treating paper with a tanning agent used in making leather and found that after sensitization the paper darkened more rapidly when exposed. Reade's discovery was communicated to Talbot by a friend, as was later proven in a court case over patents.
Wedgwood was unable to "fix" his pictures to make them immune to the further effects of light. Unless kept in complete darkness, they would slowly but surely darken all over, eventually destroying the image. As Davy put it in his paper of 1802, the picture,
"immediately after being taken, must be kept in some obscure place. It may indeed be examined in the shade, but in this case the exposure should be only for a few minutes; by the light of candles and lamps, as commonly employed, it is not sensibly affected."
Rumours of surviving photographs
Although unfixed, photographs such as Wedgwood made can be preserved indefinitely by storing them in total darkness and protecting them from the harmful effects of prolonged open exposure to the air—for example, by keeping them tightly pressed between the pages of a large book.
In the middle to late 1830s, both Henry Fox Talbot and Louis Daguerre found ways of chemically stabilizing the images their processes produced, making them relatively insensitive to additional exposure to light. In 1839, John Herschel pointed out his earlier published discovery that hyposulphite of soda (now known as sodium thiosulfate but still nicknamed "hypo") dissolved silver halides. This allowed the remaining light-sensitive silver salts to be completely washed away, truly "fixing" the finished photograph. Herschel also found that in the case of silver nitrate, thorough washing with water alone sufficed to remove the unwanted remainder from paper—at least, the type of paper Herschel used—but only if the water was very pure.
In 1885, Samuel Highley, an early photography historian, published an article in which he remarked that he had seen what must have been fixed examples of early pictures made by Wedgwood, presumably dating to the 1790s. His was only one of several later 19th-century claims alleging the current or former existence of improbably early photographs, usually based on decades-old memories or depending on questionable assumptions, which investigators determined to be unverifiable, unreliable or definitely mistaken.[4]
In 2008, there were widespread news reports that one of Wedgwood's photographs had surfaced and was about to be sold at auction. The photogram, as shadow photographs are now called, showed the silhouette and internal structure of a leaf and was marked in one corner with what appeared to be the letter "W". Originally unattributed, then attributed to Talbot, an essay by Talbot expert Larry Schaaf, included in the auction catalog, rejected that attribution but suggested that it could actually be by Thomas Wedgwood and date from the 1790s.[5] An authentic Wedgwood image would be a key historical relic, avidly sought by collectors and museums, and probably sell for a seven-figure price at auction. Considerable controversy erupted after the announcement and Schaaf's rationale for such an attribution was vigorously disputed by other respected photography historians. A few days before the scheduled sale, the image was withdrawn so that it could be more completely analyzed.[6] As of April 2014 the findings, if any, had apparently not been made public and the image had not resurfaced.
Patronage of Coleridge
Wedgwood was a friend of the poet Samuel Taylor Coleridge and arranged for him to have an annuity of £150 in 1798 so Coleridge could devote himself to philosophy and poetry. According to an 1803 letter, Coleridge even attempted to procure cannabis for Wedgwood to alleviate his chronic stomach aches.[7]
1. ^ e.g. Litchfield, book title et al.
2. ^ Talbot, W.H.F. (1844). The Pencil of Nature, Longman, Brown, Green and Longmans, London, 1844. On page 11, Talbot acknowledges that the original 1802 account of Wedgwood and Davy's experiments, which he did not see until his own experiments were well underway, "...certainly establishes their claim as the first inventors of the Photographic Art, though the actual progress they made in it was small."
3. ^ Litchfield, pp. 196-197.
4. ^ Litchfield, appendix C.
5. ^ An Image Is a Mystery for Photo Detectives, Randy Kennedy. New York Times, April 17, 2008.
6. ^ E-Photo Newsletter, Issue 148, 9/28/2008. See first two articles by Alex Novak and Michael Gray. Retrieved 7 May 2013.
7. ^ [1] Coleridge Letters
• Litchfield, Richard Buckley (1903). Tom Wedgwood, the first photographer; an account of his life, his discovery and his friendship with Samuel Taylor Coleridge, including the letters of Coleridge to the Wedgwoods and an examination of accounts of alleged earlier photographic discoveries. London, Duckworth and Co. Public domain, available free at (Includes the unabridged text of Humphry Davy's 1802 paper.)
Further reading
• Batchen, Geoffrey (1999). Burning with Desire: The Conception of Photography. MIT Press.
External links
Wikipedia:Requested articles/Natural sciences
Add your request in the most appropriate place below.
Before adding a request please:
Astronomy and cosmology
Done. exoplanetaryscience (talk) 19:17, 19 May 2014 (UTC)
Done. ♥ Solarra ♥ ♪ Talk ♪ ߷ ♀ Contribs ♀ 04:47, 24 May 2014 (UTC)
• Orbital normal - unit vector perpendicular to the orbital plane; probably a dictdef.
• Orbital seasons - seasons caused by a planet in an eccentric orbit moving closer to and further from its sun.
• Projection effect (astronomy) (2011) - superluminal motion, retrograde motion of the planets, optical double, &c.
• Speculations about methods for mankind to survive after future catastrophic events in the evolution of the Solar System and the Universe (2011) - perhaps just call it Human survival, as a contrast to Human extinction? We do have Space and survival and Geoengineering.
• Light Lag (2012) - This is more of a lead-in to an idea I had about future military space tactics, when light lag over a distance would prove a problem for different sides trying to gauge enemy strength, direction, and numbers if/when FTL travel is possible. Example - military force A attempts to make their force look larger to military force B than it is, so they would, say, exit FTL at four light-hours from a target, remain there for one hour, FTL to a distant location for a period of time, then FTL back to only five light-minutes from the target, at the same instant that their original light is arriving from their original exit from FTL, giving the impression that there are twice as many vessels exiting FTL than there actually are. This could be duplicated as many times as necessary. However, due to the degradation of light over distance (if the distance from A to B equals the distance from B to C, the light intensity at C is not half that at B but instead one quarter), it could be detected with sensitive enough equipment. Sorry, but there are no reliable sources for this yet. Also, this would be more suited for a Military Sci-Fi section than here, based on the examples you gave.
• There is a program called CHView, which seems to no longer be under development, where you could model an interstellar civilization against an accurate 3D map of the local stars. The link is: .
Cosmology, galactic and extragalactic astronomy
Solar and stellar astronomy
Individual objects
Organizations, observatories, telescopes, and surveys
Chemistry, chemicals and labs
Environment and geology
Journals and trade publications
Materials science
Physical science
alchemy survival guide
visual object recognition
Scientists and people in science
Lunar crater eponyms
See List of craters on the Moon
See the NASA Lunar Atlas for crater nomenclature.
Earth scientists
Other scientists
• Norris Alderson - Associate Commissioner of Science, FDA
• Carl Disch - [78]
• Franz J. Giessibl - a professor at the University of Regensburg who pioneered atomic force microscopy with atomic and subatomic resolution, a premier tool for nanoscience and nanotechnology. He obtained atomic resolution by force microscopy in vacuum for the first time (Science 267, 68 (1995)) and subatomic resolution (Science 289, 422 (2000)), invented the qPlus force sensor (US Patents 6240771 and 8393009), and was awarded the 2014 Joseph F. Keithley Award of the American Physical Society.
• Vadim N. Gladyshev - biochemist (Ph.D.), Professor, Director of the Redox Biology Center, University of Nebraska–Lincoln; outstanding achievements in research and discovery in the experimental biochemistry, computational biology and biomedical redox biology; contribution to the understanding of genetic organization; presented with the ORCA Award
• Pantó György - geochemical scientist
• Riitta Hari - neuroscientist, foreign member of the United States National Academy of Sciences
• William Alan Jeffrey
• Henry Guard Knaggs - (1832-1908) one of the best-known Victorian entomologists, editor of leading publications in the field. See The Aurelian Legacy: British Butterflies and Their Collectors, Michael A. Salmon, Peter Marren, Basil Harley, 2000
• Jani Macari Pallis (Jani Pallis) - professor of sports science, principal on NASA's "Aerodynamics in Sports" project
• Katsuyuki Ooyama - (1929-2006) Japanese American meteorologist [79]
• Paul Manger – neuroscientist
|
global_05_local_4_shard_00000656_processed.jsonl/83839
|
Page:The English Constitution (1894).djvu/40
From Wikisource
be provided to obstruct and prevent these great aggregations of property. Few things certainly are less likely than a violent tempest like this to destroy large and hereditary estates. But then, too, few things are less likely than an outbreak to destroy the House of Lords—my point is, that a catastrophe which levels one will not spare the other.
I conceive, therefore, that the great power of the House of Lords should be exercised very timidly and very cautiously. For the sake of keeping the headship of the plutocracy, and through that of the nation, they should not offend the plutocracy; the points upon which they have to yield are mostly very minor ones, and they should yield many great points rather than risk the bottom of their power. They should give large donations out of income, if by so doing they keep, as they would keep, their capital intact. The Duke of Wellington guided the House of Lords in this manner for years, and nothing could prosper better for them or for the country, and the Lords have only to go back to the good path in which he directed them.
The events of 1870 caused much discussion upon life peerages, and we have gained this great step, that whereas the former leader of the Tory party in the Lords—Lord Lyndhurst—defeated the last proposal to make life peers, Lord Derby, when leader of that party, desired to create them. As I have given in this book
|
global_05_local_4_shard_00000656_processed.jsonl/83840
|
The American Cyclopædia (1879)/Brunck, Richard François Philippe
From Wikisource
The American Cyclopædia
Brunck, Richard François Philippe
Edition of 1879. See also Richard François Philippe Brunck on Wikipedia, and the disclaimer.
BRUNCK, Richard François Philippe, a French philologist, born in Strasburg, Dec. 30, 1729, died June 12, 1803. He was educated in the college of the Jesuits at Paris, served in Hanover as commissary of war, and returned at the age of 30 to Strasburg, where he studied in the university till he had mastered the Greek language. As an editor he made no commentaries, but occupied himself only with the text. Persuaded that all faults in the language of the Greek poets came from the carelessness of copyists, he corrected the texts with the utmost fearlessness, regardless of manuscript readings. Holding a lucrative official position, he was enabled to issue his editions without depending on a publisher. He edited the Greek anthology, all of the tragedies of Sophocles, and several of those of Æschylus and Euripides, the Greek gnomic poets, and the works of Anacreon, Aristophanes, and Apollonius of Rhodes. His labors were interrupted by the French revolution, whose principles he espoused. He was imprisoned during the reign of terror, was twice ruined in property, and obliged to part with his books. He then turned his attention to Latin authors, and edited Virgil, Plautus, and Terence.
|
global_05_local_4_shard_00000656_processed.jsonl/83841
|
The New International Encyclopædia/Anabaptists
From Wikisource
The New International Encyclopædia
Edition of 1905. See also Anabaptist on Wikipedia, and the disclaimer.
AN'ABAP'TISTS (Gk. ἀναβαπτίζειν, anabaptizein, to rebaptize). A term applied generally in Reformation times to those Christians who rejected infant baptism and administered the rite only to adults; so that when a new member joined them, he or she was baptized, the rite as administered in infancy being considered no baptism. Still, because all other branches of the church considered this a second baptism, the term Anabaptist, i.e., one who baptizes again, was naturally applied to them. The name is, however, not now used by the present Baptists.
The primitive baptism was doubtless of adults only, but infant baptism early became the Church practice. Opposition to it was kept up by a number of minor and obscure sects in the Middle Ages. When the Reformation unshackled the popular mind it came into prominence. Unfortunately, it was linked with other unpopular ideas of a revolutionary character, and adopted by a set of fanatical enthusiasts called the prophets of Zwickau, in Saxony, at whose head were Thomas Münzer (q.v.) (1520) and others. Münzer went to Waldshut, on the borders of Switzerland, which soon became a chief seat of Anabaptism, and a centre whence visionaries and fanatics spread over Switzerland. They pretended to new revelations, dreamed of the establishment of the kingdom of heaven on earth, and summoned princes to join them, on pain of losing their temporal power. They rejected infant baptism, and taught that those who joined them must be baptized anew with the baptism of the Spirit; they also proclaimed the community of goods, and the equality of all Christians. These doctrines naturally fell in with and supported the “Peasant War” (q.v.) that had about that time (1525) broken out from real causes of oppression. The sect spread rapidly through Westphalia, Holstein, and the Netherlands, in spite of the severest persecutions. The battle of Frankenhausen (see Münzer) crushed their progress in Saxony and Franconia. Still scattered adherents of the doctrines continued, and were again brought together in various places by traveling preachers. In this capacity Melchior Hoffmann, a furrier of Swabia, distinguished himself, who appeared as a visionary preacher in Kiel in 1527, and in Emden in 1528. In the last town he installed a baker, John Matthiesen, of Haarlem, as bishop, and then went to Strassburg, where he died in prison. Matthiesen began to send out apostles of the new doctrine. Two of these went to Münster, where they found fanatical coadjutors in the Protestant minister Rothmann, and the burghers Knipperdolling and Krechting, and were shortly joined by the tailor Bockhold, of Leyden, and Gerrit Kippenbrock, of Amsterdam, a bookbinder, and at last by Matthiesen himself. With their adherents they soon made themselves masters of the city; Matthiesen set up as a prophet, and when he lost his life in a sally against the Bishop of Münster, who was besieging the town, Bockhold and Knipperdolling took his place. The churches were now destroyed, and twelve judges were appointed over the tribes, as among the Israelites; and Bockhold (1534) had himself crowned king of the “New Sion,” under the name of John of Leyden. The Anabaptist madness in Münster now went beyond all bounds. The city became the scene of the wildest licentiousness, until several Protestant princes, uniting with the bishop, took the plan, and by executing the leaders put an end to the new kingdom (1535).
But the principles disseminated by the fanatical Anabaptists were not so easily obliterated. As early as 1533 the adherents of the sect had been driven from Emden and taken refuge in the Netherlands, and in Amsterdam the doctrine took root and spread. Bockhold also had sent out apostles, some of whom had given up the wild fanaticism of their master; they let alone the community of goods and women, and taught the other doctrines of the Anabaptists, and the establishment of a new kingdom of pure Christians. They grounded their doctrines chiefly on the Apocalypse. One of the most distinguished of this class was David Joris, a glass painter of Delft (1501-56). Joris united liberalism with Anabaptism, devoted himself to mystic theology, and sought to effect a union of parties. He acquired many adherents, who studied his book of miracles (Wunderbuch). which appeared at Deventer in 1542, and looked upon him as a sort of new Messiah. Being persecuted, he withdrew from his party, lived inoffensively at Basel, under the name of John of Bruges, and died there in the communion of the Reformed Church. It was only in 1559, when his heretical doctrines had come to light, that the council of Basel had the bones of Joris dug up and burned under the gallows.
Contemporary with these fanatical Anabaptists there were those who united denial of the validity of infant baptism with mystical views, and even with denial of the deity of Christ. But in Switzerland and South Germany the Antipædo-Baptists, who date from 1523, and were dominated by the theological views of Balthazar Hubmeier, though reckoned with the other Anabaptists and cruelly persecuted and suppressed, held only at worst defective political views, but had no part or parcel with any immoral practices. Their creed can be learned from Zwingli's attack upon them. See the English translation in Jackson's Selections from Zwingli, pp. 123-258 (New York, 1901). This humble folk were treated like criminals, because the authorities recognized that their principles, though in no way sinful, were subversive of the tyrannical government they exercised. Anabaptists must die because they would not submit to the established order. To this day the advocates of the State Church look askance at them. At first among them the mode of baptism was not considered important, and so not much discussed. It was by pouring or sprinkling.
A new era for the Anabaptists begins with Menno Simons. (See Menno.) Surrounded by dangers, Menno succeeded, by prudent zeal, in collecting the scattered adherents of the sect, and in founding congregations in the Netherlands and in various parts of Germany. He called the members of the community “Gods congregation, poor, unarmed Christians, brothers;” later, they took the name of Mennonites, and at present they call themselves, in Germany, Taufgesinnte; in Holland, Doopsgezinden — corresponding very nearly to the English designation Baptists. This, besides being a more appropriate designation, avoids offensive association with the early Anabaptists. Menno expounded his principles in his Elements of the True Christian Faith in Dutch. This book is still an authority among the body, who lay particular stress on receiving the doctrines of the Scripture with simple faith, and acting strictly up to them, setting no value on learning and the scientific elaboration of doctrines. They reject the taking of oaths, war, every kind of revenge, divorce (except for adultery), infant baptism, and the undertaking of the office of magistrate; magistracy they hold to be an institution necessary for the present, but foreign to the kingdom of Christ; the Church is the community of the saints, which must be kept pure by strict discipline. With regard to grace, they hold it to be designed for all, and their views of the Lord's Supper fall in with those of Zwingli; in its celebration the rite of feet-washing is retained. In Germany, Switzerland, and Alsace their form of worship differs little from the Lutheran. Their bishops, elders, and teachers serve without pay. Children receive their name at birth, baptism is performed in the place of worship, and adults that join the sect are rebaptized. (See Mennonites.)
Almost the only split among the early Continental Baptists on doctrinal grounds was that which took place in Amsterdam in 1664. Arminianism had not been without its influence, especially among the Waterländers, originally more liberal in their views. A leading congregation accordingly divided into two parties, one (Galenists, from Galenus, their leader) advocating freer views in doctrine and discipline; the other ( Apostoolists, from Samuel Apostool) adhering to absolute predestination and the discipline of Menno. The liberal party rejected creeds as of human invention, adopted much of the philosophy and theology of England, and exercised no little influence on the intellectual progress of Holland. These two parties gradually absorbed the other sections of the Baptists in the Netherlands; and about the beginning of the nineteenth century a union took place by which all the congregations now belong to one body.
For the modern denomination called Baptists, which continues the same protest against infant baptism, but has little, or, as some claim, no genetic connection with the Anabaptists, see Baptists.
|
global_05_local_4_shard_00000656_processed.jsonl/83843
|
Friday, April 17, 2009
The categorization of 'so'
The other day I got an e-mail from a colleague who asked about complements to linking verbs. As I was thinking about the various possible complements, the clause it seems so came to mind. "What in the world is so," I wondered. Not really expecting to find any help, I checked a number of dictionaries. The OED has entries for so as an adverb/conjunction and predictably other dictionaries follow suit. But that can't be right.
The word seem can take a variety of complements apart from so. These include:
1. Noun phrases (NPs) and adjective phrases (AdjPs), typically grouped as predicate complements (aka subjective complements).
He seems happy. (AdjP)
She seems a good sort. (NP) more common in British English
2. Content clauses (aka noun clauses): bare, with subordinator that, and with relative pronoun what
It seems (that) they have arrived early.
That did not seem what they intended to convey.
3. to infinitives
It seems to be the right one.
4. Preposition phrases (PPs)
It seems like a good solution.
They seem out of place.
5. And, of course, the word so. [My aunt reminds me that not can also work here.]
They do not include:
1. Determinative phrases (DPs)
*It seems any
*It seems
*It seems
2. Adverb phrases (AdvPs)
*It seems
*It seems
*It seems
*It seems too.
3. Content clauses introduced by when, why, how
*It seems when they did it.
*It seems why they did it.
*It seems how they did it.
4. Bare infinitives
*It seems be OK.
So, if adverbs are illegal as complements [but note the possibility of not], why call so an adverb? I suppose the reasoning is: if something doesn't fit another category, it must be an adverb, but that's pretty silly.
And what do I think so is? I haven't figured that one out yet. I'll try to get back to you on it.
[Added May 25, 2009: otherwise also seems to have similar properties.]
Nick said...
I have a question: Isn't it "it seems quick"; not "It seems quickly"? I only ask because "to seem" is a linking verb if I remember and this sentence would mean, "it seems [to be] quick."
Nick said...
"so" is a conjunction right? Or do you mean like "so fast"; I would consider that to be informal, but it's being used as an adverb that is modifying the adjective "fast". I'm not sure really on that. I like to look at things semiotically when pertaining to grammar. I don't normally divagate from that.
Nick said...
I've also never heard of "it seems be ok." I've heard of "it seems to be ok" wherein the infinitive is not bare. Normally bare infinitives are subjunctive in nature such as "Let it be so" or "let there be light". Do expatiate on this because I'm curious and you are giving me more ideas for my blog as well.
Brett said...
Yes, the asterisks in front of sentences indicate that they are ungrammatical. It seems quick with the adjective as the complement would be the grammatical alternative.
Brett said...
So has a number of meanings, two of which you mention, but I'm looking at the meaning where it refers back to something previously stated in the discourse: e.g., "She's nice" "Yes, I think so too," where so here means "she's nice".
Brett said...
Sorry, "It seems be OK" should have had an asterisk in front of it. It does now.
Nick said...
Okay, well thanks. You are pretty helpful.
Brett said...
I aim to please.
Q Higuchi said...
I believe it is the verb 'seem' that makes 'so' as in 'It seems so' appear a little peculiar. In the classic sort of transformational account,
It seems (that) they have arrived early.
is derived from the following:
[that they have arrived early] seems
This captures our intuition that what is seeming here is the entire state of affairs, namely their having arrived early. Then the whole clause is shifted to the end of the sentence, and the 'dummy it' fills the subject position.
But the 'they' in that clause can fill the subject position, too, in which case we get the following:
[They] seem [to have arrived early]
Now, here is my point. What if the subject happens to be 'it', not 'they'? For example,
[that it is old] seems
-> (1) It seems that it is old
-> (2) It seems old
When 'so' substitutes the that-clause in (1), you get 'It seems so'. When 'so' substitutes the AdjP in (2), you get 'It seems so'.
'It seems so', then, is structurally ambiguous. My humble guess is that this is why you had that sort of linguistic tickling in your mind when toying with 'It seems so'.
Anyway, I hope this makes it even clearer that 'so' is Pro-many-things: Pro-clause (as in the above), Pro-V (I can Verb and so can you), etc.
|
global_05_local_4_shard_00000656_processed.jsonl/83861
|
ERD_SMSS project Logo
The ERD_SMSS project aims to create an emergency repair diskette/CD capable of resetting passwords on Windows 2000/XP and above (maybe NT 4.0 support will be added). In fact it should be something like the small repair console introduced with Windows 2000, but with a greatly enhanced set of features.
Progress with the project
I will misuse this place to report my progress until a real page takes shape and a beta version is in sight ;)
Oliver aka Assarbad
Copyright (c) 2003-2004 by ERD_SMSS team
|
global_05_local_4_shard_00000656_processed.jsonl/83867
|
Archive for the 'Ancient Egypt' Category
Object: Faience Necklace
Figure 1 Egyptian Faience blue beaded necklace from the Ethnology Collection of the Sam Noble Oklahoma Museum of Natural History
Blue faience necklace
Africa: Egypt
Date: Modern
Materials: Faience (glass) beads on leather
This small blue beaded necklace is 12 inches long and comes from modern-day Egypt. The leather thong (or string) that holds the beads is tied together in one spot and can be adjusted to fit the person wearing it. The irregular shaped beads are made out of faience, a type of colored glass.
Faience (pronounced “fay-ahns”) has a long-standing history in many countries, especially Egypt. The ancient Egyptians used faience (known as tjehnet) beginning in 3500 BC to make beads, statues, amulets, bowls, and a variety of other objects. One theory is that faience was invented in Mesopotamia in 4000 BC and then brought to Egypt through trade.
Faience was originally developed by ancient Egyptians out of a desire to find a substitute for lapis lazuli, a highly valued dark blue stone. The royalty and nobles of ancient Egypt wanted to show how much power and wealth they had through the beautiful and expensive objects they put in their palaces, temples, and tombs. Lapis lazuli, however, was hard to come by. So, they developed faience, a much cheaper and easily manufactured material, as a substitute.
Faience, known as the “first high-tech ceramic” is made from finely ground quartz (or sand) mixed with lime, copper oxide, water, and a binder agent (such as gum arabic). When mixed together, these ingredients form a kind of paste that can then be put into a ceramic mold, dried, and fired in a kiln (or oven). Early on, it was discovered that adding different minerals (such as manganese) instead of copper oxide would result in different colors of faience including cobalt blue, purple, and yellow.
Today, the production of faience all around the world has expanded. Artists and scientists continue to experiment with and learn from this fascinating blue glass that experienced its beginnings in ancient Egypt and ancient Mesopotamia. This beautiful beaded necklace is only one example of how faience continues to be used today.
Take a look at this cool video that shows, step by step, how to make faience objects using ancient Egyptian molds from the Petrie Museum of Egyptian Archaeology:
[Stephanie Lynn Allen]
Object: Mummified fish
Mummified fish
Ancient Egyptian
unknown date
Materials: Fish, cloth, resin, salt or natron
The following video shows a modern attempt at recreating fish mummification.
[Kathryn S. (Barr) McCloud]
Object: Amulet
Slate Turtle Amulet
Possibly Pre-Dynastic
Materials: Slate
This object is a small (1 15/16” long) slate amulet from Egypt. The thin slate disc is crudely carved in the outline of a turtle. A hole is pierced near the tail for suspension.
The earliest representations of the Nile turtle date back to pre-dynastic times and were associated with magical significance that was meant to ward off evil. Amulets such as this example were designed to defend the wearer’s health and life. As time passed, the turtle became synonymous with drought, the enemy of the Sun god Ra. Many times, a pair of tortoises would be depicted with a scale, representing the ebb and flow of the Nile‘s floodwaters. Eventually, the turtle was associated with Set (the god of wind, desert storms, conflict and evil), and so with the enemies of Ra who tried to stop the solar barge as it traveled through the underworld to re-emerge with the new dawn. Since the turtle was associated with night, it came to symbolize darkness and evil. By the New Kingdom, the Sun god’s hostility toward the lowly turtle was even more strongly formulated in the phrase, “May Ra live and may the turtle die.”
Turtle shtyw
Belonging to the reptile order of Testudines, turtles are one of the oldest reptile groups known. They are characterized by a special bony or cartilaginous shell developed from their ribs. This shell acts as a shield into which the turtle withdraws when threatened. Turtles are cold-blooded, which means their internal temperature varies with the ambient environment. Turtles live in both aquatic and terrestrial environments; however, they lay their eggs on land only.
The turtle amulet is made from slate. Slate is a metamorphic rock derived from a shale-type sedimentary rock composed of clay or volcanic ash. Usually grey in color, slate can be found in various shades of grey from pale to dark and may also be purple or green. Care must be taken not to confuse slate with shale, from which it may be formed, or with schist. [Debra Taylor]
Object: Figurine
Egyptian: Bronze Cat
Egypt (possibly Saitic)
ca 664 to 525 BCE
Materials: Bronze, wood
This object is an Egyptian bronze cat seated on a modern wooden base. The wooden base is rectangular with the sides angling toward the interior. The top platform is smaller than the base. The wood has been painted black. The top surface has been excised in order for the bronze cat to be set in. The seated cat faces forward. Its long tail wraps around the bottom right side and around the front legs. The entire cat figure is very slender. Two eyes, a nose, and a horizontal line for the mouth are visible. Ears are on top of the head and pointed. This figure is believed to be of possible Saitic origination. The term “Saitic” comes from the city name “Sais,” which served as the center of power in the Delta region during the 26th Dynasty. The rule of the 26th dynasty is often referred to as the Saite period in Egyptian history. Psammetikhos I was the first ruler of the dynasty, and is traditionally thought to have ruled from about 664 to 610 BCE.
Cat figures such as this one are representations of the deity Bastet, the "Devouring Lady," the protector of women, especially pregnant women. Bastet (also known as Bast, Bastis, Bubastis, or Ubast) was believed to be responsible for joy, music, and dancing, as well as health and healing. Her cult can be traced back to 3200 BCE. Around 950 BCE, she became a national deity when Bubastis became the capital of Egypt. Bubastis, a city in the eastern Nile Delta, is believed to have been the birthplace of Bastet. The city itself has origins dating back to the 4th Dynasty and was populated into the Roman Period.
Sometimes, Bastet is associated with the lion-goddess Sekhmet. She is sometimes depicted as a cat holding a mask of a lioness in her hand. Symbolically, she was represented as a woman with a cat's head, or simply as a seated cat, as in the object pictured above. Cats were viewed by the ancient Egyptians as manifestations of deity, and as such were considered sacred. The cat protected the grain from mice and rats and thus indirectly protected the people. Killing a cat was punishable by death. Many mummified Bastet cats have been found from various time periods throughout Egypt. Amulets and figurines depicting the goddess were common among all Egyptian social classes.
[Debra Taylor]
Object: Cartonnage Fragment
Fragment of a mummy cartonnage
18th dynasty (1570-1314 BCE)
Materials: linen or papyrus
This object is a multi-colored fragment of a mummy cartonnage possibly from the 18th Dynasty. Cartonnage was used for personal funerary ornaments such as mummy masks. The masks would cover the head, shoulders, and upper chest of the mummy to protect the face of the deceased. This particular piece was likely from the chest portion of a cartonnage mummy mask.
Cartonnage was made from thin, layered pieces of linen or papyrus. Once a shape had begun to form one side was coated with gesso (a mixture of glue and whiting plaster) to harden the shape. This coating allowed the maker to use detailed paint or gold leafing on the front side.
Each individual had their own design for their mask. Usually, the design would indicate something about the deceased. For instance, the mask may have been a representation of what the person looked like or enjoyed doing. An example of a gilded mummy mask can be seen at the British Museum.
[Brittany Teel]
Object: Amulet
Date unknown
Materials: faience
The museum’s catalog identifies this amulet as depicting the Egyptian god Anubis. In Egyptian mythology Anubis plays a crucial role as guide and protector of the deceased.
However, after examining the piece I feel that this amulet does not depict Anubis. Anubis, when shown in his half human form, has the head of a jackal while this amulet shows the head of a lion. Additionally, this figure is shown wearing a special type of crown called the atef crown. This type of crown is typically associated with the god Osiris and symbolized the priesthood and divine power. The atef crown resembles the white crown of Upper Egypt which has been decorated with two vertical rows of ostrich feathers. It seems more likely that this amulet depicts the god Maahes, rather than Anubis.
Maahes (also known as Mahes, Mihos, Miysis, or Mysis) was a male deity most commonly associated with fighting, war, and violence. Some myths describe him as a protector or guardian of Ra, the god of the sun disk. In this role he would protect Ra from Apep, the god of darkness while he traveled through the underworld during the night. In times of war, Maahes was also thought to be the protector of the pharaoh. Other myths describe him as an executioner, a protector of the innocent, a guardian of sacred places, or as one who could find “truth.” He also shared many characteristics with other lion headed deities such as Nefertem and Shesmu. It is likely that an amulet of Maahes was thought to protect the wearer from evil and ensure their safe passage in the underworld.
An example of a faience amulet depicting the god Anubis can be found at the Metropolitan Museum of Art. [Kathryn S. (Barr) McCloud]
Object: Inscribed Papyrus Fragment
Fragment of inscribed papyrus
Ca. 100 BCE
Material: papyrus and ink
Papyrus is an early form of paper, highly valued in the ancient world and most commonly produced in Egypt’s Nile Delta. The paper is made from the inner material of the stem of the papyrus plant (Cyperus papyrus). This inner material, called pith, is removed from the stem and layered on top of itself with the grain of each layer running at right angles to the layer underneath. Once the layers of papyrus reach the desired thickness they are very tightly compressed and allowed to dry.
The inscription on this piece was recently examined by Dr. Janet H. Johnson, a professor of Egyptology at the Oriental Institute, who concluded that it is written in Demotic. Demotic is a type of ancient Egyptian writing that was derived from northern forms of Hieratic, which is often considered the “cursive” or “short-hand” form of ancient Egyptian hieroglyphs. This type of writing was used during the later part of the Dynastic period in ancient Egypt and continued to be used into the Roman Period. The most famous use of Demotic can be found on the Rosetta Stone.
Dr. Janet H. Johnson was able to provide some information as to the content of this inscription. She reports that: “It seems to be a letter dated year 11, first month of summer (no king’s name was included). The name of the sender is lost in the break at the upper right; the name of the recipient seems to be a foreign name. It mentions the town/location of Meidum, in the Fayum…It also seems to mention ‘matters of Pharaoh,’ which probably would be a reference to state business.”
For more information on ancient paper making see:
Johnson, Malcom. The Nature and Making of Papyrus. Barkston Ash: Elemete Press, 1973. [Kathryn S. (Barr) McCloud]
Object: Ushabti
Faience ushabti
XXVI Dynasty (ca. 664-525 BCE)
Materials: faience
Ushabtis, also known as shabtis or shwabtis, are small figurines usually modeled out of Egyptian faience. These figurines are associated with burials and always show a human figure wrapped as a mummy with the traditional false beard and headdress of the pharaoh and the god Osiris. The arms of the figure are crossed, and when the burial in question was royal, they would carry the crook and flail signifying kingship or divinity. Ushabtis were intended to function like servants for the deceased in the afterlife. Ancient Egyptians believed that after death the soul of the individual continued to live a similar existence to that on the physical earth. In order to assure that one could have a pleasant and relaxed afterlife, free from labor and discomfort, it was necessary to bring along servants in the form of ushabtis. The ushabtis were all inscribed with a verse from Chapter 6 of the Book of the Dead, which asks the ushabti to take the place of the deceased whenever he is called upon to perform any task in the afterlife.
The ushabti in the Sam Noble Oklahoma Museum of Natural History is made of green Egyptian faience. Faience is a type of fired ceramic with a tin glaze that was common in the Middle East and Europe. Unlike traditional faience, Egyptian faience is made by heating a mixture of sand and minerals. This mixture, when heated, would essentially melt together into a solid stone-like material with a glassy finish. By combining different types and quantities of minerals, different colors could be created.
A preliminary examination of the inscription on this ushabti indicates that this figurine belonged to a person named Ptah-ir-dy-es, and the museum’s records indicate that the figure dates from the XXVI Dynasty. The XXVI Dynasty, often called the Saite Dynasty, once again united both Upper and Lower Egypt under one king following the Third Intermediate Period. It begins just after the Assyrian invasion of Egypt and is brought to an end by the Persian invasion. This dynasty represents the end of native rule in ancient Egypt, as the power of kingship passed to their southern Kushite neighbors.
For more information on Egyptian funerary customs and grave materials see:
El-Shahawy, Abeer. The Funerary Art of Ancient Egypt: A Bridge to the Realm of the Hereafter. Cairo: Farid Atiya Press, 2005.
Smith, William S., and William K. Simpson. The Art and Architecture of Ancient Egypt. New Haven: Yale University Press, 1998.
For more information on the XXVI Dynasty see:
Welsby, D.A. The Kingdom of Kush: The Napatan and Meroitic Empires. London: British Museum Press, 1996. [Kathryn S. (Barr) McCloud]
|
global_05_local_4_shard_00000656_processed.jsonl/83869
|
epicure (n.) Look up epicure at Dictionary.com
late 14c., "follower of Epicurus," from Latin Epicurus, from Greek Epicouros (341-270 B.C.E.), Athenian philosopher who taught that pleasure is the highest good and identified virtue as the greatest pleasure; the first lesson recalled, the second forgotten, and the name used pejoratively for "one who gives himself up to sensual pleasure" (1560s), especially "glutton, sybarite" (1774). Epicurus' school opposed by stoics, who first gave his name a reproachful sense. Non-pejorative meaning "one who cultivates refined taste in food and drink" is from 1580s.
|
global_05_local_4_shard_00000656_processed.jsonl/83882
|
Holland is the name of a stretch of 100 miles of land along the North Sea. Its name ("Holtland" - woodland) isn't much reflected in its landscape, which is predominantly urban area and agricultural polderland. Holland is a centre of world trade, provides agricultural products to the whole of Europe and far beyond, and is home to over 4 million people.
Its highest natural points are the dunes; what lies behind is flat and mostly below sea level. The lowest points are urban areas: the Willem-Alexanderpolder in Rotterdam and, at 6.74 m below NAP, the Zuidplaspolder in Nieuwerkerk aan den IJssel.
In mediaeval times, Holland was a county. Its counts were homegrown, bearing names like Dirk and Willem. The most striking event in its history was the assassination of count Floris V in 1296. Through war and marriage, its rule fell to foreign nobility (globalization medieval style), and at the start of the 16th century, to the duchy of Burgundy. Thus, Holland became part of the Low Countries, a federative assembly of counties, with a parliament seated in Brussels.
The retirement (1555) and death (1558) of Charles V left Burgundy in the hands of the new king of Spain, Philip II, who showed little interest or respect for his remote "Burgundian" property. He set out to rule as an absolute monarch, attempted to centralize power, imposed heavy taxation and enforced religious prosecution of the new protestant movement, which had spread like wildfire.
The Low Countries revolted; after a Spanish military campaign ran dead in the Holland moorlands, the Northern half formed its own parliament in 's-Gravenhage, and effectively gained political independence, a status quo eventually confirmed by treaty after an 80 year war (1568-1648).
In the new federation, which later developed into the present-day state of the Netherlands, Holland became the dominant economic and political force. A free haven for those persecuted elsewhere, it attracted much economic activity. Amsterdam took over Antwerp's role as the main European port, not least because Antwerp was under Spanish rule, had been sacked by Spanish troops, and saw its port permanently blocked by the Dutch fleet.
A Golden Age set in; trade flourished, the arts, sciences and engineering skills moved north to Holland, and produced wonders such as the paintings of Rembrandt van Rijn, the land reclamation works of Cornelis Leeghwater, and the development of the wave theory of light by Christiaan Huygens.
After decades, neighbours Britain and France (10 to 20 times bigger) overcame their internal and mutual conflicts, and pushed back Holland into a secondary political role. Holland always retained its prominent place in world trade, and lost political independence only twice after (1798-1813, to Napoleon, and 1940-1945, to Hitler).
The three principal cities of the Netherlands, Amsterdam, Rotterdam, and Den Haag, are all located in Holland. Amsterdam, the capital, is the cultural centre; Rotterdam, until recently the biggest port in the world, prides itself on being a working man's city; Den Haag, the seat of parliament, is the political and administrative centre. Loosely knit together by a string of smaller and bigger towns and suburbs, they form the Randstad, which surrounds the "Green Heart", an agricultural area with moderate urbanization.
Holland's liberal reputation is due to these origins, and lives on not so much in its policies, but rather in its lack of rigour in enforcing them. Consensus politics and laisser-faire have been the rule all through its political history; moral conduct has largely been considered the territory of religion.
Citizens of more repressed areas flock in large quantities to Amsterdam to have a taste of its red light district and drugs; the Dutch themselves are moderate users, but, as in all matters, are keen to make money on them.
It is common practice to identify Holland with the Netherlands - many Dutch do so themselves - but that is really a case of pars pro toto. North and South Holland are only two of the twelve Dutch provinces.
6 Misconceptions about Holland:
1. Everybody wears wooden shoes.
Probably at one time in history this was true (or at least the majority wore them), but it definitely isn't true anymore. People wear normal shoes just like the rest of the world, except for farmers.
2. Everywhere you look you see the traditional windmills.
In the past, there were a lot more windmills than there are nowadays. They were used to make flour, the main ingredient for bread. These days, only a few windmills are left, and they are mainly kept working for tourists. However, you can occasionally see new sorts of windmills that produce electricity.
3. Everywhere you look you see tulips.
Sure, tulips are a major export product, but they are not grown on every square inch that is left in the country.
4. Everybody wears traditional clothes.
Almost everybody used to wear them, up to 50 years ago. But especially after World War II and in the 1950s, clothing changed. These days, you can't tell a Dutch guy from a Swedish guy by looking at his clothes, and only some older men and women (usually 60 years or older) still wear them.
5. Everybody is a drug addict.
Just because the government condones the use of so-called soft drugs doesn't mean that everybody uses them. Since we're used to being able to buy weed (in a small amount) without being afraid of getting arrested, it's not so special anymore. Foreigners usually go totally mad when they first see a coffee-shop where you can buy the stuff, and quite a lot of them come to the Netherlands (and especially Amsterdam, since they think you can only buy it there) just for the drugs. Note: drugs are still illegal, but the government only condones the use of soft drugs, like marihuana. Hard drugs, like cocaine and heroin, are still highly illegal, and are not condoned.
6. The story of the boy holding his finger in a dike to prevent a flood.
There is a story about a boy, Hans Brinker, who sticks his finger in a hole in a dike. By doing that, he prevents a flood. Let me wake you up from that dream: it was made up by some American writer, and it can't possibly be true. If there were such a small hole in a dike, the water pressure would be enormous, so a small boy would never be able to stop the water by putting his finger in the hole.
Also a Beach Boys album, recorded, as the title suggests, in Holland. Brian Wilson's almost total non-involvement (although he did write the incredible Sail On Sailor with Van Dyke Parks among others) meant this was one of the most democratic of Beach Boys albums, with everyone, including manager Jack Rieley, getting almost equal writing credits; it includes one of only three Mike Love songs written without a collaborator that the band ever recorded - Big Sur.
Much of the album is based around the themes of travel, homesickness and love of California. While one of their most highly rated albums, it suffers in hindsight from a surfeit of 70s pretension...
1. Sail On Sailor
2. Steamboat
3. Big Sur
4. Beaks Of Eagles
5. California
6. Trader
7. Leaving This Town
8. Only With You
9. Funky Pretty
10. Mount Vernon & Fairway - A Fairy Tale
Band members at the time - Brian Wilson, Carl Wilson, Dennis Wilson, Mike Love, Al Jardine, Blondie Chaplin, Ricky Fataar.
Currently available on Capitol Records as a twofer with Carl & The Passions (So Tough)
Previous album - Carl & The Passions (So Tough)
Next album - The Beach Boys In Concert
One of the traditional three administrative parts of the English county of Lincolnshire. It is the dead-flat fenny region in the south to south-east, bordering on the inlet of the North Sea called The Wash, and adjoining Cambridgeshire to the south. The main town is Boston.
The other two parts of Lincolnshire are Lindsey to the north and Kesteven to the west.
In the 1974 reorganization of local government Holland ceased to have an administrative function. It now consists of two districts called Boston and South Holland.
Holland is the synonym for the Netherlands widely used by foreigners and tourists. The word Holland originally comes from Holt, which means forest. Since this country was not only very low-lying (neder = nether = low), but also covered with large forests, the name Holt-land (later Holland) was used for the country in the 16th and 17th centuries.
The two main provinces are North-Holland (capital: Haarlem) and South-Holland (capital: Den Haag).
After the merger of all the old provinces in the 18th century, the Kingdom of the Netherlands was formed.
Holland is used as a synonym for the Netherlands today, but officially it's still Nederland, the Netherlands, les Pays-Bas or die Niederlande.
Hol"land (?), n.
A kind of linen first manufactured in Holland; a linen fabric used for window shades, children's garments, etc.; as, brown or unbleached hollands.
© Webster 1913.
|
global_05_local_4_shard_00000656_processed.jsonl/83894
|
Sacred datura root
Icon: xander root
Effects: Sacred datura poison for 30s; -2 Perception for 90s
Component of: Dark datura, Datura hide, Datura antivenom
Quests: Rite of Passage
Sacred datura root is a consumable item in the Fallout: New Vegas add-on Honest Hearts.
The Pip-Boy icon is identical to that of the xander root. The plant from which the sacred datura root is picked appears as a rather small green bush blossoming with white flowers, which look similar to morning glory flowers. Although the player can easily see that it is a poison, it can still be consumed as if it were food. Doing this will lead to poison effects similar to those of Mojave Wasteland poisons, such as that of cazadores, nightstalkers, etc.; however, it can only be cured by the datura antivenom, as the antivenom from the Mojave has no effect on this type of poison. During the quest Rite of Passage, it is shown that the sacred datura root can be used to create a bitter-tasting tea with rather severe hallucinogenic effects. The only known person able to create this tea is White Bird of the Sorrows tribe.
• It can also be purchased from Joshua Graham.
|
global_05_local_4_shard_00000656_processed.jsonl/83905
|
Papers Published
1. Howle, L.E., Efficient implementation of a finite-difference/Galerkin method for simulation of large aspect ratio convection, Numerical Heat Transfer, Part B: Fundamentals, vol. 26, no. 1 (1994), pp. 105-114.
(last updated on 2007/04/06)
The efficiency of a mixed finite-difference/Galerkin method is examined for simulation of steady two-dimensional Rayleigh-Benard convection of large aspect ratio. It is found that computation time is reduced by an order of magnitude for large-aspect-ratio systems if the summations resulting from the formation of inner products are expanded prior to code compilation. The expansion of the summations is carried out by a source code utility, which writes the expanded and simplified source. This eliminates the need to store and multiply sparse tensors. The method extends to large-aspect-ratio problems that would previously be computationally impractical using the finite-difference/Galerkin technique.
Keywords: Finite difference method; Tensors; Mathematical models; Boundary layers; Approximation theory; Function evaluation; Computational methods
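The entry above describes the idea only in prose. As a rough illustration - not the paper's actual implementation, and with a made-up toy interaction tensor - the trick of expanding the Galerkin inner-product summations once, instead of storing and multiplying a sparse tensor at every evaluation, can be sketched in Python, with SymPy standing in for the source-code utility:

# Illustrative sketch only -- not the code from Howle (1994).
import sympy as sp

N = 3                                  # toy number of Galerkin modes
a = sp.symbols(f"a0:{N}")              # mode amplitudes a0, a1, a2

# Toy sparse interaction tensor C[i, j, k]; most entries are zero.
C = {(0, 1, 2): 1.0, (1, 0, 2): -0.5, (2, 2, 0): 0.25}

# Run-time version: loop over the stored sparse tensor at every evaluation.
def rhs_runtime(amps):
    out = [0.0] * N
    for (i, j, k), c in C.items():
        out[i] += c * amps[j] * amps[k]
    return out

# "Code-generation" version: expand and simplify the sums once, then compile.
expanded = [sp.simplify(sum(c * a[j] * a[k]
                            for (i, j, k), c in C.items() if i == m))
            for m in range(N)]
rhs_compiled = sp.lambdify(a, expanded)

print(expanded)                        # the written-out right-hand sides
print(rhs_runtime([1.0, 2.0, 3.0]))    # [6.0, -1.5, 0.75]
print(rhs_compiled(1.0, 2.0, 3.0))     # same values, with no tensor loop

In the paper the expansion is performed by a source-code utility before compilation; the SymPy call above only mimics the effect of writing out and simplifying the sums ahead of time.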
|
global_05_local_4_shard_00000656_processed.jsonl/83938
|
Intensity Frontier
Experiments at the Intensity Frontier
How it works
Though trillions of naturally occurring neutrinos pass through us each second, they interact so rarely with other particles that they are very difficult to detect. More than 10 trillion man-made neutrinos pass through the 6,000-ton MINOS far detector, located in the Soudan Underground Laboratory in Minnesota, each year. But only about 1,500 collide with atoms inside the detector.
CP Violation
One reason researchers study neutrinos is to try to explain why we exist. Physicists theorize that the big bang created equal amounts of matter and antimatter. When corresponding particles of matter and antimatter meet, they annihilate one another. But somehow we're still here, and antimatter, for the most part, has vanished without a trace.
If this is true, it seems that at some point, matter and antimatter must have behaved differently from one another.
Physicists had long held that nothing would change about the laws of physics if every particle were mirror-reflected and replaced with its antiparticle. This is called charge-parity symmetry. But it turns out that matter and antimatter are not exactly equal opposites, and this could explain why they exist in unbalanced quantities. Breaking charge-parity symmetry is called CP violation.
In order to advance the theory that CP violation caused the imbalance between matter and antimatter, physicists need to observe it in action. They observe decays of particles that can result in either matter or antimatter. If the decays produce the two in unequal amounts, that could signify the new physics researchers hope to discover.
Researchers at Fermilab use the NuMI beam to study neutrino oscillations, when neutrinos change from one flavor of neutrino to another. If antineutrinos do not follow the same pattern as neutrinos when they change from one flavor to another, this is a signal of CP violation. The same mechanism that could cause neutrinos and antineutrinos to oscillate differently can cause decays that would create more matter than antimatter and help explain the dominance of matter in the universe.
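The page gives no formulas, but as a rough illustration - not the experiments' actual analysis - the standard two-flavor approximation for the probability that a muon neutrino has changed flavor after traveling a distance L can be evaluated in a few lines. The mixing and mass-splitting numbers below are only ballpark, MINOS-like assumptions, not values taken from this page.

import math

def oscillation_probability(sin2_2theta, delta_m2_ev2, length_km, energy_gev):
    # Two-flavor approximation: P = sin^2(2*theta) * sin^2(1.267 * dm^2 * L / E),
    # with dm^2 in eV^2, L in km and E in GeV.
    phase = 1.267 * delta_m2_ev2 * length_km / energy_gev
    return sin2_2theta * math.sin(phase) ** 2

# Illustrative, roughly MINOS-like numbers (assumed):
print(oscillation_probability(sin2_2theta=1.0,      # near-maximal mixing
                              delta_m2_ev2=2.4e-3,  # atmospheric mass splitting, eV^2
                              length_km=735.0,      # Fermilab-to-Soudan baseline
                              energy_gev=3.0))      # typical NuMI beam energy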
Scientific results
CP Violation in Kaons
Observation of Tau Neutrino
Unexpected Behavior of Neutrinos
Experimenters at Fermilab's NuTeV, Neutrinos at the Tevatron, experiment discovered an imbalance of neutrinos and muons emerging from high-energy collisions of neutrinos with target nuclei in a 700-ton detector.
The results of generations of particle experiments with other particles have yielded precise predictions for the value of this ratio, which characterizes the interactions of particles with the weak force, one of the four fundamental forces of nature. But neutrinos did not fall into line with those expectations.
Experimenters using the Large Electron Positron at CERN, the European particle physics laboratory, measured the same neutrino interaction in a different particle reaction. They saw the same discrepancy, although with less precision. If the discrepancy is real, it could be another indication that neutrinos truly are different.
Current Experiments:
Works in Progress:
|
global_05_local_4_shard_00000656_processed.jsonl/83944
|
Ban Meat Sales of Threatened Species
Target: Melinda Plaisier, Food and Drug Administration’s Associate Commissioner for Regulatory Affairs
Goal: Stop the sales of meat from animals on threatened species lists
Recently, a restaurant in California added lion meat to its menu for a few weeks – a decision for which it faced serious backlash from the community. Many long-standing regulars have refused to return to the restaurant, which offers other exotic meats such as ostrich, alligator, and kangaroo. The restaurant has since stopped serving lion – which the restaurant’s owner claims was provided from a farm in Illinois – but this does not address the issue. Since lions are on the threatened species list, rather than the endangered, the sale and distribution of their meat is not illegal. The restaurant has faced no legal repercussions for selling the meat of a threatened animal.
The issue is such: providing consumers with a taste of a threatened species may spark more interest in consumption, leading to further population declines within an already threatened species. The Food and Drug Administration sets the regulation standards for the sale and consumption of foodstuffs, and the threatened species list needs to be added to the list of banned foods.
Investigations by a non-profit animal rights organization, Born Free USA, claim that lion meat often comes from captured lions which have been turned into exotic pets, rather than being raised on meat farms. This leads to further concern regarding any species on the threatened list: are these animals in fact raised on farms, or captured from declining wild populations only to be killed and eaten?
Sign this petition to urge the Food and Drug Administration to add the sale of threatened species to the list of illegal food items. Preventing the decline of threatened animal species can lead to population growth, whereas killing and eating animals that are already threatened will only hasten their decline.
Dear Melinda Plaisier,
A restaurant has recently found its way into the media’s spotlight for serving lion meat. While the restaurant owner claims that the lion meat originated from a farm, research indicates that the origins of this type of meat are often misrepresented – many of these animals are, in fact, captured from the wild. The restaurant in question faces only public backlash for selling a threatened species for consumption, yet faces no legal implications: there is nothing stopping them from doing this again.
I urge you, as the Associate Commissioner for Regulatory Affairs, to enact stricter regulation on the sale of threatened species for consumption. As they are not yet endangered, regular public consumption can only lead to further decline in native populations of these species. Restaurants cannot be allowed to offer threatened species to consumers, and I fear an initial taste may spark heavy interest in further consumption of these threatened species. Please, set stricter regulations for the sale of meat belonging to threatened species.
[Your Name Here]
Photo credit: Kristin De More via Freefoto
Sign the Petition
Facebook Comments
1. jeanne rogers says:
Signed and noted.
2. GGma Sheila D says:
Isn’t it about time we stop killing animals that are threatened and endangered? Humans have already brought these species to the brink of extinction. When they are gone will we kill and eat other humans??
There comes a point where we HAVE to say ENOUGH IS ENOUGH. That point is NOW.
3. Please do what you will and can to help endangered species, and always — when possible — do a little good to help mankind.
How often we have not considered our impact on mankind and on animals, and the earth.
• alper arslan
• Caitlin McInerney
• Michelle Lacombe
• Jenna Miles
• leland hodges
• marilou glenz jung
• moreau AGNES
|
global_05_local_4_shard_00000656_processed.jsonl/83947
|
[ hiatus ]
i have been on a semi hiatus because of school for more than 10 months, but my summer break finally started. i was planning to end the semi hiatus and go back to normal in early june, but i haven't felt like it. i don't know why, but i haven't really been into kpop lately and that's also why i don't particularly enjoy running this blog anymore. i might be back later. i'm not sure yet, so i won't be deleting this blog anytime soon.
i'll still be active on my twitter and i might make another blog for my other fandoms. if so, i'll add the link on this page. i'll leave my inbox open if you have something to ask or say and you can always ask for my skype if you want to stay in touch with me uwu
see you around
|
global_05_local_4_shard_00000656_processed.jsonl/83961
|
Postby Desty on Tue Sep 25, 2001 1:53 am
I wonder what this will do to his opinion about Kiwi. It doesn't seem like they were getting along very well to begin with. Hopefully Kiwi will get a chance to explain himself before Berissa's dad throws him out. Being an interspecies bigamist doesn't sound very flattering...
Regular Poster
Posts: 39
Joined: Fri Jan 01, 1999 4:00 pm
Location: Norway
Postby Werecat on Tue Sep 25, 2001 4:55 am
Hahaha!!! That was halarious! <--(My new saying)
=^^= <--|meow!
Regular Poster
Posts: 37
Joined: Fri Jan 01, 1999 4:00 pm
Postby McFrugal on Tue Sep 25, 2001 9:04 am
Now *that* is an interesting misunderstanding! Interspecies bigamy! I never thought that would be accused of anyone, heh!
Frugality is key.
Regular Poster
Posts: 94
Joined: Fri Jan 01, 1999 4:00 pm
Location: New York
Postby EsylaMcSassy on Thu Sep 27, 2001 1:48 am
It's HIGH-FREAKIN-LARIOUS!!!!
Love and other indoor sports,
Read Angry People or I'll sprinkle lime pixie sticks in your shoes.
The Four Toon Tellers aren't cartoonists...they're fruit and cake!
Spread the word....Joe Average is back! Wait, that's four words...
"Angry People made me the intellectual I am today!" -....Carson
User avatar
Regular Poster
Posts: 835
Joined: Fri Jan 01, 1999 4:00 pm
Location: Lansing, Michigan
|
global_05_local_4_shard_00000656_processed.jsonl/83963
|
Old 12-01-2012, 12:21
Forum Member
Join Date: May 2002
Location: South East London, UK
Posts: 7,400
The new series of Room 101 starts Friday 20th January at 8.30pm on BBC1. According to the TV guide there are 8 episodes in this series.
And the first guests are Fern Britton, Robert Webb and Danny Baker, which is the one that I went to see. It also means they are not airing in the order they were filmed, as this was the 3rd episode to be recorded.
NoEntry2k
|
global_05_local_4_shard_00000656_processed.jsonl/83995
|
Photo credit: JLplusAL on Flickr (CC License)
From an early age, while other heavy-hearted teenage girls around me were torturing themselves with the question "why am I so fat/ugly/short/stupid," I was asking myself a very different question:
"Why, oh why, am I so hairy?"
If only I had known at the time that hairiness in women, or hirsutism in medical terms, really isn't that rare at all. In fact, most women have hair in places that would generally be considered "inappropriate" according to standards set by television and media.
To separate reality from the unrealistic standards of modern society, we need to look at where it is, in fact, normal for women to have body hair. Arms, legs, the bikini line, and underarms are a given. However, did you know that above the lip, on the chin, on the hands, fingers, feet and toes, below the bellybutton, and around the nipples is also normal in small amounts?
The truth is that hairiness in women should only become a concern when:
a) the amount of hair is obviously abnormal,
b) hair growth suddenly increases without obvious reason, or
c) the quantity of hair is detrimental to the emotional well-being of a person
Excessive hair or sudden hair growth may both be the result of an underlying condition which must be treated. Polycystic Ovarian Syndrome is one of the most common, and includes further side-effects such as acne, weight gain, and irregular periods. Certain medications such as hormones or steroids may also result in irregular hair growth. If you are concerned about the quantity of hair present on your body, it never hurts to pay a visit to your local doctor to make sure nothing is amiss.
Finally, as mentioned in point c), if the hair on your body is causing you emotional stress or depression, it may be time to take action. All permanent solutions, specifically laser hair removal and electrolysis, are expensive, so you have to consider whether your depression is serious enough to warrant an expense of upwards of $2000. You may also consider medical treatments such as Vaniqa, which inhibits an enzyme involved in hair growth, or oral pharmacological agents, which include oral contraceptives, cyproterone acetate, and more. (To see a full list, visit the Monash University pamphlet on hirsutism.)
In my case, laser hair removal on my legs changed how I look at myself. No more jeans on the beach in hot weather while everyone else jumps into the water. No more fear of short-shorts and skirts. Just pure confidence in myself and my body.
Who is Heather?
A lifetime of plucking, waxing, snipping, zapping and shaving has made me an expert in the field of hair removal. When I'm not ripping out hairs, you'll find me drawing, out photographing the world, and teaching English to wee children.
Egbertus Marius ten Cate, born 6 December 1868, died 21 December 1926, was a Dutch Mennonite minister. After having finished his theological studies at the Amsterdam University and the Mennonite Seminary, he became a ministerial candidate in 1893. He served the congregations of Noordhorn 1894-1896, Monnikendam 1896-1904, and Apeldoorn 1904-1924. On 24 September 1924 he collapsed in the pulpit. As the pastor of Monnikendam and later of Apeldoorn, he also took care (from 1903) of a group of Mennonites at Amersfoort. When this group became an independent congregation (1919), ten Cate also served this congregation until he retired. Ten Cate was much interested in Mennonite history and wrote many articles in the Dutch weekly De Zondagsbode and in Dutch non-Mennonite periodicals. He contributed important articles to the Doopsgezinde Bijdragen (issues of 1899, 1903, 1904, 1906, 1911, 1918). In 1912 ten Cate also revised and edited the third edition of A. Brons' Ursprung, Entwicklung und Schicksale der altevangelischen Taufgesinnten oder Mennoniten.
Bibliography
De Zondagsbode (2 January 1927).
Author(s) Nanne van der Zijpp
Date Published 1953
Cite This Article
MLA style
van der Zijpp, Nanne. "Cate, Egbertus Marius ten (1868-1926)." Global Anabaptist Mennonite Encyclopedia Online. 1953. Web. 29 Jul 2014.,_Egbertus_Marius_ten_(1868-1926)&oldid=86573.
APA style
van der Zijpp, Nanne. (1953). Cate, Egbertus Marius ten (1868-1926). Global Anabaptist Mennonite Encyclopedia Online. Retrieved 29 July 2014, from,_Egbertus_Marius_ten_(1868-1926)&oldid=86573.
Adapted by permission of Herald Press, Harrisonburg, Virginia, and Waterloo, Ontario, from Mennonite Encyclopedia, Vol. 1, p. 526. All rights reserved. For information on ordering the encyclopedia visit the Herald Press website.
Al Qaeda Magazine Publisher Faces Charges
A grand jury convened yesterday to see if 24-year-old North Carolina blogger Samir Khan could be charged with terrorism offenses after he created the English-language jihadi magazine Inspire, with stories like "Make a Bomb in the Kitchen of Your Mom."
NPR spoke with sources connected to the case, and former acquaintances of Khan's, some of whom have received grand jury subpoenas. Among the charges being considered are providing material support to a terrorist organization and conspiracy to commit murder overseas. He apparently wasn't the most popular guy around, either: "Samir had very few friends around here, maybe one or two friends," a spokesman for the Islamic Center in Charlotte told NPR.
Khan flew to Yemen last year and disappeared, and he eventually published Inspire, the English-language magazine that offered tips on how to correctly pack before heading off to learn the tricks of the trade in Al Qaeda training camps, among other useful information. Ah, the kids these days... they just don't make zines like they used to.
[NPR; Image via]
NBC has been pushing Smash like crazy so everyone knows it's a show about a musical about Marilyn Monroe. Finally airing on TV tonight after being online for weeks, the show has big numbers and lots of talk about Marilyn and talent and being a star. Here's the pilot in 45 seconds.
Last name origins & meanings:
1. English and Scottish: from the Middle English personal name Dodde, Dudde, Old English Dodda, Dudda, which remained in fairly widespread and frequent use in England until the 14th century. It seems to have been originally a byname, but the meaning is not clear; it may come from a Germanic root used to describe something round and lumpish—hence a short, plump man.
2. Irish: of English origin, taken to Sligo in the 16th century by a Shropshire family; also sometimes adopted by bearers of the Gaelic name Ó Dubhda (see Dowd).
3. Daniel and Mary Dod, natives of England, emigrated to Branford, CT, in about 1645.
This name appears in the following lists: Country Stars
Crappy Casemod Contest Still Ongoing
Don't forget to send in your pictures for the lousiest casemods you can find. They don't have to be ones you made personally—although we've gotten a few personal entries from Gizmodo readers that make us shake our heads in shame—so send them in to with the subject "Crappy Casemod Contest."
Google Goggles Gets Text Translation: See C'est Bon!
Google Goggles—the fun-to-say visual search app—now includes a point-and-click translation tool that'll take a lot of the guesswork out of your next overseas vacation. It's like having a little multilinguist living in your phone.
It also looks stupid simple to use: just point your phone at a word or phrase, press the shutter button, and you'll be given the option to translate it. The app's also smart enough that it'll automatically detect the source language for you, in case one morning you wake up in a country you don't recognize.
What's it good for? Menus, street signs, notes furtively slipped to you over a cappucino at Les Deux Magots. And as much as I fear this takes some of the adventure out of being a stranger in a strange land, there's definitely value in knowing that a three cheese puff salad is staring you right in the face.
Can't decipher what's on a foreign menu? Google's point-and-click translation tool on your mobile phone solves your problem.
Google Translate integrated with mobile phone application Google Goggles makes for the ultimate traveller app
Announced today, the newest version of mobile application Google Goggles (image recognition technology) enables automatic text translation using the phone's camera. From translating street signs to navigating foreign menus, people can now use their Android mobile devices for easy on-the-spot translation using the technology of Google Goggles plus the engine behind Google Translate.
Here's how it works:
* Download Google Goggles from the Android Marketplace
* Point your phone at the word or phrase you want to translate
* Press the shutter button
* Goggles will recognize the text, and give you the option to translate
* Press the translation button to select the source and destination language. (Note we do our best to detect the source language)
Current languages supported include English, French, Italian, German and Spanish. We plan to extend our recognition capabilities to other languages over time, including both Latin-based and non-Latin languages.
In addition to translation, Google Goggles v1.1 features a larger database of recognized objects, improved user interface, and the ability to initiate visual searches using images in your phone's gallery. Point your phone at a building that takes your eye in Paris or Rome for example and be connected to search results telling you all about it.
34,000 Year Old Life Found Trapped in Salt Bubbles
Digging up salt in the middle of the desert usually yields a pretty boring find. As in, lots of salt. But a team of scientists in Death Valley hauled up a lot more than that—perfectly preserved, millennia-old bacteria.
Luckily, the lifeforms came in peace—though it's hard not to when you're a bacterium trapped in a salt crystal. "It's permanently sealed inside the salt, like little time capsules," explained Professor Tim Lowenstein of Binghamton University.
The bacteria was found in a sort of suspended animation—not moving, not reproducing—just sort of... sitting there. For over thirty thousand years, which makes them one of the oldest forms of life ever discovered on Earth. Their secret to survival? Algae, and lots of it. Enough food to keep them running—in sleep mode—for all these years. And most incredibly, once they were thawed out a bit, the bacteria started to reproduce again after being removed from their crystal cells. Tenacity! [LiveScience]
GlucoTools is a set of tools for the Palm Pilot and HandSpring Visor type PDAs that assist pumpers (diabetics using insulin pumps, Animas, Cozmo, Desetronic, Minimed, etc) with managing diabetes. It may be useful for non-pumpers as well. GlucoTools is handy to double check meal and correction boluses. This software and source code are released under the terms of the GNU General Public License.
The insulin-pumpers website is a very good website for insulin pumpers or those considering an insulin pump.
The GlucoTools project is hosted on SourceForge.
I currently maintain five other projects on SourceForge, the Finite State Kernel Creator, NAGS Spam Filter, SourceForge Banner, NIC Diagnostics, EZ-USB2131 Linux Kernel Module [if you didn't notice, that was a shameless plug]
Screen Shots
The GlucoTools main screen
A meal bolus - time to eat!
Reviewing the records
Some interesting info
Some Features
PDA Requirements
Future Release
07/25/06: OK, so I'm running a little behind. I do plan another release at some point in the future, I hope sooner than later because I have some cool ideas for the desktop.
09/01/05: I plan to release v3.0 in six to eight months. Shortly thereafter, I'll release the first version of GlucoTools Desktop. I have a prototype that is written in Perl and generates JavaScript web pages so it will be OS/platform independent.
Terms of Usage
You must agree to the following terms before using GlucoTools:
1. I will NOT make any insulin dosage or other decision based solely on GlucoTools
2. I use GlucoTools at MY OWN RISK and will consult my physician before making any changes whatsoever
3. I have read and agree to the terms of the GNU General Public License
If you don't agree to the above terms, don't use GlucoTools.
While GlucoTools is intuitive, reading the user documentation is a must to fully understand the features of the tool. The importance of reading the documentation can't be stressed enough. The document is available in the Portable Document Format (PDF) and the Adobe Acrobat Reader is freely available (look for the "Get Acrobat Reader" icon). On many popular Linux distributions, xpdf and ghostview are two applications used to view and print PDF documents.
GlucoTools v2.0 Documentation (PDF) for you to read.
GlucoTools v2.0 Application (PRC) for downloading to your PDA.
GlucoTools v2.0 Source code (tar gzipped); so, you want to see under the hood do you?
Subscribe to the GlucoTools announcements list to be notified when new versions are released.
A list of changes within each release is available.
Getting Help
Any questions should be posted to the GlucoTools users list; you must be subscribed to the list to post (send email) messages to it. Previously posted questions and answers should be browsed first.
Reporting Bugs
Before submitting a bug, be sure to browse the submitted bugs to ensure it has not already been reported. If you don't see a similar bug, then submit a new bug with the following information:
1. From the GlucoTools main screen, press the hardware scroll-up button to display the GlucoTools information screen containing the PalmOS version, ROM ID, software version, GlucoTools source file version, and database version.
2. Type of PDA (Visor Neo, Palm III, etc.).
3. Describe the sequence of steps that exposed the bug.
4. Provide the exact text displayed, if any, when the bug is exposed.
5. Any known work-around for this bug.
Funny Things People Say or Do or That I Say or Do
Mar 7
Whoopsie Daisy of the Day: At the opening ceremony of a Kazakh ski festival, a blooper of Borat proportions threatened to make local officials national laughingstocks.
As guests of honor, including regional administration head Nuraly Saduakasov, placed their right hand on their breast and turned toward the flag in preparation for Kazakhstan’s national anthem, the PA system suddenly began blaring Ricky Martin’s “Livin’ La Vida Loca.”
The humorous bungle was quickly corrected, but not before the entire embarrassing episode was caught on tape and uploaded online.
Click here to search for abstracts by keywords
The entire volume of abstracts is available to view as a PDF file, and may be searched in Adobe Acrobat by pressing Ctrl+F or through the Edit > Search menu. Please note the file is 32MB in size!
To reference a Goldschmidt 2010 abstract in an article, please note that they are published in a special edition of GCA (Geochimica et Cosmochimica Acta), Volume 74, Issue 11 Supplement 1 (June 2010). The page number of a particular abstract can be found in the PDF files.
You may also view the abstracts by the first author's family name (note: these files are around 2MB on average):
Individual (one page) abstracts can be accessed via the Program by clicking on the abstract of choice, or by performing a keyword search.
Can Polymath be scaled up?
As I have already commented, the outcome of the Polymath experiment differed in one important respect from what I had envisaged: though it was larger than most mathematical collaborations, it could not really be described as massive. However, I haven’t given up all hope of a much larger collaboration, and in this post I want to think about ways that that might be achieved.
First, let me say what I think is the main rather general reason for the failure of Polymath1 to be genuinely massive. I had hoped that it would be possible for many people to make small contributions, but what I had not properly thought through was the fact that even to make a small contribution one must understand the big picture. Or so it seems: that is a question I would like to consider here.
One thing that is undeniable is that it was necessary to have a good grasp of the big picture to contribute to Polymath1. But was that an essential aspect of any large mathematical collaboration, or was it just a consequence of the particular way that Polymath1 was organized? To make this question more precise, I would like to make a comparison with the production of open-source software (which was of course one of the inspirations for the Polymath idea). There, it seems, it is possible to have a large-scale collaboration in which many of the collaborators work on very small bits of code that get absorbed into a much larger piece of software. Now it has often struck me that producing an elaborate mathematical proof is rather like producing a complex piece of software (not that I have any experience of the latter): in both cases there is a clearly defined goal (in one case, to prove a theorem, and in the other, to produce a program that will perform a certain task); in both cases this is achieved by means of a sequence of strings written in a formal language (steps of the proof, or lines of code) that have to obey certain rules; in both cases the main product often splits into smaller parts (lemmas, subroutines) that can be treated as black boxes, and so on.
This makes me want to ask what it is that the producers of open software do that we did not manage to do. I may not have the right answer to this question, but I do have a suggestion. Again, I have to admit that there is a lot I do not know about how open software is produced — for example, is there some big-picture planning stage before people start actually writing code, and if so, how is it organized? I’d be interested to hear from anyone who can answer this kind of question, and my suggestions may well need to be refined in the light of the answers.
Here, though, is my preliminary diagnosis. What I think we did that made it hard for all but a few people to contribute was to work on something that was not the final document: instead, we used blog comments in order to produce a high-level plan, which gradually became more detailed and precise. The comments were more like a conversation, and once an idea was digested by the participants one could leave the relevant comment behind and move on. Of course, some of the ideas made it into the eventual proof, but by that time they had been discussed in several other comments and their outer form had often been substantially modified.
Now it might seem as though we could not have done otherwise: it looks like a pretty essential feature of solving an unsolved mathematical problem that one does not know in advance what that proof will look like; and it also seems as though the best way to find out is to start with high-level thoughts and lower the level only when a thought seems to be promising. I do not want to deny any of that. What I would like to suggest is that we change what we think of as the “final document”. The obvious notion of “final document” is a write-up of the argument that eventually works, but there is a different notion that might serve as a better model for Polymath projects. I’m not quite sure what the best name for it is, but a first attempt is “proof-discovery tree”.
I am not going to discuss what the best implementation would be, because I do not know enough about what wiki-like facilities are out there, but the basic idea would be to produce an online document that thoroughly investigated all reasonable approaches to the main problem and arranged them in a natural hierarchical way. If, say, this was done on a wiki, then the main page would have links to subsidiary pages that would discuss very general ideas. (In the case of DHJ(3), one of these would have been the idea that we started with, namely, to model a proof on the triangle-removal approach to the corners problem.) Each of these general pages would naturally throw up several questions, and there would be links from these questions to pages at the next level of the tree, one page devoted to each question. A big priority in all these lower-level pages would be to make them as self-contained as possible, so that one could treat the whole process recursively: in theory you could go to a lower-level page and treat it just as you would the main problem, proposing general approaches to it, asking questions connected with those approaches, and so on. Of course, some of these lower-level questions might not be very interesting in isolation, but if you wanted to understand the motivation for them then all you would have to do is go up a level or two to see how they arose.
What would determine when a branch of this tree of web pages ended? It would be when a question was definitively answered. This definitive answer could well propagate up a few levels in the tree: for example, it might be a counterexample to a conjecture one level up, which might itself be so obviously necessary for the success of an approach outlined one level up still that that approach could be definitively labelled as not working (in which case one would put a note to that effect, but leave the lower parts of the tree that explained why it didn't work), and so on. A complete proof-discovery tree could then be defined as one where all its branches had definitive endings, though it seems highly unlikely that this would ever be achieved for a problem such as DHJ. A successful proof-discovery tree could perhaps be defined recursively as follows: at the top level, one would have a precise approach to the problem, with links to subproblems that would be sufficient, if solved, to solve the main problem, and each of these subproblems would have successful proof-discovery trees. The base case would simply be a question with a definitive answer.
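To make the recursive definition above a little more concrete, here is a minimal sketch of how such a tree and its notion of success might be represented. It is written in Python purely for illustration: none of the names or fields below come from any existing platform, and the split into "question" and "approach" pages is just my shorthand for the two kinds of node described in this post.

```python
from dataclasses import dataclass, field
from typing import List, Literal

@dataclass
class Node:
    """One page of a hypothetical proof-discovery tree."""
    title: str
    kind: Literal["question", "approach"]  # a question links to candidate approaches;
                                           # an approach links to the subproblems it needs
    answered: bool = False                 # base case: a question with a definitive answer
    children: List["Node"] = field(default_factory=list)

def is_successful(node: Node) -> bool:
    """Check, purely formally, whether the subtree rooted at this page is successful."""
    if node.kind == "question":
        # A question is settled if it has a definitive answer, or if at least one
        # approach to it has itself become the root of a successful subtree.
        return node.answered or any(is_successful(c) for c in node.children)
    # An approach succeeds only when every subproblem it depends on is settled.
    return bool(node.children) and all(is_successful(c) for c in node.children)

# A toy fragment of the DHJ(3) tree sketched later in this post.
dhj = Node("Density Hales-Jewett for k=3", "question", children=[
    Node("Mimic the triangle-removal proof of the corners theorem", "approach", children=[
        Node("Is there a usable regularity lemma for the disjointness graph?", "question"),
    ]),
])
print(is_successful(dhj))  # False until the leaf questions are definitively answered
```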
In general a proof-discovery tree would have far more information than just a proof of a theorem: it would contain explorations of many other related ideas, and they would be organized in such a way that even if the tree was not a successful one, the document would make it easy to see what ideas had been considered and either rejected or temporarily abandoned. Such a document would be similar to the long sequence of comments that resulted from Polymath1, but with two differences, one major and one minor. The minor difference is that not everything in those comments would be worth including in a proof discovery tree. The major difference is that the logical structure of the mathematical ideas would be far more apparent. This would be a huge advantage for anybody who wanted to contribute to the project: they could simply follow a branch that interested them until they got to the end, and at that point they would attempt to make a contribution. Or if they preferred they could jump straight to a fairly deep point in the tree and think about a subproblem in isolation.
How might this all work in practice? I think it could be done in a way that is not too different from the way Polymath1 was organized, but there would be a change of emphasis. Instead of the blog conversation being seen as primary and the wiki being an add-on, the more codified proof-discovery tree would be the main focus of attention. However, there would still be a blog conversation. A typical contribution to the collaboration might be the creation of a new page of the proof-discovery tree and a brief explanation on the blog of what one had done and why. But it might be a minor edit to the proof-discovery tree (perhaps to make some page easier to understand), in which case a blog comment would not always be necessary — though for more elaborate edits it probably would be.
Why bother with the blog comments at all? Well, the linear structure of Polymath1 had definite advantages as well as disadvantages. The main advantage was that it was easy to find out what had been done recently. It was also good to have personal contact with other participants and to keep track of who had said what. And it would almost certainly be useful to be able to make blog comments that did not obviously fit into a proof-discovery tree.
My hopes for Polymath1 before it started were that it would be possible to make contributions without much effort. My reason for believing in this possibility was that, as Michael Nielsen elegantly put it, the solution to a problem arises as a result of an aggregation of small insights. I hoped that it would be possible to break the process of discovery down into sufficiently small steps that each one was fairly easy.
To some extent, that is what happened, but a serious problem with the idea was that, as I have already mentioned, having a small insight often depends on a rather deep understanding of the problem at hand. (If nothing else, this understanding helps one recognise which ideas are likely to be helpful.) A second problem is that a small idea can often depend on some other rather large ideas. For instance, “Mimic the proof of Theorem X” could be a small idea in the sense of being an idea that one can think of quickly and express in just six words, but quite a big idea in another sense if the proof of Theorem X has many stages, some quite technically complicated.
If we were to organize a Polymath project in the way that I am suggesting, then these two difficulties could be alleviated to some extent. We would value very highly the formulation of precise questions with yes/no answers, because such questions can be considered in isolation. Somebody adding such a question to the proof-discovery tree would be expected to explain it very carefully, and to present it as though it were the main question. This would be quite a lot of work, but the payoff would be that others would find it much easier to contribute. And in any case that kind of work could also be done collaboratively. Also, if somebody proposed a high-level approach such as “Mimic the proof of Theorem X,” the expectation would be that they, or others, would add links to detailed explanations of what Theorem X was and how it was proved.
If Polymath1 had been organized this way, then my initial contribution would have been to add to the top-level page (the main content of which would be a description of the density Hales-Jewett theorem), “Approach 1: mimic the triangle-removal approach to the corners theorem.” This would have been a link to an article explaining that idea in more detail. On the more detailed page there would have been the definition I gave of the tripartite graph in which triangles corresponded to combinatorial lines. There would also have been a link to a page explaining the triangle-removal proof of the corners theorem (on which there would have been a link to a page explaining the statement and proof of Szemerédi’s regularity lemma). And there would have been an enumeration of the definitions and proof steps associated with the proof of the corners theorem for which analogues were needed. These would have been hyperlinked to pages discussing them in more detail. One of these pages would, for example, have been a link to a page with the following subproblem: is there a usable analogue of Szemerédi’s regularity lemma for subgraphs of the tripartite graph where you join two sets if they are disjoint? And so on.
I have slightly oversimplified matters, because the logical structure of a proof-discovery tree would not always be as clear-cut as the above account would suggest. For example, if somebody proposes a variant of a question on the grounds that it might illuminate that question, then how does such a proposal fit into a proof-discovery tree? Here, I would envisage sticking to the tree structure, but making clear that one was not talking about formal logical implication. For instance, if a node of the tree was concerned with trying to prove Statement A, then Approach 1 might be, “Prove Statement B first, and then modify the argument to give a proof of Statement A.” There would be no guarantee that this approach would succeed, even if one managed to prove Statement B, but one could still have a link to a page all about Statement B. If the Statement B node ended up as a successful one, then one would go back to the Approach 1 node and follow a different subtree that was devoted to the question of how to modify the (now known) proof of Statement B. That subtree would be connected to Statement A in a more precise logical way.
My ultimate fantasy is that it might be possible to solve a problem without anybody taking a global view: everybody would concentrate on local questions, and at some point the proof-discovery tree would become a successful one. Success would be a purely formal property that one could verify automatically. How? Well, each time you had a node of the tree at the top of a successful tree, you would go to its parent node and make a note to the effect that one of the ingredients needed for that node to be successful had now been supplied. If it was the last ingredient then you would iterate this process, and if the top node became successful then you would be done.
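In the same purely illustrative spirit as the earlier sketch (and again with invented names rather than anything taken from a real system), the bookkeeping just described amounts to walking up from a newly answered leaf and checking, at each ancestor, whether the last missing ingredient has now been supplied:

```python
from typing import Optional

def link_parents(node: Node, parent: Optional[Node] = None) -> None:
    """Attach a parent reference to every page; the earlier sketch did not have one,
    so it is added here just to make upward propagation possible."""
    node.parent = parent
    for child in node.children:
        link_parents(child, node)

def mark_answered(leaf: Node) -> bool:
    """Record a definitive answer at a leaf, then walk towards the root, stopping at
    the first ancestor that still has missing ingredients.  Returns True if success
    propagates all the way to the top node, i.e. the whole tree is now successful."""
    leaf.answered = True
    node = leaf
    while node.parent is not None:
        if not is_successful(node.parent):
            return False   # this ancestor is not yet the top of a successful subtree
        node = node.parent
    return True            # the top node is successful: the problem is formally done
```

On the toy tree above, calling link_parents(dhj) and then mark_answered on the regularity-lemma leaf would return True, since that leaf was the only missing ingredient of the one approach in that tiny tree.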
I should explain further what I mean by "global view". In one sense the initial, very general nodes of the tree would constitute global views of the problem. But that is not what I am talking about. These nodes would still be local when considered as part of the proof-discovery tree: it might be possible to understand that a certain general approach was worth trying but have only a very limited appreciation of how that idea could play out. So what I mean by "global view" here is a good understanding of a large part of the tree rather than merely an understanding of the vertices that happened to be near the root.
If some fantasy like this became a reality, so that in an essential way the problem was solved by a collective super-brain and not by the combined global understanding of a handful of individual brains, then the problems about credit would be even more interesting. An ultimately successful Polymath project would be one in which nobody had done anything very impressive (just as a neuron doesn’t have deep thoughts) and yet the collective achievement was a notable one. Why would anybody want to contribute to such a project? I’m not sure, but I’m also not sure why so many people are prepared to give so much of their time to the open software movement. Perhaps it might be for some strange reason like wanting to know the solution to an interesting mathematics problem.
43 Responses to “Can Polymath be scaled up?”
1. Gil Says:
Dear Tim,
Actually the success of the project was beyond my expectation (and beyond any reasonable fantasies) both in terms of achieving the mathematical goal and in terms of the open collaboration. Your initial post contained a very detailed plan with 38 steps (A-Z+AA-LL) on an attack on the problem, and when I saw it I was quite pessimistic that this plan could lead to a large open collaboration. At the end it was a large open collaboration. To have a successful open collaboration even without a massive number of contributors is already significant.
Here there was a fair number of collaborators and an even larger number of precious participants who observed the progress.
Probably in order to make it larger you need to have less internal, often hectic, "competition" in the collaboration itself. On the other hand the intense mode of the efforts by a few participants was the major reason for the mathematical success of this project.
As you mentioned, problems that require much background and that a few people thought a lot about already are probably worse in terms of "massive collaboration". Your suggestion for polymath3 to openly discuss your ideas for Behrend-type upper bounds for Roth's problem is a great idea for a next open collaboration; the stakes are an order of magnitude higher than DHJT and it can be fruitful even if you will be the main player and it will force you to write these ideas in the open, and also if other people will just try to shoot them down. But it can lead to a larger form of open collaboration where the bounds will actually be pushed, or pushed relative to some other plausible conjectures.
I agree that the success of polymath1 depended on many people having some good prior understanding of the problem and related issues, and the open mode helped aggregate these understandings. Good! This is already very significant.
Is a large open collaboration a potentially good tool for gaining understanding collectively when we do not have it at all? We need more tries. Trying to attack a problem where little is known and little background is needed can be a good test.
As a metameta comment: I think some of the metadiscussion is premature.
2. name Says:
I believe that almost no open source project is a massive collaboration from the very start. Take for instance the Linux kernel. Before even releasing it to the public, Linus Torvalds had written a significant amount of code, a "skeleton" so to speak. Only after that did people start to contribute, adding things they needed that were missing, rewriting portions that they thought could be made better, and so on. The source code then grew a factor of 1000 over the next 17 years (the cumulative code size being much larger). Other projects start out with a mid size group, like the one in Polymath1, but having something with hundreds of contributors from the start is very rare.
A project then picks up contributors as it goes along: if your new hard drive doesn't work with the system you might be compelled to patch the device driver so that it does, and so on. An individual contributor typically doesn't, and need not, see the "big picture", but for every part of the software there is someone or a group of people who maintains that piece and knows how things fit together. (In the "cathedral" model this is a chosen group, otherwise it's someone whose track record makes people trust him or her.)
Random comments on Polymath: (i) If you come across a fun/simple lemma, consider if you really should write down the formal proof yourself — it might be a perfect opportunity for an outsider to get into the project. (ii) Online reading seminars such as the one Tao organized are an awesome idea! (iii) Don't make the platform overly complicated.
3. Andrew Stacey Says:
It’s a very interesting idea to compare mathematics with OSS. It’s something I’ve pondered a little recently. In fact, I’ve been setting up a blog/forum/wiki to explore this (and related) ideas!
My thoughts are summarised in this:
Open Source Mathematics
4. Matt Leifer Says:
One of the things that large-scale open source software products do is to offer a shallow learning curve, so that new contributors can start adding to the project straight away. Along with this, they also make it possible for individual developers to customize the software in their own way, without having to share the same "big picture" as the core developers.
In web-based applications (where I have most of my experience) there is often a plugin architecture that achieves this. For example, in WordPress you can easily write a plugin that displays a widget on the sidebar, a new theme, or a new language translation. These things can be written after reading only a small portion of the documentation and without understanding how most of the main WordPress engine works. The vast majority of developers just stay at this level, maintaining their plugins as the core engine is updated. However, if a developer starts working on more and more sophisticated plugins, e.g. something that changes the way that the admin controls work, they gradually have to absorb more of the documentation and start looking at the source code to figure out how to do things. In this way, they begin to notice bugs and improvements in the source code and start to submit patches to the core engine. If they get even more interested, they may then become part of the team working on core development.
It is also important that several different "big pictures" are allowed to coexist in the same project. For example, the core WordPress team want to make the best individual blogging platform on the planet. However, other groups see WordPress as a more general CMS and release distributions that are pre-installed with custom plugins for specific functionality. There is Buddypress (a general social networking engine), WordPress mu (a multi-user, multi-blog version of WP) and something that turns a wordpress installation into a Twitter clone. Similar examples exist in other open-source projects, e.g. often people will write a version of something for the mac that looks more mac-native than the original – compare OpenOffice vs. NeoOffice, Firefox vs. Camino, etc.
For most open-source projects, it is a relatively small group of developers that end up maintaining the core engine of the project. Most are working on plugins or customizations of the software for some specific purpose. Relatively few people care about the “big picture” and it is more usual to find a benevolent dictatorship rather than a democracy behind it all. In fact, these days it is fairly common to find a commercial company behind all the core development.
The bottom line is that what you need is a “plugin architecture” for polymath projects, i.e. a way of contributing small things without having to know much about the core of the project.
5. Chris Johnson Says:
"Is there some big-picture planning stage before people start actually writing code, and if so, how is it organized?"
For open-source projects, I think the model is often that the core of the program is built by one person, and this program becomes a successful open-source project if it is sufficiently interesting to, and extendable by, others. The big-picture planning ('software architecture') is done by the project initiator before he starts writing, and the practicalities of software development mean that once the core code has been written, it's hard to change the structure. In commercial software, the big-picture planning will be done by a small team of expert programmers and project managers.
This software-design stage is the part that is most like working out a mathematical proof, in the sense that it involves creative thought and lines of thought that lead one down blind alleys. In large commercial projects, there will be a formal specification of what the software is to achieve, and once the software architecture meets this specification, 'the theorem is proved'. The actual coding stage is analogous to writing up the proof in LaTeX, using sufficiently explicit mathematical language that a computer could interpret it.
It’s unusual for an open-source software project to have no-one who understands the overall picture, though common for some (perhaps the majority) of contributors to understand only a small part. The latter contributions are, in mathematical terms, simple corollaries to the main result or one of the main lemmas, often quite tangential to the main direction of the proof, which add a simple feature to the software to allow it to do something not anticipated or thought important by the original software designer.
6. Daniel Says:
Much has been correctly said about Open Source and its modus operandi, and i don't mean to be pedantic nor repetitive, but i'd like to add a few notches and maybe contextualize things a bit.
The creation of UNIX is really the “critical point” in this context: at that point, before the formal creation of ‘computer science’, what was done was to apply the already galvanized principles of science (including math) into doing computing. In this sense, Free Software was born, and lo and behold, it’s a strong reflection of the principles by which we all abide when doing science: collaboration (massive or not), free exchange of information, modularization (the break-down of a problem into its constituent parts), etc. So, firstly, i believe the appropriate metaphor here would be “Free Software”, rather than the more pragmatic “open source”, but this is just a minor point.
Having said that, i believe there's one fundamental difference between the way to prove a theorem and its analogue in software development, namely the writing of a piece of code: when you're developing a piece of software, it's very possible to "algorithm-ize" each and every step of the way; and although this may be similar to the construction of a proof, a technique for proving a certain theorem may not be so "modularizable", i.e., sometimes it's not possible to break a theorem down to its "dumb-bits", tiny-little pieces that require very little "thinking" to be proved — which is something very common in the development of FLOSS ("Free-Libre-Open-Source-Software"). So, along this line, here's my take on this problem:
(1) modularization: Even though, sociologically and ecologically speaking, a FLOSS project is usually not born with all of its "management" ready (meaning, the formal breakdown of all the little pieces involved), it's true that one of the core UNIX principles is that of maximum modularization: make one tool that does one job very well, and compound them to obtain a certain desired result. IMHO, this is only "perturbatively true" in math (or physics, for that matter) — meaning to say that, sometimes, it's not possible to break down an action into its constituent bits… it's very common for this to be possible only in hindsight (which is the very opposite of what's done with FLOSS projects). Further, sometimes in math it's not about "piping" the result of one tool into the input of another, but more like a "star-like" application, a multi-faceted combination of the use of these bits. And, in this sense, as Tim has already pointed out, one does need to have some sort of "global picture", otherwise it's all but impossible to know how to proceed.
(2) Focus on *collaboration*: this may look like a marginal point, but the inner workings of a piece of software do model its outcome in a very clear way. For instance, Wiki projects focus on a database-heavy approach, i.e., as far as wikis are concerned, it’s about accumulating data, filling the DB with information. However, from the point of view of software development, i believe that an approach like ‘git’ (i.e., a “version control”-like approach) is better suited: this approach focuses on the particular contributions /per se/, rather than on “volume of data gathered”. While it’s true that wikis do have a very basic “version control” system, this is not their main fulcrum; while for “version control” software, this is the very point — and this is the reason, e.g., ‘git’ was chosen to handle the projects of the Kernel, Gnome, X, etc; this way, massive collaboration can be better handled.
So, in summary, i'd say that if a person was able to break apart the proof of a theorem into completely dumb-proof bits, its massive collaboration would be optimized, since one could sit and blindly apply a few [highly modular] tools in order to 'spit' out the answer. However, for more intricate projects, the collaboration may need to be twofold: on the front of the proof itself, but also on the strategical front as well (which makes it a bit of a meta-collaboration).
7. Robert Says:
Others have mentioned modularity and plug-in structure before. But what I would like to emphasize is the specification of "interfaces": in order to contribute modules without having to understand the big picture, you would need to have a somewhat clear structure of what the finer grained structure is supposed to deliver. Such a minimal interface allows you to contribute plug-ins with only the requirement to match the interface.
In the tree model of progress, there is of course as well the danger of reinventing the wheel a million times: There should be a possibility to have cross connections for sub-problems that appear at different places but that are of a similar nature so they might as well be handled all at once. To discover those, of course, a more global point of view is needed.
8. Martin Schwarz Says:
With regards to the similarities of OSS and mathematical proof development, I think there is one crucial difference to observe: a successful OSS project is usually released very early, giving only an idea of where it is about to go and what might be useful, but where it is already sufficiently useful to attract some expert users that both make use of it and – as they are experts – have the capabilities to change, extend, and generalize it to solve their higher-level goals even better. Only after the expert group has sufficiently generalized and completed the product will non-expert users be able to pick it up and use it as a black box.
Translating to maths collaborations, I think this would translate to proving important special cases first, that are usable by quite a number of experts, attract their attention and make them generalize the special case to full generality. Or, it might map to the development of good conjectures with supporting evidence, which would allow one to prove various conditional results within some other context and which would attract experts of these fields motivated to remove the condition their higher-level proof depends on.
9. Jason Rute Says:
Hi Tim,
One comment I have to add is that your new suggestions on how to do collaborative math remind me of collaborative projects in formalized math where the goal is to find a machine-checkable formal proof of a math theorem. The projects–whether done individually or in a group–often involve splitting a much bigger project into smaller subgoals, which in the case of a collaborative project can be handed out to individuals who don't necessarily need to know the details of the larger proof.
One example is Tom Hales's Flyspeck project to give a formal proof of the Kepler conjecture using the theorem proving system HOL Light. It's hosted on Google Code under an open source license. It involves collaborators from across the world. And each can work on their part of the project without necessarily having to understand the other parts. (This is only one such project, but also possibly one of the most open ones. There are others using the Mizar, Coq, and Isabelle systems. See the December copy of the Notices for more details.)
There are however some big differences between your approach and that of formal math. Formal math projects usually start from an existing informal proof (although it may still need to be modified novelly to make it easier to proof formally). Also, it’s easier to hand out smaller projects since it usually involves giving someone a proof of lemma X and saying “go formalize this.” Also the time frame on these projects can be quite long because of the tediousness of formalizing even the most simple math.
Yet, there may still be some similarities that can be learned from. It seems in the Flyspeck case, that there are two types of participants. The first are those who are "experts" in the field who have helped Tom considerably in checking the correctness of the computer code (the reason his proof was so controversial in the first place). The others are the people newer to the scene (many undergrad or grad students) who formalize basic mathematical facts needed for the proof. To make the latter case go smoothly, Tom Hales spent (what I imagine was) considerable time making a clear outline of the facts needed for his proof and a sketch on how to prove them. This document is on the Google Code site and may have similarities to the wiki idea you have. Although, as I already mentioned, the goals are a bit different.
10. gowers Says:
There are many interesting points here, of which one, which has been made in various ways, strikes me as the main one: that in order to make small contributions to a large software project you need to have something that’s already fairly well developed (or at least, if this isn’t an absolute necessity then it certainly makes things far easier).
Just to make sure there is no misunderstanding, I’d like to spell out the analogy I am drawing, because I think I didn’t make it sufficiently clear. In the third paragraph I may have seemed to suggest that the analogy was between computer programs and mathematical proofs, with subroutines corresponding to lemmas, and so on. And indeed, I think there is a close analogy there. However, that is not the analogy I want to highlight. The important analogy from the Polymath point of view is between a big piece of software and a proof-discovery tree. The proposer of a Polymath project could get things started by writing down a number of thoughts and arranging them in a tree-like fashion (or perhaps some more general directed graph). That would be the analogue of an initial piece of software that did actually perform some interesting function.
Note that the initial Polymath document would not necessarily need to prove anything: the analogue of "perform some interesting function" would be "improve understanding of the main problem". Then the analogue of "discover that the existing software does not do what you want it to do" would be "find that the existing explanation is hard to understand". The analogue of "rewrite part of the code so that it works on my computer" would be "rewrite part of the proof-discovery tree so that I find it more transparent", and so on.
In other words, the hope would be to get the project to take off by having as its main objective not so much the solving of the problem (though of course that is what one wants to be the result of the exercise) as achieving a very complete understanding of it and all its attendant difficulties.
Another point I’d like to make very clear: if you have a tree-like structure then there is implicitly some relationship that is expected to hold between a node and its children. That relationship would not be logical dependence, but rather something like “is the motivation for”. Logical dependence from a child node to a parent node would be just one of many ways in which the parent node could be motivation for the child node.
If somebody were to produce a proof-discovery tree that reached a certain critical mass and had lots of “open” leaves (that is, leaves with well-defined questions that were still not answered in any remotely definitive way), then I think that some of the barriers to participation would be removed.
11. Gil Says:
I do not know what masses are expected when overall the mathematical community is rather small. And I also do not fully understand what "few" means. But I think Tim's statements are simply false.
There were a large number of people contributing, and there was an even larger number of people following in a way which made them potential contributors if something came their way. When people said that they followed the discussion and had some idea but then somebody presented it before they had a chance to, for me this is part of contributing. In fact, even people who followed it and thought about the issues and did not come up with something to say are contributors to this collective effort.
So I think that overall the large collaboration we have witnessed is superior in terms of potential in solving math problems and in terms of having a large number of people being part of a collective effort compared to the tree-fantasy in the post.
What is so good about this fantasy? Having a global view is a nice part of doing mathematics and the bottlenecks are often with technical matters and not with global views.
Maybe we should wait with this super-brain idea for a little while.
12. Hunter Says:
Re: the form of informal discussion
(The original) Wikipedia now has a 'discussion page' attached to each content page… It seems to me that following this model would provide a good place for what were formerly blog comments. The discussion pages should have RSS feeds automatically generated so collaborators could aggregate the feeds of parts of the problem they're interested in. If the project got large, perhaps someone would come forward to do a daily digest of a broad range of the discussions so the thing didn't get too balkanized.
13. gowers Says:
Gil, the super-brain idea is perhaps a little far-fetched, and risks taking some of the joy out of doing mathematics. It would be good only if there were some problems that could be solved in that way that could not be solved with smaller collaborations, and as you say it is far from clear that that is the case.
Nevertheless, I think that the change of emphasis I describe could be a good thing quite independently of whether it increased the number of participants by a factor of ten. If we took the wiki part of the process much more seriously, aiming to include, in a carefully organized fashion, all the promising thoughts that emerged in the blog conversation, then the result would be a document that ought to be much more useful to people who wanted to join in the collaboration. And it would also make it much easier for people who wanted to attempt the problem at a later date if it did not end up being solved by the Polymath collaboration.
14. Gil Says:
Dear Tim,
Indeed you may be right. I also tried to think about whether the idea of more emphasis on the wiki rather than blog threads is good or bad, but besides my personal hunches and preferences I simply don't know.
Regarding polymath1 I'd love to see the discussion continue in parallel to having a wiki proof in place and letting the participants digest it. Talking about Shelahfication or Gowersfication of the approach towards better bounds and discussing spin-off matters with the attention the project got, can be more useful (and more timely) than trying to tune the polymath mechanism itself, and I am sure many like me are eager to see what the simplest Szemerédi proof will look like, and perhaps teach it soon.
But the main point is this: polymath1 was a success in all respects. It should be completed (where, as usual, those who contributed the most will have the lion's share of the writing and finishing and explaining job). And you should be proud and happy; it is not GRH, nor is it a computationally-superior mode of doing mathematics (yet), but it is a very important problem, and a truly novel mode of collaboration, which attracts attention and enthusiasm.
15. Kristal Cantwell Says:
I think the idea of proof-discovery tree sounds interesting. It is worth being implemented to see what happens. I suspect that any growth between this and the next such project will be around a factor of two and that a series of projects might be needed to get greater growth. Your ultimate fantasy reminds me of the Chinese room argument. It might come up as an attempt at converting a large computer proof into something that could be understood by humans.
16. Joseph Myers Says:
(Writing as a mathematician turned (mainly open source) software developer.)
There is by now a substantial body of literature dealing with open source software development from a social sciences viewpoint. This might help answer some of the questions about how and why it works and what motivates people to take part. (I do not have any specific guide to or bibliography of this literature to recommend. Although Eric Raymond’s essays are certainly worth reading as one viewpoint on things.)
Then again, the literature is likely to give several different answers to some questions; different open source projects can be very different in how their development communities operate, so you can’t necessarily read general conclusions from a study of or experience in one community. And just as the communities differ, so too do the individual participants and how they participate. Some do make small local changes without a global picture, some make more wide-ranging changes or refactor code after subsequent changes have made it clearer how things should be structured; some projects have more scope for the local changes and others have greater need of the refactoring. Some discuss and agree on designs for more complicated changes before bringing them into effect. Some provide community leadership, official or otherwise. Some may focus on documentation, or on triage of bug reports. Some prefer to contribute pieces to larger longstanding projects and choose projects to contribute to accordingly; others prefer creating something new on their own and found new projects. (Sourceforge is littered with any number of duplicative and largely defunct projects from people who made their own instead of contributing to someone else’s project to do something similar.) It’s likely massively collaborative mathematical research also has room for the different styles of projects and contributors.
Studies have, for example, considered such areas as: the economics of open source software; how developers interact in open source development; the demographics of open source development (linking into a previous question regarding women contributing to Polymath, at least one study found that open source developers were 98-99% male compared to 70-80% in traditional software development, and there is a whole bibliography concerning that subject); motivations of open source developers; the extent to which the development is done by developers who are or are not paid to do it. (And where any such area is studied statistically, different results may and do arise depending on whether you count by number of developers, number of separate contributions or amount of code contributed.)
One day social scientists may be looking at Polymath collaborations in similar ways.
17. Michael Hudson Says:
Two points:
1) There is a growing belief in software development in general that the "top down" style of software/project organization where requirements are understood before the architecture is laid out before the programming begins is not a realistic or helpful model for how to get things done (keywords "agile", "lean", "scrum" etc, though it does seem to turn into a religion for some people). I have no idea what this means for collaborative mathematics — except that perhaps it's worth thinking about not over-emphasizing solving a particular problem and rather trying to develop tools that are generally useful.
2) The way open source projects keep track of where they are and what needs to be done is often in an issue tracker, not a wiki. Have you considered using trac or Bugzilla or similar software?
• Michael Nielsen Says:
Two points about issue tracking software:
(1) Issue trackers help a lot with modularizing problems (and localizing the scope of conversation), making it easier for more people to contribute in parallel, without being overwhelmed by the quantity of conversation.
(2) In a fashion similar to Tim’s suggestion of a proof-tree, issue trackers can be used to track dependencies in a hierarchical way, even showing dependency graphs.
(The link is to the Firefox “bugtracker”, but despite the name it’s used to handle issues other than bugs, including feature development – here’s an example.)
I don’t think existing issue tracking software is suitable for Polymath-like collaborations, but as a successful existing mode of organization it’s a useful model, and addresses some of the problems raised in Tim’s post.
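To make the dependency-tracking point a little more concrete, here is a minimal sketch in Python of how sub-problems and the dependencies between them might be recorded and queried. All the class and method names are purely illustrative inventions of mine, not the API of Bugzilla or any real tracker:

class Issue:
    def __init__(self, title):
        self.title = title
        self.depends_on = []   # issues that must be resolved before this one
        self.resolved = False

    def add_dependency(self, other):
        self.depends_on.append(other)

    def is_ready(self):
        # ready to work on once every dependency has been resolved
        return all(dep.resolved for dep in self.depends_on)

main = Issue("Prove the main theorem")
lemma = Issue("Prove the combinatorial lemma")
main.add_dependency(lemma)
print(main.is_ready())   # False
lemma.resolved = True
print(main.is_ready())   # True

A real tracker layers owners, discussion and status on top of this, but the dependency graph itself really is that simple.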
18. Rajiv Das Says:
You can explore a more collaborative UI, like a storyboard that makes it easy to add, rearrange and tag ideas as well as giving a big-picture view of what’s happening.
19. Boris Says:
I do not understand: Polymath has succeeded in solving a larger problem than it meant to solve. It did so quickly. Why then is the “small” scale of Polymath a failure? Why is it a failure to solve a problem having expended the efforts of fewer humans rather than more? Doubling the number of people will not only double the number of thoughts produced, but also increase the number of duplicate thoughts, and increase the communication overhead regardless of the communication infrastructure used. Is it a wise expenditure of human effort? Isn’t it better if instead there are two (or more) completely independent projects, so that less effort is wasted overall?
My personal feelings are strongly against doing mathematics without a “global picture”. Even if I concur that it is possible, I would not want to be the person at the bottom of this solution-tree, who labours on a subproblem whose relevance he hardly understands. The “ultimate fantasy [...] that it might be possible to solve a problem without anybody taking a global view” is my ultimate nightmare. The only reason I think about this or that piece of mathematics is that I want to understand what goes on. I do not care whether a theorem is true or false. If tomorrow there is a solution to some problem in which I am greatly interested, such as the diagonal Ramsey numbers or Roth in Z_3^n, but it is such that I have no realistic hope of understanding the solution, or the solution ‘cheats’ and provides no insight, it will make me sad rather than happy. I do not want to become a part of a “super-brain”; I want to understand the mathematics with my own feeble brain.
20. Michael Nielsen » On scaling up the Polymath project Says:
[...] Gowers has an interesting post on the problem of scaling up the Polymath project to involve more contributors. Here are a few [...]
21. gowers Says:
Boris, maybe it was a mistake to talk about the fantasy of lots of people rapidly solving a problem with nobody having a global picture. I would be fascinated if that were possible, but I would be no keener than you to spend my life making small contributions to lots of proofs that I didn’t really understand. However, I would like to repeat that one obtains a different and more palatable version of the fantasy if one interprets the word “local” differently: I think it could be mathematically rewarding to participate in a Polymath project and understand only a tiny part of the proof-discovery tree if that tiny part had the structure of a branch. Then one could be working on a small problem, but because one understood the entire path that led from the statement of the main problem to the small problem, one would have a clear motivation for the work, a clear sense of progress if one managed to solve it, a clear idea of where to go next, and so on. Indeed, this is very like usual research: one spends a lot of time thinking about local problems, and it is even possible to forget the big picture while one is doing so.
I’d also add that it seems very likely that some problems are by their nature much more parallelizable than others, and this could have a big effect on the optimal size of a collaboration. I think this is the right way of looking at what happened with DHJ: I certainly don’t regard that as a failure, and the size of the collaboration felt as though it worked extremely well. Perhaps it was even more or less optimal for that problem.
One last thing I wanted to say was to take up a point made by somebody in a comment on another post. (I apologize for not being able to remember exactly where the comment is and who made it. Gil, was it you?) They compared Polymath to what goes on in theoretical computer science, with posts corresponding to papers, which in that area are often short and made public very rapidly. There is a large subset of what goes on in theoretical computer science that could be regarded as a massive collaboration aimed at proving that P does not equal NP. One can have a rich and satisfying mathematical life contributing to this huge project, even without understanding every last detail of every promising approach to the problem. What I envisage is something like this but on a smaller scale and done online.
22. Jason Dyer Says:
Re: Boris’s comment.
For massive collaboration, I think a “cloud” metaphor is better than a “tree”. We need to get away from the notion of one portion of the work being “more important” than another. Yes, there are problems that contribute more to the larger mathematical picture, ideas that are more brilliant, but side problems can be interesting in themselves (and form their own spin-off projects that have nothing to do with the original — and I think that should be allowed). For example, even if the hyper-optimist conjecture is proven false, I think Fujimura’s problem is interesting enough to be studied in itself.
This ought to remove some of the depression of not being able to see all the parts of the cloud. This is how it is in normal mathematical development anyway.
What is also appealing to me about a “cloud” is that there could be many different separate projects, yet everything could be united with the right technical tools. Imagine two projects having the exact same side question, and another group being formed to solve that side question, which would “link” the other two projects.
23. nicolaennio Says:
The Polymath project is about finding a mathematical proof, but an important by-product is that collaborators gained knowledge and insights about combinatorics.
What I mean is that this kind of collaboration is a format for teaching mathematics, in the same way that an open source project makes you learn about software or trains you to produce clearer code.
Wouldn’t it be nice to look for “instructive theorems”? That is, theorems that should be solved by a group and that challenge people to “create” a common background that will be the real product of the collaboration?
24. Gil Says:
The tree-proof with nobody having a global view of what is going on is sort of a nice idea, even if a little far-fetched (not the idea itself but the position that it may lead to some sort of superior ability to prove theorems). We can try to experiment with it sometime. It seems related to automating the doing of mathematics, which is another intriguing idea.
I wouldn’t mind spending time doing some little work in a big project I do not understand. Actually I took part in a sort of similar project. It was a large tree-like collaboration aimed at refereeing Doron Zeilberger’s proof of the alternating sign matrices conjecture (monotone triangles). At some point Doron regarded his new refereeing tree-process, where nobody has a global understanding but everyone only locally checks some subtree of the proof, as even more important than the proof itself. You can find the paper and the refereeing story in the Electronic Journal of Combinatorics.
The entire proof was also checked single-handedly by David Bressoud.
One lesson I took from that project is that a more involved directed graph is better for various reasons than a tree. Maybe it also applies here.
(I think a lot of it is indeed centred around NP not equal to P, but only a small part of it is really aimed at proving that NP is not P.)
25. carnegie Says:
Are problem solving projects really suitable for mega-collaboration? A focus on problem-solving tends to involve lots of deep thought rather than lots of broad thought. Usual caveats about no clear dichotomy etc. apply.
Theory building projects would seem to me to be far easier to compartmentalize, to have a “benevolent dictator” (like Langlands) keeping track of the direction and management – and also for people doing “small” things to feel they are making genuine contributions whilst maintaining a sense of the bigger picture.
In many ways Gorenstein already accomplished this, just not quickly and without the massive communication boosts the internet allows. Atiyah did too, with his “influence number theory using tools from QFT” project which has come to fruition with (e.g.) the proof of the fundamental lemma or the calculation of Tamagawa numbers using Yang-Mills theory.
26. Polymath1: Success! « Combinatorics and more Says:
[...] related posts: Tim Gowers raised in this post interesting questions regarding the possibility of projects were the actual number of provers [...]
27. Timothy Chow Says:
It surprises me a little that, in this discussion, there haven’t been more frequent mentions of the classification of finite simple groups as an example of a polymath-type effort where nobody had a “global” view in Tim’s sense. As I understand it, no single individual has a complete grasp of the entire proof. The lack of this global view didn’t mean that working on individual pieces of it was not intellectually satisfying.
What strikes me as being a key feature shared by (a) large-scale software development, (b) Flyspeck-like formal theorem-proving, and (c) Zeilberger’s tree of referees is that there is a pretty clear idea at the outset of what needs to be done. To be sure, the details are not at all clear and can turn out to be quite different from what one might initially expect. However, one knows at the start that there are no fundamental obstacles to completing the project and that the difficulties are largely logistical. I suspect, then, that while Polymath might be extremely good at solving problems for which a lot of the necessary tools are already “out there,” it may not have any particular advantage when it comes to coming up with radically new ideas.
28. Kevembuangga Says:
Heh, heh, interesting…
It seems carnegie and Timothy Chow hold contradictory opinions.
Could any or both of you elaborate?
29. Timothy Chow Says:
carnegie and I seem to agree that Polymath is more naturally suited to “broad thought,” i.e., combinations of disparate but existing ideas, rather than “deep thought,” i.e., radically new ideas. Where we might disagree is whether “broad thought = theory building” and “deep thought = problem solving.” The dichotomies broad/deep and theory/problems seem orthogonal to me.
30. gowers Says:
Just to add my pennyworth, I think I do genuinely (if tentatively and conjecturally) disagree with you here Tim: my hope is that the broad thought that Polymath is suitable for can provide more quickly and efficiently a platform for the discovery of the radically new idea that solves the problem. For a long time I have tried to argue against the conception that unexpected new ideas come from a mysterious “flash of inspiration” or “stroke of genius”, and Polymath is (partly) an attempt to add to that argument. (I’m not trying to suggest that you have a simplistic attitude to this, and it may be that we don’t disagree after all, but your words sound on the surface as though they disagree with me to some extent.)
31. Gil Says:
Hmm, this is an interesting issue and my heart goes with Tim G. on this one. An example I like is the “probabilistic method”. This is a deep conceptual idea that had a profound effect in various fields, but not at the same time. Suppose that there was a secretary in the math department whose job was to send memos like: probability was used successfully in number theory; folks, let’s try it for combinatorics, or for algorithms, or for groups, or for Banach spaces, or for topological manifolds… So in principle, some flashes of inspiration could be replaced by routine efforts.
32. carnegie Says:
Timothy Chow: yes, on reflection that was not a valid dichotomy. But the point is that with ‘theory building’ you can achieve an awful lot by asking “what structures are involved” and then “what superstructures are these structures examples of”. If you find GL(n,C) playing a role in some theory, an “obvious next step” is to say “can we extend this to an arbitrary Lie group”.
If you have a property which applies to smooth manifolds an “obvious next step” is to ask “does this extend to orbifolds”.
“Obvious next steps” in physics include creating supersymmetric versions of a theory, or noncommutative versions of a theory.
If a property holds over the complex numbers, a theory-building number theorist will immediately ask “what about finite fields and p-adics”.
This attitude is far more amenable to breaking up tasks.
Gil says he agrees with Tim Gowers, but the example he takes perfectly demonstrates the theory builder’s attitude. There is no “problem” to solve, the issue is instead “use methods from field X to gain an understanding of field Y”.
I don’t agree with Tim Gowers. I believe that “moments of inspiration” are often genuinely enlightening. Many mathematicians have vivid memories of such moments.
Of course, the chances for such moments to occur increase when you are exposed to and moderately engaged with many different people talking about different things. But the actual process leading up to a flash of inspiration is opaque. In that sense, Polymath could prove incredibly valuable from the perspective of mathematical sociology, not because it is massively multiplayer, but because of the accompanying wiki-like documentation of the lead-up to the flash.
If I kept notebooks of all my ideas, including the stupid ones, and occasionally wondered upon achieving some milestone: how did I get here, and what was the roadblock, and how can I make sure I incorporate those ideas into my future thinking, I suspect I would become a far better mathematician.
33. Timothy Chow Says:
Let me clarify my view on “radically new ideas” or “flashes of insight” versus Polymath. I don’t mean to endorse the modernist ideal of the solo genius having a brilliant idea ex nihilo. I agree that ideas that seem to have come out of nowhere could (in principle at least, if we had a complete record) usually be traced back to “explainable” origins, and that the inputs from multiple people could expedite these flashes. In a recent interview, Ingrid Daubechies said that she often had recollections of “sudden flashes”; however, when she went back to her written notes (she keeps quite detailed notes of her ideas, including false starts), she found that her memory was faulty: in point of fact, the elements of those sudden flashes could be discerned in retrospect among the unsuccessful and abandoned attempts. Wiles’s account in his famous paper on Fermat’s Last Theorem, if you read it carefully, also shows that his sudden flash was not ex nihilo, but benefited from earlier abandoned ideas and the input of others.
However, I think it is useful to distinguish between Polymath on the one hand, and the entire mathematical community as a whole on the other hand. You could, perhaps, argue that the community of all mathematicians, past and present, comprise one giant Polymath. This point of view strikes me as not being very useful. For me, Polymath is an entity that works on a certain project for a certain amount of time. The exact boundaries may be fuzzy, but they stay within certain limits.
The question, then, is whether major breakthroughs can be significantly expedited by a Polymath-type collaboration. It’s possible, but my instinct is that the nature of a major breakthrough is that its genesis can’t be “forced” just by having a much larger collaboration than usual. For the breakthroughs to occur, we of course need a healthily functioning mathematical research community that shares its results and builds on previous work on so on. This creates a giant pot of simmering ideas, out of which we hope that good things come. Beyond this, though, I think it’s very hard to control when the perfect confluence of events will occur that generates a big leap. It might occur in Polymath’s mind or in the mind of an individual who happens not to be collaborating with anyone at that moment (though of course he or she will have learnt a great deal from others prior to that moment).
Where Polymath has a definite advantage over the individual is in projects where there is already a tolerably decent map of the territory to be explored but the territory is too large or requires too many different kinds of talents to be covered by one individual. But for problems where we currently have no idea how to proceed, I think we just have to muddle ahead as an entire community and hope that we’ll eventually get close at some point.
34. Klas Markström Says:
One thing that the polymath seems to be efficient at is eliminating flawed approaches to solving a problem. When someone proposes an approach there is a large number of people with different skills who can construct counterexamples which the proposer might not have found so quickly.
In a way this gives a kind of natural-selection effect which both guides the efforts towards those approaches which have a chance of working and helps build an understanding of what a working approach must be able to cope with.
35. gowers Says:
Tim, I think we probably do disagree, but not all that much, and with much less than full certainty on both sides. My instinct tells me something like this. Obviously no method can “force” a breakthrough, for the trivial reason that there may simply not exist an argument that is remotely within today’s technology. So the question is whether, if Polymath works on a problem where a very unexpected argument does happen to exist, it is potentially a more efficient method for finding that argument.
My instinct tells me that it is. One reason is the one that Klas Markström gives: Polymath can judge approaches quickly, which makes it more feasible to throw out slightly wild ideas in the hope that one of them will work. (As an individual, one could do the same, but sifting out the one good idea from the 99 bad ones would be a much slower process.) A second reason is that, as Polymath1 showed, the initial exploration of fairly standard ideas can be done very quickly, so one can arrive sooner at the point where it is much clearer what needs to be done and what the real gap is.
I’m still not certain that I’m disagreeing with you, because you may perhaps be talking just about rather extreme cases of problems like the irrationality of \gamma, where nobody has the slightest idea even where to start. But for that kind of problem I think the history of maths shows that progress, if it occurs, tends to occur when other parts of the subject develop to the point where the problem changes from being completely out of reach to being within reach but difficult. One might cite Fermat’s last theorem as an example of this. What it doesn’t show is that we have to wait for a genius to have an unexpected idea in isolation. Of course, it often has taken a brilliant idea, and the connection between Fermat and Shimura-Taniyama-Weil was such an idea, but it’s not obvious that an appropriate Polymath couldn’t have had that insight more quickly. (E.g., someone might have just suggested completely unseriously that a counterexample to Fermat could lead to an interesting elliptic curve, someone else might have picked up on that idea, etc., which, in slow motion, is sort of what happened.)
36. Timothy Chow Says:
The irrationality of \gamma is certainly one problem of the kind I was thinking about. Consider the irrationality of \zeta(3) as another example. While it’s possible that Polymath might have beaten Apery to the punch had it existed then, I think it’s far from clear. The problem with \zeta(3) wasn’t that it was “technologically” out of reach. I think it was just that almost everyone thought it was an unapproachable problem—or at least that it wasn’t worth working on.
For Polymath to work on something, a certain number of people have to all be convinced that it’s worth their while to think about the problem. An unpopular topic, or an approach that everyone thinks is crazy, may actually have *less* chance of getting traction with Polymath than with an individual. There are plenty of instances where someone decides that he or she doesn’t give a hoot about fashion, and insists on working doggedly in some direction that everyone else thinks is hopelessly misguided, and is eventually vindicated.
Big breakthroughs often have a strong psychological component to them. Looking at the idea after the fact, one might be able to argue that many other mathematicians could have made the same breakthrough, but just didn’t have the courage to believe that such a bizarre approach could possibly work. Groups and individuals have different psychological characteristics, and I believe it is a mistake to think that the group will always outperform the individual. The individual has the advantage of just needing persistence and faith and doesn’t have to worry about selling the idea to others before the final results are obtained.
37. Jason Dyer Says:
I think we’ll see a proof of the normality of \pi before a proof of the irrationality of \gamma.
(For fun, look up Kaida Shi on Arxiv, who has apparently not only proven that gamma is irrational, but also the Riemann Hypothesis, the Goldbach conjecture, and the twin primes conjecture.)
I’m with the “flash of inspiration is a myth” crowd, but that’s a very good point about the social implications of crazy ideas. Dr. Gowers tried to alleviate this to an extent with his rules, but I did sense people less willing to get out on a limb in posts than they were doing in their heads. Is there anything else we can do to encourage risk-taking ideas?
(On a side note, could someone involved with the project read over this post of mine intended for a general audience? Feedback has been positive from outside visitors but I’d also like an opinion from someone who can tell based on the overall picture if anything should be changed.)
38. Peter Boothe Says:
Although there is a 1:1 correspondence between proofs and programs, there are important social differences between them that make your metaphor a little worrying. In particular, a program can be successful without being bug-free. Linux has, and has always had, many bugs. Indeed, it is quite likely that all programs of sufficient size and complexity have some bugs.
A program with bugs can still be very useful, but it seems like a slightly-incorrect proof is, at best, only a little better than just being wrong. If this is true, then the open source “bazaar” model doesn’t really work without a consistent and well-defined global view of the problem among all participants.
The other big problem with the metaphor is that most large programs start out as working small programs and grow from there. Most are not designed as big programs solving a big problem. Is it possible to “grow” a small (correct) proof into a larger (correct) proof of some larger truth? It seems like this is not how mathematics is usually practiced, but it would be the growth process that most closely mirrors the open source process.
39. David Rutter Says:
I like the way the “proof-discovery tree” very much resembles a software dependency tree. Since dependencies are checked and dependents automatically compiled in, it seems possible that this idea could easily be implemented in web software. In particular, it could be mechanically organized so that when a proof-discovery tree is complete, the software has already compiled the proof in a single page. All that would need to be done to make a paper would be to clarify the proof output (by, for instance, changing duplicate variable names and adding more coherent English transitions between proof steps) in ways that software would find difficult.
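As a purely hypothetical sketch (every name below is invented for illustration and not taken from any existing system), a proof-discovery tree whose finished branches “compile” into a single linear document might be represented roughly like this in Python:

class ProofNode:
    def __init__(self, statement, argument=""):
        self.statement = statement   # the claim this node establishes
        self.argument = argument     # prose proving it, given the children
        self.children = []           # sub-lemmas this argument relies on

    def complete(self):
        # complete once an argument is written and every sub-lemma is complete
        return bool(self.argument) and all(c.complete() for c in self.children)

    def compile(self):
        # depth-first: each sub-lemma is emitted before the result that uses it
        parts = [c.compile() for c in self.children]
        parts.append(self.statement + "\n" + self.argument)
        return "\n\n".join(parts)

The compile step would then emit every sub-lemma before the result that uses it; tidying up variable names and adding coherent transitions between steps would, as suggested above, remain a human job.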
40. Henry Says:
I think the problem is about accessibility. When users have read through an excess of beta material, they lose a great amount of time, perhaps creating a sense of chaos. A solution is to index the sites, and customise a search engine for each index. It is the dogma of Math Harbour, small is beautiful.
Before we have the nice Google Wave, perhaps, the CSEs at Mathematics would be useful to you. Just click the right hand corner for unsolved problems. Lectures, proofs and examples can be found in the middle.
Happy solving.
41. Dan Dutrow Says:
I wonder if the collaboration activity could be viewed in multiple ways, catering to how the individual wants to digest the information.
For example, integrating something like Google Wave into the mix could provide the threaded discussions necessary to follow down different paths. Meanwhile, the same information could be exported to a blog in serial/linear fashion so thoughts could be viewed chronologically.
Furthermore, integrating chat or micro-blogging tools into the mix would allow for quicker, simpler, contributions. (I noticed that the average length of posts increased beyond what was stated in the “rules.”) That can be done through Google Wave, XMPP (Jabber), or twitter.
Another idea would be to display all posts chronologically, but color them by thread, and draw colored lines through the discussion. That would allow the eye to quickly filter through the information – focused on one thread, but collecting inputs from other threads through osmosis.
At some point, threads could split and join (analogous to branching and merging of software). The issue there, of course, is that threads won’t join cleanly. Instead, only some aspects of one thread will relate to the other. Thus, it may be of value to share posts between threads. I don’t know of any tools that allow you to do this, besides version control systems like SVN, but that’s not really web-based.
If you represent all posts as individual nodes, there must be some kind of network where interconnections could be made. There is value not only in the node, but also the link, because the connection of independent thoughts provides much insight into the problem. However, it might be hard to digest such a diagram in traditional web formats.
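For what it’s worth, a very small Python sketch of the posts-as-nodes idea might look like this (all the names are made up for illustration; no existing tool is implied):

posts = {}      # post id -> text of the post
threads = {}    # post id -> thread label (could drive the colouring)
links = set()   # unordered pairs of related posts, possibly across threads

def add_post(post_id, text, thread):
    posts[post_id] = text
    threads[post_id] = thread

def link(a, b):
    # linking posts from different threads is how threads "split" and "join"
    links.add(frozenset((a, b)))

add_post(1, "An approach to the main problem", "A")
add_post(2, "A side question that also arises elsewhere", "B")
link(1, 2)

The interesting information lives as much in the links set as in the posts themselves, which is exactly the point about the value of the connections.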
42. Massively Collaborative Mathematics: lessons from polymath1 « Hypios – Thinking Says:
[...] maybe in collaborative mathematics, the final document should not be thought of as a proof, but a proof-discovery tree: “The basic idea would be to produce an online document that thoroughly investigated all [...]
How can you tell how old a deer is?
When you have a fawn (a baby deer), the deer is tawny colored with white spots. As it ages, it loses those white spots. Size can help determine if a deer is young or old, and antlers help a lot more with bucks, but only until the antlers stop growing. Even then, it’s hard to get an exact age in months or years. Teeth tend to be the best indicator if you can get that close (there’s a large chart on it in the first link below).
Wednesday, November 26, 2008
Today...So Far
In honor of my husband's aunts - who both read and enjoy this blog - and inspired by Lecia's blog, I decided that today I would present our day so far. The Princess didn't have school today, so it's been different - and fun - and...well... you'll see!
First, the Princess and I decided to make pumpkin chocolate chip muffins for breakfast tomorrow. Yum. Unfortunately, I kept my back turned to the Pixie for just a hair too long. (Then again, she's not normally given to mischief.) I happened to glance around the corner when we had the batter all mixed up....and discovered that she had been painting on my couch.
That's red paint on my beautiful couch. Thank goodness it's washable paint, and that the couch has a very cleanable microfiber cover.
Crisis cleared up, the girls got to taste the batter.
Then they did each other's hair.
Isn't she cute!
This one too!
I have such girley-girls!
We then went to Michaels - a trip that I've been promising the Princess for a while. She thinks it's better than a toy store! (Smart me to train her that way.....) I took the camera, but the Pixie just wasn't happy.
When we got home, I gave them hot dogs - which is a very rare treat in this house.
They loved them!
The Pixie is in bed now, and my Princess is happily making Thanksgiving crafts. She's a very creative sort, and will probably be here for the rest of the afternoon.
And now I'm off to the rather prosaic, not-so-fun to-do list. If I'm lucky, I'll have a bit of time to knit...
Bonnie said...
Muffin batter, hairdos, Michael's - now that's a nice day! Hope you have a very happy Thanksgiving.
A Day That Is Dessert said...
These pictures are so sweet!! I must say, your daughter is very mischievous!! First the keys, then painting on the couch....if we lived in the same town and she came over for a playdate I'd be nervous! :)
Love especially the pictures of doing hair. Enjoy those muffins and your Thanksgiving! xoxo
Sunday, 31 March 2013
G L Wilson
Saturday, 30 March 2013
Steampunk-themed relic Thinline Tele-style guitar
The craze for steampunk-themed guitars is getting almost as popular as that for faux "relic" guitars. Some would argue that it is just as silly. This steampunk Thinline Tele-style guitar was originally a Czechoslovakian-made Jolana Vikomt and has now been steampunkerized in the soon-to-be time-honoured fashion of bunging a few cogs and gauges on it, maybe also some random electronic components and some tubing.
G L Wilson
Friday, 29 March 2013
G L Wilson
James Trussart Steel Top (with alligator finish)
Everybody loves a James Trussart guitar, and this Steel Top is one of his finest models! Everything in it is the most pleasant combination of classicism and innovation, and though it almost looks like a jewel, it still feels that it can be played for real on stage.
I think that metal-front thinline guitars have a really interesting sound; I wonder why nobody builds some on a big scale - and with an affordable price (the Trussart is handmade and quite complex, therefore pricey).
Bertram D
Wednesday, 27 March 2013
Guitar review: Fender Pawn Shop Series Bass VI
Fender's Pawn Shop series are - in their own words - "guitars that never were but should have been". The concept seems to be that these designs are influenced by weird old modified guitars that you might find in a pawn shop. Hence we see different pickup combinations and trem systems and variations on design that you wouldn't expect to see on stock Fender guitars. In some cases, the Fender Pawn Shop guitars go beyond such minor modifications and we see alterations to actual body designs, e.g. the Mustang Special has a re-designed chunkier body shape, the '72 is essentially a Thinline Strat, and the Offset Special is reminiscent of a Thinline Strat with an offset waist and equipped with a Jaguar/Jazzmaster tremolo.
The model that I was really interested in, however, was the Fender Pawn Shop Bass VI. I've been hankering after a Bass VI for years. I used to have a Hohner Hollywood (so-called) Bass VI but its 25.5" scale length meant that you couldn't really tune it down a full octave below a regular guitar, and so I had to make do with it being tuned down to G. That's still a whole lot lower than most baritone guitars which are often tuned to B or C - or A if you're lucky, but it still wasn't "bass" enough and that always really bugged me.
The Pawn Shop Bass VI, like the original, has a 30" scale and so has no such problems with being tuned to E. It differs from the original Fender Bass VI in a number of minor details: most notably it has a JZHB pickup in the bridge position - this does look a lot like a large P90 pickup but my understanding is that it is a double coil version of the Jazzmaster pickup. In the neck and middle positions are a pair of Jaguar pickups. These are all controlled by a Strat-like 5-way switch. It's a disappointment that Fender didn't use the individual pickup on/off switches plus "strangle" switch on a chrome plate as on the classic Bass VI. This doesn't seem so much like a quirky modification, rather a brazen cost-cutting exercise, which is a pity because I think Fender missed a trick here.
There's another chrome plate missing beneath the volume and tone controls and output jack. Here on the Pawn Shop Bass VI the pickguard has been extended to include this area. Other than that, it's a pretty faithful reproduction of the original. The neck is unbound, but then not all the "originals" had bound necks anyway. Interestingly the headstock bears the legend "Fender VI" (as used on the very first Bass VIs rather than the later "Fender Bass VI") with "Electric Bass Guitar" in smaller lettering beneath.
Part of the reason that I wanted a Bass VI was so as to encourage me to play more melodic parts and lead lines, as so often in a band situation I find myself ending up playing rhythm guitar which - although I am happy to do - I want my share of the limelight too. I also have an album to record and I was looking for new "voices" to use in my music, and decided that a Bass VI could be the very thing.
So, how does it play and sound? Well, the first thing that struck me was that it packs a punch as a bass. Some uninformed people insist on calling the Fender Bass VI a "baritone guitar". Believe me, it is nothing of the sort; indeed I'd go so far as to say that to call it a baritone guitar is an insult. It's as much of a bass as a Fender Precision or a Jazz Bass. Other people say that it's a "bass for guitarists", but I don't think this is so much of an insult - there's a lot of truth in the statement. It has six strings, it is tuned like a guitar only an octave lower, and the string spacing is like that of a guitar. It even has a tremolo (but I'll talk about that some more later). A bassist friend of mine tried out the Fender Bass VI and commented, "I'm not sure how I'm supposed to play it. Do I play it as a bass or as a guitar?"
For the most part I do play it like a guitar, although I expect I'll end up using it for bass lines too. The one thing that took a little getting used to was the extra stretch on a 30" scale. It didn't feel too bad, but when I changed back to playing a regular scale guitar again, that felt really awkward. However, having been changing back and forth between guitar and Bass VI for a month now I have gotten used to the differences in scale length.
It is possible to play guitar chords, but open chords can sound muddy; you wouldn't really want to strum on a Bass VI, but careful arpeggiation of chords can be quite effective. Barre chords are possible too - they sound better the higher up the neck you travel. I've even played the Bass VI using a capo on certain songs so as to put me into baritone guitar range - that has proven to be quite effective.
Initially I was underwhelmed by the JZHB pickup in the bridge position. My initial reaction was that Fender had put it there just for the sake of making it different so as to justify the "Pawn Shop" label. I was initially using my guitar amp, but after a band performance during which the poor amp struggled, I got myself a most excellent bass amp from Roland's Cube series which complements the Bass VI beautifully. Prior to that I only ever used the bridge JZHB pickup in conjunction with the middle Jaguar pickup through the guitar amp because I thought that on its own it sounded thin and weedy; I have since found that through a proper bass amp it gives me a really usable treble sound that cuts through nicely when playing in a band situation that includes another bass player.
When playing with another bass player, I've been very conscious that I need to dial in a different bass sound so that we don't clash or else end up in one big bassy muddle. Using the JZHB pickup helps cut through, but also choice of effects helps too - and thankfully my Roland Bass Cube has a whole load on-board.
The Bass VI is an instrument crying out to be played with a chorus effect. It really does bring the sound to life. I've also found it best to scoop the middle out on the EQ on the amp, but obviously one can experiment and find out what works best for them.
I love also the sound of the Jaguar pickups, although I think perhaps I would use them more when recording rather than when in a live situation when I want my sound to be distinct from that of the (4-string) bass player in the band.
Now the tremolo... it's the same kind of Fender trem as featured on their Jaguar and Jazzmaster guitars. That, for me, should be a good thing as I've never got on with Strat-style tremolos. However, with this particular instrument I cannot get the trem arm to snap into place. I really don't want to force it into position and possibly break something. I've tried asking Fender in the UK what I might be doing wrong and so far have not received any response. I did wonder if the shop may have given me the wrong arm, but seeing as it was a shop specialising in bass guitars that is rather unlikely (the only other Bass VI they had was the Eastwood Sidejack and I remember that had its trem arm installed - also that's another Fender Jaguar/Jazzmaster type system). [EDIT - 16 April 2013: Today I received a parcel from Fender containing a threaded tremolo arm for Bass VI. It fits and operates perfectly. Yes, it does seem that the shop gave me the wrong arm in the first place. A big "thank you" to Michael at Fender Customer Relations for sorting this out for me.]
I've seen the opinion expressed elsewhere that the tremolo on the Bass VI is no great shakes anyway and is more of a gimmick than a usable trem, but I would like the opportunity to find out for myself. (Earlier I mentioned the Hohner Bass VI I used to own. That had a Wilkinson trem that was very effective in use).
Having seen comments on various forums, it seems that some people are buying these Pawn Shop Bass VIs and modifying them to look and function like the originals, which is an irony seeing as the whole point of the Pawn Shop series was to present ready "modified" guitars. I think Fender should have done a straight reissue of the Bass VI rather than a Pawn Shop version. A little bit of chrome shouldn't hike the price up very much.
I know it sounds like I have a few niggles with the Fender Pawn Shop Bass VI, but despite the unnecessary Pawn Shop accoutrements, despite the mix-up with the trem on my example, I do love my Bass VI dearly. I feel as if it might be the perfect instrument for me and can honestly say I haven't been as excited about a guitar purchase as much since I bought my first ever Fender Stratocaster circa 1987.
The Bass VI is a Made in Mexico Fender, which is something a few years back might have caused some to view it with concern, but Fender Mexico have really got their act together now and this is a quality-made and beautifully-finished instrument. Retails in the UK at prices between £601 and £755 so shop around. Colours are black with tortoiseshell pickguard (i.e. mine), sunburst with tortoiseshell pickguard, and candy apple red with matching headstock and white pickguard.
Just DON'T call it a baritone guitar, OK?
G L Wilson
Tuesday, 26 March 2013
Kay "Old Kraftsman" Sizzler vintage electric guitar
Currently listed on eBay with a Buy It Now price of $1,199.99.
G L Wilson
Sunday, 24 March 2013
DeArmond M55
Strangely it is difficult to find information about this DeArmond M55 - though it's an internet era guitar released in the 2000s! Even though it's been discontinued and DeArmond don't produce guitars anymore, Fender (who owns the DeArmond brand) could have kept a website about these guitars!
Anyway, the design of this guitar is based on a Guild model - DeArmond were just providing the pickups and the brand name - and as you can notice, the original humbucker has been replaced by a Filtertron copy. I like the absence of a stoptail and the tune-o-matic / string-through-body combination; it always feels to me that it's the best way to transmit the strings' vibration to the body.
Bertram D
Saturday, 23 March 2013
Vox Phantom XII assembled from 1960s Italian "new old stock" parts
Here's another for our southpaw friends, a Vox Phantom XII. The Vox Phantom is one of those guitar designs that you either love or hate. I'm in the former camp. Anyway, I'll let the eBay seller explain about this particular example:
Italian made Vox Phantom 12 string left handed guitar... BRAND NEW... custom made from matured, original unused parts.
This is an example of a limited supply of old Vox and Eko guitars that have only recently been assembled from stock that has been stored, slowly maturing for many years. The body, neck, scratchplate and control knobs are all vintage original Vox old parts... the bodies were sprayed up and stored in the mid to late 1980s... Modern Vox type pickups have been added with a set of repro machineheads. The electric wiring is brand new. What you get is a vintage guitar BRAND NEW!!!
Finished in Black with 3 single coil pickups, chrome hardware, rosewood neck.
This guitar is a beauty!!!
Why buy a new Vox made in Korea for over £1200 or an American rip off when you can have an Italian LEGEND for £865!!!?
I'm guessing that the seller, elitepianos, is one and the same as - or else closely allied to - Brandoni Guitars.
The neck would appear to be a right-handed example - note in the above photo that the Vox logo appears to be upside-down for a left-handed player. I don't suppose this will pose any real problems, but I hope they made some concession to the player with regards to the side dots on the neck - they're not a lot of use on the wrong side.
Currently listed on eBay UK for £865.
G L Wilson
Friday, 22 March 2013
1950s plastic toy skiffle guitar with auto-chord attachment
I love these old 1950s and 1960s plastic toy guitars; sometimes they can be just as fascinating as real guitars. This particular 1950s skiffle guitar has an idiosyncratic auto-chord attachment on the neck so the player doesn't even have to learn chords - just press a button. That is, so long as they don't want to play any chords other than G7, E7, C, A7, D7, G.
Currently being auctioned on eBay UK with a low starting price.
G L Wilson
Wednesday, 20 March 2013
Fender Hellecaster
Look twice, this looks like a Stratocaster, but it's a Fender Hellecaster, a limited edition John Jorgenson signature model released in 1997.
It has split pickups similar to those used on the Electric XII and later on G&L guitars, an inverted big headstock (that looks extremely cool and should be standard on strats), and nice sparkle finish.
So with a little effort, one can make a strat look good after all!
Bertram D
Tuesday, 19 March 2013
Sekova Grecian hollowbody electric guitar: six pickup stereo wonder?
Here's another one you don't see come up for sale very often. It's a circa-1968 Sekova Grecian, made in Japan (although the exact provenance is not known) and imported into the USA by U.S. Musical Merchandise of New York City.
Of course what makes the guitar so spectacularly bizarre and/or wonderful is that it sports SIX pickups, albeit six individual string pickups, i.e. one for each string. You might think that makes it effectively a one-pickup guitar, but something else is going on here; just witness the number of switches and volume and tone pots. The Grecian is actually wired for stereo with the signals from the 3 bass strings and the 3 treble strings being separated. It's a nice idea, but quite crude compared to latter-day stereo guitar innovations, e.g. the Kramer Ripley guitar or the Gittler guitar where the output from individual strings can be panned wherever desired in the stereo spectrum.
But how does it sound? In an article for My Rare Guitars, Michael Wright ("The Different Strummer") commented:
As cool as it looks, this Grecian formula sucks big time. The stereo idea wasn’t terrible, but you always had to have two amps to take advantage of it. Plus, the coils are just not big enough to crank out much sound and, like so many Japanese guitars from this era, the wiring is extremely thin and the pots are crummy, so you’re lucky if the thing plays.
The example pictured about is currently being auctioned on eBay with a starting price of $295. Thanks to Steve C for bringing this guitar to my attention.
G L Wilson
Monday, 18 March 2013
Fender Japan Aerodyne Jazz Bass rare left-handed version
Fender's Aerodyne series of guitars and basses were another - rather elegant - variation on a theme courtesy of the ever resourceful minds at Fender Japan, with bodies that feature a gently arched top and binding. This particular Fender Aerodyne Jazz Bass is a rare left-handed example. Note that the bass features both Precision and Jazz Bass pickups.
Currently listed on eBay with a Buy It Now price of $999.99.
G L Wilson
Saturday, 16 March 2013
Stonehenge 2 guitar with tubular metal body briefly seen on eBay
This Stonehenge 2 guitar was listed as a new item on eBay yesterday, but today the auction has been withdrawn. Whether the Greek seller has decided not to sell or else was offered a deal outside of eBay, I cannot say. What the seller does tell us is that:
In 1984, the guitar maker Alfredo Bugari of Castelfidardo made a novel guitar with this name with a tubular metal body. The model name derives from his theories about the sound properties of the ancient English stone circle.
That description raises more questions for me than it answers; I'd like to know more about Bugari's Stonehenge theory, but then that would be getting away from guitars, which is what this blog is all about.
The seller also mentions that it's one of only 150 such guitars that Bugari built. Here on Guitarz we looked at a Stonehenge 3 guitar back in 2009. There doesn't appear to be a lot of difference between the "2" and "3" models, from what I can tell, apart from different pickups and a completely different bridge. The bridge on the "2" that we are looking at here appears to have fine tuners - I thought it might be a trem for a few moments and then wondered where the mechanism would be; note the view from the back - the bridge has no material behind it - it makes you wonder how this affects the sound.
This guitar WAS listed with a starting price of £300. What happened to it after the auction was pulled, I can't say, but as it's such a rare and unusual piece I thought I'd share it with you. (I normally like to share eBay auctions that you CAN bid on).
G L Wilson
Friday, 15 March 2013
slightly pimped up Ibanez Talman TC 220
Ibanez Talmans are recurrently featured on this blog - I just like this simple and classic guitar and its many variations - I can't believe that it's been discontinued when so many ugly, useless or cloned guitars are sold every day...
The Talman TC 220 normally sports uncovered humbuckers; the previous owner of this one just added chrome covers and rings - actually I did that on my Epiphone Dot Studio and not only does this look good, but to me it also improves the sound somehow...
I love the Talman control plate / jack output - I wish I could buy one somewhere for a project...
Bertram D
Thursday, 14 March 2013
A Fender paisley Tele with a difference - it's GREEN!
Although this is a genuine Fender Telecaster, it does feature a custom refinish. You're not going to find an off-the-shelf Tele in green paisley like this one. The seller calls it "a 7-UP Paisley 'JUDGE' version... direct from Ralph and Sandy's custom shop" (as if I'm supposed to know what that all means) and goes on to list the guitar's specs: Seymour Duncan Tele Hot Neck pickup, Seymour Duncan Tapped Tele bridge pickup, reverse control plate, 5-way switching, series and parallel modes, Vitamin Q Paper in Oil capacitor, No Load Tone control, Electro socket jack holder, Switchcraft jack, Premium CTS controls...
It does indeed sound - and look - like a very fine guitar. And so it should be with an eBay Buy It Now price of $2,299.
G L Wilson
Wednesday, 13 March 2013
One-off 1980s Route 66 guitar from Wilkes Guitars
I'll let route66don, the eBay seller of this bizarrely-shaped Route 66 guitar, tell you all about it:
Unique one-of-a-kind route 66 guitar made as a showpiece for us by Doug Wilkes of Wilkes Guitars of Stoke on Trent, UK in the mid 1980s. Built to the highest spec it has a maple body and neck with an unbound phenolic resin fingerboard with side dots, EMG active vintage style strat pickups, Leo Quan Badass bridge/tailpiece and vintage Kluson Sealfast nickel banjo style machine heads. It sounds and plays great although the shape is of course somewhat clumsy to hold. Used but in very good condition. Paintwork has 'yellowed' a little and it has 2 tiny dings on the top of the '66' horns. Comes complete with a pro quality foam lined aluminium flight case by T&D cases of Hull. For sale due to my retirement at £1000.
I'm guessing his band was called "Route 66". It's just a hunch I have. [Edit: it looks more likely that "Route 66" was the name of his music shop - see the comments below.] And when he says it's "somewhat clumsy to hold" I think he means it's not very ergonomic. The banjo-style machine heads are a nice touch though.
Doug Wilkes, of course, is a luthier never afraid to experiment; regular readers might recall Wilkes' "The Answer" guitar with sliding pickups.
The Route 66 guitar is currently being offered for sale on eBay UK with a starting bid of £1,000.
Of course we've previously looked at another completely different Route 66 guitar. Curiously, it is also UK-made.
G L Wilson
Tuesday, 12 March 2013
Grab your sunglasses: 1955 Harmony Stella Sundale acoustic guitar
G L Wilson
Sunday, 10 March 2013
Matsumoku-made Aria Pro II Titan Artist series TA-30 semi
...speaking of Japanese guitars, here is one that is much more affordable than the guitar in the previous post, in fact I would recommend this as my eBay Buy of the Month because I have been using one of these very same guitars as my main instrument for a while now and think that it is a superb guitar. It's one of my favourite guitars ever and I don't make that claim lightly - I was trying to work out the other day how many guitars I have owned and it must be at least 60.
Anyway, this is a 1980s-era Aria Pro II TA-30, but please do not confuse these with the later Korean-made TA-40 guitars which also had a bolt-on neck. The TA-30 is a quality Japanese guitar made in the now legendary Matsumoku factory and is far superior in construction, materials, and in playability. (It absolutely nails that Creedence Clearwater Revival sound, if that frame of reference is any use to you!) The Korean-made TA-40 is not a bad guitar but the cheaper laminates that it is made from mean that it has a tendency to sound rather boxy.
The Japanese and Korean TAs do look very similar but there are various little details that help you tell them apart. For example, the Japanese TAs have a much slimmer body if viewed sideways-on - it's about a centimetre difference. Also, the f-holes are much more slender and ornate, whereas the Korean examples have more of a "cookie cutter" outline. The Japanese examples are often fitted with those tulip-shaped machine heads too.
Contrary to popular belief and numerous eBay listings for these guitars, both Japanese and Korean, the TA-30 and TA-40 are not "335-style". For starters the shape is slimmer and nowhere near as rounded as the Gibson guitar, but more tellingly they have fully hollow bodies and not a solid centre section as on the 335 (that's what makes it a 335).
Currently listed on eBay UK with a starting price of £150. That is an absolute bargain for a Matsumoku-made guitar of this quality. If I was in need of a back-up for my main guitar I'd certainly bid but right now I have other priorities.
G L Wilson
Saturday, 9 March 2013
2001 ESP KH-2 Kirk Hammett Signature relic guitar, #18 of 100
Legions of Metallica fans will probably want to lynch me, but I'm sorry, it has to be said...
HOW MUCH for a Japanese-made Super-Strat?
Currently listed on eBay UK with a Buy It Now price of £3,999.99.
[NOTE: I have nothing against Japanese guitars. In fact I think they often represent the very finest production-made guitars on the planet, and I have owned many. What I am really astonished at here, is the super-inflated price for a guitar that is essentially a "super-strat", just a very basic guitar. I'd like to know how that price tag is justified. Is this actually the recommended selling price (in which case in seems that Metallica fans are being ripped off big time) or is it a price that has been created by demand?]
G L Wilson
Thursday, 7 March 2013
FMW luthier-built Ricky-inspired bass through neck with lucite wings
This unique FMW bass by Berlin luthier Frank M. Weber is quite a stunner. It's obviously based around the Rickenbacker 4000 series basses, is of walnut and maple through-neck construction with see-through lucite body wings. The body is equipped with 8 white LEDs to illuminate the see-through sections. It also has LED position markers mounted into the top edge of the ebony fingerboard. Pickups are "Harry Häussel" Bass Bars.
Currently listed on eBay France with a Buy It Now price of €1500.
G L Wilson
Wednesday, 6 March 2013
The Strawbs "Shine On Silver Sun" and a Vox Winchester
Further to our previous post, here is a 1973 clip from BBC TV's Top Of The Pops featuring The Strawbs performing "Shine On Silver Sun" with guitarist Dave Lambert playing his Vox Winchester, which - as we have already seen - has a body made from a Vox wah-wah pedal.
Note that his Winchester features two pickups rather than the single pickup we looked at in the previous post. Also, the neck has a different headstock, implying these were assembled from whatever Vox parts were lying around.
G L Wilson
Tuesday, 5 March 2013
Vox Winchester guitar has metal body made from Wah Wah pedal chassis
Talk about re-cycling ... and you thought that Fender's recycled parts Swinger and Maverick models were quirky! I think that the above photos of this rare 1960s British-made Vox Winchester guitar speak volumes. Do I really have to add anything? Other than perhaps to draw your attention to the really weird positioning of the volume pot in a window in the rear of the guitar! (If it IS a volume pot as the eBay seller suggests. See the Dave Lambert link below.)
Currently listed on eBay with a Buy It Now price of $2499. Thanks to Nathan for bringing this guitar to my attention. Nathan points out that Dave Lambert of the Strawbs owns an example.
G L Wilson
Monday, 4 March 2013
Hofner Shorty - the little guitar that is ripe for modding
This eBay UK seller, impressed with the quality and playability of Hofner's inexpensive little travel guitar, the Hofner Shorty, has modded seven of them installing the Wilkinson WVS50 two-point trem system on each, and made them available for sale on eBay with a Buy It Now price of £175 a piece.
He makes a very good case for the playability and versatility of these babies, as evidenced in the above video.
Meanwhile, another modder has routed out the body of his Hofner Shorty, reinstalled a new top and bridge, and with the help of piezo pickups and a little technology has converted it into a quite convincing sounding electro-acoustic guitar for busking.
G L Wilson
Kay strat copy with flowery custom paint
Sometimes there is a fine line between psychedelic / flower power patterns and your grandmother's favorite summer dress...
Look at this refinished Kay strat copy from the 1980s: does it sing about universal peace and free love, or does it ask you to put the teapot cover on and take the scones out of the oven?
Bertram D
Saturday, 2 March 2013
Gibson SG Special 3SC
Yeah, I know, it's a Gibson SG and it has 3 blade single coil pickups, it exists, it's real, it's the Gibson SG Special 3SC - a 2007 limited edition - and I show it here just for its plain oddity!
Well, maybe it's also a good guitar!
Bertram D
Friday, 1 March 2013
Kramer KL8 bass - aluminium neck and eight strings
It always kind of annoyed me that no-one seriously took notice of Kramer guitars until Eddie Van Halen started using them, mainly because by this time - in my eyes - they had stopped being cool by abandoning the aluminium necks that made their early instruments so very interesting in the first place. The company was founded in the mid-1970s by Dennis Berardi, Gary Kramer and Phillip J. Petillo and produced their first aluminium necked guitars in 1976 from a plant in Neptune, New Jersey. They switched to producing more generic wooden-necked guitars in 1981. Notably, Gary Kramer had left the company by this point.
Unlike Travis Bean guitars and basses, Kramer's aluminium neck was basically a T-shape in cross section with a fillet of timber to either side so as to give warmth to the back of the neck against the player's hand. Kramer produced more basses than guitars in their early period of metal-necked instruments, in a ratio of approximately 4:1, mainly because bassists for some reason seem to be more willing to experiment with new ideas than guitarists (which is something I have often complained about - c'mon guitarists, stop being so conservative!).
The above-pictured Kramer KL-8 bass is an 8-stringed beauty (OK, it's only strung with 4 strings in the photo, no need to point that out in the comments), has a pair of DiMarzio pickups and a whole bevy of switching options. Note the wooden inserts (looks to be walnut, maybe?) in the back of the neck. Note the four machine heads at the body end for the octave strings. This concept of tuners at both ends of the instrument was also used around the same time on 12-string guitars and 8-string basses built by the likes of Washburn and BC Rich (also 10-string guitars in the latter case). Actually, speaking of BC Rich, the whole design is rather reminiscent of their guitars; only the forked headstock gives the game away that it's a Kramer.
The eBay seller has listed this bass as being from the 1980s, but I think it's more likely to be 1970s, although conceivably it may have been from the last year of production of aluminium necks in 1980.
Currently listed on eBay with a Buy It Now price of $1,500.
G L Wilson
3DMonstr Printer: 8 Cubic Feet Of Build Volume
So you’re looking at 3D printers, but the build volumes for the current offerings just aren’t where you’d like them to be. [Ben Reylblat] had the same problem and came up with the 3DMonstr, an enormous printer that has (in its biggest configuration) a two foot cubed build volume, four extruders, and the mechanical design to make everything work.
Most of the ginormous 3D printers we’ve seen are basically upgraded versions of the common table-top sized models. This huge Ultimaker copy uses the same rods as its smaller cousin, and LeBigRap also uses woefully undersized parts. The 3DMonstr isn’t a copy of smaller machines, and instead uses very big motors for each axis, ball screws, and a proper welded frame. It’s highly doubtful anyone will call this printer a wobblebot.
The 3DMonstr comes in three sizes: 12 inches cubed, 18 inches cubed, and 24 inches cubed, with options for two to four extruders. We caught up with the 3D Monstr team at the NYC Maker Faire, and from first impressions we have to say this printer is freakin’ huge and impeccably designed.
1. Dane says:
I saw them at makerfaire and was impressed, it looks like they’ve even made improvements since then. I’m curious how they are getting around ABS warping issues on large prints. The issue scales with size, and I can say it’s non-trivial, ex: http://transistor-man.com/3dprinter_upgrade1.html
Looking forward to them getting funded!
2. Josh says:
Eh, the design looks rather crude, basic and unfinished. The axes are going to have so much torque (much more than is needed) that if you were to touch the extruder nozzle to the print bed (it happens to me fairly often on my reprap), which given the raising z axis extruder mount looks very possible, something’s going to break!
It looks like it’s 70% done, as a prototype then sure, but as a finished product? nah.
3. Giant Flying Bird says:
That’s nice, but it still isn’t big enough to create my life size Pterodactyl zoo.
4. Hack Man says:
Its FRAME is an 18′ shipping container.
5. medix says:
The idea that 3D Printing is scalable in build volume is really nothing new, and I’m getting rather tired of beating this dead horse (so to speak). The basic designs have been done to death, so where’s the novelty?
How about more posts concerning things that are actually being DONE with 3D Printers? I acknowledge that there have been a few, on varied topics, but I’d really like to see more applications that feature 3D printing as the core of the fabrication process.
Not to say that this isn’t interesting, but.. well.. meh.
• How about a 3D printer which “sprays” material outwards from a pivot?
The pivot is on rotational x/y base and, using basic physics, predicts where the stream will land and reverse-calculates the angle and strength to use for the desired co-ordinates.
You would need to operate it in a windless environment, and you’d need a new material. And it would be a bit more dangerous…..but in principle you might be able to completely decouple build volume from device volume. Instead it would be linked to how far the spray could project upwards.
Not saying it would be practical, but it would be an interesting design to see experimented with, imho.
6. fartface says:
I am hoping for someone to start making 3D printing more accessible to the newbie or even the normals. Just the software alone for 3D printing makes 90% of the population freak out.
• twdarkflame says:
I think we are mostly there. There are already PC-less solutions. The new Ultimaker basically lets you download a file from Thingiverse, dump it on an SD card, then the printer can use that directly. Meanwhile there’s plenty of “ultra easy” 3D software online and offline these days.
I think the speed is the big hold up atm really. 6 hours or so is just too much for most people. That’s why I have high hopes for;
And iterations of that design.
• pcf11 says:
The Dollar store lets you walk in and just buy plastic crap too.
• twdarkflame says:
And I can buy a paperback book cheaper than I can print one on my 2D printer.
Your point is?
• ANC says:
I think he’s saying we’re at the stage before Postscript and LaserWriters were invented. Sure, we’ll have fun with these things, but when will your parents be able to use one?
• twdarkflame says:
That would make sense if it was a quality issue holding it back from being mainstream, but I don’t think that’s the case.
And, as I mentioned in the above post, I think the answer to “when will my parents” is “right now” in terms of “able”. Can’t get much easier than “download and print from SD card”. :)
• ANC says:
Ok, ok. When will they want to use, naturally turn to, or see the need for a 3-d printer.
• SATovey says:
At the same time when they will want to purchase a loom in order to make a blanket or some article of clothing.
3D Printing is more likely seen as a craft by the majority of people. The only ones who take notice of it are the crowd that just can’t resist learning a new craft, or are novice engineers that have an idea that they would like to make work.
People who think that 3D printers will become mainstream the way computers and 2D printers have, are neglecting the basic psychology of mankind which is: “if I can pay someone else to do it, why should I do it.”
That basic psychology however, does not eliminate the need to make the craft more accessible and easier to use as doing so expands the number of people that will at least try it. 3D Printing, will likely be categorized alongside knitting, black smithing, and other such crafts that allow people to make things. So the only ones who will delve into it are the ones who are inclined to take the time to learn the craft, and excel at it.
• sneakypoo says:
No, no it cannot. You still have to slice the STL before the printer can understand what to do with it. Granted, cura (software from Ultimaker) is now getting to the point where you get pretty ok prints without messing around with any settings at all but for the best results you need to learn a thing or two and be willing to experiment with both settings and other slicers.
We’ve come a long long way since the early RepRaps and Skeinforge though.
• F says:
Do you really think that normal people are going to endure the smell of molten plastic in their house? Does anyone actually bother to test these things for the amount of outgassed toxins? What is the effect on children and infants? Big build volume and fast print speed also mean higher concentrations of toxins in the air.
Selling to “normal” people also requires things like guards to protect young fingers from pinches and burns.
Also I’ve no doubt that many of these 3-D printers would go up in smoke and flames if they were left unattended and had a software meltdown resulting in stuck motors.
By the time you trick out a 3-D printer to make it UL compliant for home use, you’ve doubled the price, pushing it back up out of home range pricing.
• twdarkflame says:
Yes. And yes. Studies (so far) show it’s no worse than frying. (And honestly, my Ultimaker hardly smells at all unless you’re within a few feet.)
You should absolutely be in a ventilated space if you’re going to be standing near it, but that’s it pretty much.
Likewise, there -are- guards. Lots of current designs are semi or fully encased. Compare the Ultimaker1 to the new Ultimaker2 (and there’s stuff like the Form1…go and look what’s out there!). If you want a 3D printer more like a 2D one, there are choices now.
Even without guards though, they won’t burn the house down. When it goes wrong you might damage the motors/belts, or clog the tubes. But they’re not explosive, and the head isn’t just going to fall off and start burning stuff.
You seem a few years out of date with what’s available, basically.
Price is still an issue, however. Another reason for what I linked to being a good template for future designs.
The Middle East Flame is Far from Being Extinguished
Another day, another revelation inside the (in)visible Cyber War going on in the Middle East. Today Kaspersky Lab has announced the discovery of another strain of malware derived from the infamous Tilded-Platform family: the little brother of Flame, the so-called miniFlame (or "John", as named by the corresponding Gauss configuration).
The malware was discovered while looking closer at the protocol handlers of the Flame C2 infrastructure - an analysis that had previously revealed four different types of malware clients, codenamed SP, SPE, FL and IP, and hence fragmented evidence of a new family of cyber weapons, of which only one element was known at the time: the FL client, corresponding to Flame.
Exactly one month later, another member of the family has been given a proper name: the SPE element corresponding to miniFlame.
Unlike its elder brother Flame (and its cousin Gauss), miniFlame does not appear to be part of a massive spy operation infecting thousands of users, but rather resembles a small, fully functional espionage module designed for data theft and direct access to infected systems. In a few words: a high-precision, surgical attack tool created to complement its more devastating relatives in high-profile targeted campaigns. The main purpose of miniFlame is to act as a backdoor on infected systems, allowing direct control by the attackers.
Researchers discovered that miniFlame is based on the Flame platform but is implemented as an independent module. This means that it can operate either independently, without the main modules of Flame in the system, or as a component controlled by Flame.
Furthermore, miniFlame can be used in conjunction with Gauss. It had been assumed that Flame and Gauss were parallel projects without any modules or C&C servers in common. The discovery of miniFlame, and the evidence that it can work with both cyber-espionage tools, proves that they were products of the same 'cyber-weapon factory': miniFlame can work as a stand-alone program, or as a Flame or even Gauss plugin.
Although researchers believe that miniFlame has been in the wild since 2007, it has infected a significantly smaller number of hosts (~50-60 vs. more than 10,000 systems affected by the Flame/Gauss couple). The distribution of the infections depends on the SPE variant, and spans a heterogeneous sample of countries, from Lebanon and Palestine to Iran, Kuwait and Qatar, with Lebanon and Iran appearing to have the largest number of infected hosts.
More evidence of the ongoing (since 2007) silent Cyber War in the Middle East.
23 May 2007
All aboard that love train...
The best part is... you can let somebody else pay for your ticket...
Controversial immigration rules aimed at stopping sham marriages are unlawful, says the Court of Appeal. The rules applied to all such individuals, irrespective of the status of their partner.
So, what's at issue here?
Nobody knows the scale of sham marriages, although senior registrars suggested that before the new legislation there could have been at least 10,000 a year.
Registrars at Brent Council in north London, one of the most diverse areas of Britain, suggested in 2005 that a fifth of all marriages there were bogus, with officials able to spot couples who barely knew each other.
Good grief.
Technorati Tags: , ,
Canadi-anna said...
Why do we have rules anyway?
Anonymous said...
Canadi-anna, I understand your frustration. My wife and I have talked about this sort of thing many times.
We've decided since the government keeps enacting nut-case laws and the courts keep striking down sane ones we will (whenever possible) only obey and observe those laws we support.
For example the SCOC *ALMOST* outlawed spanking a couple of years back. It's still legal but only by one justice's vote. If they ever make it illegal we will still use spanking as a punishment if it's required (- f**K the law).
Another example: I want to buy a rifle. Every new government brings in more laws against guns. SO . . . f**K them. I'll just go buy an illegal gun and not ever register it. The government, the courts and the Starbucks Liberal assholes who think this crap up can shove it.
If they can catch me and make a case that sticks, hooray for them. But I don't think they can.
Neo Conservative said...
case in point, the 2 billion dollar farmer bob rifle registry...
the fiberal government of the day says... we sure as shit aren't gonna do anything about the ever increasing problem of murder and mayhem with illegal smuggled handguns in our heavily minority constituencies...
so instead, let's go after rifles owned by farmers and hunters... and say we're doing something about gun crime.
farmers and hunters in the main, say no way, fuck you and guns are driven underground.
no crimes are addressed, two billion dollars that could have gone to cancer research are flushed down the toilet and millions of perfectly legal gun owners are turned into criminals.
Anonymous said...
But, hold on! It makes a lot of asshole lawyers wealthy (like there are any *NON*asshole lawyers) and gives cushy civil service jobs to Liberal friends - doesn't that count for something?
I was SERIOUS when I said I was simply going to cherry-pick which laws I respect and which I will deliberately ignore.
The ruling elite Liberals, pretend conservatives, judges, lawyers, journalists and media types keep pushing society farther and farther into a pussy-whipped, decadent looney land. When anyone objects they use the courts, media and panic legislation to ram their B.S. down everyone's throat.
So since we can't change anything, the only other choice is refusing to play along.
That means to hell with them. Whenever I choose and it's possible I will simply ignore those laws I don't feel like obeying. As I said before, if they can catch me and successfully charge me, good for them. But I don't think they can.
Anonymous said...
A bill of sale is all that's required for a legitimate marriage from many countries.
Western notions of romantic love don't apply, women are two legged cattle used by males for breeding.
We can't impose our values on these people, we must assimilate to their culture if they want to live here - if women are cattle, women are cattle - end of story.
Not even feminists argue with that, these cultures trump the rights of women because they're non-white cultures, so sexism is okay.
Wednesday, August 10, 2011
So we've given it a name
theWife and I deconstructed the day on the way to the movies to see an actual movie. I know it seems superfluous to say 'to the movies to see an actual movie' but anyone who's a parent and who lacks a decent relative-fueled support network will know what I mean. That you get to go out to do the thing you did as your week's coda pre-child that you now do twice a year. See an actual movie.
And oh did we love movies. Once or twice a fortnight we'd go, aiming to see usually a comedy (1) or a sci-fi. Sometimes a thriller. Never a romance (spit, spit), unless it was a romantic comedy, in which case, yes please (2). And ... I admit it ... I got to choose. I did. It was rare theWife fought me on the choice because otherwise she'd have to deal with my passive-aggressive sooking ('No, it's okay. I mean I wouldn't choose it but I'm happy to do this for you').
So we were in the car and tooling along the Monaro when theWife wisely said we should focus on the positives. I mean I've been in ambulatory pain all my life. All my life. And all my life I've been vilified by parents and teachers and adults alike as some sort of fat food-noshing slacker who couldn't be fucked to get outside and just take a lungful of fresh air you fat boy weak person when it turns out the reason why I didn't want to get jiggy with anything was because of constant discomfort.
Fast-forward to now. Three years ago I decided to go for a walk and I decided I'd do it every day. Every single day. Without fail. And bar one day (3) I did it. Rain. Cold. Hot. Didn't matter. I did it. And I kept doing it even as the pain ate at my legs, at my back, my gut, my head and on my aching flat-as-all-fuck-feet. Because all my life people had judged me ill for not moving so fuck it, I was moving.
There were moments when I enjoyed it. But almost always I was doped to the max on pain killers which took away the aches and pains that shot through my body with the giddy abandon of a pinball machine that's undergoing a seven-ball-drop. I'd get an actual feeling of ... fuck it ... worth that I was actually exercising religiously, in a once-a-day-mass-kind-of-way for the first time in my life and overcoming by sometimes sheer fucking willpower the screaming desire to fuck it all off and just drift into a world of inexorable abdominal growth.
But it turns out ... it turns out all I was doing was grinding the last of the cartilage off the bones in my left hip socket. Because, according to the doctor, my left hip likely never formed correctly or developed properly, leading to my feeling crap and un-athletic and unappealing. Which meant I didn't walk properly—I have flat-feet, I shuffle, and my right foot points 30 degrees off my left for some unknown reason. So nearly 40 years later my joint has reached its end point. Ground off Mikey gristle that kept my bones in operation just floating away into my system for parts unknown ... unknown save where the fuck they should have been all this time.
I do have to admit I feel like calling up and abusing every single fucking teacher I ever had who mocked me or abused me or made me feel like utter shit and scream in their fucking faces what a fucking useless abusive ____ they were and how they made me feel small. Small and worthless.
But then who doesn't feel that? I'm sure that a good thirty per cent of people get damaged during schooling in some way. It's not an awesome environment. You have to be with these people. But somewhere between work and being in prison.
If I get a hip replacement then ... just maybe ... I will feel better. Like actually have greater movement. I haven't been able to bend my left leg properly in years. If I tend to nail work, taking care of deep-set hang-nails down the sides of my big toes, then I spend 10 minutes getting into position until I have turned my leg into a squished letter s and can stretch a shaking hand with tweezers down into the side of my toe.
I'd be able to walk without pain. I may have a limp but, fuck it, damage already done. It won't get worse than that. And the tech's improved since people I've known along the course of the dash-point have had them as theirs was some years in the past.
Although I dread the idea of an operation, and this is the first time I'm going to lose a part of my body I need for basic survival (4), and the long recovery time that will be needed, at the end of it all it should be an improvement.
This sad to happy transformation as per the play faces (5) made me smile. And it reminded me of this from Step Brothers. A moment where something bad ... became something very, very good.
So ... henceforth ... my likely hip replacement shall be called ... the fucking Catalina Wine Mixer.
And yes ... it's true ... Will Ferrell totally wore the shit out of that pirate hat.
I leave you with this.
Boats and hos.
That is all.
(1) I went into upper case for a second there and it started blaring COMEDY! in a most alarming manner. All fixed.
(2) Yes, I admit it. In the same way I love Roger Moore most as Bond I love RomComs. Love them. I love Richard Curtis and I dreamed of being him. It's a sad weird dream and if Mr Curtis ever indulges in a meth-fueled self-Google and comes here I hope he takes it as a compliment on his skills and not the beauty of his anus. Which, I am sure, is simply terrific and likely well-tended.
(3) The missed day did include a fair hike from then around an airport but that doesn't count if I am honest since it wasn't a purpose walk.
(4) I don't have a gall-bladder anymore. But you can live without one. Before civilisation reached the joy of modern medicine you'd have been hard pressed to survive with such a bunged hip. Unless, of course, you were a member of the slave-owning upper class. Because you would have people for that, who would carry you around in a sedan chair (4a)
(4a) Jabberwocky has the best (and as far as I know only) sedan chair race on film. See Jabberwocky today!
(5) I have no idea what they're called so I just did a wiki for happy sad faces. Let's see if it worked. No. Let's try faces of drama. On the second 20 is Drama, which has the faces, but not the name of the faces. Greek drama faces? Tragedy came up but no mention of faces. Ah, what about Greek drama masks? Score! Graphic says they're tragic comic masks. Let's see if I'm right ... yes, but back to here for it. Phew!
1. I think you're right, and it is going to make a massive difference in the long run. I totally get why you'd want to punch all those teachers in the face - when I realised as an adult why I was so bad at hand eye coordination stuff* I wanted to go and yell at a few PE teachers.
* Because of an almost-blind left eye that I'd known about but not really given any thought - as a kid you don't think to justify yourself with that sort of thing if no one spells it out for you first.
2. Aw man you poor thing. That would have sucked.
I know what you mean about the don't think to justify yourself thing.
It's like I always had gut pain as well. I just never articulated it because I never understood other people didn't have it.
Moving has always hurt. Always. Unless I am zonked out of my tree on meds.
I remember as a kid I hated catching the bus because of all the bullying fuckwits on it. So I'd try walking home. But it hurt. So I'd give up and go to the house of people who went to my church and ask if I could call my mum for a lift. They'd say yes but they were (rightly) weirded out by it. But my mum was pissed off. Which was fair enough. She was trying to work and I'd cause all sorts of trouble for her to finish early to get me.
But I never really realised because I was hurting. Because I've always hurt.
You think it'd make me stronger or something.
You'd be wrong. So weak. So whiny.
My eyes are old and bent.
3. You're allowed to be whiny. It sucks! :(
4. I suppose I can indulge a little bit
(goes out into thunderstorm and shakes fist at the gawds, tears mingling with rain as thunder drowns out screams of embittered rage).
That's better.
5. So long as it helps. Otherwise you'd just be wet.
6. And no girly umbrellas!
They have to be manly ones. With sport on it. Or cars. Or chicks. Or a chick on a car wearing cricket pads.
RXTE Information - Cycle 10 Guest Observer Program
The schedule for RXTE Cycle 10 is as follows:
• Release Date - January 31, 2004, as part of ROSS-04
• Due Date for Notices of Intent - July 9, 2004
• Due Date for Proposal Submission - September 20, 2004, 4:30pm ET
• Proposal Peer Review - November, 2004
• Start of Cycle 10 observations - on or around March 1, 2005
The RXTE front page of the ROSS-04 Announcement contains detailed information about the program.
This Announcement solicits proposals for participation in the NASA OSS program to acquire and analyze scientific data from the RXTE X-ray Observatory, for observations to be carried out in the interval beginning around March 1, 2005, and lasting for twelve months.
Beginning in 2002/2003, for Cycle 8, Guest Investigator funding was reestablished, and will continue to be available for RXTE Cycle 10. The Cycle 10 Peer Review will thus be a two-stage process, similar to the process used previously for RXTE Cycles 1-4 and Cycles 8 and 9. In the first stage, the scientific and technical merits of submitted proposals will be assessed. The PIs of proposals that are successful at Stage 1 will be invited to submit budget requests. These budgets will be assessed in the Stage 2 review.
As was also the case for Cycles 8 and 9: NASA HQ has requested that proposers
• submit a Notice of Intent for each proposal, and
• submit an additional Cover Page for each proposal
A complete proposal submission will thus include:
To HQ:
• Notice of Intent
• HQ Cover Page
To GSFC via RPS:
• RPS Cover Page
• Target Forms
• Scientific Justification
Notices of Intent are due on or before July 9, 2004. The additional HQ Cover Page is due at the time of submission of the full proposal, on or before September 20, 2004.
Further details on Notices of Intent and Cover Page can be found in Section IV (b) of the ROSS-04 Announcement. This information is also discussed in the 2004 NASA HQ NRA Proposers Guidebook. Note that proposers responding to this NRA should consult the 2004 version of the Guidebook.
If you have questions or problems with submitting NOIs or your HQ cover pages, please consult the NASA HQ Proposal Submission FAQ Page , or submit your technical support question to [email protected].
The process of submitting proposals for RXTE Cycle 10 is identical to that used for Cycle 9. Electronic submission of forms will still be achieved using RPS. The content of these forms is unchanged. Scientific justifications will also be submitted electronically using the same submission process used for Cycles 8 and 9.
As always, any questions about this process should be directed to the RXTE Guest Observer Facility.
Further Information
Software, response matrices for PCA/HEXTE simulations
Fully Electronic Proposal Submission for RXTE Cycle 10
NO hardcopies need be sent by postal mail for participation in this Cycle. RPS submission of the cover forms is still required. In addition, the cover forms and scientific justifications should be submitted electronically as PostScript files.
For the rest of their electronic submission, PIs should:
1. Enter their proposal data into RPS, saving often, and using the "Verify" button to perform final checks before submission. (This step is identical to previous RXTE cycles.)
2. Generate one target form per requested observation, using RPS. You are required to submit one target form per possible observation. For example, if you are requesting "the first three of the twenty most interesting X-ray transients", you should submit twenty (not three) target forms, one for each possible trigger.
3. Submit the forms using the 'Submit' button in RPS.
4. Wait (seconds-minutes) for the RPS acknowledgment, which will contain a 3-digit proposal submission number.
5. Create a PostScript file of the forms, using the 'LaTeX' or 'PostScript' buttons within in RPS.
6. Upload two (2) PostScript files per proposal, via RPS, one containing the forms, the other the Scientific Justification, technical feasibility information, and status of previous RXTE observations ("track record"), as specified in the Announcement and Appendices - particularly Section C.2.2. Full instructions on how to upload can be found in RPS. The two files must be named
• nnn_flast_f.ps
• nnn_flast_sj.ps
• nnn is the 3-digit proposal submission number supplied by RPS;
• flast is the first initial and last name of the PI;
• _f is the forms;
• _sj is the scientific justification;
e.g.: 017_asmale_f.ps, 017_asmale_sj.ps
7. Wait (seconds-minutes) for a second RPS acknowledgment, confirming receipt and completion of the electronic submission process.
• When using RPS, remember to frequently save the html file containing your form entries. Files you're working on can be saved to your hard disk, and reloaded from there, with the RPS 'Save' and 'Reload' buttons.
• If electronic submission is infeasible for you, please contact the RXTE Guest Observer Facility to make alternative arrangements.
Other Important Features of Cycle 10
Here, we list other factors that potential RXTE proposers should be aware of. This section is basically unchanged from the previous Cycles.
If you'd like to be a Peer Reviewer ....
The RXTE Cycle 10 Peer Review will take place in the Baltimore/Washington area in mid-November 2004.
If you would like to be considered as a reviewer, please email Mike Arida at [email protected].
RXTE Instrument Configurations: the easy route
For all sources with total PCA count rates less than 1200/s (including all extragalactic observations) and HEXTE count rates less than 80/s, the SOC strongly recommends that proposers use the following set of instrument configurations:
PCA EA1: Standard1
PCA EA2: Standard2
PCA EA3: GoodXenon1_2s
PCA EA4: GoodXenon2_2s
PCA EA5: Idle
PCA EA6: Idle
HEXTE Cluster A: E_8us_256_DX1f
HEXTE Cluster B: E_8us_256_DX1f
Note that RPS does not allow one to leave an EA unused. In this case one has to explicitly specify "Idle".
Use defaults for all other HEXTE parameters.
Note that this advice does not relieve the proposer from the obligation of providing estimates for the expected event count rates.
In these cases there is no need to run recommd or elaborately justify the chosen configurations.
A History of Christianity
Edited By: Robert A. Guisepi
The Origins Of Christianity
In the initial decades of the Roman Empire, at the eastern end of the Mediterranean, a new religion, Christianity, emerged. Much of the impetus for this new religion rested in issues in the Jewish religion, including a long-standing belief in the coming of a Messiah and rigidities that had developed in the Jewish priesthood. Whether or not Christianity was created by God, as Christians believe, the early stages of the religion focused on cleansing the Jewish religion of stiff rituals and haughty leaders. It had little at first to do with Roman culture. Christianity arose in a remote province and appealed particularly to the poorer classes. It is not easy, as a result, to fit Christianity neatly into the patterns of Roman history: It was deliberately separate, and only gradually had wider impact.
Christianity originated with Jesus of Nazareth, a Jewish prophet and teacher who probably came to believe he was the Son of God and certainly was regarded as such by his disciples. Jesus preached in Israel during the time of Augustus, urging a purification of the Jewish religion that would free Israel and establish the kingdom of God on earth. He urged a moral code based on love, charity, and humility, and he asked the faithful to follow his lessons, abandoning worldly concern. Many disciples believed that a Final Judgment day was near at hand, on which God would reward the righteous with immortality and condemn sinners to everlasting hell.
Jesus won many followers among the poor. He also roused suspicion among the upper classes and the leaders of the Jewish religion. These helped persuade the Roman governor, already concerned about unrest among the Jews, that Jesus was a dangerous agitator. Jesus was put to death as a result, crucified like a common criminal, about A.D. 30. His followers believed that he was resurrected on the third day after his death, a proof that he was the Son of God. This belief helped the religion spread farther among Jewish communities in the Middle East, both within the Roman Empire and beyond. As they realized that the Messiah was not immediately returning to earth to set up the Kingdom of God, the disciples of Jesus began to fan out, particularly around the eastern Mediterranean, to spread the new Christian message.
Initially, Christian converts were Jewish by birth and followed the basic Jewish law. Their belief that Christ was divine as well as human, however, roused hostility among other Jews. When one early convert, Stephen, was stoned to death, many disciples left Israel and traveled throughout western Asia.
Christianity Gains Converts And Religious Structure
Gradually over the next 250 years, Christianity won a growing number of converts. By the 4th century A.D., about 10 percent of the residents of the Roman Empire were Christian, and the new religion had also made converts elsewhere in the Middle East and Ethiopia. As it spread, Christianity connected increasingly with larger themes in Roman history.
With its particularly great appeal to some of the poor, Christianity was well positioned to reflect social grievances in an empire increasingly marked by inequality. Slaves, dispossessed farmers and impoverished city dwellers found hope in a religion that promised rewards after death. Christianity also answered cultural and spiritual needs - especially but not exclusively among the poor - left untended by mainstream Roman religion and culture. Roman values had stressed political goals and ethics suitable for life in this world. They did not join peoples of the empire in more spiritual loyalties, and they did not offer many emotionally satisfying rituals. As the empire consolidated, reducing direct political participation, a number of mystery religions spread from the Middle East and Egypt, religions that offered emotionally charged rituals. Worship of gods such as Mithra or Isis, derived from earlier Mesopotamian or Egyptian beliefs, attracted some Roman soldiers and others with rites of sacrifice and a strong sense of religious community. Christianity, though far more than a mystery religion, had some of these qualities and won converts on this basis as well. Christianity, in sum, gained ground in part because of features of Roman political and cultural life.
The spread of Christianity also benefited from some of the positive qualities of Rome's great empire. Political stability and communications over a wide area aided missionary efforts, while the Roman example helped inspire the government forms of the growing Christian church. Early Christian communities regulated themselves, but with expansion more formal government was introduced, with bishops playing a role not unlike Rome's provincial governors. Bishops headed churches in regional centers and supervised the activities of other churches in the area. Bishops in politically powerful cities, including Rome, gained particular authority. Roman principles also helped move what initially had been a religion among Jews to a genuinely cosmopolitan stance. Under the leadership of Paul, converted to Christianity about A.D. 35, Christian missionaries began to move away from insistence that adherents of the new religion must follow Jewish law. Rather, in the spirit of Rome and of Hellenism, the new faith was seen as universal, open to all whether or not they followed Jewish practices in diet, male circumcision, and so on.
Paul's conversion to Christianity proved vital. Paul was Jewish, but he was familiar with Greco-Roman culture; he explained Christian beliefs in terms that this culture could grasp, and he preached in Greece and Italy as well as the Middle East. Paul essentially created Christian theology, as a set of intellectual principles that followed from, but generalized, the message of Jesus. Paul also modified certain initial Christian impulses. Jesus himself had drawn a large number of women followers, but Paul emphasized women's subordination to men and the dangers of sexuality. It was Paul's stress on Christianity as a universal religion, requiring abandonment of other religious beliefs, and his related use of Greek - the dominant language of the day throughout the eastern Mediterranean - that particularly transformed the new faith.
Relations With The Roman Empire
Gradually, Christian theological leaders made further contact with Greco-Roman intellectual life. They began to develop a body of Christian writings beyond the Bible messages written by the disciples of Jesus. By the 4th century A.D., Christian writings became the only creative cultural expressions in the Roman Empire, as theologians sought not only to explain issues in the new religion but also to relate it to Greek philosophy and Roman ethics. Ironically, as the Roman Empire was in most respects declining, Christianity produced an outpouring of complex thought and often elegant use of language. In this effort, Christianity redirected Roman culture (never known for abundant religious subtlety) but also preserved many earlier literary and philosophical achievements.
Christians, who put their duties to God first, would not honor the emperor as a divinity and might seem to reject the authority of the state in other spheres. Several early emperors, including the mad Nero, persecuted Christians, killing some and driving their worship underground. Persecution was not constant, however, which helps explain why the religion continued to spread. It resumed only in the 4th century, when several emperors sought to use religious conformity and new claims to divinity as a way of cementing loyalties to a declining state. Roman beliefs, including periodic tolerance, helped shape a Christian view that the state had a legitimately separate if subordinate sphere; Western Christians would often cite Christ as saying, "Render unto Caesar the things that are Caesar's, and unto God the things that are God's."
The full story of early Christianity goes beyond the history of Rome. Christianity had more to do with opening a new era in the history of the Mediterranean region than with shaping the later Roman Empire. Yet important connections did exist that explain features of Christianity and of later Roman history. Though not a Roman product and though benefiting in part from the empire's decline, Christianity in some of its qualities can be counted as part of the Greco-Roman legacy.
How to Get The Best From Your Kit Part 8
Give me Bass
Having discussed the mechanics of PA speakers and amplifiers in parts 4 & 5, this article looks at how to upgrade a PA system by adding bass bins - a concept that every mobile DJ will no doubt consider at some time. All opinions expressed are in no way linked to the editor of Mobile DJ magazine.
Do I really need more Bass?
Oh be serious! What kind of sad DJ doesn't want more bass? How else would you deal with a reluctant audience if you couldn't bounce them on to the dance floor? One point to consider before you increase your credit card limit - again: take into account the type and size of the functions you perform at, the size of the room and on average how many people attend. Does your sound system as it stands do an adequate job for the majority of functions? If it does then for the odd function where you decide you might want a bit more 'Welley' in your sound you can opt for hiring the extra equipment as required.
What adding bass bins to your sound system achieves;
a). You get more bass (Duh !).
b). The volume of your sound system increases, however, you can also achieve this just by purchasing a bigger amplifier for your current speakers - taking into consideration what's been discussed in the previous three articles in this series. But, increasing the volume over the whole frequency spectrum will eventually become painful on the hearing as the higher frequencies start to pierce the eardrums.
c). Adding bass bins achieves more than just an overall increase in volume, by concentrating the increase of volume in purely the lower frequencies the overall "presence" of your sound increases. The difference is between sounding loud and sounding big, a powerful bass sound will reverberate around the room and into your chest (but we aren't after de-fibrillation).
d). You certainly get more flexibility in the size of functions you can adequately cater for. A standard disco PA system consisting of a pair of full range speakers and an amplifier at about 1000 to 1600 Watts will cater for audiences up to 200 or 300 people. For larger venues and audiences you will find enhancing your sound system with a pair of bass bins will be beneficial.
Bass bins - purpose built for the job
So how do you increase the volume of the bass frequencies? It all depends on moving large volumes of air. There are two ways a DJ can do this; the first is to get a decent set of bass bins, the second is to go on a diet of baked beans. One method will achieve a more practical, precise and less unpleasant result than the other. Bass bins are purpose built for the job of producing bass frequencies and will do so more efficiently than standard full range loudspeakers, but this also means they are big and, unfortunately, heavy. Good bass bins will have at least a single 15 inch loudspeaker driver; a larger 18 inch driver will be able to produce even lower sub-bass frequencies down to as low as 30 Hz. Bass bins with two 12, 15 or 18 inch speakers will produce a more powerful bass sound but will need a larger amplifier to drive them (not to mention the fork lift truck to get them in and out of a venue). The greater the number of bass drivers - hence the larger the bass bins - the greater the volume of air you can shift, so the louder the bass sound. There are plenty of models to choose from, including: Peavey Hi-sys 115XT, Hi-sys 118XT and Hi-sys 215XT, RCF Event ESW1018, JBL SR4715A, SR4718A, SR4719A and MR905.
Upgrading your sound system - two approaches
Let's assume you are choosing to upgrade your current sound system and to do this you'll keep your original full range speakers to use as mid / high cabinets, and possibly keep your original amplifier as well. O.K., so you rich kids who are going to buy a brand new JBL concert system all in one go, just turn the air conditioning up in your Porsche and chill out. The two methods of upgrading I'm about to describe can make a good upgrade path to a large bi-amplified sound system for us mere mortals. The difference between the two methods depends on using a passive (post-amplifier) or an active (electronic pre-amplifier) crossover.
What's a crossover?
The ideal speaker would be a point source capable of producing the full audio frequency range. Allegedly one did exist several billion years ago just before the big bang. In the real world of compromises when a single speaker attempts to generate both high and low frequencies at the same time intermodulation distortion occurs. Visualise the back and forward movement of a speaker cone. Low frequency waves are greater in amplitude than high frequencies and so will move the speaker cone a greater distance at a slower oscillation rate. As the cone moves forward at low frequency the high frequencies produced by the same cone added on top of the low frequency oscillation get compressed and so are raised in pitch. As the cone recedes the high frequency signal is lowered in pitch. This change in pitch is known as the Doppler effect, it's the same effect used to describe the change in pitch of a train whistle as it passes you. The interference with the high frequency wave production due to the low frequency oscillations of the speaker cone is known as intermodulation distortion. The solution is to use multiway speaker systems splitting the frequency band between different purpose built drivers for the appropriate frequency ranges. A crossover is used to separate the audio frequencies and route them to the correct speakers, the bass frequencies go to the bass bins, the mid range frequencies to the mid range drivers and the high frequencies to the tweeters or compression drivers. However, as Mike Taylor interjected in a previous article that mentioned this subject, using crossovers introduces the problem of crossover distortion, cheers Mike! This means at the crossover frequency there will be an element of distortion introduced by the crossover process itself. Every time you process or alter an audio signal in some way you distort it, it's difficult to win. However, crossover distortion is easier to handle than intermodulation distortion and with careful engineering crossover distortion can be kept to a minimum. Multiway speaker systems will usually come with a built in crossover, for this article we need only concern ourselves with the low frequency crossovers required for use with bass bins.
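To get a feel for how big that Doppler effect actually is, here is a rough back-of-the-envelope calculation written as a tiny C program. The excursion and frequencies used are illustrative assumptions of mine rather than figures from this article, so treat the output as a ballpark only.

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative assumptions, not figures from the article */
        const double PI       = 3.14159265358979;
        double excursion      = 0.005;   /* peak cone excursion: 5 mm */
        double f_low          = 50.0;    /* low-frequency tone moving the cone (Hz) */
        double f_high         = 5000.0;  /* high-frequency tone from the same cone (Hz) */
        double speed_of_sound = 343.0;   /* m/s in air */

        /* Peak cone velocity for a sinusoidal excursion: v = 2 * pi * f * A */
        double v_peak = 2.0 * PI * f_low * excursion;

        /* Approximate Doppler wobble of the high tone: delta_f = f_high * v / c */
        double wobble = f_high * v_peak / speed_of_sound;

        printf("Peak cone velocity: %.2f m/s\n", v_peak);
        printf("The %.0f Hz tone wobbles by roughly +/- %.0f Hz (%.2f%%)\n",
               f_high, wobble, 100.0 * v_peak / speed_of_sound);
        return 0;
    }

With these assumed figures the high tone wanders by roughly half a percent in pitch, which is clearly audible - hence the appeal of splitting the frequency bands between dedicated drivers.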
The passive system
Some, but not all, bass bins come with a built in crossover. You connect the output from your amplifier to the bass bins and then connect a signal fed back out from the bass bins to your full range speakers as shown in fig. 1. The built in crossover in the bass bin takes the bass frequencies below about 150Hz to 250Hz and feeds them to the speaker driver(s) in the bass bin; it sends the remaining low-mid to high frequencies out to the full range speakers which are now used as mid / high speakers. An important point to be aware of is the additional loading the bass bins will add to your amplifier. To explain, consider this example: suppose you currently own a pair of Peavey Hi-Sys 2XT full range loudspeakers rated at 350 Watts at 4 Ohm impedance. You decide to add a pair of Peavey Hi-Sys 115XT bass bins to your sound system using their internal passive crossover. These bass bins are also rated at 350 Watts at 4 Ohm impedance. The instructions supplied with the bass bins state that connecting the bass bins to the Hi-Sys 2XT speakers using the bass bin's internal crossover will present an overall speaker impedance of 4 Ohm to the amplifier. This is governed by the design of the passive crossover; by connecting some speaker systems together the overall impedance load may be lower, for example 2 Ohm. It's generally a rule of thumb that if you connect 2 speakers of the same impedance together in parallel the overall impedance is half that of the individual impedance value of the speakers. You need to know the overall impedance load so you can be sure your amplifier has the capability of driving that particular impedance load; some amplifiers can't drive less than 4 Ohm on each channel. Read the manufacturers' instructions for both the amplifier and the speakers if you're unsure. Now you know the overall impedance you need to calculate the total speaker power. This is the sum of the power handling of the bass bins added to the power handling of the full range speakers. Be sure the power rating is measured the same way for both bass bins and full range speakers - you need the RMS, or better still AES, power ratings. In our example the power of the bass bins added to the power of the full range speakers is 700 Watts. So to prevent the risk of overloading the amplifier you need an amplifier that delivers at least 700 Watts per channel into a 4 Ohm load if you wish to minimise clipping the amplifier. (The subjects of speaker power handling and how amplifiers work were covered in the previous two articles.) If you do try using too small an amplifier you risk damaging the compression drivers in the full range speakers since the bass bins and the main speaker drivers in the full range speakers are taking up all the power. So you have to consider the possibility of buying a larger amplifier if yours isn't up to the job. The main disadvantage with using passive crossovers is they are inefficient. Because they are employed after the power amplifier they have to be capable of withstanding the large power outputs involved, in our example in the region of 700 Watts. The required high power handling limits the types of components that can be used in the crossover to the passive type components, hence 'passive crossover', typically capacitors, resistors and inductors. Because of the heat dissipation in these components they have to be quite large and by their nature this makes it harder to engineer precise crossover networks.
In addition the heat generated uses up power output from the amplifier for something we don't want instead of sound production, which we do.
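The arithmetic in the passive example above is easy to sanity-check in a few lines of code. This little C sketch simply restates the worked example: the power and impedance figures are the ones quoted above, and the rule-of-thumb parallel formula is only a guide, because the overall load presented by a built-in crossover is whatever the manufacturer specifies (4 Ohms in the Hi-Sys case).

    #include <stdio.h>

    int main(void)
    {
        /* Figures from the worked example above (per channel) */
        double bin_watts  = 350.0;  /* bass bin power handling (AES/RMS) */
        double bin_ohms   = 4.0;
        double full_watts = 350.0;  /* full range speaker power handling */
        double full_ohms  = 4.0;

        /* Total power the amplifier should be able to deliver per channel */
        double total_watts = bin_watts + full_watts;

        /* Rule-of-thumb parallel impedance: Z = (Z1 * Z2) / (Z1 + Z2).
           A built-in passive crossover may present a different overall load -
           always check the manufacturer's stated figure. */
        double parallel_ohms = (bin_ohms * full_ohms) / (bin_ohms + full_ohms);

        printf("Amplifier needs at least %.0f Watts per channel\n", total_watts);
        printf("Naive parallel impedance: %.1f Ohms (the crossover may present more)\n",
               parallel_ohms);
        return 0;
    }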
The active system
This time the crossover point is before the amplification stage as shown in fig. 2. The sound signal from the mixer is fed directly into an electronic active crossover, which you have to buy in addition to the bass bins. There are slight variations in the types of active crossover available; we need a stereo 2-way crossover, examples include: Behringer Super X, DOD 835, Studiomaster C180 and the Peavey XD 3/4. These crossovers use more precise 'active' (hence the name) components such as op-amps and transistors etc. for the job of splitting the sound signal between the bass bins and the mid / high speakers. This enables the crossover process to be more tightly controlled and better engineered, thus introducing less distortion at the crossover frequency. Because the signal is still at pre-amplification levels there's no need to worry about the components' ability to handle the high power output, as is the case when using a passive crossover. However, because the crossover stage is before the amplification stage you now need two amplifiers.
So, in addition to buying your new bass bins and an active crossover you also have to buy a second amplifier to drive the bass bins, and use your original amplifier to drive your full range speakers which are now used as mid / high cabinets. The obvious downside to this is expense, although the advantages are considerable: to start with, you now have a serious amount of power to drive your sound system. The crossover sends all the sub-bass frequencies, typically below 90 to 250 Hz, to the amplifier driving the bass bins, and the low-mid / high frequency ranges are sent to the second amplifier driving the full range speakers. What this process has done is to split the task of producing all the powerful bass frequencies between the two amplifiers. This means that where before your original amplifier's power was used up producing the full range sound to drive your full range speakers, there is now more power available in this amplifier because it is producing just the mid and high frequencies. This extra available power means the top end will sound a lot clearer because of the extra available 'headroom' in your sound system. You will notice that your sound system is much louder at what were the lower volume levels on your mixing deck before you upgraded to using two amplifiers. You can also increase the volume on your mixer by a greater amount than before because your original amplifier will not start to clip so easily. Consequently at larger functions the volume is there when you need it. A point worth noting here is to check the power consumption of the two amplifiers: if the total current required to drive them exceeds 13 Amps you will need to connect the second amplifier to a second wall socket - time to invest in another extension lead. Check that both power sockets are on the same phase or ring main.
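As a rough guide to the mains-current check mentioned above, you can estimate the total draw from the rated power consumption printed on the back panel of each amplifier. The consumption figures below are hypothetical examples only, and the calculation assumes 230 Volt UK mains - always use the figures from your own amplifiers' manuals.

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical rated power consumption of each amplifier (Watts),
           as printed on the back panel - not the audio output power */
        double amp1_watts  = 1200.0;  /* mid / high amplifier */
        double amp2_watts  = 1600.0;  /* bass bin amplifier */
        double mains_volts = 230.0;   /* UK mains */
        double socket_amps = 13.0;    /* limit of a single wall socket */

        double total_amps = (amp1_watts + amp2_watts) / mains_volts;

        printf("Estimated worst-case draw: %.1f Amps\n", total_amps);
        if (total_amps > socket_amps)
            printf("Over %.0f Amps - run the second amplifier from a second socket\n",
                   socket_amps);
        else
            printf("Within a single %.0f Amp socket\n", socket_amps);
        return 0;
    }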
2 is better than 1
Having two amplifiers removes the worry of 'what will I do if my amp blows?' Remember, the amplifier is the engine of your PA system; if you're only running with one amplifier and it breaks down, your show's over if you haven't got a spare. Most other breakdowns in a PA system can be worked around; losing one CD player or record deck is inconvenient but not a disaster. If you're running a bi-amplified PA system and one of your amplifiers breaks down, with some quick re-wiring you can be back up and running in 5 minutes. Bypass the active crossover by connecting your mixer output directly to the one remaining good amplifier, which in turn connects to your full range speakers. This will see you through the rest of the night, albeit at a reduced volume. I recently read an article aimed at mobile DJs stating that professional DJs never went to a gig with less than two amplifiers, and this re-kindled the 'what makes a professional DJ' debate. This one's down to the individual; personally, I have two amplifiers that I take out when I'm using my full PA system. I'll admit that if I'm not using my bass bins I usually go out with just one amplifier, normally because I'm in my car instead of a van. If you do use only one amplifier make sure it's a good one with a reliable track record, and look after it.
What crossover frequency?
Another advantage of using an active crossover is you can control the actual crossover frequency thus enabling you to get the optimum performance out of both your bass bins and your mid / high speakers. When setting the crossover frequency always follow the advice given by the bass bin manufacturers, the operation manual for the active crossover should also contain some helpful advice. The actual process of getting it precisely correct involves going into the theory of phase and time alignment of multiway speaker systems. (Why? I don't know, try asking me one on 3rd century Byzantine Mosaics). It's very boring and not practical for a standard mobile set up anyway, it depends if you want to squeeze every last ounce out of your sound system or not. As a guide the crossover frequency for bass bins is between 90Hz to 250Hz, there should be little or preferably no vocals produced by the bass bins, that's the job of the mid / high speakers.
Passive or Active?
Having compared the two methods of adding bass bins to a PA system there are clear advantages in using an active crossover with a second amplifier compared to the passive choice. The only down side is the cost, the passive option being a lot cheaper as long as you don't have to buy a bigger amplifier. If you do then it's probably worth going for the active set up anyway. Also, not all bass bins come with built in crossovers, if you opt to buy some without then you have no choice but to use an active crossover system. If your budget won't stretch to the active bi-amplified system and you opt for bass bins with a built in crossover they usually have a way of bypassing the internal crossover so you can upgrade at a later date if you wish.
Tricks and Tips
1. Placing your bass bins together in an array, as in fig. 3, will increase efficiency.
Two bass bins side by side will provide +3dB more volume than the same two bass bins separated by a stage (this is equivalent to doubling the power sent to your bass bins). Stacking two more bass bins on top, so you have four in a square, gives +6dB more volume. Why? Together they form a single source; apart, they are discrete sources. As I mentioned in part 3 of this series, bass frequencies and high frequencies behave differently. Bass frequencies are non-directional and hard to localise, and the spherical nature of bass sound waves means that having two sources some distance apart will introduce some phase cancellation in the middle. It's the same as dropping two stones into a pond together or a distance apart. Dropped together they cause larger waves in the water than when dropped apart, when the waves will also collide with each other as they cross in the middle.
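(A quick note on where those numbers come from: each doubling of acoustic power adds 10 × log10(2), which is almost exactly 3dB, so one doubling accounts for the +3dB and two doublings for the +6dB quoted above.)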
2. Run the bass bins in mono. As already mentioned bass frequencies are non-directional and hard to localise, hence they don't contribute to the stereo effect. If you have a bi-amplified system run the amplifier for the bass bins in mono, some amplifiers and electronic crossovers have a switch to enable you to do this. Otherwise just connect the left and right inputs from the crossover to the bass amplifier together. This eliminates possible loss of bass volume due to slight phase variations in bass frequencies between the left and right stereo channels. Keep the signal and the amplifier driving the mid / high cabinets in stereo to maintain the stereo sound where it is required in the upper frequencies.
Copyright © 2000 Gerry Hayden.
The gen on the C and C++ language bindings to the DOS API
Every operating system API has C/C++ language bindings, which make that API accessible to programs written in the C and C++ languages. The OS/2 system API has the <os2.h> header and the os2386.lib link library, for example. The Win32 API has the <windows.h> header and a whole bunch of link libraries such as kernel32.lib, user32.lib, gdi32.lib, and advapi32.lib. The POSIX API has <unistd.h>, <sys/stat.h>, <sys/socket.h>, and a whole bunch of other headers, and the libc link library.
For the DOS API, the C and C++ language bindings comprise the <dos.h>, <io.h>, <direct.h>, and <conio.h> headers, and a link library of wrapper and shim functions that is usually rolled into the implementation's all-in-one "runtime library".
These were supplied by pretty much all DOS-targeting implementations of the C and C++ languages, from Watcom C/C++, through Turbo C/C++ and Microsoft C, to Borland C/C++ for DOS.
There were essentially two classes of functions provided by the C/C++ language bindings: Direct wrappers for the DOS INT 0x21 API itself, that simply took their function parameters and stuck them into the appropriate processor registers before invoking INT 0x21, and "shim" functions that were layered on top of the DOS API, that did further processing to provide semantics that DOS itself did not such as POSIX-style permission flags and "text mode" files (more on which, later).
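To make the distinction concrete, here is a rough sketch in C. It uses the underscore-prefixed spellings discussed later in this article; the exact signatures (and the README.TXT filename) are recalled from Microsoft C and Open Watcom headers rather than taken from this article, so treat them as assumptions to verify against your own compiler's documentation.

    #include <dos.h>    /* direct wrappers such as _dos_open() and _dos_read() */
    #include <fcntl.h>  /* O_RDONLY, O_TEXT */
    #include <io.h>     /* shims such as open(), read() and close() */

    void read_both_ways(void)
    {
        char buffer[128];
        unsigned bytes_read;
        int handle;

        /* Direct wrapper: the parameters go more or less straight into the
           registers for INT 0x21 (AH=0x3D to open, AH=0x3F to read), and the
           bytes arrive exactly as they are on disk. */
        if (_dos_open("README.TXT", O_RDONLY, &handle) == 0) {
            _dos_read(handle, buffer, sizeof buffer, &bytes_read);
            _dos_close(handle);
        }

        /* Shim: layered on top of the same DOS calls, but because the handle
           is opened in O_TEXT mode the library also rewrites CR+LF sequences
           and treats character 26 as end of file. */
        handle = open("README.TXT", O_RDONLY | O_TEXT);
        if (handle != -1) {
            read(handle, buffer, sizeof buffer);
            close(handle);
        }
    }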
The headers
The individual headers provide access to different portions of the DOS API:
File I/O functionality, including:
plus a whole load of shims, more on which later.
Directory manipulation functionality, including:
"Console" I/O functionality, including:
Pretty much everything else, such as (to pick a few examples):
Resemblance to the POSIX API
The C/C++ language bindings to the DOS API were, and still are, often conflated with the POSIX API C language bindings, but they are in fact a wholly different API, that just happens to resemble the POSIX API on a dark night if one squints heavily.
Sometimes, this resemblance was intentional. <io.h>, for example, also declared a whole raft of supposedly POSIX-alike functions, such as open(), chmod(), read(), write(), seek(), and close(). These were shim functions, which internally called the DOS API but placed some mappings atop it, changing POSIX permission flags into DOS file attributes (where possible), and implementing the handling of character 26 and of CR+LF sequences in O_TEXT mode files (which, contrary to popular belief, are not functions of the DOS API itself).
Sometimes this resemblance was a simple consequence of the fact that the DOS API and the POSIX API work exactly the same. INT 0x21/AH=0x3F, for example, has almost exactly the semantics of the POSIX API read() function from <unistd.h>: it is given a buffer pointer, an I/O handle, and a maximum size, and it reads up to that number of bytes from the handle directly into the buffer as-is, without any processing of them, and returns either an error code or the number of bytes read. Thus dos_read() from <dos.h> closely resembles the POSIX read() function.
Sometimes, there was a distinct difference, and not a resemblance at all. The most widely-known such difference is the DOS API mkdir() function from <io.h>, which takes one argument, the string to pass to INT 0x21/AH=0x39. The POSIX API mkdir() function from <sys/stat.h> takes two arguments. And of course, as mentioned, the shim functions in <io.h> that were layered on top of the DOS API itself added a whole load of "text file" processing, neither native to DOS itself nor the same as the POSIX semantics, such as special handling for character 26 and modification of CR+LF sequences. Thus, and ironically, functions like the read() shim from <io.h> were far less similar in operation to the POSIX read() function (from <unistd.h>) than the underlying DOS API dos_read() function from <dos.h> was.
Borland and conio
Originally, the C/C++ language bindings to the DOS API were as above. Then along came Borland.
Borland had to be faster than Microsoft. Its compiler had to compile faster. And programs compiled with it had to run faster. So Borland changed all of the <conio.h> functions. Instead of calling the console I/O API that the operating system actually provided, and being simple wrappers for the DOS API itself, Borland's versions of the functions bypassed DOS and either called the low-level device-specific machine firmware API, or talked to the console hardware directly, peeking and poking video RAM.
kbhit() turned into a firmware call. putch() wrote directly to VRAM and came to know about text window boundaries, scrolling flags, and colours. getche() became putch(getch()). And a whole load of new functions such as settextwindow() were added.
As a consequence of this, it became a Frequently Given Answer to point out that with a Microsoft-compiled program using <conio.h>, one could redirect the standard input and standard output of the program and it would work properly, because the DOS API that the <conio.h> functions called was of course aware of I/O handle redirection; whereas with a Borland-compiled program using <conio.h>, redirecting the standard input and standard output of the program simply wouldn't have any effect.
This was particularly galling to people who wanted to run Borland-compiled programs remotely, on BBSes that they were connected to via terminal emulators. A program that prompted for the user to press a key and then called getch() would work for BBS use if compiled with the Microsoft compiler, since the BBS software could redirect the DOS I/O handles through the serial device and DOS would handle the redirected console I/O in the normal fashion. But the same program if compiled with the Borland compiler would not work for BBS use, since getch() would talk to the firmware directly for keyboard access, rather than go through the redirectable DOS API functions.
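The kind of program in question is tiny. A sketch (the prompt text is made up, and the behaviour noted in the comments is the behaviour described above, not something guaranteed by any standard):

    #include <stdio.h>
    #include <conio.h>

    int main(void)
    {
        printf("Press any key to continue...\n");

        /* Compiled with Microsoft C, this pair of calls goes through the
           (redirectable) DOS API; compiled with Borland or Watcom it talks
           to the keyboard firmware, so a BBS redirecting the program's I/O
           never sees the keystroke. */
        while (!kbhit())
            ;            /* wait until a key is available */
        (void)getch();   /* consume it without echoing */

        return 0;
    }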
Watcom's <conio.h> library followed Borland's, and the same Frequently Given Answer applied to Watcom-compiled programs. Thanks to Borland, the popular wisdom surrounding getch() and its companions changed to the extent that people eventually regarded them as highly hardware-specific, even though they had started off as simple wrappers for DOS API functions that could be redirected and would work with files and with most DOS character devices.
There are various spellings of the DOS API C/C++ language binding function names. In part, these came about because of a confusion as to what implementors of the C and C++ languages should do with library functions that they supplied as standard, but that weren't part of the ISO C and C++ standard libraries. Originally, the function names were unadorned, as above. Later on, the popular belief that everything that was "non-ANSI" should be prefixed with an underscore took hold, and DOS C/C++ implementors renamed their functions to names such as _getche() and _dos_findfirst(). But because by that time there was a significant codebase using the former function names, DOS C/C++ implementations ended up with both forms in their headers, rather making a mockery of the reasons that the underscore convention was supposedly introduced in the first place. (Of course, nowadays, people appreciate far more that an operating system API's language bindings are in essence little different from any other application-mode programming library, and are not necessarily required to be specially marked with underscores.)
The C/C++ language bindings to the DOS system API are also available on compilers that don't target DOS. This is mainly to provide some form of source-compatible upgrade path for applications being ported from MS/PC/DR-DOS to the platforms that the compilers target. In these circumstances all of the functions are shims, layered on top of the native operating system API. On OS/2-targeting implementations, for example, the <conio.h> functions are layered on top of the 16-bit OS/2 VIO/KBD API and the <dos.h> functions are layered on top of the OS/2 Control Program API (i.e. the various DosXYZ() API functions).
The mapping from the DOS API shims to the actual operating system API is usually quite imperfect. For example: on OS/2 and Win32, directory searches have to be closed lest one leak handles. But this is not true for the DOS system API. The DOS API C language bindings only have _dos_findfirst() and _dos_findnext(). As a consequence of this, there's usually either a bodge in the library to attempt to reduce search handle leakage heuristically (as was the approach taken by Borland C/C++ for OS/2) or an API extension providing a new _dos_findclose() function that ported DOS code has to be modified to call (as is the approach taken by Watcom C/C++).
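A minimal sketch of such a search loop, using the Microsoft/Watcom-style names mentioned above (the struct find_t layout, the _A_NORMAL constant and the __WATCOMC__ guard are recalled from those compilers' headers rather than stated here, so verify them against your own <dos.h>):

    #include <stdio.h>
    #include <dos.h>

    void list_files(void)
    {
        struct find_t info;
        unsigned rc = _dos_findfirst("*.*", _A_NORMAL, &info);

        while (rc == 0) {
            printf("%s\n", info.name);
            rc = _dos_findnext(&info);
        }

        /* Under real DOS the loop above is the whole story: there is no
           handle to release.  Watcom's extension, and the reason ported
           code has to be modified, is the extra call below. */
    #ifdef __WATCOMC__
        _dos_findclose(&info);
    #endif
    }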
Interestingly, this is not a problem confined to C/C++ compilers providing a compatibility shim functions for porting DOS programs. It exists in DOS emulators, too. As Microsoft KnowledgeBase article 195930 notes, the Virtual DOS Machine subsystem on Windows NT, NTVDM, has exactly the same problem. It has to map DOS API calls made by DOS programs running within the VDM into Win32 API calls. But it has no way to know when a DOS program has finished with a directory search, unless the entire program terminates, of course. So it gradually leaks directory search handles as it calls FindFirstFile() without later calling FindClose(), until the DOS program eventually exits.
3 definitions by last1picked
the force that pulls a friend away at the beginning of his new relationship rarely losing its grip
often abbreviated as "VV" or symbolically represented by "¤"
frænd 1: I haven't seen todd in a while. he really got pulled in by that vagina vortex.
frænd 2: yeah. he's totally in outer space.
frænd 1: hope he says "hi" to halo and mark for us.
Submitted by last1picked, August 22, 2009
a hairstyle that is a combination, generally of equal parts but not specifically, of an afro, a mullet, and a pompadour.
the only known fromulladour, that is the only one ever captured on film, is university of maryland basketball player bambale osby.
fan #1: yo! osby really schooled 'em out there today bro.
fan #2: he's totally rockin' that fromulladour!
fan #1: word.
fan #2: damn we're white.
fan #1: indeed. let's go to the gap.
Submitted by last1picked, August 2, 2008
to excel at something while barely trying.
used as a result of the lyric: "last week fucked around and got a triple double."
i was totally ice cubeing last night when i was watching a movie while banging your sister; gave her three orgasms and never took my eyes off the tv.
Submitted by last1picked, August 3, 2008
Seminar speaker sign-up
From eebedia
Seminar Speaker: Dr. Emily Lemmon
Institution: Florida State
Web site:
Time and Place: 4 PM in BPB 130
Contact: Chris Simon and Elizabeth Jockusch
Thursday, 5 September 2013
Time Name Room
9:00 a.m.
9:30 a.m.
10:00 a.m. Bernard Goffinet BioPharm 300A
10:30 a.m. Charlie Henry TLS 479/481
11:00 a.m.
11:30 a.m. Informal talk: Anchored phylogenomics: accelerating the resolution of Life. Note- come early-- we have to be out of the room on time becasue a class starts at 12:30 TLS 181
12:30 p.m. Lunch
1:30 p.m. Yang Liu BioPharm 312
2:00 p.m. Kent Wells TLS 380
2:30 p.m. Chris Simon Biopharm 305d
3:00 p.m. Beth Wade Biopharm 323
3:30 p.m. seminar prep
4:00 p.m. Seminar Biology/Physics 130
Friday, 6 September, 2013
Time Name Room
8:00 a.m. Breakfast
9:00 a.m.
9:30 a.m.
10:00 a.m.
A Moment of Science
Electric Eel Powers Christmas Tree
Christmas Tree lit up
Photo: Brent Flanders (flickr)
The Tokyo aquarium's use of eels is an interesting way to supply electricity.
Christmas time means chestnuts roasting on an open fire and a gorgeously lit up Christmas tree, right? Well, at a marine aquarium south of Tokyo, decorations are also going up, but not how you might think!
One decoration, the Christmas tree, is getting a lot of attention. The reason? The electricity that powers the lights is coming from an electric eel.
With each movement, the eel generates 800 watts of electricity. The aquarium has set up two aluminum electrodes inside the eel’s tank. The electrodes capture energy and convert it to power the Christmas tree.
Now, that’s what you call energy star!
Margaret Aprison
Nightcaps: Double Play, a shrine to baseball
A couple talks outside Double Play. Photo: The Chronicle
An unlit marquee marks the entrance to Double Play. Photo: The Chronicle
From the local scene:
• Carl Nolte wins again, penning a nice ode to the Mission’s Double Play Bar, a shrine to baseball that just happens to be across the street from the site of the old Seals Stadium: “The place is full of baseball memories: signs advertising long-vanished businesses, old first basemen’s gloves, a giant bat, scorecards from games played years ago. The crowd was young. None of them had ever been in Seals Stadium, though they heard of the old place.” [San Francisco Chronicle]
• Q&A with Chad Robertson. [Eater National]
• Q&A with Mourad Lahlou. [KQED]
• Look at all the pretty pics of St. George Spirits. [Food GPS]
• How a bartender at 500 Club got a visit from the Google Gestapo. [Fox]
• Chez Panisse vet Tamar Adler files a rant of sorts against Anthony Bourdain: “Anthony Bourdain has turned a sort of belligerent gluttony into a talisman for insecure men. The tragedy of Bourdain is that he once exhibited something better.” The comments in the article are worth a read, too, especially as Bourdain has chimed in there. [The New Yorker, HuffPo]
From the national scene:
• Carrots: SO HOT RIGHT NOW. [New York Times]
• Philadelphia’s iconic Le Bec Fin is going paperless when it comes to menus, swapping in iPads for the traditional route. [Philly Inquirer]
Population Groups
Integration is being looked at as a model with important benefits to a number of segments of the population. This section explores the needs of these population segments and the work that has been done or is being recommended with regard to those groups. Among the population groups to be considered are age groups, gender, insurance status, racial and ethnic groups, rural and urban populations, and veterans.
Thursday, September 14, 2006
They're Not "Fascists"
The time has come to debunk the neoconservatives' favorite propaganda term "Islamo-fascism," and the nuttiness and folly that derives from it. As despicable as fundamentalist Islamic ideology may be, it is not Nazism any more than it is Maoism or Stalinism. Ideologically, culturally, and organizationally, Al Qaeda and the Nazi Party could hardly be more different.
Nazism teaches that everything, including religion, should be an adjunct to the modern bureaucratic-industrial State, with the Fuhrer as the embodiment of the will of the Volk. Militant Islam rejects the State (as it has existed since the Treaty of Westphalia), favoring a society governed entirely by religion, via clerical rulers, and under Sharia law as revealed in the Koran. Islamist societies are organized along clan/tribal lines under a clerical judiciary. Nazism stands for "National Socialism." Radical Islam is neither National (bound inseparably to any State) nor "Socialist." Osama bin Laden is a multi-millionaire semi-capitalist, from a rich and decidedly non-socialist family and society.
The Fuhrerprinzip ("Fuhrer Principle") was an essential doctrine of Nazism. Radical Islam has no equivalent. It operates in decentralized cells, united by religious doctrine, rather than a centralized Party/State apparatus. Even Osama bin Laden is no Fuhrer, as he deferred meekly to Mullah Omar during the Taliban rule, and has no direct hierarchical control over the Islamic militant movement.
Another core doctrine of Nazism is the supremacy of the Aryan Master Race. Osama bin Laden would be hard-pressed to mobilize a single platoon of Tall Blond Brutes, much less a new Waffen SS. He would be utterly doomed if he tried to restrict Al Qaeda membership to non-Semites (Arabs are Semites) of Nordic extraction. Militant Islam transcends race. The movement includes Arabs, Persians (Iranians), blacks (Somalis, Sudanese), Asians (Filipinos, Indonesians), and even a few whites.
Nazism celebrated the State and the collective will of the German Volk. Think of the Nuremburg rallies. Islam arguably doesn't recognize the modern State at all. Apart from secular dictatorships Islamists want to overthrow, the nations of the Muslim world are governed according to sectarian, tribal and personal loyalties, rather than allegiance to the abstraction of the State. It is precisely the State-lessness of the Islamists that makes this a "different kind of war." When was the last time anyone saw an Al-Qaeda army marching in perfect unison under Roman-style standards bearing swastikas, to the sound of a Sieg-Heiling crowd?
Though Nazism taught that women had their "place" (Kirche, Kuche, Kinder--Church, Kitchen, Children), it still possessed a Western, even semi-Pagan acceptance of female sexuality. Have you ever seen that bizarre Nazi film of pretty, but nearly-identical, almost Borg-like young women in miniskirts exercising in perfect unison with hula hoops, showing off their lissome Aryan bodies? Not the sort of thing we'd see broadcast under the Taleban, or on Iranian National Television.
In terms of religion, the Third Reich was a mixture of Christianity and restored Germanic paganism, with the Christianity dominant. Radical Islam is...well...Islamic, isn't it?
The Nazi Party and State were strictly centralized, top-down, hierarchical organizations. Once those centralized institutions were destroyed, the Nazi Party ceased to be a significant force on the world stage. The "Werewolves" (an attempted Nazi insurgency) never amounted to anything in post-WWII Germany. The Islamists have no Party or State apparatus. Even if we were to find and kill Osama bin Laden, fundamentalist Islam would continue to exist, if not get stronger as a result of his "martyrdom." OBL's power is derived from his status as a semi-mythic symbol of their movement, not his direct control of State military forces or Secret Police units.
From beginning to end, Nazism was organized around uniformed paramilitary and military units. Osama has not one single such unit to his name. He has no Brown Shirts, no Wehrmacht, no Luftwaffe, no Kriegsmarine. He has no Gestapo or SS. His forces are all irregulars, and currently control no significant territory.
If we must attempt to turn Islamic Fundamentalists into some other enemy from America's past, there is one they have a lot more in common with, in terms of their organization, equipment, tactics, etc.: The Viet Cong. But then, we have a good reason not to go around calling Osama bin Laden the next Ho Chi Minh, don't we? "Islamo-VC," anyone?
Since calling OBL and his ilk "Islamo-Fascists" is clearly absurd, why do it? Simple: it's propaganda. The Nazis are the one, single enemy in all world history that it's indisputably OK to hate. If we tried calling them "Islamo-Stalinists," there would be those who still think the Worker's Paradise was a good idea, poorly executed, who wouldn't be swayed. Call them Islamo-Kamikazes, and the War Party would be confronted with the internment of the Japanese and the victims of Hiroshima and Nagasaki. Pick any other enemy the U.S. has waged war against, all the way back to the War of Independence ("Oh, come on, those stamp taxes weren't so bad!"), and you can find some people willing to empathize with the other side.
The Nazis though...well, all you have to do to portray Evil in a movie is dress it in a Nazi uniform (e.g. Star Wars, Indiana Jones). We'll overlook the firebombing of Dresden even as we wring hands over Hiroshima because the folks in Dresden were, well, Nazis. If we can slap a swastika on somebody, it's more than OK to kill them, and anyone who appears to support them, or who just happens to live within the blast radius of their hideout.
Furthermore, should anyone ever question the latest war du jour, the Administration can invoke the specter of Neville Chamberlain and label any critics as "appeasers." And so, we get a spectacle of bizarre mutant Hitlers springing up all over the world whenever the US wants to start putting "steel on target."
Slobodan Milosevic, a Slavic "Hitler" whose tiny country was a client state of Russia.
Saddam Hussein, an Arab "Hitler" whose "Fourth Reich" was so poorly equipped, American forces could crush his "Wehrmacht" while suffering fewer casualties than in training operations of comparable scale.
Osama bin Laden, a “Hitler” leading his “Fourth Reich”, decentralized, non-State guerrilla insurgency...from a cave.
Hugo Chavez, whose tin-horn oil-funded socialist regime is no doubt poised to conquer the planet with its vast and technologically superior military-industrial complex.
President Ahmadinejad of Iran, certified nutjob whose virtually figurehead role is certainly a novel application of the Fuhrerprinzip.
Perhaps the only reason the U.S. never called Mohammad Aideed (that Somali warlord they were never able to catch) or the thugs in Rwanda "Hitler" is that A) the American government doesn't seem to care that much about oil-less African countries, and B) even the folks in Texas might not buy the idea of a black "Hitler."
Not one of these odd Boys From Brazil comes close to Adolf Hitler in terms of power or scale of criminality. To label every two-bit thug the U.S. government doesn’t like a new “Hitler” is an insult to the entire World War II generation. To every Londoner who kept a stiff upper lip while huddling in a bomb shelter during the Blitz…to everyone who endured shortages and rationing so that the economic output of their entire nation could be mobilized for the fight…to every man who stormed the beaches of Normandy or fought in the Battle of the Bulge, or faced Rommel in the desert, or the Russians who lost nearly 11 million people fighting the Nazis on the Eastern Front.
Anonymous Moogie said...
A detailed description of the difference between the current "enemy" of the United States and fascism. By chance, have you seen "The Power of Nightmares"? I've only been able to catch it on google video as no one will broadcast it in the US. The documentary seems to discuss the following claim you've made, which I'd like to comment on:
While it seems like propaganda, who is this propaganda aimed at? Is it propaganda in word alone, and if so, what would an appropriate descriptor be? I don't think calling OBL and those who follow his practices 'Radical Islamists' would make a difference for the average American. The more intellectual among us can already see this for what it is. There seems to be more to this than just a name, there is the fear that the current administration has instilled in the American people. This being, that Radical Islam is an organized group capable of mass destruction. This is nothing that is "derived" from the use of 'Islamo-fascism', but rather the image painted with other propaganda tools; like the media, terror alert levels, etc.
2:11 PM
Blogger ninjadroid said...
You're a good writer, and I wish you would keep at it.
2:45 PM
Blogger K. Crady said...
Moogie, I would say that the propaganda is aimed primarily at the American people. By re-casting the "war on terrorism" as World War II, The Sequel, the neocons hope to cash in on the "Good War" aura WWII possesses. It also gives them the chance to tar anyone who opposes their militarism as the next Neville Chamberlain, while they get to play Churchill.
3:40 AM
Squids evolved giant eyes to watch out for sperm whales
The colossal and giant squids that lurk in the ocean depths are truly remarkable creatures. If there's one feature that's really striking, it's their gigantic eyes, which are many times bigger than any other known marine organism's eyes.
Considering the huge discontinuity between the size of these squids' eyes and those of all other oceangoing animals, the natural question is why such eyes evolved in the first place. Now researchers from Sweden's Lund University think they have the answer — the eyes are a necessary early warning system for these squids' only known predator, the sperm whale. Research leader Dan Nilsson explained to BBC News why he and his team set out to solve this mystery:
"We were puzzled initially, because there were no other eyes in the same size range - you can find everything up to the size of an orange, which are in large swordfish. So you find every small size, then there's a huge gap, then there are these two species where the eye is three times as big - even though squid are not the largest animals."
Indeed, for creatures like these huge squid that typically live a mile beneath the ocean surface, big eyes really serve very little purpose. Most creatures down there have developed bioluminescence, which means you don't need particularly large eyes to spot prey.
The one exception is if a really large object is moving towards you — then, such huge eyes could let you spot the approaching beast up to 400 feet away. If a sperm whale is near, these eyes give squid — which, unlike the whale, can't use sonar to detect an approaching adversary — time to take evasive action. The researchers say that this likely also explains the development of similarly big eyes in the prehistoric beast ichthyosaurus, which similarly relied on its huge eyes to spot even larger approaching predators.
Current Biology via BBC News. Mural from Houston Museum of Natural Science; photo by etee on Flickr.
Short for South Korean Telecom1, a Starcraft progaming club. Members and former members include SlayerS'_Boxer, iloveoov, GoRush, Kingdom and Bisu.
It also means SKT1 push in Starcraft: in TvP, the Terran player pushes out with 5-ish marines, a tank, 1 or 2 vultures and research spider mines on the way, against a fast-expanding Protoss.
The purpose of this push is to harass or delay the Protoss's expansion, or force Protoss to build more defenses so that Terran can nullify Protoss's economic advantage or even establish its own.
SKT1 push used to kill every fast-expand Protoss.
by Gothic90, April 27, 2009
IT Answers » Desktop Virtualization Implementation

Applications in Desktop Virtualization (Thu, 08 Apr 2010 20:49:51 +0000)

Have you deployed Windows 7 desktop virtualization? (Tue, 08 Mar 2011 19:28:42 +0000)

Webcast: Simplifying the Complex Desktop Virtualization Project (Sponsored) (Tue, 01 Nov 2011 17:38:33 +0000)

One of the main reasons that we see desktop virtualization projects, VDI specifically, fail is that organizations are quick to consider desktop virtualization the same as server virtualization. This is a critical mistake, and more often than not leads to an unsuccessful implementation, or at the very least one that doesn't live up to expectations.
Attend this webcast, Thursday November 3 at 12:00 PM EDT to learn why server virtualization and desktop virtualization are so different, how you can un-complicate an implementation, and what steps you need to take to help you start your desktop virtualization project today.
Possibilities of office virtualisation (Wed, 11 May 2011 09:21:04 +0000)

Hi there, this may sound like a funny question to the experts, and for that I'm sorry. My question is: is it possible to have 10 desktops in an office on a LAN connection with a single server, where all these desktops are installed "without" an OS and supported with virtual desktops served directly from the server? If so, how? Kindly give your best answer, considering I'm a dummy.
thanks a lot
Open IT Forum: What would sell desktop virtualization to you? (Wed, 19 Jan 2011 20:47:16 +0000)

Last year, Brian Madden talked about the ways to get traditional desktop administrators to start getting their feet wet in desktop virtualization, and we were wondering what it took for you to satisfy your curiosity about the technology. What customized ways have you adopted desktop virtualization? Was it difficult to convince you or your desktop admins?
Share your tales, successes and failures, and we’ll give you 50 knowledge points good for our iPad contest!
Bandwidth requirements for application virtualization vs. desktop virtualization (Thu, 20 Jan 2011 18:38:56 +0000)

Just wondering how the bandwidth requirements vary depending on application virtualization and desktop virtualization. I need to virtualize an entire infrastructure over a WAN (not a LAN), so which is the most ideal option? Any product recs?
Virtual Apps vs. Virtual Desktops (Fri, 14 Jan 2011 20:24:30 +0000)

Enterprise desktop virtualization solution (Thu, 20 Jan 2011 18:43:46 +0000)

Can anyone share their experience with enterprise-wide desktop virtualization implementations? How many workstations were involved? What types of problems/difficulties did you experience? How did you address them? Any ideas as to how to broach the subject with execs would be appreciated as well.
Open IT Forum: Pros & cons of VDI-in-a-box (Thu, 13 Jan 2011 20:22:51 +0000)

In the still maturing technology of desktop virtualization, VDI-in-a-box is heralded as a cost-effective solution for specific applications or users that can reap the benefits of virtualization.
Are you currently deploying or considering deploying VDI-in-a-box? What are the pros & cons you’ve experienced?
The best answers get 50 knowledge points toward our iPad contest!
Open IT Forum: Who is deploying desktop virtualization? (Wed, 05 Jan 2011 17:58:19 +0000)

Are you deploying desktop virtualization in the enterprise? If not, why not? If so, what hardware and software are you using/do you recommend?
2011/08/05 - Jakarta Cactus has been retired.
For more information, please explore the Attic.
Last 15 days web site changes
These are the changes that happened to the Cactus web site for the past 15 days since the last site update (excluding the todo and changes pages which are modified too often):
<no changes>
Release changes
Cactus versions newer than 1.7 (released on ...)
• update Since version 1.7.1 of Cactus all changes are now recorded using Apache's JIRA. The list of changes for an unreleased version is available in the JIRA roadmap report and the released version changes in the JIRA changelog report. (VMA)
Cactus 1.7 (released on 28 Jan 2005)
• update Tested with Orion 1.6.0b, Orion 2.0.4, Tomcat 4.1.31, Tomcat 5.0.29, Resin 2.1.14, Resin 3.0.9, JBoss 3.2.6. (VMA)
• update Added support for Resin 3.0.9 and above. (VMA)
• update Upgraded following dependencies: Commons BeanUtils to 1.7.0, Commons Collections to 3.1, Commons HttpClient to 2.0.2 and Commons Logging to 1.0.4. (VMA)
• add Add attribute jvmArgs in the container tasks. (FAL) Thanks to Matheus Bianconi. Fixes issue CACTUS-158.
• fix The <resin3x> element of the <cactus> task is now correctly using the user-defined port (port attribute). The port was previously hardcoded to 8080. (VMA)
• fix The Cactus Servlet Test Runner now re-initializes the Cactus configuration when it is called the first time (in its init() method). This allows testing several webapps in the same JVM (i.e without restarting the container). (VMA)
• add The <jboss3x> container element now supports running JBoss in a temporary directory, specified by the tmpdir attribute. In addition, by using the configDir attribute, you can now specify a directory where you have stored a custom JBoss server configuration (identified by the config attribute). This configuration will be copied to the tmp directory and used to configure JBoss. (VMA) Fixes issue CACTUS-119.
• fix Cactus was failing with a NullPointerException if the response was not returning any output stream (which happens if response.setStatus(HttpServletResponse.SC_NO_CONTENT) is called for example). (VMA) Thanks to Maxwell Grender-Jones. Fixes issue CACTUS-123.
• fix Fixed "java.lang.NumberFormatException: For input string: "localhost"" error that was happening when using the <cactus> task with JBoss 3.0.8. It was due to the fact that JBoss 3.0.8 does not support the new --server parameter which works with newer versions of JBoss 3.x (VMA) Thanks to Raphael Philipe Mendes da Silva. Fixes issue CACTUS-122.
• add Added new <resin2x> and <resin3x> tasks to start/stop Resin 2.x/3.x instances. (VMA)
• update Building Cactus from the sources now requires Ant 1.6.1+ (Ant 1.6.2 if you're using JDK 5). The Cactus build also been simplified a lot and external libraries are automatically downloaded from ibiblio.org if not present on the file system. (VMA)
• update Ensure faster shutdown times with WebLogic 7.x by using the FORCESHUTDOWN WebLogic command instead of the graceful one. (VMA) Fixes issue CACTUS-120.
• update The JettyTestSetup class now checks if the Jetty server is already started and only starts it if it isn't running. It also does not stops it if it was running before JettyTestSetup was called. This is useful when you have a master test suite and when you also wish to run your tests one by one. (VMA) Fixes issue CACTUS-118.
Cactus 1.6.1 (released on 14 May 2004)
• fix An error was introduced in the Servlet Test Runner during the internal package refactoring that happened in Cactus 1.6. The XMLTransformer could not be loaded and it resulted in a ClassNotFoundException exception. (VMA) Fixes issue CACTUS-107.
Cactus 1.6 (released on 08 May 2004)
• update The ServletTestRunner now looks for an optional cactus.properties file and reads its properties. If not defined it sets default values for the context URL of the Cactus redirectors and for their mappings. (VMA)
• update Updated web site documentation for enabling Cactus logging. (VMA)
• update Due to some internal package restructuration (all non public API were moved to internal packages), the jspredirector.jsp file was modified. If you have installed this file manually somewhere, you'll need to remember to update it. (VMA)
• update Big internal restructuration: we have moved all the non public API classes to java packages with the name internal. For example the package org.apache.cactus.internal is a package containing some internal implementation. You should not use any internal class in your own development as these classes may change at any time in the future. If you find you need access to some internal class, please send an email to the Cactus mailing class explaining the reason and we may open up some API/SPI. (VMA)
• remove Removed org.apache.cactus.util.HttpURLConnection class. It was a Commons HttpClient wrapper on top of the java.net.HttpURLConnection class. However, this class is now fully integrated in the Commons HttpClient jar. (VMA)
• add Added new optional nested <containerclasspath> element for the <cactus> task. It allows specifying additional jars that will be put in the classpath used to start/stop the specified containers. (VMA)
• remove Removed ability to choose different HTTP connection helpers. The only one supported now is the Commons HttpClient one provided internally by Cactus. Thus the cactus.connectionHelper.classname property is now removed. (VMA)
• remove Removed deprecated classes in the org.apache.cactus.ant packages as they have been deprecated for a long time. (VMA)
• fix Fixed bug in HttpServletRequestWrapper.include() where the passed request was not the original request. The problem was only apparent with Tomcat 3.x. (VMA)
• add Added support for Tomcat 3.3.2 (note that this required adding the commons-logging jar to the Tomcat bootstrap classpath). (VMA)
• update In the Form authentication code, changed the response check logic for the pre-authentication step to accept any status code less than 400. It was previously only accepting a 302 code but different servers are implementing it differently. (VMA) Thanks to Kazuhito Suguri.
• add Added new ServletContextWrapper.setInitParameters() which allows to programatically define Context init parameters (as if they had been entered in web.xml using the <context-param> element. (VMA)
• add Added support in the Ant integration webxmlmerge task for merging <context-param> elements. (VMA)
• update Updated the version of Commons HttpClient in the Cactus distribution to 2.0 final. (VMA)
• add Added new FormAuthentication.setExpectedAuthResponse(int) that allows to set the expected HTTP response code for an authentication request which should be successful. If not specified, it defaults to HttpURLConnection.HTTP_MOVED_TEMP. (VMA) Thanks to Kazuhito Suguri.
• add Added new FormAuthentication.setSessionCookieName(String) that allows to set the security cookie name to a name different than JSESSIONID (the default). (VMA) Thanks to Kazuhito Suguri. (This setter and setExpectedAuthResponse() appear in the sketch at the end of this version's change list.)
• fix FormAuthentication no longer assumes "localhost" when adding cookies. (VMA) Thanks to Kazuhito Suguri. Fixes issue CACTUS-37.
• fix Fixed bug in Cactus wrapper implementation of request.getPathTranslated() which was failing when there was no simulated URL defined (i.e. no call to WebRequest.setURL()). (VMA) Thanks to Paul Green.
• update Migrated to Apache license 2.0. (VMA)
• update The <cactus> task now also cleans custom tmp directories (passed using the tmpdir attribute). (VMA) Thanks to Daniel Rabe. Fixes issue CACTUS-79.
• add Added support for specifying which JNDI port to use when shutting down JBoss 3.x in the Cactus Ant integration. The <jboss3x> nested element now supports the new jndiport attribute for specifying the port. If not specified, it defaults to 1099. (VMA) Thanks to James Carpenter. Fixes issue CACTUS-85.
• add Added new encoding HTTP parameter for the Cactus Servlet TestRunner. By default the XML returned (when there is no server-side XSL transformation) is using the UTF-8 encoding. This encoding parameters allows using a user-specified encoding. (VMA)
• fix Fixed request.getRequestURL() (J2EE 1.3 only) which was not working properly when WebRequest.setURL() was called with a null pathinfo parameter. (VMA) Thanks to Scott Leberknight. Fixes issue CACTUS-89.
• add Added support for Resin 3.x in the Ant integration. Only Resin 3.0.5 and above are supported (the reason is that it seems the configuration file format has changed between Resin 3.0.3 and 3.0.5). Note that Servlet API 2.4/JSP 2.0 are not yet supported by Cactus. (VMA)
• add In the Ant integration, added new contextxml attributes to the <tomcat4x> nested elements of the <cactus> task. This is to support the context xml configuration file that appeared with Tomcat 4.1.x. Note that for Tomcat 5.x you need to use the existing nested <conf> element (see the <cactus> task documentation for more details). (VMA)
• fix In the Ant integration, added support for web contexts defined in JBoss's jboss-web.xml. (VMA) Thanks to Brian Topping. Fixes issue CACTUS-84.
• fix Fixed the bug where a ServletException occurs, stating: The request object passed to forward() must be the request object you got from your Cactus test case (i.e. a Cactus request wrapper object). Instead we got [...]. (VMA)
• add In the Ant integration module, added new configXml attribute to the <weblogic7x> container element of the <cactus> task. (VMA) Fixes issue CACTUS-53.
• fix Make the <cactus> task work on Mac OSX by not including the tools.jar file (on Mac OSX all classes are found in classes.jar). (VMA) Thanks to Joe Germuska.
• fix Prevent requiring commons-httpclient jar to be present on the server-side classpath. (VMA) Thanks to Kazuhito Suguri.
• add Added the Maven plugin. It was formerly hosted in the Maven project's own CVS. It is now in the Cactus CVS and is part of the Cactus distribution. (VMA)
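For readers wondering how the two FormAuthentication setters flagged above are meant to be used, here is a rough sketch. The FormAuthentication constructor, its package name and the WebRequest.setAuthentication() call are recalled from the 1.x API rather than stated in this change list, and the user name, password, response code and cookie name are made-up example values, so double-check everything against the Javadoc.

    import org.apache.cactus.ServletTestCase;
    import org.apache.cactus.WebRequest;
    import org.apache.cactus.client.authentication.FormAuthentication;

    public class SecuredPageTest extends ServletTestCase
    {
        public void beginSecuredPage(WebRequest theRequest)
        {
            // Authenticate against the container's form-based login page.
            FormAuthentication auth =
                new FormAuthentication("testuser", "testpassword");

            // New in 1.6: accept something other than the default 302 from a
            // successful authentication request (value here is illustrative).
            auth.setExpectedAuthResponse(200);

            // New in 1.6: the session cookie is not always called JSESSIONID
            // (cookie name here is illustrative).
            auth.setSessionCookieName("MYSESSIONID");

            theRequest.setAuthentication(auth);
        }

        public void testSecuredPage()
        {
            // Server-side assertions against the authenticated request go here.
        }
    }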
Cactus 1.5 (released on 23 November 2003)
• fix In the Cactus/Ant integration, user-defined tomcat-users and web.xml are now correctly replacing the default version provided by Cactus. This is for the Tomcat 4.x and 5.x containers. (VMA)
• fix The Jetty Test Setup was not shutting Jetty down if an error occurred during setUp. (VMA) Thanks to James Stangler. Fixes issue CACTUS-63.
Cactus 1.5-rc1 (released on 26 October 2003)
• fix The Cactus Ant integration now reports the HTTP error code when it fails to start the container and timeouts. It was previously doing this but only in debug mode. (VMA) Thanks to Norbert Pabis. Fixes issue CACTUS-58.
• update Update the version of Commons HttpClient in the Cactus distribution to 2.0 rc2. (VMA)
• fix Fixed and improved logging. It now works with Log4j, JDK 1.4 logging and Commons Simple log. Check the sample servlet application for an example of how to set up Cactus logging. (VMA)
• add Added new optional cactus.logging.config Cactus property. If specified Cactus will load the properties file pointed by this property and will set each property as a system property. (VMA)
• update Updated to use AspectJ 1.1.1. When you upgrade to Cactus 1.5-rc1 make sure you update your aspectjrt.jar to version 1.1.1 (the jar is provided in the Cactus distribution). (VMA)
• add Added an EJB sample application to demonstrate how to perform EJB unit testing with Cactus and how to automate it with the Cactus/Ant integration. (VMA)
• fix When using the new ServletTestSuite wrapper around pure JUnit test cases, the setUp() and tearDown() methods were not called. (VMA) Thanks to Alexander Ananiev.
• fix In the Ant integration, when using the <cactus> task with an EAR, the test webapp context was not correctly set. (VMA) Thanks to Jonathan Kovacs. Fixes issue CACTUS-52.
• fix Fixed the JettyTestSetup class so that it stops the running Jetty server at the end of the test suite execution. (VMA) Thanks to James Stangler. Fixes issue CACTUS-49.
• update The Cactus distribution now packages commons-httpclient 2.0 RC1 (we were previously including version 2.0 beta2). (VMA)
• fix Support for WebLogic 7.x is now working fine. Fixed bugs in the WebLogic 7.x configuration: the weblogic.xml file was not correctly copied in the WEB-INF directory of the cactified war, and an NPE happened when the bea home property was not set. It now tries to guess it from the cactus.home.weblogic7x property if not set. In addition, the configuration has also been greatly simplified. (VMA)
• update In the cactifywar Ant task, changed the default realm name that is used when adding Cactus default configuration data. It was previously Cactus test realm. It is now myrealm. The reas on is that the default WebLogic configuration creates a myrealm realm and thus using this name makes our Cactus configuration for WebLogic much simpler. It doesn't affect the other containers as they do not seem to check for the realm name. (VMA)
• add Added the ability to define the JBoss port that will be used in the Ant integration to poll if the JBoss server is up and running. Note: This value will not modify the port of the default JBoss configuration (it is still 8080). However, it is useful for users who have defined their own JBoss configuration and are using a port other than 8080. (VMA) Thanks to Florin Vancea.
• fix Fixed bug in Eclipse plugin build where the plugin version was not correctly resolved. (VMA) Thanks to Christopher Marshall. Fixes issue CACTUS-47.
Cactus 1.5-beta1 (released on 14 July 2003)
• update Support for WebLogic 7.x has not been tested with Cactus 1.5 and may not work. (VMA)
• update Support for WebLogic 6.x is still available in the Ant integration. However, it has not been tested as it is no longer possible to download WebLogic 6.x from the BEA web site. It is thus completely untested. (VMA)
• update Update of jars bundled in the Cactus distribution: Commons Logging 1.0.3, Log4j 1.2.8, HttpClient 2.0beta2, HttpUnit 1.5.3 and JUnit 3.8.1. In addition the requirements for the jars needed to build Cactus were also updated: Checkstyle 3.1 (and the dependent jars: BeanUtils 1.6.1, Collections 2.1, Regexp 1.2 and Antlr 2.7.2). (VMA)
• add Added a Cactus books section for books covering the Cactus framework. (VMA)
• update Refactoring of XXXTestCase class hierarchy. Whereas it was previously inheriting from AbstractWebTestCase and AbstractTestCase it is now simply inheriting from JUnit TestCase. Thus all non-public API are now not visible from user (as they should be). This has broken binary compatibility. If you had some framework compiled with Cactus 1.4.1 and using some methods from AbstractWebTestCase or AbstractTestCase, you'll need to recompile it with Cactus 1.5. (VMA)
• add Added a HttpServletRequestWrapper.setRemoteUser() method to simulate a remote user. Thus, there is now 2 methods to get a remote user: by simulating it as above or by using real BASIC or Form-based authentication. (CML)
• update Refactored the authentication support by introducing the interface Authentication, which the class AbstractAuthentication now implements. (CML)
• add Added a quick tutorial for Cactus developers who want to set up their Eclipse environment to work on the Cactus plugins for Eclipse. (VMA)
• add The WebResponse class now has a method to directly retrieve the status code. (CML)
• fix When a simulation URL is used and null values are passed for the Server name, Context Path and Servlet Path parameters, calls to Cactus HttpServletRequestWrapper.getServerName(), HttpServletRequestWrapper.getServerPort(), HttpServletRequestWrapper.getContextPath() and HttpServletRequestWrapper.getServletPath() now correctly return the values from the original Request object (and not the wrapped one), handled by the Servlet Redirector. (VMA)
• update Added verification code in Cactus to verify that the parameters passed to the WebRequest.setURL() method have the correct format and throw an exception if not. (VMA)
• add Added a RSS feed for Cactus news. (VMA)
• add The Cactus web site has a new style that is heavily based on CSS. It should provide better printing capabilities and a more consistent look. (CML)
• fix Cactus was not correctly handling the ComparisonFailure exception introduced by JUnit 3.8.1 and these exceptions were reported as errors instead of failures. (VMA) Thanks to Misak Boulatian. Fixes issue CACTUS-22.
• fix Fixed bug in Cactus exception handling where an invalid test result could result in a StringIndexOutOfBoundsException, thus hiding the real problem. (VMA) Thanks to Melissa White.
• add Added a new custom Ant task (WebXmlMerge) that merges the content of two web deployment descriptors into one. That includes the definitions of filters and servlets, as well as some security-related elements. (CML)
• add Allows creating Cactus TestCase without the need for a constructor that takes a String parameter (a default constructor is good enough). Obviously, this feature works only with JUnit 3.8.1 (but Cactus continues to support JUnit 3.7). (VMA)
• add Added ability to add any additional HTTP parameters to the request used by Cactus to the Form-based authentication security URL. (VMA)
• add Added support for running pure JUnit TestCase on the server side using Cactus. This is possible by using a new ServletTestSuite Test Suite (see the sketch at the end of this version's change list). (VMA)
• fix Fixed bug where a simulation URL would be used even when none has been defined (Reminder: a simulation URL is defined by calling WebRequest.setURL()). (VMA) Thanks to Helen Rehn.
• add Added a timeout for the <runservertests> Ant task so that the verification that the container is started is stopped if this timeout is reached (a build exception is raised). (VMA)
• remove Moved some Ant tasks that were previously in the Cactus Ant tasks; they are now in the Ant Integration project. They are the tasks used to start/stop the containers and the runservertests task. It is now recommended to use the Ant Integration. (VMA)
• update Improved (and normalized) build system. This change should not affect nor be visible by Cactus end-users. However, Cactus power users building Cactus from sources will appreciate. (VMA)
• update Improved test class names in the Servlet Sample application. (VMA)
• update The Servlet Sample build is now using the Cactus Ant Integration. (VMA)
• add New Cactus Ant Integration. It provides new custom Ant tasks such as <cactifywar>, <cactus> which makes executing Cactus tests from an Ant build script extremely easy. (CML)
• update Modified the build process to generate the Cactus web site by removing the use of Stylebook and replacing it with an XSL stylesheet. In addition, added several new features: support for subdirectories, support for dynamic menu items and sitemap generation. (VMA)
• fix The Cactus runservertests custom Ant task has been improved and it is now propagating correctly Ant references to the targets you defined for starting the container, stopping it and runnning the Cactus tests. Previously, it was only propagating the Ant properties. (VMA)
• update Added stack trace filtering to the ServletTestRunner. Stack frames in the JUnit framework classes as well as in the Cactus base test case classes are filtered out. (CML)
• add The Cactus web site now provides online documentation for both the CVS HEAD version and the last released version. (VMA)
• fix Enable the ServletTestRunner to run in an environment where it is not allowed to set system properties. In such cases, the cactus.properties configuration file needs to be on the server classpath. (CML)
• update Implemented server-side XSLT transformations in the ServletTestRunner. The code is based on the TraX API but uses reflection to avoid a direct runtime dependancy. (CML)
• add Added automated Ant scripts for JBoss/Jetty 3.x. (VMA)
• fix Fixed bug where users using a Locale which does not format numbers with dots (".") had issues with the JUnitReport Ant XSL stylesheet. The Servlet Test Runner code now forces a US Locale. (CML)
• add Added new extension class to help unit test JSP Taglibs. See the TestJspTagLifecycle test class in the sample-servlet application for help on using it. (VMA) Thanks to Christopher Lenz.
• update Improved error handling when dealing with invalid Cookies. (VMA)
• fix Fixed a potential bug with classloaders. On the server side, Cactus looks for the TestCase class by searching first the WebApp Classloader and then the Context ClassLoader. However, the Context ClassLoader is only searched if an Exception (subclass of Exception) is raised. Thus, if a NoClassDefFoundError had been raised, Cactus would not have searched in the Context ClassLoader. (VMA) Thanks to Roumen B. Antonov.
• add Added support for internationalization (double byte characters) for sending back test results. This allows Cactus to be used with any character set. (VMA) Thanks to Atsushi Hasegawa.
• fix Fixed bug where a redirector overridden by calling WebRequest.setRedirectorName() was not used to fetch the Cactus test result (the default redirector specified in the Cactus configuration was used instead). (VMA) Thanks to Pranab Dhar.
• fix Fixed bug where Cactus was using the deprecated HttpClient PostMethod.setRequestBody(String) which had some bug related to char to byte encoding. Now using the PostMethod.setRequestBody(InputStream) signature. (VMA) Thanks to Stephan Merker.
• add Added links to Japanese and Korean translations of Cactus. (VMA)
• add Ability to get a real HTTP Session cookie before the start of the test. This is achieved by calling the new WebRequest.getSessionCookie() method which returns a HttpSessionCookie object that you then add to the HTTP request. Initially suggested by Kyle W. Willkomm. (VMA)
• add New WebResponse.getCookieIgnoreCase(cookieName) to get the first cookie matching cookieName whatever the case (case-insensitive). (VMA)
• add Added Form-based authentication support. (VMA) Thanks to Jason Robertson.
• add Added a tutorial that explains how to build Cactus from the sources. (VMA)
• add Added a Jetty Sample application to demonstrate how to use the new JettyTestSetup that automatically starts Jetty before a test suite. (VMA)
• add Added a Jetty integration tutorial. (JRU)
• add Added an org.apache.cactus.extension.jetty.JettyTestSetup JUnit TestSetup to automatically start Jetty before a test suite is executed (see the sketch at the end of this list). This is really nice to quickly run tests inside any IDE or even from a simple <junit> Ant task without the need to package and deploy a WAR. In addition, it is really nice to debug tests this way. Moreover, Jetty starts in less than 1 second, making it completely seamless and transparent! We are now at the same order of magnitude as pure JUnit tests in terms of speed ... :-). Of course, this is only for Servlet tests ... I am still waiting for an embeddable EJB container that starts in less than 1 second ... (VMA)
• add Added support for client side begin(...) and end(...) methods. They are called on the client side, before and after every test in the same way as the JUnit setUp() and tearDown() are called before and after each test, but on the server side. (VMA)
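As a rough sketch of how the ServletTestSuite and JettyTestSetup additions flagged above fit together: the constructor and method calls below are recalled from the 1.5 API rather than spelled out in this change list, and MyPlainJUnitTest is a made-up class name, so check the Javadoc before copying.

    import junit.framework.Test;

    import org.apache.cactus.ServletTestSuite;
    import org.apache.cactus.extension.jetty.JettyTestSetup;

    public class AllTests
    {
        public static Test suite()
        {
            // Wrap ordinary JUnit test cases so that they are executed
            // inside the servlet container rather than in the client JVM.
            ServletTestSuite suite = new ServletTestSuite();
            suite.addTestSuite(MyPlainJUnitTest.class);

            // Start Jetty before the suite runs, so no WAR needs to be
            // packaged and deployed by hand.
            return new JettyTestSetup(suite);
        }
    }

Running a suite like this from an IDE or a plain <junit> Ant task then needs no container to be started by hand.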
Cactus 1.4.1 (released on 31 August 2002)
• add Added a sample (and thus a test for Cactus) for HttpUnit integration as part of the Cactus sample. (VMA) Thanks to Hirsch Richard.
• fix Fixed bug in new HttpClient wrapper in Cactus with regards to the headers returned in the HTTP response (they were offset by one and the status line was not returned as a header). (VMA) Thanks to Hirsch Richard.
• fix Fixed bug where NullPointerException would be thrown by Cactus ServletTestRunner if an exception raised in a test case had not specified a message (i.e. getMessage() returning null). (VMA) Thanks to Micah Williams.
• fix Some JDK implementation return "null" when Class.getClassLoader() is called to indicate that the given class has been loaded by the bootstrap class loader. This was leading to NullPointerException being thrown by Cactus in some cases. (VMA) Thanks to Gerhard Kreutzer.
• fix Fixed import bug in sample-servlet which prevented building Cactus from the sources on JDK 1.4. (VMA) Thanks to Ville Skyttä.
• fix Fixed default properties for the sample application so that it points to the correct HttpClient jar which is packaged with Cactus (the one built on 06/06/2002). (VMA) Thanks to Hirsch Richard.
• fix Fixed a bug that was preventing having several POST parameters added in the request in beginXXX() methods. (VMA) Thanks to Larry Tambascio.
• fix The encoding in the sample junit-noframes XSL stylesheet was set to be "US-ASCII". It is now set to "UTF-8". Apparently, XSLT processors are only required to support utf-8 and utf-16, the rest is processor specific. For example your processor might support US-ASCII-7 and others might support US-ASCII. Thanks to Robert Koberg for the explanation! (VMA) Thanks to Dave Hoover.
Cactus 1.4 (released on 25 August 2002)
• add Added a tutorial that explains how to test JSPs with Cactus. (VMA)
• fix Fixed JDK 1.2 compatibility (broken in 1.4b1). (VMA) Thanks to David George.
• fix According to the XML definitions (at least as Mozilla 1.0 implements them), the <?xml version...> tag should go before the <?xml-stylesheet...> tag. Using them in the wrong order produces an error when Mozilla tries to render the page. (VMA) Thanks to Felipe Hoffa.
Cactus 1.4b1 (released on July 31 2002)
• add It is now possible to assert response codes in endXXX() (a sketch follows this change list). For example, you can verify that your servlet has returned a 500 response code. See the tests provided in the Sample Servlet application which is part of the Cactus distribution. This change was possible because we moved the underlying implementation from HttpURLConnection to Jakarta Commons HttpClient. Note: It is still not possible to test a 401 response code (this limitation has been raised with the HttpClient team). (VMA)
• update Refactored internal code to be able to use different HTTP connection implementations. Two are currently provided: one using the JDK HttpURLConnection and one using Jakarta Commons HttpClient (the default). The implementation can be changed by setting the following System property: cactus.connectionHelper.classname = org.apache.cactus.client.JdkConnectionHelper (for the JDK HttpURLConnection); see the configuration sketch after this change list. Note that the Servlet Sample tests that assert response codes will fail with the JDK HttpURLConnection implementation. (VMA)
• add Cactus now requires the Commons Logging library (commons-logging.jar). It is needed because Commons HttpClient now uses Commons Logging for logging and Cactus depends on HttpClient. Cactus also now uses Commons Logging for all its internal logs. This lets us use any underlying logging implementation: Log4j, LogKit, JDK 1.4 Logging, No Logging or SimpleLog (provided with Commons Logging - logs to the console). Check the Config Howto for how to configure logging in Cactus. (VMA)
• remove Removed deprecated org.apache.cactus.ServletTestRequest class (was deprecated in Cactus 1.2). (VMA)
• remove Removed deprecated org.apache.cactus.util.ClientCookie class (was deprecated in Cactus 1.2). (VMA)
• remove Removed deprecated org.apache.cactus.util.AssertUtils class (was deprecated in Cactus 1.2). (VMA)
• add Added automatic script support for Orion 1.6. (VMA)
• fix Ant scripts for Resin, Orion, Tomcat 3.2.4, WebLogic 6.1 and WebLogic 7.0 now correctly configured for Cactus BASIC authentication tests. (VMA)
• add Added automatic script support for WebLogic 7.0. (VMA)
• fix It seems the test result may contain an end-of-line character, and the Cactus WebTestResultParser was choking on it and treating the returned result as invalid. This has been fixed. (VMA) Thanks to Daniel Dennison. Fixes issue CACTUS-22.
• add Added Test Coverage Reports as part of the Web Site generation. (VMA)
• update Renamed all external jars used in Cactus by suffixing them with their versions. This is so that Cactus users will know exactly what jars Cactus is packaging. You are of course free to use your own jars; Cactus only packages these jars for your convenience. Cactus now packages Log4j 1.2.3, AspectJ Runtime 1.0.4, JUnit 3.7, HttpClient 2.0alpha1 built on 6/6/2002, HttpUnit 1.4 and (new) the Servlet API 2.2 and 2.3 jars. The Cactus jars have also been renamed to include the version number in their names. (VMA)
• fix The Cactus Sample can be built using Ant 1.4 (support for Ant 1.4 was broken by a line introduced in Cactus 1.3 that only works with Ant 1.5; that line can easily be commented out). (VMA)
• fix Modified the jspRedirector.jsp so that it initializes an HTTP Session (session="true"). There is no way I know of to make this parameter dynamic, so we set it to true as this is the most common case. If one of your tests must not have a session created for it, you can always use the redirector overriding feature (WebRequest.setRedirectorName(String redirectorName)). (VMA) Thanks to Marc Brette. Fixes issue CACTUS-21.
• fix Fixed bug where the Test Result object which is put in the Servlet Context was not serializable. This might cause some trouble with some containers. (VMA) Thanks to Patrick Lightbody.
• add Added simulation of Remote IP address and Remote Host Name, i.e. you can now control what request.getRemoteAddr() and request.getRemoteHost() will return. That is useful if your code depends on these values. (VMA) Thanks to Marc Brette.
• update It is now possible to specify the Cactus properties as System properties (the property names are the same as the ones in cactus.properties); see the configuration sketch after this change list. Also, if not specified, redirector names now default to "ServletRedirector", "JspRedirector" and "FilterRedirector". (VMA)
• add New Quick start tutorial that explains how to run Cactus tests quickly in Tomcat. (VMA)
• add A cool new way to quickly execute your test cases: Cactus now has a JUnit Test Runner called ServletTestRunner (it is a servlet) that you start from your browser. See the TestRunner Howto tutorial. (VMA)
• fix Test classes are now first looked for using the current class loader (the webapp one for Servlets) and, if not found, using the context class loader. Previously the order was reversed, which was not logical and could lead to issues. (VMA)
• update Improved debugging of the runservertests task. Simply run Ant in debug mode (ant -debug xxx) and the task will print information. Very useful for finding out why the runservertests task seems to hang after starting your server ... (VMA)
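A sketch of the response-code assertion mentioned in the endXXX() entry above, assuming WebResponse exposes the code via a getStatusCode() accessor (named here from memory; verify against the javadocs):

```java
import org.apache.cactus.ServletTestCase;
import org.apache.cactus.WebResponse;

public class ErrorCodeTest extends ServletTestCase
{
    public ErrorCodeTest(String theName)
    {
        super(theName);
    }

    // Server-side: make the code under test report a failure.
    public void testInternalError() throws Exception
    {
        response.sendError(500);
    }

    // Client-side: with the HttpClient-based transport the real status
    // code is now observable and can be asserted.
    public void endInternalError(WebResponse theResponse)
    {
        assertEquals(500, theResponse.getStatusCode());
    }
}
```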
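And a configuration sketch tying together the connection-helper and System-property entries above. Apart from cactus.connectionHelper.classname, which is quoted verbatim in the change list, the property names below are recalled from the Cactus documentation and should be treated as assumptions; each entry can equally be passed as a -D System property instead of living in cactus.properties:

```
# cactus.properties (or pass each entry as -Dname=value to the JVM running the tests)

# Where the redirectors are deployed (assumed property name)
cactus.contextURL = http://localhost:8080/test

# Redirector names; these are now the defaults when omitted
cactus.servletRedirectorName = ServletRedirector
cactus.jspRedirectorName = JspRedirector
cactus.filterRedirectorName = FilterRedirector

# Optional: switch the HTTP transport back to the JDK implementation
cactus.connectionHelper.classname = org.apache.cactus.client.JdkConnectionHelper
```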
|
global_05_local_4_shard_00000656_processed.jsonl/84242
|
GM Recalls 36,413 Police Impalas Over Serious Crash Risk
After receiving complaints from several police departments that lower control arms had cracked on a number of 2008-2012 police spec Impalas, General Motors pulled the trigger on a recall, offering to replace the lower control arms on more than 36,000 vehicles free of charge.
The suspension parts used in police interceptors are different from those on civilian Impalas, so only police departments have been affected by the recall, which will begin Aug. 21.
The National Highway Traffic Safety Administration found that the cracks, which generally occurred near a bushing sleeve on the lower control arm, caused a significant increase in crash risk and, you know, the chance of... fiery death.
We would guess so, especially considering that police cars are a bit more likely than their civilian counterparts to be involved in high speed pursuit and offroad scenarios.
All of the cars were manufactured at GM's plant in Oshawa, Ontario. We're not sure that has anything to do with anything, but we're glad they'll be cranking out an all new Impala soon.
Photo credit: General Motors
|
global_05_local_4_shard_00000656_processed.jsonl/84243
|
MotorWeek says this car "doesn't appear out of proportion." That makes me wonder if they know the definition of the word proportion.
Apparently the GT does everything needed to be considered a BMW, in fact, John Davis says that it has earned its stripes. What that apparently means is that it's good in a slalom and decent on the brakes too.
But let's go back to the styling. Can we go back to the styling? I wanna go back to the styling. This thing doesn't look good. I'm sorry BMW, but it doesn't. It's terrifying looking. To say it's in proportion is akin to saying Carlos Mencia is hilarious.
It just ain't true.
|
global_05_local_4_shard_00000656_processed.jsonl/84262
|
Monday, April 23, 2012
Footage of SuperFrankenstein (now Mustang) stealing show at "Juju Rules" book release party
At right is Bern Baby Burn.
MUSTANG said...
Love that video! You can really see how much fun we had.
By the way... I don't want to sound like a "Correcting Clarence" or a "Know-It-All Ned," but MUSTANG should be written in ALL CAPS. If el duque had the powers of observation God gave to people who aren't reporters, he would see that I wrote it that way in the earlier posts.
Know this: when MUSTANG does anything--even something as seemingly trivial as writing his name in ALL CAPS--he does it for a reason.
This has been MUSTANG. Stay real!
Niel Anblowme said...
a jerk by any other name is...
|
global_05_local_4_shard_00000656_processed.jsonl/84273
|
uk numbers for your business
0800 numbers
There is no need to set up a new phone line.
You can keep your existing number, and callers will not notice that calls to your business are being redirected.
You can be sent a full report daily. This lets you track the effectiveness of your business, showing every caller's number, call duration and total call volume. You can also view your call statistics live online in a password-protected report.
How it works:
When you order your 0800 freephone number, you name a "target" number. This is the number to which all calls to your 0800 number will be diverted.
The "target" number can be a landline, a mobile or even an international number. For example, if you run your business from Spain, you can still advertise your 0800 number in the UK and take the calls abroad.
When a customer calls your 0800 number, they are immediately redirected to your chosen target number. You still receive Caller Line Identification (CLI), so you can see exactly which number they are calling from.
It doesn't matter where the inbound calls come from (mobile, international and so on); you only pay the standard inbound diversion rate.
A final cost-saving tip: there is no need to pay extra for a "memorable" number. The fact that you have an 0800 number is usually enough to encourage customers to call you before they call your competitors.
|
global_05_local_4_shard_00000656_processed.jsonl/84293
|
Choqok is a micro-blogging client which supports the following services and features:
• Supporting the Twitter.com micro-blogging service
• Supporting the Identi.ca micro-blogging service (using its Twitter-compatible API)
• Supporting self hosted Laconica websites (Using its Twitter compatible API)
• Supporting User + His/Her Friends time-lines
• Supporting @Reply time-lines
• Support for send and receive direct messages
• integration
• Supporting Multiple Accounts simultaneously
• Supporting search APIs for all services
• KWallet integration
• Ability to make a quick tweet with global shortcuts (Ctrl+Meta+T)
• Ability to notify user about new statuses text, with KNotification or Libnotify
• Support for shortening urls with more than 30 characters (shorten on paste)
• Support for configuring status lists appearance
Developed By
Thanks To:
|
global_05_local_4_shard_00000656_processed.jsonl/84296
|
swatch: streetwear fx flash
amazing chameleon-like glitter that, depending on the lighting and angle, flashes vivid medium green, spring green/gold, copper, and on rare occasions, blue. here is just one coat of fx flash over one coat of wet 'n' wild black creme.
excuse the jacked-up condition of my nails and mani; this is after three days of wear and neglect!
1. YESSSS. I luuurrrve your swatches on MUA. So Glad I found ur blog!
P.S. That glitter is sweet!
2. thanks so much, guys! i only wish my nails weren't so misshapen in these pics, haha. now that i have this blog, i suppose i can no longer get away with neglecting my nails!
3. This is so beautiful. I love that glittery look.
Related Posts Plugin for WordPress, Blogger...
|
global_05_local_4_shard_00000656_processed.jsonl/84299
|
Greenville Police
10:04 am
Tue October 29, 2013
Greenville PD searching for missing mother, two children
Rachel Morrison
Herald Banner
The Greenville Police Department is searching this morning for a mother and two children who were reported missing by her husband, the children's father.
According to a release from the police department, Jessie Patricia Henry Morrison, the mother, is an Alaskan Native with brown hair and black eyes. She is 31 years old, 5' tall and weighs approximately 120 lbs.
Morrison's two children are reported to be with her. Her son, Jaden Morrison, will be 2 years old in November, and her daughter, Rachel, is 11 months old.
Read more
|
global_05_local_4_shard_00000656_processed.jsonl/84325
|
Ever wonder what it looks like to pull awesome stunts (like backflips!) on your bike while navigating treacherous terrain? If you were actually there, I mean—not if you were playing a game like Trials or something. Well, thanks to Kelly McGarry, you can see what it's like.
And god, does mountain biking like this seem like something I'd never want to do. I'll stick to the sidewalk, thanks.
In case you're curious: this was performed at the Red Bull Rampage this year, a downhill mountain biking competition. McGarry came in second, which really just makes you wonder what came in first. Lord.
And...is it just me projecting, or does he sound a little scared at the start of this video, before he hits his groove? Not that I would blame him, of course! It IS terrifying.
GoPro: Backflip Over 72ft Canyon - Kelly McGarry Red Bull Rampage 2013 [GoProCamera]
|
global_05_local_4_shard_00000656_processed.jsonl/84355
|
Friday, April 11, 2008
What are we set up to see?
If you haven't seen this before, I challenge you to take this short test.
Here's the deal. Watch the video and count how many passes the white team makes.
Test your awareness.
What did you see? After you watch, be sure and check out the questions below.
1) What did you see?
2) What did you miss?
3) Why?
4) What does this brief experiment teach you?
Becky said...
I had to watch about 5 times, even after knowing to look for the bear, before I saw it. So, I am not sure what that means....
Anonymous said...
I only counted 9 of the 13 passes, and had to watch twice to find the bear even knowing it was there, but I, too, am not sure what that means except that my powers of visual observation are obviously questionable. (Very effective ad for its intended purpose, though, the bear being a stand-in for a biker in traffic.)
Eric Livingston said...
1. I saw all 13 passes.
2. I missed the bear.
3. I missed the bear, because I was so intent on getting the answer right that I was determined to not let the white team's ball out of my sight.
4. I sometimes miss the big picture. Especially when I'm trying to make sure I'm right.
Larry James said...
The first time I saw this the "other figure" was a gorilla and the screen was the size of a wall. I missed seeing anything but the white team passing. The size of a PC screen diminishes the impact a bit.
Leatherwing said...
Like Eric, I saw all 13 passes, but never saw the bear.
Obviously, being told to look for one thing can cause you to miss other things.
|
global_05_local_4_shard_00000656_processed.jsonl/84360
|
Workshops & Presentations
Creating Exam-Targeted Course Summaries
A course summary (often referred to as an "outline" by law students and professors) is a personal compilation of the essentials of a course.
This presentation stresses that the generative process of outline production is more important than the product produced, and explains why no other student's outline, or any commercial summary can possibly take the place of a self-produced product. Students are taught how, why and when to produce course summaries.
Manage Your Life-Time
Students are introduced to the concept that the higher orders of thinking associated with lawyering demand focus and concentration - and that to achieve the life/time balance essential for maximum focus and concentration, law students and lawyers need to exercise their executive management capabilities to the utmost.
This means aggressive assertion of total control over their most personal asset - their lifetime. During this presentation, practical solutions are offered for the seeming conundrum of "not enough time" experienced by most beginning law students.
Essay Exam Answering Workshops:
Attend one each week for six weeks
Step-by-step, students learn how to answer law school essay examinations. Week one introduces the elemental skills, which are built upon each week.
Working through all six workshops, followed by attendance at Powerful Exam Answering sessions will provide the essential information and methods law students need to perform at their personal best levels during finals.
Fortify Your Learning: Developing Dynamic Flowcharts
The most powerful learning tool for many students is the self-created flowchart. This graphic organizational "mind map" guides you through exam-targeted analysis structures.
The flowchart's cousin, the text-based "skeletal outline," is preferred by other students. Learn the whys and hows of both - including the use of state-of-the-art software programs.
Simulated Exams: Contracts and Torts
In the words of Joseph Glannon, Professor of Law at Suffolk University, and author of The Law of Torts: Examples and Explanations, "The best way to prepare for your law exams, once you have mastered the basic legal rules, is to take some law exams.
There's a big difference between reading about chess and playing the game. If you were going to a chess tournament, you would prepare by playing a lot of chess.
Similarly, there's a big difference between learning legal rules and using them effectively to answer an essay question. If you want to develop a facility for clearly applying the law you have studied to new facts, the best way to do it is to practice at it."
These November simulations provide students with opportunities to take exams under actual examination conditions - examinations questions written by their own professors.
A powerful addition to each student's own individual exam-answering study component, these simulations "take the edge off," reduce exam anxiety, and provide a sound basis for discovery of a student's exam readiness.
|
global_05_local_4_shard_00000656_processed.jsonl/84370
|
Archived posting to the Leica Users Group, 2004/05/06
Subject: [Leica] C8080
From: bdcolen at (B. D. Colen)
Date: Thu May 6 08:43:47 2004
It really is...But of course it helps to have something to brace
-----Original Message-----
[] On Behalf Of
Slobodan Dimitrov
Sent: Thursday, May 06, 2004 11:40 AM
To: Leica Users Group
Subject: Re: [Leica] C8080
Yet another plus for digital!
S. Dimitrov
> From: "B. D. Colen"
>....I do find, however, that it's possible to hold some of the digitals
> at pretty low shutter speeds.....
Leica Users Group.
See for more information
Replies: Reply from bdcolen at (B. D. Colen) ([Leica] C8080)
In reply to: Message from s.dimitrov at (Slobodan Dimitrov) ([Leica] C8080)
|
global_05_local_4_shard_00000656_processed.jsonl/84378
|
African American Studies
From Ursula C. Schwerin Library Subject Guides
Subject Specialist in African American Studies
African American Studies Department
Getting Started with African American Studies Research
This guide will help you get started with research in African American Studies at the City Tech Library and beyond.
The library's online databases provide access to articles and images in journals, magazines, newspaper and reference sources. Some of the most useful databases for research in African American Studies are:
Journals and Magazines
To find a specific journal or magazine:
Reference Resources
The library has many reference books like encyclopedias, dictionaries and atlases that are useful for African American Studies research. You can search for reference books by keyword or subject in the library catalog.
Here are a few popular reference books for African American Studies:
Finding Books and More in the Library
Searching with Subject Headings
You can also search by subject from the Find Books tab on the Library website. Here's a sample of relevant subject headings for African American Studies research:
Internet Resources
Content Websites
Arts & Letters
(From the introduction written by Howard Dodson, Chief, Schomburg Center for Research in Black Culture, The New York Public Library:)
• African-American Woman: Online Archival Collections: "On-line archival collections featuring scanned pages and texts of the writings of African-American women. Includes the memoirs of Elizabeth Johnson Harris (1867-1942), an 1857 letter from Vilet Lester, a slave on a North Carolina plantation, and several letters from Hannah Valentine and Lethe Jackson, slaves on the estate of David Campbell, a governor of Virginia." (Special Collections Library, Duke University)
• Documenting the American South: "Documenting the American South (DAS)" is a collection of sources on Southern history, literature and culture from the colonial period through the first decades of the 20th century. It is organized into the projects listed above. The next one, now in the planning stage, will feature North Caroliniana. The Academic Affairs Library at the University of North Carolina at Chapel Hill sponsors DAS, and the texts come primarily from its Southern holdings. An editorial board guides its development. (UNC Chapel Hill)
Martin Luther King
African Americans in Science
Links to guides and lists for other websites:
Professional Organizations
|
global_05_local_4_shard_00000656_processed.jsonl/84386
|
Take your PowerPoint slides Beyond Bullet Points
Popular keynote speaker Clif Atkinson says that bullet points kill PowerPoint presentations. On a friend's recommendation, I picked up Atkinson's book, Beyond Bullet Points, in preparation for an hour-long presentation I had to give this past weekend. His bullet-free story technique made a huge difference in my slides. Why no bullet points? Atkinson says:
Bullet points on a screen make information harder to understand, not easier.
The core purpose of communication is to cohere: to coalesce fragments of information back together into a single understanding. That's the most difficult task of communicating. And it's actually the origin of the word communication: to "make common", or to bring together.
Bullet points can do many things, but they do not cohere information. In fact, they do the opposite—they fragment understanding into little pieces. Break any topic into a title, sub-headings and bullet points, and you're de-communicating, because you're not helping to bring a single idea together.
Instead, Atkinson says you should craft a story to tell in your presentation - much like a Hollywood director scripts a film. As someone who never uses PowerPoint and had much anxiety over delivering this presentation, I can't recommend Atkinson's book enough. He explains the story formula clearly and provides downloadable templates at his site for scripting your message. When you use his method, your presentation has structure and flow that a bunch of slides with bullets just won't have. This one's required reading for anyone who presents with PowerPoint (or Keynote, for that matter). Atkinson also has a blog with several articles that can give you a taste of the book's message. Thanks, Matt!
|
global_05_local_4_shard_00000656_processed.jsonl/84387
|
Convert and Clip Web Text to Evernote with One Click
The Readability bookmarklet strips down and reformats text for focused reading. Note organizer Evernote offers its own bookmarklet to grab text from any page. Marry the two with JavaScript, and you've got elegant, streamlined web notes.
Evernote offers an option to clip just the text you've highlighted on any web page, but that's not always convenient—or even possible, given the freakish ad layouts of some sites. With the combined power of Readability and Evernote, hitting a single bookmarklet converts the page into Readability's eye-pleasing, text-only format, then quickly pops out Evernote to grab everything on the page—which is, of course, only text. It's pretty smooth and painless on most sites, including our own, which previously had a few hang-ups with Readability (and, to be fair, its kin as well).
To install the combined bookmarklet, copy the code from the post linked below, create a new bookmark in your browser (preferably on the bookmark/links bar), then paste the copied code into the "Location" or "Address" field. Name it what you'd like—we went with Readable EverClip—and enjoy a somewhat smoother text capture process.
|
global_05_local_4_shard_00000656_processed.jsonl/84388
|
Get Rid of Dark Under-Eye Circles and Puffiness
Whether you had a late night out or just a late night, those dark circles under your eyes are casting you in a less than pretty light. Here are some quick tips to look better, with absolutely no cucumbers involved.
Photo by //amy//.
DIY weblog wikiHow lists some ways to do away with dark circles. Among them is to take a wet washcloth—make sure to wring out any excess water—then position it over your eyes. You might also try wetting and freezing a cotton swab, then gently wiping under the affected areas. Keep your eyes closed when applying, and do your best not to flinch.
Manual massaging not doing the trick? The culprit might be too much salt in your diet.
Hit up the full post for ten more tips, and let us know what you do to get rid of dark circles (if you do). And while the eyes have it, check out previously mentioned Windows application Eye Relax to help remind you when to give your scanners a much-needed break.
|
global_05_local_4_shard_00000656_processed.jsonl/84389
|
Android: Android has a lot of great to-do apps, but few of them are as easy to use as Any.DO. You can add tasks by voice, manage them with simple gestures, and even use its predictive features to instantly add common tasks to your to-do list.
The biggest problem with to-do list apps is that you often spend more time managing your to-dos than actually doing them. Any.DO makes adding and managing tasks a breeze, so you can get to work faster. Adding a task is as simple as typing in a word or two (it can predict common tasks like "Go Grocery Shopping", so you only need to type a few letters) or using voice commands. You can then drag and drop tasks to a different day, swipe to complete, and even shake the phone to clear all completed tasks (probably the first time I've ever used a shake gesture and didn't think it was gimmicky).
It also has a few more advanced features, like sharing, reminders, folders, and priorities. Right now, it can only sync with Google Tasks, but the app's Market page notes that synchronization with other services—like Remember the Milk, Producteev, Springpad, Outlook, and more—are on the roadmap for future versions. If you're still unhappy with your current to-do app, I can't recommend it highly enough.
Any.DO is a free download for Android only.
Any.DO | Android Market
|
global_05_local_4_shard_00000656_processed.jsonl/84404
|
Sep 192010
WebDAV has always been one of my favorite protocols, because it makes it easy to share a file system and to give users permission to upload files. On the other hand, using it from Linux has always been a thorn in my side, while on Windows it works completely integrated with the OS.
What’s WebDAV ?
Web-based Distributed Authoring and Versioning (WebDAV) is a set of methods based on the Hypertext Transfer Protocol (HTTP) that facilitates collaboration between users in editing and managing documents and files stored on World Wide Web servers. WebDAV was defined in RFC 4918 by a working group of the Internet Engineering Task Force (IETF).
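For what it's worth, one common way to get a Windows-like "mapped drive" feel on Linux is davfs2. A minimal sketch, assuming davfs2 is installed and using a placeholder URL and mount point:

```
# Mount a WebDAV share so it behaves like a local directory
sudo mount -t davfs https://example.com/webdav /mnt/dav

# ... browse and upload files as usual, then unmount
sudo umount /mnt/dav
```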
Continue reading »
flattr this!
|
global_05_local_4_shard_00000656_processed.jsonl/84406
|
[conspire] Re: RHL 9 Install problems
Rick Moen rick at linuxmafia.com
Thu Jul 10 14:54:32 PDT 2003
Quoting Sean Neakums (sneakums at zork.net):
> In all likelihood, this is because the LNX-BBC will have mounted all
> of the filesystems it could find, read-only, under /mnt/ro. umount
> /dev/hda2 won't work because the BBC uses devfs. Probably the OP
> followed your advice to check the output of mount and successfully got
> it umounted before running fsck.ext2, however.
Er, right, thanks! Forgot about the devfs angle.
Rick Moen Age, baro, fac ut gaudeam.
rick at linuxmafia.com
More information about the conspire mailing list
|
global_05_local_4_shard_00000656_processed.jsonl/84408
|
Torrent’s details
Moderation ok
Torrent Vesta-05-13.iso
Forum /index.php?page=forum&action=viewtopic&topicid=4173
Magnet Link Magnet Link
Info Hash c6d25f01dd514de15833e244a1524d22875288f3
Who thanks
It is built on gcc (vesta) 4.8.0/4.9.0, Glibc 2.17 and OpenJDK 8, which were themselves assembled on Vesta's previous version. The kernel, kernel modules and the base system are loaded into RAM, which gives a fairly decent working speed.
The system's functionality is defined by the presence or absence of applications (EXTends) in the /EXT directory of the boot device. EXTends are pieces of a file system (squashfs, isofs or others) that are mounted on demand into /opt/EXT/ as loopback devices (roughly as sketched below). This makes it possible to pack a distribution with only the EXTends you need; missing EXTends can be downloaded from the Internet while the system is running.
Whenever possible, an EXTend's dependencies are kept in its own /lib directory.
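Roughly what happens under the hood for each EXTend, as a sketch with made-up file names (a squashfs image mounted loopback into /opt/EXT/):

```
# Illustrative only: mount one EXTend image from the boot device
mkdir -p /opt/EXT/myapp
mount -o loop -t squashfs /EXT/myapp.sqfs /opt/EXT/myapp
```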
Category linuxplus
Home Page
Support Forums
• Currently 4.00/5
Size 4.34 GB
Show/Hide Files
1 file
AddDate 01/05/2013
Uploader ida
Speed 0 KB/sec
Down 21 times
peers seeds: 1, leechers: 0 = 1 peers
Similar torrents
Vesta-05-13-desktop.iso01/05/20133.47 GB10
No comments...
|
global_05_local_4_shard_00000656_processed.jsonl/84428
|
From: Russ Allbery
Date: Sat, 08 Nov 2008 13:15:21 -0800
Roumen Petrov <address@hidden> writes:
> It was old build bug when building readline library on some linux-es. In
> my memory is suse 7.1 but I'm sure that only this particular version was
> affected.
> Many other linux verdors build readline without dependent libraries and
> this allow application to be linked against different curses compatible
> libraries.
libreadline is linked against libncurses on Debian.
But surely it's obvious that this isn't an interesting argument and has
nothing to do with my point? It may be that my specific example doesn't
apply on the system that you're looking at right now, but I'm sure that
you can find dozens or hundreds of others without even trying. Any shared
library that is linked with other shared libraries and is built with
libtool can present this problem.
The best practice for distribution-packaged shared libraries and binaries
is that they should only be linked against shared libraries whose ABIs
they use directly. They should never be linked against shared libraries
that they use only indirectly, since doing so adds unnecessary
dependencies and unnecessary rebuild work when the SONAMEs of those
additional shared libraries change. The same issue applies to any large
local software installation.
libtool does not follow this best practice unless you delete the installed
*.la files or use --as-needed (which as a linker flag doesn't seem to be
reliable or robust as yet -- I do apologize if --as-needed referred to
some libtool-specific feature I didn't know about instead of the GNU ld
flag). One of the problems with the GNU ld --as-needed flag is that it
applies indiscriminately to all linked libraries, even ones that the
application maintainer added explicitly (rather than being added
implicitly by libtool), and sometimes does the wrong thing with libraries
that are actually needed.
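For readers unfamiliar with the flag being discussed, a small illustration of GNU ld's --as-needed as passed through gcc (library names are made up):

```
# Over-linked: libbar lands in the binary's NEEDED list even if the
# application only ever calls into libfoo.
gcc -o app app.o -lfoo -lbar

# With --as-needed, ld records only libraries whose symbols are actually
# referenced -- but note it applies to everything after it on the line,
# including libraries the maintainer listed deliberately.
gcc -Wl,--as-needed -o app app.o -lfoo -lbar
```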
The desired behavior of libtool from a distribution perspective would be
to not include dependency libraries from the *.la file in the link on
platforms known to have proper transitive dependency support unless a
static link was requested. (There would need to be a flag to override
this for the unusual cases where this is required; there are some edge
cases where it's needed, usually involving things like weak symbols or
other corner cases.)
Russ Allbery (address@hidden) <>
|
global_05_local_4_shard_00000656_processed.jsonl/84430
|
List: General Discussion
From: Thimble Smith   Date: March 19 1999 7:09pm
Subject:adding variables to 'mysql' program
Hi. I've got an idea that it would be nice (and not too hard) to
add some notion of variables to the 'mysql' program. Then some
operations that have to be done in >2 queries would be a whole
lot easier to perform. It would obviate the need for writing a
throw-away script for one-time database fixes.
For example, I do one query that returns a list of IDs, and another
query to DELETE FROM table WHERE table_id IN (${stored list of ids}).
I know that sub-selects will make this example obselete, but does
something like this sound like a good idea? Does anyone have good
ideas about the syntax to use? If there is enough interest (and I
can figure out a good way to do it), I'll try to implement it (no
guarantees on time, though, 'cause <insert your favorite excuse>).
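For the record, later MySQL releases grew server-side user variables and GROUP_CONCAT(), which cover part of this use case. A rough sketch of the two-step workflow described above, with invented table and column names:

```sql
-- Collect the IDs once ...
SELECT @ids := GROUP_CONCAT(table_id)
  FROM old_rows
 WHERE created < '1999-01-01';

-- ... then reuse them in the follow-up statement. FIND_IN_SET is used
-- because @ids holds a comma-separated string, not a real list.
DELETE FROM related_table
 WHERE FIND_IN_SET(table_id, @ids);
```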
adding variables to 'mysql' program - Thimble Smith - 19 Mar
• adding variables to 'mysql' program - Michael Widenius - 21 Mar
|