global_05_local_5_shard_00000035_processed.jsonl/23358
Ry Cooder Uses iTunes to Master his New Album Ry Cooder, prolific solo artist and talented session musician, was having a lot of trouble getting his new album to sound right. Then, by accident, he discovered the magic method to mastering his record and making it sound awesome: iTunes. By burning it with the "sound enhancer" accidentally turned on, Cooder got his album sounding exactly how he wanted it. That ended up being the only effect he used on the final mixes of the songs. NY Times [via Collision Detection]
global_05_local_5_shard_00000035_processed.jsonl/23359
Male Chastity Belt Preferable to Eunuchizing We're not sure what to say about this stainless steel male chastity belt other than the fact that we want zero part of that. There's a hole in the front for your junk to go, as well as a hole in the back for the stinkier junk to drop out, and the whole thing is locked with a key you (hopefully) never lose. How long can this be worn? "The experiences of my customers are completely different. They range from weekend use to the continuous carrier." At least it beats getting your muchachos cut off forever. [Latowski via Nerd Approved]
global_05_local_5_shard_00000035_processed.jsonl/23361
Energy Saver One-For-All Remote Waves Bye-Bye to Standby There are a few gadgets out there that try to reduce your energy consumption by switching off all your gizmos properly, but none perhaps as convenient as this new Energy Saver Universal Remote from One-for-All. It's a four-in-one device that reduces your collection of remote controls to just one, and it has a "green" power-off button. This communicates with an adapter in a wall socket that can turn off all your gear using a power strip plugged into it. So you won't be leaving so many things on standby, hurting both your wallet and the environment... and you get to do it all without stretching your legs. Available in the UK and Germany for now, for around $78. [Red Ferret]
global_05_local_5_shard_00000035_processed.jsonl/23362
iTunes Support Store: iPhone App Crashes Fixed Good news, iPhone users! Looks like Apple has finally fixed that 2.0 app crashing problem. One Gizmodo reader received an email last night from the iTunes Support Store with instructions for redownloading applications you've already bought (for free, of course) and was given a $15 gift certificate for his troubles! Maybe Adam will convert to Macs after all? [ - Thanks Henry!]
global_05_local_5_shard_00000035_processed.jsonl/23363
Flash for Windows Mobile About to Leapfrog iPhone For No Good Reason Adobe is set to demonstrate a fully functioning build of Flash on Windows Mobile 6.1 today at the Adobe MAX conference, indicating that the era of hacky stop-gap measures and the mildly convincing Flash Lite may soon be coming to an end, at least for some. But what of the two most net-centric phone OSes? Android development is mercifully under way, but as far as the iPhone is concerned, all we hear is an echo: "We are working on Flash on the iPhone, but it is really up to Apple." This is pretty disheartening, especially when you consider that Adobe has previously claimed that Flash for the iPhone could be out in "a very short time" if Apple gave the green light. CPU load, battery life, video performance and a reluctance to open up their browser to plugins could all be issues at play here, but they're not necessarily dealbreakers, and they're certainly not unique to the iPhone. Taking into account Apple's recent bout of surprise stubbornness, it looks like it might just be time to, you know, move on. [MobileCrunch]
global_05_local_5_shard_00000035_processed.jsonl/23364
Nasa Admits Mars Spirit Rover Won't Be Moving Again After getting bogged down in sand and damaging its two right wheels, things are not looking good for the Mars Spirit Rover. Nasa's admitted it will be stuck in its rut for all eternity, with little hope of moving it. Part of the massive $900m Mars Exploration Rover program, the little Spirit Rover hasn't had much luck on the planet, but Nasa is still hopeful it can collect data from the soft patch of sand it's bogged in. It's powered by the solar panels on its back, but due to the angle at which it's stuck in the sand, the Spirit Rover must be turned slightly to catch some valuable sun rays over the coming months of hibernation, waking up in August to start collecting information on its surroundings. While it can't zip across Mars' surface anymore, it could still provide details on the slight wobble Mars makes on its axis, which would suggest whether the planet has a solid or liquid core. [BBC]
global_05_local_5_shard_00000035_processed.jsonl/23366
People Came Up with Some Crazy Ideas to Fight Tornadoes Back in the Day I've never experienced a tornado, but I watched Twister multiple times and I'm scared shitless of them. And I sort of understand the weathery science behind them! Imagine how people felt in the 1800s, when those menacing winding bolts from the sky would manifest and tear up everything in their path. Not fun. To avoid the destruction of tornadoes—and there was destruction; thousands of people died in the late 1800s—people started fantasizing about ways to ward off tornadoes. Two, in particular, caught my eye. One was to build a huge-ass wall and the other was to develop a tornado extinguisher system. Seriously! In 1896, the San Francisco Chronicle theorized that it would be "possibly practicable to build great windbreakers to the west of big cities that should forever guarantee them from such dire misfortune as that which overtook St. Louis; in other words to wall modern cities as a matter of protection against the weather as old-time towns were walled against human foes." Sadly, Mother Nature pays no mind to walls. A more sciencey way to fight a tornado was to create a 125-foot-high tower that would extinguish them. The idea was "to place an immense cylinder filled with some highly explosive material...when the tornado struck the windmillish arms, they would revolve - producing friction that would ignite [it]." That explosion would hopefully disrupt the motion of the tornado. The ideas were awesome, but really, there's no way to fight off a tornado. The best option we have is to warn people about them and get them to take cover. Unfortunately, even today the warning time for tornadoes is only 10 to 15 minutes. Damn. Let's just all move to Antarctica. [NPR]
global_05_local_5_shard_00000035_processed.jsonl/23373
Find Out How Early You Signed Up for Twitter, Instagram, Foursquare and More For me, signing up for any new social network goes something like this: Hear about it and ignore it. Hear about it again and then make fun of it. Hear about it again and wonder if I should sign up. Sign up. Do nothing until I hear about it again and remember I have an account and then, finally, start using it. Without fail, that's how I've signed up for every social network in my life. Because I'm supposed to be technologically savvy and such, I usually beat the pants off the rest of the 'normal world' at signing up for social networks but consistently lose to the always-accepting early adopters. I just want to be first! And let's be real here: on the Internet, there is a twisted, smug sense of victory in being first. So if you're curious about where you stand as an early adopter of social networks like Twitter, Instagram, Foursquare, etc., or if you're first at anything, check out the website Idego. It checks your account and shows you where you stand as an adopter. I'm in the 1.99% for Instagram but trail 21 of my friends. [ via The Next Web]
global_05_local_5_shard_00000035_processed.jsonl/23392
TR1 Student Posts: 1 Member Since: ‎04-18-2012 Message 1 of 2 (117 Views) HP Pavilion DV5000 When I try to access files and folders I get the following message: "The feature you are trying to use is on a CD-ROM or other removable disk that is not available. Insert HPIZplus450 disk." What is this disc? If I just keep pressing Escape it eventually goes away and I can get to my files, but it's a hindrance. Associate Professor Posts: 1,034 Member Since: ‎12-05-2008 Message 2 of 2 (114 Views) Re: HP Pavilion DV5000 Go to Control Panel > Add/Remove Programs. Remove HP Image Zone, then restart. I am an HP employee.
global_05_local_5_shard_00000035_processed.jsonl/23416
Herbed Garlic And Parmesan Croutons

Ingredients:
• Garlic cloves: 2 large, sliced thin lengthwise
• Dried oregano: 1 teaspoon, crumbled
• Dried basil: 1 teaspoon, crumbled
• Dried thyme: 1 teaspoon, crumbled
• Salt: 1/2 teaspoon (plus additional to taste)
• Pepper: 1/2 teaspoon
• Olive oil: 1/2 cup (8 tbs)
• Italian bread loaf: 1, cut into 3/4-inch cubes
• Finely grated Parmesan cheese (fresh): 1/4 cup (4 tbs)

In a small saucepan combine the garlic, the oregano, the basil, the thyme, 1/2 teaspoon of the salt, the pepper, and the oil and simmer the mixture for 5 minutes. Remove the pan from the heat, let the mixture stand for 15 minutes, and discard the garlic. In a bowl toss the bread cubes with the oil mixture, spread them in a jelly-roll pan, and bake them in the middle of a preheated 350°F oven for 8 minutes. Sprinkle the croutons with the Parmesan and bake them for 7 minutes more, or until they are golden. Sprinkle the croutons with the additional salt and let them cool. The croutons keep in an airtight container for 1 week.
global_05_local_5_shard_00000035_processed.jsonl/23417
How To Freeze Cooked Cabbage

Raw cabbage does not freeze well, but you can freeze cooked cabbage at home without much hassle. Boiled or blanched cabbage can be frozen and used for up to a month. The cabbage has to be reheated in a microwave or skillet when ready to use again. Even dishes made using cabbage, like cabbage rolls, can be frozen and consumed at a later date. Let us see the steps for freezing cooked cabbage.

Steps for Freezing Cooked Cabbage at Home
1. Let cooked cabbage cool down. Cut it into small pieces or shred it.
2. Dry and blot on a wad of paper towels.
3. Line a shallow freezer container with paper towels.
4. Arrange the cabbage pieces or the shredded bits inside the container.
5. Close lid tightly, label, and place in freezer.
6. Thaw and use as desired.

Freeze cooked cabbage and make delicious dishes with it in a matter of minutes.
global_05_local_5_shard_00000035_processed.jsonl/23418
HDL Coder What's New R2015a (Version 3.6) - Released 5 Mar 2015 Version 3.6, part of Release 2015a, includes the following enhancements: • Mac OS X platform support • Critical path estimation without running synthesis • AXI4-Stream interface generation for Xilinx Zynq IP core • Custom reference design and custom SoC board support • Localized control using pragmas for pipelining, loop streaming, and loop unrolling in MATLAB code • Support for image processing, video, and computer vision designs in new Vision HDL Toolbox product See the Release Notes for details. Previous Releases R2014b (Version 3.5) - 2 Oct 2014 Version 3.5, part of Release 2014b, includes the following enhancements: • Clock-rate pipelining to optimize timing in multi-cycle paths • Support for Xilinx Vivado • IP core generation for Altera SoC platform • Custom or legacy HDL code integration in the MATLAB to HDL workflow See the Release Notes for details. R2014a (Version 3.4) - 6 Mar 2014 Version 3.4, part of Release 2014a, includes the following enhancements: • Code generation for enumeration data types • ZC706 target for IP core generation and integration into Xilinx EDK project • Automatic iterative clock frequency optimization • Code generation for FFT HDL Optimized and IFFT HDL Optimized blocks • HDL block library in Simulink See the Release Notes for details. R2013b (Version 3.3) - 5 Sep 2013 Version 3.3, part of Release 2013b, includes the following enhancements: • Model reference support and incremental code generation • Code generation for user-defined System objects • RAM inference in conditional MATLAB code • Code generation for subsystems containing Altera DSP Builder blocks • IP core integration into Xilinx EDK project for ZC702 and ZedBoard See the Release Notes for details. 
R2013a (Version 3.2) - 7 Mar 2013 Version 3.2, part of Release 2013a, includes the following enhancements: • Static range analysis for floating-point to fixed-point conversion • User-specified pipeline insertion for MATLAB variables • Resource sharing and streaming without over clocking • Generation of custom IP core with AXI4 interface See the Release Notes for details.
global_05_local_5_shard_00000035_processed.jsonl/23419
A Moment of Science Posts tagged mucus May 22, 2015 kid with kleenex over runny nose Runny Nose January 22, 2014 A girl with a sore throat eats a popsicle Why Sore Throats Hurt Ever wonder why sore throats hurt so badly? And why do they occur in the first place? December 2, 2013 A water fountain shoots a stream of water from a human face. Spitting Image of Health Saliva doesn't just help with digestion; it also aids your body's fight against disease and infection. May 3, 2005 How to Blow Your Nose Do you know the correct nose-blowing technique? Is there a correct way, and what is the difference? Learn more on this Moment of Science. August 19, 2004 Chronic Sinusitis July 19, 2004 Fencing in Bacteria September 27, 2003 Sex, Violence, and Garden Snails If you think dating is tough, just be glad you aren't a garden snail. When you live life at a snail's pace, you'd better be able to mate with the next snail you meet without worrying if it's male or female. That's why garden snails are hermaphrodites and can take on both the male and female roles in reproduction. September 27, 2003 Stomach Growls When our brain tells us it's time to eat, a reflex kicks in that makes the stomach walls contract. The contractions cause the digested food in the intestine to move down towards the rectum.
global_05_local_5_shard_00000035_processed.jsonl/23428
Internet Shakespeare Editions Author: William Shakespeare Editor: Grechen Minton Not Peer Reviewed Much Ado About Nothing (Quarto 1, 1600) about Nothing. 865haire shall be of what colour it please God. hah! the prince and monsieur Loue, I wil hide me in the arbor. Enter prince, Leonato, Claudio, Musicke. Prince Come shall we heare this musique? 870Claud. Yea my good lord: how stil the euening is, As husht on purpose to grace harmonie! Prince See you where Benedicke hath hid himselfe? Claud. O very wel my lord: the musique ended, Weele fit the kid-foxe with a penny worth. Enter Balthaser with musicke. 875Prince Come Balthaser, weele heare that song againe. Balth. O good my lord, taxe not so bad a voice, To slaunder musicke any more then once. Prince It is the witnesse still of excellencie, To put a strange face on his owne perfection, I pray thee sing, and let me wooe no more. Balth. Because you talke of wooing I will sing, Since many a wooer doth commence his sute, 885To her he thinkes not worthy, yet he wooes, Yet will he sweare he loues. Prince Nay pray thee come, Or if thou wilt hold longer argument, Do it in notes. 890Balth. Note this before my notes, Theres not a note of mine thats worth the noting. Prince Why these are very crotchets that he speakes, Note notes forsooth, and nothing. Bene. Now diuine aire, now is his soule rauisht, is it not 895strange that sheepes guts should hale soules out of mens bo- dies? well a horne for my mony when alls done. The Song. Sigh no more ladies, sigh no more, 900Men were deceiuers euer, One foote in sea, and one on shore, To one thing constant neuer, Then sigh not so, but let them go, And be you blith and bonnie,
global_05_local_5_shard_00000035_processed.jsonl/23432
"I want you to know, darling, that I'm leaving you for another sex robot, and she's twice the man you'll ever be." That's the first line of Charles Stross' novella "Trunk and Disorderly," which just went online as an audio book at Subterranean Press. Originally published in Asimov's Science Fiction, it's a silly P.G. Wodehouse-esque spoof about a drunken socialite who blunders around with his butler and his sister's miniature elephant. He barely manages to survive a coup attempt disguised as a wild party. [Subterranean Press]
global_05_local_5_shard_00000035_processed.jsonl/23433
See Inside Dragonball's Mexican Hell-Pit! This morning we have spoilers so intense, they earned someone a cease-and-desist letter with the Hulk on it. There are rumors about the Iron Man movie, and a new set picture from the live-action Dragonball. There are also new hints about Lost, Smallville and Battlestar Galactica. And comic-book spoilers for Marvel Comics and the Batman titles. Click through to start the spoiler campaign. Iron Man We won't see Iron Man's fellow armored superhero War Machine in the Iron Man movie, but we may get to see Jim Rhodes (who becomes War Machine) put on Iron Man's old armor to rescue Tony Stark, based on hints actor Terrence Howard dropped. [IESB] Here's a picture of the Mexican set for the new Dragonball movie, opening in 2009. It looks like some kind of crater, presumably with CGI stuff happening in the middle where the greenscreen is. [Slashfilm] There are rumors that the next Lost episode (the one with the funeral in Iraq) will also feature "GI Jack (aka "Through the Looking Glass" Jack)" in action, and could be a multi-character flash-forward. The episode definitely will feature scenes on the beach, in Otherton, and in the forested valley where Karl was shot. There's also a rumor there will be no Sawyer-centric episode this season. [DocArzt] The April 24 episode of Smallville, "Sleeper," features a blonde femme fatale named Vanessa, sent to investigate one of the characters. [BabetteW54] Battlestar Galactica Here are the official first 15 episode titles of season four: "He That Believeth In Me," "Six of One," "The Ties That Bind," "Escape Velocity," "The Road Less Travelled," "Faith," "Guess What's Coming to Dinner," "Sine Qua Non," "The Hub," "Revelations," "Sometimes a Great Notion," "The Disquiet That Follows My Soul," "The Oath," "Blood on the Scales," and "No Exit."
[Pop Media Cult] Robin's dead ex-girlfriend, Spoiler, recently started turning up mysteriously in some of the Batman comics, but we won't find out who the new Spoiler is until this summer. But she's someone we know, says writer Chuck Dixon. Also, Robin will soon face a "true badass" named 666Gun, who will force Robin to rethink his view on the world. And an upcoming storyline will challenge Tim Drake's present as Robin, and his hard-won future as Batman. This synopsis, plus an upcoming cover image (below), gives weight to the idea that Batman will be dying and Robin will take his place. [Comic Book Resources] Marvel Comics A low-level Marvel employee dished some minor spoilers (and bitched about the company) in a LiveJournal that was deleted over the weekend. Apparently, spoilers included that several of the company's classic 1970s heroes will turn out to have been replaced by shape-shifting alien Skrulls years ago (as part of the big "Secret Invasion") — and that means blaxploitation hero Luke Cage will be back to his original tiara-wearing, yellow-shirt look. Also, Iron Man may get a new villain: MODOG. And Spider-Man's dead girlfriend Gwen Stacy may be back. [Lying In The Gutters]
global_05_local_5_shard_00000035_processed.jsonl/23434
Heaven's Gate UFO Cult Sneakers - Creepy, Wrong, or Fake? Rumors are swirling that Nike discontinued work on this sweet black-and-purple prototype Dunk shoe because it reminded consumers too much of the Heaven's Gate cult mass suicide. Really? On the Nike Skateboarding blog, they offer up this image of the shoe side-by-side with one of the victims of the Heaven's Gate suicide to prove their point. So, the idea is that because the shoes match the colors of the suicide victim, the shoe was discontinued? If you'll recall, the web design company/cult known as Heaven's Gate believed that they were going to be taken up by aliens associated with the Hale-Bopp comet, so they timed their suicides to coincide with its arrival. But is there really a connection between these sneakers and the cult? It's true that the Heaven's Gaters were known for all wearing matching black Nikes, but I don't see Nike discontinuing their line of all-black Dunks. Still, the flimsiness of the rumors hasn't stopped eager entrepreneurs from trying to make a ton of cash by selling the prototypes on eBay. I love the way the eBay seller pretends that the shoes were actually called "Heaven's Gate." You too could be suckered out of $3,000, so act now! [via eBay]
global_05_local_5_shard_00000035_processed.jsonl/23435
What Happened, Happened … Unless You Can Change the Past Why did I read that huge Lost spoiler on io9 last week — and, more importantly, why did it turn out to be true? Spoilers and lamentation after the jump. More lethal spoilers, below. Listen, show, you and I have been through a lot together, but Daniel Faraday? You had to kill off one of my favorite characters, and not coincidentally the only guy who seems to know what's really going on? This is where the inevitable cries of "He's not dead!" arise — I know, because that's exactly what my husband said last night. And, hey, that's what Lost has led us to believe: nobody's dead dead, except for those people who really are. But I think Dan's little talk with Jack out by the sonic fence (that "any one of us can die") was the writers' way of saying Dan's really gone — and, alas, that's also what TPTB said when they announced last month that one or two major characters would die this year. So goodbye, Dan. I will miss you. (Unless Richard drags him into The Temple, fixes him up, and somebody else is the major character who die dies. Hoping against hope here, folks.) We got to know a bit more of Faraday's story in "The Variable" — mainly that in a show rife with characters with daddy issues, Dan's problem is with the other parent. Eloise Hawking is not going to win a "Mother of the Year" plaque any time soon. She squelches young Daniel's love for music and college-graduate Daniel's relationship with the doomed Theresa — all in the name of Dan's destiny and nurturing his special gift for science and mathematics. As a result of her relentless pushing, Dan becomes Oxford's youngest doctorate, loses his own memory and sends Theresa into la-la land as a result of his work (loved Widmore telling him that he planted the fake 815 wreckage, because Dan won't remember it in the morning).
Then, heartbreakingly, because Eloise says it will make her proud of him, Daniel accepts Widmore's grant, thereby going to what Eloise knows will be a certain death on the island — she pulled the trigger, after all. What makes Daniel's presence on the island so important that she will, as she later says, "sacrifice" her son? On a positive note, she gave him his journal. I wonder if she's written more in it than the inscription, i.e., some of her own observations of time and space on the island. In 1977, Daniel tells Miles he has returned from Ann Arbor after seeing the Dharma Class of '77 picture with Jack, Kate, and Hurley, but it later becomes clear that he knows "the incident" is going to occur in mere hours. Does seeing the picture trigger his hasty return, or is that his cover story for Miles? At any rate, Dan heads straight to Jack, and after finding out that Mama Eloise told Jack it was his destiny to return to the island, informs Jack that she was wrong. Before answering Jack's questions, Dan heads to The Orchid. There, Dr. Chang scoffs at Daniel's warning that in a matter of hours there will be a catastrophe at The Swan. Daniel tells Chang he knows what's going to happen because he's from the future, then ups the ante by outing Miles as Chang's son. When Miles denies the relationship, Chang tells Daniel to stay away from him. Daniel then goes to Sawyer's house and tells the Losties (already assembled to figure out their next move now that Phil knows that LaFleur/Sawyer helped deliver Ben to the hostiles) that his mother has the information to get them off the island. (But does she in 1977, or is that knowledge that only comes with her years of research that haven't yet occurred?) Jack and Kate, armed with the fence code supplied by Juliet (Sawyer, you big dope, never call your ex by your nickname for her in front of your current partner), take Dan to the Hostiles so he can meet up with her. 
But first he meets up with little Charlotte — and no wonder she later remembers being frightened by the crazy man with the pedophile vibe. Icky. Daniel tells Jack and Kate that in his lifelong study of relativistic physics, he's concentrated on the constants, not the variables — people, with their reasoning and free will. He proposes to detonate Jughead thereby negating the catastrophic release of electromagnetism from the Swan. (I'll take Dan's word that an h-bomb blast will somehow have a better outcome). As a result, he will effectively change the past — and the crash of Flight 815, caused when Desmond doesn't push the button, thus never happens in the first place. But where will this leave the people who were on board the plane? Kate goes to trial — and never spends a joyous three years with Aaron. Locke is confined to a wheelchair, and works at a box company. Rose dies of cancer. Sawyer remains an unredeemed criminal and never experiences true love with Juliet. And so on. Dan thinks his plan will get them out of 1977 — but do they really want to go back to life as it was in 2004? Meanwhile, Radzinsky sounds the alarm after he catches Daniel, Kate, and Jack arming themselves for their trip to hostile territory, and a gunfight ensues. When the slightly injured Radzinsky bursts into LaFleur's house ("Just got shot by a physicist!") to tell him Dharmaville's been infiltrated, he discovers Phil, bound and gagged in the closet. Radzinsky takes Sawyer and Juliet into custody. Everything goes tragically awry when Daniel announces his arrival at Camp Hostile by firing two shots into the ground. He threatens to shoot the ever-calm Richard (who can't quite place where he's seen Dan before — it was in 1954, of course) unless he produces Eloise. Before Richard can fully explain her whereabouts, she shoots Daniel in the back. With his last breath, he tells her he is her son. 
In 2007, Desmond is rushed into the emergency room. In the waiting room, Penny is visited by Eloise Hawking, who apologizes for Desmond becoming a casualty in a conflict that's "bigger than any of us." She admits that for the first time in a long time, she doesn't know what's going to happen next. But Desmond survives — whew! — and though for a moment it seemed like Eloise might steal little Charlie, the Hume family seems safe for the moment.
global_05_local_5_shard_00000035_processed.jsonl/23446
Mercedes 300SD-amino Camper, Just In Time For The Coming Financiapocalypse This Mercedes 300SD camper conversion nicely combines two of our Ten Best Vehicles For The Coming Financiapocalypse. Essentially it's a sedan that's been hacked (sorry, "professionally built by a skilled fabricator") into a 300SD-amino, and then had a pickup-truck-bed camper plopped in the back. For just $4,000 it's not a bad deal, with two beds, a table and even a kitchen sink. Of course, you could always combine two other cars on the list — an air-cooled VW and a mini RV — for the more traditional alternative of a VW stoner camper van. We'll still probably just follow our own advice and hold onto what we own now. [LA Craigslist] (Hat tip to Aleksandr!)
global_05_local_5_shard_00000035_processed.jsonl/23455
Arianna Huffington Is The Gift That Keeps On Giving It doesn't matter that there have already been 800 profiles of and interviews with Arianna Huffington. New ones are still endlessly entertaining. Part of the reason is because Huffington is both endlessly charming and lacking in a certain self awareness, a titan and schmoozer who blithely spouts populist rhetoric about media and politics. Case in point: New York's Chris Rovzar's chat with her on the occasion of her relocating to New York, in which she enthusiastically embraces her junior staff to show what a warm boss she is — even as Rovzar can't help but notice that they're mildly terrified. And then there's this: An assistant brings us coffee. Huffington asks for a Stevia, which begins a lengthy search of drawers. "We don't have a hierarchy in our operation," Huffington continues. "I see everything as a team, and I love empowering people." The assistant is now on her hands and knees, rooting through Huffington's bag. She finds a Stevia. The canonical Huffington work is Lauren Collins' 2008 profile in The New Yorker, which shows Huffington's many laudable and energetic qualities, but also provides a similar contrast: "This isn't journalism; it's a Sag Harbor circle jerk," Huffington wrote in March, 2006, after Vanity Fair published a story defending [Judith] Miller. She chose to ignore the fact that, eight months earlier, she had substantiated her own criticisms by writing that she had heard them from people who knew Miller well, "since I spent the weekend in the vicinity of her summer hometown." There's also her tendency to ask everyone she meets to blog, a networking tool that appeared endlessly in The New Yorker profile and retains its amusement as a gag for New York: "We literally arrived at the little Amalfi port, and there was Newt Gingrich with Callista, his wife. And so Barbara Walters was with me and she invited him to appear on The View and I invited him to blog. He said he will." But aren't they foes? 
"He has a book coming out in November," she explains. Imperfect, yes, but still, we prefer limousine liberals to limousine conservatives. 101 Minutes With Arianna Huffington [NYM] Related: The Oracle [The New Yorker]
global_05_local_5_shard_00000035_processed.jsonl/23456
As women, we're inundated every day with the idea that our bodies exist for other people: when we're catcalled on the street, when coworkers have entire conversations with our breasts rather than our faces, when we're groped on the subway. We're prudes if we don't let enough people access our bodies; sluts if we allow too many people to access our bodies. Billboards display women in various states of undress, wearing products, endorsing products, as products. The movies and TV we watch invite us to look at women from the perspective of the male gaze. I'm not an expert by any means, but this seems harmful to the way we see ourselves and the way we take care of ourselves. Maybe we should stop letting those things dictate how we see our bodies and start seeing our bodies as instruments of power rather than smorgasbords for others to feast on. Your body isn't a passive painting or a photograph; your body is a tool. I've kind of come to the conclusion that I don't want to work out to lose weight, but to jump higher/punch harder. This is very complicated to explain to people, who tend to give me a "tell yourself whatever you want, sweetheart" look when I attempt to explain it. Finding fat-friendly spaces to work out is nearly impossible as well. I'm lucky in that I've never had serious problems with eating or other body issues, but I'm typical in that my relationship to my body had historically been one of contempt and animosity. That changed this year. I started running on January 8th; I remember that date because it was the first day that I no longer felt a little bit hungover from the disaster that was the New Year's Eve concert and the New Year's Eve redo that I staged the following week. I hopped on a treadmill at the gym and couldn't even run two miles before feeling like I was going to have a spectacular exhaustion-related fall off the back of the machine in front of all of the cute gym people.
Despite the discouraging start to my running career, the next day I signed up to run an 8K race in March and started telling everyone who would listen that I was going to run it (because nothing motivates me like an aversion to shame). After the first race went surprisingly well, I signed up to run the Chicago Marathon. My entire summer went out the window, sacrificed on the no-fun altar of proving to myself I could run a distance that has killed people. During the process of training for the marathon, I noticed my attitude toward food changed. Rather than worrying a plate of spaghetti would go straight to my thighs, I started worrying that it wouldn't. I developed the appetite of a 13-year-old boy after hockey practice, eating five or six times a day so that I could make sure I'd have enough energy to run 10, 12, 15, 20 miles and not keel over with exhaustion.

My attitude toward my body changed as well. I stopped really thinking about how it looked and instead focused on getting shit done, realizing that any physical changes I was seeing were happening because my body knew best how to shape itself to complete the task at hand. When a coworker commented that my enlarged calves made me look like I could probably dunk a basketball (even though I'm only 5'6"), I took it as a compliment. Fuck yes, I have huge calves. Fuck yes, I have strong legs. Fuck yes, my body got me through all 26.2 miles. And fuck yes, I'm still running. And no, I don't know how much I weigh, nor do I care whether or not I'm ready for bikini season. Am I ready for running season?

I don't mean to get all cheerleadery and Oprah-tastic all over your face here and suggest that everyone run a marathon, as we're all different; some people aren't built to run, just as I'm not built to go rock climbing or dance ballet. There's merit in finding something, anything, to do with your body that you love, and loving when your body is able to do it.
And there shouldn't be any reluctance to talk about choosing to be physically active, because enjoying your own physical strength doesn't have to be viewed as subscribing to patriarchal notions of femininity. When I reached out to a small group of commenters about this issue, the response I received was overwhelming. Next week, I'm going to share some of their stories of how using their bodies in physically demanding ways has helped them appreciate themselves as more than pretty little knickknacks with attached boobs. In the meantime, what say you, commenters? When's the last time you appreciated your body for what it does for you rather than put it down for how it looks? And what the hell is up with "fitness" being equated with "thinness"? Does anyone consider a handful of almonds a satisfying afternoon snack? Image via CREATISTA/Shutterstock.
global_05_local_5_shard_00000035_processed.jsonl/23457
Elizabeth Wurtzel Thinks Rich Stay-At-Home Moms Are Directly Responsible for the War on Women In case you were wondering how Elizabeth Wurtzel felt about wealthy stay-at-home mothers, she just wrote a piece for The Atlantic called, "1 Percent Wives Are Helping to Kill Feminism and Make the War on Women Possible" Okay then! "Because here's what happens when women go shopping at Chanel and get facials at Tracy Martyn when they should be wage-earning mensches," Wurtzel explains: "the war on women happens." Hmmm, we'd say the war on women happens when politicians spew antichoice, anti-women rhetoric and try their damnedest to enact laws that ensure women aren't considered equals. But here are some more (troll-y) quotes from Wurtzel's piece: "Let's please be serious grown-ups: real feminists don't depend on men. Real feminists earn a living, have money and means of their own." "Hilary Rosen would not have been so quick to be so super sorry for saying that Ann Romney has never worked a day in her life if we weren't all made more than a wee bit nervous by our own biases, which is that being a mother isn't really work. Yes, of course, it's something — actually, it's something almost every woman at some time does, some brilliantly and some brutishly and most in the boring middle of making okay meals and decent kid conversation. But let's face it: It is not a selective position. A job that anyone can have is not a job, it's a part of life, no matter how important people insist it is (all the insisting is itself overcompensation)." "I do expect educated and able-bodied women to be holding their own in the world of work." Pretty much all of Wurtzel's valid points — that economic inequality is key, for example — are masked with overwrought hyperbole ("feminism is pretty much a nice girl who really, really wants so badly to be liked by everybody"), but if you're bored tonight you'll probably want to check out the comments section; it's bound to be a doozy. 
1 Percent Wives Are Helping to Kill Feminism and Make the War on Women Possible [The Atlantic] Image via wavebreakmedia ltd /Shutterstock.
global_05_local_5_shard_00000035_processed.jsonl/23491
Let's See What's Inside A Steam Machine (And How You Swap Stuff Out) Valve's Steam Machine may have its eyes on your living room, but it's not a console. It's still a PC at heart, so if you're curious how easy it's going to be to open one up and swap out/upgrade parts, take a look at this. Corey Nelson is one of the lucky 300 to be testing the new hardware, and he's filmed this helpful guide to opening a Steam Machine up and removing its components. It's...well, it's a PC, a very cramped PC, though it's interesting that there appears to be room for a second HDD in there. Steam Machine Tear Down [YouTube]
global_05_local_5_shard_00000035_processed.jsonl/23500
During this year's Tokyo Game Show, Japanese website Inside Games thought it would be a good idea to film this year's booth companions. Maybe it was. Inside Games also thought it would be a good idea to film them with a 3D camera. Meaning? Meaning if you have a 3D television, a 3D smartphone, a 3D computer monitor, or a 3D head display you could apparently watch the above YouTube clip, and it *should* render the booth companions in eye-popping 3D. Or you could just stare at them cross-eyed. Works for Nintendo! The videos were recently uploaded to Inside Games, and the second video is in good old fashioned 2D. If you're into that. 東京ゲームショウのコンパニオンを3Dムービーで [Inside Games]
global_05_local_5_shard_00000035_processed.jsonl/23501
This Company Will Make Real-Life Miniatures Out Of Your Minecraft World If you've played Minecraft, chances are you've built something so cool that you wish you could immortalize it and put it on your mantel. Thankfully, there's Figureprints, a company that has in the past recreated your Xbox 360 avatars and World of Warcraft characters. Now, you can export your favorite Minecraft creations and they'll make them into a reality, for a price determined by how complicated your model is. Man. I might have to get one of these. FigurePrints - Minecraft [Main Page via Wired]
global_05_local_5_shard_00000035_processed.jsonl/23502
YouTuber: I Was Banned From Making Money Because Of An Over-Zealous Fan [UPDATE] Like a lot of people on YouTube, Nick Reineke makes videos about games. And like a lot of people on YouTube, he wants to make money off those videos. But he can't. Last month, YouTube banned him from AdSense, the advertising service that most YouTubers use to make money. It's not clear exactly why he was banned—YouTube hasn't explained—but Reineke thinks he knows what the problem was: an excited fan. A fan who clicked on one of Reineke's ads too many times. A fan who might have inadvertently ruined Reineke's YouTube career. Kotaku was first contacted by Reineke, who runs a channel dedicated to showing off indie games, a few weeks ago. He told us that he had received an e-mail from YouTube saying he was banned from AdSense for "invalid activity." And he said he knew why. "I've come to find out that a fan of mine took it upon himself to "help" my page by clicking my ad 20 or so times," he said in an e-mail. "I'd never condone this and never would have wanted anyone to do this as I am aware it is a flagrant violation of the AdSense Terms of Service. Unfortunately for me, my YouTube channel is tied to my AdSense account and because of this issue I am now blacklisted from becoming a YouTube partner and monetizing my videos in the future." The fan has admitted to clicking the ad, posting on Reineke's forums to apologize. "I thought, 'Hey, maybe I could give it a few clicks to see if Nick gets any money from it. Couldn't hurt to try, right?'" he wrote. "This was obviously [an] incredibly dumb decision and ended up getting Nick's AdSense blocked completely." Reineke has reached out to other YouTube networks to try to strike a deal, but they won't partner with someone who can't support AdSense. He can't make a new account without making up a false identity. And when he appealed to YouTube, they denied his request. 
Here's the letter they sent him: Thanks for the additional information provided in your appeal, we appreciate your continued interest in the AdSense program. After thoroughly reviewing your account data and taking your feedback into consideration, our specialists have confirmed that we're unable to reinstate your AdSense account. If you'd like more details on our invalid activity policies or review process, please visit As a reminder, further participation in the AdSense program by publishers whose accounts have been disabled is not permitted. Thanks for your understanding, The Google AdSense Team After an e-mail like that, Reineke says there isn't much else he can do. "It is not possible to directly contact Google," he said. "No one will speak with you, and there are no other avenues unless you are friends with someone who works there. Once your appeal is rejected, they will not reply to your emails or speak with you further on the issue (they actually tell you that in the rejection letter). It is essentially a LIFETIME ban for your account. Seems fair, right?" I reached out to YouTube for clarity, but they wouldn't comment on Reineke's specific situation. A YouTube representative sent me this statement: If we determine that an AdSense account may pose a risk to our advertisers or the experience of individual users, we may disable that account to protect the health of the network. If a publisher feels that the decision to suspend their AdSense account was made in error, and if they can maintain in good faith that the invalid activity was not due to the actions or negligence of themselves or those for whom they are responsible, they can appeal the disabling of their account. Accounts will be reinstated on a case by case basis. While it's certainly possible that Reineke was banned for another reason, he told me he has no idea what that might be, and YouTube isn't helping. 
"The notification that they had disabled my AdSense account due to 'invalid activity' was sent just a few days after the person had told me they did the spam clicking on my ad," he said. "Since it was the only ad I had up, and it was the only thing tied to my AdSense account... it's really the only thing that could have caused this. I've also considered the possibility of a rogue spammer on my site or something random like that, but I've never seen any evidence of strange behavior on the site before my account had already been disabled. One of Nick Reineke's most recent videos, from his channel Indie Impressions. "My YouTube and website editing and usage (with respect to my account standing) has been very much a repeated pattern of posting new content the same way day after day for months, so I can say in good conscience, other than the fan clicking the ad, there wasn't a deviation of any kind or anything I've done that would be misconstrued as malicious by Google." So Reineke is frustrated. He's feeling helpless. And he doesn't know what to do next. "My question is: what is there to stop someone who didn't like me from spamming any ad they know to be powered by AdSense to get it all taken down?" Reineke said. "There are no repercussions for the person doing the clicking, only the people who stand to lose everything. Seeing as how I did not condone this action by the individual who thought they were helping me, it's not really much different. "So is there really justice here? Someone who has devoted thousands of hours to their site and channel is now barred from potentially ever making money from their work on this service because an over-zealous fan decided on their own to spam click my ad?" Update: Shortly after the publication of this article, I got an e-mail from Reineke saying that his account had been restored. He wasn't contacted by YouTube or informed in any way; he just suddenly saw that AdSense was back. 
"I uploaded a video last night and I didn't have the row of $s next to my videos," he said. "This morning, as I noticed [this article] show up in my RSS feed, it was back."
global_05_local_5_shard_00000035_processed.jsonl/23504
I didn't know ninja dancing was a thing, let alone ninja dancing CGI. This is Enra, a Tokyo-based dance troupe that mixes dance rhythms, martial-arts influenced acrobatics, and computer graphics. The group is spearheaded by Nobuyuki Hanabusa, who made a career of creating computer graphics for Japanese television shows and commercials. The performers have impressive resumes, too. One dancer, Tsuyoshi Kaseda, for example, played the green Power Ranger on Fox Kids. Enra [Official Site]
global_05_local_5_shard_00000035_processed.jsonl/23506
The Best Game Of Thrones Game Is Becoming The Ultimate GoT Game The official Game of Thrones video games were largely terrible. Lucky for us, then, that the Game of Thrones mod for Crusader Kings II is better than any official game could ever have been. Even luckier for us, that mod is about to get a whole lot better. The development team's April Fool's joke was that they were going to be adding Essos - the world's Eastern continent - to the game. It was funny, because it seemed too ambitious to be true. Nope. A month later and they've announced they're actually going to do it. It's still early days (much of the land is yet to be added as territory/factions), and the team faces a big challenge in that there's not as much canon for the East, so information on cities and dynasties is going to have to be fabricated in places. But still. The opportunity to take control of everything from Pike to Qarth is a tempting one, and the fact it'll be using CKII's Republic DLC as a base - which allows the Free Cities to be modelled somewhat accurately - is exciting. A Game of Thrones [Mod Page, via PC Gamer]
global_05_local_5_shard_00000035_processed.jsonl/23510
Larry Ferlazzo’s Websites of the Day… …For Teaching ELL, ESL, & EFL Using The Telephone to Learn English | 1 Comment Though it’s obviously easier for my students here in the United States to find others with whom they can practice their English, I know it’s more challenging for ESL students in other countries to do the same. On my Teacher’s Page, under the section called Telephone, I have a few examples of how English Language Learners in non-English-speaking countries might be able to get a little more practice. One link is a tutorial from CMS Professional Learning for using Skype, an Internet-based system for making inexpensive phone calls (which I suspect many readers of this blog are familiar with). Another link is one I just learned about from a post on Teacher Dude’s blog. It’s called Kan Talk, and it’s designed to help English Language Learners specifically connect via Skype with others who would like to talk with them. Finally, there’s a link to Jajah. Jajah allows you to use the Internet to make phone calls, but you can do it while using your regular phone, and only one person has to have access to the Web.

Author: Larry Ferlazzo
I'm a high school teacher in Sacramento, CA.

One Comment

1. Hi Larry. I knew about your website, but I just discovered your blog. I’m thrilled. The links you provide to Skype, Kan Talk, and Jajah open up some creative uses in the classroom. I’m going to visit them later and spend more time experimenting. My Beginning High ESL students need more speaking and listening practice. BTW I’m tagging you for an online meme. The tag can be found on my website at For those who don’t know what a meme is I’ll give you the short version. It’s when a question is circulated among bloggers. Each blogger has to answer the question and then pass it on to five other bloggers. So the question for Larry is: What magazines do you read and how have your reading habits changed in the last couple of years?
I’m a new blogger, so I’m making new friends by tagging them.
global_05_local_5_shard_00000035_processed.jsonl/23519
Ask Lifehacker: Gmail or Thunderbird? Dear Lifehacker, I'm a loyal Gmail user but I keep hearing good things about Thunderbird. Is it really the Firefox of email programs? Why should I use Thunderbird instead of - or in conjunction with - Gmail? Dear Bird, The main thing that Thunderbird (or any desktop email program) gives you that pure Gmail does not is offline access. That means before you board a cross-country flight you can grab all your "to respond to" messages to your laptop and process them in flight while you're not connected to the intertubes. Thunderbird maintains a local copy of your email that you can store and archive yourself. Most folks don't want to have to archive their own email, but some geeks (myself included) feel a certain comfort in keeping years of communication records stored on their own hard drives. (Of course you can download your Gmail messages to Thunderbird via POP periodically to keep your own email store as well, though I'm not sure that includes sent or archived messages - let me know if so.) A few things that Thunderbird can do that Gmail currently cannot: display and sort messages by size, learn what you as an individual consider spam with its adaptable junk mail filter, use encryption to send and retrieve your messages (and with an extension, encrypt the contents of those messages) and detach or delete attachments while keeping the message intact. Like Firefox, there are lots of feature-adding Thunderbird extensions that can customize it to your needs and let you do things like define your own keyboard shortcuts (see the previously-posted TB QuickMove extension for that.) Because T-bird is open source and extensible, the add-ons available for it are built by the community; they aren't proprietary and under the control of one entity, like Google. If you want a feature, chances are someone has or can build an extension for you. 
That said, the array of available T-bird extensions is not nearly as rich as Firefox's, and there are some areas where T-bird simply doesn't stand up to Gmail: search, for one, and message tagging, for another. Just for those two killer features alone, I'd say Gmail beats out Thunderbird as an email client. But if Gmail isn't an option and you're using an email address that offers POP or IMAP access to your messages, you'd do very well to use Thunderbird. Its message filtering is strong; its IMAP support for large folders is fast, and there's still something satisfying - albeit a bit quaint - about working on your email in a desktop application that's not a web browser. Happy emailing, Lifehacker (aka )
global_05_local_5_shard_00000035_processed.jsonl/23520
Download of the Day: iMacros (Firefox) Firefox only: Record and replay your web activities with iMacros, a free Firefox extension. This powerful tool has countless applications. You can use it to fill in forms that stretch across multiple pages, to automatically log into a site and perform specific activities, or even to extract data from a site and save it as a CSV file. It's particularly useful for web developers looking to test the performance and functionality of their sites. Macros can be edited, controlled with JavaScript and combined with other extensions such as Greasemonkey. I found iMacros surprisingly easy to use, though some features will likely send you scurrying to the online manual (which is quite good). iMacros is free for Firefox. Thanks, Paul!
global_05_local_5_shard_00000035_processed.jsonl/23522
Going Google on Windows Phone 7 If you're giving Windows Phone 7 a shot but wondering how you can integrate a Windows phone with all your Google apps—well let me just tell you, it's not easy. Sure it's possible to do using only Google's webapps, but most of us are still looking for native application solutions. Here's how I've managed to access my must-have Google apps with nice native apps on my Windows Phone 7. Note: One of the major disappointments with all of these solutions is the lack of push notifications, so you will have to access each of these applications regularly to stay up to date. Access Gmail using native Google Mail support Gmail support on Windows Phone 7 has been built into the account settings and uses push notifications, so no searching necessary here. This also works with contacts and calendar, though you're limited to only your main calendar. If you need more than that, check out SuperG Calendar (below). Access Google Docs using GDocs Google Documents is a fairly advanced web app, and as of yet there is not an application for Windows Phone 7 that allows you to edit the documents in your Google Docs account; however, GDocs, which is available in the Marketplace for free (seemingly for a limited time only), is a great application for reading/viewing documents and storing them locally for future viewing offline. Access Google Voice using GoVoice Google Voice support was the hardest to choose from. There are a few options in the Marketplace, but after some testing, GoVoice was the best of the lot: loading speed, lowest frequency of errors and crashes, and overall usability made it the easiest choice. Still, the lack of push notifications really holds the app back here. It's a bit steeply priced at $2.99 in the Marketplace, but in the end it does make it easy to keep up with your Google Voice messages.
Access Google Calendar using SuperG Calendar With the Windows Phone's calendar sync with Google Calendar being limited to only your main calendar, a good Google Calendar app was a must. Although a bit slow to load and refresh, SuperG Calendar was one of the best purchases I made in the Marketplace. SuperG allows you to view all calendars in your account, even your shared calendars. Again, it lacks push notifications, but at $1.99 this is a great purchase for those of you with multiple busy calendars. Access Google Reader using Wonder Reader And last but not least, Google Reader. After a few tries with a few other RSS feed readers, Wonder Reader stood out by far as the fastest and most capable of the bunch. Priced just right at $1.99, it is worth every penny. It imports everything from your Google Reader account seamlessly. All of your folders are just as they are and it updates very quickly, even if you have a lot of feeds. It has even been updated to work with Google's new 2-step verification. Windows Phone 7 is still a relatively young platform; we rounded up our favorite apps early on, but if you're a WP7 user and have a favorite or two of your own, let's hear about it in the comments.
global_05_local_5_shard_00000035_processed.jsonl/23524
Flickr user plasticniki's desktop is informative and elegant, and combines the gorgeous perspective of the wallpaper image with flat icon graphics and monospaced fonts to create a sharp-looking HUD that won't get in the way of her work. The best part of this desktop is that it doesn't take a ton of effort or a bunch of custom themes and packs that require delicate tweaking to set up. If you want to give your Windows system the same look, here's what you'll need: The Long Road Desktop If you need help getting all of the components to look just right, head over to our tutorial on setting up Rainmeter to get started. If you have a Mac or are running Linux, you can approximate some of the same effects using GeekTool or Conky, respectively, but not everything. Desktop | Flickr
global_05_local_5_shard_00000035_processed.jsonl/23525
Attachment Icons for Gmail Changes File Attachment Icon Based on Type Chrome: Normally Gmail uses a paperclip icon to let you know a file is attached to the email. The free Chrome extension Attachment Icons for Gmail replaces the paperclip with standard Windows file type icons for PDFs, pictures, spreadsheets, and many other file types. The extension does have a bug or two—if the email has multiple types of attachments only the first icon is displayed and if you use Gmail themes the icons may not display properly. Other than those two nitpicks the extension works well. I'm adding it into my setup. Attachment Icons for Gmail | via Addictive Tips
global_05_local_5_shard_00000035_processed.jsonl/23530
Date: 27 Oct 2000 Why Textbook ElGamal and RSA Encryption Are Insecure We present an attack on plain ElGamal and plain RSA encryption. The attack shows that without proper preprocessing of the plaintexts, both ElGamal and RSA encryption are fundamentally insecure. Namely, when one uses these systems to encrypt a (short) secret key of a symmetric cipher, it is often possible to recover the secret key from the ciphertext. Our results demonstrate that preprocessing messages prior to encryption is an essential part of both systems.
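As a rough sketch of the kind of attack the abstract alludes to, here is the well-known meet-in-the-middle attack on unpadded ("textbook") RSA, with toy parameters invented for illustration. If the encrypted message m is a short key that happens to factor as m = m1·m2 with both halves below 2^(ℓ/2), then c·(m1^e)^(-1) ≡ m2^e (mod N), so a table of m2^e values recovers m in roughly square-root time:

```python
def textbook_rsa_mitm(c, e, N, half_bits):
    """Recover a short plaintext m = m1*m2 (m1, m2 < 2**half_bits) from an
    unpadded RSA ciphertext c = m**e mod N, meet-in-the-middle style."""
    bound = 1 << half_bits
    # Table of m2**e mod N -> m2 for every candidate "right half".
    table = {pow(m2, e, N): m2 for m2 in range(1, bound)}
    for m1 in range(1, bound):
        # c / m1**e mod N; this equals some m2**e exactly when m == m1*m2.
        t = (c * pow(pow(m1, e, N), -1, N)) % N
        if t in table:
            return m1 * table[t]
    return None  # m has no factorization into two small halves

# Toy demo: a ~40-bit modulus and a 20-bit "session key" 999000 = 999 * 1000.
p, q = 1000003, 1000033           # toy primes, far too small for real use
N, e = p * q, 65537
m = 999000                        # the short symmetric key being encrypted
c = pow(m, e, N)                  # "textbook" RSA: no padding at all
recovered = textbook_rsa_mitm(c, e, N, 10)   # searches 2**10 halves per side
```

Here `recovered` comes back after a few thousand modular exponentiations instead of the roughly one million a brute-force search over all 20-bit keys would take; real systems avoid this by padding (e.g. OAEP), which destroys the multiplicative structure the attack exploits.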
global_05_local_5_shard_00000035_processed.jsonl/23535
Any success reading sensors on IBM 326, HP DL145, Sunfire v20z Steven Timm timm at Tue Feb 15 00:25:38 CET 2005 I have tried to use lm_sensors to read out the IBM eServer 326, HP DL145, and Sun Sunfire v20z. On the IBM and Sun all I can get are the eeproms; on the Sunfire I can get one w83627hf, but with nothing important attached to it. Has anyone else done better up till now? (Note: the above are all Opterons; I am running the 32-bit (athlon) kernel on them at the moment and compiled i2c/lm_sensors 2.9.0 accordingly.) The same RPMs work on the Accelertech (HDAMA) and Tyan S2882. Steve Timm Assistant Group Leader, Farms and Clustered Systems Group Lead of Computing Farms Team More information about the lm-sensors mailing list
global_05_local_5_shard_00000035_processed.jsonl/23538
Re: Taking another round at @summary From: Lachlan Hunt <[email protected]> Date: Wed, 06 Jan 2010 13:15:36 +0100 Message-ID: <[email protected]> To: Denis Boudreau <[email protected]> Cc: HTML Accessibility Task Force <[email protected]>, HTML WG Public List <[email protected]> Denis Boudreau wrote: > Agreed, @summary, as it is, might not be a perfect solution. But > removing it altogether from HTML5 is a sure way to add to the > difficulties users with disabilities already experience when trying > to access tabular data. The summary attribute is currently not removed altogether. It's in the spec and can be used. Although it is considered obsolete, it's still technically conforming. (Note that the validator will only issue a warning about its use, not flag it as invalid). > I humbly believe we are missing the broader picture. Look at it from > a government perspective: > For instance, here in Quebec (Canada), like most public > administrations, we are putting together our own adaptation of WCAG > 2.0, an accessibility standard called SGQRI 008. All in all, it's a > fairly good document that goes farther than most government > adaptation I've seen to this day. Again, not perfect, but it's got > teeth nonetheless and it's already done a lot to promote > accessibility in our local industry. > In that standard, the use of the @summary is mandatory for complex > data tables. Mandatory. This standard is not going to change for at > least another 5 to 7 years. What that means is that every government > website, intranet, extranet developed by or for the government that > contains such elements have to have a @summary or else they fail at > compliance and are subject to reddition (which is not a good thing).
This should be a clear lesson that government policies should avoid mandating specific technical solutions to problems, as opposed to simply requiring that, for example in this case, complex tables be accompanied by some solution that adequately addresses the accessibility issues, without specifying what that solution must be in all cases. Any one of the alternative techniques listed in HTML5 for providing a table summary outside of a summary attribute has the potential to be equally, if not more effective in some cases, so it should be clear why govt. policies creating such lock-in are bad. Policies like you describe simply make the often incorrect assumption that the specified technical solution is the most appropriate in all cases, which is almost certainly not always going to be the case. And I don't think we should let such clear mistakes in some govt. policies weigh too heavily on the technical decisions we make. Lachlan Hunt - Opera Software Received on Wednesday, 6 January 2010 12:17:16 UTC
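For a concrete illustration, one of the alternative techniques HTML5 lists (a description carried in the table's caption, optionally inside a details element) might look like the sketch below; all table content here is invented:

```html
<!-- Sketch of one HTML5-sanctioned alternative to summary="": the
     description lives in the caption, wrapped in a details element so
     sighted users can collapse it. -->
<table>
  <caption>
    Quarterly expenses by department
    <details>
      <summary>Help reading this table</summary>
      <p>Each row is a department; columns give spending per quarter,
         with a final column totalling the row.</p>
    </details>
  </caption>
  <tr>
    <th>Department</th> <th>Q1</th> <th>Q2</th> <th>Total</th>
  </tr>
  <tr>
    <td>IT</td> <td>100</td> <td>120</td> <td>220</td>
  </tr>
</table>
```

Unlike a summary attribute, a description written this way is available to all users, not only to those using assistive technology.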
global_05_local_5_shard_00000035_processed.jsonl/23540
Markup Validator can not proxy digest auth? From: olivier Thereaux <[email protected]> Date: Mon, 22 Jan 2007 13:20:52 +0900 Message-Id: <[email protected]> Cc: w3t-sys Team <[email protected]> It took me almost half a day thinking there was a bug in the validator, but as I finally found out, there's no bug: by *design* of Digest Auth, the markup validator can not proxy digest authentication like it does for basic authentication. Explanation: Digest auth works in a challenge-response manner.
1) client requests resource
2) server answers 401, gives challenge string, authentication realm
3) client computes response, based on hash of challenge string, realm, user, password, and most importantly here, *queried URI*
So even if the validator can pass the challenge string and realm to the user's browser, and pass the response string back to the server, the response will NOT be accepted by the server, simply because
expected_response = hash(challenge, realm, user, password, "http://<URI of the protected resource>")
is obviously different from
given_response = hash(challenge, realm, user, password, "http://<URI of the validator's check request>")
Conclusion: bad news, everyone, I think we can't "proxy" digest auth - unless I'm mistaken, and trust me, I'd love to be wrong here. I can't recall who made the first implementation of the auth proxying for the validator. Gerald? Terje? Would you concur? We then have the choice between
1) CLIENT <- basic auth -> VALIDATOR <- digest auth -> SERVER (which, arguably, is wrong wrong wrong - we'd be putting the SERVER at risk without their consent. Plus, I'm not even sure it's entirely
2) "sorry, we can not validate resources protected by digest authentication. Use the upload feature of the validator, or install a local instance of the validator in your network, and give access to your resources to that server".
Thoughts? Different diagnosis? Is this a showstopper for switching w3.org servers to digest auth, seeing as it's not only going to break validation, but all sorts of services too (xslt, etc.)?
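A minimal sketch of the mismatch described above, assuming the simplest RFC 2617 "MD5" mode without qop (the username, realm, nonce, and URIs below are invented):

```python
import hashlib

def md5_hex(s):
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri, nonce):
    # RFC 2617, simplest mode: response = MD5(HA1 ":" nonce ":" HA2)
    ha1 = md5_hex(f"{user}:{realm}:{password}")   # covers credentials
    ha2 = md5_hex(f"{method}:{uri}")              # covers method + request URI
    return md5_hex(f"{ha1}:{nonce}:{ha2}")

# The origin server expects a response computed over ITS resource URI...
expected = digest_response("alice", "W3C", "secret", "GET",
                           "/protected/page.html", "abc123")
# ...but a browser challenged via the validator computes the response over
# the validator's own request URI instead:
given = digest_response("alice", "W3C", "secret", "GET",
                        "/check?uri=http%3A%2F%2Fexample.org%2Fprotected%2Fpage.html",
                        "abc123")
assert expected != given   # same credentials and nonce, different URI
```

Because HA2 covers the request URI, merely relaying the challenge and response strings (as the validator does for Basic auth) can never make the two hashes line up.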
Received on Monday, 22 January 2007 04:21:04 UTC
global_05_local_5_shard_00000035_processed.jsonl/23541
RE: Is there an NCBI taxonomy in OWL ? From: andrea splendiani \(RRes-Roth\) <[email protected]> Date: Thu, 26 Feb 2009 00:13:20 -0000 Message-ID: <C20D81024E1CAE4893D45B58FA0FFF6401101053@rothe2ksrv1.rothamsted.bbsrc.ac.uk> To: "Erick Antezana" <[email protected]> Cc: "public-semweb-lifesci hcls" <[email protected]>, "Vladimir Mironov" <[email protected]>, "Martin Kuiper" <[email protected]> Hi, Yes, I already had downloaded your files ;) From a first inspection, they seemed to contain a reduction of the NCBI Taxonomy (in terms of properties). I was playing as well with your system, actually at: but the query:
select ?x where { ssb:NCBI_4530 rdfs:label ?x . }
Gets in timeout all the time. Shouldn't be such a pretentious query, I guess, even if you compute transitiveness. I used to be familiar with biotop, 'till about one year ago. But I wasn't aware of the fact that there is an off-the-shelf taxonomy there. -----Original Message----- From: Erick Antezana [mailto:[email protected]] Sent: 25 February 2009 20:51 To: andrea splendiani (RRes-Roth) Cc: public-semweb-lifesci hcls; Vladimir Mironov; Martin Kuiper Subject: Re: Is there an NCBI taxonomy in OWL ? Hi Andrea, you can find an RDF at and play with it from: I would also recommend taking a look at the work of Schulz et al: Andrea Splendiani wrote: > Hi, > I was looking for an NCBI Taxonomy in OWL, but I didn't find it (or > better, could find fragments from other projects...) > What is strange though, is that on the obo foundry website > (berkeleybop.org/ontologies) there are notes on the ncbi taxonomy > representation in owl... but not the representation itself. > Does anybody have some hint about where I can find an OWL version ? Or > even an RDF version ? Even better would be a sparql endpoint containing > it... > best, > Andrea Splendiani Received on Thursday, 26 February 2009 00:14:08 UTC
Re: Proposed issue: What does using an URI require of me and my software?
Date: Fri, 03 Oct 2003 15:40:03 -0400 (EDT)
Message-Id: <[email protected]>
To: [email protected]
Cc: [email protected], [email protected], [email protected]

From: "John Black" <[email protected]>
Subject: RE: Proposed issue: What does using an URI require of me and my software?
Date: Fri, 3 Oct 2003 15:11:47 -0400

> > From: pat hayes [mailto:[email protected]]
> > Sent: Friday, October 03, 2003 1:48 PM
> > To: LYNN,JAMES (HP-USA,ex1)
> > Cc: [email protected]
> > Subject: RE: Proposed issue: What does using an URI require of me and my software?
> >
> > Not naive at all, right on the button. Like, what problem are we setting
> > out to solve here? What might go wrong that our declarations of Policy
> > and Correct Architecture and so on are aiming to prevent? I for one am
> > completely unclear what the issues are supposed to be that so concern us
> > here, and I am extremely worried that we will make declarations based on
> > mistaken ideas about meaning rather than on any actual problems.

> Ok. ACorp creates a acorp:uri123 which is a serial number of one of its
> acorp:StandardWidget, which is the product ID of its standard widget and
> has property listPrice = $2.00 according to its ontology acorp:catalogue.
> BCorp, thru their sw-agent, buys a batch of these including acorp:uri123.
> Now BCorp turns around and sends the batch to CCorp's sw-agent with an
> RDF invoice that states that acorp:uri123 a ACorp:DeluxeWidget. CCorp can
> verify that the list price of a ACorp:DeluxeWidget is $10.00 and happily
> pays BCorp their asking price of $5.00.
>
> Now the RDF invoice used two of ACorp's URIs to commit fraud. Those URIs
> belong to ACorp and it was never ACorp's intention that acorp:uri123 be
> called anything other than a acorp:StandardWidget. How could ACorp make
> this clear to CCorp? One solution would be to publish at acorp:uri123 the
> statement, this is <> a acorp:StandardWidget.
>
> Note that this is a boring, trivial example. There is no inference,
> semantic search, or other sw-interesting ideas in it. I'm using it to
> point out that URIs have social meanings that will become represented and
> communicated by the Semantic Web.

BCorp lied. So what? Do you really expect the Semantic Web to prohibit lying? CCorp accepted the information that BCorp gave it. Do you really expect the Semantic Web to educate fools?

The issues are whether CCorp has to trust the information it gets from BCorp and whether CCorp can determine whether BCorp is telling the truth. In a situation where information about a URI need not be gathered from ``the standard place'' I don't see any reason why CCorp could not go to ``the standard place'' in ACorp's web site to determine whether the information it is getting from BCorp follows from the information available from ACorp. I similarly don't see any reason (except for the extremely limited expressive power of RDF) why CCorp could not determine whether BCorp's information is inconsistent with ACorp's information. CCorp is free to do this, or not.

All the above is in the most simple case, where one would expect that using information consistently would be most desirable, yet there seems, to me, no requirement that everyone has to use the same authoritative information. There may be a cost to doing something else, but there also may be benefits.

For example, suppose ACorp put up pricing information for its widgets? How could anyone sell ACorp's widgets for a different price if everyone had to use ACorp's information about its widgets? Or suppose that ACorp created acorp:invoiceuri3.14159 which has all the right stuff hanging off it to look like a valid invoice saying that ACorp sent 1000 widgets to CCorp for the total price of $2000. If everyone has to believe ACorp about what its URIs mean/denote then how can CCorp even tell anyone that ACorp is lying? This information will have to use ACorp's URIs and thus will be infected by ACorp's lies.

Peter F. Patel-Schneider

Received on Friday, 3 October 2003 15:40:31 UTC
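The check Patel-Schneider describes - fetching ACorp's own published statements and testing whether BCorp's claim follows from, or contradicts, them - can be sketched with a toy triple set. All data below comes from the thread's hypothetical; the single-type "contradiction" rule is an illustrative simplification, since RDF itself allows a resource to have several types:

```python
# ACorp's published statements and BCorp's invoice claim, as (s, p, o) triples.
acorp_triples = {
    ("acorp:uri123", "rdf:type", "acorp:StandardWidget"),
    ("acorp:StandardWidget", "acorp:listPrice", "2.00"),
    ("acorp:DeluxeWidget", "acorp:listPrice", "10.00"),
}
bcorp_claim = ("acorp:uri123", "rdf:type", "acorp:DeluxeWidget")

def follows_from(claim, source):
    """In this toy model a claim 'follows' only if the source asserts it."""
    return claim in source

def contradicts(claim, source):
    """A type claim 'contradicts' the source if the source asserts a
    different type for the same subject (a deliberate simplification:
    real RDF needs disjointness axioms to make this an inconsistency)."""
    s, p, o = claim
    return p == "rdf:type" and any(
        (s2, p2) == (s, p) and o2 != o for (s2, p2, o2) in source)

assert not follows_from(bcorp_claim, acorp_triples)   # ACorp never said it
assert contradicts(bcorp_claim, acorp_triples)        # ACorp said otherwise
```

CCorp is free to run such a check against "the standard place", or to skip it - which is exactly the point of the reply.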
Re: Web IDL syntax
From: Cameron McCormack <[email protected]>
Date: Tue, 30 Jun 2009 17:07:22 +1000
To: [email protected], Ian Hickson <[email protected]>, Shiki Okasaka <[email protected]>
Message-ID: <[email protected]>

Cameron McCormack:
> Following are my half baked proposals.

I’ve now baked all of these proposals into the spec, except for the one about allowing multiple module levels with a module declaration (i.e., ‘module a::b::c’).

* Made ‘in’ optional
* Dropped [ImplementedOn] in favour of an ‘implements’ statement
* Changed "Object", "TRUE" and "FALSE" to lowercase
* Dropped [Optional] and [Variadic] in favour of ‘optional’ and ‘...’
* Dropped [ExceptionConsts] in favour of allowing constants to be defined directly on exceptions
* Replaced [Callable], [IndexGetter], [Stringifies], etc. with real IDL syntax
* Changed [NameGetter=OverrideBuiltins] into [OverrideBuiltins]
* Renamed DOMString to string:

If you’re writing a dependent spec and need help changing your IDL to match the changes I’ve made, let me know.

Two (perhaps more controversial? or at least undiscussed) changes I’ve made with this commit are to replace boxed valuetypes with the concept of nullable types, like there are in C#, and to remove null from the set of values string (née DOMString) can take. A while ago, there was some discussion about whether null should indeed be a member of that type. Jonas made a comment at one point about it being strange to have null be a valid DOMString value while having the default conversion behaviour being that a JS null value was treated as the string "null". So now authors of IDL will need to make a conscious decision about whether null is a valid value for attributes and operation arguments that take strings. The type ‘string’ doesn’t allow null, while the type ‘string?’ does:

  interface X {
    attribute string a;
    [TreatNullAs=EmptyString] attribute string b;
    attribute string? c;
    [TreatUndefinedAs=EmptyString] attribute string d;
    [TreatUndefinedAs=Null] attribute string? e;
  };

  x.a = null;      // assigns "null"
  x.a = undefined; // assigns "undefined"
  x.b = null;      // assigns ""
  x.c = null;      // assigns null
  x.c = undefined; // assigns "undefined"
  x.d = undefined; // assigns ""
  x.e = undefined; // assigns null

(Oh yeah, I renamed [Null] and [Undefined] to [TreatNullAs] and [TreatUndefinedAs] to give them more descriptive names.) The issue of whether these are the right defaults is still open. I haven’t had time to finish detailed testing to see whether defaulting to stringification is the best.

Cameron McCormack:
> > An alternative would be to reverse the omission of methods, so that
> > “getter” on an operation would always have both the getter.

Ian Hickson:
> I prefer "omittable" because it would mean I wouldn't have to say "and the
> setter works like this other method" in prose all the time.

I’ve done it this way.

> > If we are breaking syntax, then it seems more compelling to make
> > “DOMString” be “string”.
> >
> > Maybe we could drop the “in” keyword. Seems better to stick with
> > plain “in” arguments, for compatibility across language bindings,
> > than to also allow “out” and “inout” ones.

> I'd vote for not changing these, because we already have a lot of IDL out
> there and it would be a pain to fix it all.

I tried changing DOMString to string and liked the look of it, so I’m leaving it in for now. There isn’t much Web IDL content out there yet, so I think we’re still at a stage where it’s manageable to change. If you need help changing this (and the other syntax changes) in HTML 5, let me know and I’ll supply a patch against

> Regarding 'implements' (heycam and I talked about this on IRC recently;
> I just wanted to get some notes down on the record):
> There are three use cases that need covering:
> - inheritance (e.g. Node -> Element -> HTMLElement -> HTMLAnchorElement)
> - interfaces that are to be implemented by many other objects (e.g. EventTarget)
> - interfaces that are defined across multiple specs (e.g. Window,
>   WorkerUtils, HTMLBodyElement's attributes and methods being separated
>   from its deprecated attributes and methods)
> The first is handled by ':', the second is handled by 'implements'. I
> think we need the third also.

Haven’t got to this one yet.

Shiki Okasaka:
> Can we make "in" optional so that new interfaces can be defined
> without using "in"? It seems very easy to forget to specify "in" for
> each parameter in Web IDL.

OK, done.

Cameron McCormack ≝ http://mcc.id.au/

Received on Tuesday, 30 June 2009 07:08:09 UTC
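The coercion behaviour McCormack tabulates for interface X can be modelled in a few lines. In this Python sketch, UNDEFINED is a stand-in sentinel for JavaScript's undefined, and the rules cover only the cases listed in the message (a draft-era spec, so this is illustrative rather than normative):

```python
UNDEFINED = object()  # stand-in for JS `undefined` (None plays JS `null`)

def coerce_string(value, nullable=False,
                  treat_null_as_empty=False, treat_undefined_as=None):
    """Simulate the Web IDL string coercions described in the message.
    treat_undefined_as: None (default stringify), "EmptyString", or "Null"."""
    if value is UNDEFINED:
        if treat_undefined_as == "EmptyString":
            return ""
        if treat_undefined_as == "Null" and nullable:
            return None
        return "undefined"
    if value is None:
        if treat_null_as_empty:
            return ""
        if nullable:
            return None
        return "null"
    return str(value)

# The seven assignments from the example interface X:
assert coerce_string(None) == "null"                                      # x.a = null
assert coerce_string(UNDEFINED) == "undefined"                            # x.a = undefined
assert coerce_string(None, treat_null_as_empty=True) == ""                # x.b = null
assert coerce_string(None, nullable=True) is None                         # x.c = null
assert coerce_string(UNDEFINED, nullable=True) == "undefined"             # x.c = undefined
assert coerce_string(UNDEFINED, treat_undefined_as="EmptyString") == ""   # x.d = undefined
assert coerce_string(UNDEFINED, nullable=True,
                     treat_undefined_as="Null") is None                   # x.e = undefined
```

Writing the table out this way makes the open design question concrete: the default behaviour (stringify to "null"/"undefined") is the case most likely to surprise authors.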
Re: p:pipeline
From: Rui Lopes <[email protected]>
Date: Mon, 24 Jul 2006 12:03:11 +0100
Message-ID: <[email protected]>

Jeni Tennison wrote:
> I don't understand (I said no nested pipelines). If someone has a
> pipeline document at foobar.xp like:
>
>   <p:pipelines xmlns:p="...">
>     <p:pipeline name="foo">
>       ...
>     </p:pipeline>
>     <p:pipeline name="bar">
>       ...
>     </p:pipeline>
>   </p:pipelines>
>
> then you can use either the 'foo' or 'bar' pipeline by including
> foobar.xp into your pipeline document, and referencing them in a step:
>
>   <p:pipelines xmlns:p="...">
>     <p:import href="foobar.xp" />
>     <p:pipeline name="baz">
>       ...
>       <p:step kind="foo">...</p:step>
>       <p:step kind="bar">...</p:step>
>       ...
>     </p:pipeline>
>   </p:pipelines>

I tend to like this approach, but I believe that most of the time users won't need to define multiple pipelines in the same file. It might be a 15% case, probably. For the other cases (85%), the user shouldn't need to type extra tags (i.e. <p:pipelines>). And this feature would require us to define which pipeline is triggered automatically (like makefile's and ant's target "all"). Do we really need to cope with these issues to ease a 15% case? I'm not sure.

> As I said in my previous mail, I don't (at the moment) see the
> requirement for pipelines that are local to other pipelines. Why not
> have *only* 'global' pipelines? (Analogy with XSLT: we don't let people
> define templates inside other templates or functions inside other
> functions.)

I believe local pipelines are analogous to XSLT's named templates and functions, as these may be called inside other templates.

> If you have a bunch of components that you use regularly, you can put
> all the definitions in a pipeline module and import it into other
> pipeline modules.
>
> If we do have a language for declaring non-pipeline components, we're
> going to have to address issues such as:
> - providing mechanisms for pointing to definitions in different
>   programming languages, including handling things like classpaths
> - dealing with situations where different definitions are provided in
>   different programming languages, and enabling the implementation to
>   choose between them
>
> This could get quite sticky... Perhaps the fact that you used an
> extension attribute (my:javaClass) indicates that you think only the
> basics should be part of XProc, and the rest implementation-defined?

If we allow implementation-specific issues to be configured in this type of file, maybe we should leave it outside the scope of XProc.

Received on Monday, 24 July 2006 11:03:32 UTC
Re: Difficulties with URI="" and IDREF
From: Andreas Schmidt <[email protected]>
Date: Tue, 04 Jan 2000 10:04:48 +0100
Message-ID: <[email protected]>
To: XMLDSig WG mailing list <[email protected]>

John Boyer wrote:
> Also, IDREF is usually used in conjunction with URI="". URI="" is
> to indicate the root of *this* document, but there is still not enough
> information to tell us how to generate the byte stream that will be
> digested. Fortunately, URI="" cannot be used alone since such a
> would break as soon as the signature value is added to the document.
> must be used in conjunction with either IDREF or an XPath transform.

Either that or it is core behavior to omit the contents of SignatureValue in that case. The spec should define that, but I can't find anything about it in [1], in sec. 2.3/3.3.3 nor 6., or have I missed it?

Btw two minor editing points:

1. sec. 2.3 defines URI/IDREF as exclusive alternatives <Reference (URI=|IDREF=)? Type=?> in contrast to sec. 3.3.3
2. DTD in sec. 3.3.3. still uses 'ObjectReference'.

[1] http://www.w3.org/Signature/Drafts/WD-xmldsig-core-20000104/

Received on Tuesday, 4 January 2000 04:03:52 UTC
Re: XML version number via DOM
From: Martijn Pieters <[email protected]>
Date: Thu, 22 Mar 2001 09:19:22 +0100
To: Dylan Schiemann <[email protected]>
Cc: Jeff Yates <[email protected]>, [email protected]
Message-ID: <[email protected]>

On Wed, Mar 21, 2001 at 07:16:07PM -0800, Dylan Schiemann wrote:
> --- Jeff Yates <[email protected]> wrote:
> > At the beginning of every XML document there is a
> > <?xml version="1.0" ?>
> > tag. Is there a way to get this version number from within DOM? I know
> > at this time there is only one version number, but in the future there
> > may be more.
>
> I would think that it would be an attribute of that node. So
> document.childNodes[index].getAttribute("version")
> would do the trick, assuming a correct implementation.

A correct implementation does not model the <?xml version="1.0"?> tag; it is not a Processing Instruction (even though some DOM implementations have modelled the XML declaration as a PI node). Besides, a PI doesn't have attributes.

This isn't available in DOM Level 2 at all. DOM Level 3 does give access to the information given in the XML declaration through attributes of the Document interface:

Martijn Pieters | Software Engineer
mailto:[email protected] | Digital Creations
http://www.digicool.com/ | Creators of Zope http://www.zope.org/

Received on Thursday, 22 March 2001 03:19:30 UTC
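Since the XML declaration is plain text rather than a DOM node, its version pseudo-attribute can be read straight off the serialized document. A minimal sketch: the regex loosely approximates the XMLDecl production of XML 1.0 §2.8, and the result mirrors what DOM Level 3 later exposed as Document.xmlVersion (a missing declaration implies version 1.0):

```python
import re

# Match <?xml version="1.x" ...?> at the very start of the document.
# The version value is a *pseudo*-attribute: it lives in the declaration,
# not on any node a DOM Level 2 tree would contain.
XML_DECL = re.compile(r"^<\?xml\s+version\s*=\s*([\"'])(1\.[0-9]+)\1")

def xml_version(document_text: str, default: str = "1.0") -> str:
    m = XML_DECL.match(document_text)
    return m.group(2) if m else default  # no declaration implies XML 1.0

assert xml_version('<?xml version="1.0" encoding="UTF-8"?><a/>') == "1.0"
assert xml_version("<?xml version='1.1'?><a/>") == "1.1"
assert xml_version("<a/>") == "1.0"
```

This is a toy extractor for illustration, not a replacement for a real parser - but it makes the point of the message concrete: getAttribute("version") on a tree node can never work, because the value was never in the tree.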
Re: List elements (was: Tree Presented Lists)
From: Daniel Hiester <[email protected]>
Date: Tue, 24 Jul 2001 10:49:43 -0700
Message-ID: <002201c11469$076892e0$1226b3d1@sol>

Indentation could be achieved via stylesheets, actually. But, yes, the status quo on list indentation is to just nest a new list.

And <list type="tree"> was sort of discussed here earlier, until I decided to ask the style forum about the aspect of making the tree appearance a function of stylesheets, as the type attribute from HTML 4 is deprecated. No one has answered my question yet.

If something like (for example) XHTML 2.0 is parsed as XML, and the namespace/schema/DTD is parsed by the browser, does that mean that one could create a generic LIST element, and the valid XML-parsing UA would be able to understand the LIST element? Or what if someone wrote it into an XHTML 1.1 module? Would an XML-parsing UA be able to parse a LIST element? Or is it still the status quo, where all parsing is done as per the programming done by the UA vendor, and loading the namespace/schema/DTD is irrelevant?

But the thought hit me last week that it seems silly to have multiple list elements. I could understand the DL element being justified, but in specific, having both UL and OL sounds to me like the element-heavy, tag-soup-friendly HTML, and not the structure-heavy, stripped-down, simplified XHTML that I have come to appreciate. I wouldn't advocate deprecating both of them in favor of a brand-new LIST element, but I am confident that one could already use just OL or just UL (whichever tickles your fancy), and use stylesheets to control how you want it to look. I don't think it makes a huge difference whether you state in the markup that it's an ordered list or an unordered list, because the UA will still only render the list in whatever order the author puts it. It's not as though the UA looks at the list items in a UL and says, "Hmmm, this is an unordered list. Let's randomize the order of the list items!"
Received on Tuesday, 24 July 2001 13:41:54 UTC
Re: Http request high cpu
From: Akritidis Xristoforos <[email protected]>
Date: Fri, 26 Nov 2004 12:46:45 +0200
To: Steinar Bang <[email protected]>
Cc: [email protected]
Message-Id: <[email protected]>

> libwww is a unique library. But it hasn't been maintained by anybody
> for two years, and that's starting to show.

I devoted a full man-month to trying to understand libwww, writing a function of intermediate complexity (supporting POST requests, XML parsing, authentication etc.) and figuring out that the problems I encountered were due to libwww and not my code. I switched to libcurl, rewrote the same code in a week and haven't looked back.

Perhaps this library has some features that libcurl doesn't, though I didn't have to use them for my needs. However, for libwww to survive, someone has to focus on these features and let libcurl do the rest. Personally, I can't find any reason to suggest libwww to anyone.

Christopher Akritides
Hellas On Line, Greece

Received on Friday, 26 November 2004 15:05:52 UTC
Re: MathML3 specification is inconsistent about qualifier content of non-strict constructors
From: David Carlisle <[email protected]>
Date: Fri, 23 Mar 2012 13:34:04 +0000
Message-ID: <[email protected]>
To: Andrew Miller <[email protected]>
Cc: [email protected]

On 20/03/2012 02:34, Andrew Miller wrote:
> Hi all,
> In most of the MathML 3 specification, the content model is described
> using the 'content' field, and if qualifiers are allowed in the content,
> this is mentioned in the 'Content' row of the table.
> Throughout most of the specification, mentioning something in the
> 'Qualifiers' row for an element doesn't imply that the qualifier can be
> a child element of that element, and instead implies that the qualifier
> is used with the parent apply element - this is explicitly stated in the
> last paragraph of 4.1.5.
> However, when constructor elements are defined (for example, in section
> , qualifiers are listed in the Qualifier row, but not in the
> Content row. The examples (and the 'Parsing MathML' appendix, and the
> transformation rule in ), however, contradict the lack of the
> qualifiers in the element content, so it seems that the omission of the
> qualifiers from the 'Content' rows of the constructors must have been
> accidental.
> Best wishes,
> Andrew

I think the tables are OK, but perhaps we should just explain them better.

If viewed through OpenMath (or Strict Content MathML) eyes

is just syntactic sugar for

as part of that syntactic re-arrangement qualifier elements that would have been in the apply are placed directly in the container.

So, I think it is consistent (or at least it was intentional) to use the "Qualifier" row to describe qualifiers that can be used with the containers, even though in the non-strict form they appear in the content.

I agree that last para of 4.1.5 could perhaps be clearer in calling out the different usage of qualifiers with container elements.
It does however reference 4.3.3, which says:

  "Qualifier elements are always used in conjunction with operator or
  container elements. Their meaning is idiomatic, and depends on the
  context in which they are used. When used with an operator, qualifiers
  always follow the operator and precede any arguments that are present."

Hopefully that makes things clearer.

The Numerical Algorithms Group Ltd is a company registered in England and Wales with company number 1249803.

Received on Friday, 23 March 2012 13:34:38 UTC
RE: [CSS3] General question about CSS3 vendor prefixes
From: Brian Manthos <[email protected]>
Date: Thu, 28 Apr 2011 22:46:13 +0000
To: Mark Ayers <[email protected]>, "[email protected]" <[email protected]>
Message-ID: <FA122FEC823D524CB516E4E0374D9DCF19D11F83@TK5EX14MBXC132.redmond.corp.microsoft.com>

> These attributes force web developers to either not advertise standards compliance
> (losing business), use images (takes much longer to load, also losing business), or
> make a less aesthetically pleasing website (losing business).

I would state it differently.

If a website wants standards compliance, it should only use features that are at least in CR - and use them without prefixes. If a website wants standards compliance and "must have" not-ready-for-CR features, then prefixes are a required tax for using such features.

As others have discussed at length in this thread and others, there are reasons why we have the system we have. One of them that should concern you involves changes to the spec.

Suppose you use a not-ready-for-CR feature without prefix and some "misbehaving" browser allows you to do so. All "behaving" browsers will not. Your site starts out as "seeming compliant" ("look ma, no prefixes in the markup") even though it only works in that "seemingly cutting edge, but actually misbehaving" browser.

Now the spec changes and reaches CR. This "now actually cutting edge" formerly misbehaving browser auto-updates to match the CR spec - unprefixed before and after the conversion. Your site breaks for customers that auto-update that browser, and remains working for those that don't.

The spec gets blamed for changing. The browser gets blamed for "breaking" you. Your site gets blamed for tormenting users. The web as a whole gets criticized for not having solved the core problem after over a decade. Bad situation for all involved.

The moral of the story:

1. Don't use prefixes in production sites. Use them to explore and participate in the development of "next" *for future use*, rather than to introduce customer issues with your shopping cart *today*.

2. If the spec is not in CR, the feature isn't ready for production use. Patience is a virtue.

Received on Thursday, 28 April 2011 22:46:43 UTC
Re: New URI scheme talk in RSS-land
From: Norman Walsh <[email protected]>
Date: Fri, 05 Dec 2003 17:30:23 -0500
To: [email protected]
Message-id: <[email protected]>

/ "Bullard, Claude L (Len)" <[email protected]> was heard to say:
| 2. People who click on things are used to getting
| back a page or opening a dialog. Autosubscribing
| based on a click seems like a bad idea. It doesn't

It would also be a violation of web architecture as GETs are supposed to be safe :-)

| pass the Don't Shock The Monkey test. It seems like
| a better idea not to subscribe, but to open a dialog
| with that value with a Subscribe option on it. Otherwise,
| accidental clicks cause problems. And since some
| browsers render www.t as a hypertext link control as well,
| a guessing game goes on regards defaults.

I would guess that what Tim expects is for the click to bring up his RSS reader of choice, displaying the content of the RSS file clicked on, and that the application would provide a "subscribe" button.

Be seeing you,

Received on Friday, 5 December 2003 17:35:29 UTC
Notes from my review of the 'state in web application design'..
Date: Sun, 11 Jun 2006 19:28:37 -0700
Message-ID: <7D6953BFA3975C44BD80BA89292FD60E04E656A5@cacexc08.americas.cpqcorp.net>
To: <[email protected]>

A few notes.. Many are editorial :)

1) Abstract: extra period at the end.
2) You should put a link to FOLDOC; since you're citing it as a reference, a hyperlink would be more appropriate.
3) Not sure how 'hardware things' in section 2 is relevant. You do talk about hardware for balancing and architectural implications, but hardware/state is still not clear.
4) Section 2: you never explain how a network message could have state in itself.
5) Section 2: 'Most interesting,...' reads funny.
6) Section 3.1: "the web browser is 'one half'".. Maybe 'a major component' would be a better reference than 'one half'.
7) Section 3.1: I'm not sure that an application that stores state is therefore stateful.. (your reference to the browsers).
8) I'd remove the Firefox reference; it will date the document.
9) Section 3.2: you refer to WWW servers as stateless; if a browser is stateful because it stores state, wouldn't WWW servers also be stateful because they manage state?
10) Section 3.2 typo: "Manages states is a server" should be "managers state resides on a server"?
11) Section 3.2 typo: ". With no opening quote.
12) Section 3.2: you seem to lose the point at the end.. May be over-edited?
13) I still think Roy's point is valid, but you're extending it. I don't see Roy's point and your paper in conflict per se. You may want to re-word this to explain "while Roy's point is valid, this paper
14) In 4.1: What is JAX-RPC? (a reference would be helpful).
15) In 4.1: "there are no standards" could be "there are currently no standards" :)
16) In 4.2: you may want to point out that session IDs should not be
17) In 4.2: you refer to account numbers or personal information. I like to define this as any user-identifiable information (including name, phone, SSN, company, etc.).
18) In 4.2: you may want to mention session cookies vs sticky
19) In 4.3: "middleware of some kind" could be "a middleware layer or an n-tiered architecture".
20) In 4.3: you may want to define 'cheaper' the first time as less resource-intensive.
21) 4.3 still has two TBDs which should be completed.
21) 4.3: you call out 'linear decay'; I believe it's more important that it's a 'predictable decay'.
22) 4.4: I disagree that networks are inherently unreliable. They are no more unreliable than a computer system or than software. We design for redundancy in software, hardware and networks. In fact, the networks play a major role in reliability.
23) 4.4: 'Related to reliability' is an odd design paragraph. You must design for active fail-over to achieve reliable systems and consistent user experiences. Having to 'drain' a system implies that you don't have fail-over.
24) 4.4: I'd remove the 'on the other hand'.. Seems to substantially weaken the paragraph.
25) 4.5: TBD
26) Section 5: the flowcharts look good, but the starts and ends need to be defined. Also, they don't print well because they're too wide; you could split this into two diagrams and table them side by side so they would view the same but print better.
27) Section 5: why does the browser store 'username/password'? This is only in HTTP authentication and not in the broader sense unless forms completion is turned on.
28) Section 8.1: "When any banking URI is requested, the username/password features of http are used, usually implemented as a pop-up window".. I've never seen a bank do this; it wouldn't pass federal regulators since the user/password are transmitted in the clear (I know, I'm working on it :). It's probably worth pointing out that HTTP authentication in this manner is not secure.
29) Section 8.4.. I don't like it. Sending the account number in the URI makes two-factor authentication now a single factor. The repeatable URI can be derived from session state as well. For example: http://mybank.com/checks/12345 could be a URI to a given check, but the bank account number could be taken from the session instead of the URI. This may violate the one-URI-for-one-resource principle (since another user with this same URI would see a different check) but it allows for bookmarking.
30) Section 9.3: need to point out the importance of HTTPS with web services as well. Embedding information in a SOAP document and then transferring it over the wire with normal HTTP would expose all content to the network.
31) Section 10: I'm still looking for the 'preferred method', or at least some sort of statement that some are bad, leaving these as OK and when they should be used.

Sorry for the long list. It's a long document and it's really very good.

Received on Monday, 12 June 2006 02:28:47 UTC
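The design in note 29 - keep the check number in the URI, but take the account from the authenticated session - can be sketched in a few lines. All session IDs, account numbers, and data below are invented for illustration; a real application would sit behind proper authentication and return HTTP status codes rather than None:

```python
# Toy model of note 29: /checks/12345 names only the check number; the
# account is resolved from the caller's session, so the same URI yields a
# different resource for different users.
SESSIONS = {"sess-A": "acct-111", "sess-B": "acct-222"}
CHECKS = {
    ("acct-111", "12345"): "check 12345 drawn on acct-111",
    ("acct-222", "12345"): "check 12345 drawn on acct-222",
}

def get_check(session_id, check_number):
    account = SESSIONS.get(session_id)
    if account is None:
        return None  # unauthenticated; a real app would answer 401
    return CHECKS.get((account, check_number))

# The same bookmarkable URI, requested under two different sessions:
assert get_check("sess-A", "12345") == "check 12345 drawn on acct-111"
assert get_check("sess-B", "12345") == "check 12345 drawn on acct-222"
```

This shows the trade-off the note names: the URI stays bookmarkable and never leaks the account number, at the cost of violating one-URI-one-resource, since the resource depends on who is asking.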
She won’t stop texting my boyfriend

Hi Meredith,

I am a 30-year-old woman dating a 42-year-old man (who I will call Jason). We have a wonderful relationship, communicate VERY well, and love each other deeply. We can play together and be serious when needed. We've been together for over a year and living together for the last nine months or so. All in all, the perfect match. So what's the problem? Well Meredith, it's another woman (cue the dramatic music).

Here's some background: Jason started a new job last year, and with the position came new coworkers and colleagues. One coworker is a woman in her early 30s who he works somewhat closely with, but his position is superior to hers. My issue is that this woman (who is single) texts him during off-work hours. Their conversations revolve around personal things, not work-related topics. Nothing incredibly personal but it's still clear she's reaching out just for an excuse to talk. I realize that when you work with someone closely you'll develop a relationship and get to know them, but her texts are downright flirty.

The other piece of information to note is that my boyfriend is REALLY good-looking -- and it's not just me being biased; he's one of those classically handsome men that turns heads in restaurants, and his blissful unawareness of his own good looks makes him all the more attractive to me. He is charming without even realizing it and I am pretty certain that she is interpreting his kindness as interest in her.

The texting started last fall. He admitted that it was a little odd she was texting and I let him know that it bothered me. He assured me that it was harmless, and she is just a lonely woman reaching out. However, it continued and although he didn't always mention it to me, I would peek at his phone and see she was still texting. I am very bothered by the fact that this woman knows he is in a relationship and continues to reach out at inappropriate times, like 9 pm on a Friday night.
It all came to a head this past New Year's Eve when she was drunk and repeatedly texting him and saying that she was going to call him at midnight, teasingly, "You better be awake old man!" I lost it and Jason and I had a blow-out fight which basically ruined our holiday. Just a side note -- he and I RARELY fight and this was a bad one.

I need you to know that he doesn't lie to me about it and he doesn't flirt back. When I look through the messages it is painfully obvious that this woman is initiating everything. But I find it wrong that he engages in it. He says that he doesn't want to confront her because it will be awkward at work and he doesn't want to make accusations that she could say are not true -- but this is what gets me: Why is my loving boyfriend caring more about making this woman comfortable and allowing this to go on, ultimately making his girlfriend feel completely disrespected?

I never considered myself to be a jealous person and I feel secure in my relationship, but I feel it is wrong for her to be sending him messages when he is lying in bed next to me. It's not about worrying that he is going to leave me for her because I don't believe that is the case. For me this is about the respect that you give another person you are dating. Meredith, readers, am I overreacting or does Jason need to stop this now?

– Feeling Disrespected, North Attleboro

It sounds like the real issue here is the amount of time your boyfriend spends on his phone, FD. Is Jason paying attention to these texts when he should be engaged in conversation with you? If he's lying in bed next to you, is he reading and responding to these messages? Did Jason text this woman back on New Year's Eve? If so, why?

I can't speak to this woman's intentions -- it's possible that she messaged 30 people at midnight on New Year's Eve -- but I will say that your boyfriend should be focused on you when he's in your company.
He can check his phone, of course, but he doesn't have to respond to unimportant, chatty text messages right away -- or ever. He doesn't even have to read them if he doesn't feel like it. Perhaps it's best to explain to him that you'd be less annoyed with this woman's messages if he wasn't focused on them during your time together. If he wasn't playing with his phone so much, you probably wouldn't think about her at all.

I agree -- the New Year's message does sound ... flirty. But according to you, everybody wants to flirt with your boyfriend. That's fine, as long as Jason isn't distracted by that kind of attention when he's with you. If you're going to talk/fight about anything, make it that.

Readers? Is this about the woman or the phone? Should he tell this woman to stop texting? Will she stop on her own if he doesn't respond? What should the letter writer do? Help.

– Meredith
National News - Lubbock Online.com

Distraught Vietnam vet chooses suicide by cop

Published: Monday, June 08, 1998

MELVINA, Wis. (AP) - On Dec. 4, 1995, on Highway 27 just outside town, Mario Cenin held officer Wendell Howland at rifle-point for six minutes. When Howland rolled out of his car, troopers killed Cenin. The following transcript is from the police radio tapes:

HOWLAND: BE CAREFUL! He's got the thing pointed right at me. OK. In fact, he's locked and loaded and he's pointing right at me!
CAR 7: I got you. His first name should be Mario.
HOWLAND: That's confirmed. This man's a true American vet. He's a vet and I believe he's, what he's saying is true. He's got this gun pointed right at me.
CAR 7: Twenty, you just want to block off (highway) 27 right there in Melvina?
CAR 20: Ten-four.
CAR 7: All cars try to keep Channel One free for (Howland's car) 59.
HOWLAND: OK, this guy's got it pointed right at me and he's telling me if anybody makes any strange moves he's gonna shoot me. He says he's a good shot and he's gonna shoot. This guy wants a response from somebody or something. He's threatening to shoot me and he wants to hear something from the other people.
CAR 20: Twenty headquarters.
CAR 20: I got the wife. She says that he doctors with a Dr. Trainor, if it will make any difference, at the VA Hospital.
DISPATCHER: Ten-four, I'll make contact.
HOWLAND: Says he wants the reds and blues off. He says if the lights don't go out, he's going to shoot me. He's talking to me. If you have any information, relay it to me and I'll give it to him.
CAR 20: This Dr. Trainor is his friend. The doctor.
HOWLAND: OK, he doesn't want to see any strange movements. Somebody still had reds and blues on out there. He wants them off.
CAR 511: Tell him we have to leave one set on back here so no cars run into him.
HOWLAND: I already told him that.
CAR 23: Fifty-nine, I'm behind ya. Do you want me to make an approach?
CAR 20: I got traffic blocked from the south.
DISPATCHER: Fifty-nine, the VA is trying to locate Dr. Trainor.
CAR 550: Cut your headlights, Twenty, they're in my eyes.
CENIN: I wanna do ya, man! I wanna do ya, man! I can't hold this forever. I can't hold this position forever. You grab it ...
HOWLAND: I'm grabbing it and got it right here. He's got that gun right in my ribs folks!
CENIN: Now you tell 'em what I want.
HOWLAND: What do you want? What do you want, Mario?
CENIN: I want your gun.
HOWLAND: He wants my gun!
CENIN: And your backup.
HOWLAND: And my backup?
CENIN: Uh-huh.
HOWLAND: What backup? I don't have a backup gun.
CENIN: Hey man, you don't have a backup gun?
HOWLAND: No, I don't. They don't allow me to carry one.
CENIN: You get blown away, all right? Somebody better come up here and hand me a backup gun then.
HOWLAND: He wants our backup guns, if we have any!
CENIN: I want yours!
HOWLAND: He wants mine. I'm not carrying one.
CENIN: I want yours! I'm going to f--- put a round, I'm going to put a round. You understand me? You tell them I'm going to put a round. This f--- close to you now. Hey man, you only got a second, so you better think about it.
HOWLAND: What do you want me to do, Mario?
CENIN: I want you to f--- unholster your piece and put it on this counter, on this f--- briefcase. You got one one-thousand, two one-thousand, three one-thousand-u... .
CAR 23: Shots fired! Shots fired! Roll!
From: Laurent Sacaut
Date: Wed, 11 Jan 2006 17:13:09 -0500
Subject: NullPointerException when trying to use SqlBuilder.alterDatabase

Hello,

I am getting familiarized with DdlUtils using a MySQL 4.1x database. I am trying to alter an existing database so that new tables are created and new columns are added to existing tables. I am using changeDatabase(..) as shown in the API usage example. I keep getting an error at some point.
I traced the code and found out that it happens when SqlBuilder.alterDatabase(Database currentModel, Database desiredModel, CreationParameters params, boolean doDrops, boolean modifyColumns) is called, more specifically when params is null (which it is in my case). I narrowed it down to the createTable(desiredModel, desiredTable, params.getParametersFor(desiredTable)) call in that function. It does not go in, I guess because of the null params, but I could be wrong. It jumps out with the following message: java.lang.NullPointerException. There are no more details about the exception.

Is there a way around this or some settings I missed?


Laurent
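For what it's worth, the trace above is consistent with params being dereferenced unconditionally. Below is a minimal, hypothetical reconstruction of that call path -- the class and method names are modeled on the ones quoted in the message, not taken from DdlUtils source -- showing why a null CreationParameters throws and how a null guard avoids it:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for DdlUtils' CreationParameters: per-table parameter maps.
class CreationParameters {
    private final Map<String, Map<String, String>> byTable = new HashMap<>();

    Map<String, String> getParametersFor(String table) {
        return byTable.get(table);
    }
}

public class AlterDatabaseSketch {
    // The unguarded form, as the traced code apparently does:
    //   createTable(model, table, params.getParametersFor(table));
    // dereferences params and throws NullPointerException when params == null.

    // Defensive variant: only ask for per-table parameters when a
    // CreationParameters object was actually supplied.
    static Map<String, String> safeParametersFor(CreationParameters params, String table) {
        return (params == null) ? null : params.getParametersFor(table);
    }

    public static void main(String[] args) {
        // No NullPointerException even though params is null.
        System.out.println(safeParametersFor(null, "new_table")); // prints "null"
    }
}
```

Until the library guards this internally, a caller-side workaround may be to pass an empty CreationParameters object to alterDatabase instead of null.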
[Numpy-discussion] A case for rank-0 arrays
Sasha ndarray at mac.com
Sat Feb 18 11:50:08 CST 2006

I have reviewed mailing list discussions of rank-0 arrays vs. scalars and I concluded that the current implementation that contains both is (almost) correct. I will address the "almost" part with a concrete proposal at the end of this post (search for PROPOSALS if you are only interested in the practical part).

The main criticism of supporting both scalars and rank-0 arrays is that it is "unpythonic" in the sense that it provides two almost equivalent ways to achieve the same result. However, I am now convinced that this is the case where practicality beats purity. If you take the "one way" rule to its logical conclusion, you will find that once your language has functions, it does not need numbers or any other data type because they all can be represented by functions (see http://en.wikipedia.org/wiki/Church_numeral). Another example of core python violating the "one way rule" is the presence of scalars and length-1 tuples. In S+, for example, scalars are represented by single element lists.

The situation with ndarrays is somewhat similar. A rank-N array is very similar to a function with N arguments, where each argument has a finite domain (the i-th domain of a is range(a.shape[i])). A rank-0 array is just a function with no arguments and as such it is quite different from a scalar. Just as a function with no arguments cannot be replaced by a constant in the case when the value returned may change during the run of the program, a rank-0 array cannot be replaced by an array scalar because it is mutable. (See http://projects.scipy.org/scipy/numpy/wiki/ZeroRankArray for use cases.)

Rather than trying to hide rank-0 arrays from the end-user and treat them as an implementation artifact, I believe numpy should emphasize the difference between rank-0 arrays and scalars and have clear rules on when to use what. Here are three suggestions:

1.
Probably the most controversial question is what getitem should return. I believe that most of the confusion comes from the fact that the same syntax implements two different operations: indexing and projection (for lack of a better name). Using the analogy between ndarrays and functions, indexing is just the application of the function to its arguments, and projection is the function projection ((f, x) -> lambda (*args): f(x, *args)). The problem is that the same syntax results in different operations depending on the rank of the array. If

>>> x = ones((2,2))
>>> y = ones(2)

then x[1] is projection and type(x[1]) is ndarray, but y[1] is indexing and type(y[1]) is int32. Similarly, y[1,...] is indexing, while x[1,...] is projection. I propose to change numpy rules so that if ellipsis is present inside [], the operation is always projection and both y[1,...] and x[1,1,...] return zero-rank arrays. Note that I have previously rejected Francesc's idea that x[...] and x[()] should have different meaning for zero-rank arrays. I was wrong.

2. Another source of ambiguity is the various "reduce" operations such as sum or max. Using the previous example, type(x.sum(axis=0)) is ndarray, but type(y.sum(axis=0)) is int32. I propose three changes:

a. Make x.sum(axis) return ndarray unless axis is None, making type(y.sum(axis=0)) is ndarray true in the example.

b. Allow axis to be a sequence of ints and make x.sum(axis=range(rank(x))) return a rank-0 array according to rule 2.a above.

c. Make x.sum() raise an error for rank-0 arrays and scalars, but allow x.sum(axis=()) to return x. This will make numpy sum consistent with the built-in sum that does not work on scalars.

3. This is a really small change. Currently

>>> empty(())
>>> ndarray(())
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: need to give a valid shape as the first argument

I propose to make shape=() valid in the ndarray constructor.

More information about the Numpy-discussion mailing list
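Much of the scalar/zero-rank distinction argued about above can be inspected directly in a modern NumPy session. This is a sketch run against current NumPy, whose behavior has evolved since this 2006 post (for instance, ones() yields float64 rather than int32, and an ellipsis in the index does produce a zero-rank array, much as proposal 1 asks):

```python
import numpy as np

y = np.ones(2)

# Plain integer indexing of a 1-d array is "indexing": it returns
# an array scalar, not an ndarray.
s = y[1]

# With an ellipsis in the index -- the post's "projection" -- the
# result is a zero-rank (0-d) ndarray instead.
a = y[1, ...]

print(isinstance(s, np.ndarray), isinstance(a, np.ndarray))  # False True
print(a.ndim, a.shape)                                       # 0 ()

# The key practical difference the post leans on: zero-rank arrays
# are mutable, array scalars are not.
z = np.zeros(())
z[...] = 42.0
print(float(z))  # 42.0

# Proposal 2.b (axis as a sequence of ints) was eventually adopted:
# modern NumPy accepts a tuple of axes.
x = np.ones((2, 2))
print(x.sum(axis=(0, 1)))  # 4.0
```

Whether getitem should behave this way was, of course, exactly what was being debated in 2006; the session above only shows where NumPy eventually landed.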
David Cournapeau [email protected]...
Wed Sep 12 00:55:09 CDT 2007

David M. Cooke wrote:
> I work on it off and on. As you say, it's not trivial :-) It also has
> a tendency to be fragile, so large changes are harder. Something will work
> for me, then I merge it into the trunk, and it breaks on half-a-dozen
> platforms that I can't test on :-) So, it's slow going.
> I've got a list of my current goals at
> http://scipy.org/scipy/numpy/wiki/DistutilsRevamp.

Would some contribution help? Or is distutils such a beast that working together on it would be counterproductive? The things I had in mind were:

- add the possibility to build shared libraries usable by ctypes (I just have a problem to start: I do not know how to add a new command to distutils)
- provide an interface to be able to know how numpy/scipy was configured (for example: is numpy compiled with ATLAS, Apple perf libraries, GOTO, fftw, etc...)
- add the possibility to build several object files differently (one of your goals if I remember correctly)


More information about the Numpy-discussion mailing list
Clueless Kinsley

Back in the days when Michael Kinsley was the designated liberal on CNN’s “Crossfire” show, paired off against Pat Buchanan or Robert Novak, he would answer the complaints of actual liberals that he really wasn’t a liberal himself by agreeing with them. Kinsley was and still is a man of the cautious, corporate center, which means liberal on social and cultural issues and an Aspen/Jackson Hole corporate elitist on economics. Which is to say, while he’s a trenchant social critic, he hasn’t even noticed the bankruptcy of mainstream economics.

For evidence of this assertion, readers need look no farther than Kinsley’s column today, which ran in both the Los Angeles Times and Bloomberg News. In it, he attacks the Obama campaign for going after Mitt Romney for offshoring jobs—because, he argues, offshoring is really a good thing. Well, he doesn’t actually argue it. Instead, he simply asserts that “most economists believe in the theory of free trade, which holds that a nation cannot prosper by denying its citizens the benefit of cheap foreign labor.” Most economists also believed that their economic models were accurate right up until they failed to predict the current recession, but we’ll let that one pass.

Kinsley fails to consider the effect of offshoring and free trade not just on job creation but also on incomes in the United States. As Princeton economist Alan Blinder, who was vice chairman of the Federal Reserve in the mid-90s, has demonstrated, more than 40 million American jobs could be offshored, which has resulted in holding down or decreasing wages in those sectors. Kinsley has also failed to note that wages today are at their lowest level as a share of both corporate revenues and GDP since before World War II, and that the effects of foreign wage competition are a significant factor in that decline.
Kinsley disparages Obama’s campaign for “insourcing,” noting that “one nation’s insourcing is another nation’s outsourcing, and retaliation can quickly lead to a trade war in which everyone loses.” Perhaps Kinsley hasn’t noticed that most other nations, China most particularly, offer major subsidies to companies that will relocate to their shores, while our own government, still largely in the sway of Kinsley’s received beliefs, uniquely does not. When it comes to wooing companies, we’re the only wallflower at the dance. We don’t have a trade war in which everyone loses, as Kinsley fears. We have a trade war in which we alone are the losers—most importantly, vis-à-vis China.

Has Kinsley visited the Rust Belt lately? Noted the wage stagnation of the past three decades, or the wage decline of the past ten years, since the U.S. granted China permanent normalized trade relations? Is he sentient? Not, in answer to all these queries, by the evidence of his columns.
Yes We Camelot

Feel free to try this at home, but I guarantee you won't get anything out of it except a migraine. Imagine you've been a bit prematurely asked to fill a time capsule with telltale cultural artifacts of the Age of Obama—the evocative movies, TV shows, hit tunes, and other creative whatnots that will someday exemplify the ineffable atmosphere of our 44th president's first term.

Realizing nobody has called these times "The Age of Obama" since early 2009 should be your first clue that this is no easy job. Try to persevere, though. Um—J.J. Abrams's Star Trek reboot, maybe? Obama's partisans and detractors alike do dig comparing him to Mr. Spock. TV's Glee and Modern Family? Hey, how about posteverything pop diva Lady Gaga? That's the best I can do off the top of my head, and they're all a bit of a stretch.

The multiculti on steroids—OK, asteroids—of James Cameron's Avatar may be less of one. Yet the movie's conception predated not only Obama's presidency but George W. Bush's. Anyway, despite its billion-dollar box-office tally, Avatar's own cultural resonance turned out to have the lifespan of a daffodil.

To politicos, I know, this lacuna has as much significance as assessing the Obama presidency's so far negligible impact (correct me if I'm wrong, NBA fans) on rotisserie basketball. Much happier imagining it's in blissful Brigadoon rather than nasty Weimar at the multiplex, the American pop audience purely hates the idea of any connection between its tastes and current events. Trust me, any movie critic who wants to generate a bunch of abuse has only to write a piece suggesting a link exists.

So call it sheer masochism that the interplay between pop-culture gestalts and political trends has been my bread-and-butter topic for years, above all as a barometer of a presidency's success in capturing the public's imagination. For Obama, of all people, to generate so few reverberations in popland—pro or con—is a failed stimulus package if I ever saw one.
Remember, all he had to do to be transformative was get himself elected. Not only our first African American president, he's our first post-boomer one and our first real 21st-century one. Provoking, needless to say, a flood of optimistic, now wistful fantasies on the left and an unabating orgy of unhinged but potent paranoia on the right.  That's just why the dog that isn't barking interests me. Going by the silence, popland's verdict is that Obama's advent, struggles in office, and so forth are all kind of dull. Wouldn't you have at least expected the time was right to remake The Man Who Fell to Earth or The Brother From Another Planet?  In case your default reaction is "So what?", let me point out that any era-defining presidency has a cultural dimension by definition. They affect unrelated areas from haberdashery to cuisine. Consider how John F. Kennedy's administration, besides making it unsafe for American men to wear hats for the next 40 years, both helped set and was fueled by the frisky cultural tone of the early 1960s. Symbolic and/or symbiotic correlatives were everywhere, from James Bond movies to The Dick Van Dyke Show's substitute Jack and Jackie.  Unless Franklin Roosevelt’s counts, Ronald Reagan's presidency had no peers in its dominance of culture as well as politics. If anything, the return of guilt-free privilege that gave us yuppies—and hence, to beneficial effect so far as our national palate went, foodies, because even a stopped wok is right twice a day—was subtlety's contribution to the mix.  Back to the Future summed up Reaganism's heartland appeal as paradoxically transformable nostalgia. Top Gun's cockiness did the same for morning-in-America triumphalism. The Indiana Jones movies split the difference, and do I even need to bring up Rambo? Of the two signature TV dramas of the early '80s, Magnum, P.I., reaffirmed Vietnam as a righteous cause. 
Meanwhile, Hill Street Blues, its ostensibly "liberal" counterpart—yeah, right—lionized Sisyphean cops taking up the white man's tight-lipped burden here at home.  Affirmations aren't the whole story. The test of a culturally consequential presidency is how vital the White House's occupant is to shaping our collective dreams, including negatively. Along with the right-wing revanchism of Dirty Harry and Walking Tall—and Lynyrd Skynyrd's "Sweet Home Alabama," too—the Nixon years positively seethed with cultural analogues rooted in liberal antipathy. No doubt to creator Norman Lear's chagrin, TV's All in the Family split the difference once blue-collar audiences embraced bigoted Archie Bunker as their tell-it-like-it-is mouthpiece. The point is that popland couldn't ignore Tricky Dick, any more than it could ignore Bill Clinton 20 years later.  The Clinton era was unique in spawning revealing fantasies about fictional chief executives. Starring Bill as we dreamed he might be if only he weren't, well, Bill, the pro-Clinton The West Wing faced off against the dither-allergic, everything-Bill-wasn't cowboy presidents of Air Force One and Independence Day. Both those huge hits prefigured Bush 43 right down to the latter's sci-fi anticipation of 9/11. As for W. himself, his popland monument is The Dark Knight, which essentially has Batman act out the rationale for Bush's dark but—in the movie's view—necessary presidency.  I doubt it'll cost David Axelrod a nanosecond's sleep, because sideshows aren't his specialty. But the precedents for presidencies as disconnected from popland as Obama's seems to be aren't inspiring. That Jimmy Carter managed to preside over the era of disco, punk, Star Wars, and Me Decade hot-tub licentiousness without having a discernible connection to any of them even as a counterweight somehow confirms his ineffectuality. Whatever else he was, Reagan was obviously better at both harnessing and reshaping collective dreams.  
As for Carter's fellow feckless one-termer, George H.W. Bush, Twin Peaks is my favorite example of how little truck America's id—on good terms with its superego throughout Reagan's reign, FYI—had with Poppy's idea of the nation's business. When Bush unwisely tangled with The Simpsons and Dan Quayle denounced Murphy Brown's lax morals, both 41 and his Veep were unquestionably the underdogs in terms of popular allegiance, and so much for the bully pulpit. Whatever else he was, Clinton—our first president to frequent McDonald's in search of something other than votes—was obviously more fun, something Obama hasn't often been accused of lately.

The only reason you can't say Obama has already conceded the popland primary is that his potential 2012 rivals aren't in it yet. It's hard to picture Mitt Romney connecting with anything inflammatory in our subconscious, though you never know; The Dark Knight Rises comes out in July, and let's not forget that Bruce Wayne is a two-faced millionaire with magic undergarments. As for the other GOP contestants, they're already cartoons—the Pillsbury Doughboy with a will to power, libertarianism's answer to Rip van Winkle, Jerry Seinfeld's long-lost zealot twin, etc., etc. That may make cultural embroidery redundant.

Even so, between now and November, I'm going to be keeping tabs on which movies are surprise hits and which TV shows strike a nerve with the public, not just following the polls and the economy's numbers. Maybe Warren Beatty's Reds won the Best Picture Oscar the year of Reagan's inaugural, but the biggest hit of the summer before the Gipper got elected was The Empire Strikes Back. Want to guess which one I think was more politically eloquent, not to mention predictive?
The Full Wiki

Indigenous peoples of the Americas

The indigenous peoples of the Americas are the pre-Columbian inhabitants of North, Central, and South America, their descendants, and many ethnic groups who identify with those peoples. They are often also referred to as Native Americans, Aboriginals, First Nations, Amerigine, and by Christopher Columbus' geographical and historical mistake, Indians, now disambiguated as the American Indian race, American Indians, Amerindians, Amerinds, or Red Indians. Many parts of the Americas are still populated by indigenous Americans; some countries have sizeable populations, such as Bolivia, Peru, Paraguay, Mexico, Guatemala, Colombia, and Ecuador.

At least a thousand different indigenous languages are spoken in the Americas. Some, such as Quechua, Guaraní, Mayan languages, and Nahuatl, count their speakers in millions. Most indigenous peoples have largely adopted the lifestyle of the western world, but many also maintain aspects of indigenous cultural practices to varying degrees, including religion, social organization and subsistence practices. Some indigenous peoples still live in relative isolation from Western society, and a few are still counted as uncontacted peoples.

Scholars who follow the Bering Strait theory agree that most indigenous peoples of the Americas descended from people who probably migrated from Siberia across the Bering Strait, anywhere between 9,000 and 50,000 years ago. The time frame and exact routes are still matters of debate, and the model faces continuous challenges. A 2006 study reported that DNA-based research had linked DNA retrieved from a 10,000-year-old fossilized tooth from the Prince of Wales Island in Alaska with specific coastal tribes in Tierra del Fuego, Ecuador, Mexico, and California.
Unique DNA markers found in the fossilized tooth were found only in these specific coastal tribes and were not comparable to markers found in any other indigenous peoples in the Americas. This finding lends substantial credence to a migration theory that at least one set of early peoples moved south along the west coast of the Americas in boats. It also suggests there may have been waves of migration, which numerous scholars believe. But these results may be ambiguous, as there are other issues with DNA research and trying to affiliate biological and cultural groups.

Pre-Columbian era

[Map: Language families of North American indigenous peoples]

Remnants of a human settlement in Monte Verde, Chile dated to 12,500 years B.P. (another layer at Monte Verde has been tentatively dated to 33,000–35,000 years B.P.) suggest that southern Chile was settled by peoples who entered the Americas before the peoples associated with the Bering Strait migrations. It is suggested that a coastal route via canoes could have allowed rapid migration into the Americas.

The traditional view of a relatively recent migration has also been challenged by older findings of human remains in South America, some dating to perhaps even 30,000 years old or more. Some recent finds (notably the Luzia Woman in Lagoa Santa, Brazil) are claimed to be morphologically distinct from most Asians and are more similar to Africans, Melanesians and Australian Aborigines. These American Aborigines would have been later displaced or absorbed by the Siberian immigrants. The distinctive Fuegian natives of Tierra del Fuego, the southernmost tip of the American continent, are speculated to be partial remnants of those Aboriginal populations. These early immigrants would have either crossed the ocean by boat or traveled north along the Asian coast and entered America through the Northwest, well before the Siberian waves.
This theory is currently viewed by many scholars as conjecture, as many areas along the proposed routes now lie underwater, making research difficult. Some scholars believe the earliest forensic evidence for early populations appears to more closely resemble Southeast Asians and Pacific Islanders, and not those of Northeast Asia.

Scholars' estimates of the total population of the Americas before European contact vary enormously, from a low of 10 million to a high of 112 million. Some scholars believe that most of the indigenous population resided in Mesoamerica and South America, with approximately 10 percent residing in North America, prior to European colonization.

The Solutrean hypothesis suggests an early European migration into the Americas and that stone tool technology of the Solutrean culture in prehistoric Europe may have later influenced the development of the Clovis tool-making culture in the Americas. Some of its key proponents include Dr. Dennis Stanford of the Smithsonian Institution and Dr. Bruce Bradley of the University of Exeter. In this hypothesis, peoples associated with the Solutrean culture migrated from Ice Age Europe to North America, bringing their methods of making stone tools with them and providing the basis for later Clovis technology found throughout North America. The hypothesis rests upon particular similarities in Solutrean and Clovis toolmaking styles, and the fact that no predecessors of Clovis technology have been found in Eastern Asia, Siberia or Beringia, areas from which or through which early Americans are thought to have migrated.

American Indian creation myths tell of a variety of originations of their respective peoples. Some were "always there" or were created by gods or animals, some migrated from a specified compass point, and others came from "across the ocean".
Vine Deloria, Jr., author and Nakota activist, cites some of the oral histories that claim an in situ origin in his book Red Earth, White Lies, rejecting the Bering Strait land bridge route. Deloria takes a Young Earth creationism position, arguing that Native Americans actually originated in the Americas.

European colonization

[Map: Cultural areas of North America at time of European contact]

The European colonization of the Americas forever changed the lives, bloodlines and cultures of the peoples of the continent. The population history of American indigenous peoples postulates that infectious disease exposure, displacement, and warfare diminished populations, with the first the most significant cause. The first indigenous group encountered by Columbus were the 250,000 Taínos of Hispaniola, who were the dominant culture in the Greater Antilles and the Bahamas. In thirty years, about 70% of the Taínos died. Enslaved, forced to labour in the mines, and mistreated, the Taínos began to adopt suicidal behaviors, with women aborting or killing their infants and men jumping from the cliffs or ingesting manioc, a violent poison. They had no immunity to European diseases, so outbreaks of measles and smallpox ravaged their population.

The Laws of Burgos, 1512–1513, were the first codified set of laws governing the behavior of Spanish settlers in America, particularly with regard to native Indians. They forbade the maltreatment of natives and endorsed their conversion to Catholicism.

Reasons for the decline of the Native American populations are variously theorized to be diseases, conflicts with Europeans, and conflicts among warring tribes. Scholars now believe that, among the various contributing factors, epidemic disease was the overwhelming cause of the population decline of the American natives. After first contacts with Europeans and Africans, some believe that the death of 90 to 95% of the native population of the New World was caused by Old World diseases.
Half the native population of Hispaniola in 1518 was killed by smallpox. Within a few years smallpox killed between 60% and 90% of the Inca population, with other waves of European disease weakening them further. Smallpox was only the first epidemic. Typhus (probably) in 1546, influenza and smallpox together in 1558, smallpox again in 1589, diphtheria in 1614, measles in 1618—all ravaged the remains of Inca culture.

Smallpox had killed millions of native inhabitants of Mexico. Unintentionally introduced at Veracruz with the arrival of Pánfilo de Narváez on April 23, 1520, smallpox ravaged Mexico in the 1520s, killing 150,000 in Tenochtitlan alone, including the emperor, and was credited with the victory of Hernán Cortés over the Aztec empire at Tenochtitlan (present-day Mexico City) in 1521.

Over the centuries, the Europeans had developed high degrees of immunity to these diseases, while the Native Americans had no such immunity. Europeans had been ravaged in their own turn by such diseases as bubonic plague and Asian flu that moved west from Asia to Europe. In addition, when they went to some territories, such as Africa and Asia, they were more vulnerable to malaria. The repeated outbreaks of influenza, measles and smallpox probably resulted in a decline of between one-half and two-thirds of the Aboriginal population of eastern North America during the first 100 years of European contact.

In 1617–1619, smallpox reportedly killed 90% of the Massachusetts Bay Native Americans. In 1633, in Plymouth, Massachusetts, the Native Americans were exposed to smallpox because of contact with Europeans. As it had done elsewhere, the virus wiped out entire population groups of Native Americans. It reached Lake Ontario in 1636, and the lands of the Iroquois by 1679. During the 1770s, smallpox killed at least 30% of the West Coast Native Americans.
Smallpox epidemics in 1780–1782 and 1837–1838 brought devastation and drastic population depletion among the Plains Indians. In 1832, the federal government of the United States established a smallpox vaccination program for Native Americans (the Indian Vaccination Act of 1832). In Brazil, the indigenous population has declined from a pre-Columbian high of an estimated 3 million to some 300,000 in 1997. Later explorations of the Caribbean led to the discovery of the Arawak peoples of the Lesser Antilles. The culture was extinct by 1650. Only 500 had survived by the year 1550, though the bloodlines continued through the modern populace. In Amazonia, indigenous societies weathered centuries of colonization. The Spaniards and other Europeans brought horses to the Americas. Some of these animals escaped and began to breed and increase their numbers in the wild. The re-introduction of the horse had a profound impact on Native American culture in the Great Plains of North America and in Patagonia in South America. By domesticating horses, some tribes had great success: they expanded their territories, exchanged many goods with neighboring tribes, and more easily captured game, especially bison. Over the course of thousands of years, American indigenous peoples domesticated, bred and cultivated a large array of plant species. These species now constitute 50–60% of all crops in cultivation worldwide. In certain cases, the indigenous peoples developed entirely new species and strains through artificial selection, as was the case in the domestication and breeding of maize from wild teosinte grasses in the valleys of southern Mexico. Numerous such agricultural products retain native names in the English and Spanish lexicons. The South American highlands were a center of early agriculture.
Genetic testing of the wide variety of cultivars and wild species suggests that the potato has a single origin in the area of southern Peru, from a species in the Solanum brevicaule complex. Over 99% of all modern cultivated potatoes worldwide are descendants of a subspecies indigenous to south-central Chile, Solanum tuberosum ssp. tuberosum, where it was cultivated as long as 10,000 years ago. Natives of North America began practicing farming approximately 4,000 years ago, late in the Archaic period of North American cultures. Technology had advanced to the point that pottery was becoming common, and the small-scale felling of trees became feasible. Concurrently, the Archaic Indians began using fire in a widespread manner. Intentional burning of vegetation was used to mimic the effects of natural fires that tended to clear forest understories. It made travel easier and facilitated the growth of herbs and berry-producing plants, which were important for both food and medicines. In the Mississippi River valley, Europeans noted Native Americans' managed groves of nut and fruit trees as orchards, not far from villages and towns, in addition to their gardens and agricultural fields. Wildlife competition could be reduced by understory burning. Further away, prescribed burning would have been used in forest and prairie areas. Many crops first domesticated by indigenous Americans are now produced and/or used globally. Chief among these is maize, or "corn", arguably the most important crop in the world.
Other significant crops include cassava, chia, squash (pumpkins, zucchini, marrow, acorn squash, butternut squash), the pinto bean, Phaseolus beans including most common beans, tepary beans and lima beans, tomatoes, potatoes, avocados, peanuts, cocoa beans (used to make chocolate), vanilla, strawberries, pineapples, peppers (species and varieties of Capsicum, including bell peppers, jalapeños, paprika and chili peppers), sunflower seeds, rubber, brazilwood, chicle, tobacco, coca, manioc and some species of cotton. The limited distribution of pack animals available for domestication, and the resultant limits on transportation, is certainly one of the factors in the lack of development of certain technologies in pre-Hispanic America. While Eurasia has a predominant east-west orientation that allowed the dissemination of certain technologies and crops along latitude bands, the orientation of the American continents along the north-south axis made the dissemination of crops from one region to another, even with human migration, difficult and unlikely, given the climatic differences that come with changes in latitude and altitude. Another factor that distinguished the American continents from Eurasia is the absence of river-based cultures, due to the configuration of rivers in the Americas. On the arrival of Europeans in America, the use of metal technology was very limited and most American cultures were lithic-based. In Mesoamerica the knowledge of the calendar, based on acute astronomical observation, had reached remarkable levels of development. The Aztecs used intensive agricultural systems based on chinampas, with total food production per hectare possibly much higher than elsewhere in the world.

Writing systems

An independent origin and development of writing is counted among the many achievements and innovations of pre-Columbian American cultures. The Mesoamerican region produced a number of indigenous writing systems from the 1st millennium BCE onwards.
What may be the earliest-known example in the Americas of an extensive text thought to be writing is the Cascajal Block. This Olmec hieroglyph tablet has been indirectly dated, from ceramic shards found in the same context, to approximately 900 BCE, around the time that Olmec occupation of San Lorenzo Tenochtitlán began to wane.

Music and art

Music from indigenous peoples of Central Mexico and Central America was often pentatonic. Before the arrival of the Spaniards it was inseparable from religious festivities and included a large variety of percussion and wind instruments such as drums, flutes, sea snail shells (used as a kind of trumpet) and "rain" tubes. No remnants of pre-Columbian stringed instruments were found until archaeologists discovered a jar in Guatemala, attributed to the Maya of the Late Classic Era (600–900 AD), which depicts a stringed musical instrument that has since been reproduced. This instrument is astonishing in at least two respects. First, it is the only stringed instrument known in the Americas prior to the introduction of European musical instruments. Second, when played, it produces a sound virtually identical to a jaguar's growl. A sample of this sound is available at the Princeton Art Museum website. Art of the indigenous peoples of the Americas composes a major category in the world art collection. Contributions include pottery, paintings, jewellery, weavings, sculptures, basketry, carvings and hair pipes. Because of the many artists posing as Native Americans, the United States passed the Indian Arts and Crafts Act of 1990, requiring artists to prove that they are enrolled in a state or federally recognized tribe.
Demography of contemporary populations

The following table provides, for each country, estimates of the share of the population that is indigenous and the share with part-indigenous ancestry, each expressed as a percentage of the country's overall population. The total obtained by adding the two categories is also given. Note: these categories are inconsistently defined and measured differently from country to country. Some figures are based on the results of population-wide genetic surveys, while others are based on self-identification or observational estimation.

Indigenous populations of the Americas as an estimated percentage of each country's total population (missing values are marked "–"):

Country                            Indigenous   Part indigenous   Combined
North America
  Canada                           1.8%         3.6%              5.4%
  Mexico                           30%          60%               90%
  USA                              0.9%         0.6%              1.5%
Central America
  Belize                           16.7%        33.8%             50.5%
  Costa Rica                       1%           15%               16%
  El Salvador                      8%           90%               98%
  Guatemala                        40.8%        –                 –
  Honduras                         7%           90%               97%
  Nicaragua                        5%           69%               74%
  Panama                           6%           84%               90%
  Antigua and Barbuda              –            –                 –
  Barbados                         –            –                 –
  The Bahamas                      –            –                 –
  Cuba                             –            –                 –
  Dominica                         2.9%         –                 –
  Dominican Republic               –            –                 –
  Grenada                          ~0%          ~0%               ~0%
  Haiti                            ~0%          ~0%               ~0%
  Jamaica                          –            –                 –
  Puerto Rico                      0.4%         84%               84%
  Saint Kitts and Nevis            –            –                 –
  Saint Lucia                      –            –                 –
  Saint Vincent and the Grenadines 2%           –                 –
  Suriname                         2%           –                 –
  Trinidad and Tobago              0.8%         88%               80%
South America
  Argentina                        1.0%         2%                3%
  Bolivia                          55%          30%               85%
  Brazil                           0.4%         –                 –
  Chile                            4.6%         –                 –
  Colombia                         1%           61%               62%
  Ecuador                          25%          65%               90%
  French Guiana                    –            –                 –
  Guyana                           9.1%         –                 –
  Paraguay                         –            95%               –
  Peru                             45%          37%               82%
  Uruguay                          0%           8%                8%
  Venezuela                        –            –                 –

History and status by country

Argentina

Argentina's indigenous population is about 403,000 (0.9 percent of the total population).
Indigenous nations include the Toba, Wichí, Mocoví, Pilagá, Chulupí, Diaguita-Calchaquí, Kolla, Guaraní (Tupí Guaraní and Avá Guaraní in the provinces of Jujuy and Salta, and Mbyá Guaraní in the province of Misiones), Chorote (Iyo'wujwa Chorote and Iyojwa'ja Chorote), Chané, Tapieté, Mapuche (probably the largest indigenous nation in Argentina) and Tehuelche. The Selknam (Ona) people are now virtually extinct in their pure form. The languages of the Diaguita, Tehuelche, and Selknam nations are now extinct or virtually extinct: the Cacán language (spoken by the Diaguita) died out in the 18th century and the Selknam language in the 20th century, whereas one Tehuelche language (Southern Tehuelche) is still spoken by a small handful of elderly people.

Belize

Mestizos (of mixed European and indigenous ancestry) number about 34 percent of the population; unmixed Maya make up another 10.6 percent (Ketchi, Mopan, and Yucatec). The Garifuna, who came to Belize in the 1800s from Saint Vincent and the Grenadines and have mixed African, Carib, and Arawak ancestry, make up another 6 percent of the population.

Bolivia

In Bolivia, about 2.5 million people speak Quechua and 2.1 million speak Aymara, while Guaraní is spoken by only a few hundred thousand people. There are also 36 recognized cultures and languages in the country. Although there are no official documents written in these languages, Quechua and Aymara were historically oral languages, with only fragmented modern attempts at transcription and written standardization. Radio and some television in Quechua and Aymara are produced. However, the constitutional reform of 1997 for the first time recognized Bolivia as a multilingual, pluri-ethnic society and introduced education reform. In 2005, for the first time in the country's history, an Aymara of indigenous descent, Evo Morales, was elected as president.
Morales began work on his "indigenous autonomy" policy, which he launched in the eastern lowlands department on 3 August 2009, making Bolivia the first country in the history of South America to declare the right of indigenous people to govern themselves. Speaking in Santa Cruz Department, the President called it "a historic day for the peasant and indigenous movement", saying that he might make errors but he would "never betray the fight started by our ancestors and the fight of the Bolivian people". A vote on further autonomy will take place in referendums expected to be held in December 2009. The issue has divided the country.

Brazil

Amerindians make up 0.4% of Brazil's population, or about 700,000 people. Indigenous peoples are found in the entire territory of Brazil, although the majority of them live in Indian reservations in the northern and centre-western parts of the country. On 18 January 2007, FUNAI reported that it had confirmed the presence of 67 different uncontacted tribes in Brazil, up from 40 in 2005. With this addition Brazil has now overtaken the island of New Guinea as the country with the largest number of uncontacted tribes.

Canada

The most commonly preferred term for the indigenous peoples of what is now Canada is Aboriginal peoples. Of those Aboriginal peoples who are not Inuit or Métis, "First Nations" is the most commonly preferred term of self-identification. Aboriginal peoples make up approximately 3.8 percent of the Canadian population. Canadian Inuit live in subarctic and arctic Canada, as well as Alaska, Greenland, and Siberia, and maintain their own distinct Inuit culture. First Nations are the American Indian tribes of Canada, while the Métis are a distinct group of people descended from First Nations peoples and French traders.
Despite an ancient history of their own, Canadian Aboriginal peoples' cultures have sometimes been written about as if their history began with the encroachment of Europeans onto the continent. This is because the "written" history of the First Nations, Inuit and Métis began with European accounts, as in documentation by trappers, traders, explorers, and missionaries (cf. the Codex canadiensis). Although not without conflict or some slavery, Canada's early interactions with First Nations populations were relatively peaceful compared to the experience of native peoples in the United States. Combined with relatively late economic development in many regions, this peaceful history has allowed Canadian native peoples to have a relatively strong influence on the national culture while preserving their own identity. Nevertheless, explorers and traders brought European diseases, such as smallpox, which killed off entire villages. Relations varied between the settlers and the Natives. Today, a revival of pride in First Nations, Inuit and Métis art and music is taking place, and traditional Aboriginal styles have become a dominant art style in Canada.

Chile

According to the 2002 Census, 4.6% of the Chilean population, including the Rapanui of Easter Island, was indigenous, although most show varying degrees of miscegenation. Many are descendants of the Mapuche, and live in the country's central valley and lake district. The Mapuche successfully fought off defeat in the first 300–350 years of Spanish rule during the Arauco War. Relations with the new Chilean Republic were good until the Chilean state decided to occupy their lands. During the Occupation of Araucanía, the Mapuche surrendered to the country's army in the 1880s. Their former land was opened to settlement by Chileans and Europeans. Conflict over Mapuche land rights continues to the present day.
Other groups include the Aymara, who live mainly in the Arica-Parinacota and Tarapacá regions, and the Alacalufe survivors, who now reside mainly in Puerto Edén.

Colombia

A small minority today within Colombia's overwhelmingly Mestizo and Afro-Colombian population, Colombia's indigenous peoples nonetheless encompass at least 85 distinct cultures and more than 1,378,884 people. A variety of collective rights for indigenous peoples are recognized in the 1991 Constitution.

Costa Rica

Costa Rica was the site of many indigenous cultures, but only eight remain today: Bribri, Boruca, Cabecar, Chorotega, Guaymí, Huetar, Maleku and Terraba (also called Teribe or Naso).

Ecuador

[Image: Otavaleña girl from Ecuador]

Approximately 96.4% of Ecuador's indigenous population are Highland Quichuas living in the valleys of the Sierra region. Primarily consisting of the descendants of Incans, they are Kichwa speakers and include the Caranqui, the Otavaleños, the Cayambi, the Quitu-Caras, the Panzaleo, the Chimbuelo, the Salasacan, the Tugua, the Puruhá, the Cañari, and the Saraguro. Linguistic evidence suggests that the Salasacan and the Saraguro may have been the descendants of Bolivian ethnic groups transplanted to Ecuador as mitimaes. Coastal groups, including the Awá, Chachi, and the Tsáchila, make up 0.24 percent of the indigenous population, while the remaining 3.35 percent live in the Oriente and consist of the Oriente Kichwa (the Canelo and the Quijos), the Shuar, the Huaorani, the Siona-Secoya, the Cofán, and the Achuar.

El Salvador

Much of El Salvador was home to the Pipil, the Lenca, and a number of Maya. The Pipil lived in western El Salvador, spoke Nahuat, and had many settlements there. The Pipil had no treasure but held land with rich and fertile soil, good for farming.
This both disappointed the Spaniards, who were shocked not to find gold or jewels in El Salvador as they had in other lands such as Guatemala or Mexico, and drew their attention once they learned of the fertile land El Salvador had to offer, which they then attempted to conquer. At first the Pipil repelled the Spanish attacks, but after many further attacks they stopped fighting, and many were used as labor by the Spaniards. Today many Pipil and other indigenous populations live in small towns of El Salvador such as Izalco, Panchimalco, Sacacoyo, and Nahuizalco.

Guatemala

Many of the indigenous peoples of Guatemala are of Maya heritage. Other groups are the Xinca and the Garifuna. Pure Maya account for some 40 percent of the population; although around 40 percent of the population speaks an indigenous language, those tongues (of which there are more than 20) enjoy no official status. White and Mestizo (mixed White and Amerindian ancestry) people together make up 59.4% of Guatemala's population. The area of Livingston, Guatemala is highly influenced by the Caribbean, and its population includes a combination of Mestizos and Garifuna people.

Honduras

About 5 percent of the population are of full-blooded Amerindian descent, but up to 80 percent or more of Hondurans are mestizo or part-Amerindian with Caucasian ancestry, and about 10 percent are of Amerindian and/or African descent. The main concentration of Amerindians in Honduras is in the rural westernmost areas facing Guatemala and along the Caribbean Sea coastline, as well as on the Nicaraguan border. The majority of indigenous people are Lencas, with Miskitos to the east, Mayans, Pech, Sumos, and Tolupan.
Mexico

The territory of modern-day Mexico was home to numerous indigenous civilizations prior to the arrival of the Spanish conquistadores: the Olmecs, who flourished from about 1200 BCE to about 400 BCE in the coastal regions of the Gulf of Mexico; the Zapotecs and the Mixtecs, who held sway in the mountains of Oaxaca and the Isthmus of Tehuantepec; the Maya in the Yucatán (and into neighbouring areas of contemporary Central America); the Purepecha or Tarascan in present-day Michoacán and surrounding areas; and the Aztecs, who, from their central capital at Tenochtitlan, dominated much of the centre and south of the country (and the non-Aztec inhabitants of those areas) when Hernán Cortés first landed at Veracruz. In the states of Chiapas and Oaxaca and in the interior of the Yucatán peninsula the majority of the population is indigenous. Large indigenous minorities, including Aztecs, P'urhépechas, and Mixtecs, are also present in the central regions of Mexico. In Northern Mexico indigenous people are a small minority. The General Law of Linguistic Rights of the Indigenous Peoples grants all indigenous languages spoken in Mexico, regardless of the number of speakers, the same validity as Spanish in all territories in which they are spoken, and indigenous peoples are entitled to request some public services and documents in their native languages. Along with Spanish, the law has granted them — more than 60 languages — the status of "national languages". The law includes all Amerindian languages regardless of origin; that is, it includes the Amerindian languages of ethnic groups non-native to the territory. As such the National Commission for the Development of Indigenous Peoples recognizes the language of the Kickapoo, who immigrated from the United States, and recognizes the languages of the Guatemalan Amerindian refugees.
The Mexican government has promoted and established bilingual primary and secondary education in some indigenous rural communities. Nonetheless, of the indigenous peoples in Mexico, only about 67% (or 7.1% of the country's population) speak an Amerindian language and about a sixth do not speak Spanish (1.2% of the country's population). Indigenous peoples in Mexico are also granted, among other rights, the right to preserve and enrich their languages and cultures.

Nicaragua

The Miskito are a native people of Central America. Their territory extended from Cape Camarón, Honduras, to the Rio Grande, Nicaragua, along the Mosquito Coast. There is a native Miskito language, but large groups speak Miskito Coastal Creole, Spanish, Rama and other languages. The Creole English came about through frequent contact with the British, who colonized the area. Many are Christians.

United States

Indigenous peoples in what is now the contiguous United States are commonly called "American Indians", or just "Indians" domestically, but are also often referred to as "Native Americans". In Alaska, indigenous peoples, which include Native Americans, Yupik and Inupiat Eskimos, and Aleuts, are referred to collectively as Alaska Natives. Native Americans and Alaska Natives make up 2 percent of the population, with more than 6 million people identifying themselves as such, although only 1.8 million are recognized as registered tribal members. Tribes have established their own rules for membership, some of which are increasingly exclusive. More people have unrecognized Native American ancestry together with other ethnic groups. A minority of U.S. Native Americans live in land units called Indian reservations. Some southwestern U.S. tribes, such as the Yaqui and Apache, have registered tribal communities in Northern Mexico. Similarly, some northern bands of Blackfoot reside in southern Alberta, Canada, in addition to within US borders. A number of Kumeyaay communities may be found in Baja California del Norte.
Other parts of the Americas

Indigenous peoples make up the majority of the population in Bolivia and Peru, and are a significant element in most other former Spanish colonies. Exceptions to this include Costa Rica, Cuba, Puerto Rico, Argentina, the Dominican Republic, and Uruguay. At least three of the Native American languages (Quechua in Peru and Bolivia; Aymara, also in Peru, Bolivia and Chile; and Guaraní in Paraguay) are recognized along with Spanish as national languages (Aymara in Chile on a regional basis).

Native American name controversy

The Native American name controversy is an ongoing dispute over the acceptable ways to refer to the indigenous peoples of the Americas and to broad subsets thereof, such as those living in a specific country or sharing certain cultural attributes. Once-common terms like "Indian" remain in use, despite the introduction of terms such as "Native American" and "Amerindian" during the latter half of the 20th century.

Rise of indigenous movements

Moves toward indigenous rights in the leftist countries of Latin America led to a surge in activity even in Colombia, historically the most right-wing state in South America, where various indigenous groups protested the denial of their rights. People organized a march in Cali in October 2008 to demand the government live up to promises to protect indigenous lands, defend the indigenous against violence, and reconsider the free trade pact with the United States.

Legal prerogative

With the rise to power of leftist governments in Venezuela, Ecuador, Paraguay, and especially Bolivia, where Evo Morales became the first person of indigenous descent elected president of the country, the indigenous movement gained a strong foothold. Representatives from indigenous and rural organizations from major South American countries, including Bolivia, Ecuador, Colombia, Chile and Brazil, started a forum in support of Morales' legal process of change.
The meeting condemned plans by the European "foreign" power elite to destabilize the country. The forum also expressed solidarity with Morales and his economic and social changes in the interest of historically marginalized majorities. Furthermore, in a cathartic blow to the US-backed elite, it questioned US interference through diplomats and NGOs. The forum was suspicious of plots against Bolivia and other countries, including Cuba, Venezuela, Ecuador, Paraguay and Nicaragua.

Genetics

Molecular genetics studies suggest that Amerindian populations derived from a theoretical single founding population, possibly from only 50 to 70 genetic contributors. Preliminary research, restricted to only 9 genomic regions (or loci), has shown a genetic link between the original populations of the Americas and those of Asia. The study does not address the question of separate migrations for these groups, and excludes other DNA data-sets. The American Journal of Human Genetics released an article in 2007 stating "Here we show, by using 86 complete mitochondrial genomes, that all Indigenous American haplogroups, including haplogroup X, were part of a single founding population." Amerindian groups in the Bering Strait region exhibit perhaps the strongest DNA or mitochondrial DNA relations to Siberian peoples. The genetic diversity of Amerindian indigenous groups increases with distance from the assumed entry point into the Americas. Certain genetic diversity patterns from west to east suggest at least some coastal migration events. Geneticists have variously estimated that peoples of Asia and the Americas were part of the same population from 42,000 to 21,000 years ago.

See also: Classification of indigenous peoples of the Americas
Hey everyone, I am given that a solid is bounded by y = x^2 - 1, y + z = 0, and z = 0. I am asked to find the bounds of a triple integral taken in the order dz, dy, dx. So I basically traced each surface onto the xy, xz, and yz planes. The bounds that I got are: x = -1 to x = 1, y = x^2 - 1 to y = 0, and z = 0 to z = -y. Can anyone tell me if I am approaching this problem correctly? I am not exactly sure if my bounds are correct. Any help and feedback is appreciated, thanks.
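Those bounds are consistent with the region: y + z = 0 gives z = -y, and since y ≤ 0 between the parabola and the plane y = 0, z runs from 0 up to -y. One quick way to sanity-check proposed limits is a midpoint Riemann sum over them; with these bounds the exact volume works out to ∫ from -1 to 1 of (x² - 1)²/2 dx = 8/15, so the numeric sum should land very close to that. (The grid sizes below are arbitrary choices.)

```python
# Midpoint Riemann sum over the proposed bounds:
#   x from -1 to 1,  y from x^2 - 1 to 0,  z from 0 to -y
# (the z-integral just contributes a height of -y at each (x, y)).
nx, ny = 200, 200
hx = 2.0 / nx
vol = 0.0
for i in range(nx):
    x = -1.0 + (i + 0.5) * hx        # midpoint in x
    y_lo = x * x - 1.0               # lower limit for y (<= 0)
    hy = -y_lo / ny                  # y runs from y_lo up to 0
    for j in range(ny):
        y = y_lo + (j + 0.5) * hy    # midpoint in y
        vol += (-y) * hx * hy        # height in z is -y

print(vol)                           # close to 8/15 ≈ 0.533333
```

If the sum disagreed with 8/15, that would point to a bounds mistake, most commonly swapping which surface is the upper versus lower limit in z.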
This Awesome Gif Shows the Population of New Yorkers at Work vs. the Population at Home Over a 24-Hour Cycle

The neighborhood just south of Central Park that stays red unusually late is Midtown, home to Times Square, 30 Rock, and Grand Central Station, and intersected almost right down the middle by Broadway. With so many landmarks, it's a wonder that it doesn't stay red even later. The same goes for the southern tip of the island, home to Wall St. and the Financial District. As they say, "money never sleeps."
Q: I read "Users with 5k+ rep have 'approve tag wiki edits' privilege but can't see list of suggested edits" and "How to review edits?", and if I understand correctly, users between 5k and 10k reputation can approve suggested edits, but only if they "come across them" directly on the question page of the respective edit. However, they can see the suggested edits tab, which shows up as empty to them. Couldn't the suggested edits tab be entirely hidden until you can actually properly use it?

Accepted answer: All this isn't really true anymore. You can access the suggested edits queue starting at 5000 reputation. The suggested edits tab appears empty to you because… it's actually empty. If you were unable to use it, you'd get the "this page requires more privileges" error.

Comment: D'oh! Beat me to it! Perhaps the post in question should be updated! Going to do that now... – Andrew Barber, Jan 31 '12 at 16:58
has 63 questions. has 1761 questions.

[Question closed as off-topic by Roombatron5000, Emrakul, gnat, ProgramFOX, Martijn Pieters, Jan 18 at 0:46]

Comment: I was about to suggest the same synonym, and additionally there is a huge overlap with eclipse-pde and partially with eclipse-rcp. – Steven Jeuris Dec 5 '12 at 13:47

Accepted answer (7 votes):

No, I don't think they should be synonyms. Eclipse plugin development is a different enough activity from just using plugins for development that I think a separate tag is justified. I strongly suspect that you'll find a lot of plugin development questions in though.

Comments:

Is it worth looking through the eclipse-plugin tag and retagging any questions that talk about developing a new plugin to eclipse-plugin-dev? – staticbeast Jul 15 '11 at 11:45

@staticbeast: Yes, this search shows that there are a lot of them, so it's probably worth going through to retag them. Particularly the ones that aren't answered, to give them a bump. – Bill the Lizard Jul 15 '11 at 12:25

Isn't it equally worth writing up a better wiki then? I came here suggesting the same synonym, and it wasn't clear to me at all that was the difference implied between the two of them. Neither did I see them being used in such a way. Isn't that already partly proof that it's not a different enough activity from just using plugins? – Steven Jeuris Dec 5 '12 at 13:46

@StevenJeuris Yeah, feel free to improve the tag wikis to make the difference more clear. How people use (and abuse) tags really doesn't change the fact that using Eclipse plugins is a completely different activity than developing them. – Bill the Lizard Dec 5 '12 at 13:59

That indeed doesn't change that fact, but it does warrant questioning whether or not differentiating between the two tags is useful at all. – Steven Jeuris Dec 5 '12 at 14:59

Second answer:

Yes, I think should be made a synonym of .
At the time of writing: has 162 questions, and has 34 followers. has 4196 questions, and has 490 followers.

Selecting the 10 most recent questions: out of 10 questions, only 1 was talking about plugin usage and didn't have another more relevant tag applied; 2 others were about plugin usage, but had related more specific tags applied; 7 were misusing the tag.

This raises the following questions:

1. When would it be useful to differentiate between plugin development and plugin usage? Common plugin questions might be better off being tagged with an appropriate tag for that particular plugin.
2. As it stands now, the tag is in no way useful. The difference Bill the Lizard specified is clearly not reflected in the tag usage. Is it even worth/possible cleaning this up?
3. What would be the downside of merging the two tags?

I believe it would be a lot clearer to make a synonym of , merging them, and updating the wiki to reflect this tag is about usage and development. Probably people who are knowledgeable about the development of plugins are also capable of answering generic problems with plugin usage anyway.
CNN Portrays Foreign Investment in America as Sign of Economic Decline

ABC Parrots Cuomo's Attack on Loan Industry

Wong Finds a Way to Spin Early Success of the Troop "Surge"
Edward Wong makes the worst out of some good news from Iraq: "The heightened American street presence may already have contributed to an increase in the percentage of American deaths that occur in ...

No Critical CAIR in the Times
Reporter Neil MacFarquhar avoids the controversial accusations about the "Islamic civil rights group" CAIR, while suggesting its critics are motivated by hatred and McCarthyism.

New York Times Movie Reviews: We'll Take Lust over 'Moral Reprimand'
Which movie do you think would get a positive write-up in The New York Times? An inspirational movie about character building, or a comic tribute to lust?

Reporting Protest In the Proper Context
David Kirkpatrick and Sarah Abruzzese added little details that make a protest story fairer: they used liberal labels, explained who the protest organizers were, and quoted rally speakers. The ...

Hulse With No Name

Democrats Refuse to Talk about Tax Hikes on CNBC

CS Monitor Praises 'Renewable' House, Ignores Cost to Taxpayers

ABC Chavez Interview Fawns over 'Intelligent,' 'Passionate' Dictator
Though Chavez has called Bush a 'devil' and seized foreign assets, 'Good Morning America' preview shows Barbara Walters claiming Chavez 'does like this country.'
Comments (131)

demolitionX (2689d ago): S W E E T!

cr33ping_death (2689d ago): damn (crosses fingers) this indeed would be pretty f'in sweet. may it be so. PSN tag. ballisticrage

xbotsRidiots (2689d ago): do you think itll be releases simultaniously on LIVE......oh wait i forgot xbots dont get the game....damn my deepest condolences on missing out on the biggest game in gaming history.

NanoGeekTech (2689d ago): Seven words..... Hell yes....F*#k yes....can't wait....PS3 has not been turned off since Christmas eve 2007..... I know its not seven words but I am a little excited..........

ygxbrayzie (2689d ago): god... i want a ps3 now this game ROCK!!!!!

ScottEFresh (2689d ago): YES!!!! Xbots are jealous.

ElfShotTheFood (2689d ago): Yeah, look at all of them posting in this thread, saying how jealous they are! Wait?! What?! Grow up.

Mercutio (2689d ago): No, seriously the earth will crumble to dust....

Bastard 360 (2689d ago): Sorry Xbox 360 owners! This is for PS3 only and it's due out in a few months, what desperate move will Xbox try pull to stop the sales of this game because they don't have anything to combat it's release ahhahaha ahhahaha get it up ya Xbot fools.

Cwalat (2689d ago): Well this is a really great surprise for me, if they are releasing a demo than that probably means that they are on the final touches of the game.

xc7x (2689d ago): need to be registered to look in forums err,how about mentioning that mr. news poster,grrr

thepill88 (2689d ago): Xbox version... Any word on the Xbox version yet? Or cant the Xbox 360s last gen hardware handle this title... *rubs comment in Xbots face*

Grown Folks Talk (2689d ago): I bet you guys can't wait to watch it. = }

Figboy (2689d ago): wouldn't surprise me if there *WAS a demo next month. so far, *EVERY Metal Gear Solid released so far has had a demo before the actual release of the game. i got the MGS demo in an issue of OPM magazine on thier demo disc.
that demo actually sold me on MGS in the first place (i wasn't interested before that). i got the MGS2 demo with my copy of Zone of the Enders (i actually *DID buy ZOE for ZOE, but the MGS2 demo certainly didn't hurt. lol). i got the MGS3 demo in an issue of OPM magazine. i had actually played the game at E3 that year, but it was cool to actually have the game, playing it at home on my TV (and showing the guys at work, i think i was at Activision at the time). now that there are online networks capable of bringing the demo to even *MORE people than the magazines (not everybody has a subscription to a magazine, but over half of the PS3 owners out there have a PSN account), i think it would be wise for Konami to release this demo over the PSN, and to also have Sony update the *KIOSKS in stores like Best Buy with the demo, so more people can see it. i guarantee that if a potential console purchaser walks past a Playstation 3 kiosk, and sees Metal Gear Solid 4 running, or somebody playing it, they would probably buy a PS3 then and there (unfortunately for the Best Buy by my apartment, some douche fanboy *ALWAYS turns the PS3 kiosk off, which is right next to the *ALWAYS on 360 kiosk. i turn the PS3 back on, but then somebody turns it off again a few minutes later when i walk buy again). games like GT5 and MGS4 have fanatical audiences, as does Final Fantasy. if those fanbases get a whiff that a GT5 or MGS4 demo of any sort is hitting the PSN, they will rush and get the system, if just to play it for a minute (here's hoping the demo won't be 5 minutes long like Heavenly Sword's). i'll still take this story as a rumor, but it's not unreasonable to think an MGS4 demo will hit at some point, considering the precedents set by the previous games in the series.

JBaby343 (2689d ago): I'm All For Demos. I love demos no matter what game. Not all of them need demos but I never complain when we get demos. Keep the demos coming.
solidt12 (2689d ago):

Premonition (2689d ago): I recall a post back on this site like about a week ago saying if GB won that someone from Konami would say something that was suppose to be said later on, I wonder if this is it.

TwissT (2689d ago): God said let there be a demo, and so it was said.

cr33ping_death (2689d ago): damn ill have to download it the day after....due to all the peeps trying to download it the second its released.

Tsukasah (2689d ago):

Account deleted (2689d ago): what daaa is this some kind of black ritual(looking at your avatar) or what?? anyway can't way to see the snakes in action

Tsukasah (2689d ago): No, I'm just going insane to the fact that there's a demo. The avatar is the logo for the band Dream Theater. It was the only one I found suitable for this place. Lol, here's a bubble for the laugh.

halo3betasnatch out (2689d ago): YES! I'm selling my xbox and getting a PS3 tomorrow! Tired of it breaking every god damn month!

Partisan (2689d ago): News of the day

Suki03 (2689d ago): <--Supporter of anything free ^^ not to mention this one

SpikeSpiegel (2689d ago): If this is true then expect PSN network to be swamped. Remember how it used to be with firmware updates and demos? Now imagine that with a demo almost everybody has been waiting for years.. Somebody call the riot police because it will be pandemonium!

Xbox is the BEST (2689d ago): be playing this masterpiece.

PS360PCROCKS (2689d ago): AH! I need to get my internet set-up in my new apartment!

ROCCOZILLA (2689d ago): cant wait to learn all the controls that way when the game comes out im ready to roll! that one of my favorite things about demos.PLAY B3YOND!

Alvadr (2689d ago): WOW, report has caused the net to explode this morning!! I really hope its true.

solidt12 (2689d ago): Demo next month would be nice. I think I will be sick on the thursday night this demo comes out.
Yeah I know its just a demo, but I will probably play it several times.

RAM MAGNUMS (2689d ago): Delayed until Beyond your patience but you droids understand dont you? I Hope you guys dont think you'll be playing this game online right? That part of the game will suuuuuuuuuuuuuuuuuuuk. quote me on that. the main story will be gheyyyy. But thats what you droids like right? Cutscenes right? A Solid Snake in a NPC right? Weapons that manifest itself out of thin air right? A mullet right? Raiden right? Octacon right? Same ol Same ol right? Rolling around in a barrel right? Hiding in a cardboard box right? 8 hours of pure David Hayter teaching you survival lessons while sounding like charlie sheen right? Dissapearing bodies and boucy rations right? This aint no hardcore game right? Super smash bros. Solid snake is a arcade game right. even ninja gaiden is more badass now right? Splinter cell is the next bourne identity now right? Sam Fisher Is a real american hero right? Solid Snake is a cartoon right? RAM MAGNUMS HAS SPOKEN RIGHT?
global_05_local_5_shard_00000035_processed.jsonl/23683
Submitted by nekobun 721d ago | opinion piece

Making the Next Smash a Smash Hit

The reveal of the next version of Smash Bros. is finally on the horizon. Christopher Erb of VGU.TV throws some less than likely candidates for new characters, stages, and game mechanics out there for the speculative to chew upon. (Super Smash Bros. 4, Wii U)

Trunkz Jr (721d ago): Hatsune Miku (Sega)

nekobun (721d ago): Wouldn't be against that. They've done what they can to insert her into every other media orifice.

GuyThatPlaysGames (721d ago): Put it on another system. Just sayin

CaptainYesterday (721d ago): I hope we hear about more characters soon but I'm sure a lot of people are happy that Mega man was added :)
-Gespenst-  +   721d ago "It like one of the only games on wii u" Related image(s) Add comment New stories GameStop Expo 2015 to be packed with big names in September See what games are coming out in 2015 Tree of Life: Sandbox MMORPG Explodes on Steam E3 2015 Preview Extravanagaza: PlayStation Power
global_05_local_5_shard_00000035_processed.jsonl/23685
Submitted by Geobros 331d ago | news

You're The Last Human Alive In Indie RPG Underbirth

"Indie game Underbirth sets players as the last human woman alive left on Earth. And it was all made in RPG game maker RPG Maker VX Ace. If all you care for are the visuals, skip to the 1:20 mark." (PC, Underbirth)

TheWow (331d ago): I wish more Indie studios were brave enough to try making a 3D game. I like 2D games, and I can't stress that enough, but I see a lot of potential with some of their ideas, that could be cooler in 3D.
Your Town. Your Voice. Local Business Search Stock Summary S&P 5002109.60-2.13 Try this recipe for a quick and tasty pasta dinner Tuesday, July 16, 2013 - 12:01 am This is the perfect dish for a weeknight dinner in late summer, particularly as the kids start heading back to school and family schedules get crazy again. The recipe calls for just a handful of ingredients that can all be pulled together in the time it takes to boil water. Tomatoes are the star of this show, as they should be this time of the year. A fresh local tomato at the height of ripeness is one of those things that make life worth living. Indeed, they're so good as is they don't even need to be cooked. Obviously, we could cook them and turn them into a sauce, but we'd be kissing off some of their freshness and all of their crunch. Instead, we salt them, lightly, which intensifies their flavor and pulls out some of their liquid. This “tomato juice” becomes part of the sauce. After the tomatoes have marinated in salt for 10 minutes, we season them with a little freshly grated lemon zest, a single tablespoon of extra-virgin olive oil (this is a dish that requires the really good stuff), and some freshly ground black pepper. Next it's time to reach for the goat cheese. Combined with hot pasta and a little of the pasta cooking liquid, the cheese melts into a richly creamy sauce without any additional thickener. And I'm talking about full-fat goat cheese, which is relatively lean even as it boasts big flavor. I recommend using whole-wheat pasta in this recipe, but you're certainly welcome to explore some of the other whole-grain pastas that are now available. Kamut or spelt would be great. If you're gluten-intolerant, you can swap in quinoa, brown rice or buckwheat. (Its name notwithstanding, buckwheat isn't wheat, it's a grass.) Even so, you'll want to check the label to make sure the pasta is completely gluten-free. I finished this dish with a liberal sprinkling of herbs. 
And truthfully, there's scarcely a fresh herb around that doesn't play nicely with tomatoes. So feel free to recruit any and all of your own favorites. You can't lose.

Fast and fresh summer pasta

Start to finish: 20 minutes
Servings: 4

3 cups chopped fresh tomatoes (about 1-inch pieces)
Kosher salt and ground black pepper
1 teaspoon grated lemon zest
1 tablespoon extra-virgin olive oil
5 ounces fresh goat cheese, crumbled
8 ounces whole-wheat penne or fusilli pasta

Bring a large pot of salted water to a boil.

Per serving: 360 calories; 110 calories from fat (31 percent of total calories); 12 g fat (6 g saturated; 0 g trans fats); 15 mg cholesterol; 51 g carbohydrate; 7 g fiber; 5 g sugar; 17 g protein; and 390 mg sodium.
November 11, 2004 11:50 AM PST

Otellini: Soft-spoken, driven and an Intel lifer

With Paul Otellini, Intel will get a CEO who is part college professor and part Cosimo de Medici. The 54-year-old executive--who will take over as Intel's CEO next May after 30 years with the company--tends to discuss the chipmaker's strategies in the context of global economics, often in intricate, paragraph-size sentences. He's been one of the advocates of lowering the cost of computers to bring them within reach of the billions of people living in emerging markets and of tapping engineering resources in countries like Russia, China and India. For a number of years, he represented Intel at the Davos World Economic Forum, mingling with the likes of the King of Jordan and Newt Gingrich. On the other hand, he's known for launching relentless price wars that have pushed rival Advanced Micro Devices into the red. He also was one of the figures behind Intel's push into graphics. The company's effort to make standalone graphics chips failed. But by integrating graphics into chipsets, Intel has become the world's largest producer of 3D graphics silicon, pressuring Nvidia and ATI Technologies.

[Photo: Intel's next CEO, Paul Otellini]

The switch to Otellini from current chief Craig Barrett could help Intel recover from some of its marketing and manufacturing missteps of recent years. Barrett was unable to fulfill Intel's strategy of more broadly diversifying into cell phones, communications equipment and other markets. Barrett is also known to be somewhat abrupt. "Otellini is extremely smart and very personable. He'll be a breath of fresh air for Intel when he takes over as CEO," said an executive at a chip company that does extensive business with Intel. "We always perceived him as one of the good guys." Dean McCarron, principal analyst at Mercury Research, called Otellini "an appropriate choice.
He certainly has a lot of experience in how Intel does things." On the other hand, Intel might actually need the touch of an outsider, said Kevin Krewell, editor in chief of the Microprocessor Report. "At IBM and AMD, it took someone from the outside to decide what to keep and what to get rid of. IBM and AMD were struggling before (Lou) Gerstner and Hector (Ruiz) came in," Krewell said.
Permeability and Porosity Evolution in Dolomitized Upper Cretaceous Pelagic Limestones of Central Tunisia

Editors: Bruce Purser, Maurice Tucker and Donald Zenger
Authors: M. H. Negra (1), B. H. Purser (2) and A. M'Rabet (3)

Book: Dolomites: A Volume in Honour of Dolomieu

How to Cite

Negra, M. H., Purser, B. H. and M'Rabet, A. (1994) Permeability and Porosity Evolution in Dolomitized Upper Cretaceous Pelagic Limestones of Central Tunisia, in Dolomites: A Volume in Honour of Dolomieu (eds B. Purser, M. Tucker and D. Zenger), Blackwell Publishing Ltd., Oxford, UK. doi: 10.1002/9781444304077.ch17

Author Information

1. Faculté des Sciences de Tunis, Département de Géologie, Laboratoire de Sédimentologie et Bassins Sédimentaires, 1020-Tunis, Tunisia
2. Faculté des Sciences, Université de Paris Sud, Laboratoire de Pétrologie Sédimentaire et Paléontologie, 91405 Orsay, France
3. ETAP, 27 bis, avenue Khéreddine Pacha, 1002 Tunis, Tunisia

Publication History

Published Online: 14 APR 2009
Published Print: 25 MAY 1994

ISBN Information

Print ISBN: 9780632037872
Online ISBN: 9781444304077

Keywords: permeability and porosity evolution in dolomitized Upper Cretaceous pelagic limestones of Central Tunisia; Upper Senonian chalky limestones - providing potential reservoir rocks in oilfields; sedimentological properties of Abiod formation; porosity in cretaceous pelagic dolomites; dissolution of dolomites (dedolomitization); dolomitization - local and genetically related to channels; petrophysical attributes of dolomite; Campanian-Maastrichtian dolomite bodies at Wadi Abiod, Central Tunisia
However, in Central Tunisia, notably in Wadi Abiod, lensoid bodies of massively bedded carbonates are frequently intercalated within the chalky pelagic lime mudstones. These lenses, whose properties are both sedimentary and diagenetic, correspond mainly to conglomeratic and bioclastic bodies related to gravity flows. They are affected by dolomitization, which modified several original textures. In terms of reservoir properties, this dolomitization can be destructive or constructive. The pelagic lime mudstones have median porosities and permeabilities which are markedly lower than those of the dolomites that have replaced them locally. Although the lower porosities in the non-dolomitized micrites could result from differential compaction, the absence of visible burial compaction fabrics suggests that the more elevated porosities in the dolomites are the consequence of dolomitization. The amelioration of permeability (0.2–300 mD) is probably the direct result of an increase in crystal size, from an average of 2 µm in the micrite to more than 150 µm in the dolomite, and, as such, is the consequence of dolomitization.
User talk:Austin J. Che

From OpenWetWare
Revision as of 10:01, 24 September 2007 by Austin J. Che (Talk | contribs)

Hey Austin, any idea why the Caltech iGEM pages aren't linking back to upper level directories? e.g. IGEM:Caltech/2007/project_overview doesn't have automatic links to IGEM:Caltech or IGEM:Caltech/2007. Thanks -Josh K. Michener 01:56, 18 June 2007 (EDT)

Lucks 20:38, 14 May 2007 (EDT): Good sleuthing on the common Dvorak mistakes! I hope I changed all of them from Julian to Julius - let me know if I missed any. Ironically many people mispronounce my name as Julian, which should only get worse if I keep that mistake up. Sri told me you use Dvorak as well - that true?

• Now it's Database error A database query syntax error has occurred. This may indicate a bug in the software. The last attempted database query was: (SQL query hidden) from within function "Database::selectField". MySQL returned error "1054: Unknown column 'id' in 'field list' (localhost)".
• Looks good, thanks. Jkm 23:15, 5 April 2007 (EDT)

smd 13:25, 8 April 2007 (EDT): Hi Austin. Thanks for your help with the geshi syntax highlighting extension. Sorry I didn't thank you earlier; somehow I missed the message you posted on my talk page. Anyway, I was wondering if we could disable line numbering (or make it optional)? Numbering makes it slightly more annoying to copy and paste code from the wiki to a local file to use. Thanks.

expandable text

Hi Austin, I noted that you wrote the templates for hiding and revealing text. Well done. I tried to use it also on our local (media)wiki but it seems they require a script. How can I install that? Or is there a way to expand/collapse using the normal mediawiki (1.7.1)? Ciao, Jasu 09:41, 24 September 2007 (EDT)

• Ok, I think I found it on your toggle page (Commons.js). But where does that need to be installed?
Jasu 09:49, 24 September 2007 (EDT)

• Austin Che 10:01, 24 September 2007 (EDT): Copy the javascript to MediaWiki:Common.js on your wiki. Depending on the version, if that doesn't work, you may have to copy it to the skin-specific file, e.g. MediaWiki:Monobook.js
St. Basil's Cathedral (Moscow)

From OrthodoxWiki
Revision as of 18:29, January 11, 2005 by ASDamick (Talk | contribs)

[Image: St. Basil's Cathedral]

The Intercession Cathedral (Pokrovsky Cathedral, better known as the Cathedral of St. Basil the Blessed or St. Basil's Cathedral) was commissioned by Ivan the Terrible and built between 1555 and 1561 in Moscow to commemorate the capture of Kazan. In 1588 Tsar Fedor Ivanovich had a chapel added on the eastern side above the grave of St. Basil the Fool for Christ (yurodivy Vassily Blazhenny), the saint for whom the cathedral was named.

[Image: Closeup of St. Basil's Cathedral]

Legend says Ivan had the architect, Postnik Yakovlev, blinded to prevent him from building a more magnificent building for anyone else.
[Photo: Ideas. Dogtown. Cape Ann, Massachusetts.]

Database-backed Web Sites

by Philip Greenspun, updated March 1997

Note: this book was superseded in 1998 by a new edition. This is the free full-text electronic edition of what Macmillan published as Database-backed Web Sites. Note that the book was also produced by Hanser in a German translation by Olaf Borkner-Delcarlo.

The Book

1. Envisioning a site that won't be featured in
2. So You Want to Join the World's Grubbiest Club: Internet Entrepreneurs -- how to make money off your site
3. Learn to Program HTML in 21 Minutes -- there is no site so simple that a graphic designer can't make it slow and painful for users
4. Adding Images to your Site
5. Publicizing Your Site (without irritating everyone on the Net)
6. So You Want to Run Your Own Server
7. User tracking
8. Java and Shockwave -- the <BLINK> tag writ large
9. Sites that are really programs, CGI and API
10. Sites That are really Databases
11. Choosing a Relational Database
12. Interfacing a Relational Database to the Web
13. RDBMS-backed site case studies
14. Sites That Don't Work (And How to Fix Them)
15. A Future So Bright You'll Need to Wear Sunglasses

What was it like to write?

[Photo: Magnolia biting Alex. Massachusetts Institute of Technology.]

Originally I limited my comments to the words of Winston Churchill (1949, speaking at Britain's National Book Exhibition about his World War II memoirs): But I got so much email asking for more detail that I wrote The book behind the book behind the book....

A Paper Copy

Paper copies are available used from

Text and pictures copyright 1990-1997 Philip Greenspun. Most of the pictures are from

Mark Kelly, June 24, 1997

I think this book is great. It could be considered a great CS comedy classic (like Dilbert) as well as being informative.
-- Russ Tessier, September 13, 1997 Philip Greenspun's "Database Backed Web Sites" is an irreverent but informational and entertaining book that reads like a cross between a Dogbert management handbook and an O'Reilly manual. This book is completely unique to any technical manual I have ever read. It presents pertinent technical material in a way that is actually entertaining. This book is worth reading for any person interested in setting up a Web site (whether or not the site will include interactive database systems). It is particularly useful for technical personnel, but also contains enough high-level information to make it useful for less technical managers. This book is almost unique in the way that Mr. Greenspun gives his experience-based opinions on web tool and database manufacturers. No wishy-washy reviews exist in this book. Mr. Greenspun has case-reviewed his personal use of many of the more and less popular tools available today. Some of these reviews strongly suggest certain tools, other reviews strongly urge the reader not to use others. Mr. Greenspun is not afraid of offending manufacturers which have brought immature and ill-conceived products to market. This attribute is lacking in many of the technical magazines which exist today because the editors of these magazines cannot offend their advertisers. Because of Mr. Greenspun's different approach to tool review and his overall description of web technology, he has built an immensely informational book. I strongly suggest "Database Backed Web Sites". -- Joe Nonneman, September 24, 1997 I've read your book Database Backed Web Sites and enjoyed it very much. It was the most entertaining technical book I've ever read. Come to think if it, it's the only technical book that I've ever read from cover to cover. One of my favorite parts was the case study of your Remind Me system. 
It was simple enough to follow, yet "real world" enough provide useful examples -- not just another tutorial that has you write six pages of code in order to print "Hello world" on the screen. -- J. Piscioneri, November 1, 1997 Database Backed Web Sites is the reality behind the hype! We have distributed this book throughout our organization because it not only gives details on Web publishing, but gives the reader a feel for the "social environment" of the Web. It is much more than a book about Web databases, although that is what is really driving Web development. Our managers as well as our technicians and programmers enjoyed the book because it is written in an enjoyable, humorous way. We not only use it as a reference, but we all find ourselves going back to read passages and compare our own experiences to incidents mentioned in the book. The only problem is the title. It doesn't really do justice to the wealth of information and enjoyment that any reader will experience. -- Mark Samis, November 4, 1997 Summary: if you want to [understand how to] build robust, useful, attractive websites, this is the book for you. There's much more to Greenspun's work than Dilbert/Dogbert's recycled truisms; don't compare them just because Greenspun's writing is witty and often very amusing. The difference is that Greenspun's book has *content*: real, useful, and very-well presented (and yes, the humor does help). In a the Web's mediascape full of hype, punditry, and PR writing passing itself as technical content, Greenspun's book is welcome relief. This guy is a practical problem solver who retains the sense of technical elegance often confined to "academic" (not necessarily university!) environments. He tells us how to build production systems with existing technology, but he does not mince words about the dismal state-of-the-art. 
The book covers from the nitty gritty but crucial detail of avoiding unix process forks to the clear but evidently (look around at the web) not obvious question of putting actual content on websites. That is, it not only covers the 'how', but also delves in the 'what' of web publishing, and profits from the understanding their synergy. -- Cris Pedregal Martin, November 4, 1997 Excellent book. I hope some marketing people read it. This book won't turn you into a DBA, nor will it give you all the technical skills needed to make a fancy web site with animation and other BS. What it will do is help you to build a web site that's actually useful. Unfortunately, this is sorely lacking out there on the web. Additionally, this book will teach you how to create a site that will attract both new and repeat readers. Philip gets a lot of points for pushing technology that is appropriate to the task. A good web site needs content. This content must be both growing and searchable. Content = data so what's better at maintaining data than a database(it's sad how many people out there are using C and flat files for this purpose)? I'd have given the book a full 10 but I'm taking away 1/2 point because Philip has spent a little too much time in academia. He loses another 1/2 point because he and his fellow CS people at MIT couldn't get a printer to work with NT(I've built PC's out of spare parts and gotten NT to work reliably without a lot of effort, and I went to UMass). -- Paul Wilson, November 4, 1997 If you buy one book on the web this year and you already know HTML, make this book the one, because you're not going to learn anything from the other ones anyway. The quote you're looking for is "Why are they coming to your site? 
If you look at most Web sites, you'd presume that the answer is 'User is extremely bored and wishes to stare at a blank screen for several minutes while a flashing icon loads, then stare at the flashing icon for a few more minutes.'" The book should really be called something like "Building Web Sites: How To Avoid Wasting Money". Clear ideas, no stupid hype, engaging writing. You may disagree with the concepts, but it's worth reading just to see what he has to say. This text includes a "brain turned on" discussion of just about everything dumb in the web, from four-page entrance tunnels to goofy multimedia presentations, to annoying interactive GIFs, to David Siegel. It doesn't tell you how much money you'll make putting up your dumb personal site, and is heavy enough to act as an "authoritative reference" in the general direction of managers that ask for stupid additions to the company site. Sure, you probably don't need a book to tell you all this - just a brain - but it's a fun read and you can look like you're engaging in "professional development" at work. See if you can get them to pay for it.

There's also a lot of discussion about how to interface a web server with a database. It's good, but it's not what makes this book great. Added bonus: the best diagrams I've ever seen in a computer book, drawn on napkins. The lack of translation into goofy line art made them a lot easier to read. The text, along with comments on the writing of the book, is available online, but I recommend just reading his comments and then buying the hardcopy. For one thing, you can read it anywhere rather than driving yourself blind reading a one-inch-thick book on your computer monitor. For another, you can hit people with it. This is one of the key qualities for a good web product, and the paper weight here really delivers. Sure, you could probably print out the entire web site and do the same with that, but the pages would fly everywhere afterwards.
I'd like to think that sending money in the direction of books like this would cause industry book quality to improve as well, but my experience in the computer book publishing industry (whose marketing plan most closely resembles dumping the cesspool on the audience and hoping everyone gets wet) tells me otherwise. At least give your money to Greenspun, who deserves it for writing such a good book (and pukey green color, too!).

-- Faisal Jawdat, November 5, 1997

Irreverent and invaluable... don't leave your homepage without it. Philip Greenspun is absolutely your best guide to the RDBMS-web frontier.

-- Teresa Ehling, November 5, 1997

I knew what a database was before I read the book. I think that was all. Now I manage a student (a sophomore) who handles our Multi-University Research Initiative relational-database-backed web site. It's not all Greenspun's input -- but like 90% of the wisdom came from Greenspun's book. I bought all his suggestions (mostly because he justified them), and they all panned out. Get AOLserver, get Solid, spend lots of time on the data modeling, don't write anything in C, don't fork a CGI process every time you want to query the database -- hang the database off the server... This could get long quickly. All I know (almost) about databases, I learned from Greenspun. But don't get me wrong; if you're an idiot, you won't appreciate the book -- so don't buy it.

-- John Kaufhold, November 14, 1997

The best general book on web publishing I have seen yet. A 900-word review can be found online.

-- Danny Yee, November 16, 1997

Many books purport to teach people how to use the unique and/or functional features of various products. Greenspun is honest enough to tell us what doesn't work. Especially in the fusion of web and database, where product-specific, overtechnical and underedited tomes teach people to build marginally useful sites handling hundreds of hits per day, Phil has given us a philosophy for designing much bigger, much more useful sites.
It's an added bonus that the ideas are expressed clearly enough that they are readable by less technical management, who often believe that if they spend enough money on industry-leading products, everything will work wonderfully.

-- Cam MacKinnon, November 17, 1997

There are a lot of choices when it comes to database site development, but "Database Backed Web Sites" really helped me to separate the advertising hype from the reality, and also provided invaluable historical background for anyone who wants to get a grasp on where software development has been and where it's going. The book was also well seasoned with experience-based cynicism that made me laugh while I learned the side of the story that the software companies don't want you to know.

-- Godfrey Alleyne, November 20, 1997

While there are many things to love about this book, I think the best part is that Philip places the emphasis on putting yourself in the user's shoes. He avoids the narrow focus of the "Teach Yourself To Be A Dummy In 21 Days" books. He talks about the why's as well as the how's. And he's funny as hell. I love this book!

-- David A. Buser, November 20, 1997

This book will increase your cool nerd status if you recommend it to others. It provides many useful insights into web page/server design. It clearly and concisely puts into words arguments for good web design. A friend of mine surfed the web site, went into a meeting, and promptly explained (with a cogent reason) why his company did not need a "tunnel" at the beginning of its web site. Prior to surfing the site, my friend just knew tunnels were wrong, but he had no argument why. After the meeting, he ordered the book (yes, this story sounds like a chain letter). The book advances strong reasons why coding from scratch is wrong when off-the-shelf tools will serve just as well.
For anyone who has built a huge mission-critical system starting from "hello_world.c", a CORBA grid or banking example, or Microsoft's "generic.c", scribble tutorial, or a sample app (you know who you are), this book provides the starting base for building a robust web-enabled database application. Many projects map nicely to this solution; in fact, the book demonstrates several different applications. By shamelessly stealing Greenspun's ideas and work, one can look wise and deliver working code faster than one's experience would ordinarily permit. Truly what the web is about.

-- albert s boyers, November 21, 1997

Elegant. Concise. Opinionated, offensive. Muhammad Ali takes on web publishing. This book does a very good job of explaining why there is no single approach to web publishing. Most books out there tell you how to "Publish on the Web using (insert your favorite platform here)". Philip Greenspun is the first author to come up with an articulated methodology for building web sites that add information value, and he does it by covering a myriad of different tools and explaining why and in which context they work. Although I do not agree with everything in the book (especially the author's claim that LISP is the best programming language ever), no design decision is made without a convincing explanation. Although the design philosophy fluff is pretty good by itself, he has also included enough code so that you can understand how the innards of web information systems work. A must-read for anybody who needs to do anything dealing with databases and the web.

-- Joao Paulo Aumond, December 12, 1997

There is seemingly an endless supply of books about 'The Web', so it's hard to get excited about any one in particular. Philip Greenspun's "Database Backed Web Sites: the thinking person's guide to web publishing", on the other hand, is very good.
As opposed to being a compendium of HTML tags and pre-made home pages "so you can be online tonight!", the book's aim is to make the reader aware that there's more to the web than cute Java scripts and silly animated GIFs. The main idea is that a static web site resembles a coffee-table book with pretty pictures: you look at it once or twice, then it's just taking up space. Philip explains how to create web sites with databases behind them to manage the content, provide interactive discussion forums where the users provide a lot of the content, and help analyze the server logs to see what your users are doing while visiting your web site. Instead of the step-by-step approach, teaching is done by case studies, which I consider a preferable approach, since it makes the reader think and forces understanding before something can be produced. There's plenty of light humor throughout the book, without getting too silly or distracting from the main purpose.

And the book doesn't come with a CD. This is actually a good thing, since the author makes what would be on a CD available on the Internet via FTP servers. This has the advantage that the material can be updated over time. The book includes a light discussion of Internet connectivity options, as well as a somewhat detailed description of the web server software and operating systems in use. While not complete (VMS, for example, is not mentioned), it's impossible to be current while publishing a book. Even a monthly magazine is out of date before it hits the stand. In sum, definitely recommended reading.

-- Javier Henderson, December 18, 1997

Very unique book. Presents viewpoints that one has to look very hard to find nowadays: accurate, not sugar-coated, and honest. The most useful *concept* in the book is that sticking to standards is good; a site that looks great but is all line breaks and font tags is a (mostly) useless site.
The most useful *tool* (for me) was actually learning how a relational database works; they're a lot simpler than one would think. I'd often heard of them but never even bothered to find out more, because they seemed overkill for the task. Also, AOLserver is the last thing I would have used, it being owned by the same people who brought us America Online, but it's really the best tool I've seen for the job if you really want to put up an online database; the only problem I have with it is that I've not taken the time to learn some of the more advanced configuration syntax, which isn't the server's fault. Which brings me to the third most useful toolset, Lisp and MetaHTML. (I'm including them together because MH is largely inspired by the former.) Lisp is another thing I'd heard about but thought was the wrong tool or overkill. It's really one of the easiest languages I've seen to learn, and certainly the most elegant.

-- Gavin Lewis, January 3, 1998

I can't think of anyone in the web industry -- engineers, content producers, advertising sales, etc. -- that wouldn't benefit from the remarkably common-sensical (yet somehow lucidly revealing) presentation that Greenspun's spun. The only problem is that the title makes it look like a techie book -- one for database techies. There is plenty of that covered, but for me (a non-database techie) there are so many gems sprinkled throughout that I recommend it to anyone, techie or not. I spent 2.5 years writing a book of my own, yet I feel this is one of the best books I've ever read. The only problem I could point out, besides the ill-conceived title, is that some of Greenspun's nomenclature gets overused and somewhat weary. A small price to pay for the fantastic light he sheds on common-sense web publishing.

-- Jeffrey Friedl, January 14, 1998

Overall a great "in the trenches" overview of web database design... almost as good as this web site!
I was looking for nuts and bolts about choosing servers, database engines, and scripting languages, and I wasn't disappointed (though I wish I had more programming background to feel like I was *really* understanding all the important points!). What the book gave me that I wasn't expecting was (1) valuable perspectives on what makes a site good, and what makes a site worth even doing; (2) no-punches-pulled skewering of both hardware and software companies for design flaws, bad customer service, and other sins; (3) really funny anecdotes and wicked sarcasm... I was laughing out loud several times just reading it to myself. A really good book, no complaints.

-- Dan Doernberg, January 22, 1998

It's a good book; it may even be a great book. It's an introductory book about a single subject (hooking a database into a web server) that frequently wanders outside the allotted subject into related matters (what makes a web site good, how to administer a web site). It is a practical manual (any programmer who reads this book will have a solid grasp of the subject and will be ready to go out and hook databases to web servers with elan) without having much in the way of code or boilerplate recipes and without being product-specific. It's written from experience -- there are lots of warnings, lots of examples drawn from real projects, lots of information about avoiding pitfalls. And yet, it's short, pithy, and an easy read. In short, it's a roadmap to the technology, covering the obvious important issues (CGI scripts, connecting to databases) and the ones that are important but that a first-timer might easily overlook (for example, Chapter 7: Learning from Server Logs -- 20 pages that explain what logs do and why they're useful. Good info, nicely presented).

-- William Grosso, January 31, 1998

Database Backed Web Sites is excellent.
The first publication I've read that helps me understand why someone would visit a Web site, and how to optimize the design of your site to deliver value to the users. For a non-technical reader hoping to improve, this book gave me the "why", not just the "how" or "what".

-- Kenneth J. Cook, February 13, 1998

What can I say that hasn't been said? You succeeded in presenting your topic outside of the realm of marketing hype. Yet you maintained a keen sense of wit and irreverence throughout the book.

-- John Zachary, March 13, 1998

Those of you who really want a copy of Philip's previous book in German have probably been frustrated. The publisher broke the link -- how annoying: "Fehler 404 -- Leider konnte die angeforderte Seite auf unserem Server nicht gefunden werden" ("Error 404 -- Unfortunately, the requested page could not be found on our server"). Fortunately, a quick search reveals that they still carry Datenbankgestützte Web-Sites in their current catalog. The Idea Factory is out of print, at least according to Amazon.

-- Frank Wortner, October 21, 1999
Zend Soap

In this article I will introduce you to the Zend_Soap component of the Zend Framework. This component allows you to easily create and consume SOAP web services. We will create a simple web service with several methods using Zend_Soap_Server, and consume the service with Zend_Soap_Client. This article requires Zend Framework, downloadable from http://framework.zend.com. At the time of writing, the current version of Zend Framework is 1.9.0.

The way SOAP typically works is that the SOAP client (which may be implemented in any language, such as PHP, ASP, Perl, Python or otherwise) connects to the web service endpoint and requests its WSDL (Web Services Description Language) file. The WSDL file tells the client about the different function calls that are available, as well as a description of any complex types that are used in addition to the basic data types (such as string, integer or Boolean). When we create our web service, we must not only handle calls to the various methods that we offer, but we must also be able to send a WSDL file to clients. Thankfully, Zend_Soap can automate the generation of WSDL files, as I'll show you later in this article.

In the next three sections I will cover the key steps involved in creating a web service using Zend_Soap_Server. For the purposes of this example the web service will be located at http://example.com/webservice.php.

Our first step in creating a web service is to create a single PHP class that contains the methods you want people to be able to execute using SOAP. Each public method that belongs to this class is a method that can be executed via your web service.

Tip: If you want public methods in your class that aren't auto-discovered (and thereby included in your WSDL), prefix the function name with a double underscore. For instance, public function __myHiddenPublicFunction() { … }.

Each method in this class must be documented using the PhpDoc syntax, for the purposes of dynamically generating the WSDL file.
At minimum this requires specifying input parameters (using @param) and the return data type (using @return). The reason for doing this is for generation of the WSDL file.

Let's begin by creating a simple class with two methods. The first method (getDate()) will return the current date and time of the server. This method has no arguments and returns a string. The second method (getAgeString()) accepts a name and an age and returns a nicely formatted string using these values. This method has a string argument and an integer argument and returns a string.

If the URL of your web service is http://example.com/webservice.php, then clients will expect to be able to retrieve the WSDL of your web service at http://example.com/webservice.php?wsdl. As such, we must handle requests to this URL. When somebody hits the URL we will invoke the WSDL generator that comes with Zend_Soap in order to generate the WSDL file, then send it back to the client. The class we use to generate the WSDL is Zend_Soap_AutoDiscover. All that is required is that we pass the name of the class we created in the previous section (MyWebService) to the setClass() method of our Zend_Soap_AutoDiscover instance. Listing 2 shows the code we use to achieve this. We will improve on this code shortly. Note that we still need to include our class in the script so that Zend_Soap_AutoDiscover knows about it.

Figure 1 shows how the WSDL file might look in your browser. If you were to now visit the webservice.php file in your browser you would see the generated WSDL file (which is simply an XML file that web service clients know how to interpret). As noted previously, we only want to send the WSDL when the client includes a query string of "wsdl". To check for this we use the $_SERVER['QUERY_STRING'] variable. Now when you request the webservice.php file nothing will be shown in your browser, but if you request it as webservice.php?wsdl then the WSDL file will once again be displayed.
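The code listings are not reproduced in this copy of the article, so here is a sketch of what the service class and the WSDL-serving script described above might look like. The class, method, and file names follow the article's text; the method bodies (the date format and the wording of the age string) are my own guesses, since the article doesn't show them.

```php
<?php
// MyWebService.php -- the class whose public methods become SOAP calls

class MyWebService
{
    /**
     * Return the server's current date and time.
     *
     * @return string
     */
    public function getDate()
    {
        return date('Y-m-d H:i:s');
    }

    /**
     * Return a nicely formatted string using the given name and age.
     *
     * @param  string  $name
     * @param  integer $age
     * @return string
     */
    public function getAgeString($name, $age)
    {
        return sprintf('%s is %d years old', $name, $age);
    }
}
```

The WSDL-serving skeleton, which the article improves on in the next step, would then be roughly:

```php
<?php
// webservice.php -- serve the WSDL when requested as webservice.php?wsdl
require_once 'Zend/Soap/AutoDiscover.php';
require_once 'MyWebService.php';   // AutoDiscover must know about our class

if (isset($_SERVER['QUERY_STRING']) && strtolower($_SERVER['QUERY_STRING']) == 'wsdl') {
    $autodiscover = new Zend_Soap_AutoDiscover();
    $autodiscover->setClass('MyWebService');
    $autodiscover->handle();
} else {
    // Actual SOAP requests are handled here (filled in in the next section)
}
```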
Another point of interest is that if you add a new method to the MyWebService class, it will automatically be added to the WSDL file, thereby making it available to all clients.

Next we must handle any requests to the web service. This involves filling in the "else" block from the previous listing. To achieve this, we use the Zend_Soap_Server class. Just like Zend_Soap_AutoDiscover, it needs to know the name of the class that holds the web service methods. Additionally, when we instantiate the Zend_Soap_Server class we need to pass it the URL of the web service's WSDL. This is so the server knows which methods to handle, as well as its data types (Zend_Soap_Server doesn't do any auto-discovery of the class you pass to it with setClass()). You can either hard-code the URL of the WSDL file, or you can auto-generate its location as in Listing 4.

Note: This URL auto-generator is somewhat simple. You may need to account for different port numbers or for using HTTPS.

Listing 4 shows how to instantiate the Zend_Soap_Server class, set the WSDL, set the PHP class and then handle the request. That's all there is to it. The next step is to actually consume the web service, as covered in the next section.

Now that we've created our web service (at the location http://example.com/webservice.php), we can create a client script to consume it. For this, we are going to use the Zend_Soap_Client class. You could use PHP's built-in SoapClient class (or even use a programming language other than PHP), but since this is a series on the Zend Framework we'll use the Zend_Soap_Client class. Consuming the service is simply a matter of instantiating Zend_Soap_Client with the URL of the WSDL file (http://example.com/webservice.php?wsdl) as the only argument, then calling the functions from the PHP class we created earlier as if they were local function calls.
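Since Listings 4 and 5 are likewise missing from this copy, here is a sketch of the completed endpoint and a matching client, following the description above. The WSDL-location builder is the "somewhat simple" auto-generator the note refers to, and the client's argument values ('Jane', 30) are made up for illustration.

```php
<?php
// webservice.php -- complete: serve the WSDL, or handle a SOAP request
require_once 'Zend/Soap/AutoDiscover.php';
require_once 'Zend/Soap/Server.php';
require_once 'MyWebService.php';

// Simple auto-generated WSDL location; doesn't account for HTTPS or
// non-standard ports, as the article's note points out
$wsdlUrl = 'http://' . $_SERVER['HTTP_HOST'] . $_SERVER['SCRIPT_NAME'] . '?wsdl';

if (isset($_SERVER['QUERY_STRING']) && strtolower($_SERVER['QUERY_STRING']) == 'wsdl') {
    $autodiscover = new Zend_Soap_AutoDiscover();
    $autodiscover->setClass('MyWebService');
    $autodiscover->handle();
} else {
    // The server reads method names and data types from the WSDL
    $server = new Zend_Soap_Server($wsdlUrl);
    $server->setClass('MyWebService');
    $server->handle();
}
```

The consuming script can then live anywhere with network access to the endpoint:

```php
<?php
// client.php -- may run on a completely different server
require_once 'Zend/Soap/Client.php';

$url    = 'http://example.com/webservice.php?wsdl';
$client = new Zend_Soap_Client($url);

// These look like local calls, but each one is a SOAP request to $url
echo $client->getDate(), "\n";
echo $client->getAgeString('Jane', 30), "\n";
```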
Note: There are several options that can be customised when instantiating Zend_Soap_Client, but to keep things simple I won't cover them here. You can read more at http://framework.zend.com/manual/en/zend.soap.client.html.

Listing 5 shows how we make use of the web service calls. The PHP script can live on a completely different web server and URL to your server. After all, that is the point of web services! In this code you will see we call $client->getDate() and $client->getAgeString(). When these functions are called, internally Zend_Soap_Client is communicating with the SOAP server specified in $url. While this is a somewhat simplified implementation, it works, and it should demonstrate how to implement both the server and client aspects of a SOAP web service!

As stated previously, this is a somewhat simplified implementation of web services. There are several things you can add to make this a more advanced implementation. These may include:

• Complex data types, such as arrays and objects.
• Type mapping (automatically mapping SOAP data types to local PHP classes on both the server and client). This allows easier manipulation of data passed to and returned from web service calls.
• Authentication. There are two levels of authentication: the first protects who can access the web service directly; the second lets users authenticate via the web service so they receive an elevated level of access.
• Error handling. We can't always assume data passed to web service calls is valid. The server needs the ability to send errors, and clients need to be able to detect errors and handle them accordingly.
• Caching. Since every call to a web service can potentially be quite expensive (in terms of time and processing power), it can be extremely beneficial to implement a caching mechanism that allows you to re-use the response from web service calls without having to communicate with the SOAP server.
• WSDL file caching.
In this article I haven’t covered anything to do with caching of WSDL files. By default, the server and client will cache these files, but you can control this behaviour when instantiating Zend_Soap_Server and Zend_Soap_Client. Hopefully in an upcoming article I will cover each of these aspects. In this article I introduced you to the Zend_Soap component of the Zend Framework. I showed you how to create a basic web service with Zend_Soap_Server and then how to consume it with Zend_Soap_Client. Part of this process included generating a WSDL file to describe all of the functions in the service. We achieved this by using Zend_Soap_AutoDiscover and PhpDoc. Further reading: Leave a Reply
For example, take stackexchange.com: without asking the site owner or searching for information about how the site was developed, is it possible to know what language is used on the back end? The site's URLs don't seem to have a file extension -- for example .php, which would indicate the site is developed in PHP -- so without the extension, how can I know?

It should be noted that the extension of a requested file by URL need not map directly to a file on the filesystem. One can quite easily map an extension like .php to a CGI script written in C or a servlet written in Java. – maple_shaft Jun 4 '12 at 11:12

@Jeroen Community Wiki is not supposed to be used as you propose. I know it was commonly abused as such in the past, but let's try to forget about that... – Yannis Jun 4 '12 at 11:47

Strictly speaking it is impossible. Most any language can completely emulate another language -- including any "tell-tale" signs you may be looking for. – emory Jun 4 '12 at 13:13

From my naive perspective, I can't see an application of this information. What would you do with it? – tehnyit Jun 4 '12 at 13:29

Also, finding sites vulnerable to exploits. – Erik Reppen Jun 4 '12 at 14:46

9 Answers

There are indicators. Some are easier to find, others are harder.

• file extensions: .php indicates that the site is written in PHP, .asp indicates classic ASP, .aspx indicates ASP.NET, .jsp indicates Java JSPs, ...
• cookie names: JSESSIONID is a widely used cookie name in Java servers
• headers: some systems add HTTP headers to their responses
• specific HTML content:
  • patterns such as lots of div wrappers with a consistent class-naming scheme, as used by CMSes like Drupal
  • comments in the HTML or meta tags in the head directly/indirectly indicating tool usage
• default error messages or error page design (e.g.
pinging a fake URL to see their 404)
• sometimes comment tags are placed in the page for versioning purposes, which provide a clue
• ...

But all of those can be removed/changed/faked. Some are easier to change than others, but none are 100% reliable. There are various reasons to change those indicators:

• You change the underlying technology but don't want to change your URLs
• You want to give as little information about your technology as possible
• (related to the previous) You'd rather not be the first stop for the script-kiddie bus when known platform-wide vulnerabilities are discovered/publicized
• You want to seem "in" (even though that currently means having extension-less REST-style URLs)
• ...

The PHP equivalent to JSESSIONID is PHPSESSID. – Yannis Jun 4 '12 at 10:49

There are numerous tools out there doing the analysis, for example wappalyzer.com – Pumbaa80 Jun 4 '12 at 11:37

Just tested Wappalyzer on a Django site - the only thing it detected was jQuery and Google Analytics. And on a PHP site with an in-house framework, it detected nothing at all. – vartec Jun 4 '12 at 13:27

+1 for "all of those can be faked" – FrustratedWithFormsDesigner Jun 4 '12 at 14:03

@OP, I would definitely target session cookies as the first way to try and sort out what's in use in an automated system. That's one thing the less obvious frameworks are likely to consistently show, but as said, nothing's 100% reliable. – Erik Reppen Jun 4 '12 at 14:28

Well, there is the humans.txt file that a developer can put up on the domain that gives some information about the site's development, maybe who worked on it and what standards or tools were used. If they want you to know those kinds of information, they could/should put it there. However, just like anything else this is optional, so it can't be guaranteed to inform you either.
Check out humans.txt.

+1 for humans.txt – Wyatt Barnett Jun 19 '12 at 11:53

No, it could be rather hard, if not impossible, if the webmaster doesn't want to disclose it. There are some characteristic tells for a few frameworks, but they can be hidden.

• file extensions: there is no real reason to use the standard ones, and most modern MVC frameworks use URL routing anyway. So unless the site has been around for some time, you're probably not going to see any (e.g. Stack Exchange doesn't use the .aspx extension);
• session IDs: for example, PHPSESSID is the default for PHP, but it can be easily overridden;
• headers with web server and scripting language versions: these can be turned off or even faked.

Stuff that's harder to hide:

• PHP handles multiple values for the same query string variable by appending [] to the name, so you'd see something like ...?var[]=1&var[]=3&.... AFAIK, it's the only web framework which handles it that way.
Trivial example: consider a "Hello World" page; it'd be extremely difficult to figure out what framework / language was being used on the back-end (assuming the basic stuff like session cookies are manually set or not in use). However, the point of frameworks is to save you having to re-implement functionality, and to make you work in a standardised way. Almost all frameworks have their specific little tell-tales which will give them away, if you look close enough. As others have pointed out, it is possible to try to hide these, by using configuration or re-implementing various standard features. Nevertheless, I'd argue that for large sites, it'd be extremely difficult to completely hide everything, and even if you accomplished that, you'd be using very little of your framework. In summary, I'd say it's almost always possible to get a very good idea of what's being used underneath (with some careful examination and prodding). Hiding the framework used is possible, but quickly becomes infeasible for large sites. The previous answers have some good examples of various tell-tales that frameworks and languages have. I'd like to add that various view engines have specific whitespace-related behaviour which can be used to identify them. The Razor engine used in MVC3+ has some fairly specific quirks which could be used to identify it, or at least, narrow down the list of suspects (again, you can side-step it, but then, are you using it?). share|improve this answer I don't know if this specifically answer your question but there is a tool that was really helpful to me: Wappalyzer. It is a Firefox/Chrome extension that uncovers the technologies used on websites. It detects content management systems, web servers, JavaScript frameworks, analytics tools and many others. I know is not precisely what you are looking for but it give you a very close idea of what a site use. 
This is what it shows for programmers.stackexchange.com share|improve this answer Ha ha, I visited my blog and it says Apache 2/PHP 5.5.9, but I'm pretty sure it's roll-your-own ASP.NET MVC blog, because I made it. Because for trolling reasons I've changed 'X-Powered-By: ASP.NET' response header to PHP. –  Lars Apr 14 '14 at 6:52 It is possible to write a site in such a way, that no clues about the server technology will be visible to the client. However, when someone uses some frameworks, such as IceFaces for Java, it is practically impossible do do because you'll see something like that in your requests: Much of other frameworks have their characteristic stamps in either page body or requests/responses. Find them, google and you'll have an answer. However, in each language, if you choose to create HTML from scratch (in Java world an example would be velocity templates) or choose pure AJAX way, where server returns/accepts only JSON messages, and client is entirely in JavaScript - a hard way, until you cause uncatched exception that reveals the technology under. share|improve this answer On sites which uses full-blown framework or CMS, sometimes you can try querying the admin page, you'll be presented with a login box and identify what framework it came from because most people don't reskin the admin template. For instance if your site is example.com, try going to example.com/admin/ or example.com/wp-admin/ (wordpress). share|improve this answer It is possible to determine if a site was built with Asp.Net web forms. From the source code you'll see stuffs like __doPostBack(), __EVENTTARGET, __EVENTARGUMENT, ctl00_ Another way is looking at the naming conventions of controls in the source. That is if the developer followed the convention. Like in asp.Net, names like userNameLabel, userNameTextBox (sometimes txtUserName, lblUserName) is common. Jsp, php developers should have their own conventions too. 
In general, though, it is not easy to determine with certainty.

No, it is not always possible to find the language used on a website by viewing the source code of the page and searching for traces of it, because more than one language may be used to build the site, partly to provide higher security.
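The header- and markup-based tell-tales described in these answers can be sketched as a small script. The fingerprint table below is a rough, incomplete illustration, and, as the answers note, every one of these markers can be spoofed or stripped:

```python
# Guess a web stack from response headers and HTML tell-tales.
# The fingerprint table is illustrative and incomplete, and each marker
# can be spoofed or stripped by a determined site operator.

FINGERPRINTS = [
    ("ASP.NET WebForms", lambda h, b: "__VIEWSTATE" in b or "__doPostBack" in b),
    ("PHP", lambda h, b: "php" in h.get("x-powered-by", "").lower()
                         or "phpsessid" in h.get("set-cookie", "").lower()),
    ("Java (JSP/servlet)", lambda h, b: "jsessionid" in h.get("set-cookie", "").lower()),
    ("WordPress", lambda h, b: "/wp-content/" in b or "/wp-admin/" in b),
]

def guess_stack(headers, body):
    """Return the names of all fingerprints matching a response."""
    headers = {k.lower(): v for k, v in headers.items()}
    return [name for name, match in FINGERPRINTS if match(headers, body)]

# A response that looks like a WebForms page behind a spoofed PHP header:
headers = {"X-Powered-By": "PHP/5.5.9"}
body = '<input type="hidden" name="__VIEWSTATE" value="..." />'
print(guess_stack(headers, body))  # ['ASP.NET WebForms', 'PHP']
```

Note how the spoofed header and the markup disagree, which is exactly the situation described in the comment about changing 'X-Powered-By'.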
Komen Coverage Makes Ross Douthat Sad

As you might expect, Ross Douthat is unhappy about the backlash against the Susan G. Komen for the Cure Foundation's decision to defund Planned Parenthood. His argument rests upon assertions of media bias that are shaky since, as Sarah Kliff notes, it's likely that media bias wouldn't have been a factor in Komen coverage precisely because of the political leanings of the average journalist. While it's plausible to assume the typical journalist is more socially liberal (as well as more economically conservative) than the median public opinion, I would argue that this is less true with respect to abortion than with other social issues. Punditry dismissing the importance of Roe v. Wade and reproductive rights, in particular, is so common as to be banal.   In addition to this argument about media bias, Douthat also cites public opinion data, focusing on the fact that "as many Americans described themselves as pro-life as called themselves pro-choice" and that a "combined 58 percent of Americans stated that abortion should either be “illegal in all circumstances” or “legal in only a few circumstances.” John Sides objects to Douthat's cherry-picking: One cannot divide the public into “pro-life” and “pro-choice” camps based on the kinds of survey questions he cites. These questions fail to capture the true complexity and the ambivalence in most Americans’ attitudes toward abortion.  Most Americans approve of abortion in certain cases and oppose it in others.  Juxtapose, for example, abortion in the case of rape with abortion for the purpose of sex selection.  At best, a small minority—20 percent at most—would approve of or oppose abortion in every case. While I agree that Douthat's use of public opinion is tendentious, I think the problems are different and worse than the ones that John cites.
The most obvious problem is that Douthat combines two survey response categories to create what looks like an anti-choice majority, adding the 20 percent who want abortion banned to the larger number who believe that abortion should only be legal under "a few circumstances."  Since these "circumstances" aren't specified and presumably mean many different things to different people, to combine the two numbers is fundamentally misleading. I agree with John that many people have an intuitive sense that abortion should be legal for the "right reasons" but not for the "wrong reasons," which is reflected in the public opinion data showing a great deal of support for abortion only being legal in certain unspecified circumstances. The problem is that these distinctions are completely irrelevant to public policy. There's no way of crafting abortion laws that make only those abortions obtained for certain reasons illegal. "Centrist" abortion regulations such as waiting periods or requiring the approval of panels of doctors don't ensure that women will get abortions for the "right reasons"; they just produce contexts in which affluent women can obtain abortions for any reason and poor women—especially those outside major urban centers—find it difficult or impossible to obtain abortions.    I don't think "women should only be able to obtain abortions if their reasons are good enough" is a normatively attractive basis for abortion policy, but whatever one thinks of the argument, it's irrelevant to crafting policies. Getting selective moral judgments mixed up with abortion policy confuses matters in ways that work to the benefit of supporters of abortion criminalization. A fair fight between the actual policy alternatives would strongly favor pro-choicers, as the public's overwhelming support for Roe v. Wade reflects.
@TechReport{export:70616,
  abstract    = {This paper presents Trantor, an architecture for extensible
                 wireless LANs. Trantor enables rapid innovation by removing
                 standardization from the path of introducing new technologies.
                 This is achieved largely by moving the intelligence away from
                 wireless clients and into the infrastructure. In addition to
                 providing extensibility, this approach can also help improve
                 overall network performance through the use of global and
                 historical information. Trantor enables network administrators
                 to impose local policies thereby easing the task of wireless
                 LAN management. In this paper we outline the motivation,
                 vision, and architecture of Trantor.},
  author      = {Rohan Murty and Jitendra Padhye and Alec Wolman and Matt Welsh},
  institution = {Microsoft Research},
  month       = {August},
  number      = {MSR-TR-2008-107},
  pages       = {9},
  title       = {An architecture for extensible wireless LANs},
  url         = {http://research.microsoft.com/apps/pubs/default.aspx?id=70616},
  year        = {2008},
}
Rhodri Marsden
Journalist and musician Rhodri Marsden has been addressing common technology problems by stripping away the jargon and enlisting the help of readers in his Cyberclinic column in The Independent for the past two years.

Wikipedia bans Scientologists – but should they?
Posted by Rhodri Marsden • Tuesday, 2 June 2009 at 10:44 am

When I saw the news the other day that Wikipedia had banned contributions from IP addresses used by the Church of Scientology in response to them relentlessly pushing a pro-Scientology agenda on the website, my first reaction was that it was fair enough. True, stories like this one don't make me feel well-disposed towards Scientology, but this isn't about the existence or otherwise of Operating Thetan Levels – it's simply about repeated violation of the terms of service of a website in order to further one's own agenda. If you ignore the terms of service, surely it's right that the service is withdrawn?

I was called on this pretty quickly by someone on Twitter, who questioned whether this was really the most progressive move from Wikipedia, and raised the inevitable issue of censorship. Ideally, of course, everyone would be free to furiously bat their various views on burning topics backwards and forwards across the internet until the end of time; those patient enough to get involved could wade in, and the rest of us could happily ignore them all and get on with our lives. But the problem with these kinds of slanging matches taking place on Wikipedia is that, sadly, it matters. For whatever reason – probably through skilful search engine optimization techniques, but I daresay conspiracy theorists have their own ideas – Wikipedia pages are ranked incredibly highly on Google.
In fact, if you want to find a Wikipedia page, you may as well search for it on Google to save yourself some time; a study a couple of years ago revealed that over 96% of Wikipedia's pages rank in Google's top 10 when you search for the titles of those pages. So despite all the things we know about Wikipedia – that it's an unreliable source of information, it's prone to being vandalised and edited by people who don't know what on earth they're talking about – it has become the premier source of knowledge on the web. So no wonder that organisations are keen for information about them on Wikipedia to be as glowing and positive as it might be on their own website.

Let's be honest: if you see something less than positive or even untruthful written about you online, and you have the opportunity to change it, you're going to change it. I've removed such stuff from my own Wikipedia page (which, I hasten to add, I didn't create, but thank you to whoever did) and while it has become fashionable for the media to excitedly expose stories of people altering their own Wiki pages, the fact is that the practice is endemic. The small group of volunteers who police Wikipedia aren't going to be able to detect all such activity, and in many cases these changes will actually improve the reliability of the information thereon.
The real question is: is Wikipedia more useful to us if the moderators just allow us to watch an article become a battleground, such that we can arrive at our own opinions? Or will the majority of people visiting that article just believe whichever version of the truth happens to be present on the page at that particular moment? If it's the latter, the Wiki moderators job suddenly becomes incredibly onerous. If I were them, and looking at the hoohah surrounding the Scientology episode, I'd be tempted to believe the former, and just let everyone get on with it. caramel_betty wrote: Tuesday, 2 June 2009 at 11:43 am (UTC) I'm not sure how you've made that leap of logic. If someone is adding a more rounded point of view to an article - linking to critical articles/news reports etc., removing enthusiastic gushing - that's helpful because it's what Wikipedia wants. If someone removes that sort of material, it's unhelpful. Nor can I see that every item that will have been reverted will have been written by a rabid, frothing anti-Scientologist intent on completely eradicating every trace of the pro-Scientology side of any article. rmarsden wrote: Tuesday, 2 June 2009 at 11:50 am (UTC) Absolutely. But Scientology is the kind of issue that will inevitably attract edits from people wound up into fury about the idea of "The Space Opera" and intent on demolishing it. I'm ashamed to say that I haven't had the patience to explore in depth the to-and-fro of editing of Scientology articles, lest I get wound up into fury myself. percyprune wrote: Tuesday, 2 June 2009 at 11:59 am (UTC) I'd be cautious of assigning equal weight to the pro- and anti-'s in this case. Yes, it's easy to declare 'a pox on both their houses', but that's just lazy journalism. We see too much 'he said, she said' reporting these days with not enough analysis as to why one argument is stronger than another. Presumably because the latter takes some time and effort. 
Not all arguments have equal weight, and in this case the anti-'s are likely to have more evidence and perspective on their side than the pro's.

rmarsden wrote:
Tuesday, 2 June 2009 at 12:46 pm (UTC)
Um. I'm not talking about the myriad reasons why Scientology is nonsense because it's not a discussion I'm remotely interested in having. That's not laziness. I'm just more interested in Wikip's role in mediating edit wars, the resources it takes up, whether them getting involved is censorship, and whether they should let us use their site as a forum for discussion.

rmarsden wrote:
Tuesday, 2 June 2009 at 12:50 pm (UTC)
NB According to a commenter on Caitlin Fitzsimmons' post on The Guardian site today (can't link, or indeed check its veracity, cos I'm standing on a road in Mile End) Wikipedia also cracked down on anti-Scientology edits.

mcgazz wrote:
Tuesday, 2 June 2009 at 05:22 pm (UTC)
This is true of many things, and you're on the money as regards lazy, "balanced" journalism (although, in the case of the BBC, it's fear more than anything else).

Why not?
ronyromaniello wrote:
Tuesday, 2 June 2009 at 08:53 pm (UTC)
Wikipedia folks, fear not. You've done well! Give them no leeway.

Signing in
justinbrodie wrote:
Wednesday, 3 June 2009 at 02:04 pm (UTC)
Surely the thing that makes Wikipedia so easy to abuse is the fact that anyone can make changes anonymously. If you had to register to make changes, giving a valid email address, it would make you think twice about what you did on the site. As long as the site was still free to access to look at, and there was no charge to register, the spirit of it would be unchanged. I once reversed a piece of vandalism on Wikipedia on the page of one of the opinion writers in the Independent after he had written a particularly controversial piece. Whilst I disagreed with his views, it did not warrant such vandalism!
Compute quadratic H∞ performance of polytopic or parameter-dependent system

[perf,P] = quadperf(ps,g,options)

The RMS gain of the time-varying system

    E(t) dx/dt = A(t) x + B(t) u,    y = C(t) x + D(t) u        (2-20)

is the smallest γ > 0 such that

    ||y||_L2  ≤  γ ||u||_L2                                     (2-21)

for all input u(t) with bounded energy. A sufficient condition for Equation 2-21 is the existence of a quadratic Lyapunov function V(x) = x^T P x, P > 0 such that

    for all u in L2:    dV/dt + y^T y − γ² u^T u < 0

Minimizing γ over such quadratic Lyapunov functions yields the quadratic H∞ performance, an upper bound on the true RMS gain.

The command [perf,P] = quadperf(ps) computes the quadratic H∞ performance perf when Equation 2-20 is a polytopic or affine parameter-dependent system ps (see psys). The Lyapunov matrix P yielding the performance perf is returned in P.

The optional input options gives access to the following task and control parameters:

• If options(1)=1, perf is the largest portion of the parameter box where the quadratic RMS gain remains smaller than the positive value g (for affine parameter-dependent systems only). The default value is 0.

• If options(2)=1, quadperf uses the least conservative quadratic performance test. The default is options(2)=0 (fast mode).

• options(3) is a user-specified upper bound on the condition number of P (the default is 10^9).

See Also
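For readers who want the underlying matrix inequality: in the special case of a single constant system with E = I, the Lyapunov condition above is equivalent, via a Schur complement, to the standard bounded-real LMI below. This is a sketch of the textbook result only; the polytopic and parameter-dependent forms actually solved by quadperf are more general.

```latex
\text{Find } P = P^{T} \succ 0 \ \text{such that} \quad
\begin{bmatrix}
A^{T}P + PA & PB            & C^{T} \\
B^{T}P      & -\gamma^{2} I & D^{T} \\
C           & D             & -I
\end{bmatrix} \prec 0 .
```

Minimizing γ subject to this constraint matches the "smallest γ over quadratic Lyapunov functions" description in the text.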
xterms - run multiple xterms and remote logins

 xterms [-host host] [-name name] \
   [-rsh] [-rlogin] [-telnet] [-TELNET] [-ssh] [-height height] \
   [-display display] [-fn font] [-u|-U] [-C] [-sleep] \
   [-c config] \
   where_and_what where_and_what ...

Does one of (first is default) ...

* runs local xterms at specified places on the display, & logs into the specified host (usually) in all of them via ssh (default) or telnet or TELNET or rsh

* runs remote xterms via ssh to the specified host (usually) at specified places on the display (remember .rhosts &

It can also be used simply to start some xterms. All the arguments in the 3rd row above are passed to the xterms. A where_and_what argument specifies (indirectly) the geometry of the xterms and (optionally) some environment for the xterms. Much of the behaviour of xterms is controlled by the config file $HOME/.xterms, or as specified with -c. Finally $HOME/.screens is used, via the Perl module X11::Screen, which has its own documentation. The (xterms) config file is a perl fragment returning a reference to a hash (associative array). Most keys are hostnames, but the empty key ('') and the key containing just an underscore have special meanings. The empty key means the default host. For hostname & empty keys, the value is again a reference to a hash with either hostname or empty keys. The outer hostname is the name of the local host (where xterms is run), and the inner hostname the name of the remote host (as specified in the -host parameter or the name used to run $cmd). The inner values are hashrefs specifying properties of the login specific to the particular local & remote hosts.
The keys recognized are: address IP address, in case the remote host has no name between log into this host instead font used for xterm argument height xterm height (see below) name xterm instance name (see below) noxenv XENVIRONMENT not set for xterm port instead of normal telnet port rcmd telnet or TELNET or rlogin or rsh or ssh (default) user remote user name, if different to local The point of 'between' is that you may only be able to reach the desired host from an intermediate host. Several of these correspond to command line arguments. See below for how empty hostnames are handled. In the simplest cases the where_and_what argument only specifies where. It is, or expands to, a string of characters each of which specifies an xterm geometry via $HOME/.screens (above). For example, the command xterms a-d starts 4 xterms, with the geometry specified for the keys a, b, c, d in $HOME/.screens. The window height can be overridden by -height or a height attribute in the config file. If where_and_what consists solely of dashes (-) then one xterm is created for each dash, with unspecified geometry. For no good reason, this is hardwired into xterms (maybe could have been left to $HOME/.screens). The underscore key in the config file allows for more information being provided by where_and_what arguments. Here is an example: xterms aar bbx cci will start 6 xterms, 2 with geometry a, 2 with geometry b, & 2 with geometry c. The first 2 will have the environment variable CLASS set to readme, and the arguments '-name --classes--'. The second 2 are similar. The 3rd 2 will have the environment variable INBOXES set to c (from %p, p for place), & the arguments '-name ---inboxes---'. The '+' in the example is treated as a separator, the where_and_what argument being split into 2 words. The 1st word is handled as above, and the second is set as the value of the environment variable ABB. 
The command xterms ar+www will start an xterm with geometry a, with CLASS set to readme and the environment variable ABB set to www (from %w, w for word). The application name (xterm -name) (which is used for example by twm for icon manager names) is generated by padding out a base name to $maxname characters with dashes before & after. The base name is decided rather historically as follows. For xterms associated with an environment setting character, the base name is the name (not value) of the environment variable. If -name is used, then all other base names have the name thus specified. If not, then the remote host name is used, failing which 'xterm'. The iconname is explicitly set to the -name value, lacking which to the remote host name, if any. The shell can override this by sending xterm control sequences, so this is most useful when the remote host is a router. If there is no -name or remote host, then nothing is done, leaving it to xterm, which sets it to 'telnet' or such. The default DISPLAY is taken from the environment, or :0.0, or xxx:0.0 where xxx is the local host name, etc. Normally passed to remote host, but not with -D. If no host is specified then no rlogin or such is done. The current directory is changed to $HOME before the xterms are started. If the command is called by a name different to xterms then it's taken to be the host name (cf Note below). An exception to the above is that if the entry in the config file has a 'between' key, whose argument is a host name, then the rlogin etc is done to that host instead. This is normally used when the final target host is only reachable from an intermediate (the 'between' value). The rlogin or such from the intermediate host to the final host is not done by xterms. The host argument can have the form user@host. The remote user name can also be set in the config file. The font is passed as the parameter of xterm's -fn flag. It can be set in the config file (see above) or on the command line. 
The defaults are: The latter (unicode) is used with -U or if specified in $HOME/.xscreens (via X11::Screen). In this case, -u8 is passed to xterm. With -u a smaller unicode font is used, but it's hard to read. With -b (big font), defaults are: With for example -2 there's a pause of 2 seconds between starting each xterm. The reason for this is that they are started in background, & on a slow machine the window manager can occasionally learn of them out of order, which may be undesirable. All the options can be abbreviated to one letter, or 2 for -rlogin & -remote. The config file can specify some defaults (rlogin/telnet/TELNET/ssh, name, font) through empty keys ('') in place of hostnames, as for example in ... { ... local1 => { remote1 => { ... }, remote2 => { ... }, '' => { ... } local2 => { remote1 => { ... }, remote3 => { ... }, '' => { ... } '' => { remote2 => { ... }, remote3 => { ... }, '' => { ... } Here local1, remote1, etc are hostnames, & the { ... } contain attribute definitions. The order of preference for taking attribute values from the config file is: (1) both local & remote hostnames nonempty (2) remote hostname nonempty, local hostname empty (3) local hostname nonempty, remote hostname empty (4) both local & remote hostname empty That is, for a given local & remote host & given attribute, the value used will be from an entry of type (1) if given there, otherwise from an entry of type (2), etc. 
{ '_' => { chars => { r => { var => 'CLASS', val => 'readme', man => 'classes', }, x => { var => 'CLASS', val => 'x', man => 'classes', }, s => { var => 'CLASS', val => 'script', man => 'classes', }, h => { var => 'CLASS', val => 'html', man => 'classes', }, o => { var => 'CLASS', val => 'odd', man => 'classes', }, i => { var => 'INBOXES', val => '%p', man => 'inboxes', }, words => { '+' => { var => 'ABB', val => '%w', }, ram => { pollux => { rcmd => 'rsh', }, '' => { rcmd => 'telnet', }, '' => { lonsdale => { address => '', rcmd => 'ssh', }, isp1 => { between => 'evunka', rcmd = 'rlogin', }, ermintrude => { user => 'keck', }, The xterms are put in background so xterms & any ssh's don't wait. The config file name is always the same, regardless of renaming or symlinking. xterms -l Outputs (lists) the config file via $PAGER or more. xterms -show ... Just outputs the commands that would be run. Still uses X11::Screen, so DISPLAY or -display needs to be correct. Brian Keck <> $Source: /home/keck/gen/RCS/xterms,v $ $Revision: 8.11 $ $Date: 2007/07/05 17:00:30 $ xchar 0.2 syntax highlighting:
MKDoc::XML::TreeBuilder - Builds a parsed tree from XML data

 my @top_nodes = MKDoc::XML::TreeBuilder->process_data ($some_xml);

MKDoc::XML::TreeBuilder uses MKDoc::XML::Tokenizer to turn XML data into a parsed tree. Basically it smells like an XML parser, looks like an XML parser, and awfully overlaps with XML parsers. But it's not an XML parser. XML parsers are required to die if the XML data is not well formed. MKDoc::XML::TreeBuilder doesn't give a rip: it'll parse whatever as long as it's good enough for it to parse. XML parsers expand entities. MKDoc::XML::TreeBuilder doesn't. At least not yet. XML parsers generally support namespaces. MKDoc::XML::TreeBuilder doesn't - and probably won't.

 my @top_nodes = MKDoc::XML::TreeBuilder->process_data ($some_xml);

Returns all the top nodes of the $some_xml parsed tree. Although the XML spec says that there can be only one top element in an XML file, you have to take two things into account:

1. Pseudo-elements such as XML declarations, processing instructions, and comments.

2. MKDoc::XML::TreeBuilder is not an XML parser, it's not its job to care about the XML specification, so having multiple top elements is just fine.

 my @top_nodes = MKDoc::XML::TreeBuilder->process_file ('/some/file.xml');

Same as MKDoc::XML::TreeBuilder->process_data ($some_xml), except that it reads $some_xml from '/some/file.xml'.

Returned parsed tree - data structure

I have tried to make MKDoc::XML::TreeBuilder look enormously like HTML::TreeBuilder. So most of this section is stolen and slightly adapted from the HTML::Element man page. It may occur to you to wonder what exactly a "tree" is, and how it's represented in memory.
Consider this HTML document: <html lang='en-US'> <meta name='author' content='Jojo' /> <h1>I like potatoes!</h1> Building a syntax tree out of it makes a tree-structure in memory that could be diagrammed as: html (lang='en-US') / \ / \ / \ head body /\ \ / \ \ / \ \ title meta h1 | (name='author', | "Stuff" content='Jojo') "I like potatoes" This is the traditional way to diagram a tree, with the "root" at the top, and it's this kind of diagram that people have in mind when they say, for example, that "the meta element is under the head element instead of under the body element". (The same is also said with "inside" instead of "under" -- the use of "inside" makes more sense when you're looking at the HTML source.) Another way to represent the above tree is with indenting: html (attributes: lang='en-US') meta (attributes: name='author' content='Jojo') "I like potatoes" Incidentally, diagramming with indenting works much better for very large trees, and is easier for a program to generate. The $tree->dump method uses indentation just that way. However you diagram the tree, it's stored the same in memory -- it's a network of objects, each of which has attributes like so: element #1: _tag: 'html' _parent: none _content: [element #2, element #5] lang: 'en-US' element #2: _tag: 'head' _parent: element #1 _content: [element #3, element #4] element #3: _tag: 'title' _parent: element #2 _content: [text segment "Stuff"] element #4 _tag: 'meta' _parent: element #2 _content: none name: author content: Jojo element #5 _tag: 'body' _parent: element #1 _content: [element #6] element #6 _tag: 'h1' _parent: element #5 _content: [text segment "I like potatoes"] The "treeness" of the tree-structure that these elements comprise is not an aspect of any particular object, but is emergent from the relatedness attributes (_parent and _content) of these element-objects and from how you use them to get from element to element. 
This is pretty much the kind of data structure MKDoc::XML::TreeBuilder returns. More information on different nodes and their type is available in MKDoc::XML::Token. Did I mention that MKDoc::XML::TreeBuilder is NOT an XML parser? Copyright 2003 - MKDoc Holdings Ltd. Author: Jean-Michel Hiver MKDoc::XML::Token MKDoc::XML::Tokenizer syntax highlighting:
We have a LAMP box with 2x mirrored 1 TB WD Black Caviar disks running the whole OS and MySQL. 8 GB RAM, 2x quad core CPUs. We're really taxed on disk I/O, and I've been thinking of suggesting getting a couple of SSD drives in there for /var/lib/mysql, and be done with it. I did a little research, and I like the price point of the Intel X25-M 160 GB, but I've read conflicting opinions about SSDs in production. We are at ~70 GB, mostly MyISAM tables (> 95%). We are doing mostly reads during production (8-5 p.m.), mostly writes overnight (12 a.m. - 8 a.m.). There have been some helpful posts on here before about SSDs in production, but I think the better ones are a bit dated (the best one was in 2008). Is there more up-to-date feedback on whether SSDs are really ready for medium sized businesses? If not, how can I scale our database server a little better?

Stephen, you need to dig deeper first.

• Would the entire 'hot' (frequently used) subset of the database fit in RAM if you just upgraded RAM to something larger, like 32 or 64GB?

• Have you checked that your database has the right indexes in place? Have you done a basic MySQL performance audit?

About consumer gear: Using consumer-grade gear like your WD disks in servers is a strongly debated topic. Personally, I think it's a wrong choice in general. But certainly, do not use consumer-grade SSDs like the Intel X25-M (M stands for mainstream). Enterprise-grade SSDs have radically different durability and write endurance goals from consumer SSDs (better wear leveling, more space overprovisioning).

Typical setup: A typical disk setup for a server like yours might be 4 enterprise SAS disks, in RAID10, using a proper RAID controller, with a controller RAM cache buffering all writes, and a battery backup unit for the cache. Such gear isn't exactly cheap, but it is a proven choice.
SSDs do have advantages, and can be substantially faster than a couple of conventional disks in RAID 10 (especially on heavy random reads, assuming the hot dataset doesn't fit in RAM). The Percona team blogs about SSDs and real-life performance with MySQL here.

Anyone have any more up-to-date feedback on whether SSDs are really ready for medium sized businesses?

They are, but IMHO only the enterprise-grade SSDs, and preferably a series of SSDs that have been in production for some time to fix bugs. Good choices right now are the Intel X25-E (Extreme) series, and in 4-6 months (when they're more mature) the Sandforce 25xx series drives with the enterprise feature set.

If not, how can I scale our db server a little better?

Perhaps you have already done this, but if not: my first suggestion would be to find a MySQL database administrator, and have him do a performance audit of your system. You could very well discover that adding more disk-I/O isn't a cost effective solution for your case.

Is adding an SSD or 2 not cheaper than hiring a MySQL database administrator to do a performance audit? –  Petah Oct 24 '12 at 22:25
5+ years of endurance is often quoted for desktop workloads, but for a multiuser server workload that sounds high unless that server is unusually light on writes and/or much spare capacity is given: anandtech.com/show/4159/… –  Jesper Mortensen Apr 2 '11 at 13:22 sure, take the numbers in your link. for the 25nm he comes to 108K days at 7GB/day. So lets 10x that and figure i'm writing 70GB/day (the db's only about 150GB and it took 5 years to grow to that). Then in turn that means I'm looking at 10,800 days. What the hell, lets fudge that out a whole nother order of magnitude just for saftey's sake and say 1080 days. Thats two weeks shy of 3 years. So I planned for 2. Plus, like that link says, they're supposed to just go read-only when they wear out, not actually lose data. –  cagenut Apr 3 '11 at 4:11 For anything serious you should go enterprise grade SSDs like FusionIO and the gear STEC sells . They have data on when to run maintenance (TRIM etc.) and provide support in case something goes wrong. I have even seen STEC SSDs beeing yanked by HBAs. My guess is that the real problem here is that most vendors expose SSDs not as flash but as a block device and the controller firmwares out there don't really know how to handle disks and their disk firmware interface. Only experience will tell. The usual thing about backups and RAID also applies to SSDs. But since you are using MyISAM as a storage backend reliability and consistency might not be your primary concern so just buy them and see what happens. share|improve this answer Consider upgrading to Percona MySQL which is specifically tuned for SSD's I/O capabilities. Just switching to SSD drives will give some improvements but MySQL doesn't take full advantage of the capabilities of SSDs http://www.percona.com/software/percona-server/for-ssd/ See their benchmarks stats at http://www.percona.com/docs/wiki/benchmark:ssd:start share|improve this answer Your Answer
The Shakespeare Conference: SHK 14.0580 Monday, 24 March 2003
From: Roger Parisious
Date: Friday, 21 Mar 2003 12:42:46 -0800 (PST)
Subject: 14.0569 Re: King John Date
Comment: Re: SHK 14.0569 Re: King John Date

>Roger Parisious wrote:
>>And, by the way, the possibility was raised with care, but not argued,
>>in my last communication that "King John" dates pretty much as we have
>>it from around 1587. If so, Shakespeare was technically well in control
>>of himself six years before he published "Venus and Adonis."

>John Briggs writes:
>>I should point out that such an early date for "King John" is not
>>accepted these days. The general consensus seems to be c.1595, even if
>>the date has been plucked out of thin air! In any case, the copy for
>>the Folio text seems to be a transcript and may be a late theatrical

>While I would say that 1587 is too early for King John because there are
>so many other plays that Shakespeare has to have written before he
>tackled it, there is certainly no consensus that c.1595 is the answer.
>That's the date suggested in Braunmuller's Oxford edition. I've argued
>(in 'The case for the earlier canon' in Shakespearean Continuities:
>Essays in Honour of E.A.J. Honigmann, John Batchelor, Tom Cain, Claire
>Lamont (eds.), London and New York: Macmillan, 1997) that the play
>probably dates from c.1590, as does Lester Beaurline in his Cambridge
>edition (1990).
>It is inconceivable that the author of Troublesome Reign (published
>1591) could have invented the character of the Bastard unaided by
>previous example. That author doesn't understand the reasons for
>creating such an a-historical character. He and his printer do know,
>however, that the character is essential for marketing purposes.
>Audiences are not going to be flocking to see TR's Bastard. If the
>character is marketable he is so only in Shakespeare's version.
>Shakespeare's version therefore has to be on the stage before TR is
>published.

I appreciate Mr. King's argument and look forward to reading his essay. However, his further comments bring us back to J. M. Robertson's position, to which I alluded on the Titus thread.

Robertson and Mrs Eva Turner Clark (following Robertson) are almost unique in the 20th century in pointing out that the successive run of MND, King John, 1 Henry IV, and the first two acts of Henry IV Part II contains the lowest percentage of double endings in the canon. The four plays start at something over five per cent for A Dream and are around nine per cent for the opening acts of Henry IV, Part II. At this point they suddenly zoom up to most of twenty per cent and never come down again for the rest of the poet's career.

From Robertson's point of view, the answer was simple. He took the dedication to Venus and Adonis literally as "the first heir of my invention", put the Dream in 1594, and proceeded with one virtually unique work a year until Shakespeare belatedly adopts the double ending while revising Henry IV, Part II, about the time of the Cobham scandal. This meant that the use of the double ending in English dramatic verse was virtually created by Marlowe and Greene in their later works, and that most of the earliest Shakespeare work is low double-ending work inserted at the request of his management over unfinished or superannuated material by the last-named gentlemen, Kyd and Peele.

Clark (1931) was the first critic to realize that one could avoid Robertson's inevitable and disconcerting conclusion by the simple expedient of moving the low double-endings sequence en bloc back to the second half of the 1580s. Unfortunately she could not synthesize this eminently sensible perception of the metre problem with the rest of her chronology, which was in no way based on metrical considerations.

As Nashe refers to an Oldcastle play which sounds suspiciously like the one we know in 1593, it could then be argued that the rise of the double-ending rate dates from a revision four to six years after the original composition (1588-1589) but nearly simultaneous with Lucrece.

DISCLAIMER: Although SHAKSPER is a moderated discussion list, the opinions expressed on it are the sole property of the poster, and the editor assumes no responsibility for them.
'Smart' labware made with bathroom sealant and a 3D printer

ananyo writes: "Researchers have used a 3D printer and quick-setting bathroom sealant to make a variety of customized vessels for chemical reactions. The method (abstract) allows labware to become, for the first time, an integral part of the experiment itself. The scientists printed one vessel with catalyst-laced 'ink', enabling the container walls to drive chemical reactions. Another container included built-in electrodes, made from skinny strips of polymer printed with a conductive carbon-based additive. The strips carried currents that stimulated an electrochemical reaction within the vessel. The system could allow scientists to test chemical processes in ways that were not economical before — producing just a few tablets of a particular drug, for example, for further tests or clinical trials."
Comment: Re:And your point is? (Score 4, Informative)
I wrote back and they replied insisting that the $50k was a firm number. I had forgotten too that they had approached me about buying advertising from them several weeks ago, and I rejected them because although Boston is the major DMA, my campaign can't afford to pay to broadcast to 5 million people who are not in my district.

Comment: Re:Idealism is Impractical (Score 1)

Submission: Libertarian Candidate Excluded From Debate For Refusing Corporate Donations

Submission: Ebooks for libraries to self destruct?
fishdan writes: "The New York Times is reporting that HarperCollins Publishers announced last week that they would begin making the eBooks that they give to libraries expire after 26 readings (assuming a 2-week checkout period, that means one year of being loaned). Simon & Schuster and Macmillan (among other publishers) do not sell eBooks to libraries at all, because checking out ebooks from an online library is in many cases easier than buying a book online. "We are working diligently to try to find terms that satisfy the needs of the libraries and protect the value of our intellectual property," John Sargent, the chief executive of Macmillan, said in an e-mail. "When we determine those terms, we will sell e-books to libraries. At present we do not.""

Submission: Comcast's box rental rules violate antitrust laws? (Submitted by DaGoatSpanka)

Comment: Re:REALLY misleading title (Score 3, Interesting)
I'm with you on this -- the monopoly is completely anti-consumer. The problem is that with significantly lower operating costs, the city will be able to drive the telcos out, and then THEY will be the monopoly. I hate private monopolies, but I hate the state as monopoly equally. Simple solution here: tell the city they cannot collect fees/taxes on the ISPs and we're all good. I definitely want the city to come in and bust up the telco monopoly -- I just don't want one monopoly to be replaced by another. I agree the way the telcos are going about this is wrong, though. I'd rather see legislation like: "Where municipalities set up their own ISP, they cannot assess city taxes or fees on competing ISPs." It's all about operating costs -- make those as equal as you can, and THEN let everyone compete.

Comment: Re:REALLY misleading title (Score 1)
>provision of communications service
They used that language because it's internet today, and VOIP tomorrow.
>Is a telco or cable company required to keep separate accounts for their internet service?
Geez. http://www.tradingmarkets.com/.site/news/Stock%20News/2296405/
>...the bill would say that *ALL* internet providers would be subject to these rules
That is an excellent suggestion, and I agree that would be the perfect wording. On the other hand, I doubt that the current telcos are collecting much in the way of trash and water fees.

Comment: REALLY misleading title (Score 4, Insightful)
Read the senate bill: http://www.ncga.state.nc.us/Sessions/2009/Bills/Senate/PDF/S1004v1.pdf
I hate the telcos as much as anyone, but this bill says that when the city enters the communications business, it should have to pay all the same taxes and fees as a private business would, and be burdened with the same oversight. It also says that other fees the citizens pay (trash, water etc.) cannot be used to fund the communications business. I don't see how this bill is unfair at all. The telcos are essentially saying "If we didn't have to pay any fees to the city to provide service, we could be competitive." If government wants to set up a business, it should have to go compete with other businesses on a level playing field. If municipalities want to open up their own ISP, I am all for that, but then they should stop collecting fees from and taxing the other ISPs they are competing with. Municipal government should not be using taxes and fees to provide a commercial advantage for itself. I think "level playing field" is actually a good title for this bill, and not an unreasonable request. We're all hopped up on this because it's something that's near and dear to us, but imagine if the city set up a taxi service but then did not have to pay gasoline tax or hackney licenses. Obviously it benefits the public who use taxis, but is it fair to the taxi drivers and cab companies that they now have to charge more than the city taxis?

Submission: Frito Lay sues Derby Dames in Trademark dispute
fishdan writes: "You think it's tough when your hobby involves you getting blindsided by a leggy blonde while on the track? Imagine if the same hobby got you blindsided by a multinational! Frito-Lay, a company previously thought of favorably by 5 out of 5 code monkeys, recently filed a suit of opposition against Coleen Bell, a Madison, WI native and former Mad Rollin' Dolls roller derby player. Frito-Lay claims that Bell's roller derby name, Crackerjack, is too similar to the name of their famous caramel popcorn and nut snack. Which seems ironic, when in their own ads they say "What do you call a kid who can skate like that? You call that kid a Cracker Jack." Bell has posted a video that succinctly makes her point."

Comment: Re:Alamo Drafthouse is awesome (Score 1)
>but that industry is in danger due to poaching from states like Louisiana and New Mexico.
>If you live in Texas, write your state representative and senator and get them to support
>Representative Dawnna Duke's economic incentive bill.
If you live in Louisiana or New Mexico, find some random rep/sen in Texas and tell them how they better not waste any more money on films that make $$$. :)
I just had Apple's C/C++ compiler initialize a float to a non-zero value (approx "-0.1"). That was a big surprise - and only happened occasionally (but 100% repeatably, if you ran through the same function calls / args beforehand). It took a long time to track down (using assertions).

I'd thought floats were zero-initialized. Googling suggests that I was thinking of C++ (which of course is much more precise about this stuff - cf. SO: What are primitive types default-initialized to in C++?). But maybe Apple's excuse here is that their compiler was running in C mode... so: what about C? What should happen, and (more importantly) what's typical?

(OF COURSE I should have initialized it manually - I normally do - but in this one case I failed. I didn't expect it to blow up, though!)

(Google is proving worse than useless for any discussion of this - their current search refuses to show "C" without "C++". Keeps deciding I'm too stupid, and ignoring my input even when running in advanced mode.)

Here's the actual source example where it happened. At first I thought there might be a problem with the definitions of MAX and ABS (maybe MAX(ABS,ABS) doesn't always do what you'd expect?)... but digging with assertions and the debugger, I eventually found it was the missing initialization - that float was getting init'd to a non-zero value VERY occasionally:

    float crossedVectorX = ... // generates a float
    float crossedVectorY = ... // generates a float

    float infitesimal; // no manual init
    float smallPositiveFloat = 2.0 / MAX( ABS(crossedVectorX), ABS(crossedVectorY));
    // NB: confirmed with debugger + assertions that smallPositiveFloat was always positive

    infitesimal += smallPositiveFloat;
    NSAssert( infitesimal >= 0.0, @"This is sometimes NOT TRUE" );

- Depends on where that "initialization" happens - might not be initialized at all. – Mat Apr 10 '12 at 13:19
- Added copy/paste of the source where I saw the problem. Seemed to me that it ought to have been init'd? – Adam Apr 10 '12 at 13:27
- +1 For asking a question that everyone is better off knowing the answer to. – borrrden Apr 10 '12 at 14:21

Accepted answer:

Only objects with static storage duration are initialized to 0 if there is no explicit initializer.

    #include <stdio.h>

    float f;        // initialized to 0, file scope variables have static storage
    static float g; // initialized to 0

    int main(void)
    {
        float h;        // not initialized to 0, automatic storage duration
        static float i; // initialized to 0
        return 0;
    }

Objects with automatic storage duration (like h in the example above) that are not explicitly initialized have an indeterminate value. Reading their value is undefined behavior.

EDIT: for the sake of completeness, since C11 objects with thread storage duration are also initialized to 0 if there is no explicit initializer.

- Thanks. Lots of good answers here, but IMHO laying it out like this makes it easiest to understand at a glance. – Adam Apr 10 '12 at 13:35
- Agreed, I like the format of this answer. – R.. Apr 10 '12 at 13:56

The relevant part of the standard is §6.7.9 paragraph 10:

>If an object that has automatic storage duration is not initialized explicitly, its value is indeterminate.

If your variable had thread or static storage duration instead, then the next part of the paragraph would take effect:

>If an object that has static or thread storage duration is not initialized explicitly, then:
>-- if it has pointer type, it is initialized to a null pointer;
>-- if it has arithmetic type, it is initialized to (positive or unsigned) zero;

I would also note that you should turn on your compiler's warnings (specifically the warning for uninitialized variables), as that should have identified the problem for you immediately.

Static variables would be initialized to zero, but I'm guessing you are talking about a local variable (i.e. stack, or automatic) - these are not initialized for you, but get whatever value is in that memory on the stack.

I had to pull out my K&R for this answer:

>In the absence of explicit initialization, external and static variables are guaranteed to be initialized to zero; automatic and register variables have undefined (i.e., garbage) initial values.

I don't believe that any of the standards for C define initial values for variables in general. This would be in accord with the general philosophy of, and application domain for, C -- programming for grown-ups who may, one day, have reason to want their compiler to not initialise a variable for them, and who know that it is their responsibility to initialise their own variables.

- As mentioned elsewhere, file and global variables are initialized to zero by default. – Kevin Apr 10 '12 at 13:37
I have a simple "Invoices" class with a "Number" attribute that has to be assigned by the application when the user saves an invoice. There are some constraints:

1) the application is a (thin) client-server one, so whatever assigns the number must look out for collisions
2) Invoices has a "version" attribute too, so I can't use a simple DBMS-level autoincrementing field

I'm trying to build this using a custom Type that would kick in every time an invoice gets saved. Whenever process_bind_param is called with a None value, it will call a singleton of some sort to determine the number and avoid collisions. Is this a decent solution?

Anyway, I'm having a problem.. Here's my custom Type:

    class AutoIncrement(types.TypeDecorator):
        impl = types.Unicode

        def copy(self):
            return AutoIncrement()

        def process_bind_param(self, value, dialect):
            if not value:
                # Must find next autoincrement value
                value = "1"  # Test value :)
            return value

My problem right now is that when I save an Invoice and AutoIncrement sets "1" as the value for its number, the Invoice instance doesn't get updated with the new number.. Is this expected? Am I missing something? Many thanks for your time!

(SQLA 0.5.3 on Python 2.6, using PostgreSQL 8.3)

Edit: Michael Bayer told me that this behaviour is expected, since TypeDecorators don't deal with default values.

Accepted answer:

Is there any particular reason you don't just use a default= parameter in your column definition? (This can be an arbitrary Python callable.)

    def generate_invoice_number():
        # special logic to generate a unique invoice number
        ...

    class Invoice(DeclarativeBase):
        __tablename__ = 'invoice'
        number = Column(Integer, unique=True, default=generate_invoice_number)

- Ouch, didn't know you could use a callable there, thanks! I'll try it right away :) – Joril Jun 24 '09
- Say your default callable returns the max of the column in the DB plus one. Is there any way to assert there are no race conditions without relying on an error from the column being unique? – Mike Boers Jun 25 '09
- In that case, you're better off writing your default as an inline SQL expression. This is covered in detail in the SQLAlchemy documentation at sqlalchemy.org/docs/05/… – Rick Copeland Jun 26 '09
- You are going to get race conditions even with SQL expressions; it only makes the race window smaller. To avoid races you need locking, either explicit table-level or row-level via SELECT FOR UPDATE. – Ants Aasma Jun 27 '09
- Are you certain you'll get race conditions if your insert is a single statement (as it will be if you use an SQL expression as a default)? I was under the impression that single statements typically execute atomically. – Rick Copeland Jun 27 '09
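The race the commenters describe (two clients computing max + 1 at the same time) can be modeled outside the database. A stdlib-only sketch, with hypothetical names and no SQLAlchemy involved, showing that serializing the read-increment-write step behind a lock (the in-process analogue of SELECT ... FOR UPDATE or a database sequence) keeps the generated numbers unique:

```python
import threading

# Toy model of the race discussed above: each worker computes
# next = max(existing) + 1 and appends it. The read and the write are
# serialized behind a single lock, so no two workers can observe the
# same maximum; every allocated "invoice number" comes out unique.
# All names here are hypothetical.

numbers = []
lock = threading.Lock()

def allocate(count):
    for _ in range(count):
        with lock:  # without this, two threads could read the same max
            nxt = (max(numbers) if numbers else 0) + 1
            numbers.append(nxt)

def run(workers=8, each=200):
    numbers.clear()
    threads = [threading.Thread(target=allocate, args=(each,)) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(numbers), len(set(numbers))

total, distinct = run()
print(total, distinct)  # 1600 1600: every number unique
```

The lock plays the same role a database sequence or row lock would; without it, the read and the write can interleave and two clients can claim the same number.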
I just found PhoneGap and I want to use it for mobile applications. Is there an IDE for developing in PhoneGap? Thank you in advance.

- Just about any Ruby IDE will do the job. Are you looking for an IDE that's geared specifically toward PhoneGap development? That might be a better question to ask. – Brian Driscoll Jun 15 '12
- Yes, but I don't have a lot of ideas for PhoneGap. Thank you Brian – Imad Jun 15 '12

Answers:

You may use Eclipse! Or look at appMobi XDK (google it).
- So, I can use Eclipse for Android applications and Xcode for iPhone, iPad and iPod applications? Or I can use appMobi XDK? And the languages used are HTML / JavaScript / CSS? – Imad
- What I was saying was, you can use Eclipse to develop a PhoneGap application to run on Android. At the end you will have to "compile" the application into its native form, so yes, you must have Xcode for iPhone. I know there are some online services where you upload the PhoneGap app and it will "compile" it into the native form; never used them... Good luck – Rui Lima
- Ok, thank you Lima – Imad

You can use Eclipse with the PhoneGap plugin. Install the plugin into Eclipse by this link, and also read this link.
- Ok, thank you for your help – Imad

You can use Adobe Dreamweaver CS6 for building these mobile apps using PhoneGap. Or if you want to use different IDEs, then: for Android, Eclipse; for iOS, Xcode. Hope it helps.
- Ok, thank you Aksh – Imad

Have a look at NSB/App Studio. The IDE has a feel similar to Visual Studio, and it lets you program in JavaScript and/or BASIC. It compiles directly to PhoneGap.

NetBeans 7.4 supports PhoneGap development: https://netbeans.org/community/releases/74/

You can use Dreamweaver CS5.5; it allows you to build a native mobile application using the popular PhoneGap framework, and deliver rich native applications to iOS and Android without needing to learn new languages or tools.
- Ok, thank you, good idea – Imad

Dreamweaver or WebMatrix is a good IDE, or Cloud9 IDE for a cloud editor :D

Actually, NetBeans 7.4 supports Apache Cordova, not PhoneGap. This way you can develop a cross-platform application without the need to install any additional plugin. You can read about the difference between PhoneGap and Apache Cordova at phonegap.com. I don't think there is another IDE allowing it without adding any plugin.

The Monaca online IDE has saved me a lot of time and money. It works great and they integrate their beautiful UI (Onsen UI) with the projects. Moreover, they give you free server space to fetch and edit data online from your application! If you're new I would highly recommend it as a place to start.
I have the following pieces of code:

    public static PublicKey pubKey;
    public static PrivateKey privKey;
    public static Cipher cip;

    //Generate the keys
    KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
    KeyPair kp = kpg.genKeyPair();
    Key publicKey = kp.getPublic();
    Key privateKey = kp.getPrivate();
    KeyFactory fact = KeyFactory.getInstance("RSA");
    cip = Cipher.getInstance("RSA/ECB/NoPadding");

    // Store Public Key.
    X509EncodedKeySpec x509EncodedKeySpec = new X509EncodedKeySpec(
            publicKey.getEncoded());
    FileOutputStream fos = new FileOutputStream("public.key");
    fos.write(x509EncodedKeySpec.getEncoded());
    fos.close();

    // Store Private Key.
    PKCS8EncodedKeySpec pkcs8EncodedKeySpec = new PKCS8EncodedKeySpec(
            privateKey.getEncoded());
    fos = new FileOutputStream("private.key");
    fos.write(pkcs8EncodedKeySpec.getEncoded());
    fos.close();

    //Get the public and private keys out of their files
    //Check if the keys gotten out of the files are the same as the generated keys (this returns truetrue)

    byte[] text = "This is my super secret secret".getBytes();
    encryptToFile("encrypted.txt", text);
    decryptToFile("encrypted.txt", "decrypted.txt");

Getting the keys from the files:

    private static void getPubAndPrivateKey() throws IOException, Exception {
        // Read Public Key.
        File filePublicKey = new File("public.key");
        FileInputStream fis = new FileInputStream("public.key");
        byte[] encodedPublicKey = new byte[(int) filePublicKey.length()];
        fis.read(encodedPublicKey);
        fis.close();

        // Read Private Key.
        File filePrivateKey = new File("private.key");
        fis = new FileInputStream("private.key");
        byte[] encodedPrivateKey = new byte[(int) filePrivateKey.length()];
        fis.read(encodedPrivateKey);
        fis.close();

        KeyFactory keyFactory = KeyFactory.getInstance("RSA");
        X509EncodedKeySpec publicKeySpec = new X509EncodedKeySpec(
                encodedPublicKey);
        pubKey = keyFactory.generatePublic(publicKeySpec);

        PKCS8EncodedKeySpec privateKeySpec = new PKCS8EncodedKeySpec(
                encodedPrivateKey);
        privKey = keyFactory.generatePrivate(privateKeySpec);
    }

Encryption:

    public static void encryptToFile(String fileName, byte[] data) throws IOException {
        try {
            cip.init(Cipher.ENCRYPT_MODE, privKey);
            byte[] cipherData = cip.doFinal(data);
            String encryptedData = cipherData.toString();
            BufferedWriter out = new BufferedWriter(new FileWriter(fileName));
            out.write(encryptedData);
            out.close();
        } catch (Exception e) {
        }
    }

Decryption:

    private static void decryptToFile(String string, String string2) throws Exception {
        try {
            File encryptedFile = new File("encrypted.txt");
            byte[] encrypted = getContents(encryptedFile).getBytes();
            cip = Cipher.getInstance("RSA/ECB/PKCS1Padding");
            cip.init(Cipher.DECRYPT_MODE, pubKey);
            byte[] cipherData = cip.doFinal(encrypted);
            String decryptedData = cipherData.toString();
            BufferedWriter out = new BufferedWriter(new FileWriter("decrypted.txt"));
            out.write(decryptedData);
            out.close();
        } catch (Exception e) {
            throw e;
        }
    }

Things I already checked:
• The data used in the decryption is the same as in the encrypted file
• The generated keys are the same as the ones gotten from the file
• The encryption and decryption both don't give errors

Original string: My super secret secret
The encryption results in: [B@1747b17
The decryption results in: [B@91a4fb

- Take a look at this - maybe some of the tips can help you out: stackoverflow.com/questions/2714327/… – Yair Zaslavsky Jul 4 '12
- @zaske I changed it to "RSA/ECB/NoPadding" and the en/decryption work without errors. But two questions: why does it work with "RSA/ECB/NoPadding" and not with "RSA/ECB/PKCS1Padding" (I'm padding both the same, right?) – Rick Hoving Jul 4 '12
- Beginner Java errors are present here. In particular, the (byte[]).toString() method does not do what you think it does. – GregS Jul 4 '12
- @GregS That said, I would not mind if byte[].toString() were defined to return a hexadecimal string. That object reference is bloody useless. – Maarten Bodewes Jul 6 '12

Accepted answer:

If you print out a byte array via the toString() method you get a value that is totally independent of the content. Therefore the values [B@1747b17 and [B@91a4fb are just garbage that does not tell you anything. If you want to print the content of a byte array, convert it to Base64 or a hex string.

    System.out.println(new sun.misc.BASE64Encoder().encode(myByteArray));

A hex string can be generated by using org.apache.commons.codec.binary.Hex from the Apache Commons Codec library.

- You beat me to that answer; it's wrong to assume that toString() is going to give a meaningful result in every case. What this result says to me is that the original string is the one at location 1747b17 and the decrypted one is located at 91a4fb; in other words, the original and decrypted strings are not the same string (they are at different locations). This may not be the correct way to interpret those strings, but it works for me. "Same" does not mean "equal": if I create two strings "a" and "a", they are not the same string, because they are different instances, but they are equal. – Mark S. Jul 4 '12
- Thanks a lot. It has been 4 years since I programmed Java. Beginner's mistake, my bad. – Rick Hoving Jul 6 '12

I agree with the above answer. I would like to add that in your case you can simply use a FileOutputStream and write the bytes to a file. For example:

    public static void encryptToFile(String fileName, byte[] data) throws IOException {
        FileOutputStream out = null;
        try {
            cip.init(Cipher.ENCRYPT_MODE, privKey);
            byte[] cipherData = cip.doFinal(data);
            out = new FileOutputStream(fileName);
            out.write(cipherData);
        } catch (Exception e) {
            // handle the error
        } finally {
            if (out != null) {
                try {
                    out.close();
                } catch (IOException ex) {
                    // ignore
                }
            }
        }
    }
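The accepted answer's point is language-independent: a raw byte buffer has to be encoded (Base64 or hex) before it can be treated as text, and printing the object itself shows an identity, not the content. The same idea sketched in Python rather than Java; the byte values are arbitrary stand-ins for cipher output:

```python
import base64
import binascii

cipher_bytes = bytes([0x8F, 0x00, 0x41, 0xFE, 0x10])  # stand-in for RSA output

# Encode to Base64 or hex before writing to a text file; both round-trip
# losslessly, unlike Java's byte[].toString(), which only prints an
# object reference such as [B@1747b17.
b64 = base64.b64encode(cipher_bytes).decode("ascii")
hx = binascii.hexlify(cipher_bytes).decode("ascii")
print(b64)  # jwBB/hA=
print(hx)   # 8f0041fe10

assert base64.b64decode(b64) == cipher_bytes
assert binascii.unhexlify(hx) == cipher_bytes
```

Either encoding is safe to write through a text writer and decode back to the exact ciphertext bytes before decrypting.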
I have a WSDL from which I need to generate a ServiceContract (OperationContract, DataContract)... I have found a way to do it for ASMX Web Services, but can't seem to find out how to do it in WCF. I have tried running

    svcutil AuthPartnerWSDL.wsdl /i /messagecontract /tcv:version35

but the resulting interface doesn't deserialize the call coming in, so all the request parameters to the service implementation are null.

- Small correction of terminology: what you're referring to as "webservices" are "ASMX Web Services", sometimes known as "ASP.NET Web Services". WCF services are web services if they use SOAP or REST. – John Saunders Jul 24 '09
- Thanks, fixed the question – kay.one Jul 25 '09

Accepted answer:

Contract first tool for WCF

- I don't understand the downvote, would you care to explain? – kay.one Jul 25 '09
- Maybe the downvoter didn't realize this is your question. – John Saunders Jul 25 '09
- Even then, why would it be downvoted? Isn't it a valid answer? – kay.one Jul 25 '09
- It's valid, but if the answer was from someone else, it would be a fairly bad answer. It's not considered good to just throw in a link. BTW, you might edit this answer to say a little more about this tool: why is it useful? How does it answer your question? – John Saunders Jul 25 '09

Make sure you have the most updated WSDL that matches the current service definition.

- Both the client proxy and the ServiceContract are generated from the same WSDL file. – kay.one Jul 23 '09
I have entered a sentence of type string:

    std::string message;
    std::getline(std::cin, message);

After entering a sentence I used an if statement to convert the string to "Morse code":

    int length = message.length();
    for (int i = 0; i < length; i++) // to loop over the message
    {
        if (message[i] == 'A')
            cout << "-."; // and the rest for 'b','c','d'....'z'
    }

How do I take the Morse code of the string entered and decode it? E.g.: if there is ".-" in the Morse code then display 'A', and if there is "-..." in the message display 'B'.

- Regular expressions would be the smartest way... – philippe
- You would have to iterate through the string manually; searching for a substring could match something that partly belongs to a different letter. – Gir
- Are there spaces between the "characters" in the Morse string? I assumed that there aren't any. – Gir
- There are spaces in the Morse string. – Jonathan Geers

Accepted answer:

Use a binary tree. The root is empty (NULL); each child will hold one of the chars '-' or '.'. This way you decode the whole Morse code into the tree. Then, instead of NULL, put at each end the char that you should get. The tree should look like this:

           root
          /    \
        '-'    '.'
        ...    ...

This way you can find chars in O(lg n), where n = the size of the tree.

- Can be done with a LUT as well, but you will have to convert the string to a bitstring with '-'='1' and '.'='0'. – Gir
- +1 for those 4 faces in your diagram.. – iKlsR
Take the 2-minute tour × I'm creating some Dojo 1.8 GlossySemiCircularGauge at runtime via javascript and I'm trying to set the background color of the gauge. I'm looking to set the color outside the gauge, not the gauge it self. I'm creating the gauge with syntax like this: glossyCircular = new dojox.gauges.GlossySemiCircularGauge({ textIndicatorColor: '#FFFFFF', background: "[0, 255, 0, 0]", id: NewID, Max: 20, value: newValue, noChange: "true", width: wid, textIndicatorPrecision: "2", color: '#101030', height: hei }, dojo.byId(NewID)); Since the gauge is drawn with SVG, it doesn't work to set the background color of the container div. Is there a way around this? share|improve this question 1 Answer 1 up vote 1 down vote accepted The correct format for background is: background: { color: "rgba(0,0,0,0)"} Set the alpha channel to zero, so it will get transparent and you can adjust the background color via parent <div>. See and play with a working example at jsFiddle: http://jsfiddle.net/phusick/E9YNM/ EDIT: I added dojo/domReady! to the example, so now it works not only in my browser. EDIT2: background: [0,0,0,0] works as well, so just get rid of those quotation marks to have array instead of a string. share|improve this answer Thanks! That was exactly what I was looking for. –  Chris Miller Aug 30 '12 at 14:17 Your Answer
Question

I am trying to cluster a protein–DNA interaction dataset and draw a heatmap using heatmap.2 from the R package gplots. The complete process I follow is: generate a distance matrix using a correlation (Pearson, in my case), then:

    args <- commandArgs(TRUE)
    matrix_a <- read.table(args[1], sep='\t', header=T, row.names=1)
    mtscaled <- as.matrix(scale(matrix_a))
    pdf("result.pdf", pointsize = 15, width = 18, height = 18)
    result <- heatmap.2(mtscaled, Colv=T, Rowv=T, scale='none', symm=T,
                        col = brewer.pal(9, "Reds"))

I am able to accomplish this with the normal heatmap function:

    result <- heatmap(mtscaled, Colv=T, Rowv=T, scale='none', symm=T)

However, when I use the same settings for heatmap.2, the clusters don't line up as well on the diagonal. I have attached two images: the first uses heatmap and the second uses heatmap.2. I used the "Reds" palette from the RColorBrewer package to better show what I am talking about. I would normally just use the default heatmap function, but I need the color variation that heatmap.2 provides.

Here is a link to the dataset used to generate the heatmaps, after it has been turned into a distance matrix: DataSet

[Figure: heatmap drawn with heatmap]
[Figure: heatmap drawn with heatmap.2]

(Migrated from stats.stackexchange.com, Sep 30 '12.)

Answer (accepted)

It's as if two of the arguments are conflicting: Colv=T says to order the columns by cluster, and symm=T says to order the columns the same as the rows. Both constraints could be satisfied, since the data is symmetrical, but instead Colv=T wins and you get two independent cluster orderings that happen to be different.

If you give up on having a redundant copy of the dendrogram, the following gives the heatmap you want, at least:

    result <- heatmap.2(mtscaled, Rowv=T, scale='none', dendrogram="row",
                        symm=T, col = brewer.pal(9, "Reds"))

[Figure: symmetrical heatmap]

Comments
- Alos (asker): Thank you — I was actually able to do it with result <- heatmap.2(mtscaled, dendrogram="col", scale='none', symm=T, col=bluered(16), breaks=my.breaks). I accept your answer, though, and +1 because it was different from what I had.
- xan: Thanks. It's OK to answer your own question, by the way.
Question

I have created my own custom account type; adding an account and using it in the app works fine. Now I want to be able to edit the account information instead of removing and re-adding an account. My account has a number of fields, some of them fairly lengthy, so removing and adding is cumbersome.

I have created the basics for this by adding my own PreferenceScreen and connecting it via the android:accountPreferences attribute in my account-authenticator XML file, as per the example in the AbstractAccountAuthenticator documentation. In my PreferenceScreen I define an intent to open the activity that is used to enter the user data for the account:

    <PreferenceScreen
        android:title="Edit Account Details"
        android:summary="Change System ID, user name, password etc.">
        <intent
            android:targetClass="my.app.accountmanager.UserCredentialsActivity" />
    </PreferenceScreen>

My issue is: how do I either pass the account information along as extras in the intent, or find the information for the account I selected in Settings → Accounts & Sync? It is possible to have multiple accounts of this custom type, so I can't just search for any account of that type — I need the data from the selected account.

My thoughts have roughly been in these areas:

1. Include something in the XML to add extras. I don't see how this is possible.
2. Make the target of the intent my AccountAuthenticator class or the authentication service — but how do I pass in that I want to edit the data? Since AbstractAccountAuthenticator has a method updateCredentials that returns a bundle with the intent to my data-entry activity, that could perhaps work if I could pass in an EDIT action or something like that.
3. Override some method somewhere to create my own intent with the account data.

I hope this is possible, as both a Samsung app and the Dropbox app do this from Accounts & Sync — although neither allows multiple accounts...

Answer

I think the accountPreferences attribute in AbstractAccountAuthenticator is going to be obsolete soon. If you look at the accounts screen in Jelly Bean and add multiple accounts, they are displayed like this:

    1. [email protected]
    2. [email protected]
    3. Preference

instead of:

    1. [email protected] -> Preference
    2. [email protected] -> Preference

And if you take a look at the Gmail app, its preferences (e.g. notification ringtone) are configured within the Gmail app itself and can't be configured from the Accounts & Settings page. So you should only use the accountPreferences attribute for preferences that are common to all accounts.
Question

I have a view model with a custom object. On the initial GET, I populate Foo and use a couple of its properties. On the POST, I find that Foo on the view model is null. I could add to my view:

    @Html.HiddenFor(x => x.Foo.Id)

which would ensure that a Foo is populated with at least an Id, but then I would need to add similar code for every property. Is there a way to send back the complete object?

    public class RequestModel
    {
        public Foo Foo { get; set; }

        [Display(Name = "Comment")]
        public string Comment { get; set; }
    }

    public ActionResult Index(int? id)
    {
        // Populate Foo here using EF and add it to the model
        var model = new RequestModel { Foo = foo };
        return View(model);
    }

    [HttpPost]
    public ActionResult Index(int? id, RequestModel model)
    {
        return View(model);
    }

Comments
- heads5150: Could you add the code from your view and controller?
- Kye (asker): I've updated the question. I guess my question is really whether a view model should expose EF objects, or whether the controller should just map the simple values onto the view.
- Jan: Why do you want to send all Foo properties back to the action?
- Kye (asker): Because I want to use the object in the controller's POST action, and I don't want to guess whether a property is null by default or because I missed a mapping.

Answer (accepted)

Add a view model with only the properties you want to your solution. Put your validations etc. on it, use it to move your data between your page and controller, and then map its properties onto your EF object.
Question

I need to add a custom color when creating each event in FullCalendar — just like in Google Calendar, where the user can customize the text and background colors of events. Do you know any color-picker plugins that would fit well with FullCalendar?

Answer (accepted)

You can set a CSS class name for a new event:

    var myEvents = [
        {
            title: 'XMas',
            start: theDate,
            className: 'holiday'
        }
    ];

Then, to update the style of the given event type, do something like the following:

    #calendar .holiday,
    #calendar .holiday div,
    #calendar .holiday span {
        background-color: #6d4d47;
        color: #ffffff;
        border-color: #6d4d47;
        font-size: 12px;
    }

Answer

It's in the FullCalendar documentation (which seems to be down at the moment). You can pass in colors for each event as elements of the array:

    'backgroundColor' => "#36c",
    'borderColor'     => "#36c",
    'textColor'       => "#36c",

No plugin needed!
Question

Is it possible to set a message header to a value read from a properties file, using the Camel Properties component? I can set such properties in URI options, but I'm unable to set them as header values. I need something like this:

    <camel:setHeader headerName="actionId">
    </camel:setHeader>

where onus.transPosting.RtSFailed is a property key from a file imported with the Camel Properties component.

Note: I'm using Apache Camel 2.10.1. Using <propertyPlaceholder> as suggested by this discussion did not work; it causes an exception:

    Caused by: org.apache.camel.language.simple.types.SimpleParserException:
    Unknown function: onus.transPosting.RtSFailed

Comments
- Joop Eggen: <simple>${onus.transPosting.RtsFailed}</simple> does not work? (As good as no experience with Apache Camel.)
- Ahmad Y. Saleh (asker): No, it doesn't :(
- Konstantin V. Salikhov: See this discussion: camel.465427.n5.nabble.com/…
- Ahmad Y. Saleh (asker): Thanks, Konstantin — please check my update on the post.

Answer (accepted)

Yes you can — use the simple language, which has a properties function: http://camel.apache.org/simple

    <camel:setHeader headerName="actionId">
        <simple>${properties:onus.transPosting.RtSFailed}</simple>
    </camel:setHeader>

Though I think we have fixed it in the latest Camel releases so that <camel:constant> will resolve property placeholders as well.

Comments
- Ahmad Y. Saleh (asker): My bad — I should have mentioned which release I'm using; I updated the question accordingly. Anyway, the simple properties function worked for me. Thank you very much :)
Question

I want to install Java using the yum command on Red Hat Enterprise Linux 5, but I get an error:

    $ yum install java-1.6.0-openjdk
    Loading "rhnplugin" plugin
    Loading "security" plugin
    Loading "installonlyn" plugin
    This system is not registered with RHN.
    RHN support will be disabled.

Please help — I am new to Linux.

(Closed as off topic by Toto, Bobrovsky, fancyPants, Thor, Soufiane Hassou, Feb 13 '13.)

Comments
- Tucker: Your error is similar to the one described in this link: flashdba.com/2012/10/08/…

Answer (accepted)

You need to register your server with RHN from the command prompt before yum can use the Red Hat repositories.

If you're new to Linux, I'm going to suggest the nixCraft website as one (of many) excellent resources for *nix tutorials and questions.
Question

I have recently grown fond of Vim for simple scripts, now that I know how to use it a little (thanks, VIM Adventures!). Is there a LaTeX editor out there with Vim-like commands? Having the toolbar of WinEdt combined with Vim commands for moving around the text and doing search/replace would be great.

Comments
- Andy Ray: google.com/… ??????
- hauleth: Would it be silly if I suggested Vim itself?
- Chris Knadler: That's what I just did :)
- sehe: VIM Adventures — is that a site I haven't looted yet? Going to Google... EDIT: Ah, it's not a tips/docs base. It's more like a crutch instead of doing your actual work in Vim! Still nice though :)
- Werner: See LaTeX Editors/IDEs.

Answer (accepted)

There is vim-latex, which adds LaTeX support to Vim, if that is what you are looking for. I recommend installing it using Vundle.

Comments
- Jan: And in the GUI version of Vim (gvim), vim-latex also provides some toolbar functionality.
Question

In my Cocoa OS X application I have a WindowController with a xib file and two ViewControllers with xib files. I added a custom view to the WindowController, in which I swap the two subviews by removing and adding views when the Continue/Next button is clicked:

    [[theViewController view] removeFromSuperview];
    self.theViewController = [[WelcomeInstallViewController alloc] initWithNibName:newView bundle:nil];
    [innerInstallerView addSubview:[theViewController view]];
    [[theViewController view] setFrame:[innerInstallerView bounds]];

Now, in one of those views I have a button which needs to disable the Continue button in the WindowController. I have looked into NSNotificationCenter — this is my first Mac/Cocoa/Objective-C app. Should I use NSNotificationCenter? I'm confused and didn't fully understand it.

Answer (accepted)

There are many ways to skin the cat. The simplest approach would be to add an outlet to your NSWindowController and link the button to that outlet in Interface Builder, then handle the button enablement on whatever conditions you require.

Notifications are one good way of loosely coupling application components, e.g. in case the window controller doesn't initiate the state change that would trigger the button to disable/enable itself. Other possibilities include NSUserInterfaceValidations, a dedicated mechanism (protocol) in Cocoa for this.

Comments
- Ataur Rahim Chowdhury (asker): But the action (a button trigger) must come from one of the subviews; it should then disable the button in the WindowController.

Answer

Given a similar design requirement (multiple loadable XIBs), I have used the NSViewController paradigm to attach logic to the subviews that I load into the main view.

In this case, I would create an NSViewController subclass with a bool property (let's call it canContinue), and bind the main view's button to owner.subview.canContinue. If you do this, the main view will have to load the view controller (which will take care of loading the XIB) when bringing in each individual subview, and then make sure to assign the subview property in the owner to point to the NSViewController that you load.
Question

I found that I can get all subdirectories of a folder with the code below in PHP:

    $address = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($root, RecursiveDirectoryIterator::SKIP_DOTS),
        RecursiveIteratorIterator::CATCH_GET_CHILD // Ignore "Permission denied"
    );

and this puts them in $address. How can I add one more criterion: if a subdirectory has a 'tmp' folder inside it, then put it in $address?

Comments
- Gordon: For clarification: do you want /path/to in $address, or /path/to/tmp? The "it" in "put it in the $address" is somewhat ambiguous.
- wikinevis (asker): The path/to/tmp address.
- Baba: Does that mean /path/to is not valid because it has tmp in the path? Because path/to/tmp also shows that path/to has that folder in it... Can you update your question with sample output?

Answer (accepted)

You can create your own RecursiveFilterIterator:

    $dir = new RecursiveDirectoryIterator(__DIR__, RecursiveDirectoryIterator::SKIP_DOTS);
    $address = new RecursiveIteratorIterator(new TmpRecursiveFilterIterator($dir));

    foreach ($address as $dir) {
        echo $dir, PHP_EOL;
    }

The class used:

    class TmpRecursiveFilterIterator extends RecursiveFilterIterator
    {
        public function accept()
        {
            $file = $this->current();
            if ($file->isDir()) {
                return is_dir("$file/tmp");
            }
            return false;
        }
    }

Comments
- hakre: +1, this is totally correct (perhaps even more minimal than it needs to be). I very much like this style. Any kind of FilterIterator is okay, and so is RecursiveFilterIterator. Using functions from PHP's standard library inside the object abstraction (here: is_dir) is totally valid, especially as SplFileInfo is part of the SPL, the re-incarnation of the Standard PHP Library. The variable usage is also well done — PHP has copy-on-write, so just use variables as done here with $file.
- Baba: This is one comment I'm going to remember for a long time... :)

Answer

You can probably add the criterion by creating yourself a FilterIterator that checks for a subdirectory. The following usage example demonstrates this by listing the folders I have under git; $address is what you already have in your question, and the filter is just wrapped around it:

    $filtered = new SubDirFilter($address, '.git');

    foreach ($filtered as $file) {
        echo $filtered->getSubPathname(), "\n";
    }

The filter iterator used here is relatively straightforward: for each entry, it checks whether the entry has that subdirectory or not. It is important that you have FilesystemIterator::SKIP_DOTS enabled for this (which you have), otherwise you will get duplicate results expressing the same directory:

    class SubDirFilter extends FilterIterator
    {
        private $subDir;

        public function __construct(Iterator $iterator, $subDir)
        {
            parent::__construct($iterator);
            $this->subDir = $subDir;
        }

        public function accept()
        {
            return is_dir($this->current() . "/" . $this->subDir);
        }
    }