global_05_local_5_shard_00000035_processed.jsonl/34315
|
Discussion in 'Mac OS X Lion (10.7)' started by Stevies3, May 22, 2012.
1. macrumors member
In Safari I'd like to change the default Download folder to my external drive. My question is, will the NEW Download folder still show up in my Favorites within Finder?
2. macrumors 6502a
You can drag it onto the sidebar.
|
global_05_local_5_shard_00000035_processed.jsonl/34316
|
Problem with folder names
Discussion in 'iCloud and Apple Services' started by SiFly, Sep 17, 2010.
1. macrumors newbie
I logged into MobileMe last night and noticed a few things which were unusual. First of all, the introductory movie that usually plays when you sign up to MobileMe started playing. Then, when I cancelled it, my Trash Folder had been renamed "trashFolder: Deleted Messages". I checked it in Mail.app and the Trash folders there are named On My Mac and MobileMe.
I also noticed that when I deleted any of the messages from my inbox on MobileMe, it deleted it immediately and did not store them in the Trash folder for emptying.
I'm in a support chat with somebody from Apple right now but I wondered if anybody else had experienced a similar problem at all?
|
global_05_local_5_shard_00000035_processed.jsonl/34317
|
Questions.. Thinking about selling iPad and grabbing a MBA
Discussion in 'MacBook Air' started by gdeputy, Nov 3, 2010.
1. macrumors 6502a
I have and love my iPad. That said, I mainly use my iPad to browse the web at night in bed. The new MBA 11 inch looks mighty tempting for the fact that it has a real keyboard. I love my iPad to death, but I'm absolutely considering getting rid of it for 420ish (I spent 540 all said and done) and taking the plunge on an MBA.
I will say I doubt I'll do MUCH traveling with the MBA.. more so around the house... but I will take it with me occasionally, and I think with the portability I'll be more likely to take it than I would a regular MBP.. especially with the iPhone 4 tethering I use...
I just know that sometimes with my iPad i wish I was using a real computer for certain things.
At the end of the day, I have a killer windows desktop that I use for my games, and demanding tasks.. I would want this for browsing the web, portability, maybe some SC2 (I don't expect great settings, but maybe...) and just for general convenience.
Should I be looking at the MBA or MBP? They are around the same price point.. which one is going to benefit me more? Which has the better screen for viewing? Because to me that's somewhat of a big deal.
Also, am I pricing my iPad fairly? I don't want to get less than I can because the thing is immaculate, not a single mark or scratch anywhere on it. You would swear it was brand new.
2. macrumors 65816
Dammit Cubs
I can tell you right now, this is the problem I am facing. It truly is a hard decision. I'm trying to see how long my MBA 11 battery can last. There are some things that are great on the iPad:
Casual internet browsing
I use maxjournal for work and personal
Reading e-books.
But there are some things that are just awesome on the MBA, and I can feel the difference in productivity using this machine vs. the iPad. If I'd gone with the 13in, I wouldn't have this dilemma, but the fact that I went with a screen size that's close to the iPad's made it a lot harder.
The biggest advantage the ipad has is that no matter what I did, the battery lasted 8 to 10 hours. That's what really made it awesome.
3. macrumors 6502a
very curious to hear about battery life. One of my favorite things about the iPad is the battery life, which is really just incredible. I love my iPad very much but ultimately it does hinder a few things I would like to do, and in my experience the app store hasn't really taken off with great apps like I would have loved to see.
4. macrumors 65816
I sold my iPad without telling my wife, and right after, she got pissed. We know our daughter religiously praises the iPad. It is a lifesaver in many ways.
She agreed I can buy the MBA 13" but she wants at least a wifi 16GB for our daughter.
Believe me, the iPad is hard to let go. Now my daughter keeps asking for her "APAD" and it breaks my heart..
5. macrumors 65816
Tell her to wait a few more months for APAD 2.
6. macrumors 6502a
lol, funny you say that.
Now my girlfriend said tonight when I told her: "You didn't even consider me in selling your iPad, I've got over 260 games of Scrabble played!" lol
7. macrumors 65816
I told you, the iPad is a piece of art lol!
Seriously, I never think about the MBP.. These days, do you really use a CD/DVD? It's all about memory sticks now, or data storage, whatever that is. Eliminate the SuperDrive and go all the way to the MBA.
8. macrumors 65816
Buy her one.
9. macrumors 6502a
I think she's trying to say she wants the iPad and wouldn't mind having his. ;)
Then he's got a good reason to get the MBA. :D
10. macrumors 6502a
Yeah, basically my girlfriend loves the iPad and doesn't want me to sell it, so now I'm contemplating whether I want to drop the money on an MBA or MBP.
11. macrumors 65816
MBA > MBP at this time of writing.
12. macrumors 6502a
Why sell the iPad? As much as the MacBook Air is very nice (I have one), it still does not replace the iPad as the choice for the toilet :)
I did sell my 2009 Mac Pro to buy the Air.
13. macrumors 6502a
The iPad is cool and all. But it just can't replace a full computer.
14. macrumors regular
Yeah OP you should keep the iPad AND get the MBA also :D
but seriously sounds like you should get the 11" MBA instead of MBP
15. macrumors regular
To all those wondering what to do, the iPad is a very different device. I have one and won't replace it with an Air although I will be getting the air.
I bought the iPad for what it's good at; it's great to pick up in the morning and read the paper, and great for films etc.
The Air is a computer, I wouldn't wander round reading the paper or books on that!
16. macrumors regular
The best thing is to have both the iPad and the MBA, at least one of each for every couple/family LOL :D
I wouldn't get both for myself but I'm glad my wife is getting an 11" MBA, it will be great for her daily activities but also for me when we will be traveling. On the other hand I would never give up my iPad, I'm certainly using it less now after 5 months of having it compared to the beginning, but it's great to have it there on my bedside table with my ebooks and pdfs and videos in it. I must admit that it's limited, you just can't do everything on it, and when I hit an all-Flash website I'd like to throw it out of the window :D Anyway, 3G connectivity plus the astonishing battery life make it better than the MBA for some things.
17. macrumors member
18. macrumors 6502a
I'd had my iPad since launch, sold it, and bought the MBA. However, I miss the iPad and its battery life very much. I did get a good price for my iPad, so I'll use that and buy the next rev. next year.
19. macrumors 68040
I've invested too much in iPad cases to sell it.
Ebay listing: "iPad 32GB + 125 cases"
Don't know if I could recoup.
20. macrumors 68030
I am sticking to both of them. I am keeping my iPad + MBA, but MBA 13 inch not 11.6.
21. macrumors 6502a
I have the 11.6" MBA and have been carrying both the iPad and the MBA. I said last week in another thread that I wasn't selling the iPad. Today, my iPad is listed on craigslist. I already have an iPhone 4 for games and the MBA for emails and internet. I would just use the iPad as an e-reader and for that purpose, I might just as well get a smaller and cheaper e-reader.
22. macrumors 68040
I would do that myself, and just carry my Kindle 3. But the Kindles (and most other e-readers I've seen) aren't that great for PDFs, which the iPad does very well.
I've got a lot of PDFs.
Note: I've got a feeling these smaller 7" tabs coming out aren't going to be very good PDF readers. The iPad with the 10" screen is the bare minimum for me. Otherwise I would have to zoom constantly, which is distracting (and tiring).
|
global_05_local_5_shard_00000035_processed.jsonl/34318
|
Thunderbolt port and mini displayport adapter
Discussion in 'MacBook Air' started by Peter Harrison, Sep 13, 2012.
1. macrumors 6502a
Peter Harrison
Hey, quick question.
I had a 2011 Macbook Air, back before Thunderbolt was introduced. I sometimes wanted to hook the laptop up to a very large TV for movies and sports so I got the mini-DisplayPort to HDMI adapter. It worked fine, though not for audio. That sucked. Anyway...
I've now ordered a 2012 Macbook Air, which does have Thunderbolt instead. I've heard that the port is essentially the same as mini-DisplayPort, except obviously using the new standard that is much quicker.
I'm considering getting a monitor at some point to use as an external display. It will be a sensible size for computing, not my TV, but I imagine most monitors these days use HDMI anyway. My question: I kept my miniDP to HDMI adapter; will it still work on the 2012 MBA, plugged into the Thunderbolt port? The alternative obviously being that I need to buy some identical-looking Thunderbolt to HDMI adapter.
2. macrumors 6502
Back in the day, before the standard changed, I got myself an iWires (from the Apple Store) mini DP to HDMI cable, not an adaptor. This also does audio, and I use it on my 21.5-inch monitor and my 37-inch TV, and I have never had any trouble with it. Also make sure, under the Sound pane in System Preferences, that the TV is set as the speaker output.
3. macrumors 6502a
Peter Harrison
Thanks, that sounds excellent. Obviously a simple cable would be preferred to a cable and adapter, which looks a lot uglier.
But in the last few minutes I've also discovered something else. Apple isn't selling Thunderbolt to HDMI at all, it's still Mini-DP to HDMI, so my adapter should be absolutely fine. Thunderbolt is fully backwards compatible.
But my adapter never worked with audio through HDMI. I notice the product Apple now stocks is called the Moshi Mini-DP to HDMI adapter "with audio". Did they remove the old adapter at some point and replace it with one that does work with audio? :/
4. macrumors 6502a
There is no Apple branded mDP-HDMI adapter. All adapters in the Apple Store are from 3rd party suppliers. The good news, all will pass audio, and all will work in either an mDP or Thunderbolt connector.
If you are purchasing from a store other than Apple or an Apple reseller, you will need to check carefully as to whether it supports audio pass-thru. A popular one is available from monoprice (I have several that work well).
While it is possible to make an mDP to HDMI cable as one assembly, this is not approved by the HDMI licensing authority. Some may still be available in the market, but you may not be able to find one that supports audio pass-thru.
So, the best solution is the adapter + HDMI cable.
5. macrumors 68030
3rd party manufacturers just ignored that edict and they are easily available.
Here, here and here, all promising audio support.
6. macrumors 6502a
Peter Harrison
Now, maybe. My adapter that didn't pass audio came from the Apple Store. Now the exact one isn't there anymore, and they have another which says "with audio" in the store's product title, so I'm guessing they got complaints and chose a different 3rd party this time and replaced the adapter. Anyway, I have HDMI cables all over the place, so I'll get one of those adapters. Thanks. :)
Anyone want a non-audio adapter? ;)
|
global_05_local_5_shard_00000035_processed.jsonl/34319
|
Why should I pick a specific designer instead of online group designer sites?
Discussion in 'Design and Graphics' started by twiggy0, Mar 29, 2012.
1. macrumors 6502
I know this is a controversial topic, but I would really like the input of designers as to why I should pay a specific designer instead of a group of online designers. (I won't name any sites)
I understand the money aspect, in that it takes away money from the design community, but from my perspective I save money, and I'm just starting a business and can't afford to pay a huge amount for a design I'm not certain to like.
I plan on paying $700 to get a design input from over 150 designers on a site.
With such wide range, I am almost guaranteed to get a design I like.
If I pick a specific designer, I might only be charged, let's say, $300-$400, but I wouldn't have nearly as many choices, I might not like the design, and I would lose my deposit if I don't end up picking that design. (Most designers ask for a deposit percentage.)
Can designers please give me some input on this? I would be happy to help support the designer industry but I see it as such a huge disadvantage to me in terms of end product.
I'm all ears though.
2. macrumors 68000
Don't you have a friend/relative who has Photoshop on their computer that can do it for free? :)
Seriously, if you hire a specific designer, you will get something you like. After all, you can look at the specific designer's portfolio/website and decide if you like his or her style. You will have a person you can talk to and describe what you are after. You will have something that will suit your business and your clients much better.
The difference is like buying food from a supermarket vs. buying fresh food from a farmer. The supermarket has a much wider selection than the farmer, but the farmer's food will taste better and be healthier than anything from the supermarket.
3. macrumors 65816
Which does the military prefer, carpet bombing or laser guided smart weapons?
Drop lots of bombs from 40,000 feet and one might hit, but the wind, navigation error, or many other things could mean they are all miles off target.
Choose the right smart weapon and the results will be as good as your aim with the laser.
Talk to several designers, work with the one that appears most promising, and give them good information on what you want. That is the best chance of success.
4. macrumors 68040
In the end you're only getting one design. If 150 low-quality designers give you options, you're still only going to get one low-quality design.
5. macrumors 6502
That's the thing though: the fact that I place a $700 reward attracts some of the best designers on the site. I've reviewed many projects and some designs are actually very well done. (To my taste, which I'd say is decent haha.)
But I'll look into some designer portfolios and see what I can do with that. Would a professional designer charge me less than $700?
EDIT: Should I look in the forums and start asking designers for portfolios, or post more or less what I'm looking for with my reward price and wait for designers to contact me with either samples or their portfolio?
6. macrumors 68040
You get what you pay for. I can't imagine many quality designers using a site like that.
7. macrumors 6502
Some designs (especially when you place a $700 reward) are really good, quality made designs.
8. macrumors regular
And then they'll reuse the logomark they made for you for other people looking for a cheap logo.
9. macrumors 6502
Not exactly.. I'd have specific guidelines and am looking to incorporate my company name in the logo.
If you can help me by telling me how exactly I should look for designers, I would appreciate it though.
10. macrumors 604
As others have stated, you'll get MUCH better results from an individual designer versus a design site. You really, really do get what you pay for, and if you go cheap, it'll show and will be reflected in your business.
I know you think you've seen good logos on these sites, but in reality it's rare to find a truly good logo on sites like this. Professional designers usually don't use sites like this.
Think of it this way: would you rather have a ton of inexperienced college kids create a logo that represents your business and is seen by your customers (which is what happens on these sites), or would you rather hire a professional who knows how to take the time to custom-tailor a logo to your business? (Mind you, you will get a LOT of samples from one designer; they don't just make one logo and give it to you.)
I say go with a single individual, not as a graphic designer (which I am not) but as someone who has had several friends who started businesses. Every single one went the cheap route, got something they thought looked good, only to realize it was a poor design later down the road and had to pay a professional to redo everything for them. Not only does this cost more money in the end, but it's confusing to customers when a logo changes.
11. macrumors 6502
How would I go about finding a graphic designer?
12. macrumors 65816
Established ones will often appear in local business directories. Being able to communicate face-to-face is a huge benefit in getting your requirements across clearly, and in choosing the right person, so stay local if you can.
13. macrumors 6502
I understand that you'll have specific guidelines to ensure that your design is unique. However, as a designer who has checked out these sites, I can tell you that you are right that $700 will attract a lot of designers, but many of the designers who work hard on these sites to make money will blanket them with design after design using stock vectors, spending as little time designing as possible (time = money, so it's less about the individual design to them and more about how many they can churn out in an hour). So while you may pick one that looks good, there's a good chance that elements from that design have already been used 100 times over for other clients. Also, out of those hundreds of designs there will only be a handful that A) meet your guidelines and B) are well designed. A good designer will give you a handful of options and multiple variations before settling, so that perceived "value in bulk" is not very relevant.
I do understand you feel that with your guidelines things will be different, but trust me when I say that you will get better value for your $700 if you find a good local designer who is specifically catering to your design needs. It's like buying a tailored suit versus buying off the rack, but in this case they're the same price. For me the tailored suit will win every time. Good luck!
14. macrumors 6502a
Apple Key
Have any of those designers created internationally recognizable logos? Or even nationally recognizable?
Unless you are a graphic designer, no offense, but you are not qualified to appropriately judge the quality of a logo. You will think you have received a great logo in the end and won't know any better.
I highly suggest you read this article:
15. macrumors 68040
Before design I was in the Air Force; ironically, carpet bombing was the choice because it was cheaper as well :cool:
@OP: the crowdsourcing sites do have their place, though I am not a fan. I would ask a question regarding the purpose of the design: is it one-off or continuing work?
If it's one-off, then you *might* get lucky with crowdsourcing; however, I'd strike up a working relationship with a good designer. I'd suggest checking a few portfolios and then going from there, because for long-term design and branding it will be worth it in the long run having someone who can help you build up the branding.
16. macrumors 65816
Can you imagine working in an office and having your boss say, "I want you to write up a proposal. I'm having ten other people write up the same proposal and I'm only going to pay the person whose proposal I choose."
Is that how you'd like to work?
If not, why would you ask others to work that way?
There are some ethics involved here.
Just because you can dangle a coin and make the monkey dance, doesn't mean that you should.
17. macrumors 6502
Ok, I decided I'm going to look for a local graphic designer to help me.
I have just one problem at this point: I have no idea where to start looking. :confused:
18. macrumors 65816
Google "graphic design + your locality" or "ad agency + your locality"
Then look through the online portfolios that any good one should have available.
Oh, and thank you for not making the monkey dance. :)
19. macrumors 6502
I googled "logo design miami" and "graphic design miami" with nothing promising. Went through a few pages of Google.
I feel like everyone ends up just picking the dancing monkey because it's so simple and stress-free, and the best part is that you have a 100% money-back guarantee if you don't like any of them.
Seems like no specific person wants my $500+ to make me a professional design. :(
Anyone have experience commissioning designs over the internet?
Can you recommend anyone? (Link to portfolio?)
20. macrumors 68000
Crowdsourcing websites are more of a headache in the long run than anything else. The majority of the so-called designers there don't know anything about print production or design in general. When you get files in JPEG or PSD format and can't do anything with them aside from the original task you paid for, or when you realize that the logo designed for your company is nothing more than some free clip art from the web with a default system font, that's when you know it's time to hire someone local.
21. macrumors newbie
I'm a professional designer
I'm a professional graphic designer and photographer. My website is www.fearghal.co.uk.
Feel free to drop me an email with your requirements and we can chat about it further so that I can understand what you want, I'll give you a quote and we can take it from there. mail (@) fearghal (.) co (.) uk
I've only read a few posts here and most of them are true. No decent designer would use one of those crowdsourcing sites. Rather than just rattling out a design to suit your requests, good graphic designers will advise you on specific things, highlight problems with your brief, and actually help you figure out what you want. Importantly, they'll make suggestions and ask questions. You should think of it as an investment up front. Invest in a good brand, get it right the first time, and start your company off on a good footing.
A good brand will last you forever.
22. macrumors 6502
Thanks for sharing that link. I'm a web developer. My expertise is mostly on the code and technical end of things, but I do a bit of design as well, and I've often wondered how these design mob sites fared.
If you read through the article though, you'll notice that even though they're saying the design you get is subpar, they do think there's a place for it and I'd have to agree. Not everyone is able to make proper use of a strong brand and a lot of these tiny businesses won't be around in a couple of years anyway. The ones that succeed, could redo their logo later on if they survive.
That's not to say you don't want to do things right the first time, but if you've got a very limited amount of money to spend and you have to spread it out wisely, a mob design option might be your best choice.
Similarly in my Web development world, I sometimes tell struggling new business owners not to hire me and to pay for some inexpensive do-it-yourself drag and drop online-design Web package and just hack something together. Sometimes, what they come up with is good enough even if it's not very inspired. Also, some of these people need to fail first before they're ready for me. A new business owner's first website is often a train wreck, but it's a great exercise for them to go through so they have a better appreciation for what I do and their failed attempts often help me understand what they need and how I can best help them.
Anyway, back to the OP's question. I'd say the most important reason to hire your own designer is for the long term relationship. If you find the right person who gets to know your needs and what you like, it'll save you a ton of time over the course of that relationship. You do not want to go through what amounts to a hiring process each time you need something done... and as many other people have already said, you're not likely to find great designers willing to work this way. You might get lucky and catch a good one who's just passing through the mob design jungle as a step in his or her development, but anyone who's really good is not going to be there for long.
23. macrumors regular
24. macrumors 6502a
Apple Key
I agree, there is a place for this type of design. This is how the world works. There is always a market and a place for low quality work, medium quality work and high quality work. Same thing applies in any field.
It really depends on the company you are trying to start. Some companies will need very strong branding to survive or compete. Others could do just fine to grow their business with a simple logo and providing their customers with quality service. There are many successful businesses with branding that is way lower quality than what you would get from one of these mob sites even.
One of the problems with someone who isn't a designer using this service is that they would not necessarily know the right questions to ask and right modifications to make to the logo. In the end they may settle for a logo which they absolutely love, but which won't work well for their business and their customers might not even like.
25. macrumors 6502
"Buyer Beware!"
Good luck with that.
|
global_05_local_5_shard_00000035_processed.jsonl/34320
|
Welcome to the Forum Archive!
Mastery Screen Not Displaying Effects
Junior Member
When I play a game and then return to the mastery screen to relocate the points, the descriptions of the masteries do not appear. Also, when you reset the mastery points, the points reset but not the points on the board, so you get back all your 30 points, but the board still shows the setup you had prior to resetting.
|
global_05_local_5_shard_00000035_processed.jsonl/34331
|
The Speculationist
As green as a mercury vapor emission line, but nowhere near as bright.
14 February 1965
External Services:
• [email protected]
• SkipHuffmanAtl AIM status
• [email protected]
• skiphuffman
Stuff I intend to write about in this live journal.
First: Thoughts about human expansion into the solar system. Second: Thoughts about how we humans are handling our affairs here on earth. I may also occasionally write about other things.
If a post seems out of context, I am probably expanding on something confusing that I wrote earlier.
[edit] Ok, fine. That was my intention, and a lot of my early posts were like that. Now I mostly forward science articles and whine about stuff.
flying car, television, science, space settlement, theatre
|
global_05_local_5_shard_00000035_processed.jsonl/34335
|
Schizophrenia Is The New Ad Gimmick
Walking westward on Prince St. between Mulberry and Mott Streets, I heard a woman's voice in my head whispering, "Who's there? Who's there?" Not like I "heard" a woman's voice like when I wear flared jeans with skinny shoes and I "hear" a woman's voice in my head say, "Wait, you've got to be kidding?" but like an actual woman's voice in my head. This usually means I've had a psychotic break.
But! Then I noticed that, above a billboard for some A&E show called Paranormal State were some speakers that looked like hypersonic sound beams, a device which uses your skull as a speaker—that is, it transmits soundwaves that resonate against whatever surface they hit.
The billboard says 73% of Americans believe and I'm assuming that that means 73% of Americans believe in ghosts. So if that's true, why try to convert the skeptical/not crazy 27% by beaming voices into their heads? That's just greedy. Also it leads to a lingering sense of serious mental violation. How soon will it be until in addition to the Do Not Call list, we'll have a Do Not Beam Commercial Messages Into My Head list?
|
global_05_local_5_shard_00000035_processed.jsonl/34337
|
How Apple Is Dogfighting To Control Your News
Apple move: Banishing Flash. One of Apple's most prominent maneuvers was its decision to exclude Adobe's Flash animation technology from the iPad, as with the iPhone before it. When CEO Steve Jobs unveiled the tablet device in January, it had no support for Flash, and none is likely forthcoming: in an iPad-related meeting with Wall Street Journal editors, Jobs trashed Flash as unstable and insecure, and said it would be "trivial" for the newspaper to dispense with it in preparation for the Apple tablet.
Publisher countermove: Baking Flash into apps. The publishers aren't just going to flush their Flash investment. It's massive; since our post about Jobs' Flash rant at the Journal, we've received emails from media types defending the Adobe software.
You can read five of the best emails here in an accompanying post. Taken together, they strongly contradict Jobs' claim that it would be "trivial" for publishers to ditch Flash in preparation for the iPad. Our emailers said Flash is deeply integrated into news outlets, powering sophisticated video players, interactive graphics and — hello? — advertising that would be difficult if not impossible to duplicate using JavaScript and other technologies supported natively on the iPad.
As one online producer told us, "Flash for interactive graphics is irreplaceable," while ditching it "requires broad changes across multiple properties... Oh, sure, just use Javascript: well guess what, we don't have a bunch of code junkies in our newsroom."
Luckily, Adobe has some little-talked-about software it calls Packager for iPhone. Set for wide release some time in the second quarter, the packager compiles Flash code down to code that will run natively on the iPhone. In simpler terms, it converts Flash code into iPhone code.
Will Apple allow this? Adobe's Jeremy Clark told us it already has:
iPhone applications built with Flash Platform tools are compiled into standard, native iPhone executable packages and no runtime interpreter is necessary to run the application. Over 30 Applications built using the [pre-release] Flash Packager for iPhone have already been accepted in the iPhone app store so we're confident that our method fits within the rules of the iPhone App Store.
All of the apps highlighted on Adobe's website are games or entertainment oriented, but that's changing:
Wired has been working with Adobe, and used Adobe Air to power the demonstration tablet edition featured in its recent video "Wired Magazine on the iPad." Wired is probably hoping, then, to use an iPad version of Adobe's Flash Packager to get its content onto the Apple tablet. Wired could design its e-magazine in Flash, export using Adobe's tool, and distribute through the iPad App Store. As Editor Chris Anderson told us,
It's fair to say that Wired's preferred path (indeed, the one we're on) is cross platform, starting with the Adobe authoring tools we already use every day to put out the print magazine (InDesign, etc).
How that emerges in e-reader form depends on the platform—sometimes it's a straight save as Adobe Air, sometimes it requires going through a cross-compiler tool. But the ultimate aim is create once, read everywhere, with all the fine-grained design flexibility we have in print combined with the new interactive power of tablets.
The only complication is performance: The iPad's Apple A4 processor is weaker than those in most personal computers, so Wired will have to be especially careful with its Flash programming.
Apple move: iStore for magazines and newspapers. Although no one will go on record, we're told that Apple's working on its own built-in iPad store for magazine and newspaper content — a sort of "iNewsstand" to complement iBooks, the bookstore, and iTunes, the music store. It's a predictable move, the most logical and consumer-friendly way to distribute e-magazines and e-papers via the iPad.
Without a central application for managing subscriptions to periodicals, after all, users will end up accumulating a messy jungle of magazine and newspaper "apps" on their iPads, each requiring a separate installation and bringing to the table its own user interface quirks.
Publisher countermove: Sticking to apps. There's no telling how publishers will respond to Apple's iMagazine stand because it doesn't exist yet; pricing, interface, format, revenue split and content rules are still unknown. But the content creators do have one bit of leverage: If they don't like Apple's terms, they can threaten to keep selling standalone apps through the App Store. No one publication has as much invested in the iPad user experience as Apple, after all, so why should the publishers care if their apps clutter up the device?
Apple move: Censoring content. Apple is already censoring content on iPhone apps, but it's sending mixed messages: The company banished thousands of apps containing "sexually arousing content" like women in bikinis while letting the Playboy and Sports Illustrated Swimsuit Edition apps stick around. It seems likely Apple will have to get more consistent and clear with the rules on the iPad, if only to save itself from headaches. Magazines and newspapers seem to be flocking to the device in large numbers, and their apps promise to be chock full of racy pictures, racy advertisements and even racy PDF copies of the print edition (horror!). The clearer Apple can be up front, the fewer fights it will have with publishers.
If it keeps the rules for iPad app content especially restrictive, Apple will have leverage to encourage magazines to distribute through its own iPad periodicals store. Just allow more free expression in the magazine/newspaper store than in the app marketplace.
Publisher countermove: Retreat to the Web. Apple can set all the rules it wants for content distributed through its own stores. But no one says publishers have to be in Apple's store in the first place. If Apple's policies prove too restrictive — or, worse, too hard to predict — publishers can simply publish whatever they want on iPad-optimized versions of their websites. NPR has already developed such a site to filter out Flash content for iPad users; racier publishers could produce iPad sites to preserve their freedom of expression. In fact, Apple's PastryKit framework allows publishers to come awfully close to duplicating the iPhone/iPad interface in a Web app.
Apple move: Banning apps with Flash baked in. Steve Jobs really seems to detest Flash. So past might not be prologue: Just because Apple allowed onto the iPhone 30 apps cross-compiled with Adobe's Flash Packager (see above) doesn't mean the company will allow cross-compiled Flash apps in the future.
In fact, Wired's parent company Condé Nast seems worried about Apple banning such apps. CEO Chuck Townsend told Peter Kafka of All Things D he is uneasy developing complex iPad editions like Wired's at other titles, due to Apple's antipathy toward Flash. So he's porting other magazines to the iPad using a less ambitious strategy of simply duplicating print pages within the app. That approach would require far less Flash coding, and thus there would be far less lost if Apple banned the technology used in Flash Packager.
Publisher countermove: Rally the geeks. Flash Packager isn't the only tool that takes unsupported code and turns it into native iPhone/iPad software; Novell's MonoTouch pulls off a similar trick by pre-compiling programs from the Mono programming framework. There are already games in the app store pre-compiled from a Mono game platform, in fact. If Apple tried to ban Wired's tablet edition and the other Flash Packager apps, it would have to try and explain why MonoTouch apps aren't banned, too. If Apple did ban MonoTouch apps on top of Flash Packager apps, it will amount to demolishing not one but two major avenues for iPhone and iPad apps. Fewer apps means less energy and excitement around the Apple products.
If outmaneuvering Apple sounds like an increasingly technical endeavor, that's because it is. But if old-line publishers want to have any hope at exploiting Steve Jobs' technologies without getting unduly exploited in turn, they should have started reading up on such geeky matters months ago.
|
|
This Is Heaven Compared to the 80s
The Way We Live Now: better than we used to. Despite everything, our living standards are rising! Can you believe it? Even with no jobs or pensions, we're seeing improvement. We pay for it all on credit!
"The average annual income was $24,079 per person in 1980 in inflation-adjusted dollars, according to Bureau of Economic Analysis data. Last year, it was $40,454 per person." Fucking game set match, motherfuckers! It's right there in the USA Today. Of course, in the 80s they had The Gipper, and his living presence was probably worth at least $10K per year for every man, woman, and child in America, but still. We're doing better today. Believe it.
Our problem is not that we don't have a lot. It's that we feel like we don't have a lot. We had slightly more a couple of years ago, and losing that small luxurious edge makes the average American suffer pain equivalent to amputating the foot of the average Indonesian family's first-born son. That's how seriously we take our ability to buy granite countertops, here in America. Public pensions are dead, but all that means is that our benefits system is catching up with our wallets; as long as we can pretend we have some sort of safety net, the number of ducats in our bank accounts give us very little real happiness.
For most Americans, all we want is positive perception. We're willing to buy "century bonds" that won't mature for 100 god damn years just because we like the warm feeling of owning something backed by the full faith and credit of whoever might be running our government a century from now (Katelynn Clinton III). In that sense, we are ignorant and blissful. Also, how did we let these huge and comforting megabanks grow until there is no hope of saving ourselves when they eventually collapse? Yea, same thing.
Buck up, Americans. At least it ain't 1980. You have twice as much money. You have twice as many ice cream flavors. And you're still alive. Let's "win the future"—for the Gipper!
|
|
But shit, niggas like me wasn’t meant to go to heaven
We was meant to be alone, die hanging from a ceiling
Try praying for forgiveness, but God told me to shut up
from Vince Staples (Ft. BrandenBeatBoy) – Taxi Lyrics on Genius
The protagonist of the song has a morbid, fatalistic view of life; in his mind, he was always destined for a tragic, self-inflicted death, without even the hope that an afterlife offers.
“Try praying for forgiveness” is a more emotional line than it might at first appear. Since Vince is an open atheist and often criticizes religions, for him to pray to something he doesn’t believe in shows he’s in a very dark, desperate, guilty, and helpless state of mind. Alternatively, the line might show that the fictionalized Vince of the song may be a believer, unlike the real Vince.
The verse sounds like it’ll go on, but when God tells him to shut up, Vince does—the song (and therefore the album) ends abruptly and unsettlingly, much like the young man’s life.
|
|
The Police Saved a Sex Doll from Drowning Because They Thought It Was a Real Girl
"Oh my god! Is that a naked lady drowning in a river? We must help. Call backup! I'm going in after her." That must've been what 18 policemen in Shandong, China were thinking when they saw a 'woman' submerged in the river—embarrassingly for the police, it wasn't a real person they rescued but a sex doll.
Yes, a deflated and dirtied sex doll where personal parts are inserted for human pleasure. Even more embarrassing, though, is that the policemen (all 18 of 'em!) spent nearly an hour trying to rescue the doll. Aren't those things supposed to be cooperative? What were all 18 policemen even doing? Trying to compete against each other to see who could rescue the naked lady? Working on a pickup line? Not one of them thought, hey, maybe she looks like a fake person?
Combining this hilarious incident with the mistaken mushroom identity of a sex toy that happened a month ago in China, it's clear that someone needs to give China a heads up on the sex toy industry. We'll gladly do the honor: toys are not mushrooms, dolls are not people. [RocketNews24 via Metro UK]
|
|
10-20-2012, 03:24 PM
Registered User
BoxOfChocolates
Join Date: Mar 2010
Location: Stankonia
Posts: 8,585
From Adrian Dater:
About the NHL lockout, I'm hearing...
...that there is a lot of backroom talking right now, and that we should see "something new" happen by early next week. I don't think anyone has an idea, though, which way things could break. Union is still adamant about that first year. The sides really aren't all that far apart, though. This shouldn't be all that hard to bridge. But that doesn't mean it'll happen. There are fragile egos on both sides and, as we've seen, they are fully capable of acting like little 2-year-olds when they don't get their way.
Meanwhile, life will go on with or without the NHL...
|
|
Melanzane Alla Parmigiana is a delicious eggplant preparation with an unforgettable flavor. The amazing texture and taste of this filling casserole is simply something you shouldn't miss. Try this Melanzane Alla Parmigiana recipe.
Eggplants 2
Flour 4 Tablespoon
Salt 1⁄2 Teaspoon
Pepper 1 Pinch
Oregano 1 Pinch
Eggs 4, beaten
Olive oil 4 Ounce
Tomato sauce 2 Cup (32 tbs)
Grated mozzarella cheese 1 Cup (16 tbs)
Grated parmesan cheese 4 Ounce
Bechamel sauce 1 Cup (16 tbs)
Peel eggplants and slice into 1/2"/1 cm slices, then dredge in flour mixed with salt, pepper and oregano.
Dip slices in beaten egg until well coated.
Heat oil in a large frying pan and cook eggplant for 1 minute on each side, adding more oil if necessary.
Pour a little tomato sauce in the bottom of a casserole dish, and cover with a layer of eggplant slices.
Mix half the mozzarella cheese with the rest of the tomato sauce, parmesan cheese, and bechamel sauce.
Pour a little of this mixture over eggplant slices, and continue to layer, ending with sauce.
Cover with remaining mozzarella cheese, and bake in a preheated 350°F./180°C oven for 40 minutes.
Turn off heat and let sit in oven 5 minutes more.
Remove and serve.
Recipe Summary
Side Dish
Lacto Ovo Vegetarian
|
|
Drawings by Jean Vincent
Charcoal and Conte sketch on Scratch Paper - About 4" X 5-1/2"
Popeye - October 5, 2003 - Rough draft
From Photo by Pete Mones
You can write to Jean Vincent at:
|
|
As Another Woman Goes Missing, Two Others Get Justice
Another day, another slew of stories about missing white women. 21-year-old Leah Hickman, a student at Marshall University in West Virginia, has been missing since Saturday night. There are no leads in her disappearance, and her friends have created a Facebook page to help aid in the search for her. Her absence was noted when she missed her shift at Dress Barn on Saturday. (I am desperately trying to keep myself from making a joke about how she probably ran away to avoid another Saturday night at the Barn of Dresses, but I'm too classy for that.) In other news, an arrest has finally been made in the murder of Emily Sander. Emily's original disappearance received a lot of salacious press because of her foray into internet nudie pics. Israel Mireles, with whom Emily left a bar on the night of her death, is now in police custody.
And lastly, police officer Bobby Cutts, who has been in jail for the murder of his pregnant girlfriend Jessie Davis, admitted to killing her. Though Cutts originally pleaded not guilty, he confessed the murder to his high school friend, Myisha Ferrell. Ferrell has agreed to testify against Cutts. Jessie Davis was nearly nine months pregnant when she was killed.
Student's Disappearance Baffles Family, Friends [ABC News]
Arrest in Student-Porn Actress' Death [Breitbart]
Cop Admits to Killing Pregnant Girlfriend [ABC News]
Earlier: Missing Porn Star Wasn't Even A Porn Star
Men Like Bobby Cutts Are More Common Than You'd Think
|
|
Doug Feith Defends Torture, But Knows Nothing Of Beaver
Today, yet again, another Bush Administration toady who isn't Karl Rove, Harriet Miers or Josh Bolten will head up to Capitol Hill to testify before Congress that everything is hunky-dory, they were just following orders, torture isn't really torturous, blah, blah, blah. But today, the Windy's own Spencer Attackerman is on the case so we got our mocking muscles ready (it's like Obama's workout, only minus the hotness of Reggie Love and with a lot more bad jokes) and proceeded to debate the appropriate punishment of the Bush Administration criminal types, the relative worth of Monster energy drink, German versus American gas prices, offshore drilling and whether AP Washington Bureau Chief Ron Fournier is a huge suckup or completely biased. It's all after the jump, people.
MEGAN: Just for the record, I thought it important to note at this juncture that I spent 12 Euro this morning on a T-shirt that says "Good Bush, Bad Bush" and features a picture of a woman yanking down her underwear and one of George Bush, but mostly just because it was 12 Euros and a nice heavy T-shirt. I'm hoping to wear it, like, around the Republican convention or something.
And, I have been wondering for the better part of the last week what gas costs here vs. in America between the exchange rate and the liter/gallon conversion and in the last 3 minutes I have calculated it. At today's exchange rate, gas is about $9.36/gallon in Germany (at least in this part of Germany). So, um, I think we've got a long way to go gas-price-wise.
SPENCER: the Germans had better lift their ban on offshore oil drilling then how else will they maybe bring the price of gas down 3 cents in maybe 30-40 years?
MEGAN: I mean, not even Bush fucking believes that shit, he just wants more gas because you know he ain't getting back on a Segway any time soon.
SPENCER: also, you know what's disgusting? Monster Energy Drink. I don't know how people drink this shit, but I have like 15 oz to go and while the Sunk Cost Fallacy doesn't apply to, say, investment strategy or the Iraq war, I feel like it has a certain logic when it comes to morning beverages.
I drove to Baltimore and back on Saturday but thanks to the miracle of Zipcar's gas-dedicated credit card I did not purchase gas
MEGAN: What happened to you drinking coffee? All those "energy" drinks — and especially Red Bull — taste like overprocessed Mountain Dew to me.
SPENCER: you, my Carolla-wielding friend, are fucked. I like Red Bull
MEGAN: Luckily, I hardly drive my Corolla.
SPENCER: hahahaha one of my friend's status message is "Now I have Toyota Corolla. Just like everybody else."
MEGAN: I mean, I've had it 8 years in December and it's got like 65,000 miles on it, and that includes trips home and all the driving I used to do for work.
SPENCER: I had to stop in a magazine shop to buy an offensive magazine to get offended at in public and all they had was Monster Energy Drink.
MEGAN: I've just bought fashion magazines to do something with later when I have a scanner, but there's one in which the nipples are airbrushed out just like in America! Anyway, we should probably also talk about the whole Pat Tillman investigation that's going nowhere fast, if only to get to the following quote which I found horrible.
MEGAN: The fuck? And now he's head of the AP's Washington Bureau? I guess it just goes to show you can have political opinions and still get to the top of your profession as a journalist or something like that. Maybe as long as they're Republican.
SPENCER: ok, I saw my old boss flag this, but honestly, BFD. Fournier wrote a source-greasing email that didn't say anything particularly offensive. Reporters do this all the time — Rove would call it "strategery"
MEGAN: I just meant the creepy religio-patriotism about it skeeves me. But I'll trust you on that and defend you when your emails come out in 6 years or something for sure.
SPENCER: As to Fournier's political leanings, I remember watching Recount with you — Fournier was the guy who calls Ron Klain on election night to tell Gore not to concede, which is way more partisan than this email to Rove
MEGAN: Omg, you're so right. So he's really just a slimy suck-up like I always was as a lobbyist. Ah, the good old days.
SPENCER: or am I just part of the journalistic problem now by not being offended by it?
MEGAN: We're all part of the problem, right? Do we care to comment on Rove defending ignoring subpoenas or is it par for the course and we're done caring?
SPENCER: I'm actually trying to write a piece about shit like this for a magazine-that-shall-not-be-named, and I want to call it "The Politics of Retribution"
MEGAN: By the way, Der Spiegel's website apparently has a timer counting down to the end of the Bush Administration. And if one more person asks me who is going to win, I'm going to say something crazy like "Ralph Nader" and then laugh hysterically and start speaking in tongues. About the subpoenas thing?
SPENCER: see, Rove and the rest of them will only respect coercion and force, but Obama's candidacy/presidency is predicated on hope and all that shit
MEGAN: So they don't know how to react to people being polite to them?
SPENCER: so the piece would be about how he should use the Senate Democrats and Attorney General John Edwards to launch an onslaught of persecution aimed at uncovering the abuses of the last 8 years
MEGAN: Aw, angry Johnny! I miss him and his pretty hair.
SPENCER: like a smart strategy for Obama in Year One would be to order a mass declassification about, like, rendition, torture, the U.S. attorney firings, everything you see covered on TPM
MEGAN: Ooh, that would be awesome. And not just because maybe someone would eventually hire me to dig through all of that shit and write about it.
SPENCER: not only does that bring all of this shit out into the light, it a) distracts the press while Obama launches into his universal health care/Iraq withdrawal agenda and b) it gets the right to lawyer up and cower in fear, constraining it from blocking said agenda and there's more! Implicitly, it acts as a really satisfying fuck-you
MEGAN: But, it does make Ben Ginsburg and his skeevy lawyer ilk a shit ton of money.
SPENCER: like, "Oh, you want U.S. persons' communications deemed merely 'relevant' to 'foreign intelligence information' wiretapped under a blanket warrant? Cool! Well, Mr. Feith, every time you call Ahmed Chalabi, I'mma be on the other line"
MEGAN: Oh, Dougie Feith! It'll be like all our favorite criminals seated on big panels. It'll be the left-wing McCarthyism. We'll get our own Fred Thompson. Except he's Watergate, but you know what I mean.
SPENCER: or: "Oh, you want to be able to put a black bag over a motherfucker's head, google him, strap him up in the belly of a C-130 and drop him off into the middle of nowhere? You got it, Mr. Rumsfeld! One minute you're at your Kalorama crib complaining to Joyce about why she can't love you longtime like Midge Decter and the next you're dropped off on the side of the road in Spain, where Judge Baltasar Garzon has an indictment out for you for war crimes. Send me a postcard from the Hague!"
MEGAN: Well, hopefully you know what I mean, because I don't really, but in another interesting German story, I once worked at a language lab in college and got to hear the testimony of Bertolt Brecht before the House Committee on Un-American Activities and he wrapped all them bitches up in knots, drove out to Dulles and hopped a plane to East Berlin. Where would Feith go?
SPENCER: speaking of Feith, he's going to be testifying to a House Judiciary panel at 10 about his role in authorizing torture which is why I can't stay crappying with you much longer
MEGAN: Totally cool, are you blogging it for Windy?
MEGAN: Are they going to ask him about the Beaver memo?
SPENCER: I believe they will! Mr. Feith, how familiar are you with a certain 2002 Beaver communication...?
MEGAN: So many double entendres, so little time.
SPENCER: Congressman, I can safely say no Beaver has ever talked to me, and if one did, I would not listen.
MEGAN: Mr Feith, are you saying you have no familiarity with anything Beaver related?
SPENCER: christ this Monster shit is DISGUSTING and it's making my chest hurt
MEGAN: Um, then, I think you should stop drinking it, your $1.75 be damned.
|
|
Reader Mathue sent us this clip over the weekend, captured while he and his friends were playing some DayZ. It shows, he says, the kind of language - shrieked at the top of her lungs - that caused the cops to reportedly turn up to one of his pal's houses, fearing MURDER.
From 0:56 onwards, you can see what he means. His friend is SCREAMING stuff like "WHO ARE YOU?" and "DON'T SHOOT ME PLEASE I'M BEGGING YOU", which in the context of zombie survival game DayZ is totally understandable, but to anyone listening in from outside, would sound like bloody murder was taking place.
Concerned neighbours raised the alarm fearing the worst, and it was only when three cops turned up to see if everything was OK that the (embarrassing) truth was revealed.
How to Get the Cops Called on You During DayZ [Normal Difficulty]
|
|
Let's Make Robots!
Wii IR camera as standalone sensor
Using the Wii Remote IR camera directly with an Arduino
Wii-IR-Camera-schem.pdf (11.63 KB)
Wii-IR-Camera-board.pdf (11.3 KB)
wii_remote_ir_sensor_sample.pde (2.5 KB)
The Wii Remote has become a very interesting tool for hacking and for uses it was never intended for. After the first hacks appeared on the internet, a lot of people started doing great stuff with it.
This tip & walkthrough is about using the IR camera from the Wii Remote as a standalone sensor. It is based on a hack by a Japanese hacker named Kako. There is also a Make article on the subject.
This sensor is great for tracking infrared sources. It can track up to 4 sources independently and report the coordinates and strength of each tracked object. The IR camera has an I2C interface which can easily be accessed by a microcontroller. Here an Arduino board has been used.
Wii Remote disassembling:
To get the IR camera out of the Wii Remote, the Wiimote must be disassembled. A Tri-Wing screwdriver was used for this task. The IR camera is on the front of the board. A hot air gun is useful for getting the IR sensor off the board.
This walkthrough only works with an original Wii Remote. There are some Wii Remote clones which are cheaper than the original, but they have different sensors with unknown pinouts, so be warned!
The schematic differs slightly from Kako's approach; it has been taken from the CC2 ATM18 project. A quartz oscillator is used; any frequency between 20 and 25 MHz will work. Unfortunately the sensor is a 3.3V device, so some level conversion must be done before connecting it to a 5V Arduino board. The sensor gets its supply voltage through 2 diodes in series from the Arduino board's 5V, which gives roughly 3.6V. Two pull-up resistors on the I2C pins limit the bus voltage to the same 3.6V.
A schematic and a board layout are attached to this article.
• Wii Remote IR Camera (from an original Wii Remote, not a clone!!)
• 24 MHz quartz oscillator (or 25 MHz, but not a resonator!)
• 2x diode 1N4148 or equivalent
• 2x electrolytic capacitor 10uF
• 1x ceramic capacitor 100nF
• 2x resistor 2.2kOhm
• 1x resistor 22kOhm
• perf board 60 x 25 mm
• pin bar 1x4
• pin bar 2x4
• bar jack 2x4
The Arduino control software is also based on Kako's sources. It simply initialises the IR camera sensor and sends the blob information it reads to a PC. The source code has been slightly modified to work with the PC software.
The PC software is also taken from the CC2 ATM18 project and can be downloaded here.
An Arduino sketch is attached to this article. At the moment I am working on a Processing sketch for graphical representation of the Wii IR camera output.
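For reference, the blob report the sketch forwards can be unpacked on either end of the serial link. This sketch assumes the camera runs in the commonly documented "extended" output mode, where each tracked blob occupies three bytes: the low eight bits of X, the low eight bits of Y, and a byte packing the two high bits of each coordinate together with a 4-bit size. The type and function names are mine, and the format itself is an assumption carried over from Wiimote reverse-engineering notes, not something verified against this particular sensor:

```c
#include <stdint.h>

/* One tracked IR blob (assumed extended-mode format:
 * 10-bit X in 0..1023, 10-bit Y in 0..767, 4-bit size). */
typedef struct {
    uint16_t x;
    uint16_t y;
    uint8_t  size;
} ir_blob;

/* Decode one 3-byte extended-mode record:
 *   b[0] = X[7:0]
 *   b[1] = Y[7:0]
 *   b[2] = Y[9:8]<<6 | X[9:8]<<4 | size[3:0]   */
static ir_blob decode_blob(const uint8_t b[3])
{
    ir_blob blob;
    blob.x    = (uint16_t)(b[0] | (((b[2] >> 4) & 0x03) << 8));
    blob.y    = (uint16_t)(b[1] | (((b[2] >> 6) & 0x03) << 8));
    blob.size = (uint8_t)(b[2] & 0x0F);
    return blob;
}
```

An empty slot is commonly reported as 0xFF 0xFF 0xFF, which decodes to x = 1023, y = 1023, size = 15 and can be treated as "nothing tracked" (again an assumption from the same reverse-engineering notes).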
To be continued...
I have tried to solder wires to the camera's pins, but it's really hard. After having managed to solder all of them, I pulled on one wire and the complete pin came out of the camera. So I would recommend attaching the camera directly to a PCB as in the article.
I had no choice but to cut the black box containing the camera. You can see the pictures in the official Arduino forum. The sensor has eight copper lines with a 0.875mm pitch (SMD standard?). So it appears even harder to solder...
I'd be interested to use this setup to track the position of a hand in three dimensions.
However I'd like to have an idea of the sampling rate of the data. Does anyone know it?
Also, does the sensing work well at very short range (0 to 50 cm)?
Thank you in advance.
The sample rate should be 200Hz when using I2C. Here is a video about this:
You will need a glove with IR LEDs or reflective material on your fingers and an IR LED beamer. Johnny Lee has done this. The video shows how it works:
Never tested it by myself but would like to hear more about it.
200 Hz? Great! In the meantime I've read somewhere that it was only 100 Hz, so it's even better!
Yes I saw Lee's video. This might be really interesting. One question: in his setup, is there any technical reason to put all the IR LEDs so close to the camera? Wouldn't it be better to spread them over a bigger surface?
Don't know if 200Hz is the real sample rate; maybe it's only the maximum sample rate for reading the sensor over I2C.
A single IR point source would be best, I think. Spreading the LEDs over a bigger surface will give you different reflections for each finger.
I managed to extract the camera from the Wiimote and have almost finished soldering it to new wires. But before trying to build the circuit I'd like to ask additional questions:
1. Is the 25 MHz crystal necessary? Couldn't this signal be generated by the Arduino board (I am using an Uno)?
2. As the Uno has a 3.3 V output, is the voltage conversion from 5 V necessary or not?
I think the AVR is limited to a clock signal of fclk/2, so on the Arduino that'd be 8 MHz. The mbed I used runs at 96 MHz, so 25 MHz is no big deal. Easy enough to just use a separate 25 MHz crystal/oscillator/clock thingy.
That seems logical. Thanks.
What about the voltage? Should I use the 5 V output and additional components, or the 3.3 V output?
This is the first time I've used I2C communication and I'm still not good at programming.
I'm using an ATmega8 minimum system as the processor and CodeVisionAVR as the compiler, and this is my project:
The program is meant to show the output from the IR camera on an LCD, but it looks like the program is not complete and I don't know how to fix it. I've tried to use the ATM18 project, but it's very hard to find an ATmega88 in Jakarta (Indonesia) and I need to finish this project before December. Can anyone fix my program? Thanks in advance.
|
|
Image Archive
Detailed Record
Title: "International One Design 2, 1949"
Accession Number: 1984.187.125072F
Type: safety negative
Maker: Rosenfeld and Sons
Date: 1949-09-09
Description: 4x5 safety negative photographed by Rosenfeld and Sons on September 9, 1949. Image of 33' Bjarne Aas International One Design sloop ARGYLEE (built 1937 in Fredrikstad, Norway). Visible in image: port quarter view of ARGYLEE (IC/2) "MBYC" on port broad reach heeled over rail down under marconi-rigged mainsail, crew member draped over port rail to stabilize boat, whitecaps on the water, another sloop and land in background. CREDIT LINE: Mystic Seaport, Rosenfeld Collection. For more information see: SLEEK, text by John Rousmaniere, p. 101 and 118. Handwritten on negative sleeve: “IC 2 Argylee / 9/9/49" and stamped: "125072F / 5083".
Mystic Seaport Image ID m411075 Information regarding reproductions
|
|
Best Free System Restore Tool: Clonezilla
When it comes to creating a perfect copy of a system disk for future restoration, Lifehacker readers love the open source and versatile Clonezilla. It can't do real-time mirroring like the second and third-place winners DriveImageXML and Macrium Reflect Free, but it's powerful, versatile, and can easily grow with you as your disk imaging and backup needs expand.
For more information about Clonezilla and the other Hive Five candidates, check out last week's Hive Five: Five Best Free System Restore Tools and dive into the Call for Contenders to see other tools your fellow readers use.
|
|
Improve Your Fitness Through Group Training
As the saying goes, there is strength in numbers. Turns out this crowd mentality is especially helpful if you're trying to meet a fitness goal.
Photo by BL1961.
The New York Times relays the story of runner Dathan Ritzenhein, who'd found himself in a fitness slump. After trying to re-energize himself through various other training methods, Dathan decided to seek out and train with fast runners.
Feeding off the energy of the other runners helped him step up and train harder than he did running solo. The article argues that "the right workout companions...can make all the difference," especially because solo runners and other workout enthusiasts who go it alone may underestimate their own ability. Having someone alongside you during your workout routine can help prevent this from happening.
A final benefit of group training: You're held more accountable in a group than you are training on your own. If you run or hit the gym in groups, has it improved your performance? Let us know in the comments.
|
|
Build a Collapsible Workbench for Easy Storage Anywhere
The bench is made from wood and cut in a way that it can click together quickly. When it's not in use, everything breaks down and fits inside the top. The total cost is only about $34 for the materials, and schoonovermr's guide is incredibly easy to follow. If you're hurting for space but still want a nice workbench, head over to Instructables for the full guide.
Collapsible Workbench | Instructables
|
|
Re: HTTP and half close (WAS: HTTP client abort detection)
From: Carl Kugler <[email protected]>
Date: Wed, 14 Mar 2001 08:57:52 -0700
To: Miles Sabin <[email protected]>
Cc: [email protected]
Message-ID: <OF99DF2538.169E640E-ON87256A0F.00573C13@LocalDomain>
There is a long-expired Internet-Draft, "HTTP Connection Management"
that has some good discussion of related issues.
The HTTP/1.1 Proposed Standard [HTTP/1.1] specification is silent on
[...] experience was desired before the specification was frozen on this
[...] of these two extremes deal with "connection management" in any
workable sense.
Miles Sabin <[email protected]> on 03/14/2001 07:35:00 AM
To: [email protected]
Subject: HTTP and half close (WAS: HTTP client abort detection)
Scott Lawrence wrote,
> Miles Sabin wrote:
> > as a full client close ...
> it open for persistence, perhaps suffering the time-wait
> doesn't).
Thinking about this some more, I'm coming to the conclusion that
there's a genuine and problematic gap in RFC 2616. Just to make
sure there's no misunderstanding here, I want to emphasize that
I'm quite well aware of the benefits for servers of clients
sending the first FIN.
As far as I can make out there's nothing in RFC 2616 which
unambiguously rules out any of the following possible client and
server implementations,
1. Clients which send a FIN early ... ie. by doing a half close
as soon as their last request is sent, but possibly before
the corresponding response has been received.
2. Clients which don't (half or full) close until all pending
responses have been received.
A. Servers which treat an early client FIN only as a half close
and continue processing and sending pending responses.
B. Servers which treat an early client FIN as an indicator of a
client abort, and hence abandon processing and sending pending
responses.
The problem is that clients of type 1 aren't fully interoperable
with servers of type B ... if a type 1 client does an eager
half close, a type B server would abandon its response.
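Concretely, the "type 1" behaviour can be sketched as below. This is a minimal illustration (host, port, and path are whatever the caller supplies), not a recommendation: the client sends its request, immediately half-closes the write side — putting its FIN on the wire — and then keeps reading. Against a type A server this works; a type B server would abort the response.

```python
import socket

# Sketch of a "type 1" client: send the request, then half-close the
# write side at once (our FIN goes out) while the read side stays open.
def type1_get(host, port, path="/"):
    sock = socket.create_connection((host, port))
    request = ("GET %s HTTP/1.1\r\n"
               "Host: %s\r\n"
               "Connection: close\r\n\r\n" % (path, host))
    sock.sendall(request.encode("ascii"))
    sock.shutdown(socket.SHUT_WR)      # eager half close: FIN before response
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:                   # server closed (or aborted) its side
            break
        chunks.append(data)
    sock.close()
    return b"".join(chunks)
```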
So at least one of these two should be ruled out by RFC 2616.
Unfortunately I don't see that either unambiguously is. And if
neither are ruled out by the spec, then interoperability
considerations mean that it's not safe for client or server
implementors to adopt either. In real-world terms, I'm pretty
sure that most existing user-agents (nb. I'm not talking about
clients generally) are of type 2, and most servers are of type
Here are some pro and con considerations wrt type 1 clients and
type B servers.
* The pragmatic good citizenship argument that a client should
send the first FIN, and sending it immediately after its
request has been sent, before any or all of the response has
been received, makes that likely.
On the other hand, this isn't the only way for a client to send
the first FIN. In scenarios which allow a client to half close
eagerly it should also be possible for it to send Connection:
close, in which case the server would know that the connection
is terminating. Hence the server, assuming it sends a Content-
Length or chunks, could quite happily defer closing after
sending the response, on the assumption that the client will
close soon and that it can close its own end when that happens.
On balance, I don't think any conclusion can be drawn from the
above. Early half close helps servers, but there are other
ways of getting the same effect.
* A similar pragmatic argument that if servers are allowed to
treat early client FINs as client aborts, then they might be
able to avoid expensive response processing.
In the particular case of a proxy server, such a FIN might be
detected during proxy-side DNS resolution, but before the
initiation of a connect to an origin server. If the proxy
isn't going to cache the now unwanted response (either because
it's not a caching proxy, or because it's confident the
response will be uncacheable) then it would make sense not to
attempt to forward the request to the origin server at all.
* From 8.1.4 Practical Considerations,
Servers SHOULD always respond to at least one request per
connection, if at all possible. Servers SHOULD NOT close a
connection in the middle of transmitting a response, unless a
network or client failure is suspected.
So long as we read 'middle' of a response liberally and allow
it to cover any point in time between the servers receipt of
the request up to the completion of the sending of the
response, then this seems to support type 1 clients against
type B servers.
Nevertheless, it's very hard to see the practical difference
between a 'network or client failure' and, say, the behaviour
of a user agent when a user hits the stop button because the
server hasn't managed to deliver a response sufficiently
quickly.
* If we aren't so liberal with the reading of 'middle' in the
above quoted para, then we have,
A client, server, or proxy MAY close the transport connection
at any time. For example, a client might have started to send
a new request at the same time that the server has decided to
close the "idle" connection. From the server's point of view,
the connection is being closed while it was idle, but from
the client's point of view, a request is in progress.
This suggests that a server which detects an early client FIN
would be within its rights in exploiting the persistent
connection close race condition to avoid processing or sending
any response at all: it's free to close the connection at any
time, and it hasn't yet started to send its response. The only
difference between this scenario and the typical cases is the
exact location of the request message ... on the wire, in the
TCP receive buffers, or in a server buffer.
* If type 1 clients are legitimate, this parenthetical comment in
4.4 Message Length becomes quite baffling,
the transfer-length of that body is determined by one of the
following (in order of precedence):
5. By the server closing the connection. (Closing the
connection cannot be used to indicate the end of a request
body, since that would leave no possibility for the server
to send back a response.)
because if an early half-close were legitimate, it would
provide a perfectly respectable mechanism for delimiting a
request body.
None of the above leaves me with any particularly clear idea of
what best practice might be. Unless I can be persuaded otherwise
I'm obliged to be cautious from an interoperability perspective
and assume,
* That a type 1 client implementation is inadvisable, because
it wouldn't reliably interoperate with type B servers.
* That a type B server implementation is inadvisable, because
it wouldn't reliably interoperate with type 1 clients.
Can anyone shed any more light on this?
Miles Sabin InterX
Internet Systems Architect 5/6 Glenthorne Mews
[email protected] http://www.interx.com/
Received on Wednesday, 14 March 2001 15:59:21 UTC
Re: I'm confused (Re: Poll for preferred API alternative)
From: Justin Uberti <[email protected]>
Date: Wed, 29 Aug 2012 15:04:05 -0700
Message-ID: <CAOJ7v-3uqp6xiStZ7As2wFAhifSoFqoLwV5_DZ4-WngpoRsGfw@mail.gmail.com>
To: Roman Shpount <[email protected]>
On Wed, Aug 29, 2012 at 9:41 AM, Roman Shpount <[email protected]> wrote:
> On Wed, Aug 29, 2012 at 12:09 PM, Timothy B. Terriberry <
> [email protected]> wrote:
>> Roman Shpount wrote:
>>> PeerConnection API was created to simplify interoperability, but ended
>>> up as something that will not work with SIP (modified offer/answer, a
>>> large number of SDP extensions that are not supported by anything
>>> currently existing, no support for forking) or Jingle (Jingle was not
>> This seems like a somewhat nonsensical argument. "It's possible to make
>> JSEP API calls that do things outside of 3264 O/A, therefore we should
>> throw that away all 3264 semantics and use something that has no relation
>> to it at all"?
> My main issue with offer/answer and SDP is that we are trying to use them
> to implement an API, when those things were clearly designed for a network
> protocol. A lot of things that normally fall under the real time media
> stack API cannot be expressed via offer/answer, so we add things like
> hints. The API needs to inform the signaling application about
> its capabilities (ie codecs supported, media types, security methods),
> allow the signaling stack to select its preferences (for instance listen
> only audio call with Opus codec preferred), and make the media
> stack generate an offer. When an answer is received, signaling stack needs
> to examine this answer, modify it according to its preferences and pass it
> on to the underlying stack. It then needs to be able to generate new
> offers, or to provide the modified answer to a previous offer. Doing this
> by manipulating SDP and by trying to fit this into offer/answer
> is cumbersome and of no real benefit.
> First of all, SDP has a clear internal structure (ie SDP lines with a very
> few known types of well defined formats). API should be dealing with this
> data packaged in the structures, not in a string blob. It is trivial to
> define a mapping from SDP to a session description object.
We currently have a SessionDescription object in the API, which can convert
to/from SDP. One can certainly imagine having specific accessors on this
object so that the description can be changed without having to do
operations on a string blob.
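As a rough illustration of that idea (all names here are mine, not any proposed API): SDP's line-oriented `<type>=<value>` syntax maps directly onto a structured object, with everything before the first m= line being session-level and each m= line opening a new media section.

```python
# Illustrative only: real SDP (RFC 4566) defines per-type value grammars
# that this sketch does not model.
def parse_sdp(blob):
    session, media = [], []
    current = session                  # lines before the first m= are session-level
    for line in blob.strip().splitlines():
        typ, _, value = line.partition("=")
        if typ == "m":                 # each m= line opens a new media section
            media.append([("m", value)])
            current = media[-1]
        else:
            current.append((typ, value))
    return {"session": session, "media": media}
```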
> Offer/answer would be better implemented by decomposing it into a set of
> lower level operations, such as creating data connections, selecting
> encryption to be used over this data connection, and then selecting media
> to be transmitted over this media connection independently.
> To summarize, from my point of view, it is not that I want to throw away
> offer/answer. I just think it is inappropriate as a model for an API. I
> would argue that building WebRTC API on top of offer/answer will make
> application harder to implement and control, and in the end of the day,
> will make interop with offer/answer harder as well.
> SIP and Jingle aren't the same things. One API that can handle both is by
>> necessity going to be able to do some things they might not be able to do
>> on their own. It's still much easier to build an application that
>> interoperates with SIP if everything works _mostly_ the same. Given that
>> the maintainer of libjingle is the primary author and proponent of the JSEP
>> API, I'm also pretty sure that API will handle what Jingle needs to do just
>> fine.
> I am pretty sure current API cannot be mapped to Jingle without some
> extensions or additional work. Things being mostly the same does not make it
> easier to build interoperable solutions. It really depends on the exact
> difference. Current WebRTC proposal will only work with other WebRTC
> implementations or some sort of media and signaling translation device.
As Tim mentions... I'm pretty sure it can be mapped just fine.
> As for extensions... if you're referring to things like BUNDLE and msid,
>> these are something the SDP for a SIP-compatible application is going to
>> have to deal with, regardless of what the W3C API looks like. They're also
>> squarely under the purview of the IETF.
> These extensions are under IETF purview, but the API that controls their
> use is under W3C purview. I think, for interop it would be better to
> disable some of those features via the API instead of relying on
> offer/answer and SDP to negotiate that these things are not supported by
> the remote party.
Well, you can do this - strip anything you don't want out of the local
description, then you don't even have to negotiate it off via offer/answer.
>> And finally for forking: Mozilla would like to support it, and we've
>> discussed options for cloning PeerConnection objects that would allow you
>> to do it. Someone does need to do the real work of writing up a proposal
>> for that API and defining what the semantics are, but that would be equally
>> true of any other solution that would allow forking. There are lots of
>> issues like this, for which real work needs to be done, but all this time
>> spent debating whether or not we should throw away all the work we have
>> done and start over (again) is just keeping people from actually working on
>> them.
> I would be more than happy to help design or implement this within
> the current API or an alternative that will be proposed.
> _____________
> Roman Shpount
Received on Wednesday, 29 August 2012 22:04:53 UTC
Re: storing EARL in annotea
From: Nadia Heninger <[email protected]>
Date: Fri, 01 Mar 2002 16:31:01 +0100
Message-Id: <>
To: Libby Miller <[email protected]>, Jim Ley <[email protected]>
Cc: w3c-wai-er-ig <[email protected]>, www-annotation <[email protected]>
>Jim, I had a quick chat with ericP about this, and he says that you need
>to connect the sub query type things with each other with the variable
>names (correct me if I've got this wrong eric)
We're working it out on IRC now. I think I understand now how to structure
a query, but I'll need to do some exploring. I'm not sure I entirely
understand what's going on with what I get returned, though.
'((http://www.w3.org/1999/02/22-rdf-syntax-ns#type ?a
) :collect '(?a))
r:resource="http://iggy.w3.org/annotations/attribution/1014886708.379036" />
along with a bunch of empty earl:Person tags.
I'm assuming all the empty Person tags come from every report that matched,
but why is there only one with the annotea Attribution? It's always the
same attribution, too. Eric told me that that one attribution would point
to a combination of the reports that matched, but it doesn't seem to.
>> > A few warnings from my experience playing with annotea:
>> >
>> > 1. The documentation at
>> http://www.w3.org/2001/Annotea/User/Protocol.html
>> > is wrong. Annotest gives errors if you try to just post straight XML
>> to
>> > it.
>> I have two clients which both post straight XML without problems, the
>> above page is wrong, but only in that it omits the <?xml ... > with that
>> included it works fine for me - what problems do you have?
Ahh... I had left that off. It works fine now with straight XML, thank you.
>> > Look at the source for
>> > http://annotest.w3.org/annotations?explain=false for examples of how to
>> > submit things - for example, to submit an annotation you have post your
>> > content as w3c_annotate=<url-encoded rdf>. Same idea for algae
>> queries.
>> I definitely don't like this for submission - URL length is getting long,
>> I don't trust proxies even if I can trust annotest to be okay with it.
I'm still using POST. The only difference is the URL-encoding. So
apparently the script accepts both methods. I'm glad to not have to
url-encode now.
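For what it's worth, the two submission styles being compared differ only in how the RDF body is packaged. A sketch (the helper name is mine; real use would POST the returned body to the annotation server):

```python
import urllib.parse

# Package an annotation body either as raw XML (which must start with the
# <?xml ...?> declaration, per the discussion above) or in the form-encoded
# w3c_annotate=<url-encoded rdf> style.
def annotate_request(rdf, form_encoded=False):
    if form_encoded:
        body = "w3c_annotate=" + urllib.parse.quote(rdf)
        content_type = "application/x-www-form-urlencoded"
    else:
        body = rdf
        content_type = "application/xml"
    return content_type, body
```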
>> Also I believe it's important we agree on a namespace to use, algae
>> queries really need to know what namespace you are using and it would be
>> nice if we could all use the same one.
Yes... unfortunately, EARL looks like it's going to continue to change. Is
it time to decide on a 1.0 now?
Received on Friday, 1 March 2002 10:30:42 UTC
RE: use of alt attributes in decorative images
From: Bailey, Bruce <[email protected]>
Date: Tue, 6 Feb 2001 14:15:47 -0500
Message-ID: <[email protected]>
To: "'Kynn Bartlett'" <[email protected]>, [email protected]
Dear Kynn,
My sincere apologies for upsetting you. I really didn't mean to insult you,
and my "depending on the weather" cut was uncalled for. As you wrote,
perhaps you are the paragon of philosophical consistency -- and I am just
too dull to appreciate it.
I don't agree with promoting techniques (e.g., using TITLE attributes
superfluously) that interferes with the accessibility of current products
(e.g., JFW), even if future products (e.g., Edapta) will remediate such
coding. The fact that such a practice is (technically) syntactically valid,
hardly mitigates against the current usability issue.
Yep, titles are metadata. Yep, GUI browsers tend to render them as ToolTips
(what I was calling mouse-over pop-ups). Yep, tool tips can be useful
(which is why I cited their use in links, abbr and acronym). Those points
we agree on. The disagreement is that you are willing to promote (currently
useless -- the search engines don't even process them) metadata when we know
that IN ACTUAL PRACTICE this sacrifices accessibility?!? Forgive me, but I
don't believe the pre-Edapta Kynn would have suggested such a thing! You
didn't answer my hard question: Given the three (!) reasonable choices,
what is an algorithm for deciding what the ToolTips text content should be
for the following example?
<a href="foo.html" title="Big World of Foo."><img src="foo.png" title="Foo
It doesn't get much easier when one has to decide the reasonable behavior of
a screen reader!
JAWS, when faced with this vague situation, favors TITLE content over ALT.
This choice is arbitrary, but not entirely illogical.
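To make the ambiguity concrete, here is one possible precedence rule sketched in code — the image-level TITLE-over-ALT ordering reported for JAWS, falling back to the link's TITLE. The point is not that this rule is right, but that some arbitrary rule has to be chosen (the values in the test are hypothetical):

```python
def tooltip_text(link_title, img_title, img_alt):
    # One possible precedence: image TITLE, then image ALT, then the
    # enclosing link's TITLE. Other orderings are equally defensible,
    # which is exactly the ambiguity being complained about.
    for candidate in (img_title, img_alt, link_title):
        if candidate:
            return candidate
    return None
```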
The simple solution to this quandary is for authors to skip the TITLE
attribute -- unless one has some clear expectations for browser behavior.
Don't expect the browser (screen reader or not) to make the right choice.
As a content provider, don't expect the machine browser to read your mind!
Avoid this ambiguity.
-- Bruce
Received on Tuesday, 6 February 2001 14:16:08 UTC
RE: Image Galleries, Alt vs caption.
From: Charles McCathieNevile <[email protected]>
Date: Thu, 30 Dec 2004 04:15:16 -0600 (CST)
Message-ID: <61216.>
To: "B.K. DeLong" <[email protected]>
Cc: [email protected]
Hi BK,
I did a most-of-WCAG 1 review (I got bored eventually) via Hera - you can
look at it and scribble on it yourself at http://www.sidar.org/hera if you
give the URI as below and your name as chaals. Anyway, I have attached the
HTML and EARL versions of the report for your perusal...
Charles McCathieNevile [email protected]
<quote who="B.K. DeLong">
> Could I get some thoughts and
> comments about the accessibility of this gallery?
> http://bkdelong.mit.edu/browsertests/imagegallery/
Received on Thursday, 30 December 2004 10:16:10 UTC
Re: Event handlers in xforms:bind
From: Allan Beaufour <[email protected]>
Date: Wed, 26 Apr 2006 13:16:04 +0200
Message-ID: <[email protected]>
To: "Erik Bruchez" <[email protected]>
Cc: "Xforms W3C WG" <[email protected]>
On 4/25/06, Erik Bruchez <[email protected]> wrote:
> It is my understanding that it is not possible to attach event
> handlers to xforms:bind (or rather, this wouldn't do anything since no
> XForms event targets xforms:bind).
That's also my interpretation.
> But it would be very useful, in particular, to be able to detect value
> changes from within the XForms model. For example:
> <xforms:bind nodeset="first-name">
> <xforms:action ev:event="xforms-value-changed"/>
> ...
> Currently, xforms-value-changed is only defined for controls, and upon
> refresh. The same of course goes for all the MIP changes. But you get
> the general idea.
I remember this being discussed at a f2f. Unfortunately I cannot
remember either pros or cons. Maybe somebody else can?
> Right now, the workaround involves using something like a dummy
> control to detect that kind of changes on nodes that do not have
> bound controls.
> This would be very much in line with the backplane idea and the
> further abstraction of model from controls in XForms.
> Does something like this make sense to anybody else?
Indeed, as the model should contain the backplane. This would make it
possible to push more "logic" up in the model. But there might be an
obvious reason for not doing this (which I do not know, or have
forgotten :) ).
> Do implementors address this issue?
Well, it's not an issue, it's just the spec. ;-) But no, it's not
possible in Firefox.
... Allan
Received on Wednesday, 26 April 2006 11:22:37 UTC
Solution for possible crash if using SSL
From: Kallweit, Heiner <[email protected]>
Date: Wed, 26 Jul 2000 14:09:02 +0200
Message-ID: <01EBDB2D6DFDD211A98200805FF50CF901188D64@SV002871>
I ran into serious trouble when using a CGI program. To make a long
story short, the call chain was: HTHost_forceFlush -> HTTPEvent_Flush ->
... -> HTSSLWriter_write -> HTSSLReader_read -> HTHost_forceFlush.
If your machine is fast enough (mine was) this can cause a stack overflow.
I stopped the recursion by changing HTHost_forceFlush to the following:
PUBLIC int HTHost_forceFlush (HTHost * host)
{
    static BOOL in_flush = NO;
    HTNet * targetNet = (HTNet *) HTList_lastObject(host->pipeline);
    int ret;
    if (in_flush || targetNet == NULL) return HT_ERROR;
    HTTRACE(CORE_TRACE, "Host Event.. FLUSH passed to `%s\'\n" _
            HTAnchor_physical(HTRequest_anchor(HTNet_request(targetNet))));
    host->forceWriteFlush = in_flush = YES;
    while ((ret = (*targetNet->event.cbf)(HTChannel_socket(host->channel),
                   targetNet->event.param, HTEvent_FLUSH)) == HT_WOULD_BLOCK);
    host->forceWriteFlush = in_flush = NO;
    return ret;
}
Until now I found no side effect.
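The fix is an instance of a general re-entrancy guard: a flag set for the duration of the call makes any recursive re-entry bail out instead of recursing until the stack overflows. The same pattern in a language-neutral sketch (Python here; the names are illustrative, not part of libwww):

```python
def make_guarded(fn, sentinel):
    # Wrap fn so that a recursive re-entry returns `sentinel` immediately,
    # mirroring the in_flush/HT_ERROR guard in the patch above.
    in_progress = False
    def guarded(*args):
        nonlocal in_progress
        if in_progress:
            return sentinel
        in_progress = True
        try:
            return fn(*args)
        finally:
            in_progress = False
    return guarded
```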
Regards, Heiner
Received on Wednesday, 26 July 2000 08:09:44 UTC
Re: W3C RDF Validator question
From: Marjolein Katsma <[email protected]>
Date: Mon, 20 Jan 2003 21:46:54 +0100
Message-Id: <>
To: "Emmanuel Pietriga epietriga-at-yahoo.fr |rdf/1.0-Allow|" <[email protected]>
Cc: [email protected]
Thanks, that's great - I'll be looking forward to your announcement on this list (at least that's where I assume it will be announced)!
At 15:30 2003-01-20, Emmanuel Pietriga wrote:
>the W3C RDF Validator should soon be updated. I am currently updating it to ARP 2, but I need to run some tests before making it public.
>Emmanuel Pietriga ([email protected]) | MIT - Laboratory for Computer Science
>World Wide Web Consortium (W3C) | Room NE43-344
>tel: +1 617.253.5327 | 200 Technology Square
>fax: +1 617.258.5999 | Cambridge, MA 02139
>Jeremy Carroll wrote:
>>The underlying engine is ARP.
>>parseType="Collection" was supported in the ARP released with Jena 1.6.0,
>>and the latest ARP also supports rdf:datatype.
>>I would suggest downloading the latest ARP from
>>and running it locally while the W3C catches up.
And thanks, Jeremy!
>>>Does anyone have any idea when the validator will be updated to
>>>support everything in the revised RDF/XML syntax (8 November
>>>2002) as described in http://www.w3.org/TR/rdf-syntax-grammar
>>>(which is what I'm using)? Or will this need to reach
>>>recommendation status before it will be implemented in the validator?
>>>Marjolein Katsma
>>>HomeSite Help - http://hshelp.com/
>>>Java Woman - http://javawoman.com
Marjolein Katsma
HomeSite Help - http://hshelp.com/
Java Woman - http://javawoman.com
Received on Monday, 20 January 2003 15:47:07 UTC
From: Boris Zbarsky <[email protected]>
Date: Sun, 09 May 2010 08:42:42 -0400
Message-ID: <[email protected]>
On 5/9/10 3:27 AM, Tab Atkins Jr. wrote:
>> Is this really a good use of time, though? Is this more important than
>> other parts of CSS2.1 that need spec and implementation work and are at risk
>> (run-in, the rest of the anonymous table stuff, etc)?
> The effort of changing the spec to match my expectation is here is
> very little. Certainly other stuff needs attention, but changing
> abspos elements from "leave a placeholder" to "don't leave a
> placeholder" is pretty small in terms of the table-cell creation algo.
If you're going to take the easy way out and leave auto-offset behavior
completely undefined (even more so than "normal", note), then yes. I
personally would object to the WG doing that.
> In terms of author expectations, the expectations of this author are
> that an abspos element leaves the same trace behind it as a
> display:none element, since that's how it appears to work in every
> other context.
Hmm. OK, fair.
> Some quick testing shows that instead, setting float appears to make
> the element ignore its display:table-cell value
Yes, see CSS2.1 section 9.7.
> and thus get itself wrapped in an anonymous table-cell. Is that what actually happens in
> the layout engine?
> I acknowledge that it may not be a realistic change, given the current
> interop. But it's one that leads to a more intuitive model, and I'd
> like to pursue the possibility at least somewhat.
OK, but then you actually need to spec the behavior for this possibility
instead of leaving it completely undefined.
Received on Sunday, 9 May 2010 12:43:17 UTC
From: Dean Jackson <[email protected]>
Date: Thu, 14 Nov 2002 01:07:38 +1100
To: "Harmon S. Nine" <[email protected]>
Cc: [email protected]
Message-ID: <[email protected]>
On Fri, 20 Sep 2002, Harmon S. Nine wrote:
> Could the SVG standard include a value of "negative"or "complement" for
> the clip-rule attribute? Or perhaps allow something like this to be
> specified when the clipPath is being constructed?
> This "negative" or "complement" specification would take a clipPath and
> perform a bitwise complement on it, thus, for instance, changing a
> clipPath that consists of a filled-circle into a rectangle the size of
> the viewport with a circular hole in it.
> Currently, the only way to obtain such a clipPath is to use a
> complicated path-element that consists of a rectangle the size of the
> viewport with other shapes inside the rectangle as part of this same
> path. A clip-rule of "oddeven" will then give the desired result. This
> seems overly complicated. The method described in the above paragraph
> would be much easier.
Hi Harmon,
Apologies for the late reply.
The Working Group had a look at your request recently and agree
that it would be a useful feature.
Expect to hear back from us sometime in the SVG 1.2 timeframe
(it's too late for SVG 1.1, and we aren't doing new features there).
Received on Wednesday, 13 November 2002 09:07:54 UTC
From: MURATA Makoto <[email protected]>
Date: Thu, 30 Oct 2003 01:24:57 +0900
To: [email protected]
Cc: Murata <[email protected]>
Message-Id: <[email protected]>
Here is a rough sketch. Having presented this sketch, I ask the TAG to
reconsider its decision to publish an I-D that updates RFC 3023. It
would be nice if somebody from W3C (probably some member of I18N WG or
XML Core WG?) can help me. I think that further discussion about the
content of this I-D should be moved to the IETF-XML-MIME ML.
By the way, I cannot find image/svg+xml in the IANA list and cannot find an I-D.
I find an I-D for application/rdf+xml, but no RFC yet.
1) deprecate text/xml, text/xml-external-parsed-entity, and text/*+xml
- the MIME canonical form with short lines delimited by CR-LF, making
UTF-16 and UTF-32 impossible
- Casual users will be embarrassed if XML is displayed as text, while
experts can certainly save and then browse XML documents.
- Worries that the absence of the charset parameter of
text/xml and text/*+xml is particularly harmful, since the
default of that parameter is US-ASCII
2) the optional charset parameter is RECOMMENDED if and
only if the value is guaranteed to be correct
- Server implementers or Server Managers SHOULD NOT specify the
default value of the charset
parameter of text/xml, application/xml,
application/xml-external-parsed-entity, */*+xml, or
application/xml-dtd, unless they can guarantee that
that default value is correct for all MIME entities of these media
types.
3) Fragment identifier
At present, RFC 3023 says:
As of today, no established specifications define identifiers
for XML media types. However, a working draft published by
W3C, namely "XML Pointer Language (XPointer)", attempts to
define fragment identifiers for text/xml and
application/xml. The current specification for XPointer is
available at http://www.w3.org/TR/xptr.
We have XPointer recommendations but are not ready to bless
XPointer. We should say so.
4) Possible reasons for not providing the charset parameter for specialized
media types
I think that "This media type is utf-8 only and thus does not need any
mechanism to identify the charset" is a perfectly good reason, since
"UTF-8 only" is a generic principle. This should be mentioned in the
5) Needs a real example for the +xml convention.
application/soap+xml should be mentioned in Section 8 (Examples).
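For reference, the +xml suffix test itself is mechanical — a simplified sketch (the real RFC 3023 rules also involve parameters and the text/* caveats discussed above):

```python
def is_xml_media_type(media_type):
    # A media type is treated as XML if its subtype is "xml"
    # or ends in "+xml" (the suffix convention).
    subtype = media_type.partition("/")[2].split(";")[0].strip().lower()
    return subtype == "xml" or subtype.endswith("+xml")
```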
6) Update References
Reference to three XPointer recommendations without blessing them as
fragment identifiers of XML media types.
Reference to MathML Version 2 rather than MathML Version 1.1
Reference to Scalable Vector Graphics (SVG) 1.1
Although XML 1.1 is not a recommendation yet, I think that we should
mention it and say "It is very likely that XML 1.1 will refer to
this document".
7) New Appendix: Changes from RFC 3023
We need a summary of these changes
MURATA Makoto <[email protected]>
Received on Wednesday, 29 October 2003 11:28:00 UTC
Re: URL parsing and IPv6 addresses
From: Brandon Gillespie <[email protected]>
Date: Fri, 16 Aug 1996 09:19:07 -0600 (MDT)
To: www-talk <[email protected]>
Message-ID: <[email protected]>
On Thu, 15 Aug 1996, Fisher Mark wrote:
> >> Any more opinions? Speak now or forever hold your peace...
> >
> >more characters 'special'.
> If anyone has information from the IPv6 working group as to why they picked
> list?
Actually, using '.' is also backwards compatible, just change the URL
spec so that either '.' or ':' can be the port separator on an IPv4 addr,
and a '.' will also work with v6. You know a v4 addr is a quad, so the
fifth separation is going to be the port, ala:
The only problem from that point is figuring out if it's a hostname or an
address :)
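A sketch of the dot-as-port idea for the IPv4 case (illustrative only — this was never adopted; URLs kept ':' and later put IPv6 literals in brackets per RFC 2732):

```python
def split_host_port(authority):
    # A dotted quad has exactly four numeric parts, so a fifth
    # dot-separated numeric field can be read as the port.
    parts = authority.split(".")
    if len(parts) == 5 and all(p.isdigit() for p in parts):
        return ".".join(parts[:4]), int(parts[4])
    return authority, None
```

The "hostname or address" problem mentioned above shows up directly: a name like `www.example.com` has to fall through to the no-port case.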
Frankly, I don't like the comma; it's not visually correct/appealing.
-Brandon Gillespie-
Received on Friday, 16 August 1996 11:20:06 UTC
Your tutorial on Torque says:

    "Logging configuration: To have any logging messages sent to the console,
    add the following to a file named and place this in your classpath
    (putting it in your target/classes will do the trick)."

I am sure that to a Java programmer this is a no-brainer, but I am new to
this realm and have only taken a beginning course in Java. I have scoured
your site and searched the internet for references to "target/classes", and
I cannot find where this is. Can you update your tutorial so it is clear
where this directory is? In the meantime, can you tell me where to put this
file?

Thanks in advance,
Tim

Tim Ahrens
Web Developer
IMI, Inc.
408-428-9888 x328
Fax: 408-428-0715
Hi,

I'm looking to use ApacheDS as a UDDI repository. Where can I find some
documentation on the ApacheDS UDDI service?

Best Regards,
Stevens Gestin

DISCLAIMER: This email and any files transmitted with it, including replies
and forwarded copies (which may contain alterations) subsequently
transmitted from the sender, are confidential and solely for the use of the
intended recipient. The contents do not represent the opinion of the sender
except to the extent that it relates to their official business.
On Tue, May 06, 2008 at 02:21:53PM +0200, Dirk-Willem van Gulik wrote:
> What is the downside/penalty for making this a default ? Or should this
> always be an optional thing - set at ./configure time ?

The whole point of dtrace is that there really shouldn't be a penalty while
no probes are being used. I'd prefer to see it on by default where
available, but still leaving an option to explicitly disable it.

vh

Mads Toftum
--
http://soulfood.dk
Gmail for Android Now Lets You Save Attachments to Google Drive
Image: Stan Schroeder/Mashable
Other new features include an explanation as to why a message ended up in the spam folder, improved support for languages that are written right to left, and easy access (via a swipe from the left edge) to the side navigation menu.
The new features will be rolling out "over the next few days." You can download the latest version of Gmail for Android over at Google Play.
Google recently announced more than 1 billion copies of Gmail for Android have been installed, making it one of the most popular apps for Android.
I am guessing this has to do with the number of views, answers, and upvotes and the time it was posted, but is there a place where I can get a more detailed explanation of this?
Also I have seen questions asked several months ago re-appear because they have many views/upvotes.
AFAIK, it's up there for as long as it doesn't get bumped off by more recent active questions and isn't negatively downvoted for some amount of time. (There are others such as deleting, migration, etc.) A post is "active" when an answer is submitted, an edit was made or other action that usually has a revision. – Jeff Mercado Sep 21 '11 at 22:07
1 Answer
Jeff explained this in detail on the Stack Exchange blog:
Here’s how it works. Starting with a list of the last 3,000 active questions:
• drop questions containing any of your ignored tags
• drop questions scoring -4 or lower
Next, apply the following score formula to the remaining questions:
• your interesting tags: +1,500 per interesting tag, up to +2,000 total
• your top 40 scoring tags: maximum of +1,000 per tag (scaled), up to +2,000 total
• question score: +200 × score, up to +1,000 total
• total answer score: -200 × score, up to -1,000 total
• number of answers: -200 × answers, up to -1,000 total
• number of views: -15 × views, up to -1,000 total
• question last activity date: -1 × (seconds / 15)
Count it all up and take the top 90 by score.
On Metas, the threshold to drop questions (the 3rd bullet) is -8 or lower.
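The formula is mechanical enough to sketch in code. Below is a hedged, illustrative re-implementation in Python — not Stack Exchange's actual code — where the dict field names are assumptions, and the "top 40 scoring tags" term is left out because the post doesn't specify how it is scaled:

```python
def interestingness(question, user, now):
    """Illustrative scoring per the formula above (field names assumed)."""
    score = 0
    # your interesting tags: +1500 per matching tag, capped at +2000 total
    matched = set(question["tags"]) & set(user["interesting_tags"])
    score += min(1500 * len(matched), 2000)
    # question score: +200 per point, up to +1000 total
    score += min(200 * question["score"], 1000)
    # answers, answer score and views all push a question down the list
    score -= min(200 * question["total_answer_score"], 1000)
    score -= min(200 * question["num_answers"], 1000)
    score -= min(15 * question["views"], 1000)
    # last activity date: -1 point per 15 seconds of staleness
    score -= (now - question["last_activity"]) / 15
    return score
```

Note how the staleness term eventually dominates: after roughly four hours of inactivity (about 1,000 points), even a question with a full interesting-tag bonus starts sliding off the list.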
great! thanks.. – amosrivera Sep 21 '11 at 22:19
Wow, how fast does that computation run? It seems really intensive. – Adel Sep 22 '11 at 7:29
I'm enjoying bounty hunting more than I expected. I like the challenge of answering tough questions, and I am learning advanced material from the answers to questions I cannot answer myself. However, I'm getting pretty annoyed at the lack of support for bounty hunting.
On a site like apple.stackexchange.com where the focus is pretty narrow and there are currently only 2 questions with bounties, I don't need tools. But Stack Overflow is too big and diverse.
As I write this, there are 304 questions on SO that have bounties. Probably 80% I have no clue about because they are advanced questions about environments (like Android programming) where I'm far from expert. I think in the past week I've only been able to answer about 2-3% of the bounty questions. (I'm not complaining about that, just setting the stage for the feature requests.) Yes, I can favorite and ignore some tags to help pick out more likely questions from the list, but it's still a very long list. At 50 questions per page it is 7 pages.
Even worse, I cannot figure out how the list is sorted, but it seems to be sorted in some way where I'm more likely to see a question I've already looked at or even answered near the top of the list. So hunting for new questions is really tedious and not what I want to be spending my time on. (That is, I want to spend my time answering questions, not hunting for them.)
After reading the comments, I think perhaps the best solution would be for there to be a 'bounty' tag that is automatically added to posts when a bounty is offered and automatically removed when the bounty is awarded. This would even allow me to search across several sites for questions with bounties.
So I'd like some tools. Enhancements to the search feature would be a great place to start.
• Add hasbounty:1 to select only questions that have an active, uncollected bounty.
• Add some way to exclude from search results questions I've already answered.
• Add some way to exclude from search results questions I've already viewed.
It would also help to have the Bounty Questions list ("featured" tab) sortable. I'd probably sort by most recently asked first. Maybe highest bounty first. Maybe most recently edited question (where creation counts as editing) first.
Likewise, it would be helpful to have options to completely exclude from the list:
• Questions tagged with ignored tags
• Questions not tagged with favorite tags
If I could get only one thing off my wish list, it would be the hasbounty:1 search option. Then I could use the rest of the search tools to pare down that list as needed.
Oh, and a convenient way to save and re-run searches would be nice, too. :-)
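In the meantime, the filtering logic I'm asking for is easy to sketch client-side. This is only an illustration of the requested behaviour in Python, assuming question records have already been fetched from somewhere; every field name here (`has_bounty`, `bounty_amount`, and so on) is an assumption, not a real Stack Exchange schema:

```python
def bounty_candidates(questions, answered_ids, viewed_ids,
                      favorite_tags=None, ignored_tags=()):
    """Yield questions with an active bounty that I haven't answered or
    viewed, honoring favorite/ignored tags (all field names assumed)."""
    favorites = set(favorite_tags) if favorite_tags else None
    ignored = set(ignored_tags)
    for q in questions:
        tags = set(q["tags"])
        if not q["has_bounty"]:            # the hoped-for hasbounty:1 filter
            continue
        if q["id"] in answered_ids or q["id"] in viewed_ids:
            continue                       # already answered or already viewed
        if tags & ignored:                 # completely exclude ignored tags
            continue
        if favorites is not None and not (tags & favorites):
            continue                       # keep only favorite tags, if set
        yield q

def ranked(questions, **filters):
    """Sort the survivors highest bounty first, per the wish list."""
    return sorted(bounty_candidates(questions, **filters),
                  key=lambda q: q["bounty_amount"], reverse=True)
```

Any of the other sort orders mentioned above (most recently asked, most recently edited) would just be a different `key`.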
Go to the "featured" tab in tags you're familiar with. – Daniel Fischer Apr 24 '12 at 20:55
Bounties are sorted in the order of expiration (oldest first). You can filter the bounties by tags, by visiting the featured tab, eg stackoverflow.com/questions/tagged/… – Rob W Apr 24 '12 at 20:56
Apparently some people don't need the help – Some Helpful Commenter Apr 24 '12 at 22:33
Thanks for the tip about viewing featured questions by tag. Helps some, for sure, but I cannot combine tags this way and there's a lot of overlap among the individual tag pages. So it gets me more relevant questions, but now I'm seeing the same question on 3 different pages. I really want the search feature so I can find questions with bounties that haven't been answered (for example) that I have a good shot at answering. – Old Pro Apr 24 '12 at 23:38
@OldPro "but I cannot combine tags this way" you can combine tags while searching featured questions. Or did I misunderstand your comment? – ajax333221 Apr 25 '12 at 3:26
@ajax333221 I had no idea that you could run a search and if the only search terms were tags you would get a "featured" tab in the result. Thank you for letting me know. So an alternative would be to make the "featured" tab available for all search results. Not sure I prefer that alternative from a UX standpoint, but that is something the community can discuss. – Old Pro Apr 25 '12 at 4:34
@TheLQ: Well, actually, yes they do. Perhaps you don't, but don't think for a moment that that is the same thing. – Lightness Races in Orbit Apr 3 '13 at 17:39
@LightnessRacesinOrbit He was rounding the number just a tiny bit :) – Camilo Martin Jun 11 '14 at 6:14
@CamiloMartin: What makes you say that? – Lightness Races in Orbit Jun 11 '14 at 9:03
@LightnessRacesinOrbit The people who care about a 12-year-old posting a question on how to parse HTML with regex because of his age (instead of because of the question being a common duplicate) are probably people who manage services like websites and are used to following the law to the letter to avoid being sued and such. – Camilo Martin Jun 11 '14 at 16:56
@CamiloMartin: That doesn't answer my question, unless you meant he was rounding the "number" "nobody". I don't really understand what you're saying, though. – Lightness Races in Orbit Jun 11 '14 at 17:23
@LightnessRacesinOrbit I'm saying he was likely rounding the number of people that care about the real-world age of arbitrary StackExchange users down to zero, i.e., "nobody really cares". – Camilo Martin Jun 11 '14 at 18:46
@CamiloMartin: I see. – Lightness Races in Orbit Jun 11 '14 at 18:58
1 Answer
How do I use Stack Exchange if I'm under 13?
Why is this restriction in place?
Other options
I'm looking for detailed empirical studies of machine learning algorithms that try to shed light on the strengths and weaknesses of different methods. Most papers take the form "here is a standard method... here is my method... I win! Yay!" I'm looking more for conclusions along the lines "method X does well under situation 1 and poorly under situation 2." I think these types of references would be very useful for anyone getting started with or learning to apply machine learning techniques to real-world problems. Some references off the top of my head are:
C. Perlich, F. Provost, J. Simonoff. "Tree induction vs. logistic regression: a learning-curve analysis." JMLR, 2003.
A. Niculescu-Mizil, R. Caruana. "An empirical comparison of supervised learning algorithms." ICML, 2006.
E. Bernadó-Mansilla, T. K. Ho. "On classifier domains of competence." ICPR, 2004.
I. Rish. "An empirical study of the Naive Bayes classifier." IJCAI, 2001.
Surveys of work within specific problem domains (e.g. imbalanced data, very large data, very small data, high-dimensional data, non-stationary data) would also be of interest as long as the strengths and weaknesses of algorithms are discussed. I would imagine that there are tons, but they are highly scattered in time and publication venue. Any pointers would be great. Don't be afraid to promote your own work.
asked Feb 09 '11 at 08:52
Troy Raeder
I found the R. Caruana paper (the one listed) and his others very enlightening and helpful in creating high-quality ensembles. In case you haven't seen it already, here is a presentation he gave about the results of that paper.
(Feb 13 '11 at 00:22) Ben Mabey
2 Answers:
Here's another one: "An empirical evaluation of supervised learning in high dimensions" by Caruana et al. 2008
answered Feb 09 '11 at 19:52
Yisong Yue
edited Feb 10 '11 at 14:15
Here's a freely available copy from the IMLS website: http://www.machinelearning.org/archive/icml2008/papers/632.pdf
(Feb 15 '11 at 23:01) Sean
Some possibly low-quality comparisons:
1. If the MetaOptimize challenge results get posted, it will be one more empirical evaluation of a number of ML and NLP techniques. Link to relevant MO question.
2. Comparison of Artificial Neural Network with Logistic Regression as Classification Models for Variable Selection for Prediction of Breast Cancer Patient Outcomes.
answered Feb 11 '11 at 21:16
edited Feb 23 '11 at 15:10
Submitted by Websblob 1109d ago | opinion piece
What the Hell is Sony Planning for E3?
Week after week we keep seeing Sony announce game after game. Whether it be Little Big Planet Karting, Soul Sacrifice, God of War or Playstation All-Stars, the games just keep coming, and we're only weeks away from E3. It really raises the question: just what the hell is Sony thinking? (Industry, PS Vita, PS3, Sony)
Snookies12 + 1109d ago
Obviously they've got something major planned if they're able to announce these things so easily before E3. They must have either an incredible amount of exclusives they haven't even mentioned, or some major update for Vita/PS3. If not, they're the biggest idiots ever lol.
LOGICWINS + 1109d ago
Agreed. I'm sure the show will be Vita focused in terms of PS3/Vita compatibility. EVERY PS3 exclusive they show should be compatible with Vita, or else the slogan "Never stop playing" is bull. The three games I'm hoping they show for the PS3 are GG's next project, GTA V, and Agent. These three games plus demonstrations of The Last of Us, GOW: Ascension, and Battle Royale would make for a perfect conference in my eyes.
xursz + 1109d ago
Last Guardian trailer, Final Fantasy Versus XIII teased, Agent release date, DUST 514 showcase, Playstation Vita cross-play showcase (LBP, Playstation's Battle Royal, etc.), Killzone Vita, one more surprise Vita exclusive.
Gaming101 + 1109d ago
lol so Sony is going to limit all of its announcements to one event when there are several events peppered throughout the year? Maybe given Sony's big announcements at E3, developers and publishers would rather be announced outside of E3 so they aren't overshadowed by other huge announcements? If you get overshadowed by other announcements you have completely wasted a lot of time and money going to E3 because now noone is talking about you - you need to think strategically about when you release info.
Dee_91 + 1109d ago
maybe they will show actual gameplay for those games announced or more news on the games they announced.
Freak of Nature + 1109d ago
LBP carting could be added to your list to get it closer to perfection. And of course the last guardian gameplay footage along with **Media Molecule's new IP** would be uber Fantastic!
irepbtown + 1109d ago
I think new exclusives, and most likely important info for games which we are dying to hear about such as the Last Guardian and so on.
I'm usually not too bothered about E3 and I miss a lot of it (I live in UK). But I think this year I will fight time and stay awake to see what Sony will reveal. No adjective can describe my current feeling.
Sony, Amaze me...
Oh, and I think Sony are extremely confident hence they are announcing game after game before E3. Something big will go down. Something which no script writer can predict.
metsgaming + 1109d ago
not necessarily, by announcing them early they get coverage yet if they announced them during E3 they would not get nearly as much attention especially because of nintendo's new hardware.
egidem + 1109d ago
What gives you the impression that they've already announced their big surprises before E3? We know what they've announced but not what they haven't. We'll have to wait. Judging from past conferences, Sony usually aims to top its previous conferences the previous year.
Ninty's hardware might garner some attention because it's new. It doesn't mean that all focus and attention will be drawn towards them. For once I'm more interested in what Sony has to show at E3 than Ninty's shady mysterious hardware.
metsgaming + 1109d ago
im not saying this is all they have but just thinking about a reason why they would purposefully reveal some of their games this is one of the things i thought of. They could also have done it to make room for all their other announcements and they needed to clear up time. Fact is Nintendo will get the attention by the gaming "media" because of new hardware. They may drop some major bombs like a new Monster hunter vita (just an example think something really shocking) and it would overshadow their other announcements that by themselves are significant but overshadowed. Why throw everything at one day which maybe overshadowed by other stuff when you can reveal the stuff over time and make sure that it gains attention.
miyamoto + 1109d ago
At least they have something to show for every kind of gamer out there unlike the other ....
This announcements are better than nothing or cramming them all up in a two hour show.
Substance over Style peeps.
PlayStation E3 Drop BOMBS!
PS3 Price Drop
PS Vita Price Drop
PS Move Price Drop
PS2 Price Drop
PSP Price Drop
PS3 games Price Drop
PS Vita games Price Drop
PSN Price Drop
PlayStation Plus Price Drop
The Last Guardian gameplay
The Last of Us gameplay
Agent trailer
GTA V trailer
Nino Ku Ni trailer
Call of Dooty PS Vita trailer
Killzone PS Vita trailer
Monster Hunter PS Vita trailer
PlaStation All Stars PS Vita
A New exclusive Mascot Character Based Game from SCE Japan
iamnsuperman + 1109d ago
It will be vita stuff and also more gameplay footage from the announced stuff. To be honest who cares if things are announced before E3. It makes very little difference as long as they have some new footage to show. I can't remember but didn't Sony do this last year as well
BISHOP-BRASIL + 1109d ago
Yes they did and they also pretty much stated exactly why they do that.
E3 is a huge event, it usually holds tens of announcements of new games. By spreading the announces just before E3 they build up hype for their presentation as well as giving the media outlets plenty of time to do their stuff.
It's not that Sony is afraid of the media focusing on other games; it's just that coverage at E3 is quite poor all around, mostly because every journalist there needs to be done as quickly as possible in order to attend the next event, check the next booth or make the next interview.
It doesn't mean they won't have surprises in there, they usually have some, but there's merit to this "crazyness" of outing your announces before the main event.
WeskerChildReborned + 1109d ago
Yea that make's sense but i don't think it has to do with the PS4 but probably alot of more exclusives like you said, Maybe they're pushing the exclusives so PS4 can arrive soon.
zag + 1109d ago
Um, just wondering and I can't remember exactly, but didn't Sony say they were going to skip this year's E3 and do something next year?
might have been around feb march
rob20090 + 1109d ago
Sony's press conference is at 6pm PST Monday, June 4th.
BattleAxe + 1109d ago
Hopefully they announce the shooter that Sony Santa Monica is working on.
Wintersun616 + 1109d ago
That was fake. A shady ad on some site, and people reported it asked for gamertag and about achievements, not PSN name and trophies.
srcBFMVBMTH + 1109d ago
Well, a game like God of War Ascension wouldn't need to be announced at E3 because it's pretty much guaranteed to sell. And also considering how well known Naughty Dog is, same would go for The Last Of Us. All they need to do to keep the hype fueled would just show gameplay/ trailers etc. New IP's (not counting PSABR and The Last Of Us) would need to be announced at E3 to get more coverage because........well they're new lol. I mean we got Sucker Punch, Media Molecule, Guerrilla Games, Quantic Dream, Evolution Studios, and Team ICO making new games for the PS3, and Team Siren, Sony San Diego, SCE Cambridge, SCE Liverpool, and SCE London working on new games for the PSVita. That pretty much screams a shit load of new IP's. And not to mention games like Agent, FFVXIII, and The Last Guardian. They seriously need release dates already -__-
showtimefolks + 1109d ago
weren't you here last year or the year before when they announced most of their big games in advance? I see them giving the PS3 30 minutes at the most and spending the rest of the time on the Vita.
for Vita to succeed this E3 is very important so expect a new bundle with same price but a memory card and a game most likely uncharted.
last of us gameplay
sly 4
all-star battle royal
killzone 4 as a surprise
QD may debut its game after heavy rain
Valve could announce LFD 3 on sony's stage and LFD 1-2 coming to ps3
but i am not expecting any big surprises besides agent
ronin4life + 1109d ago
No, not necessarily... they do this every year.
Announcing everything in a very plotted out fashion and then detailing it all during the show is just how they have approached e3 for the last several years.
morkendo23 + 1109d ago
as a old school gamer SONY has left us old school gamer behind. so most likely it'll be something NEW school vita and FPS,MMO'S we old school gamers be prepare for more FPS.............
JESUS is GOD ......
im done with video games......................... ...56 and done!!!
vegnadragon + 1109d ago
People seem to have short memory now in days. Nothing new here, they did the exact same thing last year
cannon8800 + 1109d ago
How crazy (or stupid) do you guys think it would be if they showed the ps4 at e3? I would be amazed, what about you guys?
humbleopinion + 1109d ago
I think that last year they similarly announced all the games before E3 and people asked the same "what are they hiding" question, but eventually no new games were announced except for Twisted Metal which already leaked months before that.
I think this is a smarter strategy that Sony is employing: announce the game in a dedicated press-conference for the initial coverage, and then use the stage time for actual gameplay presentation in E3. No point showing trailers with no interaction on stage when you can show them all throughout the year.
But having at least a single surprise is always good.
avengers1978 + 1108d ago
I bet there is a bunch of vita stuff, possible some PS3 updates, and a ton of games, but They could very well be unveiling there new console, In fact it wouldn't surprise me at all to see a new Playstation and Xbox at this years E3
dangert12 + 1109d ago
Trusted sources say they will allow myself to reveal the long awaiting and what many of you thought to be cancelled....I'm not allowed to say It's In my contract that I do not...lol anyways hopefully sony don't show us everything before e3 and then show us again at e3 like they did last year
LOGICWINS + 1109d ago
Yeah bro me too. I'm dying to reveal the news that Sony has restricted me from revealing in my contract. I'll reveal the news on N4G...about ten minutes after Sony reveals it themselves, then I'll tell everyone "I TOLD U GUYS....I TOLD YOOOOU!!"
soundslike + 1109d ago
Or you could take steps to get a fair share of anonymity and let us know early ;)
1. Go to your library
2. Make a new account using their computers
3. Release the goods
4. Watch the hype ensue.
Just sayin'
Pintheshadows + 1109d ago
So it's The Getaway or Eight Days?
Nykamari + 1109d ago
It'sThe GetAway in Eight Days! LOL!!!!
dangert12 + 1109d ago
Yes...thats what I'm hoping for though I don't have a contract with sony all that was rubbish lol
Patriots_Pride + 1109d ago
My guess would be to expect alot of VITA announcements, which is not a bad thing for a VITA owner like me.
But if you really look at the big picture Sony has to push the VITA hard at E3 cause it not selling as well as the 3DS and you can bet that Nintendo will announce some new Mario, Pokemon or Zelda game for the 3DS - would love to see a 3DS Smash Bros.
metsgaming + 1109d ago
well its the vitas first big show since release if they dont focus on it then that would send a terrible message. Sony themselves already have supported it alot, its the third parties that got to get the lead out.
Sgt_Slaughter + 1109d ago
The Super Smash Bros 3DS game is coming out in 3-4 years time since the developers just got started after finishing Kid Icarus. It will have some cross-play with the Wii-U version.
NastyLeftHook0 + 1109d ago
What the Hell is Sony Planning for E3?
What isn't sony planning?
Patriots_Pride + 1109d ago
My birthday party -_-
Mr_cheese + 1109d ago
jacksonmichael + 1109d ago
Hopefully a price drop or a bundle for the PS3. I just got YLOD a couple days ago. Info on the Smash Bros ripoff would be sweet, too.
jacksonmichael + 1109d ago
Way to be constructive with your criticism, everyone. My PS3 failed. I'm buying a new one. Sony is copying Smash Bros. If you think they aren't, you're deluding yourself, and for no good reason. The game will be great. Even if it wasn't the first of its kind.
Virtual_Reality + 1109d ago
Smash Bros is not the first of its kind neither.
jacksonmichael + 1109d ago
I knew someone would say that. Of course it isn't. That doesn't really give Sony an exemption.
mobhit + 1109d ago
Going by your logic, Nintendo copied too.
DeletedAcc + 1109d ago
The Last of Us gameplay - oh fuck i cant wait *__*
LakerGamerEnthusiast + 1109d ago
F7U12 + 1109d ago
oh sweet jesus asian boobs.
Jihaad_cpt + 1108d ago
silicone you mean in a sluttish looking Asian woman
smashcrashbash + 1109d ago
Ummmm. Why is it people make such a fuss about Sony revealing things before E3? Who cares? Does it mean that we are not still going to get it? If you get your Christmas presents early does that mean you won't want them anymore? Why is everyone making such a big deal? It just gives them more time to show gameplay of what we already know rather than waste time introducing everything.
Jreca + 1109d ago
If they announce new games now, the news are unrivalled. If they show everything on E3, their own games are going to block themselves from the spotlight, and of course all the other ones. It's simply a matter of focus.
Bigboss19 + 1109d ago
I'm hoping for a price drop on ps3 since I just got the yellow light on the 15th of this month....I wouldn't mind seeing hitman or a new mgs for the Vita tho
extermin8or + 1109d ago
if it's not the Getaway who cares? or Syphon Filter that's been like a certainty at E3 for last few years, lets see if it's finally here :S
Kingscorpion1981 + 1109d ago
I can only see Syphon Filter for the Vita!
extermin8or + 1109d ago
Btw I meant if it's not the getaway who cares in reference to canned games being revived I can't think of any other notable ps3 exclusives that were announced then cancelled or thought to be cancelled (and Agent doesn't count as they've reiterated that it's still in development....)
MrWonderful + 1109d ago
A Final Fantasy 7 remake will surface. My theory on why Versus has been absent is that it was really 7 with different character models to throw off the public, and at E3 they will show off the game fully remastered and announce a release date, pulling off the greatest-kept secret in gaming history.
Lord_Sloth + 1109d ago
I dunno...I'd love a remake of VII but I wanna see VSXIII remain it's own title.
Outside_ofthe_Box + 1109d ago
lol Nice try man, nice try.
MakiManPR + 1108d ago
If Sony and Square-Enix do that it will be the worst thing they've done in their entire life.
Acquiescence + 1109d ago
The more they keep revealing stuff before E3...
the more it makes me think that maybe we'll see something new regarding The Last Guardian at their actual conference.
Kyosuke_Sanada + 1109d ago
Silly article. Everyone knows they are planning to install Kara AI's to consoles which will manage Vita/PS3 data via the next update.
Kara PSN Plus version comes with the new "Digital Debauchery" expansion.....
TheBrownBandito + 1109d ago
TheLyonKing + 1109d ago
Vita games
Ps3 price drop
Some exclusives
Some gameplay from last of us
Move titles (that might or will bomb)
Surprise announcement of the Last of us
But who know the last point is more of a tgs thing.
TheLyonKing + 1109d ago
SHHHUgar I meant last Gaurdian for my last point! Honest.
Pintheshadows + 1109d ago
If we see The Last Guardian gameplay, Last of Us gameplay, GoW Ascension single-player gameplay, more Battle Royale and have a pleasant surprise I'll be happy as far as the PS3 goes.
On a sidenote today has been awesome. West Ham promoted, Chelsea win the CL and inbetween i've been playing Max Payne 3.
xtreampro + 1109d ago
Hopefully they show Agent, FFvs13 and a KZ4 tech demo for the PS4. I'm hoping they also announce an Ultra-slim PS3, I've had my 60GB launch model since Oct 2007 and I don't play PS2 games any more so I definitely wouldn't mind selling it for a more sleeker looking and smaller PS3.
TheColbertinator + 1109d ago
They are planning to cancel all future PS3 game and console development.Sony will also return to making radios and computer chips
MasterCornholio + 1109d ago
Lol I didn't know that you hated Sony so much. Thanks for the laugh.
paddystan + 1109d ago
Syphon Fitler for PS3 and/or the Vita. That would be awesome!
Lord_Sloth + 1109d ago
Sequel to Omega Strain would rock!
dcortz2027 + 1109d ago
This, and some "The Last Of Us" gameplay would make my E3! But this is Sony we are talking about, I'm sure they have plenty of stuff up their sleeves like new game announcements, gameplay footage for upcoming games etc. I can't wait!
Godmars290 + 1109d ago
Hopefully something no one sees coming - and its great.
SSKILLZ + 1109d ago
something of grand magnitude your mind will be Beyond! blown away
Afterlife + 1109d ago
Show Final Fantasy X. They also need to reveal if there's any changes made to the game besides the obvious.
Unlimax + 1109d ago
I'll put all my hopes on the live stream , And see what will be upcoming .
Adva + 1109d ago
Speech about Vita + Ps3 sales
Last of Us demo + date
GOW demo + date
All Star Brawl demo + date
2 Multiplatform demos + exclusive content
1 or 2 new game announcement(s)
Killzone vita
3-4 vita games
--- The End ---
^^ Realistic prediction rather than "OMG 10 new games announced"
tweet75 + 1109d ago
maybe Sony just gave in to the leaks and decided to release all the biggest game info before E3, and will just go into greater detail at the expo
OllieBoy + 1109d ago
Brand new The Last Guardian trailer and a release date for Fall 2012...
Pipe-dream, I know.
ArronC07 + 1109d ago
PS3 price cut, Vita price cut and PS4 sneak peak.
LakerGamerEnthusiast + 1109d ago
Yes, nope(although memory cards hopefully), hopefully.
modesign + 1109d ago
if sony was smart they would announce a release date for the last guardian. and maybe a title from SCE london (getaway, eight days, something)
WooHooAlex + 1109d ago
Didn't they do this a few years ago as well? When they came out and announced LittleBigPlanet 2, Killzone 3, inFamous 2 and one more big title (that I'm drawing a blank on atm) in the months leading up to E3. I'm fine with this, just as long as they keep some announcements for the show.
And its not like they don't have more stuff to show us. They could show off what Sucker Punch has been working on. Quantic's next game could be there, maybe one of the three new games that Guerrilla has been working on will show up? Plus there are rumors of a new PaRappa and Syphon Filter. Which would be pretty cool.
For their show, I would kick it off with stage demos for the Last of Us and Assassin's Creed III.
Eyesoftheraven + 1109d ago
The demise of Xbox. (I joke).
momthemeatloaf + 1109d ago
They are getting the PS3 games out of the way now so E3 will be focusing on the Vita. Sad but true
BBC News
Last Updated: Monday, 7 May 2007, 15:43 GMT 16:43 UK
Obituary: Lord Weatherill
Bernard Weatherill: tailor, soldier, politician and Speaker
Lord Weatherill became the 154th Speaker of the House of Commons, against the wishes of former Prime Minister Margaret Thatcher, at the start of her second term of office in 1983.
He was the backbenchers' choice for the job, and remained a staunch champion of their rights. He became a very popular figure in the House.
He had a brisk manner, which owed much to his military background.
Bernard Weatherill - often known as Jack - was the son of the Bernard Weatherill who founded the family tailoring firm. After war service with the Indian Army he rejoined the firm, and worked as a tailor himself. Later he became managing director.
Authority questioned
He got involved in politics while living in Guildford, where he was chairman of the local Conservatives. He was elected Conservative MP for Croydon North East in 1964, and became a spokesman for small businesses.
But in 1967 he was made an opposition whip and - after the Tory victory of 1970 - a government whip. He was the party's deputy chief whip throughout the next Labour government, but was appointed Deputy Speaker when the Conservatives returned to power in 1979.
When George Thomas retired at the 1983 dissolution, a number of names were canvassed for his successor.
Despite Bernard Weatherill's general popularity there were doubts about his authority over the House - critics recalled a debate on Welsh constituency boundaries which got completely out of hand while he was in the chair.
And Mrs Thatcher had her own ideas about who should have the post. But the will of the whips and back-benchers prevailed, and Bernard Weatherill was duly elected.
Lord Weatherill presiding over the House of Commons
Lord Weatherill was Speaker during an unruly period
He was chosen because he was trusted by all, and partly because he had never been a minister, unlike some of the other candidates.
Everyone liked him because of his charm, courtesy and modesty. In his acceptance speech he told how - on his first day at the Commons, he had been in the lavatory and had overheard one MP say to another, "I don't know what this place is coming to, Tom, they've got my tailor in here now."
Lord Weatherill said he aimed to emulate Arthur Onslow, Speaker for 33 years in the 18th Century, whom he saw as having established the impartiality of the chair.
His first year was difficult - he was criticised for not clamping down quickly enough on the rowdies, particularly during prime minister's questions.
Unruly time
Lord Weatherill wasn't over-critical of the behaviour of MPs, saying that many earlier parliaments had been far worse. And he thought the best MPs were sometimes the most unreasonable - it was their job to question things.
He endeared himself to back-benchers by allowing more private notice questions, so compelling ministers to come to the despatch box to explain decisions.
He had to handle the Commons at a time when there were some highly contentious issues about, including the miners' strike of 1984-5 and the Westland affair of 1986.
He was occasionally indiscreet: in a speech to the parliamentary press gallery a year after taking office he spoke of it being the "Frustration Parliament", and referred to some Conservatives who, he said, had got in by mistake and lost their previous jobs and pensions.
The following day he apologised to the House for his light-hearted remarks.
Lord Weatherill once expelled a Labour MP for referring to Britain's tame judges. A few months later he ruled that when Neil Kinnock said he did not believe Mrs Thatcher, it was not the same as calling her a liar.
Last to wear wig
Lord Weatherill favoured televising the Commons - he thought radio distorted what went on and that television would let people see the true picture. As the first Speaker to preside after cameras were allowed into the House, he became a well-known public figure.
He was the last Speaker to wear a wig. He once said he liked it because it enabled him to pretend he didn't hear certain things.
Lord Weatherill always carried in his pocket a thimble given to him by his mother when he was first elected to Parliament. It was to remind him of his humble beginnings.
One of the legacies of his wartime service in India was his vegetarianism, which he took up after seeing the Bengal famine of 1942.
Another was his ability to speak Urdu, which helped in dealing with ethnic minorities in his Croydon constituency.
Lord Weatherill, who had a twin sister, was married, and had two sons and a daughter.
|
|
Clint Eastwood Gave the Worst Speech of the Convention That Anyone Has Ever Given
Snagging universally respected, all-American film legend Clint Eastwood to give a prime-time speech on the RNC's final night seemed like a major coup for the Romney campaign. That sentiment survived about 90 seconds into Eastwood's remarks. Unlike in his movies, he had no script — and seemingly no idea what he planned to say.
Eastwood proceeded to have an improvisational conversation with "President Obama" in an empty chair beside the podium. Though it had the potential to be a humorous bit, it was, instead, a rambling, meandering monologue about pretty much nothing, like watching someone mumble in their sleep. The few coherent thoughts Eastwood managed to get around to making were puzzling and off-key. He seems to think that Romney wants to bring American troops home from Afghanistan immediately. He mocked Obama for giving speeches at colleges about student loans.
The AP reported seeing Romney aides wincing backstage. It was an absolute train wreck.
Outside the Forum later in the night, David Frum was shaking his head. "It was one of those things that must have seemed like a good idea when it was first proposed," he told us, chuckling. "Teleprompters are extremely useful."
Ultimately, though, Frum doubts the Eastwood disaster will hurt the Romney campaign. "It amuses and entertains the political junkie," Frum said, "but so few things that happen at conventions are either net positives or negatives, because so few things matter to the people who are making the decisions. I don't think that's one of them. I think that's a Morning Joe, Hardball, Situation Room discussion."
This is true, to an extent. It's hard to imagine that any voters are saying to themselves, "That Eastwood speech was terrible. Guess I won't vote for Romney after all." The thing is, we'll never know how persuasive the speech that Eastwood didn't give — the one we all thought he would give — could have been.
|
|
Happy Birthday to Me! (Plancast: http://plancast.com/p/15cy)
I'm celebrating my 51st Birthday at Andys. If you're not doing anything, I would love for you to join me. Who knows, maybe I'll sing a tune or two.
Thursday, 15 April 2010, 17:00-21:00 UTC (posted Tue, 13 Apr 2010)
|
|
Meta Battle Subway PokeBase - Pokemon Q&A
Hitmonlee, Hitmonchan, or Hitmontop?
0 votes
Right now I have a hitmonlee @ Lv. 53
Jolly/Wide lens
Mega Kick
Blaze Kick
HiJump Kick
Fake Out
Should I Keep on training it to Lv. 100? Or should I switch to one of its other evolutions?
asked Nov 28, 2013 by Mr. Blazo
Competitive or In-game? :3
Like you know anything about competitive... :P
No, I don't. xD
I think hitmonlee is based on Bruce Lee and hitmonchan on Jackie Chan...
I don't know about hitmon top.
They are.
Hitmontop is based on a spinning top. :o
2 Answers
0 votes
Best answer
It all depends on what you're using them for. Hitmonlee and Hitmonchan can both run very nice offensive sets in a competitive battle, but they don't go far in the way of defense, even though Special Defense is their higher stat. Both have fairly low Defense, and a Scizor with Technician/Aerial Ace could most likely easily wipe out these two. Hitmontop can be either defensive OR offensive, or kind of an "anti-attacker", knowing moves to deal out heavy damage while still being able to withstand attacks.
I also want to note the abilities of these Pokemon. Hitmonlee has Limber and Reckless, both of which are almost useless in a competitive battle, especially since Hitmonlee learns 0 recoil moves (someone correct me if I'm wrong on that). Hitmonchan, however, runs a VERY nice ability, Iron Fist. Many people have some kind of punch move on a Hitmonchan, usually Thunderpunch to deal with Flying types. This can deal out some major damage, especially if you've properly EV trained a Hitmonchan for Attack. Hitmontop can have either Intimidate or Technician, the better of the two being Technician. This can be very effective when running a Bullet Punch/Mach Punch set.
To wrap up:
- Hitmonlee: most speed and attack of the three; crappy abilities (in my opinion)
- Hitmonchan: most defense of the three; a REALLY effective ability
- Hitmontop: a very good all around Pokemon; fairly good ability, could definitely work to your advantage.
I can't really answer your question if you should keep raising that particular Pokemon or choose another; that is solely up to you. I can, however, help you in any way possible make your decision. So I hope this helps!
*EDIT: as MeloettaMelody pointed out in the comments, Hitmonlee's High Jump Kick CAN be powered up by Reckless. I overlooked this at first since it causes crash damage, not recoil; however, both types of moves are powered up.~
answered Nov 29, 2013 by excadrill444
selected Dec 3, 2013 by Mr. Blazo
If you want opinions on movesets just ask
Just pointing out that Hi Jump Kick is powered up by Reckless, so it can be useful for Hitmonlee.
That IS true i didn't see that my bad
Hitmonlee also has unburden, which is a pretty good ability.
For general use, hitmontop > hitmonlee > hitmonchan
yes but it's hidden
0 votes
It depends: do you want power (Hitmonlee) or coverage (Hitmonchan)? I would stay away from Hitmontop (not that high an Attack stat, little Ghost coverage (2 moves)). Personally I would go Hitmonchan with the following moveset: Ice Punch, Thunder Punch, Fire Punch, and a Fighting-type move of your choice.
answered Nov 28, 2013 by black hole solrock
|
|
Politwoops Deleted Tweets from Politicians
Frank LoBiondo (R) @RepLoBiondo
Encourage #SouthJersey residents to subscribe to my e-newsletter via my website for regularl updates http://t.co/xf4T1hBn
|
|
Coding a new website. Basically, for positioning elements using CSS I would use percentages, and figured that was the best thing to do since everyone's monitor is a different size. But I noticed a lot of big websites use pixels.
I don't understand how you can use pixels to position things when we all have different-sized screens.
It's elastic layout vs fixed layout, and not all pages benefit from the former. See this – treecoder Feb 17 '12 at 4:25
You forgot em. Favour em over px – Raynos Feb 17 '12 at 14:55
2 Answers
The fluid layout provides, in general, a better user experience, but is not without flaws. When using fluid layouts, you are testing them on medium-small to medium-large screens, but you certainly can't do complicated fluid layouts with CSS2 alone (no JavaScript, no HTML5/CSS3) that work perfectly well on any device, from a small mobile phone to a 30-inch full-screen mode.
The fluid layout is also more expensive in most cases. This is true for the visual design, the ergonomics and the HTML/CSS development.
The fixed layout gives the ability for a developer to say:
1. I gathered statistics about the browsers of my visitors, and know that 95% of users of my website have a resolution width between 1024 and 1920 pixels. Instead of spending a month designing a fluid layout, I will target the resolution of those 95% of users, and spend the time saved implementing some cool features instead.
2. If I have enough time and resources, I'll do a dedicated version for mobile phones.
The fact that fixed layouts are easier to implement is important here. I have a choice: either I do a fluid layout which will in all cases look bad on very small or very large screens, or I spend the same amount of time and money creating two or three layouts for different resolutions.
Aside the cost, fluid layouts have also some issues you can't solve without HTML5/CSS3 or JavaScript.
• Example 1: Programmers.SE has a fixed layout. This means that on any resolution, the main text I read (the questions and the answers) will be no longer than, say, 800 pixels in width. Given the current font size and the line spacing, this is ok to be able to read fast.
If Programmers.SE moves to a fluid layout, your current question would be 1,600 pixels wide on my screen at this moment. This would be totally unusable, and I wouldn't be able to read a long text without having to minimize the browser window and adjust it for the website.
• Example 2: in a recent project, I split the text in two columns for the browsers which support this. Given the font size and the line spacing, one column is not really readable, but two are perfect. Since I know the width of the page, I know how the text will appear in 93 to 94% cases: the other 5% are using browsers that don't support text columns, and the final 1 to 2% are using custom font sizes (i.e. people who don't see well and enlarge the fonts by default and/or people who don't have the font I use on the page).
With a fluid layout, I wouldn't be able to do that without HTML5: on a 30-inch monitor in full-screen, two columns would still be unreadable: it would require four or five columns instead. On the screen of a large mobile phone, two columns wouldn't be readable either, since there would be one to three words per line.
• Example 3: you have a topmost menu with the parts of your website. There are five parts, and the elements are float:lefted. With a fixed layout, it magically works, and fails only if the user specifies a larger font for accessibility reasons. With a fluid layout, what will happen to those elements if the user resizes the page to a width smaller than the total width of those five elements?
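The trade-off described above can be sketched in plain CSS2-era CSS; the class names here are illustrative, not taken from any particular site:

```css
/* Fixed layout: identical width on every monitor, centred with auto margins */
.fixed-page { width: 850px; margin: 0 auto; }

/* Fluid layout: scales with the window */
.fluid-page { width: 60%; min-width: 320px; }

/* Example 1's readability cap is available even in a fluid design */
.article-text { max-width: 800px; }
```

Note that max-width is ignored by some very old browsers (such as IE6), which is one reason fixed widths were often the safer choice at the time.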
Okay so if I do position:fixed; then it should be the same on any computer? But I don't understand how if I say to position a div element 200 px from the top, even if it's position fixed, how will it not look different on a smaller screen, such as on my laptop? – Norm Feb 17 '12 at 4:44
Sorry, I don't understand your comment, and maybe I misunderstood your original question when editing it. If yes, please revert my changes and try to explain your question in a more explicit way, give some examples, etc. As I understood it, it has nothing to do with position:fixed (which is used to keep an element fixed when scrolling), but rather with specifying width:60% vs. width:850px. – MainMa Feb 17 '12 at 4:52
Oh okay. Well I want to position my navigation bar using a div element, but when I say to position it like top:100px; left:200px; it will look different on my 1990x1020 monitor and my small 15 inch laptop screen. For example, the positioning from top:100px will look fine on my 1990x1020 monitor, but on my small laptop the element will be pushed too far down and be in the middle of the page. – Norm Feb 17 '12 at 4:58
Pushed too far down? Well, it's expected to remain at x:100, y:200. If it's not what you see, you may review your markup or post a separate question on Stack Overflow. – MainMa Feb 17 '12 at 5:07
Yes because on a smaller resolution screen, the pixel size is smaller. So it will look different, and in some cases elements may collide. – Norm Feb 17 '12 at 5:09
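To illustrate the point being discussed in these comments: with position:absolute, the top/left offsets are measured from the nearest positioned ancestor, so anchoring the nav bar to a centred, fixed-width wrapper (rather than the viewport) keeps it in the same place relative to the content on any monitor. A sketch, with invented selector names:

```css
#wrapper { position: relative; width: 960px; margin: 0 auto; }
#nav     { position: absolute; top: 100px; left: 200px; }
```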
There are a couple of situations where I will use pixel-based positioning.
The first is when I am creating absolutely positioned menus or items.
The other situation I can think of is when I create web apps. I want the environment to give a desktop-like appearance, utilising all of the space. The layout is dynamic in sizing (sometimes I use YUI2's Layout, which is really helpful; see here: http://developer.yahoo.com/yui/layout/). So on window resize the layout also resizes.
Pixel-based positioning gives you more control and the look will remain static; however, as mentioned, when switching resolutions everything will be displayed larger or smaller.
Well as for me, I have a picture background, and my navigation bar collided with the words at the top of the picture on my large monitor after I coded it on my laptop monitor (smaller). – Norm Feb 17 '12 at 5:13
|
|
Being an IT student, I was recently given some overview about design patterns by one of our teachers. I understood what they are for, but some aspects still keep bugging me.
Are they really used by the majority of programmers?
Speaking from experience, I've had some troubles while programming, things I could not solve for a while, but Google and some hours of research solved my problem. If somewhere on the web I find a way to solve my problem, is this a design pattern? Am I using it?
And also, do you (programmers) find yourself looking for patterns (where am I supposed to look, by the way?) when you start the development? If so, this is certainly a habit that I must start to embrace.
UPDATE: I think when I ask if programmers do use them, I'm asking if when you have a problem to solve you think "Oh, I should use that pattern".
Not an exact duplicate question, but my answer would be the same. programmers.stackexchange.com/questions/70877/… – pdr Mar 28 '12 at 9:46
Most of that overrated, overhyped "design patterns" are only relevant to the OOP programming. And there are many good reasons to stay away from OOP for as long as possible. – SK-logic Mar 28 '12 at 11:07
I find myself using Big Ball of Mud more often than I'd like. Just kidding. – sashoalm Mar 28 '12 at 14:17
The only answer is yes with the conditional statement of "When appropriate" – Rig May 16 '12 at 18:18
13 Answers
up vote 98 down vote accepted
When I was a novice programmer, I loved design patterns. I didn't just use design patterns. I inflicted them. Wherever and whenever I could. I was merciless. Aha! Observer pattern! Take that! Use a listener! Proxy! AbstractFactory! Why use one layer of abstraction when five will do? I've spoken to many experienced programmers and found that just about everyone who reads the GoF Book goes through this stage.
Novice programmers don't use design patterns. They abuse design patterns.
More recently, I find that keeping principles like the Single Responsibility Principle in mind, and writing tests first, help the patterns to emerge in a more pragmatic way. When I recognise the patterns, I can continue to progress them more easily. I recognise them, but I no longer try to force them on code. If a Visitor pattern emerges it's probably because I've refactored out duplication, not because I thought ahead of time about the similarities of rendering a tree versus adding up its values.
Experienced programmers don't use design patterns. Design patterns use them.
@schlingel - Why are you calling the OP Sir? – Oded Mar 28 '12 at 19:34
@Oded I reserve my right to be called "Sir" under these circumstances and enjoy it. – Lunivore Mar 28 '12 at 22:27
Sir, yes sir. No offence meant! – Oded Mar 28 '12 at 22:40
In Soviet Russia.. :) – Sorantis Mar 29 '12 at 11:10
Good answer, but in the last line you tried to be profound, but it is nonsensical. How about "Experienced programmers don't choose design patterns, they let them happen." – Garrett Hall Mar 29 '12 at 13:22
Adding to Lunivore's answer, I'd like to quote this from the Head First book:
Three steps to great software
• Make sure the software does what the customer wants
• Apply good object-oriented principles
• Strive for a maintainable, reusable design
It is during the third stage, after your system is working the way it is supposed to, that it's time to apply patterns to make your software ready for the years to come.
I've programmed for about 7 years in C++ and learned patterns about 2 years ago. Most patterns probably have some applications, but in my usage, some are better than others. You have to think why you are using them.
The iterator pattern has actually made my code more confusing and has added unnecessary complexity. I can get the same maintainability using typedefs for STL vector types. And any changes I make to the class being iterated I also have to make to the iterator class.
The factory method, however, has been extremely useful, based on the polymorphism it provides. I forget who said it, but the statement "reusability means that old code can use new code", is definitely true with the factory pattern.
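The "old code can use new code" point can be sketched like this; Python is used here for brevity (the answer's context is C++), and all the names are illustrative, not from the answer:

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self) -> float: ...

class Square(Shape):
    def __init__(self, side: float):
        self.side = side
    def area(self) -> float:
        return self.side * self.side

# "Old code": written once, against the Shape interface only.
def total_area(shapes) -> float:
    return sum(s.area() for s in shapes)

# A simple registry-based factory method.
_registry = {"square": Square}

def make_shape(kind: str, *args) -> Shape:
    return _registry[kind](*args)

# "New code": a subclass added later. total_area never changes,
# which is the polymorphic reuse described above.
class Circle(Shape):
    def __init__(self, radius: float):
        self.radius = radius
    def area(self) -> float:
        return 3.14159 * self.radius * self.radius

_registry["circle"] = Circle

shapes = [make_shape("square", 2.0), make_shape("circle", 1.0)]
print(total_area(shapes))  # ≈ 7.14159
```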
I've used the template method pattern for years without even knowing it was a "design pattern".
The Observer pattern has been helpful in some cases, sometimes not. You have to sometimes predict complexity to determine if the overhead complexity of the Observer pattern is worth it. We have a program that uses about 10 subscribers and there could be many more, so the observer/subscriber pattern has been helpful. Another program, however, has two GUI displays. I implemented the observer pattern for this program, and it has been largely unnecessary, simply because it added complexity and I don't anticipate adding more displays.
I think those who say to always use patterns assume that your program is going to be infinitely complex, but like with everything, there is a break-even point in terms of complexity.
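For the two-display scenario mentioned above, the observer/subscriber machinery can be as small as this sketch (Python for brevity; the names are illustrative):

```python
class Publisher:
    """Minimal observer pattern: subscribers register callbacks."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, event):
        for callback in self._subscribers:
            callback(event)

# Two GUI displays observing one model; a third costs one more subscribe().
log = []
model = Publisher()
model.subscribe(lambda e: log.append(("display1", e)))
model.subscribe(lambda e: log.append(("display2", e)))
model.publish("score_changed")
print(log)  # [('display1', 'score_changed'), ('display2', 'score_changed')]
```

Whether that indirection pays for itself with only two fixed subscribers is exactly the break-even judgement described above.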
do you use them?
Yes, experienced programmers definitely do. You can avoid using most design patterns (excluding simple singleton stuff) for now; but the more you program and the more complex the systems you build, the more you will feel the need to use design patterns. If you still avoid them, you will begin to feel the pain when you have to expand your system and change it to meet new requirements.
Not necessarily. A design pattern refers to a specific way to design classes, their behaviours and interactions to achieve a specific goal (or avoid a specific problem). What you may have come across may not really be a design problem but a specific sequence of steps to program a certain API. For example: there is a certain sequence to establishing a socket connection. Do it wrong and your socket won't communicate. That sequence of steps does not constitute a pattern.
do you (programmers) find yourself looking for patterns
Yes. Design patterns embody the "prevention is better than cure" axiom. If you can spot a particular upcoming design problem beforehand, you can prevent massive redesigns to accommodate changes later. So it pays to know design patterns beforehand and look for places where you need to use them as you build your application.
where am I supposed to look btw?
Since you are a student you have probably not seen the typical problems that inspire design patterns. I strongly recommend that you look at Head First Design Patterns. They first present a design problem and then show how a particular pattern can solve/avoid it.
You will learn patterns as you start working with existing projects. There are so many patterns out there for you to learn, and it is not worth the time to master all of them as it depends on what project you are working on. Whenever you run into one, learn about it to see how it is used.
Learning patterns is not just learning something new. You learn what you can do with programming languages. I myself learned a lot about object-oriented programming just by learning how a pattern works (the composite pattern in this case).
As mentioned by Oded, most programmers use them sometimes without even recognizing it. The beauty of patterns is that you can address specific problems with a predefined pattern, so you don't have to think a lot about architectural things.
Don't look for trendiness
Any standard programming solution to a certain problem can be considered a design pattern, it doesn't matter how popular they are, or if other programmers use them or not.
You might already be using a design pattern that hasn't been invented/specified yet.
Don't try using them, try thinking in their terms
The problem with design patterns is that sometimes programmers want to fit their problems into them when it is the other way around.
Remember, design patterns by convention have a typical problem to solve; you can even combine design patterns to tackle other, bigger problems. This is kind of typical in service-oriented architectures; just see some of the SOA patterns there are.
Look for them in the wild
There are plenty of open source projects where you will find applied design patterns. One example that comes to mind is Joomla: you will find singletons, observers. GUI libraries will have the decorator pattern, command pattern implemented, and maybe even flyweight.
There are other patterns such as data patterns, for example the Doctrine Project alone has used, the active record pattern (1.x), entity manager pattern(2.x), unit of work, repository, query object, metadata mapping, data mapping, and other more general ones like the strategy pattern, and decorator pattern.
There are just so many interesting solutions to choose from. See Martin Fowler's Patterns of Enterprise Application Architecture; there are also data model patterns.
Just learn them for when the time comes
Learn them, know them, obsess over them, and when the time comes you'll know how to solve programming problem X; you will be a better programmer already by that time.
Become an architect
I'd say that being able to think in pattern terms to solve problems effectively turns you into a software architect. Even if you don't want to be a software architect per se, your solutions will have more technical quality, be cleaner and have better scalability (in terms of design) by default.
Design Patterns were not taught when I was in school. And, for most of my programming career, I've worked with legacy, non-object oriented code. In recent years, I've tried to learn them because they sound like such a good idea. However, I must confess that every time I've ever picked up a book or tried to read a tutorial on the subject, my eyes have glazed over and I've not really learned anything practical about them at all.
I can't believe I just admitted that in public. I think I've probably just lost any geek cred I may have established over the years.
Are they really used by the majority of programmers?
I would guess yes. In ADO.NET there is a DataAdapter class, to give a simple example, though the patterns may vary depending on what area you want to specialize in.
Speaking from experience, I've had some troubles while programming, things I could not solve for a while, but Google and some hours of research solved my problem. If somewhere on the web I find a way to solve my problem, is this a design pattern? Am I using it?
No, that isn't a design pattern to my mind. A design pattern tends to have some arrangement of classes and methods that define the recipe of a pattern.
I'd prefer to think of what you did there as a common practice. Beware of copy-and-paste coding, though.
And also, do you (programmers) find yourself looking for patterns (where am I supposed to look btw?) when you start the development? If so, this is certainly a habit that I must start to embrace.
Sometimes as I see the same code over and over, I may find a way to refactor the code into a pattern, or if I remember a solution to a similar problem involving a pattern, I'll take it out and use it. Head First Design Patterns has more than a few patterns in it, while Refactoring suggests coding practices that may lead one to find various patterns. If you want another possible starting point, look at Microsoft's Patterns and Practices.
Yes, most programmers I've ever encountered use the most common pattern there is: the Big Ball of Mud. They usually start out with well-designed architectures, but usually end up here, especially if they start to think "we must use design patterns" all over the place and refactor mercilessly.
The link made my day – dukeofgaming Mar 29 '12 at 7:43
Ouch. That web-page needs some text formatting. – Nailer Apr 24 '12 at 14:39
Generally speaking, no. There are times when patterns emerge from my code, but in general, I don't look for them and I certainly don't say "Oh, the Bridge Pattern would solve my problem!".
Here's the thing. Most patterns are abused and misused without people ever considering whether they're good design. Patterns are not atoms. Code is not composed of some permutation of patterns. Not to mention that not all patterns are actually good ideas, or that some languages have language-level solutions that are vastly superior to some patterns.
+1: I agree that patterns sometimes are misused. Sometimes I see code where the programmer has used a pattern just because he or she knew it and thought it was cool, but that made the code just unnecessarily complex and hard to read. Like cracking a nut with a bulldozer. Regarding language-level solutions, isn't a language-level solution just a pattern that is directly supported by the language? Or what is the difference? – Giorgio Mar 28 '12 at 18:40
@Giorgio: Framed another way, some patterns are used to work around the fact that the language lacks a way to express a certain thing neatly. – Daenyth May 16 '12 at 16:38
As implied by the answer pdr linked: design patterns are names given to things people were doing anyway, intended to make it easier for people to discuss those things.
In general it's worth learning them when starting out, because they give you some insight into solutions people have found to work, so you can build on years of experience, and trial & error.
The discussion of motivating problems included with patterns may give you insight into good ways to attack your problem in the first place, but unless your pattern knowledge lets you recognize there is a well-known existing solution, you still need to focus on just solving the problem first.
If it turns out to use one or more existing patterns then great, you have ready-made names that will make it easier for others to understand your code.
+1 for a significant summation of what I wanted to say: Focus on the problem, and then, and only then, if the problem looks like one a pattern solves, apply the pattern. – Joshua Drake Mar 28 '12 at 12:39
+1 for stating the true fact that design patterns are names given to things people were doing anyway. – miraculixx Dec 20 '12 at 0:27
Whether one recognizes them or not, most programmers do use patterns.
On the day to day work however, one does not start programming with a pattern in mind - one recognizes that a pattern has emerged in code and then names it.
Some of the common patterns are built into some languages - for example, the iterator pattern is built into C# with the foreach keyword.
Sometimes you already know the pattern you will be using as a common solution to the problem at hand (say the repository pattern - you already know you want to represent your data as an in-memory collection).
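The same point holds in other languages: in Python, for instance, implementing the iteration protocol makes any class consumable by a plain for loop, so the iterator pattern rarely needs a hand-rolled iterator class. A sketch with invented names:

```python
class Playlist:
    """Iterable by delegating to the built-in list iterator."""
    def __init__(self, songs):
        self._songs = list(songs)

    def __iter__(self):
        return iter(self._songs)  # the language supplies the iterator object

titles = [song for song in Playlist(["a", "b"])]
print(titles)  # ['a', 'b']
```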
That's pretty much what I was going to say, patterns are a language to help programmers communicate design and implementation ideas, not a set of programming lego bricks which can be slotted together to implement a system. – Mark Booth Mar 28 '12 at 10:08
@DeadMG Why is that? – Kris Harper Mar 28 '12 at 10:50
@DeadMG: I must have been blinded by the blindingly stupid idea - I can't see why you think it's stupid ;-) – Treb Mar 28 '12 at 11:48
@root45 - You see friend, the foreach construct makes programming too easy. How can one feel superior to others if a complex task like iterating over a collection is easy? I for one still code in assembly for all my CRUD apps. Clearly the sentiment of @DeadMG is one of pure pragmatic genius. – ChaosPandion Mar 29 '12 at 1:14
@DeadMG - LINQ wasn't around when .NET 1.0/1.1 was. yield didn't exist then either. Do you think that breaking old code by removing a keyword is a better option? – Oded Mar 29 '12 at 8:29
I have a very basic game loop whose primary purpose is to check for updates & changes to a list.
I have contemplated using event driven programming to replace the game loop/list idea with an ObservableCollection, however I just have this big cloud of doubt on event driven programming. I'm posing these questions to those with experience with event driven programming:
1. What are good ways to test & build an event driven programming design?
That is, how will I know that I cannot run into "a sequence of unfortunate events"? I want to avoid the events I didn't plan for.
2. Are there unit tests schemes to test event driven programming?
Because I have little reputation at the time of asking this question, I can't post more than two links: Observable collections are found at:
2 Answers
I will leave aside the wisdom (or not) of this particular design for a game engine, and address the core concern about the testability of event-driven designs.
When you state
I want to avoid the events I didn't plan for.
I presume you mean a sequence that you did not plan for, like Create/Delete/Update instead of the expected Create/Update/Delete, where the former might try to update a non-existent object and therefore crash or otherwise fail.
The entire nature of event-driven programming has to actually presume a near-random order of events, and so effectively your pre- and post- conditions become very important. Since you cannot fully qualify the order of events, you need to be robust against accidental ordering. So long as your Delete leaves no stray pointers, then a delayed Update should unambiguously fail, or at least have some well-defined behavior.
As far as unit-testing goes, most of the common suggestions would still apply. Generate test cases for various boundary conditions, as well as common cases to try and achieve appropriate coverage and interaction testing.
Perhaps in your game, I should be able to PickUp an item, which would add it to my inventory. If I attempt to PickUp an item which was just destroyed for some reason (perhaps an ogre stepped on it), I should not somehow get an undamaged object due to a race condition. Either I got the object before it was crushed, or it is now gone, but nothing in between. If each event is atomic with respect to game state, then you won't get a partial object someplace it should not be.
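A minimal sketch of that idea (Python, with all names invented - not from any real engine): each handler checks its preconditions against current game state, and destruction is idempotent, so a stale PickUp fails cleanly:

```python
# Sketch: each event handler validates game state before acting, so a
# stale PickUp arriving after a Destroy fails cleanly instead of
# producing an undamaged object out of thin air.
class World:
    def __init__(self):
        self.items = {}        # item_id -> item data
        self.inventory = []

    def spawn(self, item_id, data):
        self.items[item_id] = data

    def destroy(self, item_id):
        self.items.pop(item_id, None)   # idempotent: safe if already gone

    def pick_up(self, item_id):
        item = self.items.pop(item_id, None)
        if item is None:
            return False                # precondition failed: item is gone
        self.inventory.append(item)
        return True

w = World()
w.spawn("sword", {"dmg": 7})
w.destroy("sword")                      # the ogre steps on it first
assert w.pick_up("sword") is False      # stale PickUp: well-defined failure
assert w.inventory == []
```

The key property is that every handler has a well-defined outcome for every state it can observe, not just for the ordering the designer had in mind.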
Good Luck
you presumed correctly, i.e. "sequence that you did not plan for, like Create/Delete/Update...". because of this post, I'm thinking about leaving my game loop idea in place. – Maelstrom Yamato Feb 12 '13 at 19:38
if you don't mind, please share your thoughts on a game engine. – Maelstrom Yamato Feb 12 '13 at 19:49
Consider reading the original REST paper - a robust event system can take advantage of concept like statelessness of the messages and idempotency. – ptyx Feb 12 '13 at 21:36
@ptyx do you have a link to the REST paper? – Maelstrom Yamato Feb 14 '13 at 15:31
I used XSLT as a general purpose programming language for 6 years at work. XSLT is not exactly "event" driven, but it is input-driven - meaning that there is no pre-defined order of execution or limited number of paths that the execution of the code can take. Rather, each node in the input data tree triggers some code to be executed. It's totally data driven, which I imagine is similar to your event-driven model.
Anyone who has spent a few years writing software for a large organization knows that there is no end to the variety of data that gets into big systems. Every time you account for one condition, a new one pops up that no-one ever imagined. When your input data defines the order of execution of your code, then the paths you need to test are equal to the total number of data conditions that your code can be exposed to - a number which grows daily. The people using XSLT to write batch data crunching processes would be paged repeatedly at home, at night, pretty much every night as new data conditions were "discovered" by the program.
When you let your data drive your software, you will often run into a "sequence of unfortunate events." Fault tolerance is a big deal. Emergent behaviors rule the day. Limits are your friends. Obviously for a game universe to feel real, the fewer limits, the better. You must choose your limits very wisely. After all, a game is art, and an important part of any artistic statement is its limits.
Speaking of art, John Cage wrote a piece of music, "Four minutes and thirty three seconds" where the performer sits, making no sound, for that length of time. The point of this artistic composition, as far as I'm concerned, is not the picture, but the frame. Everything has limits. You are making a game about one thing, not another thing. It is one game, not another game. The better you can choose limits, and the more appropriate those limits are for your art/game/project, the easier it's going to be, and the better the experience will be for your users/gamers.
All software (like all music and art) has limits. Choosing those limits wisely may be the single most determining factor of the success of any software design. But with event- or data-driven programming, this is perhaps even more critical, if such a thing is possible, because every bit of non-essential flexibility that you allow in such a system will punish your development team with bugs and unimaginably complicated testing. First make as many limits on your event-driven system as possible, and only remove or extend a limit when doing so has a major, positive effect on the system as a whole.
Unit testing (generally black-box testing) usually takes the form of, "given certain inputs, does the software being tested return the expected outputs." For a very versatile event-driven system, you need a lot of variety in those inputs. Maybe you could capture all the interesting kinds of input data that you've seen in your game so far and make tests from those. Maybe you can add a new test whenever you encounter a new and interesting data condition. But with a sufficiently complicated system, there is no way to test every data condition, only the currently known conditions.
Maybe it would be good to build that kind of sanity testing (bounds-checking, things null or not null, no exceptions, etc.) into some sort of logging utility that would alert you when your method sees something new in the wild?
Hopefully a statistician will answer your question too and tell you how to cover a high percentage of the possible data conditions in a meaningful way. But until then, choosing the limits of your data- or event-driven software wisely is the only way I know to control the complexity of a meaningful set of unit tests.
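One way to make that "test the known conditions" idea concrete (a Python sketch with a toy event model - none of these names come from your code): enumerate permutations of a small event set and assert your invariants after each ordering, instead of hand-writing one test per sequence:

```python
# Sketch: instead of hand-writing one test per event ordering, run every
# permutation of a small event set and check invariants after each one.
from itertools import permutations

def apply_events(events):
    """Tiny model: a store whose entry must not survive a later delete."""
    store = {}
    for op in events:
        if op == "create":
            store["x"] = 0
        elif op == "update":
            if "x" in store:          # precondition guard
                store["x"] += 1
        elif op == "delete":
            store.pop("x", None)
    return store

def test_all_orderings():
    for order in permutations(["create", "update", "delete"]):
        store = apply_events(order)
        # Invariant: if delete came after create, nothing survives.
        if order.index("delete") > order.index("create"):
            assert "x" not in store, order

test_all_orderings()
print("all orderings pass")
```

Three events give only six orderings; the point is that the invariant, not the expected sequence, is what gets asserted, so newly discovered orderings are covered automatically.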
@Proceedings {export:177506, abstract = {
Modern attackers increasingly exploit search engines as a vehicle to identify vulnerabilities and to gather information for launching new attacks. In this paper, we perform a large-scale quantitative analysis on bot queries received by the Bing search engine over month-long periods. Our analysis is based on an automated system, called SBotScope, that we develop to dissect large-scale bot queries. Specifically we answer questions of “what are the bot queries searching for?” and “who are submitting these queries?”. Our study shows that 33{\%} of bot queries are searching for vulnerabilities, followed by 11{\%} harvesting user account information. In one of our 16-day datasets, we uncover 8.2 million hosts from botnets and 13,364 hosts from data centers submitting bot queries. To the best of our knowledge, our work is the first large-scale effort toward systematically understanding bot query intentions and the scales of the malicious attacks associated with them.
}, author = {Junjie Zhang and Yinglian Xie and Fang Yu and David Soukal and Wenke Lee}, booktitle = {the 20th Annual Network and Distributed System Security Symposium (NDSS) 2013, to appear}, month = {February}, publisher = {Internet Society}, title = {Intention and Origination: An Inside Look at Large-Scale Bot Queries}, url = {http://research.microsoft.com/apps/pubs/default.aspx?id=177506}, year = {2013}, }
For anyone that has been looking at posts over here, or carefully listening to Reserve Bank speeches, the topic of the real exchange rate is an important one for understanding the New Zealand economy. Many of the “concerns” or “issues” being raised at present are really just a function of some view of the real exchange rate. Via the RBNZ we have a graph of the real exchange rate (RER) here:

Now this drives the question: what has caused the change in the real exchange rate - what shocks have we experienced that have pushed it up, and what proportion of the increase was due to these shocks? Chris McDonald at the Reserve Bank decided to have a go at answering that question. With so many factors driving the dollar, “causation” is hard to appropriately apportion between causes - and so his primary focus is on the correlations and their magnitude, albeit within a framework that will help to show what the more important drivers are. So what is his conclusion:
• International factors relevant to New Zealand explain more (60 percent) of the exchange rate variance over our sample than idiosyncratic and domestic factors.
• The most important international factor is likely to be export commodity prices, though our empirical analysis is not conclusive. For instance, high commodity prices can explain why the exchange rate is at current high levels. But, high commodity prices may be partly a result of current low foreign interest rates.
• The best domestic indicator for the exchange rate is house price inflation. While this indicator also reflects international factors, its movements over and above the impact of these appears to capture some key domestic information for the exchange rate.
Now this doesn’t tell us anything about the key issue of the New Zealand dollar being “persistently overvalued” or not. But it does indicate the commodity prices have been a major driver of the increases we have seen. On top of that another interesting point was raised:
The RER response to the other domestic shocks suggests some of them may not be well identified. Notably, an unexplained fall in the 90-day interest rate and an unexplained fall in the output gap both have little impact on the RER. Practically, we expect these shocks to cause quite large movements in the RER. However, once we allow for the correlation of these variables with the international and New Zealand real house price inflation variables, these shocks (despite being not so well identified) have a relatively small impact on the results.
So within this decomposition, the impact of a monetary policy shock (a change in the 90-day bill rate) or an exogenous change in AD (a fall in the output gap) are poorly identified - and seem to have little impact on the RER. The author believes this result doesn't pass the smell test, which is fair enough - after all, the author has the best knowledge of what their empirical model is saying (especially since no empirical results are included). However, if we were to take it at face value it would suggest that the RBNZ's ability to actually change the RER with monetary policy, even in the relatively short term, is limited.
Here is our problem:
We have several webservers that should be reachable from the public internet. However, the database servers that store the data for the web apps on those webservers should not have a public IP.
So, since I want to be able to connect to the SQL servers using ssh for example, and those servers need to talk with each other, I had this idea:
Webserver 1     Webserver 2     Database Server
     |               |                |
     +------------- vLAN ------------+
                     |
            Workstation (my PC)
My idea was that I can connect to the vLAN using PPTP so that I have access to all servers in that LAN, but the database server remains invisible to the public.
Is this infrastructure a good idea?
1 Answer
You just described a DMZ. There's no need for a VPN to get to this. Simple routing from the internal network to the DMZ is normally sufficient.
A typical company network looks like this.
Internet -- firewall -- dmz
               |
       protected networks
The only time I would resort to a VPN is to access the protected networks or management services/ports on the DMZ servers from somewhere on the internet, like from home.
But what do I do, when my LAN is available to quite a lot of people, that do not necessarily are employees at my company, but friends / freelancers etc? I do not want to expose my servers to my LAN so that everybody who is in the LAN can just connect to the servers. – Sebastian Hoitz Feb 23 '10 at 20:07
Well non employees shouldn't be on the company LAN. But that's another issue. You could set appropriate firewall rules for accessing the servers in the DMZ. Such as only allowing ssh from LAN to DMZ and setup the computers to only allow publickey authentication and limiting what IP addresses can access the DMZ. – 3dinfluence Feb 24 '10 at 0:26
Where I work we have a separate guest network coming off our firewall which only allows access to the internet. If you have equipment capable of vlans you should be able to put all the company network ports on a company network...maybe put a wireless AP on one too. Then put another AP and the rest of the ports on a guest network. – 3dinfluence Feb 24 '10 at 0:29
I am trying to log in to MySQL via the terminal and phpMyAdmin; it says it cannot access localhost.
Below is the return message when trying to login via the terminal
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
root@x27:/var/lib/phpmyadmin# mysql --user=root -pass root
I installed the mysqladministrator GUI too.
have you set a password for root? – Alistair Prestidge Jun 15 '10 at 15:50
If nothing else works you could simply reset the root password. dev.mysql.com/doc/refman/5.0/en/resetting-permissions.html – Zoredache Jan 27 '11 at 4:35
1 Answer
You would need to reset the root password - it looks like you supplied an incorrect one.
Is there any free/open-source software for Windows (desktop) which will save historical CPU/net/IO usage and let me see charts based on this historical data?
3 Answers
Well there are two ways you could do this.
Your first way is to use tools built into windows already. Just go to Start > Run and enter
Performance Monitor should open up and you can follow this link to learn how to create a log:
Click Here
The other tool you could use which is free/opensource is nagios which relies on snmp data. Make sure to go to Start > Run and enter
And make sure SNMP trap service is running. Make sure to also configure your snmp community string to something other than "public"
For information on setting up snmp on your Windows box you can follow this link:
Click Here
Performance monitor built into Windows is by far easier to use and will likely have all the data that you require. Nagios is far more extensive but is also a bit more to do config-wise.
Just to clarify a point. Nagios is not based on SNMP even though it can be configured to use SNMP. – Khaled Feb 18 '11 at 19:00
Sorry but I've forgotten that this question is related to Windows XP – przemol Feb 22 '11 at 9:57
Enabling SNMP and collecting with MRTG or any of its RRDTool-based successors. Of course, Performance Monitor (built into the OS) will do it too.
You could check out nagios. This is a terrific tool that can handle what you need plus quite a bit. It does require some time but once it is up and running it is rock solid.
I've got a Windows 2008R2 standard server running DHCP services. We've noticed that certain clients are receiving inconsistent DHCP replies. We have over 175 Windows workstations in this VLAN that don't seem to have trouble getting DHCP leases. However, PXE-booting clients trying to reach our DHCP server are able to get a lease only inconsistently.

Additionally, we tried using the "dhcping" tool against our DHCP server and found that roughly two of every three requests time out with "no answer" - and this holds true when we set the timeout value on dhcping to 20 seconds. After a failed attempt, however, we may get a DHCP lease reply immediately with dhcping. This leads me to believe that this issue isn't confined to PXE-booting clients, but is something more systemic with my LAN layer 2 or DHCP, and that possibly my 175 Windows clients are experiencing this in some form without my knowledge.

We have over 30% of our scope available, so the addresses are there. I was unable to find anything in the Windows server "DHCP-Server" log. Of course, my goal is to have my DHCP server reply to every request that it receives on the LAN!
Have you enabled DHCP logging:technet.microsoft.com/en-us/library/dd183684%28WS.10%29.aspx – Guido van Brakel Apr 12 '11 at 20:14
3 Answers
Check your switches and routers for DHCP snooping options. Snooping can rate limit DHCP requests and responses.
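If your gear supports it, the relevant knobs look roughly like this - a hedged sketch in Cisco IOS-style syntax, where the VLAN number, interface names, and the 15-packets-per-second rate are all invented placeholders; check your own platform's documentation:

```
! Sketch only - adapt names and numbers to your own switch platform
ip dhcp snooping
ip dhcp snooping vlan 10
!
interface GigabitEthernet0/1
 description uplink toward the DHCP server - trusted
 ip dhcp snooping trust
!
interface GigabitEthernet0/2
 description client port - untrusted, rate-limited
 ip dhcp snooping limit rate 15
```

An overly aggressive rate limit would produce exactly the symptom described: some DISCOVER packets answered, others silently dropped.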
As you mentioned you were using VLANs, I am assuming your PXE vlan & Windows vlan are separate... and/or you have 1 DHCP server serving the various VLANs with their appropriate address. Have you looked into the appliance/software package performing the Relaying? (DHCP doesn't traverse routes without a relay. Some appliances call these a "helper" service) Perhaps your helper service isn't configured properly or is having troubles keeping up.
This is all occurring within one VLAN. The IP network is a /23. I did a quick glance at the DHCP scope options and server configuration -- everything seems rather default, no extra options or bells/whistles. – verbalicious Apr 12 '11 at 19:20
If they're all in the same VLAN, then it points to networking issues. Do you see any other network problems? (packet-loss, congestion, mis-configured QoS, frequent bad CRC checks... etc...) – TheCompWiz Apr 12 '11 at 19:27
My suggestion would be to run a packet capture on the DHCP server and look for DHCP packets coming in to the DHCP server, starting with the DHCP Discover packets. Try to key on one client so that you can identify the packets that are captured. If you don't see DHCP discover packets reaching the server from the MAC address of the client you're keying on, then the packets are likely getting lost in the network. If they come in but don't go out, then it's a server/service problem. If they come in and go out but don't make it to the client then it's a network or client problem.
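For example, on the server you might narrow the capture to DHCP traffic from the one client you are keying on. These are filter sketches only - the MAC address is a placeholder:

```
# Wireshark capture filter - DHCP traffic only:
udp port 67 or udp port 68

# Wireshark display filter - one client's transactions (placeholder MAC):
bootp.hw.mac_addr == 00:11:22:33:44:55
```

With the capture narrowed this way, the Discover/Offer/Request/Ack exchange (or its absence) for the problem client is easy to see.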
I'm trying to run a program that requires glibc 2.7, but I'm running it on CentOS 5.5. Is there any way to do this? I can't upgrade to CentOS 6.
You're going to have to upgrade or see if the program can be compiled against the older glibc in CentOS 5 – ewwhite Mar 22 '12 at 1:23
2 Answers
Hmm. glibc 2.5 is a dependency on pretty much everything in CentOS5. If you change it to glibc 2.7, your box will explode.
Here's some discussion over in the CentOS forums:
I ran into this issue a couple of times, most recently a while back using Snort. Attempting to upgrade to 2.7 will definitely blow up your box. I rendered my test system unbootable when I tried this previously. Your best bet is to upgrade if at all possible. Failing that, many applications can be recompiled against glibc 2.6. If you have to deal with a no-install-from-source policy on your production infrastructure, take a look at how to build RPMs. A lot of products either include a spec file or there is one publicly available.
Did you have a problem with your keyboard when writing the first part? – CommunistPancake Mar 22 '12 at 2:03
New mobile phone. Autocorrect needs some adjustment. – Tim Brigham Mar 22 '12 at 2:13
I want to set up a rewrite so that the URL www.example.com serves the site at example2.com/www, but without changing the URL in the browser to example2.com/www/.
So after entering http://www.example.com in the browser, I will see the site under http://example2.com/www, but the browser will still show http://www.example.com.
I tried something like:
RewriteRule (.*) http://example.com/www [L]
I put this in the virtualhost config of www.example.com; it works, but it changes the URL.
Is there any different way to rewrite http and https in this manner?
EDIT: I'm using LiteSpeedBalancer, and www.example.com is a virtualhost only to show example.com/www (so no content is under www.example.com; this domain exists only to rewrite to example.com/www).
It's complicated, and at first I made the example with www.example.com rewriting to example2.com/www to make the point that those are different virtualhosts.
So you want to display some other web site's content as if it were your own? I don't think so. – Michael Hampton Aug 17 '12 at 5:45
Both this website are on my server ... and to be honest it wasn't my idea (but developers :/) and I made example2 only to be clear that this are two websites. I will change this – B14D3 Aug 17 '12 at 6:00
Perhaps you should also explain what it is the developers are really trying to do, so we can tell you exactly why it's impossible. :) – Michael Hampton Aug 17 '12 at 6:05
What's wrong with ServerAlias? Or does it need to have an HTTP host header that includes the www? – Shane Madden Aug 17 '12 at 6:09
2 Answers
A workaround is to use frame forwarding in your control panel.
could you expand your answer for more details – B14D3 Aug 17 '12 at 7:06
It is possible to achieve what you describe with mod_proxy but as others have commented you need to ask yourself "Why?". This would be a whole lot simpler to virtual host on example2.com
In any case, you will need the mod_proxy, mod_proxy_http, and possibly mod_proxy_html modules loaded
Then something like this in your Apache config
# turn the general proxy off
ProxyRequests Off
# pass requests for / to the backend /www
ProxyPass / http://example2.com/www
# fix and redirect headers from the back end
ProxyPassReverse / http://example2.com/www
# fix any domains in cookies from the backend to the frontend.
ProxyPassReverseCookieDomain example2.com example.com
# fix any cookie paths from /www to the front end /
ProxyPassReverseCookiePath / /www
Then if the html on the backend site still manages to bring back the /www then you can open your can of worms, load mod_proxy_html and try:
ProxyHTMLEnable On
ProxyHTMLURLMap /www /
There's a lot more you can do with mod_proxy_html in the config guide
So basically my question is fairly straightforward (and I apologize if this is a duplicate question): how does my ISP find and connect to a server using just that servers IP address?
I understand basic networking concepts but after I enter a URL (say 'google.com') and its resolved into an IP address, what exactly does the ISP do with the IP address to physically connect to the (in this case google's) server?
The way I understand it, the internet is a very, very complicated network of computers, so does it use similar methods as my local router? Like, for example, when I request a local address such as, my router connects me to the machine on the network with the IP address that it has associated with a MAC address, which in turn is associated with either a physical LAN port or is broadcast to the correct wireless device. Is this also how the internet works, just on a much, much larger scale? Or do I have the basic local concept of a network screwed up as well?
Sorry if it sounds like I'm rambling but I've always wondered how exactly this works.
closed as not constructive by Sven, EEAA, Brent Pabst, HopelessN00b, Wesley Oct 1 '12 at 3:14
You expect someone to answer how the internet works in the small textarea below. You're having a laugh. – Ben Lessani - Sonassi Sep 30 '12 at 23:18
Of course not. I completely understand how complicated the internet is. I was just wondering how my ISP goes from an IP Address to a physical machine. – Brandon Sep 30 '12 at 23:25
It usually doesn't. If it knows where the machine is (usually a local IP) then it routes to it, else it gets send to the default gateway. (and yes, I know I simplify things here. And ISP will have multiple routes, more and less expensive routes etc etc) – Hennes Sep 30 '12 at 23:28
@Brandon - the same way your local network does. It tries its own routed subnets, failing that, passes upstream to its gateway. Until it gets to an ASN, at which point it will look at its route table (of the entire internet) and pick the most appropriate route to the router responsible for the next hop. – Ben Lessani - Sonassi Sep 30 '12 at 23:30
2 Answers
The very short answer is:
Your ISP does it in the same way your network does it.
The long answer would be quite long, and large part of that have already been written.
I suggest you start with this post on subnetting. Once you understand how (IPv4) routing works you can imagine a small ISP with a similar setup, or a large ISP with several links to other providers.
Next, read up on routing cost and how that gets automatically implemented. Articles such as This Wikipedia article on the border gateway protocol will help.
For anything more than that you best buy a nice thick book and reserve a weekend.
Thanks, that wiki article was essentially what I was looking for. – Brandon Sep 30 '12 at 23:34
I think the fundamental concept here that you must first grasp is that the Internet is a packet-switched network, unlike a telephone network which is circuit-switched. This means that a circuit is never actually established between your server and the remote one, nor dedicated to it.
Instead what happens is that you send a packet of data out to your ISP with an address. Much like the postal service, the ISP's routers inspect the beginning of the packet's destination address to figure out in which direction (eg. to which other router, possibly at another ISP) to send it. It goes through this step (known as a hop) repeatedly until it reaches its destination.
However, it is entirely like relaying mail, and entirely unlike making a phone call. The establishment and teardown of a connection is entirely virtual; it does not correspond to physical connections.
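As a toy model of that per-hop decision (a Python sketch - the prefixes and next-hop names are invented for illustration): each router scans its table for networks containing the destination and forwards toward the most specific match, falling back to the default route.

```python
# Toy longest-prefix-match lookup, the decision every router repeats per hop.
# Addresses and next-hop names are invented for illustration.
import ipaddress

ROUTES = [
    (ipaddress.ip_network("0.0.0.0/0"),       "default-gateway"),   # last resort
    (ipaddress.ip_network("203.0.113.0/24"),  "peer-isp-router"),
    (ipaddress.ip_network("203.0.113.64/26"), "customer-edge"),     # more specific
]

def next_hop(dest):
    addr = ipaddress.ip_address(dest)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    # The longest prefix (largest prefixlen) wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("203.0.113.70"))   # matches the /26 -> customer-edge
print(next_hop("203.0.113.5"))    # matches only the /24 -> peer-isp-router
print(next_hop("8.8.8.8"))        # no specific match -> default-gateway
```

Real routers do this in hardware with tables of hundreds of thousands of prefixes, populated by protocols like BGP, but the lookup logic per packet is the same idea.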
So how does the ISP know which direction or other ISP to send the packet in/to? Is there some sort of global registry or something similar? – Brandon Sep 30 '12 at 23:28
It, too, is distributed. Every router has a routing table which hints the direction things go according to their prefix. You have a default route (if it doesn't know where else to send it, it sends it to the default gateway), and possibly several other routes (eg. Germany can be accessed through router X). – Falcon Momot Sep 30 '12 at 23:32
I bought my domain from namecheap a week or so ago. Today, I bought hosting from nearly free speech and have built my site. I've uploaded my site to the nearly free speech servers.
I can access my site at mysite.nsfhost.com. On Nearly free speech, it says that the associated domain is mysite.com
But I didn't change anything on the namecheap side of things, and I have no idea how to point mysite.com to the nsfhost.com
closed as not a real question by RobM, mdpc, Dave M, Khaled, James O'Gorman Apr 1 '13 at 16:33
Who is going to host your name service? Nearly free speech, namecheap, or someone else? – David Schwartz Dec 25 '12 at 19:48
1 Answer
Namecheap has a DNS control panel. All you need to do is set up an A record for your domain that points to the IP address of your nsfhost server. There is a video on setting up an A record using the namecheap control panel here
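In zone-file terms the result looks something like this (the IP below is a documentation placeholder - substitute the address Nearly Free Speech lists for your site):

```
; point both the bare domain and www at the hosting IP
mysite.com.        IN  A  192.0.2.10
www.mysite.com.    IN  A  192.0.2.10
```

After the records are saved, allow some time for DNS changes to propagate before testing.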
I have just started a gig and have inherited a large collection of heterogeneous UNIX systems of the following flavors: HP-UX (11.11, 11.31), AIX, MP-RAS, Sun Solaris (8, 9, 10), Red Hat (AS3, AS4, AS5), and SUSE (9, 10, 11). The ideal, from management's point of view, would be to have all of these systems configuration-controlled and managed from, hopefully, one program. It is understood that each of these operating systems will have different base configurations. The items to be managed include patches, packages, and configuration files.
I have just started looking at cfengine, and looking at some type of do-it-yourself hybrid using subversion.
Management would prefer to use a commercial package if possible and was wondering if you all had any ideas of this type of application or vendors?
Thanks for your ideas, pointers, experiences, etc.
6 Answers
I highly recommend contacting the folks at Reductive Labs and getting a supported install of Puppet. Puppet can handle a huge range of platforms, and if management wants to spend money, Reductive will provide the experience of a commercial offering, and will give you more than your money's worth. The strength of Puppet in this kind of environment comes from two major things: 1) it has an abstraction library that does a great job of abstracting away platform differences and 2) it doesn't insist on being the sole source of truth, so you can do an incremental rollout -- very important in an environment of already-deployed systems.
I don't know of anything that will support something "out of the box" for such a wide variety of platforms. And I'm guessing that your management wants something commercial for that reason. Personally, I'd tell them to butt out, they're not the ones that'll be "managing" it anyway. But then, I'm from the old 'n cranky school of system administration. :)
Considering the diversity of your environment, you're going to end up doing a lot of tweaking and tinkering anyway, so IMHO, you're better off starting from an open foundation anyway.
Look at Capistrano, Chef, cfengine, Puppet, or if you're a Python guy, Fabric (which looks promising, but is still pretty young).
I would recommend the infrastructures.org site. It may be a bit outdated, but the concepts are solid. Think about your infrastructure as a whole, which will have a lot more pieces than just configuration management software. Their checklist is a good starting point - using a VCS, gold server, directory server, monitoring, etc. are all pieces of the whole solution.
Ideally, you should be able to plug a new server into your network, add it to a central configuration file, and boot it up to have the OS and packages automatically installed and configured without manual intervention. In practice this takes a lot of work, and there are usually rough edges, but it's a goal.
I see two projects here: the first is building out the holistic management system being asked about here; the other is standardization. Even if you need to stick with all the different base operating systems, for whatever reason, you need to get a handle on the release proliferation. Before too long, if not already, you will run into completely unsupported platforms.
Look for the support windows from each of those vendors and get a handle for how quickly you should begin migration.
For example: RedHat, SuSE, Solaris
If you have the budget, one recommendation would be to use one of the IBM Tivoli System Automation products. From their website:
One of the key management capabilities that the product family offers is a single point of control for managing heterogeneous high availability solutions and IT service dependencies that can span Linux®, AIX®, Windows®, Solaris® and IBM z/OS®.
Unfortunately, they don't list HP-UX, although if you're wanting to manage access and security, you could look at the IBM Tivoli Access Manager, which does support HP-UX as stated on their website:
Manages and secures business environments from your existing hardware (mainframe, PCs, servers) and operating system platforms, including Windows®, Linux®, AIX®, Solaris, and HP-UX.
Disclosure: I don't have any experience with Tivoli Access Manager, although I was formerly an IBM employee as part of the pSeries / System p development team.
Since you started to have a look at cfengine, and your management wants a commercial package, try Nova (the commercial version of cfengine): http://www.cfengine.com/nova
Same as cfengine 3, with extra features (database management, ldap connection, extra reporting, monitoring, etc).
I need to move dozens of static websites (plain html) from Windows/IIS to Linux/Apache. As you may know, Linux is case-sensitive and I'm pretty sure there may be hundreds of html files with file references in one case and the referenced file in another case :(
Is there a tool that will check/fix this (by fixing the reference or renaming the files on the filesystem)?
Thanks! JFA
Thanks all for the answers. I'm going to rename all files to lower-case and I'm going to use mod_rewrite (which I had no idea existed). Thanks again! I Love ServerFault! – JFA Oct 7 '09 at 15:24
5 Answers 5
up vote 1 down vote accepted
You may want to investigate simply enabling mod_speling. It can handle most of the case issues for you. If having the correct case isn't that important to you you could just enable this and move on.
Hey Zoredache thanks for the mod_speling tip! This is what I finally used as mod_rewrite was working properly but what if some remote users still upload stuff with case inconsistencies...(I'd have to then modify files to lower-case and so on...) This mod_speling is just what the doctor ordered! Thanks! It works beautifully! – JFA Oct 7 '09 at 22:51
If you feel this or another answer is the best you should consider accepting it. – Zoredache Oct 7 '09 at 23:30
Renaming the files to lower case isn't enough, because inbound links or bookmarks that contain uppercase letters are going to get "404 - file not found" errors. Rename the files, then use mod_rewrite to force URLs to lower case:
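One way to do that is mod_rewrite's internal `tolower` map. A minimal sketch, assuming Apache 2.x with mod_rewrite enabled (note that `RewriteMap` must live in server or virtual-host config, not `.htaccess`):

```apache
RewriteEngine On
# Internal map that lowercases whatever is passed to it.
RewriteMap lc int:tolower
# If the requested path contains an uppercase letter, permanently
# redirect to the lowercased equivalent so old links keep working.
RewriteCond %{REQUEST_URI} [A-Z]
RewriteRule (.*) ${lc:$1} [R=301,L]
```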
As far as I know there isn't a tool specifically designed to do this, however, if you want to delve into Linux a little bit, I would suggest creating a script using awk, sed and possibly grep to search for and replace all of the incorrect entries. However, this method might still take a while especially if you don't have much experience with the command line or shell scripting.
Alternatively, I believe IDEs such as Dreamweaver and whatnot have the ability to rename a file and update all of the references, so you may look into that. It means that you would have to do it for every file, but it might still be faster than trying to write a script that would catch all possible cases. Aptana is another IDE that might do this, although I have not used it for said purpose.
I would just use a shell script, Perl, or the rename command to rename them all to lower case. This Stack Overflow post has a bunch of different methods, including shell and Perl, but most of those will actually break if the file names contain spaces...
I would do the following with zsh, although not the fastest option probably:
for file in **/*.html; do
  if [[ "$file" != "${(L)file}" ]]; then
    mv "$file" "${(L)file}"
  fi
done
Actually, most of those will break if spaces in the file name...sigh... – Kyle Brandt Oct 6 '09 at 19:51
You could serve your websites from a partition mounted in linux with filename translation set to case insensitive.
If you use a vfat (fat32) filesystem on linux you can mount it as follows
$ mount -t vfat /dev/XXX /var/windows/websites
Any filename inside /var/windows/websites will be treated the same way as windows does.
Thanks for this tip. I had no idea about this. Anyway, I think I'm going to try the other methods as I plan to usePOSIX ACL's and enable quotas on ext3. – JFA Oct 7 '09 at 15:22
The Last Supprrrrrr
by wootbot
2nd Place in Derby #310: Double-Take Derby , with 169 votes!
Cat 1: Hey, did you hear? It's the last sup-purrr!
Cat 2: Great! I was feline pretty hungry!
Cat 3: Are we going to a sit-down place? I've got somewhere to be, so I can't stay fur long.
Cat 1: Where are you going? Catmandu?
Cat 2: I bet she's got a man who's coming to whisker away.
Cat 3: It's none of your business so let's put a paws on this conversation, please.
Cat 1: Oooh, someone's getting catty!
Cat 2: What, did we stir up something unfortunate from your hiss-tory?
Cat 3: Nah, I'm just feeling a little Russian blue.
Cat 1: Aww, sorry you're feeling a litter down.
Cat 2: Yeah, swipe those tears away.
Cat 3: Shucks, I'd be lion if I said you two weren't the best friends a girl could have.
Cat 1: Wait, what were we talking about?
Fedora Core 5 Review 40
Posted by Hemos
from the whether-to-install-or-not dept.
Mark writes "A full review of the latest Fedora Core release, code-named 'Bordeaux': Fedora Core 5, which has proven itself to be one of the best Linux distributions out there."
• by gr8dude (832945) on Monday March 27, 2006 @08:43AM (#15001913) Homepage
This review seems to be nothing but a set of screenshots that illustrate the OS in one moment or another, meaning that it is just one of the many similar ones out there.
"Thanks to the Gnome Theme Manager it is also very easy to change and modify your desktop theme." As if this was some sort of a new boombastic feature :-)
I am still waiting for a review which can explain to a non-Linux person [such as myself] why the GUI is so slow. My guess is that the video card's hardware acceleration is not used. Other reviews were more helpful, and explained that this distro is not shipped with nVidia's or ATI's drivers. Moreover, quoting one of those reviews:
"The kernel that ships with the Fedora Core 5 release iso images is not compatible with third party 3D graphics acceleration drivers."
How is THAT supposed to NOT anti-attract a newbie?
Is there somebody who can explain things in a simple way?
• Single page version (Score:5, Informative)
by Shawn is an Asshole (845769) on Monday March 27, 2006 @08:52AM (#15001945)
For those who don't like to click through 5 pages to read an article, here is a link to the print version.
• by fizze (610734) on Monday March 27, 2006 @09:11AM (#15002021)
Take a look at 1203. It is a Live CD that is showcasing the latest developments of 3D-accelerated GUIs.
Just burn it, put it in your drive and boot your rig - how much more newbie-friendly can it possibly get?
PS: Here's a list of supported graphics cards:
• Re:Yum? (Score:4, Informative)
by Shawn is an Asshole (845769) on Monday March 27, 2006 @09:51AM (#15002235)
Is it so much to ask that the default setup is changed to apt?
Yes it is. Apt doesn't support multi-arch, which unfortunately is required if you're running a 64-bit processor (Real, Macromedia, etc. need to wake up and release 64-bit versions...).
You can use apt, several repositores still support it. Apt is still included in Fedora. In my experience, though, yum seems to handle conflicting repositores better.
I'm tired of Yum's idiosyncrasies. It's gotten better, but as of 2.3.2, yum has no local cache search, no download resuming, and still bombs out if it can't contact a repository.
That is a major annoyance. Especially if, like me, you're stuck with only dialup being available. It seems to be a little better in this release than previous ones, but it still needs work. Is it really so much to ask to be able to cache the repo data? Yes, I'm aware of -C.
• Re:Yum? (Score:4, Informative)
by grasshoppa (657393) <skennedy AT tpno-co DOT org> on Monday March 27, 2006 @09:55AM (#15002256) Homepage
1) If you had read the patch notes, or even the FA, you'd have realized that up2date has been replaced with pup. No, I'm not going to tell you the difference. You'll just have to figure it out for yourself.
2) Fedora also comes on DVDs, you may have heard of that. Also, anybody with at least one other NFS-capable server at home mounts the image over the network. It's the only way to fly.
• Re:Yum? (Score:4, Informative)
by HaydnH (877214) on Monday March 27, 2006 @09:56AM (#15002263)
I'm officially requesting /. change its name to /FUD!
"yum has no local cache search, no download resuming..."
The local cache for yum is located in /var/cache/yum/, if the file is already downloaded it will not download it again, it will only redownload the repomd.xml file again and continue. A useful distinction is the progress bars "###" & "===", the first is reading and the second is downloading.
yum is very strict about how it handles errors, and personally, if I was getting a kernel upgrade (or something else important) via yum I would definitely want it to be careful! This is mentioned in the YumTodont - the discussion linked from the YumTodont gives some good insight on the topic as well.
• Fixing Flash (Score:5, Informative)
by Kelson (129150) * on Monday March 27, 2006 @04:23PM (#15005395) Homepage Journal
For anyone trying to use the Flash plugin on Fedora Core 5, you may have noticed that it only shows images, not text.
It turns out that Flash has hard-coded the font paths and is still looking in /usr/X11R6/lib, but the new R7 X server doesn't use the X11R6 paths anymore. (The same problem will happen with any distro that uses's new modular X server)
You can work around the problem [] by creating /usr/X11R6/lib/X11 and symbolically linking to /etc/X11/fs and /usr/share/X11/fonts.
mkdir -p /usr/X11R6/lib/X11
cd /usr/X11R6/lib/X11
ln -s /etc/X11/fs
ln -s /usr/share/X11/fonts
Also, if you have SELinux running in enforcing mode, you need to allow text relocations on the Flash library.
chcon -t textrel_shlib_t /path/to/
With any luck, Macrodobe will fix both of these in an upcoming version of the plugin.
I found the solution in the comments on a Mozilla bug report. Remember, Bugzilla doesn't allow direct links from Slashdot, so if you really need to read the bug discussion, go to and search for bug 317655.
Fundamentally, there may be no basis for anything.
Comment: DEs are Important - Drop the Teacher (Score 1) 656
by Ropati (#43875397) Attached to: Ask Slashdot: How Important Is Advanced Math In a CS Degree?
As has been posted earlier mastering differential equations is an exercise in symbol manipulation, but the underlying equations are really important.
Mathematics is an ordering of nature via symbols. In the ordering of nature, Newton realized that most equations had a second level of ordering that described the original equation. These equations of differentiation and integration were achieved by making differencing ratios and approaching a limit. Differential formulas can be used in every field of science. They are used regularly in computer science, usually as an algorithm to optimize a process.
Learning to manipulate these equations in your situation is probably unnecessary. Understanding what these equations are used for in the real world is very useful. I suggest you consult Google for each equation's use in real-world situations, if only to give you some mnemonic for learning this stuff. (You can probably consult Google for the DE problem answers too.) If you know how the equation/formula is used in the real world, you might see a use for the same concept in a program, hence it is good for a degree in CS.
On the flip side, a good teacher should be able to make this stuff come alive and be far less dry than you make it out to be. Your academic career will flourish if you spend a lot more time researching your teachers for next semester. Consider a different institution if the student consensus is that there are no good math teachers where you toil.
Comment: How 20th Century (Score 1) 393
by Ropati (#37502982) Attached to: Ask Slashdot: Calculators With 1-2-3 Number Pads?
Why use keypads?
Justice Department Seeks Ebonics Experts 487
Posted by samzenpus
China's Nine-Day Traffic Jam Tops 62 Miles 198
Posted by samzenpus
from the living-on-the-road dept.
Posted by samzenpus
from the who-would-jesus-sue dept.
Study Says Your Personality Doesn't Change After 1st Grade 221
Posted by samzenpus
from the everybody-I-ever-needed-to-be-I-was-in-first-grade dept.
Man Repairs Crumbling Walls With Legos 106
Posted by samzenpus
The "King of All Computer Mice" Finally Ships 207
Posted by samzenpus
Comment: More than a little flawed (Score 1) 168
by Ropati (#31386234) Attached to: Wear Leveling, RAID Can Wipe Out SSD Advantage
Henry Newman may know SSD drives but he doesn't know enterprise storage. Henry, enterprise shops don't talk about MB/s unless they are streaming video or working on their laptop.
All IO in a storage-networked enterprise is random. Most important IOs are usually small block (databases). There is no concept of MB/s of bandwidth except to gauge channel capacity. Anyone who does enterprise storage works in IOPS. SSD drives smoke for random IOPS to the tune of 50x for writes and 200x for reads (MLC vs. same-size 15k RPM drives). These are significant numbers. Even if we lost half the write IOPS to wear leveling, that would still be 25x faster. Want your database to scream?
RAID controllers will only be able to do RAID 10. Most RAID controllers can do RAID 10 in their sleep. The bottle neck will now be the channels in and out of the controllers. The first roll out of SSD storage in the enterprise will be direct attached SSD trays to bus attached controllers with the most external channels (bandwidth).
SSD drives are going to choke SAN channels. In a couple of years, when administrators want to network their SSD drives, there will be a really big push to get better pipes in the SAN. I wonder if InfiniBand will get back in the mix?
This kind of disruptive technology keeps us employed.
Comment: Take it to the board (Score 4, Insightful) 490
by Ropati (#31358770) Attached to: A Public Funded "Microsoft Shop?"
If the hospital is taxpayer-funded, then you have every right as a taxpayer to take this memo to the board.
I would suggest that you gather a number of like minded taxpayers (and voters) and make a visit to the board to explain your stance.
You might want to do some research and find that your IT director got a free beer (golf trip) out of this. Fodder for the meeting.
Comment: Re:The facts about urban wireless towers (Score 1) 791
by Ropati (#31314216) Attached to: Killer Apartment Vs. Persistent Microwave Exposure?
George probably has it right.
This is low level non-ionizing radiation, so the only real effect is body heating. Generally body heating is dispersed (except in the eyeballs and testicles) by the flow of body fluids. It takes a lot of power to heat a human body (even eyeballs). There probably isn't enough heat being generated in your body by radio wave absorption to be measured.
However you do sleep in one position. These types of antennas are highly directional and they could have hotspots. Cell towers operators don't care about RF hazards except to satisfy the FCC. If you are worried, you could put some grounded foil on the wall between your bed and the antenna and make a modified tin foil hat.
by Ropati (#31229586) Attached to: Windows 7 Memory Usage Critic Outed As Fraud
MS still hasn't fixed the storport driver with an OS release:
Nor does MS make it easy to write 3rd-party drivers. Their documentation is usually incorrect and the samples inoperative. If MS can't get their drivers to work, how is a vendor supposed to do it?
As for beta drivers, forget it. This guy expects every vendor to spend hours of dev time making drivers for a growing tree. No. No. No.
Nobody even tried to write a driver for 2008 until it was RTM, and that isn't much of a window.
Porsche Unveils 911 Hybrid With Flywheel Booster 197
Posted by timothy
Directed Energy Weapon Downs Mosquitos 428
Posted by samzenpus
One Variety of Sea Slugs Cuts Out the Energy Middleman 232
Posted by timothy
Radically new internal combustion engine to be produced in China
04/9/2013 | Reuters
A new type of internal-combustion engine that achieves fuel savings and lower emissions is getting a production boost in China, where a $200 million factory is planned by Zhongding Power to build the "opposed piston, opposed cylinder" engine developed by Detroit startup EcoMotors. The design combines four pistons and two cylinders, greatly increasing efficiency and power while reducing parts and affording the option of running on a variety of fuels.
New DrFrame Events, SimpleDebugger
• For the next version,
instead of using
DrFrame.AddPluginOnNewFunction('PluginName', func)
you can just use:
DrFrame.Bind(DrFrame.EVT_DRPY_NEW, func)
There does not seem to be a performance hit,
although if you do not remember to add event.Skip(),
then things could get a bit odd. (Of course, that is always the case).
The only plugin that used the old version (that I wrote) is SimpleDebugger, which I will be retiring soon anyway.
When More Expensive Processors Actually Cost Less
This post is primarily a reminder that SQL Server is very expensive to license. Which, in turn, means that smart organizations are always looking for ways to cut costs when it comes to licensing SQL Server. In this post I’ll cover two ideas, or options, for ways to help with that.
Related: Save Thousands in Licensing Costs for SQL Server AlwaysOn Availability Groups
Virtualization – Or, Growing Into your Licenses as Needed
At the risk of making things overly-simplistic: Suppose that you’ve got a SQL Server workload that you know will require around 12GHz of processing power now and will grow into needing an estimated 15GHz of processor within 6 months and then weigh in at needing around 18-20GHz of processing in a year. Assume, too, that you’re planning on being able to keep the hardware you purchase for this workload for 2-3 years and therefore you want/need to throw a 2-Socket server at the problem because that will allow you to throw 2x10-core processors (at 3.4GHz each) at the problem.
Without Virtualization
If you don’t end up using virtualization, then right out of the gate—even if you only purchase/deploy one of your processors (i.e., start with 10 cores instead of 20), then you’re going to need to license ALL of the cores on the physical box for your ‘physical’ SQL Server workload. Assuming roughly $3,000 for Standard Edition 2-core license packs and roughly $18,000 for Enterprise Edition 2-core packs, you’re now looking at 5 packs or either $15,000 or $90,000 depending upon whether you’re running Standard Edition or Enterprise Edition. (Likewise, once you ‘slap’ your other processor (i.e., an additional 10 cores) into this machine, go ahead and count on doubling your licensing costs.)
With Virtualization
On the other hand, and assuming the exact-same host and processors, let’s assume that you decided to use virtualization. Initially, you’d only need a virtual machine (VM) with 4 cores (i.e., 4 cores * 3.4GHz = 13.6GHz)—a perfect fit since 4-cores are the MINIMUM number of cores you can license on a VM anyhow. At this point, you’d only need 2x 2-core packs—so roughly $6,000 for Standard Edition or roughly $36,000 for Enterprise Edition.
In six months you’d end up needing to bump your VM up to a total of 6 cores (6 * 3.4GHz is 20.4GHz) and pick up an additional 2-core license pack ($3,000 or $18,000). Likewise, six additional months later you’d either need to take that VM up to 8 or 10 cores depending upon load—and pick up additional licensing packs as needed.
The point, though, is that with virtualization thrown into the mix, you end up only using and licensing the SQL Server processing power that you absolutely need (compared to paying for the entire box when virtualization isn’t being used). Granted, it’s arguable that within 12 months after initial deployment, both approaches end up with (let’s say) all 10 cores licensed on both servers (virtualized or not). But, if you’re using virtualization, you were able to defray when those costs were incurred. Moreover, if you’re LEASING licenses (i.e., say you’re running on hosted hardware and leasing SQL Server licenses via an SPLA license), then you’ve saved a huge amount of money—especially if you’re using SQL Server Enterprise Edition.
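To make the pack math concrete, here's a small sketch using the rough figures from this post ($3,000 per Standard Edition 2-core pack, $18,000 per Enterprise Edition pack — ballpark numbers, not official pricing); the helper names are mine:

```javascript
// Rough per-core licensing math, using this post's ballpark prices.
const PACK_PRICES = { standard: 3000, enterprise: 18000 };

// Cores needed to hit a GHz target; SQL Server VMs license 4 cores minimum.
function coresFor(ghzNeeded, ghzPerCore) {
  return Math.max(4, Math.ceil(ghzNeeded / ghzPerCore));
}

// Licenses are sold in 2-core packs.
function licenseCost(cores, edition) {
  return Math.ceil(cores / 2) * PACK_PRICES[edition];
}

// Initial 12GHz workload on 3.4GHz cores: a 4-core VM, two packs.
console.log(coresFor(12, 3.4), licenseCost(4, "standard"));   // → 4 6000
// Licensing the whole 10-core box up front instead:
console.log(licenseCost(10, "enterprise"));                   // → 90000
```

As noted above, both approaches may converge on the same fully-licensed box within a year; the win is deferring when those packs are bought.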
More Expensive Processors
It should go without saying, but another way that savvy organizations can save significant money when licensing SQL Server is to pay for more expensive processors. Stated differently, if SQL Server is licensed per core, then you want to make sure that the cores you're buying are giving you the most bang for your buck. So, instead of buying cheaper, lower-end 2.0GHz processors, you should really be looking at the fastest processors you can find.
In my experience, however, it’s pretty common to find bean counters and managers that try to nit-pick and fiddle with many of the options on newer servers to try and help keep the costs contained. The problem, however, is that this kind of savings is very shortsighted when it comes to licensing SQL Server.
To put this into perspective, assume the scenario mentioned above (and assume we'll be using virtualization)—where we have an initial need for 12GHz, then 15GHz (6 months later) and 18-20GHz (a year later). If we were to use, say, Intel Xeon E5-2660v2 processors (10 cores at 2.2GHz, for roughly $1,552), then we'd need 6 of those cores right out of the gate to meet our requirement for an initial 12GHz of compute. On the other hand, if we were using Intel Xeon E5-2690v2 processors (10 cores at 3GHz each—for roughly $2,057), then we'd potentially be able to squeak by with just 4 cores instead of six. Out of the gate, that'd be one whole 2-core licensing pack less. Yes, the processor would cost roughly $500 more, but we'd be saving a further $3,000 or a further $18,000 in initial licensing costs right away.
Saving $18,000 with Higher Clock-speed Processors
As such, sometimes this might mean you actually want to look at slightly FEWER cores—at higher frequencies when dealing with SQL Server. So, for example, the Intel Xeon E5-2690v2 has 10 cores at 3GHz each (i.e., 30GHz total), while the Intel Xeon E5-2687Wv2 has only 8 cores—but with each at 3.4GHz instead (i.e., 27.2GHz total). Both are fairly comparable in terms of the total GHz they bring to the table (and in terms of price), but if you needed roughly 27-30GHz for SQL Server Enterprise Edition, the 8 core processor would end up being a full $18,000 less to license.
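The same back-of-the-envelope check works for the core-count-versus-clock trade-off (chip specs as quoted above; Enterprise Edition at roughly $18,000 per 2-core pack):

```javascript
// Fewer, faster cores can deliver comparable total GHz for fewer licenses.
const EE_PACK = 18000; // rough Enterprise Edition price per 2-core pack

function totalGhz(cores, clockGhz) { return cores * clockGhz; }
function eeLicense(cores) { return Math.ceil(cores / 2) * EE_PACK; }

console.log(totalGhz(10, 3.0), eeLicense(10)); // E5-2690v2  → 30 90000
console.log(totalGhz(8, 3.4), eeLicense(8));   // E5-2687Wv2 → 27.2 72000
// The 8-core part is a full $18,000 less to license for comparable GHz.
```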
Related: Find SQL Server Cost Savings
What's Practical SQL Server?
Michael K. Campbell
I am trying to use the ModelDriven interface in the following way:
@Override
public User getModel() {
    return user;
}
And I get the following error message generated by Eclipse:
Multiple markers at this line
• The method getModel() of type UserAction must override a superclass method
• implements com.opensymphony.xwork2.ModelDriven.getModel
Did you try searching for Multiple Markers before posting? – Old Pro Apr 23 '12 at 21:45
What is your class definition? class Clazz extends ModelDriven<UserAction> ? – Gren Apr 23 '12 at 22:13
1 Answer 1
It means that there are multiple problems at that line, one of which is the fact that you can't use @Override if a method with the same signature does not exist in the superclass.
I stumbled upon node.js some time ago and like it a lot. But soon I found out that it badly lacked the ability to perform CPU-intensive tasks. So, I started googling and got these answers to solve the problem: Fibers, Webworkers and Threads (thread-a-gogo). Now which one to use is a confusion, and one of them definitely needs to be used - after all, what's the purpose of having a server which is just good at IO and nothing else? Suggestions needed!
I've been thinking of an approach of late; I just need suggestions on it. Now, what I thought of was this: Let's have some threads (using thread_a_gogo or maybe webworkers). Now, when we need more of them, we can create more. But there will be some limit over the creation process (not imposed by the system, but probably because of overhead). Now, when we exceed the limit, we can fork a new node process, and start creating threads over it. This way, it can go on till we reach some limit (after all, processes too have a big overhead). When this limit is reached, we start queuing tasks. Whenever a thread becomes free, it will be assigned a new task. This way, it can go on smoothly.
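For what it's worth, the scheme described above — dispatch work up to a limit, queue the overflow, and drain the queue as workers free up — could be sketched roughly like this (the names and callback shape are mine; `runTask` stands in for handing a task to a real thread or forked process):

```javascript
// Minimal bounded worker pool: at most `maxWorkers` tasks in flight,
// the rest wait in a FIFO queue until a worker reports back.
function makePool(maxWorkers, runTask) {
  let busy = 0;
  const pending = [];
  function dispatch(task) {
    busy++;
    runTask(task, () => {          // called when the worker finishes
      busy--;
      if (pending.length > 0) dispatch(pending.shift());
    });
  }
  return {
    submit(task) {
      if (busy < maxWorkers) dispatch(task);
      else pending.push(task);     // past the limit: queue it
    },
    stats: () => ({ busy, queued: pending.length }),
  };
}
```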
So, that was what I thought of. Is this idea good? I am a bit new to all this process and threads stuff, so don't have any expertise in it. Please share your opinions.
Thanks. :)
Please note: Workers are a browser specification- not a Javascript feature. – ƊŗęДdϝul Ȼʘɗɇ Apr 28 '13 at 0:21
Well, I see that. My question was about node.js - server code and not about client side! – Parth Thakkar Apr 29 '13 at 16:21
Just a clarification- I see that the original question was about Webworkers in NodeJs, which is impossible- NodeJs uses "Threads". However, there is a NodeJS module floating around that allows WebWorker syntax within the NodeJs runtime. – ƊŗęДdϝul Ȼʘɗɇ Apr 29 '13 at 18:52
7 Answers 7
up vote 163 down vote accepted
Node has a completely different paradigm and once it is correctly captured, it is easier to see this different way of solving problems. You never need multiple threads in a Node application(1) because you have a different way of doing the same thing. You create multiple processes; but it is very very different than, for example how Apache Web Server's Prefork mpm does.
For now, let's think that we have just one CPU core and we will develop an application (in Node's way) to do some work. Our job is to process a big file running over its contents byte-by-byte. The best way for our software is to start the work from the beginning of the file, follow it byte-by-byte to the end.
-- Hey, Hasan, I suppose you are either a newbie or very old school from my Grandfather's time!!! Why don't you create some threads and make it much faster?
-- Oh, we have only one CPU core.
-- So what? Create some threads man, make it faster!
-- It does not work like that. If I create threads I will be making it slower. Because I will be adding a lot of overhead to the system for switching between threads, trying to give them a just amount of time, and inside my process, trying to communicate between these threads. In addition to all these facts, I will also have to think about how I will divide a single job into multiple pieces that can be done in parallel.
-- Okay okay, I see you are poor. Let's use my computer, it has 32 cores!
-- Wow, you are awesome my dear friend, thank you very much. I appreciate it!
Then we turn back to work. Now we have 32 cpu cores thanks to our rich friend. Rules we have to abide have just changed. Now we want to utilize all this wealth we are given.
To use multiple cores, we need to find a way to divide our work into pieces that we can handle in parallel. If it was not Node, we would use threads for this; 32 threads, one for each cpu core. However, since we have Node, we will create 32 Node processes.
Threads can be a good alternative to Node processes, maybe even a better way; but only in a specific kind of job where the work is already defined and we have complete control over how to handle it. Other than this, for every other kind of problem where the job comes from outside in a way we do not have control over and we want to answer as quickly as possible, Node's way is unarguably superior.
-- Hey, Hasan, are you still working single-threaded? What is wrong with you, man? I have just provided you what you wanted. You have no excuses anymore. Create threads, make it run faster.
-- I have divided the work into pieces and every process will work on one of these pieces in parallel.
-- Why don't you create threads?
-- Sorry, I don't think it is usable. You can take your computer if you want?
-- No okay, I am cool, I just don't understand why you don't use threads?
-- Thank you for the computer. :) I already divided the work into pieces and I create processes to work on these pieces in parallel. All the CPU cores will be fully utilized. I could do this with threads instead of processes; but Node has this way and my boss Parth Thakkar wants me to use Node.
-- Okay, let me know if you need another computer. :p
If I create 33 processes, instead of 32, the operating system's scheduler will be pausing a thread, start the other one, pause it after some cycles, start the other one again... This is unnecessary overhead. I do not want it. In fact, on a system with 32 cores, I wouldn't even want to create exactly 32 processes, 31 can be nicer. Because it is not just my application that will work on this system. Leaving a little room for other things can be good, especially if we have 32 rooms.
I believe we are on the same page now about fully utilizing processors for CPU-intensive tasks.
-- Hmm, Hasan, I am sorry for mocking you a little. I believe I understand you better now. But there is still something I need an explanation for: What is all the buzz about running hundreds of threads? I read everywhere that threads are much faster to create and dumb than forking processes? You fork processes instead of threads and you think it is the highest you would get with Node. Then is Node not appropriate for this kind of work?
-- No worries, I am cool, too. Everybody says these things so I think I am used to hearing them.
-- So? Node is not good for this?
-- Node is perfectly good for this even though threads can be good too. As for thread/process creation overhead; on things that you repeat a lot, every millisecond counts. However, I create only 32 processes and it will take a tiny amount of time. It will happen only once. It will not make any difference.
-- When do I want to create thousands of threads, then?
-- You never want to create thousands of threads. However, on a system that is doing work that comes from outside, like a web server processing HTTP requests; if you are using a thread for each request, you will be creating a lot of threads, many of them.
-- Node is different, though? Right?
-- Yes, exactly. This is where Node really shines. Like a thread is much lighter than a process, a function call is much lighter than a thread. Node calls functions, instead of creating threads. In the example of a web server, every incoming request causes a function call.
-- Hmm, interesting; but you can only run one function at the same time if you are not using multiple threads. How can this work when a lot of requests arrive at the web server at the same time?
-- You are perfectly right about how functions run, one at a time, never two in parallel. I mean in a single process, only one scope of code is running at a time. The OS Scheduler does not come and pause this function and switch to another one, unless it pauses the process to give time to another process, not another thread in our process. (2)
-- Then how can a process handle 2 requests at a time?
-- A process can handle tens of thousands of requests at a time as long as our system has enough resources (RAM, Network, etc.). How those functions run is THE KEY DIFFERENCE.
-- Hmm, should I be excited now?
-- Maybe :) Node runs a loop over a queue. In this queue are our jobs, i.e., the calls we started to process incoming requests. The most important point here is the way we design our functions to run. Instead of starting to process a request and making the caller wait until we finish the job, we quickly end our function after doing an acceptable amount of work. When we come to a point where we need to wait for another component to do some work and return us a value, instead of waiting for that, we simply finish our function, adding the rest of the work to the queue.
-- It sounds too complex?
-- No no, I might sound complex; but the system itself is very simple and it makes perfect sense.
Now I want to stop citing the dialogue between these two developers and finish my answer after a last quick example of how these functions work.
In this way, we are doing what the OS Scheduler would normally do. We pause our work at some point and let other function calls (like other threads in a multi-threaded environment) run until we get our turn again. This is much better than leaving the work to the OS Scheduler, which tries to give time fairly to every thread on the system. We know what we are doing much better than the OS Scheduler does, and we are expected to stop when we should stop.
Below is a simple example where we open a file and read it to do some work on the data.
Synchronous Way:
Open File
Repeat This:
Read Some
Do the work
Asynchronous Way:
Open File and Do this when it is ready: // Our function returns
Repeat this:
Read Some and when it is ready: // Returns again
Do some work
As you see, our function asks the system to open a file and does not wait for it to be opened. It finishes itself by providing the next steps for when the file is ready. When we return, Node runs the other function calls on the queue. After running over all the functions, the event loop moves to the next turn...
In summary, Node has a completely different paradigm than multi-threaded development; but this does not mean that it lacks things. For a synchronous job (where we can decide the order and way of processing), it works as well as multi-threaded parallelism. For a job that comes from outside like requests to a server, it simply is superior.
(1) Unless you are building libraries in other languages like C/C++, in which case you still do not create threads for dividing jobs. For this kind of work you have two threads, one of which will continue communication with Node while the other does the real work.
(2) In fact, every Node process has multiple threads for the same reasons I mentioned in the first footnote. However, this is in no way like 1000 threads doing similar work. Those extra threads are for things like accepting IO events and handling inter-process messaging.
UPDATE (As reply to a good question in comments)
@Mark, thank you for the constructive criticism. In Node's paradigm, you should never have functions that take too long to process unless all other calls in the queue are designed to be run one after another. In the case of computationally expensive tasks, if we look at the complete picture, we see that this is not a question of "Should we use threads or processes?" but a question of "How can we divide these tasks, in a well-balanced manner, into sub-tasks that we can run in parallel, employing multiple CPU cores on the system?" Let's say we will process 400 video files on a system with 8 cores. If we want to process one file at a time, then we need a system that will process different parts of the same file, in which case, maybe, a multi-threaded single-process system will be easier to build and even more efficient. We can still use Node for this by running multiple processes and passing messages between them when state-sharing/communication is necessary. As I said before, a multi-process approach with Node works as well as a multi-threaded approach for this kind of task; but not more than that. Again, as I said before, the situation where Node shines is when we have these tasks coming as input to the system from multiple sources, since keeping many connections open concurrently is much lighter in Node compared to a thread-per-connection or process-per-connection system.
As for setTimeout(...,0) calls; sometimes giving a break during a time consuming task to allow calls in the queue have their share of processing can be required. Dividing tasks in different ways can save you from these; but still, this is not really a hack, it is just the way event queues work. Also, using process.nextTick for this aim is much better since when you use setTimeout, calculation and checks of the time passed will be necessary while process.nextTick is simply what we really want: "Hey task, go back to end of the queue, you have used your share!"
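A minimal sketch of that "go back to the end of the queue" idea: a long sum is processed in bounded slices, and process.nextTick re-queues the next slice so other calls get their share. The function names are illustrative, not Node API.

```javascript
// Sketch: sum a large array in slices, yielding between slices so other
// queued calls get their turn.
function sumInChunks(numbers, chunkSize, done) {
  let total = 0;
  let i = 0;
  function step() {
    const end = Math.min(i + chunkSize, numbers.length);
    for (; i < end; i++) total += numbers[i]; // a bounded slice of work
    if (i < numbers.length) {
      // "Hey task, go back to end of the queue, you have used your share!"
      process.nextTick(step);
    } else {
      done(total);
    }
  }
  step();
}
```

(In later Node versions, setImmediate is often preferred for this, because a long chain of nextTick callbacks runs before pending I/O events get their turn.)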
Amazing! Damn amazing! I loved the way you answered this question! :) – Parth Thakkar Jul 1 '12 at 5:56
Sure :) I really cannot believe there are extremely mean people out there down-voting this answer-article! Questioner calls it "Damn Amazing!" and a book author offers me writing on his website after seeing this; but some geniuses out there down-votes it. Why don't you share your bright intellectual quality and comment on it instead of meanly and sneakily down-voting, huh? Why something nice disturbs you that much? Why do you want to prevent something useful to reach other people who can really benefit from it? – hasanyasin Jul 1 '12 at 12:39
This isn't a completely fair answer. What about computationally expensive tasks, where we can't "quickly end" our function call? I believe some people use some setTimeout(...,0) hacks for this, but using a separate thread in this scenario would surely be better? – Mark Mar 7 '13 at 20:49
@hasanyasin This is the nicest explanation on node that I found so far! :) – Venemo May 10 '13 at 11:57
@Mark Generally, if it's that computationally expensive, there are options/modules for tread/process workers... In general for these types of things, I use a Message Queue, and have worker process(es) that handles a task at a time from the queue, and work that task. This also allows for scaling to multiple servers. Along these lines, Substack has a lot of modules directed at provisioning and scaling you can look at. – Tracker1 May 22 '13 at 23:33
I'm not sure if webworkers are relevant in this case, they are client-side tech (run in the browser), while node.js runs on the server. Fibers, as far as I understand, are also blocking, i.e. they are voluntary multitasking, so you could use them, but should manage context switches yourself via yield. Threads might be actually what you need, but I don't know how mature they are in node.js.
just for your info, webworkers have been (partially) adapted on node.js. And are available as node-workers package. Have a look at this: github.com/cramforce/node-worker – Parth Thakkar May 27 '12 at 11:28
Good to know, thanks. Docs are very scarce though, I have no idea whether it runs in a separate thread, process, or simply runs in the same process, and I don't have the time to dig into the code, so I have no idea if it will work for your case. – lanzz May 27 '12 at 11:30
@ParthThakkar: That project hasn't been touched in 3 years (2 when you posted), and hasn't made it past 0.0.1. – Mark Mar 7 '13 at 20:51
@Mark: The reason for my ignorance on that is that I am not a professional programmer yet. Heck, I am not even in a university. I am still a High School fellow, who keeps reading about programming - besides managing the school work. So, it isn't remotely possible for me to have knowledge about all such issues. I just posted what i knew... – Parth Thakkar Mar 10 '13 at 5:25
@Mark: Although it was nice of you to point out that about the history of the project. Such things will be taken care of in my future responses!! :) – Parth Thakkar Mar 10 '13 at 5:26
(Update: Web workers are going into io.js - a Node.js fork - see below.)
Some clarification
Having read the answers above I would like to point out that there is nothing in web workers that is against the philosophy of JavaScript in general and Node in particular regarding concurrency. (If there was, it wouldn't be even discussed by the WHATWG, much less implemented in the browsers).
You can think of a web worker as a lightweight microservice that is accessed asynchronously. No state is shared. No locking problems exist. There is no blocking. There is no synchronization needed. Just like when you use a RESTful service from your Node program you don't worry that it is now "multithreaded" because the RESTful service is not in the same thread as your own event loop. It's just a separate service that you access asynchronously and that is what matters.
The same is with web workers. It's just an API to communicate with code that runs in a completely separate context and whether it is in different thread, different process, different cgroup, zone, container or different machine is completely irrelevant, because of a strictly asynchronous, non-blocking API, with all data passed by value.
As a matter of fact web workers are conceptually a perfect fit for Node which - as many people are not aware of - incidentally uses threads quite heavily, and in fact "everything runs in parallel except your code" - see:
But the web workers don't even need to be implemented using threads. You could use processes, green threads, or even RESTful services in the cloud - as long as the web worker API is used. The whole beauty of the message passing API with call by value semantics is that the underlying implementation is pretty much irrelevant, as the details of the concurrency model will not get exposed.
A single-threaded event loop is perfect for I/O-bound operations. It doesn't work that well for CPU-bound operations, especially long running ones. For that we need to spawn more processes or use threads. Managing child processes and the inter-process communication in a portable way can be quite difficult and it is often seen as an overkill for simple tasks, while using threads means dealing with locks and synchronization issues that are very difficult to do right.
What is often recommended is to divide long-running CPU-bound operations into smaller tasks (something like the example in the "Original answer" section of my answer to Speed up setInterval) but it is not always practical and it doesn't use more than one CPU core.
I'm writing it to clarify the comments that were basically saying that web workers were created for browsers, not servers (forgetting that it can be said about pretty much everything in JavaScript).
Node modules
There are a few modules that are supposed to add Web Workers to Node:
I haven't used any of them but I have two quick observations that may be relevant: as of March 2015, node-webworker was last updated 4 years ago and node-webworker-threads was last updated a month ago. I also see in the node-webworker-threads usage example that you can pass a function instead of a file name as an argument to the Worker constructor, which seems like it may cause subtle problems if it is implemented using threads that share memory (unless the function is used only for its .toString() method and is otherwise compiled in a different environment, in which case it may be fine - I have to look more deeply into it, I am just sharing my observations here).
If there is any other relevant project that implements web workers API in Node, please leave a comment.
I didn't know it yet at the time of writing but incidentally one day before I wrote this answer Web Workers were added to io.js.
(io.js is a fork of Node.js - see: Why io.js decided to fork Node.js, an InfoWorld interview with Mikeal Rogers, for more info.)
Not only does it prove the point that there is nothing in web workers that is against the philosophy of JavaScript in general and Node in particular regarding concurrency, but it may result in web workers being a first class citizen in server-side JavaScript like io.js (and possibly Node.js in the future) just as it already is in client-side JavaScript in all modern browsers.
In many Node developers' opinions one of the best parts of Node is actually its single-threaded nature. Threads introduce a whole slew of difficulties with shared resources that Node completely avoids by doing nothing but non-blocking IO.
That's not to say that Node is limited to a single thread. It's just that the method for getting threaded concurrency is different from what you're looking for. The standard way to deal with threads is with the cluster module that comes standard with Node itself. It's a simpler approach to threads than manually dealing with them in your code.
For dealing with asynchronous programming in your code (as in, avoiding nested callback pyramids), the [Future] component in the Fibers library is a decent choice. I would also suggest you check out Asyncblock, which is based on Fibers. Fibers are nice because they allow you to hide callbacks by duplicating the stack and then jumping between stacks on a single thread as they're needed. Saves you the hassle of real threads while giving you the benefits. The downside is that stack traces can get a bit weird when using Fibers, but they aren't too bad.
If you don't need to worry about async stuff and are more just interested in doing a lot of processing without blocking, a simple call to process.nextTick(callback) every once in a while is all you need.
well, your suggestion - about clusters - was what i initially thought about. But the problem with that is their overhead - a new instance of v8 has to be initialised every time a new process is forked (~30ms, 10MB). So, you can't create lots of them. This is taken directly from the node docs: These child Nodes (about child_processes) are still whole new instances of V8. Assume at least 30ms startup and 10mb memory for each new Node. That is, you cannot create many thousands of them. – Parth Thakkar May 29 '12 at 14:15
This is exactly the idea of cluster. You run one worker per cpu core. Any more is most likely unnecessary. Even cpu intensive tasks will work fine with an asynchronous style. However, if you really need full-blown threads, you should probably consider moving to another server backend entirely. – genericdave May 29 '12 at 23:18
Maybe some more information on what tasks you are performing would help. Why would you need to (as you mentioned in your comment to genericdave's answer) create many thousands of them? The usual way of doing this sort of thing in Node is to start up a worker process (using fork or some other method) which always runs and can be communicated with using messages. In other words, don't start up a new worker each time you need to perform whatever task it is you're doing, but simply send a message to the already-running worker and get a response when it's done. Honestly, I can't see that starting up many thousands of actual threads would be very efficient either; you are still limited by your CPUs.
Now, after saying all of that, I have been doing a lot of work with Hook.io lately which seems to work very well for this sort of off-loading tasks into other processes, maybe it can accomplish what you need.
share|improve this answer
I come from the old school of thought where we used multi-threading to make software fast. For the past 3 years I have been using Node.js and am a big supporter of it. hasanyasin has explained in detail how Node works and the concept of asynchronous functionality, but let me add a few things here.
Back in the old days, with single cores and lower clock speeds, we tried various ways to make software work fast and in parallel. In DOS days we used to run one program at a time. Then in Windows we started running multiple applications (processes) together. Concepts like preemptive and non-preemptive (or cooperative) multitasking were tested. We know now that preemptive multitasking was the answer for better multi-processing on single-core computers. Along came the concepts of processes/tasks and context switching, then the concept of the thread to further reduce the burden of process context switching. Threads were coined as a lightweight alternative to spawning new processes.
So, like it or not, single-threaded or not, multi-core or single-core, your processes will be preempted and time-sliced by the OS.
Node.js is a single process and provides an async mechanism. Here jobs are dispatched to the underlying OS to perform tasks while we wait in an event loop for the task to finish. Once we get a green signal from the OS we perform whatever we need to do. Now in a way this is cooperative/non-preemptive multitasking, so we should never block the event loop for a very long period of time, otherwise we will degrade our application very fast.
So if there is ever a task that is blocking in nature or is very time consuming, we will have to branch it out to the preemptive world of the OS and threads. There are good examples of this in the libuv documentation. Also, if you read the documentation further, you will find that file I/O is handled in threads in Node.js.
So, firstly, it is all in the design of our software. Secondly, context switching is always happening, no matter what they tell you. Threads are there, and are still there for a reason: they are faster to switch between than processes.
Under the hood in Node.js it is all C++ and threads. Node provides a C++ way to extend its functionality and to further speed things up by using threads where they are a must, i.e., blocking tasks such as reading from a source, writing to a source, large data analysis, and so on.
I know hasanyasin's answer is the accepted one, but for me threads will exist no matter what you say or how you hide them behind scripts. Secondly, nobody just breaks things into threads just for speed; it is mostly done for blocking tasks. And threads are in the backbone of Node.js, so completely bashing multi-threading is incorrect. Also, threads are different from processes, and the limitation of one Node process per core doesn't exactly apply to the number of threads; threads are like sub-tasks to a process. In fact, threads won't show up in your Windows task manager or Linux top command. Once again, they are more lightweight than processes.
What about using a timer's setTimeout to defer (delay 1) the blocking function?
It isn't real multi-threading (V8 may do dispatching, I don't know), but this is the way JavaScript is used to handle blocking tasks in browsers.
Sorry, but that's not what I want. That doesn't solve the problem of having real concurrency, which I require for common tasks like image manipulation. Now, if I use setTimer, (or a better one - process.nextTick() ), it will still block the main loop. – Parth Thakkar Jun 30 '12 at 15:31
Are there any libraries that provide username and password login for Google AppEngine?
While I could try rolling one from scratch, I'd rather not try to reinvent the wheel if possible.
If not, would it be possible to turn my application into an OpenId provider and then use it to log in?
2 Answers
Try EngineAuth. It has many different options for authentication systems, including email+password authentication.
would you still advise to use EngineAuth at this point? (Just want to check if you've come across other good options in the last 2 years) Thx in advance. – wilsonmaravilha Aug 12 '14 at 13:41
GAE, via its Users API, supports three types of login (Google accounts, Google Apps accounts and OpenId). For an example of the latter see this article.
The type of login used is defined when creating the app, see this for further details.
I'm trying to add an in-background alarm clock feature to an app I'm developing.
I have read up on the UILocalNotification object and its use, and am aware that it is limited to 30 seconds of audio.
I was thinking of scheduling multiple notifications (say, 5 of them) spaced 30 seconds apart to mimic continuous play, but the problem with this approach is that if the user hits Close, I won't be able to cancel any of the subsequent notifications.
As far as I know, it is impossible to remove or hide the Close button without hiding the notification entirely (i.e., setting the alertBody property to Nil).
So, I thought I might use the repeatInterval property to cause the notification to pop up every 30 seconds, but it seems that I can only set the interval to one minute or one second, and nothing in between.
The feature is meant to allow the user to choose between music and beeps for alarm audio; it seems I may have found a way to do the beeps - setting the repeatInterval to one second, create a second's worth of beeps (which would need to be timed to cleanly repeat) and use that as the notification sound.
However, for the music, this approach limits me to playing 30 seconds of audio, followed by a 30-second gap, followed by 30 seconds of audio, and so on.
I know that there is no straightforward solution here, from my reading of other posts and resources; no third-party app has access to the same functionality as the built-in alarm clock. I am hoping that someone has found a workaround or thinks of something really clever.
UPDATE: I have found that the repeatInterval doesn't help me in this case, as I can't cancel the repetitions without launching the app.
For the time being I have decided not to use a notification as an alarm per se, but have changed the feature to be a reminder (more along the lines of what the notification is intended for).
If I come up with a way to implement user-friendly, reliable alarm functionality to an app, I will update this post.
"multiple notifications (say, 5 of them) spaced 30 seconds apart to mimic continuous play" - it just won't work this way 'cause LocalNotification precision limited to one minute... – Oleg Trakhman Jul 5 '12 at 9:59
Consider using background execution in your app. As far as I know apps can play music in background. – Oleg Trakhman Jul 5 '12 at 10:05
Here's an example of how someone built the local notification background alarm you allude to: stackoverflow.com/a/4197215/1264925 – Ryan Twomey Jul 5 '12 at 17:45
@Oner: I know about the one-minute precision, as I describe in my post. – bschnur Jul 5 '12 at 20:50
@Oner: I do use background execution to play music, but not as part of this alarm feature; I need something to trigger the alarm audio playing while the app is in the background, hence the use of notifications. – bschnur Jul 5 '12 at 20:53
1 Answer
I am afraid you cannot accomplish this, the reason being, as you stated, the 'Close' button. You won't get any callback in the app if the Close button is tapped. Further, even if you present notifications every 30 seconds, there will be multiple notifications on the screen which the user has to view or close, so the user experience will be poor. I would recommend making it clear to users that they cannot set an alarm with a custom sound longer than 30 seconds.
I came to the same conclusion, thanks for your suggestion. – bschnur Jul 5 '12 at 21:01
It throws an exception,
java.net.SocketTimeoutException at org.ksoap2.transport.HttpTransportSE.call(HttpTransportSE.java:130)
I have to use webService with ksoap2.
Can anyone help me ?
I have the same issue. The same operation that works most of the time, might not work specially right after I unlock the device, maybe it takes a bit for Wi-Fi to kick in. Unfortunately, I did not get the chance to test with 3G but I'm just hoping it will resolve itself once I switch to 3G. – Dogahe Dec 3 '14 at 14:17
1 Answer
Pass a longer timeout (the second constructor argument, in milliseconds) when creating the transport:
HttpTransportSE androidHttpTransport = new HttpTransportSE(URL, 60000);
The code below works properly and shows me a list of databases in my database. In the code given below, what is TABLE_CAT and why is it there?
import java.sql.*;

public class Database {
    public static void main(String[] args) {
        Connection con = null;
        try {
            con = DriverManager.getConnection("jdbc:mysql://localhost:3306", "cowboy", "123456");
            DatabaseMetaData meta = con.getMetaData();
            ResultSet res = meta.getCatalogs();
            System.out.println("List of databases: ");
            while (res.next()) {
                System.out.println("  " + res.getString("TABLE_CAT"));
            }
        } catch (SQLException e) {
            System.err.println("SQLException: " + e.getMessage());
        }
    }
}
2 Answers
TABLE_CAT is the name of the column in your resultSet. As you are iterating over your result set row by row, using res.getString("TABLE_CAT") allows you to extract the value of that column in the current result row. As meta.getCatalogs() returns the catalog names available in the database, each catalog name is stored under a column called TABLE_CAT.
This should make more sense to you now.
It's a simple key that can be used to extract values from the resultSet of the meta-data
You can use the ResultSetMetaData (that can be obtained from the ResultSet) to list all the column names available within the ResultSet
I am looking for a general response as to the mindset of how to do this... I have a table that has a column full of possible parameters and a column full of possible values. I want to join all parameters of a certain kind to another table to give further description of those specific rows, but not have that table joined to all other rows that don't contain the specific value. It would look like this:
Parameters   Values   Mammal
a            1
b            3
d            cat      Yes
c            4
d            dog      Yes
e            3
d            fish     No
f            2
I've tried a number of ways using CASE; however, the table just gets very weird, and there was repetition of the joined table depending on its length. Any suggestions?
The second table has two columns; it is joined on its own Animal column to the Values column where Parameter = "d". It does not show up at all when Parameter equals anything else. Any suggestions would be greatly appreciated! (I'm using Cache SQL if you need to know. I'd much rather have a general explanation of technique though, it helps more.)
EDIT: Sorry, here would be the two separate tables:
Table 1:                 Table 2:
Parameters   Values      Animal    Mammal
a            1           cat       yes
b            3           dog       yes
d            cat         snake     no
c            4           fish      no
d            dog         rat       yes
e            3           hamster   yes
d            fish
f            2
Is that above table the first or second table or the expected output. I've reread your question several times and I can't figure it out. Can you please clarify what your table structure is and what you expected output is. – Conrad Frix Sep 27 '12 at 22:20
please post some example of data of your two separated tables, and the result expected – Gonzalo.- Sep 27 '12 at 22:29
2 Answers
It sounds like your current query is using an INNER JOIN, which will only include the records that match in both tables. You need to use a LEFT JOIN, which will produce all records from table1 and the matching records from table2. If there is no match in table2, then the columns from that side will be null:
select t1.parameters,
case when t2.mammal is null then '' else t2.mammal end Mammal
from table1 t1
left join table2 t2
on t1.value = t2.animal
See SQL Fiddle with Demo
If you need help learning JOIN syntax there is a great article:
A Visual Explanation of SQL Joins
Two options.
First uses a subquery:
select [parameters], [values],
(select mammal from t2 where t2.animal = t1.[values]) as mammal
from t1
Second uses a left join.
select [parameters], [values], t2.mammal
from t1
left join t2 on t1.[values] = t2.animal
The second option uses a left join like the other answer, but skips the null replacement that answer provides.
Note this was tested on MS SQL Server (T-SQL) only.
How can I get Eclipse to export as a non-runnable jar all the contents of JRE System Library [JavaSE-1.6] and Referenced Libraries?
I want to use -classpath to bring together several jar files rather than use Eclipse's Export > Runnable JAR file. Motivation: swapping out a single class that happens to be in a package of its own, by swapping the jar.
It's easy enough to export my own packages in (non-runnable) jars but now I need the "library" classes as well and I have not found an easy and obvious way to do that.
1 Answer
There is an option when you export a runnable JAR to "Copy required libraries into a sub-folder next to the generated JAR". Would that work for your case?
I did that and those libraries seem to be sufficient even though they do not include the JRE libraries. They appear in a folder not a single jar so a wildcard must be used. – H2ONaCl Oct 13 '12 at 4:22
I am analyzing an electronic survey I made using Google Forms and I have the following problem.
One of the questions can take multiple answers in the form of Checkboxes as shown in the picture below. The question is in Greek so I have added some Choice1, Choice2, Choice3 etc next to each answer in order to facilitate my question.
[image: the survey question with its checkbox answers, labelled Choice1, Choice2, Choice3, ...]
In my data, when someone chose, let's say, Choice1 and Choice2, I will have an answer which is the concatenation of the strings he checked, separated by commas.
In this case it would be:
Choice1, Choice2
If someone else checked Choice1, Choice2 and Choice4 his answer in my data would be:
Choice1, Choice2, Choice4
The problem is SPSS has no way of separating the substrings (separated by commas) and understanding which Choices each case has in common. Or maybe there is a way but I don't know it :)
When I, for example, do a simple frequency analysis for this question it produces a table that perceives
Choice1, Choice2
as a completely different case from
Choice1, Choice2, Choice4
Ideally I would like to somehow tell SPSS to count the frequency of each unique Choice (Choice1, Choice2, Choice3 etc etc) rather than each unique combination of those Choices. Is that possible? And if it is can you point me to the documentation I need to study to make it happen?
Thx a lot!
migrated from stats.stackexchange.com Dec 27 '12 at 16:05
I have voted to close this question as off-topic with a migration to Stack Overflow. This does not mean the question is bad, but simply that it will find a better home there. – cardinal Dec 27 '12 at 14:36
@cardinal Fair enough! :) – Panagiotis Palladinos Dec 27 '12 at 14:48
3 Answers
Imagine you are working with the following data, which is a CSV file you have downloaded from your online form. Copy and paste the text below and save it to a text file named "CourseInterestSurvey.CSV".
Timestamp,Which courses are you interested in?,What software do you use?
12/28/2012 11:57:56,"Research Methods, Data Visualization","Gnumeric, SPSS, R"
12/28/2012 11:58:09,Data Visualization,"SPSS, Stata, R"
12/28/2012 11:59:09,"Research Dissemination, Graphic Design",Adobe InDesign
12/28/2012 11:59:27,"Data Analysis, Data Visualization, Graphic Design","Excel, OpenOffice.org/Libre Office, Stata"
12/28/2012 11:59:44,Data Visualization,"R, Adobe Illustrator"
Read it into SPSS using the following syntax:
Timestamp A19
CourseInterest A49
Software A41.
It currently has three columns: one timestamp, and two with the data we want.
Working with some syntax from here, we can split the cells up as follows:
* We know the string does not exceed 50 characters.
* We got that information while we were reading our data in.
STRING #temp(a50).
* We're going to work on the "CourseInterest" variable.
COMPUTE #temp=CourseInterest.
* We're going to create 3 new variables with the prefix "CourseInterest".
* You should modify this according to the actual number of options your data has
* and the maximum length of one of the strings in your data.
VECTOR CourseInterest(3, a25).
* Here's where the actual variable creation takes place.
LOOP #i = 1 TO 3.
. COMPUTE #index=index(#temp,",").
. DO IF #index GT 0.
. COMPUTE CourseInterest(#i)=LTRIM(substr(#temp,1, #index-1)).
. COMPUTE #temp=substr(#temp, #index+1).
. ELSE.
. COMPUTE CourseInterest(#i)=LTRIM(#temp).
. COMPUTE #temp=''.
. END IF.
END LOOP IF #index EQ 0.
The result: three new columns, CourseInterest1 through CourseInterest3, holding the split values.
This only addresses one column at a time, and I'm not familiar enough to modify it to work over multiple columns. However, if you were to switch over to R, I already have some readymade functions to help deal with exactly these kinds of situations.
+1 - You could either use a loop outside of a do repeat, or write this up in a macro to modify it to work over multiple columns. Another way would be to have a do repeat just search for the particular strings and return dummy variables (which would be a nicer format for multiple response sets). This would be more up front work though, although likely necessary as some point anyway. – Andy W Dec 28 '12 at 12:45
This looks good actually! Thank you I will try it tonight! One question: Do I keep the original Variable as well? If yes then why do I need it (since I now have seperate variables (1,2,3 etc) – Panagiotis Palladinos Dec 28 '12 at 15:31
@PanagiotisPalladinos, I wouldn't see any need for the original variables any longer, particularly since you have that information available as a Google Spreadsheet if you need it. The only use I can see really is tabulation of the combination of responses (since Google Forms already sorts the multiple responses in a consistent manner) rather than tabulation of each response. – Ananda Mahto Dec 28 '12 at 15:39
Unfortunately there is no easy "built-in" way to achieve this, but it is certainly achievable with spreadsheet formulae, or Google Apps Script.
Using formulae, assuming your check box question lands in column D, this will produce a "normalised" list:
and you can turn that into a two-column list and QUERY it to return a table of frequencies:
=ArrayFormula(QUERY(TRANSPOSE(SPLIT(CONCATENATE(D2:D&",");","))&{"",""};"select Col1, count(Col2) group by Col1 label Col1 'Item', count(Col2) 'Frequency'";0))
If your locale uses a comma as a decimal separator, replace {"",""} with {""\""}.
Oh man you'd think that after 17 versions of SPSS they would implement checkboxes... Thx for your answer mate. I will try it and get back to you! – Panagiotis Palladinos Dec 28 '12 at 8:14
Oops sorry, I missed the SPSS tag, my answer is a solution within Google Spreadsheets. I'll leave it for now, but comment me back if you want me to remove it (happy to do so). – AdamL Dec 28 '12 at 20:12
Google sheets adds a ", " between items, so you need to trim as well. My updated formula: =ArrayFormula(QUERY(TRANSPOSE(ARRAYFORMULA(Trim(SPLIT(CONCATENATE(RANGE&","),","))))&{"",""},"select Col1, count(Col2) group by Col1 label Col1 'Topic', count(Col2) 'Votes'",0)) – Lucy Bain May 18 at 4:58
It is easy to split the fields into separate variables as described above. Now define these variables as a multiple response set (Analyze > Tables > Multiple Response Sets), and you can analyze these with the CTABLES or MULT RESPONSE procedures and graph them using the Chart Builder.
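As an aside, if you only need the per-choice frequencies and can step outside SPSS for a moment, the split-and-count logic is a few lines in, say, Python. The rows below are made-up example data, not from the actual survey:

```python
from collections import Counter

# Hypothetical exported answers, one comma-separated string per respondent
rows = [
    "Choice1, Choice2",
    "Choice1, Choice2, Choice4",
    "Choice2",
]

# Split each answer on commas and tally the individual choices.
counts = Counter(
    choice.strip()          # drop the space Google Forms puts after each comma
    for row in rows
    for choice in row.split(",")
)

print(counts["Choice2"])    # -> 3
```

This counts each unique choice rather than each unique combination of choices, which is exactly the tabulation the question asks for.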
I am using the built in LoginButton widget in the facebook sdk, I haven't made any changes to it I just include it in my xml layout file and call setSessionStatusCallback nothing else.
However, when I click the login button facebook says I am asking for both basic info AND the friends list. I do not want permission to view the users friends, and after looking though the source of LoginButton it seems like it shouldn't be asking either, its permissions String list is empty.
What's going on here?
update: adding my code by request.
final LoginButton facebook = (LoginButton) getView().findViewById(R.id.facebook_login);
if (facebook != null) {
    if (Utility.getMetadataApplicationId(getActivity()) != null) {
        facebook.setSessionStatusCallback(new StatusCallback() {
            @Override
            public void call(Session session, SessionState state, Exception exception) {
                Request.executeMeRequestAsync(session, new GraphUserCallback() {
                    @Override
                    public void onCompleted(GraphUser user, Response response) {
                        Log.d("test", user.getId());
                    }
                });
            }
        });
    }
}
I know you said you don't have anything in the permissions option, but could you post up your code so we can get a better idea of whats going on? – steve Dec 28 '12 at 23:57
@steve I updated my question and added the code as you requested. – schwiz Dec 29 '12 at 0:42
Strange. I wonder do you have any default permissions associated in the developer toolbar? The session object can request default permissions if none are stated. Thats the only thing I could think of. Sorr! – steve Dec 29 '12 at 0:56
1 Answer
Asking for basic info + friends list is the most basic permissions that an app can request. If you do not supply any additional read permissions, then those two permissions are the only ones that will show up, and I do not believe you can remove them.
I believe we do this because when integrating your app with Facebook, the inherent reason is to make your app social and to provide a distribution channel for your app. So the friends list permission is added by default because you should use that permission to encourage the user to share your app with their friends if they wish to do so with the app requests dialog etc.
Yes well, I'm just implementing a SSO so it seems fishy when I ask for a friends list, the iOS counterpart does not ask for a friends list. – schwiz Dec 29 '12 at 0:39
It's probably the new login dialog that came with the new FB4A + Android SDK that the iOS counterpart doesn't have yet. I just authed an app on iOS with the old iOS SDK and you're correct in that it didn't explicitly say friends, but if you go to your app settings on Facebook, it says that they have access to your list of friends. – Jesse Chen Dec 29 '12 at 2:23
Ok thanks, I hope you change this in the future, its nice you want to promote social apps, but not really the spirit of SSO. – schwiz Dec 29 '12 at 19:31
Same issue here. One of my apps uses Facebook login (Android and iOS), which requires an access token which is used in server side to fetch email address. However it seems fishy when it asks for "Friends List" permission. We are using OAuth 2.0. Google+ login works fine. – geeth Sep 23 '13 at 7:21
I already know how to check elements that are there when the document is ready:
jQuery.fn.exists = function () {
    return jQuery(this).length > 0;
};
But this method doesn't see elements that are added with AJAX. Does anybody know how to handle that?
2 Answers
The method does once the ajax is loaded and appended to the DOM. You could rewrite it a bit:
jQuery.existsin = function (what, where) {
    return jQuery(where).find(what).length > 0;
};
Then you could, on ajax success:

function(data, status){
    if(jQuery.existsin('selector', data)){
        //do foo
    }
}
would it be smart to specify "body" as a "default where" ? – Martin Labuschin Dec 4 '09 at 13:13
thank you! that works for me! – Martin Labuschin Dec 4 '09 at 13:41
@Eivind please let me know the selector here. – Achyut Aug 20 '14 at 9:28
I've got a series of divs ('group') with text in them, and in the bottom corner a floated div ('toggle'). The code I have works if the text within 'group' is a certain length, but since the space within varies between divs, the floated 'toggle' position does as well. I could set the 'toggle' div as an absolutely positioned element within the 'group', but then text doesn't wrap around it (and I need the text to respect the borders of 'toggle'). So, how can I go about positioning 'toggle' in the lower-right corner of my 'group' div, regardless of size? Should I just make a bunch of @media calls, or is there a better way to accomplish this? Here's my HTML:
<div class="group">
<p class="grouptitle"><a href="#">Name of group goes here</a></p>
<p class="grouptext">Brief description of group goes here. Lorem ipsum dolor sit amet, consectetuer adipiscing elit,sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat. Ut wisi enim ad minim veniam, quis nostrud exerci tation ullamcorper suscipit lobortis nisl ut nisl ut aliquip isl isi enim ad</p>
<div class="toggle"></div>
</div>
And here's my CSS:
.group {
position: relative;
display: inline-block;
text-align: left;
width: 300px;
height: 300px;
min-height: 300px;
min-width: 300px;
padding: 10px;
margin: 12px;
background-color: cyan;
vertical-align: top; }
.toggle {
float: right;
width: 50px;
height: 50px;
background-color: green;
bottom: 0;
margin-right: -10px;
margin-top: 32px; }
Thanks for reading!
EDIT: Here's a fiddle. I need to make it so the green div stays in the bottom corner of the cyan div regardless of the text within the cyan div, and with the text wrapping around the green div.
That's not CSS. – j08691 Jan 26 '14 at 2:59
Woops yeah sorry posted the SASS, let me fix that – Ber Jan 26 '14 at 3:15
I'm not sure I understand what you're trying to accomplish. Is this not what you're after? jsfiddle.net/XpK93 – monners Jan 26 '14 at 3:21
Here's a fiddle demonstrating my problem: jsfiddle.net/b2LxU/3 As you can see, I need some way to position the green div at the bottom corner regardless of the text within the cyan div, while respecting the borders of the green div. It sounds silly, but could I absolutely position the green div while maintaining it's float status (so the text wraps around the green div)? – Ber Jan 26 '14 at 3:33
So you're not using absolute positioning because you don't want the text to flow under the toggle box? – monners Jan 26 '14 at 3:39
1 Answer
If I'm reading your issue correctly, I think that using:
position: absolute;
Would solve it. Here's a fiddle to show you what I'm talking about. - http://jsfiddle.net/fishgraphics/b2LxU/13/
Been a while since I looked at this, but that solution, while solving the problem of positioning 'toggle', still suffers from the problem of the text not respecting the boundary/not wrapping around 'toggle'. Heres your fiddle with a modification to show the problem. Ultimately, I got around these issues on the project I was working on by adding a png image that faded to the background color of the 'groups' div adjacent and to the left of the 'toggle', thereby making the text fade out. Hacky, but worked for me at the time. – Ber Mar 26 '14 at 23:15
@Ber If you were to take the height away from the "group" div, then you shouldn't run into that issue. jsfiddle.net/fishgraphics/b2LxU/19 – FiSH GRAPHICS Mar 27 '14 at 15:09
Wow thats rad man, thanks for the update! – Ber Mar 28 '14 at 8:15
I'm looking for a diff implementation in Java. I've seen that Python has its own SequenceMatcher (with difflib), which is exactly what I need... in Java.
Is there a port? Or is there any other class/library that does the same in Java?
If not, where can I find the source code of that difflib (if free as in speech) to make my own implementation of SequenceMatcher in Java?
Unfortunately, Apache Commons Lang doesn't help me much.
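For readers who don't know difflib, the behaviour being asked for looks like this in Python — a quick illustration, not part of the original question:

```python
from difflib import SequenceMatcher

# Compare two short strings; SequenceMatcher finds matching blocks
# and derives a similarity ratio in [0, 1] from them.
sm = SequenceMatcher(None, "abcd", "bcde")

print(sm.ratio())                         # -> 0.75 (the shared "bcd" is 3 of 8 chars, ratio = 2*3/8)
print(sm.find_longest_match(0, 4, 0, 4))  # -> Match(a=1, b=0, size=3)
```

Any Java replacement would need to offer roughly this: longest-matching-block discovery plus a similarity score.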
@Bozho all links is out of date – gstackoverflow Jun 19 '14 at 9:08
3 Answers
This library seems to be what you're after: google-diff-match-patch.
It has the following main features:
1. Diff: Compare two blocks of plain text and efficiently return a list of differences.
2. Match: Given a search string, find its best fuzzy match in a block of plain text. Weighted for both accuracy and location.
3. Patch: Apply a list of patches onto plain text. Use best-effort to apply patch even when the underlying text doesn't match.
In case you want an alternative, you could also try this: java-diff-utils
Exactly what I needed: diff, match and patch. – Olivier Grégoire Jun 9 '10 at 14:43
What about this one?
(or have a look here)
That Google search is exactly how I got here. – mjaggard Apr 30 at 21:08
You can run a MapReduce job that uses https://code.google.com/p/google-diff-match-patch/ to do the required work. I don't think there are any out-of-the-box tools for this job.
I have two databases, A, and B. On database A, I have a table X, with 10 columns in it. On database B, I have a table Y, with 7 columns in it. 4 of the columns from these tables match, and whenever one table updates one or more of these columns, I need the other table to update these columns. How can I do this? Replication wouldn't seem to work because the table structures are different, and insert/update triggers would seem to create infinite loops.
How come replication didn't help? If you just publish the 4 matching tables you should have no problem. Or do you want to update more than the 4 matching tables? – Eton B. Aug 24 '10 at 17:53
@Eton B.: Sorry, I think I was unclear on that. There are 2 tables, and 4 of the columns match on them, with unrelated columns on the side. If it was just 4 tables that matched, replication would work fine. – Brisbe42 Aug 24 '10 at 17:55
3 Answers
To avoid the loops you could have your triggers not do an update if the values are equal?
Replication works fine on tables with different structures, see Filtering Published Data.
As for triggers to avoid infinite loop, you would use context information to set up that you're currently in a 'replication' trigger so that you'd avoid looping, see Using Session Context Information:
• in the trigger, you check if CONTEXT_INFO() says you're already in a trigger.
• if YES, do nothing (return)
• if NO, SET CONTEXT INFO to reflect your operation
• copy the data
• when the 'replica' trigger fires, will find your context info and do nothing
• clear context info
• return
select * into NewTable from PastTable
Recently I stumbled across this pretty slick JS library called nodeJS that acts as server-side JS.
Its main feature is evented I/O, which gives I/O the inherent capacity to be completely non-blocking by using callbacks!
My question is, if this kind of completely non-blocking I/O mechanism existed in the past (given event driven I/O has been around for a long time), why aren't they more popular in high level languages like C# and Java (although Java has NIO implementation that supports non-blocking I/O)?
Currently, a simple file read/write operation results in complete I/O blocking which is not the case with event driven I/O.
I'd like to gain a better understanding of event driven I/O and how it is different from what we have in Java.
I am curious why you think Java/C# does not have async IO? – Kirk Woll Sep 24 '10 at 23:48
You mean using Java NIO package??. I've never used it but I know it's very capable. I will change the question to address this issue. – A_Var Sep 24 '10 at 23:55
4 Answers
Java: http://en.wikipedia.org/wiki/New_I/O
.NET: http://msdn.microsoft.com/en-us/library/dxkwh6zw.aspx
public IAsyncResult BeginReceive(
byte[] buffer,
int offset,
int size,
SocketFlags socketFlags,
AsyncCallback callback,
Object state
share|improve this answer
Kirk excellent!!. But can you explain more about New I/O. Is it event driven??. I am trying to compare it with nodeJS. The reason why nodeJS is so popular is because of it's event driven I/O. – A_Var Sep 24 '10 at 23:59
I'm not sure if it's "event" driven in the sense you mean, but this is an excellent tutorial: rox-xmlrpc.sourceforge.net/niotut – Kirk Woll Sep 25 '10 at 0:13
@A_Var: An event-driven engine is actually just an abstraction of state machines. In languages where there is no built-in event-driven engine most developers simply write their own state machine using a while loop and switch statements (or a dispatch table). Sometimes developers can be bothered enough to generalize their state machine implementation to make an API out of it resulting in an event-driven library for the language. An example of this is Python's Twisted framework. – slebetman Sep 25 '10 at 0:23
Tcl had event driven I/O from the 1990's (if I'm not mistaken). Certainly before 2000 because it was when tclhttpd beat Apache in benchmark tests sometime in 2000 that people really started paying attention to non-blocking I/O. When people saw that, they started re-writing web servers. One of the early result of that was Lighttpd: one of the first non-blocking web servers written in C. At that time, using event-driven I/O in tcl via the fileevent command was already considered standard practice in the tcl world.
AOLserver had (and still does) have a tcl core and it's hosting one of the busiest sites on the web (at least in the early days): http://www.aol.com/. Though the server itself is written in C, it uses tcl's C API to implement event handling and I/O. The reason AOLserver used tcl's I/O subsystem is because it uses tcl as a scripting language and the developers thought that since someone else have written it then might as well use it.
I believe AOLserver was first released in 1995. That should confirm that event-driven I/O was already available in tcl back in the mid 1990s.
Tcl is one of the earliest, if not the earliest language to have an event-driven engine built it. The event subsystem was originally implemented for the Tk library and was later merged into tcl itself.
As I understand it, there's a widespread perception that multithreading is easier than event-driven, since in multithreaded programming every thread has a simple sequential flow of execution, while event-driven consists of lots of small fragments of code.
Of course, this is better stated elsewhere, see for example Q.2 of state-threads FAQ.
Java has bad support even for basic file I/O. These languages are created for fast creation of portable GUI applications, and not for optimized and OS dependent low level I/O operations.
I can't tell, was this answer a joke? – Kirk Woll Sep 24 '10 at 23:55
This doesn't compare with evented I/O and criticising Java I/O. Yes Java non blocking I/O via multi-threading (not pure non-blocking I/O though) is different from event driven I/O(which is pure non-blocking I/O) but each has it's own pro's and con's. Please support your statement with examples. – A_Var Sep 25 '10 at 1:13
lol. Why dont you write a book. :) – Konza Apr 19 '13 at 10:47
I can't figure out the principles of dynamic programming and I really do want it. DP is very powerful, it can solve problems like this:
Getting the lowest possible sum from numbers' difference
So, can you suggest me good books or articles (preferably with examples with real code) which would explain me what is dynamic programming? I really want simple examples first of all, then I'll move on.
locked by Robert Harvey Oct 1 '11 at 20:55
closed as off topic by Robert Harvey Oct 1 '11 at 20:54
This is a great question – Zevan Dec 10 '10 at 18:55
Do you mean "Meta-Programming?" When I hear "dynamic programming" I think of pulling data from a database to modify the html being generated dynamically by the server. – Achilles Jan 28 '11 at 15:09
for example recursion, divide and conquer, backtracking and etc. – namco Jan 28 '11 at 15:13
@Achilles: When most people use the term "dynamic programming", they refer to Dynamic Programming, especially when they do so in the context of algorithms. – sepp2k Jan 28 '11 at 15:13
@Achilles: Meta-programming is certainly not dynamic programming. – dfens Jan 28 '11 at 15:49
14 Answers
Given a recursive function, say:

fib(n) = 0                       if n = 0
         1                       if n = 1
         fib(n - 1) + fib(n - 2) otherwise

We can easily write this recursively from its mathematic form as:

function fib(n)
  if(n == 0 || n == 1)
    return n
  else
    return fib(n - 1) + fib(n - 2)

This recomputes the same subproblems over and over, so we memoize: store each result in a map the first time it is computed and reuse it afterwards.

m = map(int, int)
m[0] = 0
m[1] = 1

function fib(n)
  if(m[n] does not exist)
    m[n] = fib(n - 1) + fib(n - 2)
  return m[n]
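The memoized pseudocode above maps almost line-for-line onto runnable Python (used here purely for illustration):

```python
# Memoized Fibonacci: each value is computed once, then looked up.
m = {0: 0, 1: 1}

def fib(n):
    if n not in m:
        m[n] = fib(n - 1) + fib(n - 2)
    return m[n]

print(fib(10))    # -> 55
print(fib(100))   # fast: ~100 distinct subproblems instead of ~2**100 calls
```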
v = values from item1..itemn
w = weights from item1..itemn
n = number of items
W = maximum weight of knapsack
function knapsack()
  for w=0..W
    m[0, w] = 0
  for i=1 to n
    m[i, 0] = 0
    for w=1..W
      if w[i] <= w
        if v[i] + m[i-1, w - w[i]] > m[i-1, w]
          m[i, w] = v[i] + m[i-1, w - w[i]]
        else
          m[i, w] = m[i-1, w]
      else
        m[i, w] = m[i-1, w]
  return m[n, W]
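Here is a runnable sketch of the same table-filling in Python; the 60/100/120 test case is the classic textbook example, not data from the original answer:

```python
def knapsack(values, weights, W):
    """0/1 knapsack: max total value of items fitting in capacity W."""
    n = len(values)
    # m[i][w] = best value achievable using the first i items with capacity w
    m = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            m[i][w] = m[i - 1][w]                      # option 1: skip item i
            if weights[i - 1] <= w:                    # option 2: take item i
                m[i][w] = max(m[i][w],
                              values[i - 1] + m[i - 1][w - weights[i - 1]])
    return m[n][W]

print(knapsack([60, 100, 120], [10, 20, 30], 50))   # -> 220
```

Each cell depends only on the previous row, which is what makes the bottom-up fill valid: every subproblem is solved before it is needed.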
Additional Resources
1. Introduction to Algorithms
2. Programming Challenges
3. Algorithm Design Manual
Example Problems
Very nice answer. I am still not getting the knapsack problem, but I will keep trying to understand until I am sure how it's done. Thank you. BTW, if you could write PHP code for knapsack problem, it would be the most amazing gift for Christmas I got evar! :) – good_evening Dec 10 '10 at 20:30
+1 - and some bottom-up algorithms are called "tabular", because they are based on a table of computed results. The tables are often computed "backwards" in order to ensure each item is completed before it needs to be referenced. Simple word-wrapping can use this approach (I think Sedgewick used it as an example). It's not called "tabular word-wrapping" but I think of it that way. There's a tabular LR parsing algorithm, too, and IIRC "packrat" is basically tabular LL parsing. – Steve314 Dec 10 '10 at 23:55
Nice answer, except when you say "By making the most optimal decision at each point (each subproblem), you are making sure that your overall result is the most optimal." That is a property (called "optimal substructure") that is required of the problem for DP to work correctly, not a fact that is automatically true for all problems. Many problems lack this property and thus can't be solved with DP. – j_random_hacker Dec 11 '10 at 19:47
@j_random_hacker This is entirely correct. I've updated the answer to reflect this. Thanks! – Ian Bishop Dec 12 '10 at 19:19
+1 for that, but I suggest tweaking the actual sentence I complained about as well :) – j_random_hacker Dec 12 '10 at 20:01
In short, Dynamic Programming is a method to solve complex problems by breaking them down into simpler steps, that is, going through solving a problem step-by-step.
1. Dynamic programming;
2. Introduction to Dynamic Programming;
3. MIT's Introduction to Algorithms, Lecture 15: Dynamic Programming;
4. Algorithm Design (book).
I hope this links will help at least a bit.
fourth link is about some other kind of dynamic programming :) – max taldykin Dec 9 '10 at 12:28
@max: Edited answer. Thanks! =) – Will Marcouiller Dec 9 '10 at 20:36
IMO dynamic programming isn't about breaking the problem into simpler steps, exactly, but about avoiding duplicate calculations when equivalent steps recur repeatedly by storing the results of those steps for later re-use. – Steve314 Dec 11 '10 at 0:01
@Steve314: Well then, tell this to Wikipedia (see first link). That's about the first sentence from it. ;-) You won't be able to to avoid duplicate calculation if you don't break the complexity, since you won't be able to get the whole complexity out of it. Though I understand and get your point, that is the second step, really, because you will be able to understand a repetition and factorize it once you can see that there's a repetition. Then, the code can be refactored to avoid duplication. – Will Marcouiller Dec 12 '10 at 3:36
the thing is, all of the algorithm paradigms involve breaking the problem into simpler steps. Divide and Conquer is closest to simply stating this must be done, yet still includes lessons on how to subdivide. The greedy method is more about how to select which subproblem to handle first, and so on - the unique thing about each particular paradigm is more than just subdividing the problem, since subdividing is what all the paradigms have in common. – Steve314 Dec 12 '10 at 11:18
Start with
If you want to test yourself, my choices of online judges are
and of course
You can also check good universities' algorithms courses
After all, if you can't solve a problem, ask on SO; many algorithm addicts exist here
See below
and there are many samples and article references in the above article.
After learning dynamic programming you can improve your skill by solving UVA problems; there are lists of UVA dynamic programming problems on the UVA discussion board.
The wiki also has good, simple samples for it.
Edit: for an algorithms book about it, you can use:
Also you should take a look at Memoization in dynamic programming.
I think Algebraic Dynamic Programming is worth mentioning. It's a quite inspiring presentation of the DP technique and is widely used in the bioinformatics community. Also, Bellman's principle of optimality is stated in a very comprehensible way.
ADP organizes DP algorithm such that problem decomposition into subproblems and case analysis are completely separated from the intended optimization objective. This allows to reuse and combine different parts of DP algorithms for similar problems.
There are three loosely coupled parts in an ADP algorithm:
• building search space (which is stated in terms of tree grammars);
• scoring each element of the search space;
• objective function selecting those elements of the search space, that we are interested in.
All these parts are then automatically fused together, yielding an effective algorithm.
This USACO article is a good starting point to understand the basics of DP and how it can give tremendous speed-ups. Then look at this TopCoder article which also covers the basics, but isn't written that well. This tutorial from CMU is also pretty good. Once you understand that, you will need to take the leap to 2D DP to solve the problem you refer to. Read through this Topcoder article up to and including the apples question (labelled intermediate).
You might also find watching this MIT video lecture useful, depending on how well you pick things up from videos.
Also be aware that you will need to have a solid grasp of recursion before you will successfully be able to pick up DP. DP is hard! But the real hard part is seeing the solution. Once you understand the concept of DP (which the above should get you to) and you're given the sketch of a solution (e.g. my answer to your question), then it really isn't that hard to apply, since DP solutions are typically very concise and not too far off from iterative versions of an easier-to-understand recursive solution.
You should also have a look at memoization, which some people find easier to understand but it is often just as efficient as DP. To explain briefly, memoization takes a recursive function and caches its results to save re-computing the results for the same arguments in future.
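To make that concrete, here is a small, generic memoization sketch in Python; the lattice-path function is a made-up example, not the linked question's problem:

```python
# A generic memoize decorator: caches each (args -> result) pair so the
# wrapped recursive function computes every subproblem only once.
def memoize(f):
    cache = {}
    def wrapped(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapped

@memoize
def count_paths(r, c):
    # Number of monotone lattice paths through an r-by-c grid:
    # exponential as plain recursion, linear in r*c once memoized.
    if r == 0 or c == 0:
        return 1
    return count_paths(r - 1, c) + count_paths(r, c - 1)

print(count_paths(10, 10))   # -> 184756
```

The decorated function is still written as straightforward recursion, which is exactly the appeal of memoization over hand-rolled bottom-up DP.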
Only some problems can be solved with Dynamic Programming
Since no-one has mentioned it yet, the properties needed for a dynamic programming solution to be applicable are:
• Overlapping subproblems. It must be possible to break the original problem down into subproblems in such a way that some subproblems occur more than once. The advantage of DP over plain recursion is that each of these subproblems will be solved only once, and the results saved and reused if necessary. In other words, DP algorithms trade memory for time.
• Optimal substructure. It must be possible to calculate the optimal solution to a subproblem using only the optimal solutions to subproblems. Verifying that this property holds can require some careful thinking.
Example: All-Pairs Shortest Paths
As a typical example of a DP algorithm, consider the problem of finding the lengths of the shortest paths between all pairs of vertices in a graph using the Floyd-Warshall algorithm.
Suppose there are n vertices numbered 1 to n. Although we are interested in calculating a function d(a, b), the length of the shortest path between vertices a and b, it's difficult to find a way to calculate this efficiently from other values of the function d().
Let's introduce a third parameter c, and define d(a, b, c) to be the length of the shortest path between a and b that visits only vertices in the range 1 to c in between. (It need not visit all those vertices.) Although this seems like a pointless constraint to add, notice that we now have the following relationship:
d(a, b, c) = min(d(a, b, c-1), d(a, c, c-1) + d(c, b, c-1))
The 2 arguments to min() above show the 2 possible cases. The shortest way to get from a to b using only the vertices 1 to c either:
1. Avoids c (in which case it's the same as the shortest path using only the first c-1 vertices), or
2. Goes via c. In this case, this path must be the shortest path from a to c followed by the shortest path from c to b, with both paths constrained to visit only vertices in the range 1 to c-1 in between. We know that if this case (going via c) is shorter, then these 2 paths cannot visit any of the same vertices, because if they did it would be shorter still to skip all vertices (including c) between them, so case 1 would have been picked instead.
This formulation satisfies the optimal substructure property -- it is only necessary to know the optimal solutions to subproblems to find the optimal solution to a larger problem. (Not all problems have this important property -- e.g. if we wanted to find the longest paths between all pairs of vertices, this approach breaks down because the longest path from a to c may visit vertices that are also visited by the longest path from c to b.)
Knowing the above functional relationship, and the boundary condition that d(a, b, 0) is equal to the length of the edge between a and b (or infinity if no such edge exists), it's possible to calculate every value of d(a, b, c), starting from c=1 and working up to c=n. d(a, b, n) is the shortest distance between a and b that can visit any vertex in between -- the answer we are looking for.
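The recurrence translates almost directly into code. A minimal Python sketch (an addition to the answer, with vertices renumbered 0..n-1 and the table updated in place, so after pass c the entry d[a][b] holds d(a, b, c)):

```python
INF = float("inf")

def floyd_warshall(n, edges):
    """All-pairs shortest paths.  `edges` maps (a, b) -> edge length;
    vertices are 0..n-1.  d starts as d(a, b, 0): direct edges only."""
    d = [[0 if a == b else edges.get((a, b), INF) for b in range(n)]
         for a in range(n)]
    for c in range(n):              # now allow vertex c in between
        for a in range(n):
            for b in range(n):
                if d[a][c] + d[c][b] < d[a][b]:
                    d[a][b] = d[a][c] + d[c][b]
    return d
```

Three nested loops over n vertices give the well-known O(n^3) running time.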
+1 for a really nice explanation of when dynamic programming is a suitable solution. – GWW Dec 11 '10 at 21:32
Almost all introductory algorithm books have some chapter for dynamic programming. I'd recommend:
If you want to learn about algorithms, I have found MIT to have some quite excellent videos of lectures available.
For instance, 6.046J / 18.410J Introduction to Algorithms (SMA 5503) looks to be quite a good bet.
The course covers dynamic programming, among a lot of other useful algorithmic techniques. The book used is also, in my personal opinion, quite excellent, and very worthy of a buy for anyone serious in learning about algorithms.
In addition, the course comes with a list of assignments and so on, so you'd get a possibility to exercise the theory in practice as well.
Related questions:
As part of a correspondence Mathematics MSc I did a course based on the book http://www.amazon.co.uk/Introduction-Programming-International-mathematics-computer/dp/0080250645/ref=sr_1_4?ie=UTF8&qid=1290713580&sr=8-4 It really is more of a mathematical angle than a programming angle, but if you can spare the time and effort, it is a very thorough introduction, which seemed work for me as a course that was run pretty much out of the book.
I also have an early version of the book "Algorithms" by Sedgewick, and there is a very readable short chapter on dynamic programming in there. He now seems to sell a bewildering variety of expensive books. Looking on amazon, there seems to be a chapter of the same name at http://www.amazon.co.uk/gp/product/toc/0201361205/ref=dp_toc?ie=UTF8&n=266239
Planning Algorithms, by Steven LaValle has a section about Dynamic Programming:
See for instance section 2.3.1.
MIT Open CourseWare 6.00 Introduction to Computer Science and Programming
If you try dynamic programming in order to solve a problem, I think you will come to appreciate the concept behind it. In Google Code Jam, participants were once given a problem called "Welcome to Code Jam"; solving it revealed the use of dynamic programming in an excellent way.
We are in the process of migrating our bug tracking to Bugzilla from a really old version of Trac, and I am running out of Advil.
We have a legacy application that has been around for a long time. Mix in the fact that our version management has been through a few iterations, and it has generated a lot of different versions in the wild. To make matters worse, because of contractual limitations it is not always possible to upgrade clients to the latest and greatest, so we must branch, fix, test and release on the version they currently have, yielding yet another version number.
The end result is that the version combo box is ludicrously long. Lastly, for various reasons, we want to track three different pieces of version information: the version in which the bug was found (version), the version in which we plan to fix the bug (milestone), and the version in which it has ultimately been fixed (open to suggestions). Here is my problem, in fact: the last can actually be multiple numbers where we did a retroactive fix for some of these customers (this happens VERY often).
This is where I need your collective wisdom :
How do you keep track of these versions (found, planned and multiple fixed) in Bugzilla?
What are the best practices around linking versions and bug tracking ?
It seems that cloning the bug for each version is a good way to track, thus the target version is always tracked in the milestone as well as the fixed version, and the buggy version is always the native version.
Also to have each clone block the original bug make it a good way to trace the history back to the original submission.
Although I have accepted the answer I still welcome your input.
4 Answers
Often, if we need to fix something in multiple released versions (generally branches in the source code repository), the bug will be cloned for each branch so that all the commits and release status can be tracked separately. I think the only time we don't do this is when the change is not directly related to the codebase itself and cannot be fixed simply by updating our libraries.
As for version tracking in general, this has struck me as a reasonable way to do things, given that we generally only need to support 2-3 major versions (plus the trunk) at any time. If you have multiple disjoint versions that need supporting, e.g. customer-specific deployments, then things are going to be harder to track. (Arguably this is going to cause headaches in general and it would be better to unify things to a more central version theme).
Yes, it does cause headaches, which is the main reason why we want to move to Bugzilla now and later to Subversion for the code. Although we will be cleaning up a lot of complexity, enough remains to be problematic – Newtopian Jan 14 '09 at 4:36
Using a clone for each "active release branch" seems to be the closest thing to a standard way to handle this as exists. For tracking what is desired or needed to be done in the next release, it works well. But it sucks anytime you want a unified view of the bug, information is spread through a multitude of clones. At best the clones are linked as dependencies of the "main" bug, and the summary helps with which branch they refer to, allowing you to hover over each one to see the branch and status. Further, it dilutes searches, you now have n copies of the same bug. – Peter May 14 at 1:00
I use Bugzilla to keep track not only of bugs, but also of new features, enhancements, and vague ideas. For each planned and released version, I have a Tracking Bug (something that I saw on the original Mozilla bugzilla, and found to be useful).
So if you have a bug report, you enter the bug with the version number that it was reported. Create additional bugs (one for each version you plan to fix it in) which all depend on (block) the original bug and block the version-specific Tracking Bugs.
If all bugs blocking the original bug are closed/verified (whatever your QA implements), you can also close the original bug.
We do have an alternate means to track tasks, but we also track requirements in Bugzilla; mainly, everything that touches the source code will be tracked in Bugzilla. Thanks – Newtopian Jan 14 '09 at 4:40
We are using jira and still have this problem. I think it is a question of requirements and how are versions used rather than any one tool.
Who uses versions and how do they use them?
How are versions related to milestones in a project plan?
We use a four-part version number (major.minor.patch.buildNo). buildNo is the SVN head revision number at the time of the build. Each version is stored in JIRA, and issues have an affects-version and a fixed-in-version field, each a multi-select.
After a short while we have many versions. Jira does allow us to control the list in two ways:
1. Archive versions (greyed out from the pick list)
2. Merge versions (rolls several versions together into a new version - no undo)
We have used Archive, but have avoided Merge due to the lack of undo. So we still have a list of many, many versions.
I'm sure you could probably accomplish a merge action in Bugzilla with some scripting and time, the question is: when is it OK to merge several older versions together?
If I have released, do I need to know that I have 17 builds between start and release? Do I need to keep the knowledge of a bug being found in build 1, fixed in 2, found again in 7, fixed again 9? Or is Found in release 1.0.0 fixed in release 1.0.1 good enough?
I'm going to ask a larger question on this topic later today, but I know the basic answer already: it depends on how your team wants to track things.
Implementation is fun, but it all comes down to requirements, goals and working back from user experience to solution. Which is rough when people don't necessarily know how that want to use something that doesn't quite exist in the form they'd like to use.
I like the build numbering scheme.. makes it sooo much clearer than a synthetic number. Thanks. On the rest basically it is all a matter of what metric we wish to compile against the software version and what granularity we need it to be. – Newtopian Aug 19 '09 at 1:15
I was looking for a similar feature in TFS, and while doing some investigation, I found that there is an enhancement request to manage "sightings" in Bugzilla: "Bug 55970 - (bz-branch) Bugzilla needs to deal better with branches (implement sightings)": https://bugzilla.mozilla.org/show_bug.cgi?id=55970
There is also a proposed design: https://bug55970.bugzilla.mozilla.org/attachment.cgi?id=546912
For information, we are going to implement something similar in TFS 2010, with a "Bug Parent" or "Bug Master" to hold the information about the bug itself (repro steps, severity, technical info, impacted components...), that can have child of type "Bug Child" or "Sighting" that will hold the information specific to a given branch (target milestone, priority, specific information for that branch...).
I am dynamically creating an IFrame then adding a script tag to it with code that should execute when the frame loads, however it is never executed.
$stage.html("<iframe src='javascript:;' id='frame'></iframe>");
$frame = $("#frame");
frame = $frame[0].contentDocument ? $frame[0].contentDocument : $frame[0].contentWindow.document;
script = frame.createElement('script');
head = frame.getElementsByTagName("head")[0];
script.innerHTML = "window.onload = function() { alert('loaded'); }";
head.appendChild(script);
I am assuming the window onload event is triggered before the code is added to the iframe. Is there any way to ensure it is called? It also needs to be contained within window.onload, because this is for a JS-based code editor, so the iframe has to react as it would in a browser window. Cheers.
I think the window is already loaded so you're code missed the onload event. If it's not already loaded then the getElementsByTagName will fail. I'm not sure about this though so I didn't make it a formal answer – qwertymk Jan 20 '11 at 5:12
Good point. Is there another way I can inject code into an iframe and have it execute on window.onload? – Louis Jan 20 '11 at 5:14
Maybe on .ready ? – Stefanos Kalantzis Jan 20 '11 at 6:06
1 Answer
Solved by using document.write() on the iframe's window.
frame = document.createElement("iframe");
frame.src = "javascript:;";
document.body.appendChild(frame);
win = frame.contentWindow;
win.document.open();
win.document.write("<script>window.onload = function () { alert('loaded'); };<\/script>");
win.document.close();
Your Answer
|
global_05_local_5_shard_00000035_processed.jsonl/34767
|
Take the 2-minute tour ×
I'd like to know if there is a way to check how many references a Java object has. As far as I could check the only way to do that is using JVMTI through a JNI interface. Is there a pure java (without using native libraries) solution to get this information?
We are developing an educational tool for data structure animation (to be used with students' implementations of certain algorithms), so it would be very nice if we could check for "released" objects in the most non-intrusive way (I'm trying to avoid forcing the user of this tool to call a method such as ObjectReleased(objRef) in order to update the data structure animation for an element removal or something similar).
Reference counts aren't tracked in a JVM. The only way to know is to count them. – Gabe Mar 17 '11 at 2:25
There is a library that can be used to get notification when an object is garbage collected. Please refer to this link for more details sourceforge.net/projects/gcradar – R.daneel.olivaw Dec 13 '13 at 5:28
@AmrenduPandey, do not simply "bold some letters" -- that is not an appropriate way to improve a question. Edits should be substantial, not just be formatting changes. – Charles Jan 28 '14 at 5:07
2 Answers
From your description, it seems you care less about the actual count of references than to simply know when an object has been collected. If this is the case, you can use WeakReference or PhantomReference to determine when a referenced object is ready for finalization.
Hope this helps.
Java doesn't offer this option natively as far as I know.
Here you have some guidance on how to do it manually:
I have two collections: news and subscribes. Every news item has an array of strings - "tags". Every subscribe also has such "tags".
A subscribe's news items are the items that have all of the subscribe's tags, and possibly more. A news item's subscribes are the subscribes whose tags are all among the item's tags, with none extra.
When I want to get a subscribe's news, I make this request with Ruby Mongoid:
NewsItem.where(:tags.all => @subscribe.tags)
How can I get all subscribes for some news item?
For example:
item.tags = ["foo", "bar"]
subscribe1.tags = ["foo"]
subscribe2.tags = ["bar"]
subscribe3.tags = ["foo", "bar"]
subscribe4.tags = ["foo", "bar", "baz"]
item.subscribes should give subscribes 1..3, but subscribe4 should not be included, because it has a "baz" tag that is not included in item.tags
I'm not 100% clear on what is expected here. Would you be able to provide: sample objects, expected query results. That will help us to craft the appropriate query. – Gates VP Apr 26 '11 at 6:28
I've edited example, so I hope you will understand it. – sandrew Apr 26 '11 at 14:07
2 Answers
Based on your description, you don't really want an $all. Instead, you are looking for some form $subset operator. There is JIRA request for just such a thing, however it is not implemented at this time.
That's actually what I need. Sad, that it was not implemented yet. – sandrew Apr 27 '11 at 8:09
You should perform the matching "on newsitem creation", so you do this operation on-demand and frequently. Turn the query around and do
Subscriber.all_in(tags: news_item.tags)
to find subscribers having all the tags of the newsitem. Is this how you wanted it?
In any case, with many subscribers, this will quickly get very intensive. You may use delayed job to handle it in the background. You should experiment with flattening arrays or setting other keys in order to index and speed up search.
no, that's not that I want. the way you showed (with all_in) it will give ["foo", "bar", "baz"] subscribe for news item ["foo", "bar"]. The reason I'm asking is that any of MongoDB standard operators (such as $in, $nin, $all etc) do not provide this functionality. Also, I use background job, but in this job to perform news items list caching I need to get subscribes lists for each news item. – sandrew Apr 25 '11 at 9:37
This simple code doesn't work... can anyone help me find the problem? It gives me a 500 internal server error.
include "../twitter-async/EpiCurl.php";
include "../twitter-async/EpiOAuth.php";
include "../twitter-async/EpiTwitter.php";
function init($oauth_token = null, $oauth_token_secret = null)
return new EpiTwitter(TWITTER_CONSUMER_KEY,TWITTER_CONSUMER_SECRET,$oauth_token,$oauth_token_secret);
$twitter = init();
echo "success";
<title>Untitled Document</title>
Well, there's a typo in your init function (TWITTER_CONSUER_SECRET vs TWITTER_CONSUMER_SECRET), but I don't think that will cause a 500 error. – Sean Walsh May 21 '11 at 3:46
yup I did that still doesnt work – koool May 21 '11 at 3:51
Create a file that has the contents <?php phpinfo(); ?> and put it in the same place as the file you are receiving the 500 error on. If you still receive a 500 error when you hit that file, you'll know that something is wrong with the server config and not necessarily with this code. – Sean Walsh May 21 '11 at 4:02
nope server is fine but I cannot find curl multi init in phpinfo – koool May 21 '11 at 4:07
1 Answer
Is there any error handling in the EpiTwitter class?
Try this:
error_reporting(E_ALL);
ini_set("display_errors", "1");
echo (init()) ? "success" : "error";
still same error – koool May 21 '11 at 3:53
Possible Duplicate:
Java - tell if a String is interned?
I would like to have a list of the strings that have been interned by a JVM, either because they are literals or because the intern method has been called on them. How can I generate it?
marked as duplicate by Brian Roach, pkaeding, EJP, Donal Fellows, Graviton May 28 '11 at 1:37
out of curiosity: why do you need it? – Bozho May 25 '11 at 22:18
This is not a duplicate. the first question is about a string, the second one about obtaining a list. – Guillaume Coté May 26 '11 at 14:37
I have an out-of-memory problem in the perm gen space; seeing which strings are added could help me understand why the perm gen is growing so much at a certain point. – Guillaume Coté May 26 '11 at 14:40
I added another related question, since this one was not understood as I intended : stackoverflow.com/questions/6180006/… – Guillaume Coté Jul 8 '11 at 15:22
2 Answers
You can get the total size of all interned strings as:
$ jmap -permstat 543
Attaching to process ID 543, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 19.1-b02-334
14584 intern Strings occupying 1603648 bytes.
finding class loader instances ..Warning: skipping invalid TLAB for thread t@44819
The question is about the list, not the number. – Guillaume Coté May 26 '11 at 14:34
How can I generate it?
You can't within a running program. There is no API for iterating the intern'd string pool.
You could in theory do it via a debug agent. It would involve:
1. Traversing the reachable objects to find ALL String instances.
2. For each one, testing if str == str.intern().
However, this process is going to be expensive, and is going to pollute the string pool (the permgen heap) with lots of Strings that have been interned unnecessarily. Besides, this only works when all application threads have been stopped by the debug agent, so an application can't use this approach to examine its own string pool.
This is theoretically doable on a small application, but it will fail with an out of memory in perm gen space with a huge application. Even if I increase the perm gen space enough, the goal is to make comparison of the list at different point. It won't be possible to do a valid comparison since the lecture modify the result. – Guillaume Coté May 26 '11 at 14:44
Is there a simple way to use libraries intended for the Arduino IDE with the C and assembly code I write for AVR-G++/AVR-GCC?
I'm trying to use the Adafruit Wave Shield library, but simply including the header and cpp files doesn't do much good. Can I compile it somehow and link it to my C code? Or perhaps just find a way to make it compile with my C code.
Currently, when I try to do something simple like:
#include "WaveHC/WaveHC.h"
SdReader card;
I am greeted with:
70: undefined reference to `SdReader::init(unsigned char)'
2 Answers
I use this makefile to compile all my code for Arduino without using IDE. You can use both Arduino libs as well as user libs in this makefile.
Update: There is also a tutorial, which explains how to setup and use this makefile.
You can build the Arduino code with CMake. I have built largish Arduino projects without using the IDE this way. You can use whatever tools you want to build the Arduino code, it is just a C/C++ library. You mainly need to make sure you have all of the preprocessor settings right (F_CPU? Maybe some others).
Building with CMake might help you. Basically, I would make a library file for the Arduino library, a library file for the shield library, and an EXE file for your code.
I'm looking for a way to retrieve the equivalent of what's shown on https://www.facebook.com/me/allactivity. Both FQL & Open Graph are fine.
On the Graph API, /me/feed shows something similar, but it's missing likes, tags, and other things.
Any ideas are appreciated.
1 Answer
This is currently not possible. There is no endpoint to get to this data.
The only available data will be anything that appears as a post (e.g. YouTube, Twitter). For this you can query the stream table via FQL. You can also try the endpoints /music.listens and /video.watches, but they will retrieve data for the current application only, not all.
Twinnrova 9 Jan 2013 at 16:53
Cooperative Play
Hey guys, I want to set up Co-Op to play with a friend. How would I go about doing that? I'm not looking for ZDaemon or Skulltag or anything like that.
Showing comments 1-4 of 4
Heks 9 Jan 2013 at 20:41
From Vanilla Doom? The typical answer is "way too much hassle to be worth it." Chocolate Doom is almost identical to Vanilla (DOS) Doom but designed to run on modern hardware without emulation (DOSBox). Give it a try and use Doomseeker to see Chocolate Doom servers.
Twinnrova 9 Jan 2013 at 20:53
Thank you, I will look into it and tell you how it goes. :)
Twinnrova 11 Jan 2013 at 22:45
I've come to the conclusion that it would be a lot less hassle to just chill and play solo taking turns than setting all this up. Thank you for the advice though. :)
Heks 12 Jan 2013 at 13:18
Have you tried Odamex? If you use the "exec coop-doom.cfg" command when you launch the server, it emulates the original DOS games pretty well.
Showing comments 1-4 of 4
Posted: 9 Jan 2013 at 16:53
Posts: 4
View Single Post
Republic Veteran
Join Date: Jul 2012
Posts: 99
# 101
05-14-2013, 02:04 PM
Still waiting to hear if the UI color schemes can be set up so that I can select one of the specific types to use on certain characters. As it stands, selecting "Default" is the only way to make sure that your LCARS color scheme is faction-specific. If you select any of the other options, they get universally applied to all characters on an account. It would be nice if I could use the "Voyager" color scheme on my Federation characters, while my Romulan and Klingon toons keep their default settings.
I want to fire the "svn update" command from a PHP script, which runs as the "apache" user. How can I give the apache user permission to execute "svn update"?
1 Answer
I would create a command in the sudoers file, and then use sudo -u user-who-owns-svn-repo svn update in your PHP script.
The changes to /etc/sudoers would be similar to:
Cmnd_Alias SVN = /usr/local/bin/svn
apache ALL=(ALL,!root,!#0) NOPASSWD: SVN
See the Sudoers manual for more info.
If you want tighter controls, make shell scripts that have the specific SVN commands and only allow Apache access to those. For example:
File /path/to/my/project/update.sh:
svn update /path/to/my/project/svn-files
File /etc/sudoers:
Cmnd_Alias SVN = /path/to/my/project/update.sh
(and don't forget to chmod +x path/to/my/project/update.sh)
I want to tidy a big, unsorted folder of .mp3s. I'm sure there is a tool that can read the MP3 tags and copy the files to where they belong, in Linux, right?
migrated from stackoverflow.com Oct 1 '10 at 16:27
3 Answers
The tool I maintain, audiotag, can do that.
Though not really an SO question...
sudo apt-get install tagtool
(if you are on debian/ubuntu derivative)
There are many existing tools that do that. Ex-falso is one, thunar's bulk-rename is another, for example.
I purchased a Dell laptop which came with pre-installed Ubuntu Linux. I installed Windows 7 Ultimate. During installation I deleted the existing partitions, thinking that would completely remove Linux from my machine. Now when I start my machine, it takes me to Windows 7 as expected. But I think Linux is not completely removed, as I find nearly 100 GB missing (my HDD is 320 GB and C:, D:, E: come only to 220). In Computer Management, I see 102 GB unallocated. How can I remove Linux completely from my machine and get the lost 100 GB back?
3 Answers
If it is unallocated, just create a Windows partition and use it to store data.
If there were Linux in that space, you wouldn't see "unallocated".
To be more specific - you did delete the partition, but the space then just sits there. "Unallocated" can be read as "unpartitioned". You need to either make a new partition, or extend one of your existing partitions. – Shinrai Feb 3 '11 at 18:06
Try using open source Gparted
It lets you add partitions, delete partitions, resize partitions. My experiences have been only using it on non-RAIDed drives. The tool is fairly versatile. You can use it for a variety of file-system types.
You can either:
• right-click on the partition you want to extend to fill the extra space (which might only be the E: one; I'm not sure if consumer versions of Windows can span volumes across different partitions) and click Extend Volume.
• right-click on the empty space and make a new partition (which will initially present as a new drive on your system, but you can change to be a folder on an existing drive if you so desire)
My computer has suddenly become unstable. I run Ubuntu 10.10 and the setup has run well for ages. However, about 10 minutes ago the computer became extremely unstable.
I was just about to push some code to the android emulator and this problem happened. The kernel panic is fine, but the computer wouldn't reboot properly and I had to turn the power at the wall off and on again. The BIOS then asked me to setup the computer, as settings had been damaged.
Next, as the computer attempted to boot, I got a stack trace from somewhere after the computer displayed "STARTING ASUS EXPRESS GATE", and it said the root file system was unknown and wouldn't load.
I rebooted, and the system booted and Ubuntu checked the disks; however, about 1-2 minutes after logging in, the computer now freezes with about 50% of the screen randomly covered in green.
Has anybody got any suggestions as to what could be the problem? (I hope I've posted this to the correct site.)
1 Answer
Very likely a hardware problem there, possibly motherboard-related. A good easy way to eliminate software as a problem would be to boot from a livecd, and see if you get the same issues.
Hm... I opened up the computer and had a fiddle with the wires and it seems to be working. Thanks anyway – Joe Simpson Mar 25 '11 at 20:57
Just an update: The problem happened again and I got the motherboard replaced. To anyone who has this: Just replace your motherboard. – Joe Simpson Apr 4 '11 at 18:41