How the PVS-Studio Team Improved Unreal Engine's Code - ivank
https://www.unrealengine.com/blog/how-pvs-studio-team-improved-unreal-engines-code?hn
======
nickpsecurity
I encouraged game developers in the past to use static analysis tools to help
iron out more bugs. It's great to see Unreal Engine taking that step. A big
name like this will hopefully inspire others to take the same step.
------
GFK_of_xmaspast
Why is "(GEngine == nullptr || !GEngine->UseSound())" safer than "(GEngine ||
!GEngine->UseSound())"? Is this some kind of precedence rule, where the former
short-circuits but the latter doesn't?
~~~
UnquietTinkerer
The two conditions are not equivalent - the first is true if GEngine is null or
does not use sound, while the second is true whenever GEngine is _not_ null and
is undefined otherwise, because it dereferences a null pointer. I assume the
original author intended to write "(!GEngine || !GEngine->UseSound())".
------
jhasse
How can the Memcmp/Memcpy bug go unnoticed??
~~~
Strom
In code bases as big as Unreal Engine 4, you can easily remove as much as 10k
lines of code and nobody will notice. It's not uncommon to have that much (and
more!) dead code even. A single missing memcpy doesn't mean much, unless it's
part of a hot code path.
Of course, all of these bugs in rarely visited code paths still matter for security.
The Genius: Mike Burrows' self-effacing journey through Silicon Valley - neilc
http://www.stanford.edu/group/gpj/cgi-bin/drupal/?q=node/60
======
xlnt
this article sure spends a lot of words discussing physical appearance,
clothing, food, and weather. i wish i lived in a world where people focussed
on more interesting things.
PayPal is down - blazee
PayPal has been down for almost two hours, at least in Europe (Germany, for example). Since there is no official statement (as always), many users are reporting that they can't log into their accounts - me included. Several reports can be seen here: http://allestörungen.de/stoerung/paypal, in English: http://downdetector.com/status/paypal, or on Twitter.
======
Petrakis
[http://www.isitdownrightnow.com/paypal.com.html](http://www.isitdownrightnow.com/paypal.com.html)
From the comments there, it seems it has been happening for 1 or 2 days too.
Jerks and the Startups They Ruin - ziszis
https://mobile.nytimes.com/2017/04/01/opinion/sunday/jerks-and-the-start-ups-they-ruin.html
======
Doches
This is basically a more serious riff on Dan Lyons's 'Disrupted' -- and having
read that book, I'm a little disappointed in this editorial, as it doesn't
bring anything particularly new to this (Uber) discussion.
His previous book, with its first-person descriptions of life inside Hubspot,
rang _eerily_ true to me as a late-30s employee at a Bay-area unicorn, and has
caused me to re-evaluate a lot of things around me. Disrupted is hilarious,
depressing, and fucking spot-on.
How to assign partial credit on an exam of true-false questions? - one-more-minute
https://terrytao.wordpress.com/2016/06/01/how-to-assign-partial-credit-on-an-exam-of-true-false-questions/
======
gjm11
As one of the commenters points out, Tao has rediscovered the notion of a
_proper scoring rule_ --
[https://en.wikipedia.org/wiki/Scoring_rule#Proper_scoring_ru...](https://en.wikipedia.org/wiki/Scoring_rule#Proper_scoring_rules)
-- and the specific (rather nice, if you don't mind all the scores being
negative and the infinite penalty when something happens that you said
definitely wouldn't) _logarithmic scoring rule_ --
[https://en.wikipedia.org/wiki/Scoring_rule#Logarithmic_scori...](https://en.wikipedia.org/wiki/Scoring_rule#Logarithmic_scoring_rule).
The Brier score -- you score minus the average squared error between your
prediction and [1 for the right outcome, 0 for the others] -- is also a proper
scoring rule (i.e., incentivizes you to report your probabilities accurately)
and doesn't penalize maximally-wrong answers infinitely. For some purposes
it's a better choice than the logarithmic score.
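For concreteness, here is a minimal Python sketch of the two rules for a single true/false question (the function names are my own, and the choice of log base only rescales the logarithmic score):

    import math

    def log_score(p_true, outcome):
        # Logarithmic score: log of the probability you assigned to the realized answer.
        # Always <= 0, and -infinity if you assigned probability 0 to what actually happened.
        p = p_true if outcome else 1.0 - p_true
        return math.log(p) if p > 0 else float("-inf")

    def brier_score(p_true, outcome):
        # Negated Brier score: minus the squared error between the stated probability
        # and the indicator of the realized answer. Never worse than -1.
        y = 1.0 if outcome else 0.0
        return -((p_true - y) ** 2)

    # A maximally confident wrong answer is penalized infinitely by the log score,
    # but only by -1 under the Brier score.
    print(log_score(1.0, False))    # -inf
    print(brier_score(1.0, False))  # -1.0

Both incentivize honest reporting; they differ mainly in how hard they punish confident mistakes.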
~~~
shoyer
In the traditional formulation, the Brier score is 0 for correct guesses with
p=1, 1 for incorrect guesses with p=1 and 0.25 for guesses with p=0.5:
[https://en.wikipedia.org/wiki/Brier_score#Example](https://en.wikipedia.org/wiki/Brier_score#Example)
When translated into a grading rule, this gives us a score range from -3 to
+1, with 0 for uncertain guesses. So it's not as harsh as logarithmic scoring,
but still penalizes confident wrong guesses more harshly than it rewards
confident correct guesses.
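If I read that right, one affine rescaling of the Brier score consistent with those numbers (this mapping is my guess at the translation, not a formula taken verbatim from the post) is score = 1 - 4*(p - y)^2, where y is 1 for a correct answer and 0 for a wrong one:

    def graded_brier(p, correct):
        # One quadratic grading rule matching the -3..+1 range quoted above
        # (an illustrative rescaling, not necessarily the exact one intended).
        y = 1.0 if correct else 0.0
        return 1.0 - 4.0 * (p - y) ** 2

    print(graded_brier(1.0, True))    #  1.0  confident and right
    print(graded_brier(1.0, False))   # -3.0  confident and wrong
    print(graded_brier(0.5, True))    #  0.0  admitting you don't know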
~~~
Retric
Sounds interesting. The problem with logarithmic scoring is you're penalized for
stating your actual percentages vs gaming the system.
EX: If you're 99% sure of each question and there are 100 questions, then getting
1 wrong gives the best score when you state 99%. But the upside of happening to
get all 100 correct is only about +1 point, while there is a long tail of
randomly missing several.
Further, scores are not linearly valuable: trading a lower maximum score for a
higher chance of getting an A is a positive result.
Ideally you should set things up so accurate estimates give the best results.
~~~
shoyer
> Ideally you should set things up so accurate estimates give the best
> results.
This is true for both the logarithmic score and the Brier score. These are
both "strictly proper scoring rules", which is a formal way of stating your
requirement that it should not be possible to game the system. In both cases,
you get the highest score (on average) if your guess is the true distribution.
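A quick numerical check of that "strictly proper" property, here for the log score (the same check works for the Brier score; the variable names are mine):

    import numpy as np

    def expected_log_score(reported, true_p):
        # Expected log score when the answer is "true" with probability true_p
        # and you report probability `reported` for "true".
        return true_p * np.log(reported) + (1 - true_p) * np.log(1 - reported)

    true_p = 0.7
    grid = np.linspace(0.01, 0.99, 99)
    best = grid[np.argmax([expected_log_score(r, true_p) for r in grid])]
    print(best)  # ~0.70: reporting your true belief maximizes the expected score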
~~~
Retric
The problem is that if you're X% sure on each of a set of questions and you want
to get an A, you want to minimize your chances of getting a bad grade more than
you want to maximize your score. Alternatively, there is no point in setting
your odds for any question below the level that corresponds to the minimum
number of correct answers needed to pass. EX: If you need to get X% correct to
pass, then don't set your odds below X%.
For a similar example, consider that what you bet on the final question in
Jeopardy does not depend just on your estimate of your odds, but on other
factors as well.
Of course, even thinking about this stuff is distracting; you may be better off
picking from a very small set of odds, say 99%, 90%, 70%, and 50%.
------
whack
There used to be a Decision Science class at Stanford, where this exact scheme
was used. Students were warned all the time to never indicate 100% certainty
on any question, because if you ever did that and turned out to be wrong, you
would fail the entire course because of that one question: even if it happened
to be a minor homework assignment. I always thought this was a great way to
teach people the lesson that you should (almost) never claim 100% certainty in
anything, and that you should view knowledge through a probabilistic
perspective.
~~~
ianai
Unless you're arguing with someone that will take any missing confidence as
sign that you're completely wrong.
~~~
avn2109
>> "Unless you're arguing with someone that will take any missing confidence
as sign that you're completely wrong."
This is 90% of people.
~~~
marxidad
No. You have to stick to your guns that uncertainty is a valid disposition
(maybe).
------
Houshalter
I'm working on something like this right now. We asked 8 trivia questions on a
survey of users of our website, and had people estimate the probability that
they got each answer right.
First of all it seems like everyone had exactly the same probability of
getting any random question right. Some people got every question right, and
some got none right. But in exactly the proportion you would expect by random
chance - that some were just particularly lucky or unlucky.
The second finding is that probability estimates did not vary much with the
number of questions actually gotten right. Everyone expected to get about 44% of
questions right, regardless of how many they actually got right. People who only
got 1 right assigned the same probabilities as people who got 5 right.
Likewise, people who estimated higher probabilities of getting questions right
got the same number of questions right as everyone else. And a decent percentage
of people were underconfident too, assigning probabilities that were too low
(but getting the same number of questions right).
Lastly, people are really uncalibrated. Some people are just bad at estimating
probability. When they say "80% chance of something" they mean that thing will
only happen 58% of the time. You can be trained, in a relatively short time,
to become calibrated. By estimating probabilities and seeing how many you
actually got right. But most people aren't trained, so it would be a bit
unfair to put this on a real test.
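A minimal sketch of the calibration check described above - bucket answers by the stated probability and compare against the empirical hit rate (the data layout and names are mine):

    from collections import defaultdict

    def calibration_table(answers):
        # answers: list of (stated_probability, was_correct) pairs for one person.
        buckets = defaultdict(list)
        for stated, correct in answers:
            buckets[round(stated, 1)].append(correct)
        return {p: sum(c) / len(c) for p, c in sorted(buckets.items())}

    # e.g. {0.8: 0.58} would mean "80% sure" answers were right only 58% of the time.
    print(calibration_table([(0.8, True), (0.8, False), (0.8, True), (0.5, True), (0.5, False)]))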
------
syphilis2
Could this be simplified by asking students what they think their score on the
test will be and adjusting their final score based on how accurate that
estimate was?
Overall it seems like the biggest flaws in this system are that
1: Scores still get mapped to discrete letter grades.
2: A student's goal is not to get the highest score possible, rather it is to
ensure that he or she is most likely to get an "A".
For example:
Take a 10-question quiz, a student who knows (in truth) she is 80% likely to
answer each question correctly, and a passing threshold of a total score of
0.2. The student is led to believe that by accurately estimating her
confidence for each question at 80% she is giving herself the biggest
advantage. If she does this and she gets 4 of the questions wrong her final
score will be -1.219 and she will fail. However, if she had instead
underestimated her confidence and given each question a confidence of 0.6 her
final score would instead have been 0.290 and she would have passed. She could
of course go even further and determine the likelihood of her getting N
questions wrong and use that to determine the optimal confidence level to
select which will maximize her expected score while ensuring that she is most
likely to pass the class.
~~~
syphilis2
[http://pastebin.com/zahgmVnk](http://pastebin.com/zahgmVnk)
[http://octave-online.net/](http://octave-online.net/)
I put sample code in a pastebin and a link to octave-online if you want to
verify how I'm thinking about this.
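For anyone who doesn't want to open Octave, here is a minimal Python version of the same calculation, assuming a log2(2p)-points-per-question rule (p being the probability the student assigned to the answer that turned out to be correct); it reproduces the -1.219 and 0.290 above, and the function name is mine:

    from math import log2

    def quiz_score(confidence, n_right, n_wrong):
        # Total score when the same confidence is stated on every question.
        return n_right * log2(2 * confidence) + n_wrong * log2(2 * (1 - confidence))

    print(f"{quiz_score(0.8, 6, 4):.3f}")  # -1.219: the honest 80% student who misses 4 fails
    print(f"{quiz_score(0.6, 6, 4):.3f}")  #  0.290: sandbagging at 60% passes instead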
------
scraft
I suppose if you combine this approach with the theory that nothing is 100%
certain you end up with a test that is impossible to get 100% on (which I
guess makes perfect sense as nothing is 100% certain). But back to reality, if
you set students tests where a 100% confident answer being wrong instantly
meant they fail, it feels like you are no longer testing the student on the
subject matter in question and instead are testing their ability to play the
probability meta-test game.
The task itself of working out how 'confident' you feel about an answer also
feels like an impossible task (almost like a non-technical project manager
asking a developer how long a task will take). I guess it would be fun to see
the results of tests taken using this approach and then compare the scores
with the new system, the old system and ultimately what they get as their
final grade in the subject.
------
savanaly
Fantastic and frankly hilarious scheme he has developed here. My favorite part
is that if you state your absolute certainty in true or false and turn out to
be wrong, your score is negative infinity. Only in a math class would I expect
a score of negative infinity I suppose.
~~~
repsilat
That weirdness is kinda just a consequence of wanting the scores for different
questions to be added together. A simpler (to me) but equivalent way to frame
it is to use scores that you multiply together. These scores are "the
probability of the true answer occurring, assuming the student's odds."
That is, if the student gave a probability of 1 and were correct, their score
for that question is 1. If they were wrong, they get a score of zero.
Their final score is interpreted similarly, as a joint probability, assuming
independent trials.
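A tiny illustration of that equivalence (the numbers are made up): multiplying the per-question scores and adding the per-question log scores carry the same information, since the log of a product is the sum of the logs.

    import math

    per_question = [0.9, 0.7, 0.6]   # probability assigned to the realized answer on each question
    joint = math.prod(per_question)                      # multiplicative framing
    additive = sum(math.log(p) for p in per_question)    # additive (log-score) framing
    print(joint, math.exp(additive))                     # both are ~0.378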
------
SloopJon
The SAT used to have a wrong answer penalty to discourage random guessing.
This seems to me a simple, effective way to reflect a student's certainty in
an answer.
~~~
daxfohl
Funny thing is the weights they assigned didn't discourage random guessing at
all; they merely made your expected gain from guessing equal with the expected
gain from not answering (i.e. zero).
So even if you had the very slightest idea, you should absolutely guess. If
you took the test multiple times, then probabilistically you'd be _better_ off
if you always guessed even when you had no clue: some of your scores would be
higher and some would be lower than if you hadn't guessed, and usually only
your highest score is considered. Most guides overemphasized the penalty.
That said, it can screw you of course too if you happen to be unlucky in your
guessing. This is what Terry's scheme overcomes.
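The arithmetic, assuming the old five-choice format with a quarter-point penalty per wrong answer (my recollection of the rule, stated here only for illustration):

    # Old SAT rule: +1 for a correct answer, -1/4 for a wrong one, 0 for a blank.
    p = 1 / 5                                   # pure blind guess among 5 choices
    print(p * 1 + (1 - p) * (-1 / 4))           # 0.0: guessing blindly breaks even on average

    p = 1 / 4                                   # eliminate just one obviously wrong choice
    print(p * 1 + (1 - p) * (-1 / 4))           # 0.0625: now guessing has positive expected value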
~~~
tamana
Terry's scheme punishes well-educated people who are conservative in their
claims of confidence.
~~~
Jabbles
Why should being conservative in your confidence be rewarded above having an
accurate assessment of your confidence?
~~~
Jtsummers
In the case of an exam, students are under stress and pressure that will often
lead them to underestimate themselves, especially knowing that if they're
wrong that 100% probability will net them a -infinity score for the exam, even
if they get everything else right.
This also means questions have to be _very_ carefully worded. I recall a
number of exam questions (over many years of school, not one class in
particular) where the "wrong" answers were arguably right depending on how the
sentence was parsed, simply because of poor punctuation or other ambiguous
wording.
------
tikhonj
Here's a totally different idea: let's assume that the true-false questions are
on a related topic. We can give partial credit if wrong answers are
_consistent_ with each other: that is, if there is some (simple) model of the
topic we're asking about that is close to the correct model and produces those
particular wrong results.
Intuitively, if there are a bunch of questions about the same "feature" and
you get them _all_ wrong, all those mistakes stem from the same
misunderstanding. I guess in a well-written test where questions are
conceptually spaced out this is not as much of an issue...
So to give partial credit, we try to find a simple model consistent with the
T/F answers and award credit based on how wrong the model is.
In particular, this would catch the problem where you have a single
misunderstanding that cascades into a whole bunch of wrong answers even if you
actually understood the rest of the system fine. That's one of the main uses
for partial credit in longhand answers, isn't it?
How would we do this systematically? Well, we can't, really. But there are
places where we could. I worked with a professor who did research in program
synthesis, and I remember he had an interesting idea for education: when a
student submits an incorrect program in, say, Scheme, we could try to
synthesize an interpreter _that would make the program correct_ and then
extract what the student's misunderstanding was. (For example: they used
dynamic scoping instead of static scoping.) If you seeded your synthesis
system with the various kinds of things students actually get wrong, this
could be both useful and practical.
You can apply the same idea to grading a test about Scheme. Award partial
credit if somebody has a consistent mental model that just happens to be
wrong. If they got a whole bunch of questions wrong just because they mixed up
scoping rules—but understood everything else—partial credit seems fair.
I guess this is pretty explicitly rewarding _consistency_ with partial credit,
but that also seems fair in a lot of classes like CS.
To be clear, I don't actually think this approach is really practical: it's
more of a thought experiment on what could be _interesting_. Even if it was
possible, doing it on T/F questions would likely require _a lot_ of questions
since each one only provides one bit of input. If you had questions along the
lines of "what does this Scheme program produce", you could get away with
significantly fewer if you were clever about choosing them—but you'd still
want some redundancy to be at least a bit robust to the student making typos
_as well_ as conceptual mistakes.
------
SubiculumCode
Interesting derivation, but why not Receiver Operating Characteristics (ROC)? In
recognition memory research, for example, we often collect confidence ratings on
a 6-point scale for yes/no decisions, plot ROC curves, and calculate d'
discrimination scores.
~~~
SubiculumCode
[https://en.m.wikipedia.org/wiki/Receiver_operating_character...](https://en.m.wikipedia.org/wiki/Receiver_operating_characteristic)
Or am I missing something?
------
primodemus
Nitpick: His name is Terence not Terrance.
~~~
marxidad
Both spellings are equivalent modulo permutation.
~~~
nightcracker
I understand the attempt at humor but cannot help pointing out that the two
names are not permutations of each other.
------
cvick
There is nothing in that method that accounts for the possibility that the
test may be flawed in some way. I suppose that the assumption here is that the
test is 'perfect' in that each question is worded in such a way that there is
one and only one 'right' answer. But, as a taker of this kind of true/false
test, if I am only allowed to provide my assessment of my own confidence in my
answer without an explanation of "why" I assigned that value, then there's no
way to answer in a way that I will not be penalized for a question that I
don't feel that I can answer as asked.
Consider initially assigning an additional value of '1' to each question as a
representation of the author's confidence that the question is not flawed in
any way. Then, if a significant segment of the test-taking population indicates
that they believe the question is flawed, the "author's confidence" for that
question would be reduced by an amount that wouldn't penalize me for
identifying the flaw, while still giving less 'partial credit' to those who
answer incorrectly without being aware of the flaw - or, more specifically, not
giving as much credit to those who answered 'correctly' in spite of the flaw.
It's also possible that the question I think is a 'flaw' is really a 'trick'
question and that in order to answer 'correctly' you must discover the trick
-- this would still allow for a better result in that it more accurately
assesses whether I really understand the question or not.
It also occurs to me that if the test administrator wants to give a test where
they can assign some manner of partial credit to an answer, then they
shouldn't give a true/false test in the first place.
------
pierrebai
I don't think his scheme buys anything. Any scoring system in which a wrong
answer has more weight than a right answer will simply incentivize students to
aim for the safe confidence level, the one that gives -1 for a bad answer. It
takes so few 'bad confidence' results to completely wreck your overall score
that it is not worth it.
The main flaw of the scheme is that it is a purely mathematical analysis.
Answers are not based solely on confidence but often rely on reading
comprehension skills. (Even at the purely mathematical level - for example,
missing a square factor or a minus sign.) So you can give a 100% confident
incorrect answer just because you mis-read or mis-interpreted the question.
Then you get punished hard. Given one's incapacity for introspection and
detection of such mis-readings, and the harsh punishment for such undetectable
failures on the part of the student, his scheme is wrong-headed.
~~~
mabbo
I imagine the first test or two would be a period of the students learning to
understand the system, but after that it's perfectly fair. It's a good lesson
in "You aren't 100% sure of anything, so don't claim you are".
~~~
tamana
Why fail a student for making one specific mistake once?
~~~
kwikiel
Because that will make him more focused on correctly approaching problems
instead of rushing to a solution without checking intermediate work.
------
soreal
The author calls out: [Important note: here we are not using the term
“confidence” in the technical sense used in statistics, but rather as an
informal term for “subjective probability”.]
But then goes on to use a very technical, statistical version of confidence
where 0% confidence is somehow equal to 100% confidence you picked the wrong
answer.
All of this leads me to assert that despite the math being internally
consistent, it does not apply well to the situation. A rational outside
observer without reading such an article would assume that 0% confidence in
your answer is equal to a 50% chance of being right.
The phrase "confidence that the answer is 'true'" can easily be interpreted as
"confidence that the answer I marked is correct".
~~~
hfanson1
It is confidence that the answer is true: 100% means being fully sure the
answer is true, 0% means being fully sure the answer is false, and 50% should
be used when true and false seem equally likely.
------
tamana
This moves the problem to a quiz of introspection, not a quiz of the material
~~~
sophacles
There's a pretty solid case to be made that introspection is missing from a
lot of pedagogy anyway, so this isn't necessarily bad. I can see this being
useful in many places - when testing on bits of interrelated knowledge, having
a confidence score on the test may help people learn to puzzle out information
and synthesize conclusions. Both of those skills are more valuable to the
student than the ability to dump random facts - we have google for that these
days.
------
lebca
That was beautiful. It's been a long time since I've read anything derived as
elegantly and clearly described as that. Got any more?
------
daxfohl
But a possibility of negative infinity begs the question: how certain is the
_professor_ of the answer?
------
tzs
A comment on the reddit discussion of this said that a similar scheme was used
at CMU for midterm and final exams in the "Decision Systems and Decision
Support Analysis" class offered by the decision theory department. The exams
were multiple choice with four choices per question. The commentator linked to
this handout describing the grading for the midterm:
[http://www.contrib.andrew.cmu.edu/%7Esbaugh/midterm_grading_...](http://www.contrib.andrew.cmu.edu/%7Esbaugh/midterm_grading_function.pdf)
If you assigned probability p to the correct choice, your score for that
question was 1 + ln(p)/ln(4). You were not allowed to assign a probability of
0 or 1 to a choice.
The handout points out the importance of thinking about how to approach these
tests ahead of time, and also explains the benefits of using this scoring
system:
---- begin quote ----
I cannot stress strongly enough the need for each of you to sit down and think
about different strategies for answering the questions. This grading technique
completely removes any benefit of random guessing. Such a guess could be
disastrous. You're much better off admitting that you don't know the answer to
a question. (Placing a 0.25 probability by every option indicates that you
have no idea which answer is correct, and your score will be 0 for that
question). Assessments of probability 0 (0%) or 1 (100%) are not allowed.
These answers will be interpreted as probability 0.001 (0.1%) and 0.997
(99.7%) respectively. Your probability assessments must sum to 1 (100%). A
probability of 0.001 by the correct answer will result in a score of -4. In
contrast, a probability of 0.997 on the correct answer only earns a score of
1. Think about the implications of this before the day of the test.
I strongly recommend that you analyze the grading problem from a decision
analysis perspective. Calculate expected values (or expected utilities) for
various levels of personal uncertainty. Notice what happens if you are
overconfident or underconfident.
This grading scheme makes the midterm harder than a standard multiple choice
test, but this is the point. It has many benefits from a teaching/learning
perspective.
1) It teaches you to apply the techniques that have been discussed in class.
You have to assess your own personal probabilities and apply them to problems
that have very real (and potentially) important payoffs. It is impossible to
get these points across with the few simple lotteries demonstrated during
class.
2) It helps to remove the element of chance from the test. Because of the
severe penalty for guessing, the test will more accurately measure your
knowledge.
3) The test will also measure what you know about your knowledge.
4) By analyzing how you answer the questions, I will be able to determine
which questions are hard and which questions you believe are hard. (A
"flatter" distribution for a question indicates a lower confidence in the
question, and therefore, a belief that it is hard.)
5) It will allow you to appreciate how hard it is to assess probabilities.
---- end quote ----
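A minimal sketch of the per-question rule described in the handout (the function name is mine):

    from math import log

    def cmu_score(p_on_correct):
        # 1 + ln(p)/ln(4), where p is the probability placed on the correct choice;
        # stated probabilities of 0 or 1 are interpreted as 0.001 and 0.997, as above.
        p = min(max(p_on_correct, 0.001), 0.997)
        return 1 + log(p) / log(4)

    print(cmu_score(0.25))    #  0.0   a flat 0.25 on every option: "no idea"
    print(cmu_score(0.997))   # ~0.998 maximum allowed confidence, correct
    print(cmu_score(0.001))   # ~-3.98 maximum allowed confidence, wrong (the handout's -4)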
------
caf
Interestingly, another way of looking at this result is that the score is a
measure of the total information content of the respondent's answers.
------
amelius
Sorry to be negative, but this sounds like a solution to make something which
is horribly broken a teeny bit less horribly broken.
~~~
frenchy
I think it's a little extreme to say that the current popular way of quickly
assessing large numbers of students is "horribly broken", but I think the most
interesting part of this has nothing to do with student assessment and more to
do with forcing the student to think about how well they know things.
A quiz can be a teaching tool as well as an assessment tool. It's often poorly
designed for the first function, but that doesn't mean it has to be.
Ask HN: Tips for a newbie freelancer? - alexgaribay
What are some tips that you have for someone who is about to enter the freelancing scene?
What kind of tools do you use to keep track of hours/tasks/accounting?
What are some things you wish you knew when you started freelancing?
I'm about to take on my first gig and would appreciate any advice from people that have experience in this realm.
======
markdownmail
If you go through a consultant, make sure the rate they quote is the rate you
get. Sometimes they'll quote a rate and then take off their cut, normally
around 15-20%.
------
Jeremy1026
I use Billings Pro on Mac; it's $5/mo for the lowest tier, but the time
tracking/invoicing/estimates/etc. that it offers is well worth the price.
Wire App is exploiting the open source community (Cross-Origin restrictions) - hannaysteve
https://github.com/caura/wire/issues/7
======
cocktailpeanuts
Sounds shady, but I think the real problem isn't that Wire is exploiting the
open source community, but that OP is somehow still willing to work on a
project that is clearly closed-source.
If their server is closed-source as OP stated, then why is he building on
their platform to begin with? There are plenty of other open source chat
protocols much more dedicated to being open.
My point is, it sounds like the solution to this problem isn't to get Wire to
open up CORS but to move on to another truly open project.
Web tool 'as important as Google' - habs
http://news.bbc.co.uk/1/hi/technology/8026331.stm
======
tokenadult
The proof of this pudding is in the eating. We'll all be testing it out with
questions and seeing if we are satisfied with the results.
------
verdant
It's kind of disappointing that they aren't trying to parse natural language. I
know that would make it a ton more complex (and possibly beyond our capacity
to do), but to me that would truly be the next generation beyond our current
flavor of search engines: being able to ask a question and have the computer
understand what I'm talking about and return an answer, rather than just
trying to match keywords. That's Wolfram's goal, I think, but I don't know if
they can really get there without parsing human language.
~~~
noaharc
They do parse natural language. In fact, that was one of the biggest problems
they had to overcome, and is the main reason they currently have no plans for
creating different versions for other languages.
[http://www.readwriteweb.com/archives/wolframalpha_our_first_...](http://www.readwriteweb.com/archives/wolframalpha_our_first_impressions.php)
------
edw519
"The "computational knowledge engine", as the technology is known, will be
available to the public from the middle of May this year."
Is "the middle of May" the result of a computation? If so, then a more precise
model may take until "the middle of June".
------
marcusbooster
Wolfram is a salesman first and foremost. I'll believe it when I see it.
------
gamache
Stephen Wolfram 'as self-important as all get-out'. Wolf has been cried; _A
New Kind of Science_ left little doubt that "expert opinions" of Wolfram's
work are worth a piss-hole in the snow.
Neal Stephenson Hieroglyph project launches - bajames
http://hieroglyph.asu.edu/
======
purplelobster
I don't think progress towards big goals has stopped completely, but a lot of
work is being done in areas that will indirectly help accomplish the big
goals. Stuff like tiny computers, 3D printing, battery technology etc. Those
technologies can make bigger goals possible and more easily obtainable without
spending billions of dollars.
A webcam face filter on Matrix rain code theme. Works in browser, open source - xavierwebgl
https://jeeliz.com/demos/faceFilter/demos/threejs/matrix/
======
bradknowles
Hmm. Doesn’t seem to work on iOS.
~~~
bradknowles
Oh. It doesn’t work in a webview.
But it does work in Safari.
But I don’t see anything remotely resembling my face.
~~~
xavierwebgl
Hi. The webview does not have WebRTC yet. I will test on iPad. I tested with
Android and Chrome/Linux. You should get this:
[https://www.youtube.com/watch?v=3hnls2s9KHI](https://www.youtube.com/watch?v=3hnls2s9KHI)
The Delinquent Borrowers Leading a Student Loans Revolt - petethomas
http://www.bloomberg.com/news/articles/2015-10-01/the-delinquent-borrowers-leading-a-student-loans-revolt
======
karissa
Hey! Glad to see this on HN. I'm a volunteer developer for this group, and we
are looking for more hands. If you're interested in helping us out, feel free
to reach out to me on GitHub (karissa) or Twitter (captainkmac).
Here's the list of issues (signup/login related) that I'm working on now:
[https://github.com/debtcollective/debtcollective-
web/issues/...](https://github.com/debtcollective/debtcollective-
web/issues/56)
Here's a taste of the future: "Harnessing user data (and combining it with
data gathered automatically from creditor websites and public data sets)" \-
[https://www.newschallenge.org/challenge/data/entries/debt-
co...](https://www.newschallenge.org/challenge/data/entries/debt-collective)
------
tomohawk
Borrowing money is risky, and includes risks beyond a straight-up cost/benefit
analysis. It also makes the borrower more vulnerable (not just to the lender,
but to others as well). It should never be entered into lightly.
I sympathize with these folks who have gotten themselves into this situation,
but they did make the choice of their own free will. If they didn't do a
proper analysis ahead of time, it doesn't seem fair to shift the cost of their
decision onto others who did, or who are responsibly shouldering the burden
that they took on.
There is a definite moral hazard in the current system (it encourages
indebtedness and inflates the cost of education), but that doesn't make it
right to create another moral hazard in trying to correct it.
~~~
m_fayer
For disadvantaged teenagers trying to escape poverty, the choice is between
debt and the permanent surrender of their hopes and dreams. That is hardly a
situation where the borrower can be expected to make a dispassionate
calculation, nor is it one where there's a clearly correct course to be found.
~~~
Rainymood
Education used to be 'free' here in The Netherlands (students received a
certain amount of money each month). They recently turned this into a
full-fledged loan. Although the interest is very low, it dramatically changes
how students approach their studies: they immediately start to think in terms
of ROI ... I fear for the future of Dutch education. I'm glad I just wrapped up
my undergrad; I am currently borrowing as well for my Master's, but thankfully
I'm one of the few who have 'great' career prospects.
The American system is really really screwed up if you are trying to escape
poverty.
~~~
seibelj
I think in the USA college is too expensive, but I'm not convinced it should
be free. I like the idea of students actually having some skin in the game.
Also I don't want tax dollars to pay for someone to study poetry for 4 years
for free. You'll get into a situation like Norway, where it takes a master's
to get a foot in the door, and no one gets a job until they are 30.
~~~
pvaldes
Keep in mind that the music industry could not exist without poetry. A
successful song with fine lyrics can have a huge impact on the economy.
~~~
overdrivetg
Imma let you finish, but [http://seatsmart.com/blog/lyric-
intelligence/](http://seatsmart.com/blog/lyric-intelligence/)
------
douche
I have to be honest, if some kind of student-loan amnesty ever goes into
effect, I'll be a little pissed that I prioritized paying mine off with my
earnings for the first half-dozen years out of school. Just imagine what I
could have done with those tens of thousands of dollars instead...
------
WBrentWilliams
I like the way that these articles and the comments ignore the elephant in the
room: The cost of living while attending college.
Here is a data point for my local State University. I happen to work there, so
I can verify that these numbers are pretty much correct. You can pay cheaper
rent, but you'll pay back the difference in car mileage and/or roommate drama.
Tuition is (roughly) 40 percent of the cost. What really drives up the student
debt is the cost of living. If we really wanted to do something about student
loan debt, I think we'd do better to apply the same political pressure to the
cost of living as we do to the rate of tuition.
Indiana University [1]
Tuition: $10,388
Room and board: $9,794
Books and supplies: $1,230
Transportation: $1,030
Personal expenses: $2,096
[1] [http://admissions.indiana.edu/cost-financial-aid/tuition-
fee...](http://admissions.indiana.edu/cost-financial-aid/tuition-fees.html)
~~~
saturdaysaint
The "books and supplies" section would be interesting to break down. How much
of it is due to needlessly churning out new editions to cut off the used book
market? In the age of the iPad, shouldn't someone be producing first rate,
open source textbooks in core subjects?
It's a pittance of the total, but that $1,200 (x4) could still add up to a lot
with interest.
~~~
WBrentWilliams
Let me answer that with a statement and an anecdote.
The statement: The proportion of costs I outlined, from my point of view, have
always been true. I finished my first degree in 1997.
The anecdote: All the professors in my computer science program (I'm not going
to name-drop, but I had some great teachers that insisted that I learn scheme,
then Java, then C, then assembly) either had no required text or gave syllabi
that referenced several versions of their listed texts. The context was that
we were free to get the best deal possible on the texts.
------
cheriot
There are two problems when students choose an institution:
1. Poor, and too often manipulated, information about a school's job
placements.
2. Students are short-sighted about debt, and schools don't care because the
money is government-guaranteed.
Solvable problems! Have an independent party collect and report on job
placement. Then tie a school's loan eligibility more closely to the ability of
recent graduates to pay.
~~~
evanpw
It seems like it would be even better to tie eligibility to the borrower's
major. Return on education expenses is more closely related to what you study
than to where you study it:
[http://www.economist.com/blogs/graphicdetail/2015/03/daily-c...](http://www.economist.com/blogs/graphicdetail/2015/03/daily-
chart-2)
~~~
cheriot
If we do both then both the school and student are incentivized! (I was
thinking of the for profit schools owned by hedge funds)
------
SixSigma
Those numbers are horrific. My entire UK degree debt will be about $60k at 3%
and I won't have to make any repayments until I earn about $30k pa.
For my degree it should be a net profit for me. For less well paid occupations
one is insulated against low wages. I am happy that I contribute to the
advancements in non vocational learning.
My fellow students and I are said to contribute $40m pa to the local economy
which is much needed in this deindustrialised part of the country - all
accommodation is provided by third parties.
But $45k per year is an outrage. It surely can't be based upon the cost of
provision.
~~~
karissa
You're right, it isn't. It's largely a for-profit industry, even the 'non
profit' colleges. Most teachers, especially adjuncts, make very little in
comparison to what the students are paying -- look at NYU, 50k/year per
student in a class of 20 where the teacher is paid 5k/class. The math doesn't
add up.
~~~
keithpeter
Would a self organised University be possible in the US system?
A group of teachers using cheap space and online communications to provide
lectures/support/assessment with students then taking exams accredited by
another institution? Possibly with sponsorship/organisational support from a
voluntary sector organisation?
That is basically how most Universities outside
Oxford/Cambridge/Edinburgh/Aberdeen started in UK in the Victorian period. The
University of London was set up to validate degrees provided by various
constituent colleges in the regions. There was a strong non-conformist
(Baptist/Methodist) input on the funding and organisational side.
~~~
derpud
Faculty would love to disintermediate the ever growing administration.
Accreditation is an issue, and overcoming the importance of brand. Competing
with the for profit niche might work though.
------
Camillo
The absurd student loan system is what fueled tuition inflation in America. It
needs to be abolished.
~~~
cheriot
Do poor students not get to go to college anymore or do you have another
system in mind?
~~~
nemo44x
Work and save money before you enter a University. If you work and save for 2
years after high school you gain many advantages:
1) You will save some money to pay for school.
2) Because you are a bit older, a new world of grants is available to you
since you are returning to education.
3) Because your income will probably be fairly low, you will qualify for more
grants for that reason as well.
4) It gives you time to figure out what you want to go to school for, and
possibly makes you take it more seriously when you start.
Another option is to join the military and qualify for a lot of additional
grants that way. 4 years of service opens a lot of doors. There are many non-
combat roles available.
There are many ways for the poor to advance themselves and not cripple
themselves financially for the next 40 years.
~~~
cheriot
I like your reasoning, but I worry that the math of minimum wage jobs can't
put people through school any more.
------
littletimmy
One way to deal with the student loan crisis is a nationwide student loan
bailout. You don't have to erase the loans, just refinance them at a much
lower interest rate. Currently the rate is 7%, which is just insane. For a
virtually no-risk loan, which cannot even be discharged in default, that is a
terribly high rate.
Refinancing will be a transfer of wealth from the rich to the poor. Good for
the economy, shores up consumption, and all that. Has the added side-benefit
of freeing indentured slaves.
~~~
evanpw
If the current interest rate is high compared to default risk, why hasn't some
private company come in to undercut the government and make a killing? I don't
think it's because the banks aren't greedy enough. My guess would be that it's
because of the extremely high default rate on student loans, and the extremely
favorable repayment terms that government loans offer: payments capped at 10%
of discretionary income, and forgiveness after 20 years (10 if working in non-
profit or government)[1].
[1] [https://studentaid.ed.gov/sa/repay-
loans/understand/plans/in...](https://studentaid.ed.gov/sa/repay-
loans/understand/plans/income-driven)
~~~
littletimmy
If that were true, why would the private student loan interest rate also be 7%?
It has no such favorable repayment terms.
~~~
evanpw
It's my impression that there isn't much private student-loan industry left,
having been driven out by federal loans. What's left probably targets a
different demographic from the usual one, like borrowers who have already
maxed out their federal loans, which might be a riskier group.
------
icewater0
I think part of "the system" involves learning and lending institutions
informally forecasting what the lifetime return on a student's education will
be, and then attempting to maximize the return on the student loan(s) as an
investment on those future earnings.
The learning and lending institutions clearly fucked up big-time in their
forecasting, along with everyone else who was irrationally exuberant in their
optimism about the future economy.
Now they are stuck between students on one hand (who have been sold a bill of
goods and are seriously pissed about the pittance left over after their loan
payments, if they're able to pay back at all), and the investors on the other
hand (which includes not only evil bankers' money but also people who have
been sacrificing in good faith for decades to contribute to their retirement
accounts).
So the stage is set for a nice messy society-destroying generational war that
should distract everyone from the forecasting mistakes.
------
UserRights
Free Education is the primary attribute of a free and cultivated society. If
you are living in a primitive country with no free (+higher) education, all
available resources should be focused on reaching this goal as the most
important evolutionary step.
Many underdeveloped societies and failed states can be identified by looking
at the education situation. In the worst cases education is held hostage by a
leading group of brutal oligarchs that are raping education into a "business
model" - these societies will need a lot of help from other nations to
overcome the dark Neanderthal age they are stuck in, and it is important to
teach them about the basic abilities of Homo sapiens sapiens - solidarity,
compassion and love.
Good Luck!
~~~
evanpw
Education is never free; it always consumes some resources. The question is
who should pay the cost: the student or everyone else?
In cases where basically everyone needs the education to succeed in society
and the positive externalities are large (e.g., teaching young children to
read), I think that the case for "everyone else" paying is pretty strong. In
cases where the education is mostly consumption (i.e., the benefits all go to
the student), the case for subsidizing is weak (e.g., a trust-fund kid getting
multiple PhDs in impractical fields to have something to do).
So the main issue becomes: where does a Bachelor's degree lie on this
spectrum?
1. Fields of study are very heterogeneous. Some majors have a much larger
return on investment than others, so subsidizing all "higher education"
equally probably doesn't make sense.
2. Even in Western Europe, college attendance rates are only around 1/3. Is
it right to have that 1/3 subsidized by the 2/3 who don't go to college, and
who on average have lower income?
3. (More speculative) To some extent, the payoff to a college education is
not because of what you learn, but in signaling that you're a smart,
conscientious, perseverant person. In that case, a degree becomes less
valuable the more people that have them, and the case for subsidizing what has
become an arms race becomes a lot weaker.
4. If we spend a lot of money and effort propping up the current, very
expensive, forms of higher education, we might discourage better ways of
educating from emerging. One example is something like Coursera, where the
marginal cost of adding one more student is basically zero, and everyone can
learn from the very best educators in the field. Another would be separate
institutions for education and for testing / accreditation. This would help
with grade inflation, bring back the focus in the educational institutions to
teaching rather than credentialing, and provide more accountability (% of
graduates passing a common exam is a lot easier to compare than future income
of graduates, and is less confounded by things like socio-economic status of
students).
------
WeAllDieByBan
Don't pay it and default.
You'll only have to pay the income tax on the defaulted loan.
It's not like you need credit anyways since almost no one in this generation
is going to be able to afford a house down payment.
~~~
evanpw
If it's a federal loan, the government will garnish your wages [1] and
eventually your social security payments [2], and will withhold any tax
refunds. And since student loans usually can't be discharged in bankruptcy,
you'll have bad credit for literally the rest of your life.
Also, you pay income tax on forgiven loans, not defaulted ones.
[1] [https://studentaid.ed.gov/sa/repay-
loans/default#consequence...](https://studentaid.ed.gov/sa/repay-
loans/default#consequences)
[2] [http://money.cnn.com/2014/08/24/news/economy/social-
security...](http://money.cnn.com/2014/08/24/news/economy/social-security-
student-debt/)
Private equity and hedge funds backing patent trolls - rms
http://www.forbes.com/free_forbes/2007/0507/044.html?partner=yahoomag
======
rms
Seems like an easy bet for a hedge fund to make... 10 million for a lawsuit
and an expected value of a lot more than 10 million. I don't blame the hedge
funds though, I blame the insane patent laws.
------
nickb
Parasites joining other parasites... oh well.
Holding College Chiefs to Their Words - tokenadult
http://online.wsj.com/article/SB124155688466088871.html
======
callahad
The sidebar has links to the complete essays. What a treat to get to read
something personally penned by my alma mater's president!
(I also cannot help but read the essay in said president's voice and cadence,
which was rather... unique)
Time for Europe to stop being complicit in NSA's crimes - jdp23
http://www.neurope.eu/blog/time-europe-stop-being-complicit-nsas-crimes
======
junto
The problem is that most people don't care. The vast majority of the public
would prefer to give up their privacy in return for free services.
The people will have their freedom slowly eroded, but at a rate that they
cannot perceive. By the time they realize it, it will be too late.
- Edward Snowden realised this.
- Bradley Manning realised this.
- Julian Assange realised this.
As technologists we are complicit in this current erosion of (our own) rights.
It is up to us to make the required changes.
The public deserve the truth, even if they cannot stomach it.
Remote Desktop Services Remote Code Execution Vulnerability - palebluedot
https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2019-0708
======
vb6lives
The exploit is so severe they are releasing patches for XP and Server 2003.
Science writer Simon Singh wins libel appeal - prakash
http://news.bbc.co.uk/2/hi/uk_news/8598472.stm
======
grellas
_Dr Singh described the ruling as "brilliant", but added that the action had
cost £200,000 "just to define the meaning of a few words"_
"These sequestered nooks are the public offices of the legal profession, where
writs are issued, judgments signed, declarations filed, and numerous other
ingenious machines put in motion for the torture and torment of His Majesty's
liege subjects, and the comfort and emolument of the practitioners of the
law." (Charles Dickens, _The Pickwick Papers_ )
------
juhgfcgvhnjm
The problem with the UK's libel laws is that you have to prove what you say.
This makes sense if I write an article calling you a child murderer - you sue
me and I have to prove it's true (i.e. you murdered a child); you don't have to
prove you didn't.
But this logic backfires when I accuse you of lying: I have to prove you are
wrong (that your treatment doesn't work) - you don't have to prove it does.
The UK's libel laws are also notorious for siding with the person being
libeled. That's why Hollywood celebs always sue in London, and why we can't
see the South Park Scientology episode.
~~~
handelaar
I've seen the South Park scientology episode a half-dozen times on Comedy
Central UK. On all other points though - yes.
------
shrikant
He didn't actually win the case - he merely won the 'right to rely on the
defence of fair comment in a libel action'.
It is a sad comment on our society that even a minor victory for common sense
(such as this) makes me bubble over in happiness.
~~~
andrewcooke
right, but this makes his case winnable. before, he was doomed...
------
tewks
Simon spoke at the Royal College of Science's Science Challenge dinner last
week. I really enjoyed his speech. It's great that he's decided to take up
this charge instead of standing down, as the vast majority of people in his
position would have no choice. However, in his words, this verdict is simply
the court's interpretation of the meaning of his article. The legal
implication of the article, now that the meaning is decided, is to be decided
later.
5 Best Books about Isaac Newton - bookofjoe
https://fivebooks.com/best-books/isaac-newton-william-newman/
======
masonic
All book links are shrouded affiliate links (tag=fivebooks001-20).
Star Wars toy photography - BobPalmer
http://geektyrant.com/news/2011/10/3/star-wars-toy-photography-is-truly-amazing.html
======
Zimahl
Great pictures. I'm going to guess that the photographer used flour as the
snow? Maybe not, it seems almost too white for flour.
Most appear to be Star Wars Lego but the third- and second-to-last images
almost look like cosplay (almost).
Does Startup Life Have To Be A 24/7 Grind? - alexmturnbull
http://blog.groovehq.com/post/47537406719/does-startup-life-have-to-be-a-24-7-grind
======
outside1234
The reason this meme exists is that there is a huge economic benefit to (some,
not all) founders and VCs to perpetuating it.
It attracts only young impressionable shutins to the startup and they feel
like if they aren't working 18 hours a day they are slacking. Plus there is
the "Stanford/Google effect" in play where everyone claims to be
studying/working all the time but aren't if you factor out the foam dart / BBQ
/ bro time.
~~~
w1ntermute
> foam dart / BBQ / bro time
Oh god, all my life I've seen people waste _so much_ time and then claim that
they're _so busy_. It has developed into a huge pet peeve of mine, especially
when they get pissed when I arrive to work at 8 AM and get up to leave at 5
PM. And I've done more in those 9 hours than they've done in their last two
16-hour days.
~~~
Swizec
Coincidentally, you'll find that it is impossible to spend more than ~6 hours
of focused time on a single type of task. If you have different types you
might be able to stretch your total focus to about 8 or maybe 9 hours.
So if your main job is coding, you can probably spend about 6 real work hours
on that. This leaves you with another 3 hours to spend on stuff like email, or
writing or whatever. Certain types of hobbies do count into this as well.
~~~
willurd
"Coincidentally, you'll find that it is impossible to spend more than ~6 hours
of focused time on a single type of task"
I'm not sure who did this research, but I can tell you from personal
experience that this is simply not true. Maybe I'm a freak of nature but when
I was first learning how to code I would hold one man hackathons in the summer
from the moment I got up until deep into the night. Similarly, when I was
younger and played a lot of RPGs, I would play quite literally all weekend,
pausing only for sustenance and a little bit of sleep, and usually
multitasking on the food part, far surpassing 6 hours of focus time.
~~~
inDigiNeous
When I was younger I could do it easily too, hack through the night with the
power of some caffeinated cola drinks, easily doing 12h hackathons. When I was
16 to about 20 years old, that is.
Nowadays it's a bit different. My body just can't take the sitting in front of
the computer for so long. Maybe it's a good thing though, but it really makes
me want to think about how I spend my time on the computer and seek those
creative hours when I am able to get into the zone and just code.
Usually I find myself doing anywhere from 2 to 4 hours of concentrated
creative work a day, and that's it.
~~~
willurd
After speaking with a couple of people that have ADHD (I've never been
diagnosed, though I probably would have been with my behavior and
inattentiveness during my school years), I've worked up a pet theory that so-
called "ADHD" or "ADD" is what allows people to hyper focus like this.
Granted, of the people I've talked to, including myself, that hyper focus
really only extends to things they are truly interested in, but it's real
nonetheless. I found this out for myself when I got into programming (as
stated above).
As for me, I've noticed the amount of time I can focus on something go down a
little over the years as well, though I still occasionally get inspired and
hyper focus like I used to.
~~~
HeyLaughingBoy
That's not just your pet theory, it's commonly accepted by psychiatry.
One of the defining characteristics of ADD/ADHD is the ability to focus for
extended periods. The main reason ADD gets a bad rap is that people (often
children) are either focusing on something that adults don't want them to
focus on, or there is nothing interesting that holds their attention, so they
become easily distracted.
~~~
willurd
300 years in the future kids will be in school, uploading into their brains
the story of how in the early 21st century leading psychiatrists and willurd
independently discovered the fact that ADHD is actually a super power. Or so I
imagine...
------
andrewljohnson
My start-up is also not a grind. My wife and I are the founders, and we put in
our hours early on, but now we work a very regular 9-5 schedule. At least one
of us has to leave around 5:15 to pick up our son.
Our employees are also not asked to work crazy hours - we don't rush to get
releases out, we're profitable and growing, and we're not concerned about
flipping the company tomorrow. Our employees are also now in their late 20s,
and we expect them to have children and normal lives too, and we want them to
stick with us through that.
~~~
stephenhuey
Our founders have been true to their promise of work/life balance, and I
believe this contributes greatly to our team's success. We're quickly
expanding around the country without any VC breathing down our necks and
growing at a quicker pace than competitors who try to emphasize the usual work
hard play hard mantras. Some of the other comments on here ring true when they
suggest that certain kinds of investors benefit most from employees burning
themselves out. I think our well rested team with plenty of time for friends
and family tackles problems with healthier vigor than our tired competitors!
------
steven2012
I work at a YC startup, and we don't work a 24/7 grind. The VP of Engineering
believes that it's not sustainable to work like that, and you can't build a
successful company by burning out your engineers. Sure, when there are site
issues, it's all-hands on deck, and if you need to get something in for the
next push, then definitely you need to work your butt off, but that's just
being professional. For the most part, the pace is really great, and we are
all very productive.
~~~
jes5199
You know, there exists a technique that makes "get something in for the next
release" obsolete. Basically: have the marketing and sales people sell and
market the thing your engineers wrote last week, not the thing you hope they
will write next week.
~~~
matwood
That works once you have discovered your customers and market fit. Until then,
you need to sell first, then build.
An acquaintance of mine built her company by selling her idea all over town.
She got a few local businesses to sign on, and told them it would be a month
of setup time. Over the course of a month she got the business set up, hired
employees, bought computers, etc... 5 years of hard work later she sold the
business and is currently vacationing at some random place in the world.
------
swampthing
I'm all for not burning out, but I feel like there's something missing from
all these articles admonishing people to work less.
It's great that people can work 5-6 hours a day, no weekends, and still build
a great business. But if that's truly more efficient, should we expect to see
some massive billion dollar companies run that way? How much did Bill Gates,
Larry & Sergey, Jeff Bezos, etc. work when they were getting their companies
off the ground? When they look back, do they feel the time was wasted, or do
they feel it was necessary?
Or to the Metalab example, who's to say that the success Andrew experienced
when he cut back on his work was not due in part to the long hours he had put
in previously?
Not trying to admonish people to work more - just saying there's some missing
analysis here!
~~~
ebiester
We think of productivity as "how much can you get done in an hour" * "how many
hours you work."
Unfortunately, as humans, we need recharge time. If I code 80 hours this week,
the first twenty hours will be very productive, the next 20 will be slightly
less productive, the next 20 will be significantly less productive, and the
last 20 might be hurting more than helping.
After a day of rest, I might be back at the same productivity as I was for the
middle 40 last week, and degrade from there.
The irony is that the more hours I work, the less I get done, the less
difficult problems I can tackle.
Meetings take less out of people, but we still make poorer decisions by tiring
our brains rather than taking passive time to rest and using spare cycles to
reflect on how we can squeeze more productivity out of the time we can work.
This, of course, ignores that some people burn out easier than others, and
some have higher peaks than others. Working a truly focused 20 hours is more
exhausting than 50 hours of semi-focused work for many people, and isn't
rewarded in many corporate cultures. Find your own personal style.
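(A back-of-the-envelope sketch of the diminishing-returns idea above, in
Python — the block size and multipliers are made-up assumptions for
illustration, not numbers from any study:)

    # Toy model: each successive block of hours in a week is less productive.
    # The 20-hour block size and the multipliers are invented for illustration.
    def weekly_output(hours, block=20, multipliers=(1.0, 0.8, 0.5, 0.1)):
        output, remaining = 0.0, hours
        for m in multipliers:
            worked = min(block, remaining)
            output += worked * m
            remaining -= worked
            if remaining <= 0:
                break
        return output

    for h in (40, 60, 80):
        print(f"{h}h/week -> {weekly_output(h):.0f} units of useful work")
    # 40h -> 36, 60h -> 46, 80h -> 48: the last 20 hours add almost nothing.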
~~~
swampthing
Yea, I get the theory - my point is just that we need more than anecdotes when
it comes to posts like this.
------
russelluresti
I think the difference is what you're building the startup for.
Many startups aren't built to last. They're built to get to a certain size and
then sell off their product and make a decent profit from it all. In this
case, burnout isn't a worry because, if you get to the point where you're
burnt out, your startup failed anyways (in that it didn't achieve the goal of
attracting the attention of a larger startup-eating company). It's in these
environments that the 24/7 grind is most apparent. Or companies that started
with this model and then realized they couldn't sell and now have to try to
turn their quick-buck idea into something that lasts.
Some startups, though, are built to last. These are places where burnout is a
concern because they plan to be around for 10 - 15 years and they don't want
to have to replace their entire staff every two years. In these companies,
getting a good work:life balance is important for attracting and retaining
talent.
People tend to think that if you're passionate about what you do (or love what
you do), then you want to spend every waking hour doing it. But this kind of
behavior is self-destructive. Even if you think you want to do it now, it will
eventually wear you down and be unsustainable. Everyone needs to take breaks
and have other interests and activities. And sometimes you have to force
yourself or your employees to strike that balance.
------
eah13
I've always admired people who get more done in less time. We're in hustle
mode in our startup right now but I still try to block out time that feels
wide open to keep an eye on where we're headed and wind down a little. No use
in racing ahead if it's a dead end road.
To continue the vehicular metaphor, it feels like hustle culture advocates
burying the needle in the red zone, even though that's the least efficient
point on the power curve. And even though that's less a predictor of success
than what direction you're headed in.
There are absolutely times when you and your team need to live in the red zone
of your tach. But really that's a temporary solution to being caught in the
wrong gear. Amazing opportunity, can't upshift = gotta floor it anyways. But
curves ahead = can't floor it. All depends.
Hmm. That metaphor actually did OK.
~~~
purplelobster
It's also better to drive slowly towards your goal than to speed in the wrong
direction and never look at your map to get your bearings. After a few side
projects that never went anywhere, I spend more time taking walks and just
thinking about the direction I'm going and I'm spending less time doing
unnecessary work as a result. Sometimes you need a break to be able to come back
to the project refreshed with a new outlook on things.
~~~
eah13
I've also found walks to be super valuable. They're times where you can't get
distracted because there's nothing else to do but think, like in the shower
(which is why so many great ideas happen in each). They also get a little
physical activity into your day which helps the brain work.
When I take walks with my wife we have great convos that won't happen on the
couch when we're exhausted. We walk this circular path around Duke's East
campus, which is about 2 miles. Having a defined path like that helps too- you
don't have to get distracted by where you're going or be tempted to end it
early.
------
casca
TL;DR: not if you work for Groove, a pre-profitable startup that's received
$1.25M in seed funding[1]
<http://www.crunchbase.com/company/groove>
~~~
flog
Absolutely.
If you haven't been paid in 6 months, and you've got a 2 month runway you're
going to be working long hours.
------
jennyjenjen
This is exactly what Mike Alfred of BrightScope tells entrepreneurs,
particularly the young ones just going through school and figuring out where
they want to be in the scope of things. Until reading this, I'd hardly heard
anyone speak out this way. I think it's particularly important to put this bug
in kids' ears early; it's not sustainable to live and work like that, and
eventually it destroys friendships and relationships and takes a toll larger
than initially perceivable.
If it happens that 24/7 weeks exist for a little while, it happens. I get
that. But I think what bugs me most about this "24/7 grind" attitude is that
it's not a magic pill for a product that doesn't work or isn't destined to
become profitable (and I'm using 'destined' lightly here). I've seen plenty of
entrepreneurs go all-in on something and fail not because they didn't put the
work into it -- but because it wasn't a product that could succeed, even with
a tough grind.
------
segmondy
It doesn't, but it does too. Time is 24/7 and never stops even if you do. If
you have a great idea and believe in it, you will be fueled to go 24/7. If you
are afraid others might beat you to it, you will have no choice but to go
24/7. The worst feeling in the world is having a great idea, slacking on it,
and having others thoroughly beat you and stomp you into the ground.
While they are gaining traction, you are playing catch up. And you never catch
up. No founder wants to experience that, this is why startup life for most is
a 24/7 grind.
Because the prize most are after is the highest level. If you want to start a
business making $500k a year, you don't have to grind. But if you want a
chance at millions or billions then yes, you do have to grind. You are
competing at the highest level. It's like preparing for the Olympics.
In everything in life, there are exceptions. But such are not the norm or the
rule.
~~~
mindcrime
Not sure why you got downvoted for that. I can see how a reasonable person
could disagree, but your position is certainly not outlandish. In fact, I - by
and large - agree with you. Especially this bit:
_If you want to start a business making $500k a year, you don't have to
grind. But if you want a chance at millions or billions then yes, you do have
to grind. You are competing at the highest level._
Yeah, if your ambitions are that grand, you can't really expect anything to
come easily... personally I feel like you have to be willing to scrape, kick,
scratch, claw, bleed, hustle and basically battle your ass off if you're going
to get there.
------
curt
Can't recall the study, but researchers analyzed optimal work behavior. They
found that the most/best work is produced when an individual works 7-9 hours
per day. The more creative the task (i.e. coding), the more skewed the optimal
time is towards 7 hours per day. You also want to break the day into 2-3
segments with breaks in-between.
------
mindcrime
Well, that depends. For those of us doing the "bootstrap while working a
dayjob" thing, yeah, it basically does become a nonstop grind. Well, nearly
nonstop. Everybody has to take a break sometime.
For me, I allocate damn close to every hour I have outside of my dayjob to
working on Fogbeam Labs. Take out sleep time, and time to eat (plus occasional
diversions like grocery shopping, etc.) and it's basically:
1. get up and go to the dayjob
2. leave the dayjob and drive to Starbucks or Panera Bread
3. sit there and work on the startup for 4-5 more hours
4. drive home, eat, sleep
5. lather rinse repeat
6. Except Sat. and Sun, which is pretty much:
7. work on the startup all day
Fun? In some ways yes, in some ways no. Healthy? Probably not. Necessary?
Well, I think so or I wouldn't be doing it.
My cofounder, on the other hand, doesn't go to quite the same extremes I do,
which is fine. I tend to be a little extreme by nature, and I don't really
expect anybody else to do the crazy shit I do. :-)
~~~
kayoone
That sounds like a good plan for self-destruction. No social activities at all?
No other hobbies?
~~~
mindcrime
Well, that depends. Right this minute, basically no, but that's because I'm
working on the road for my dayjob, and when I fly in for the weekends, I don't
get home until around 1:00am Sat. morning, which inhibits me from getting up
early enough for what would otherwise be my regularly scheduled mountain bike
group ride.
But, yeah, when I'm in town fulltime, I do about a 2 hour MTB ride on Sat.
mornings, weather permitting.
And, during football season, I do take out time to go to a sports bar and
watch the Dolphins game on Sundays. But even then, I take my laptop so I can
work during the breaks, or if the game turns into a blowout.
Beyond that, though, not a whole lot. I'm almost 40 and I'm running out of
time to achieve some of my dreams... so it's pretty much "nose to the
grindstone" right now.
But like I said... everybody needs a break every now and then. I will
infrequently just take a random day off and do no startup work at all, and just
kick back and watch movies, or go to the hackerspace and tinker with some
Arduino stuff or something. I don't do it very often, and I feel somewhat
guilty when I do, but I usually feel a bit recharged after one of those days.
~~~
jbooth
So your relaxing time is spent rooting for the dolphins? Now _that's_
unsustainable.
~~~
mindcrime
_sigh_ Tell me about it. _sigh_
------
SurfScore
At the end of the day it's just not one size fits all. Kobe Bryant sleeps 4
hours a night (he just tweeted that a couple weeks ago), and plays a
ridiculous amount of basketball every day. If I tried to do that I would end
up in the hospital. There's some people that will be able to work 12-16 hours
a day, and there's some people that can get things done in 6. Execution is ALL
that matters. Get it done in 6 hours or 15 minutes, just get it done.
The (original) article references Aaron Levie of Box, and how he works all day
long and doesn't take vacations. Box is valued in the billions, MetaLab is
not. Is there some correlation there? I'm not sure, but most of the "famous"
founders encouraging long hours tend to have had larger companies.
------
macinjosh
Yes, how else will founders gain their smug sense of superiority?
------
spoiledtechie
Fuck yes you do.
This is retarded. Ya, you can start relaxing a bit when your company is 50+,
but when your startup is just beginning, before profit, then fuck yes.
~~~
untog
Fuck no, you don't. Or maybe you do. How about we stop making universal rules?
You can bootstrap a startup quite successfully just using a few hours a week.
But not all startups. Depends what you're doing.
~~~
mindcrime
_You can bootstrap a startup quite successfully just using a few hours a week.
But not all startups. Depends what you're doing._
I guess that's true, when you consider something like patio11 and his BCC. It
doesn't sound like he was pouring 60+ hours a week into that (or maybe he was,
somebody correct me if I'm wrong, please).
But I'm also guessing that the set of startups that you can build with that
small a time investment is pretty small. Especially if you're talking about
something that's intended to be a scalable startup, something that can be a
billion dollar business someday.
~~~
jes5199
if time was fungible, then working 40 hours instead of 60 hours would only
make it take 50% longer in calendar-time to get the thing done. And if you
have some other income stream, then that extra time might not matter much. And
if getting enough sleep and seeing your family makes you mentally healthier
the calendar-time penalty might be less than 50%. Maybe a lot less.
Judge rules California coffee shops must display cancer warnings - ayanai
http://thehill.com/policy/healthcare/380925-judge-rules-coffee-sellers-must-include-cancer-warnings-in-california
======
malcolmgreaves
What an absolute waste of everyone's time.
Prop 65 has such a high false positive rate that it's worthless; it does
nothing to inform or protect citizens. When there's a sign on everything, the
"warning" doesn't matter anymore. Moreover, just because something has a
carcinogen doesn't automatically mean it's unhealthy nor that it will harm
you.
[1]
[https://www.ncbi.nlm.nih.gov/pubmed/3575799](https://www.ncbi.nlm.nih.gov/pubmed/3575799)
[2] [http://www.businessinsider.com/almost-everything-causes-
canc...](http://www.businessinsider.com/almost-everything-causes-
cancer-2016-5) [3] [https://www.cancer.org/cancer/cancer-causes/general-
info/kno...](https://www.cancer.org/cancer/cancer-causes/general-info/known-
and-probable-human-carcinogens.html) [4]
[https://www.huffingtonpost.com.au/2016/08/15/17-carcinogenic...](https://www.huffingtonpost.com.au/2016/08/15/17-carcinogenic-
foods-you-probably-eat-every-day_a_21452232/)
~~~
dmm
> doesn't automatically mean it's unhealthy nor that it will harm you.
Examples: sunshine, candles, birth control
~~~
another-one-off
Realistically, sunshine has caused more cancers than every nuclear disaster
added together.
If we could, we would absolutely ban it. It is grossly unsafe.
~~~
njarboe
Probably causes more cancers each day than every nuclear disaster added
together. On the other hand, without sunshine, the Earth would be a frozen
world covered with ice, void of life.
------
chomp
When people say "overregulation hurts small business in California," here's an
example. Imagine just wanting to sell people coffee and not knowing you need
to warn people about cancer risks of all things! Imagine getting fined by this
regulation, or the myriad of others of which you could possibly run afoul.
Just yikes.
~~~
fapjacks
Don't forget about the $800 Alternative Minimum Tax faced by _every_ company
registered in the state (or doing business in the state, or having workers in
the state, or...). That's $800 even if you make zero dollars, or if you're a
nonprofit, or if you lose money. California literally says "for the
_privilege_ of doing business" in the state. The ole small business
assassination program.
~~~
techsupporter
> California literally says "for the _privilege_ of doing business" in the
> state.
I'll start out by saying that I think that an $800 minimum is waaaay too high.
But I continue by pointing out that it's not for "doing business," it's a tax
for having the legal fiction of limited liability[0] enforced by the state.
You only pay the minimum tax if you are a corporate entity (Inc or LLC).
That's a subtle, but in my opinion important, difference.
If you're willing to transact business in your own name as your own personal
liability, the state--at least for this tax; I'm not conversant with the full
roster of non-corporate business taxes in California--doesn't charge you for
the privilege. If you want the shield of your personal assets separate from
your corporate persona then, yes, the state charges you for that privilege.
Every state has some sort of fee or tax. Some are very small (Wyoming is only
$30 or $40, I believe) while some, like California's, are quite large on the
minimum end.
0 -
[https://www.ftb.ca.gov/businesses/faq/712.shtml](https://www.ftb.ca.gov/businesses/faq/712.shtml)
~~~
fapjacks
So the state tax code literally says, "For the privilege of doing business in
the state". [0] Their words, not mine. Whatever way you describe or justify
it, the state defines it in this way specifically. Also, sole proprietorships
can only be for-profit ventures in California. So if you wanted to start a
non-profit, you're paying the AMT.
[0]: [http://codes.findlaw.com/ca/revenue-and-taxation-code/rtc-
se...](http://codes.findlaw.com/ca/revenue-and-taxation-code/rtc-
sect-23455.html)
------
peterwwillis
Other foods with acrylamide: black olives, prunes, dried pears, coffee,
roasted tea, rice crackers, anything fried, anything baked, breads, cookies,
nuts, chocolate, baby food, and of course, cigarettes.
[https://www.fda.gov/Food/FoodborneIllnessContaminants/Chemic...](https://www.fda.gov/Food/FoodborneIllnessContaminants/ChemicalContaminants/ucm053549.htm)
~~~
vmarsy
Interesting table! Some have much more than coffee, in the 1000s ppb
Coffees show a huge difference between not-brewed and brewed. Here some
examples from that link:
not-brewed --> brewed (ppb)
458 6
377 6
411 6
539 7
3747 93
Compared to the French fry values, those 6s and 7s look negligible.
However, the article says: "At issue is a chemical, acrylamide, which is
produced while roasting coffee beans."
So is the danger when coffee is being roasted? brewed too? Is that a concern
for people (employees especially) spending a lot of time in coffee shops
because of airborne acrylamide? or acrylamide left on surfaces or something?
------
joshavant
When I worked at Apple, it was fun to walk in some buildings and notice the
California 'cancer warning' sign posted in the lobby. From that, you knew
there was probably a hardware engineering lab somewhere in that building,
because I believe they had to post those due to carcinogens released from
soldering. (And, inside of Apple, that kind of knowledge would only be shared
on a need-to-know basis, otherwise.)
~~~
zaroth
No, they put the signs on every building everywhere because it’s cheaper than
not putting them up and possibly getting fined $2,500 for everyone who ever
walked inside.
It’s impossible to construct a building which would not contain some element
which would trigger the legal requirement of posting the sign.
The sign conveys absolutely no information. Look closer and you will see they
are posted on every commercial building run by someone smart enough to know to
post them. The more entertaining places I've seen them are grocery stores and
preschools.
------
mc32
Of all the things people make fun of California for, this is one we deserve.
The cure: have a fixed limit on the list, so that in order to introduce a new
item needing the warning, you have to remove one from the list.
Barring that, require some threshold of known effect, before something can be
added to the list.
~~~
OrganicMSG
I've seen a magnetic tack hammer with a varnished wooden handle that had one
of those stickers on it here in the UK. Was wondering if it was in case you
chewed the varnish.
Perhaps someone should just put a label on the roadsigns as you enter the
State of California saying, "May contain nuts".
------
nnq
Side question: why don't we actually _encourage and fund research into
developing solid, reliable, and standardized tests for carcinogenicity?_
Everybody with more than basic knowledge of molecular bio realizes that things
like the Ames test are complete jokes... so we don't even f know which of the
substances around us actually cause cancer in humans :|
Yeah, this is the kind of research that would _never_ advance anyone's career,
bring anyone glory, or bring science closer to a "cure" for cancer. And yeah,
once you'd want to start actually employing a reliable test in the wild, you'd
probably severely reduce your changes of a successful career (and maybe even
of continued personal survival), because you'll soon step on some pretty huge
toes.
But really, _improving tests and sensors_ should be a top priority of bio-
medical research... It might slow down some industries, but hopefully not that
much. "Moonshot" projects like "curing" cancer or diabetes are "fun", but
we're building a world where we're surrounded by stuff of which we have no
idea what's safe or what not.
My _personal_ current worry is actually with the cornucopia of chemicals with
neurodegenerative effect that could be all around us, but this reminded me of
the fact that we're also basically in the "dark ages" when it comes to
determining carcinogenicity in a practical way (hint: it's not about what you
can do as part of a funded research project in a few years, it's about having
a portable test kit that any average Joe could use to test the hundreds of
thousands of chemical compounds and mixtures of them that are all around us,
and then in the tons of likely mostly wrong data resulting from it "fish" for
maybe actually relevantly dangerous stuff, and then do real research to
confirm those ...but then again modern medical research is allergic to
"fishing data for patterns" too, so maybe this would need some reframing).
------
JoshMnem
It would make more sense to warn customers about the health dangers of the
sugars and calories.
~~~
qplex
Or about drinking very hot beverages. Let your coffee or whatever cool down a
bit and you're at much lower risk.
[0] [http://time.com/4369809/very-hot-drinks-are-probable-
cancer-...](http://time.com/4369809/very-hot-drinks-are-probable-cancer-
trigger-says-who/)
------
b6
I rarely visit California, but when I do, sometimes the cancer warnings make
it look like I'm on the set of a movie trying to warn me about some absurd
future. OK, the parking lot could give me cancer. OK, the elevator could give
me cancer.
------
anigbrowl
The Chamber of Commerce could easily sue on the grounds that these regulations
are overbroad and even the incumbent governor is on the record as saying so,
but the cost of compliance (displaying a sign) is low and it's handy to have
something to kvetch about.
Also, note that it was a nonprofit, the American Cancer Society, that sued to
enforce this, not the state of California.
~~~
dragonwriter
> The Chamber of Commerce could easily sue on the grounds that these
> regulations are overbroad
There are no regulations at issue, there is an initiative statute; and, while,
yes it is easy to sue and claim a statute is overbroad, it is much harder to
win such a suit: “overbroad” isn't just “covers more conduct than good policy
judgement would permit”, but “exceeds any constitutionally permitted purpose
to impose a chilling effect or outright prohibition on constitutionally
protected conduct.”
In fact, as that claim would be a defense in any lawsuit under the act, if it
was such an easy win, it would have been made in one of the cases—like the
present one—under the act.
~~~
anigbrowl
Hmm, you make a good point. I assumed they wouldn't have standing to bring it
up if they weren't litigating against the state itself, and that they were
limited to arguing on the facts about the acrylamide.
------
mirimir
Other baked and fried foods also contain acrylamide. So what about muffins and
donuts? Or french fries? And then there are all the carcinogens in heated
unsaturated oils.
~~~
Finnucane
Also things you probably shouldn’t eat all the time either.
~~~
mirimir
True enough.
But I suspect that heating _any_ food much over 100 °C will create
carcinogens. Even cooking rice, if it browns at all by accident. And indeed,
some folks cook at 50-70 °C to avoid that. Sprouted grain "bread" baked at ~50
°C is especially strange.
~~~
mywittyname
I was actually told this by a physics professor in college (in a class about
science and public policy). He claimed that basically anything that is browned
via heat becomes a carcinogen, so policy makers and scientific researchers
have a moral obligation to ensure that "is known to cause cancer" remains a
meaningful statement, because taken literally, it applies to almost anything.
~~~
mirimir
Right. You have acrylamide. Plus polycyclic aromatic hydrocarbons (PAH). But
PAH, at least, get degraded quickly, and don't accumulate in fat, unlike
similar halogenated compounds, which do. Arctic peoples are loaded with them,
because they're transported north and condense out. Distillation, basically.
------
alexandercrohde
From wikipedia:
Acrylamide was discovered in foods in April 2002 by Eritrean scientist Eden
Tareke in Sweden when she found the chemical in starchy foods, such as potato
chips (potato crisps), French fries (chips), and bread that had been heated
higher than 120 °C (248 °F) (production of acrylamide in the heating process
was shown to be temperature-dependent).[17] It was not found in food that had
been boiled[17][18] or in foods that were not heated.[17]
------
userbinator
I had to look again at the date of the article, since I remember Starbucks
already had these warnings a long time ago:
[https://www.flickr.com/photos/shakataganai/6039225908](https://www.flickr.com/photos/shakataganai/6039225908)
That doesn't make the situation any less absurd, however.
------
quickthrower2
Warning: Lack of coffee (or similar stimulant to get you through the long day)
may put your tech job at risk!
------
cowpig
Judge rules hospitals must display cancer warnings; longer life linked to
increased cancer risk
~~~
quickthrower2
Every business that doesn't suffocate its customers should display cancer
warnings. Oxygen is carcinogenic.
------
freedomben
My buddy has a sticker on his laptop that says, "not legal in California."
Note: He placed the sticker there himself as a joke. His laptop isn't _really_
illegal in California.
~~~
Groxx
Yet.
------
onetimemanytime
sheer stupidity. Eating causes cancer via weight gain. Breathing air causes
cancer. Eventually they'll have to list only those few things that will _not_
cause cancer.
~~~
daveFNbuck
That's the law in California. Everywhere you go there are warnings that the
area has chemicals that cause cancer. I'm surprised Starbucks didn't have
these warnings already.
~~~
coding123
I actually remember seeing a sign like that in a Starbucks like 8 years ago in
like Vacaville CA of all places.
------
exabrial
California continues to drive the cost of doing business up.
~~~
Pilfer
Coffeeshops only need to print a piece of paper and tape it to the wall.
The cost of compliance is less than $0.20
I think there are better arguments for your PoV than this one.
~~~
philipodonnell
Do not fall into the trap of assuming that because the cost of complying with
the letter of an incremental regulation seems like it would be low to you, it
does not impose a burden on business owners.
Regulations are cumulative. This is yet another thing to add to the list of
notices that have to be up, and yet another thing that an unaccountable city
inspector in a bad mood might decide is not displayed quite prominently enough
for their liking and fine you over.
That is what businesses complain about, not the cost of each regulation, the
cost of complying with all the regulations, all the time, with no way to
defend yourself and no one to tell you if its enough (and defend you if
someone disagrees later).
~~~
exabrial
What's the cost of non-compliance for not putting up a $0.20 sign?
------
mmagin
Interesting: [http://schachtmanlaw.com/the-council-for-education-and-
resea...](http://schachtmanlaw.com/the-council-for-education-and-research-on-
toxics/)
------
dacox
I always thought it was funny seeing this in California. Coffee? Getting a bit
surreal.
I was visiting once and saw a plaque on a (condo?) building with the warning.
~~~
Skunkleton
My apartment building has such a plaque because it was built on the site of an
old factory or something. I am a recent import to California. I was not
prepared for the amount of state sponsored signage.
------
BooneJS
I’ll take the risk that a cup of Philharmonic will be my end.
------
mindfulplay
California seems like a state of contradictions. Antivaccine yet some of the
best universities. Too much regulation yet opioid overdose is prevalent.
~~~
dragonwriter
> California seems like a state of contradictions. Antivaccine yet some of the
> best universities
The antivaccine movement is a very small proportion of the population, and one
which (unlike the universities) is opposed by the people setting policy
(California has a reasonably strong vaccine mandate, and is in the process of
further narrowing the allowable exemptions.)
> Too much regulation yet opioid overdose is prevalent.
California has the third lowest opioid overdose death rate in the nation [0];
insofar as California actually has “too much regulation” and that is a thing
that could reasonably be expected to be contradictory to a relatively high
opioid impact (both of which premises are less than clearly established), no
such contradiction is evident.
But, aside from the relevance of that claimed contradiction, sure, California
is a large and diverse state, not a uniform hive mind. It has plenty of
contradictions. It would be weird if it didn't.
[0] [https://www.kff.org/other/state-indicator/opioid-overdose-
de...](https://www.kff.org/other/state-indicator/opioid-overdose-death-
rates/?currentTimeframe=0&sortModel=%7B"colId":"Opioid%20Overdose%20Death%20Rate%20\(Age-
Adjusted\)","sort":"asc"%7D)
~~~
fipple
> The antivaccine movement is a very small proportion of the population
Not really. Less than 50% of Pixar employees' children at their day care are
vaccinated. And these are people with much more money, and thus more political
clout, than average.
[https://www.wired.com/2015/02/tech-companies-and-
vaccines/](https://www.wired.com/2015/02/tech-companies-and-vaccines/)
------
kazinator
> _A nonprofit research group_
After salaries, bonuses and all sorts of frivolous expenses, no doubt.
No such thing as a "non-profit group" in this world; just a group with an
accounting ledger in which a column labeled "profit" works out to zero.
Natural, traditional foods and drinks don't need any warnings. The ingredients
are roasted coffee and water; go bleepin' google if you want to know how either
might be bad for you.
The “backfire effect” is mostly a myth, a broad look at the research suggests - hhs
https://www.niemanlab.org/2019/03/the-backfire-effect-is-mostly-a-myth-a-broad-look-at-the-research-suggests/
======
stupidcar
This is like one of those Knights and Knaves logic puzzles we used to do at
school:
Amy believes the backfire effect is real, whereas Ben believes it is fake. Amy
reads an article saying the backfire effect has been disproven, then passes
her new opinion on to Ben. What will Ben's revised opinion of the backfire
effect be if 1) the backfire effect actually exists, or 2) the backfire effect
doesn't exist?
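(For fun, a tiny Python enumeration of the puzzle. The update rule is a
deliberately crude toy model — contradiction hardens the prior if the backfire
effect is real, otherwise the person adopts the correction — not a claim about
how belief revision actually works:)

    # Toy model of the puzzle: what do Amy and Ben end up believing,
    # depending on whether the backfire effect itself is real?
    def update(prior, evidence_says_real, backfire):
        if evidence_says_real == prior:
            return prior                      # no conflict, no change
        return prior if backfire else evidence_says_real

    for backfire_is_real in (True, False):
        amy = update(True, evidence_says_real=False, backfire=backfire_is_real)
        ben = update(False, evidence_says_real=amy, backfire=backfire_is_real)
        print(f"backfire real={backfire_is_real}: Amy believes {amy}, Ben believes {ben}")
    # If the effect is real, the article backfires on Amy and she keeps believing in it;
    # if it isn't, she drops the belief and Ben stays right where he was.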
~~~
balabaster
I find this premise amusing... I find it amusing because my ex (whose name
really is Aimee) often had opinions that my life experiences and education
suggested to me were suspect. Aimee used to do a bunch of reading, but the
material she read was often from sources or authors I didn't find credible and
thus while their arguments seemed plausible, they weren't enough to invalidate
my original opinion.
My name really is Ben. My thoughts and opinions on such matters were largely
unaffected by hers in many cases, unless she presented credible evidence that
led me to re-evaluate my opinion, which did occasionally happen. Often though,
it was just "someone else agrees with me, thus I'm right," which does nothing
to sway my opinion.
Two people having the same opinion doesn't make them right and doesn't negate
my opinion. It just means they both disagree with me. Humanity also believed
in the geocentric model of the universe at one point. That doesn't make it
right. It took one person to change the entire opinion of the world to the
heliocentric model. Now admittedly, I'm not Copernicus, but we're all capable
of finding evidence to support or refute our opinions on the world. It's just
that very often people use the opinions of others as support for their own
instead of exercising critical thinking and forming their own opinions.
~~~
RealityVoid
The thing is, forming your own opinion is _hard_ and you need a great deal of
insight to reach some form of truth.
I, for example, if I lived a couple of hundred years back, would probably have
believed the world is flat, since, well, it's the easiest opinion to have by
just looking around. The model we have now is incredibly sophisticated and not
at all intuitive; we just think it is because we're used to it. Things that we
are used to seem the most obvious and simple in the world.
We are limited in how deeply we can dig into stuff. Many times we have to
refer to other people that know what they are doing. I sometimes even refer to
myself at some different point in time. If I know that at some time I looked
deeply at some problem and reached a conclusion, then I will just take it for
granted and not always retrace my thought process. It's a useful mental
shortcut, but one that can leave me forming opinions on old or incomplete
facts.
~~~
Retric
People did not really believe the world was flat hundreds of years ago.
Similarly the “new” world was in regular contact with the old for the last
several thousand years. It’s only 53 miles from North America to Asia, even
less if you include the two islands between them.
PS: We have a lot of ideas about how wrong people used to be. But, I suspect
most people never really formed an opinion on most of this stuff because they
never really considered the question.
~~~
phreeza
If they were in regular contact, why did smallpox have such a devastating
effect on Americans?
~~~
Retric
Population density. Smallpox dies out with lots of tiny isolated communities
with minimal contact with each other.
------
supergauntlet
This link isn't terribly blogspammy or anything, but the original article has
more information: [https://fullfact.org/blog/2019/mar/does-backfire-effect-
exis...](https://fullfact.org/blog/2019/mar/does-backfire-effect-exist/)
And the original paper:
[https://fullfact.org/media/uploads/backfire_report_fullfact....](https://fullfact.org/media/uploads/backfire_report_fullfact.pdf)
~~~
gpvos
It's more readable, better organized, and shorter, too.
------
autoexec
So someone named Amy Sippet who works at a charity called Full Fact decided
it's mostly a myth, not after conducting studies or independent research but
after reading just 7 previous studies on the backfire effect and noting that
some studies didn't seem to her to support it and that most of the studies she
read were conducted in the US. The article links to and quotes from her
twitter account.
This major breakthrough in the social sciences shares space in the very same
article with an observation about how you can use Instagram to find "the
internet’s darkest corners.” and some random stuff about Apple.
Maybe it's just the backfire effect here, but I'm not really convinced.
~~~
sterkekoffie
Limiting a literature review to 7 studies based on frequency of citation is
pretty understandable, especially for a relatively young research topic. Of
the 2 studies which showed a significant backfire effect, the findings of the
first were considered "overstated and oversold" by its own authors and the
second only found evidence of the effect in one of four situations. The second
was also partially replicated in a later study which found no evidence of the
backfire effect.
That being said, the author of the review probably would not want you to be
convinced by a single pop-sci piece covering her work. This "major
breakthrough," as you describe it, does not in fact share space with any
tweets or mentions of Instagram, but can be found here:
[https://fullfact.org/media/uploads/backfire_report_fullfact....](https://fullfact.org/media/uploads/backfire_report_fullfact.pdf)
------
rsj_hn
In an age when nothing seems to replicate, this wouldn't surprise me. But
1. There is an obvious conflict of interest in having a fact checking
organization claim to "debunk" studies showing that fact checking is
ineffective.
2. This "refutation" is a press briefing, not a peer reviewed study, and
doesn't have a lot of facts to back up the claim, other than looking at the N
difference in only 7 handpicked studies.
This is an amusing example of what happens when so called authorities are held
to account. Let's say someone reports that fact checking organizations are a
waste of taxpayer dollars. Then the fact checking organization comes out with
a 'Fact Check: False'. It seems there should be some prohibition about being
an authority on criticisms of yourself.
~~~
avs733
I've always struggled with the phrase 'replication crisis.' Really we have a
'mindset to approach science crisis.' The replication crisis is a couple of
things to me...
1) A problem with how research is published and treated as 'new knowledge' -
some of which is being solved (slowly) by efforts such as the OSF. Other
efforts include registered reports, journals for nonsignificant findings, etc.
The problem is journals seek to publish a 'contribution' and define
contribution in a way that misrepresents science.
2) A problem with the promotion and rewarding of academics...see 1 and also
the drive to produce work in academia results in rewarding minuscule steps
that don't have larger coherence because the paper not the impact is rewarded.
People get tenure for publications not scientific discoveries. This obviously
links to 1
3) Often replication failures are a feature, not a bug - but fundamentally we
think and talk about science in too absolutist a frame. This is
especially a problem in social sciences which try and perform 'science' to
gain equal treatment to 'hard science.' They aren't worse or better...they are
different, their science is harder because context matters - but it is still
science. There are myriad studies in psychology performed on white men from
Harvard in the 40's (forgive my facetiousness). If I rerun the experiment and
it 'fails to replicate' is the theory 'wrong', 'invalid', 'incomplete', or
context specific? The choice of term there is critical to a set of
philosophical beliefs about science that can actually imperil science by
making different experimental results into an _inherently_ bad thing.
------
rjkennedy98
This all depends on how you measure it. Take Roe v Wade for example. Abortion
pre-Roe v Wade was really not that big of a political issue. Just yesterday I
was listening to the radio on a long drive and a company called "Patriot
Network" was advertising a cellphone network for "Patriots". The reason?
Verizon and other big phone companies donate to Planned Parenthood.
~~~
sb057
Part of the reason it wasn't a very big issue pre-Roe v Wade is that on-demand
abortions were illegal in 46 states prior to it.
~~~
kirsebaer
"In 1971, delegates to the Southern Baptist Convention in St. Louis, Missouri,
passed a resolution encouraging “Southern Baptists to work for legislation
that will allow the possibility of abortion under such conditions as rape,
incest, clear evidence of severe fetal deformity, and carefully ascertained
evidence of the likelihood of damage to the emotional, mental, and physical
health of the mother.” The convention, hardly a redoubt of liberal values,
reaffirmed that position in 1974, one year after Roe, and again in 1976."
[https://www.politico.com/magazine/story/2014/05/religious-
ri...](https://www.politico.com/magazine/story/2014/05/religious-right-real-
origins-107133)
~~~
zaphar
There is an enormous amount of nuance in that statement. The real change that
occurred was that nuance evaporated from the discussion of the subject.
------
emmelaich
Meta -- but does anyone else get tired of the word 'myth'?
These days, instead of opinions and arguments being countered, we have 'myths'
busted.
It's even weirder when it is a 'myth' you've never heard of.
~~~
keiru
Yes, it's tiresome. Not referring to this one in particular, but many internet
champions of truth aggrandize their Goliath to post shitty myth busting
articles. In a recent post I chose to say "mistaken belief" to avoid sounding
like captain fact-checker.
------
backfired
To paraphrase:
> The effect: " _An ideologue believes a (wrong) thing. You tell them, no.
> They double down. Your effort to refute has backfired._ " This is actually
> rare.
It doesn't matter, because we've all been there. We've all experienced clashes
of opinion, and none of us like being wrong, so we resist adjusting.
Lots of us do it. It's a natural part of childhood. Being afraid of the dark.
Believing in ghosts. Santa Claus. The Easter Bunny. The Tooth Fairy. How did
you cope?
What about other aspects of belief? Religion? What aspects of religion does
one regard as history? How many people remain religious in the face of
contrary evidence?
Suddenly, this effect seems less rare.
~~~
username90
The backfire effect is obviously not universal, otherwise it would be
impossible to teach for example Newtonian physics. So if you present strong
enough proof people will change their minds.
I think the problem is that people don't really learn what a strong proof is,
so when two people who each have a weak understanding of the topic tries to
argue both will see that the other is ignorant but not that themselves are
ignorant. After the discussion both will have correctly refuted a lot of
claims from the other side, thus "strengthening" their own views. If this
often happen to you then you are likely not as well informed as you think you
are.
In short, I believe that "you can't reason someone into a position that you
didn't reason yourself into" is true, while the popular "you can't reason
someone out of a position that they didn't reason themselves into" is wrong.
~~~
Torgo
Not necessarily. Science advances when entrenched proponents of the old theory
die out.
------
ocschwar
I don't know. Having read it only makes me believe more strongly in the
backfire effect.
------
diminoten
"Fact finding organization finds that fact finding matters, despite some
rumors that facts don't matter."
Right.
------
doc_gunthrop
“Read not to contradict and confute; nor to believe and take for granted; nor
to find talk and discourse; but to weigh and consider.” -Francis Bacon
------
Der_Einzige
The term used for this by philosophers, at least after Hegel, is "the dialectic".
I've always thought that many philosophers' uncritical acceptance of a
dialectical process within the world was absurd. I am happy to see some
evidence from science indicating this to be the case.
------
darkpuma
Belief in the backfire effect confirms the bias many people have towards
believing their political opponents are idiots, immune to rational thought.
~~~
Veen
That’s only true if you believe your own side and yourself aren’t just as
vulnerable.
~~~
darkpuma
I didn't say one side experiences this more than another.
There are rational people on all "sides". Rational people may come to disagree
with each other if they have different information available to them, or if
they simply have different priorities.
Anybody who thinks _all_ of their opponents are irrational, by virtue of being
their opponent, is themselves irrational in that respect. However, that says
nothing about the political distribution of this particular cognitive bias. I
have made no claims concerning that.
------
dr_dshiv
"I didn't think the backfire effect was real, but now I'm not so sure."
Is there a catchy name to describe the effect of disbelieving social science
research?
~~~
spamizbad
Not sure, but social science research is currently in the midst of a
replication crisis. And not necessarily due to anyone acting in bad faith: you
can run an experiment twice a few months apart and get opposite results. No
idea how to fix it, but it probably needs investment in more robust experiment
design, larger samples, and perhaps more advanced statistical methods. Sadly
all of those demands cut against the "publish or perish" mentality in academia
which often favors quantity over quality. A shame, since most social science
researchers want to do the best work possible but are constrained by $$$ and
the broader culture of expectations in academic research.
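(To make the "larger samples" point concrete, here's a rough normal-
approximation power calculation in Python — standard textbook arithmetic, not
specific to any of the studies discussed:)

    # Approximate per-group sample size for a two-sample comparison at
    # alpha=0.05 (two-sided) and 80% power, using the normal approximation.
    from scipy.stats import norm

    def n_per_group(effect_size, alpha=0.05, power=0.8):
        z_a = norm.ppf(1 - alpha / 2)   # significance threshold
        z_b = norm.ppf(power)           # desired power
        return 2 * ((z_a + z_b) / effect_size) ** 2

    for d in (0.8, 0.5, 0.2):  # Cohen's d: large, medium, small effects
        print(f"d={d}: ~{n_per_group(d):.0f} participants per group")
    # Roughly 25, 63 and 392 per group: small effects need big, expensive samples.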
~~~
germanlee
It's in a replication crisis because pretty much none of it is science ( no
replicable testing possible - hypothesis, experiment, theory ). It's why
Richard Feynman associated social science with pseudoscience.
[https://www.youtube.com/watch?v=tWr39Q9vBgo](https://www.youtube.com/watch?v=tWr39Q9vBgo)
Because real science destroyed the credibility of religion and religion in
much of the world is no longer a credible social control tool, the elites
needed a new form of religion to control society. That new religion is social
"science". Whereas religion controlled everything from economics, schooling,
family, culture, society, law, etc, now they all fall under the
pseudoscience/religion called social "science".
---------------------------------------------
Reply to ziddoap.
Considering you tossed around "illumati-esque", I doubt you are interested.
I consider social science to be a pseudoscience for the same reason Richard
Feynman did. Did you bother watching what he had to say?
Social "science" is a humanities. It belongs in the category with philosophy,
ethics, literature, religion, etc.
Just because I said it is a pseudoscience doesn't mean that I think it is
useless or bad necessarily. No more than I think literature, ethics,
philosophy or even religion is bad.
I just think social "science" is a "religion" trying to latch onto the good
name of real science. Just like creationism "science" or all the other fake
"science" trying to gain credibility by associating itself with science.
~~~
feanaro
I see this position somewhat often, almost unavoidably accompanied by a
reference to Feynman. The position is, of course, pure nonsense if you take a
few moments to think it through.
Not only has the world moved on drastically from when Feynman, a non-expert in
the area, wrote that essay, but it is also ludicrous to claim that a part of
existence is unamenable to scientific study. If it exists and has an effect,
it can be studied. There is no reason to believe human behaviour and thought
is beyond this.
~~~
CriticalCathed
>Not only has the world moved on drastically from when Feynman...
You're right, social science got even less replicable and less scientific.
>If it exists and has an effect, it can be studied.
Yes, you're right. But that doesn't mean that you can ground it in empirical
evidence or effectively apply the scientific method of inquiry. Philosophy is
a method of studying human behavior -- it is not, however, science. And for
substantially the same set of reasons the social sciences are also not
science.
~~~
feanaro
> You're right, social science got even less replicable and less scientific.
You'll need to substantiate this claim, of course.
> Yes, you're right. But that doesn't mean that you can ground it in empirical
> evidence or effectively apply the scientific method of inquiry.
Why not?
> Philosophy is a method of studying human behavior -- it is not, however,
> science. And for substantially the same set of reasons the social sciences
> are also not science.
You are simply repeating the old misconception I've hinted at: that human
behaviour is off-limits to scientific inquiry, even though it is real and
physical. I fail to see why this would be the case. We are, after all, talking
about measurable, quantifiable things: inputs and outputs regarding human
behaviour.
~~~
Kaiyou
Because most humans behave sufficiently differently from each other. Even if
you experiment on a subset of humans and get knowledge about this subset, a
different subset of humans could react completely differently. It's so bad
that even the same subset of humans could react completely differently if you
do the same test 50 years later.
~~~
feanaro
> Even if you experiment on a subset of humans and get knowledge about this
> subset, a different subset of humans could react completely different.
They _could_ , but that does not mean they _do_. There are quite obviously
rules and patterns to much of human functioning. Denying so seems like human
hubris.
Even if each human displays unique behaviour for a particular trait, knowing
that it is so for that particular trait is useful and therefore still amenable
to scientific exploration. Even if humans reacted randomly in some situation,
the random behaviour would be subject to a probability distribution and
knowing it would be useful.
It's hard for me to see where exactly the leap to "it's impossible to study
human behaviour scientifically" is necessary, particularly when we have so
much evidence to the contrary.
~~~
Kaiyou
It's not science if you don't reliably get the same output if you provide the
same input. It's useful, sure. But it's not science.
~~~
feanaro
Sorry, but this just sounds like a deepity. Science is a _process_ , not a
result.
It holds for most of science most of the time that you don't reliably get the
same output if you provide the same input (because you don't know all the
variables or the entire set of equations). Only when a phenomenon is
completely known does this stop being true.
But when is a phenomenon completely known? After all, for a long time we've
known classical mechanics to be completely known... Except it wasn't. And
during the time we thought it was, you could get into exactly the type of
situation you describe above: for the "same" input, you could get a different
output, depending on the components of the stress-energy tensor you were not
aware were relevant. The effect was subtle there of course, but there are many
examples where it's not (e.g. the entirety of biology and medicine).
So I disagree with this description of science.
EDIT: Also, it completely slipped my mind the first time around because it's
such a stupidly strong counterargument, but by your definition the entirety of
modern physics (quantum mechanics, quantum field theory and beyond) is not
science.
~~~
Kaiyou
Science is useful because it has the power to predict. It gains this power
from getting the same output when providing the same input. If what you are
doing doesn't have the power to predict it's not science. You can still apply
the scientific method to what you are doing and if you're applying that method
you might as well call yourself a scientist and what your doing science, but
then again, a few hundred years ago scientists didn't yet know that things
like alchemy weren't science, so they applied the scientific method to it and
figured out that it isn't useful.
------
orblivion
That thing you thought was true all your life is actually false.
1 year later: That thing you thought was false for the last year was actually
true.
At every turn trying to make us feel stupid. It's almost like somebody is
trying to drive us crazy.
~~~
britch
Understanding is a process. Believing what we have the best evidence to
believe is not stupid. If the evidence changes and you shift your belief you
are not stupid.
I think the reality is that humans are very complex, especially how we relate
to each other. Studies that try to boil down our behavior and provide
actionable results are highly demanded, but are very difficult to prove.
~~~
greglindahl
Believing the best evidence is not a good idea if the evidence is weak. One
thing that many science journalists are not very good at is conveying how
strong the evidence is.
~~~
tempestn
It's a good idea as long as the belief is proportionately weakly held.
~~~
RealityVoid
But this is not the way most people think about things. For most people it's
white or black, it is or it isn't. Speaking of degrees of belief makes you
look like an indecisive fool at times.
~~~
tempestn
If I were running for office I might be a bit careful about how I expressed
such things, but having a degree of belief proportional to the degree of
evidence appears the opposite of foolish to me. I hold that belief moderately
strongly.
101 Great Computer Programming Quotes - hollywoodcole
http://www.devtopics.com/101-great-computer-programming-quotes/
======
hollywoodcole
"I think Microsoft named .Net so it wouldn't show up in a Unix directory
listing."
Gcapizzi/moka: A Go mocking framework - kiyanwang
https://github.com/gcapizzi/moka
======
gcapizzi
Just tried to post about Moka on here and found out that someone had done it
before me :D Thanks OP!
I'm the author of Moka, so if anyone has any question or feedback for me,
please shoot :)
Square jumps in market debut - petethomas
http://reuters.com/article/idUSL3N13E45620151119
======
chollida1
A couple of interesting things about the IPO. A lot more of the trades are
tagged as coming from trade reporting facilities than is normal, which means
they were crossed/filled by internalizers.
This usually means retail flow (regular folks) using E-Trade, Ameritrade,
Interactive Brokers, Scottrade, etc. to buy/sell shares. These firms get paid
to send their orders to HFT firms who fill the orders internally (i.e. not on
an exchange) and then use trade reporting facilities to notify the markets of
these orders.
Also supporting this thesis is the number of small and odd lot trades.
So you could argue that Main Street likes Square more than Wall Street.
With the market down and Match.com's IPO popping less, this is a very good
opening for them.
Having said that, the real test for most new stocks is 6 months out when
they've had 2 reporting quarters, options start to trade and short interest
numbers (a measure of how many people want to short the company) start to have
some validity.
Question to anyone more informed than me......
Does Square's late-series investors' liquidation preference kick in at the IPO
opening price or the VWAP price through the day?
If it's the former, then the late-stage investors are going to get a great
double dip: a huge increase in shares due to the IPO price being below their
strike, plus the benefit of this pop.
Maybe late stage investors aren't as dumb as people around here have been
claiming lately?
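(A quick worked example of the "double dip" being asked about, with entirely
made-up numbers — the actual terms of Square's ratchet aren't spelled out
here, so treat this as a sketch of the mechanics only:)

    # Hypothetical IPO ratchet: if the IPO prices below the guaranteed price,
    # the late-stage investor is topped up with extra shares, and then also
    # benefits from any first-day pop. All numbers below are invented.
    invested = 30_000_000        # dollars put in at the late-stage round
    guaranteed_price = 15.00     # price the ratchet protects (made up)
    ipo_price = 9.00             # IPO offering price (made up)
    first_day_close = 14.00      # price after a ~55% pop (made up)

    original_shares = invested / guaranteed_price
    # Top up so the stake is whole at the IPO price.
    ratchet_shares = invested / ipo_price - original_shares
    total_shares = original_shares + ratchet_shares

    print(f"extra shares from the ratchet: {ratchet_shares:,.0f}")
    print(f"stake worth ${total_shares * first_day_close:,.0f} at the close "
          f"on ${invested:,.0f} invested")
    # ~1.33M extra shares; the stake is worth ~$46.7M at the close.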
~~~
exelius
Yeah; this happens a lot with big tech IPOs. Typically they will spike on day
1 with lots of individual investors buying in to the hype train. I would
expect to see Square drop precipitously over the next week/month as the
professional investors can make money by pushing the valuation down, then
it'll stabilize somewhere near the IPO price in 6-12 months. Company
fundamentals have very little to do with share price in the first year or two
of an IPO; it's almost entirely psychological.
And nobody's been claiming that late-stage investors are dumb; in fact quite
the opposite. Late stage investors tend to be very large, professionally
managed equity funds - they are as savvy as investors get, and they're strong-
arming early stage investors into unfavorable liquidation preferences. They're
also the type of investor to have media / analyst contacts they can use to
push down the IPO price knowing that it's going to pop. The discussion with
late stage investors is really "the fact that late-stage investors are sucking
up a disproportionate share of the profits from unicorn exits is endangering
the VC investment model".
~~~
chillydawg
None of what you just said has any basis in reality. If you really did know
all of that, you would have spent every penny you have a) buying the stock at
IPO and b) selling it tomorrow and then immediately shorting it and waiting.
~~~
justinv
It's incredibly difficult to short an IPO within the first 30 days because the
SEC requires the underwriter to wait at least 30 days to lend out shares for
shorting.
So then you'd have to get a retail or institutional investor who JUST
PURCHASED shares to lend them to you to short.
In which case, why would they bother buying them in the first place?
~~~
scurvy
This isn't entirely true. If you hold securities in a margin account, your
broker can lend the shares out to another client of theirs to short. You have
no say in the matter. It's up to your broker to balance the number of longs vs
shorts internally. But you can most definitely short on day one. I've done it
and my account balance only has 1 comma in it.
If the short to long ratio gets too close, they will close out shorts or try
to borrow from other clearing firms so that they don't end up naked. Early on,
most firms will just close the short.
~~~
prostoalex
> This isn't entirely true. If you hold securities in a margin account, your
> broker can lend the shares out to another client of theirs to short. You
> have no say in the matter.
I don't think what you said is 100% true either. Charles Schwab brokerage
definitely asks before lending out, advertises the interest the borrower is
willing to pay on the loaned shares, enrolls the willing lender into Schwab
Securities Lending Fully Paid Program, and then buys an insurance policy from
Lloyd's to cover counter-party default. All of this is done via FedEx letters
and hand-written signatures.
Sounds like some brokerages don't ask and keep the fees and accrued interest
to themselves, which is shady. ETFs and mutual funds don't have to ask
anybody, of course, and usually pass on revenue from securities lending to
their customers via lower fees.
~~~
scurvy
Are you sure that's not only for cash accounts? Pretty much every margin
account agreement I've ever seen says that they can lend out your holdings
without your explicit consent or that acceptance of the agreement is explicit
consent.
Cash accounts are different creatures.
------
mikeash
This seems to be a pretty common thing. Why don't IPOs use some sort of
auction structure to discover the right price at the time of the IPO, rather
than setting a price up-front and trying to figure out the optimal number
ahead of time?
~~~
ryw90
Google used a Dutch auction in their IPO. But then again they are pretty
obsessed with auctions. There is some research on auctions for IPOs, e.g.
[http://www.nber.org/papers/w16214](http://www.nber.org/papers/w16214), that
suggests they are too complex for investors.
~~~
evanpw
And Google still had a 15% first-day pop, which is just about the normal size
for a traditional bank-run IPO.
~~~
chillydawg
It absolutely is not normal. If there was ALWAYS a 15% pop, then IPO prices
would be 15% higher.
~~~
tveita
It's not a given that a stock that just popped 15% trades the same as a stock
that was offered at a 15% higher price in the first place.
------
marcusestes
The financial press has spent the last two weeks lambasting them for a
valuation that was essentially a down round from their last private financing.
I'm sure they were aiming for a small pop, but 60% does make it look like they
left money on the table, though it may have taken some of the heat off the
criticism around the discounted valuation.
They just needed to get the deal done
([http://avc.com/2015/11/getting-the-deal-done/](http://avc.com/2015/11/getting-the-deal-done/)).
------
stanfordkid
They didn't per se leave money on the table. The 60% pop is for shares that
are actively trading. The actively traded amount is much much less than the
amount that was taken in by institutional investors that would agree to hold
the stock for an extended period of time.
~~~
cowsandmilk
Side question on this: Are there ever efforts to have varying lock-up periods
for different institutional investors? So there is a multi-stage gradual
introduction of more actively traded shares?
~~~
prostoalex
Yes, for example, FB [http://www.thestreet.com/story/11666479/1/heres-the-
rest-of-...](http://www.thestreet.com/story/11666479/1/heres-the-rest-of-
facebooks-lock-up-schedule.html)
------
btilly
The proper conclusion is that Square raised a lot less money than it could
have raised, and investment bankers are very happy with the cream they are
skimming off the top for their friends.
A pop makes for a good headline. But really it is a terrible deal.
~~~
adventured
That's not necessarily accurate if the pop today is being caused by average
public investors. As someone else noted, it would mean main street likes the
stock more than Wall Street.
~~~
btilly
How not accurate?
The half about investment bankers and their friends being happy is guaranteed
to be accurate because they bought low and now get to sell high.
As for the other half, it is theoretically the job of Wall St to figure out
what main street will like. However Wall St makes money by being bad at that
job. Is there any wonder that they are consistently terrible at it?
------
wslh
It is weird that I can find neither the SQ symbol nor SQUARE in Google
Finance, but it is available on Yahoo.
~~~
jonknee
Not that surprising, IPO days are weird for that sort of thing. It will show
up here:
[http://www.google.com/finance?q=NYSE%3ASQ](http://www.google.com/finance?q=NYSE%3ASQ)
~~~
mcpherrinm
In my experience, stocks never show up immediately after IPO on google
finance. I always end up using Yahoo to watch the price for the first few
days.
~~~
jonknee
It's really a shame that Google has abandoned Google Finance. Seems like a
pretty low-cost high-reward vertical for them (advertising to people who care
about financial markets is lucrative).
------
pkaye
So they basically left money on the table?
~~~
chimeracoder
These days, you "have to" leave money on the table, or your IPO is deemed a
"failure" (see: Facebook).
~~~
uptown
What would you rather have ... less money but CNBC and the internet calling
you a huge success, or more capital available to run your business but CNBC
and today's internet echo chamber calling you a failure? I'd choose the
latter.
------
vskr
It is truly amazing how bankers always get away with this kind of incompetent
non-sense. Every single time a stock sees a "jump", it is because those
"analysts" did a poor job in evaluating stock price. And company going IPO is
leaving money on the table, which is stolen by those greedy (too polite a term
for bankers) bankers.
~~~
remarkEon
-> It is truly amazing how bankers always get away with this kind of incompetent non-sense.
-> And company going IPO is leaving money on the table, which is stolen by those greedy (too polite a term for bankers) bankers.
Well, it appears both these statements can't be true. Bankers can't both be so
stupid as to leave such an amazing amount of money on the table...and then
also be so smart that they reap the rewards of significant underpricing.
------
tosseraccount
How much of the stock is publicly available? What is the corporate structure,
i.e. is this a one share, one vote set up?
~~~
ksherlock
"We have two classes of authorized common stock: the Class A common stock
offered hereby and Class B common stock. The rights of the holders of Class A
common stock and Class B common stock are identical, except with respect to
voting and conversion rights. Each share of Class A common stock is entitled
to one vote. _Each share of Class B common stock is entitled to ten votes_
and is convertible at any time into one share of Class A common stock."
"After the completion of this offering, our existing stockholders will
continue to hold all of our issued and outstanding Class B common stock and
will hold approximately % of the combined voting power of our common stock. As
a result of their ownership, they will be able to control any action requiring
the general approval of our stockholders, including the election of our board
of directors, the adoption of certain amendments to our certificate of
incorporation and bylaws, the approval of any merger or sale of substantially
all of our assets, and certain provisions that impact their rights and
privileges as Class B common stockholders. See “Description of Capital
Stock.”"
------
yggydrasily
Yahoo finance says the market cap is only $340M, but all the articles leading
up to today said their valuation was going to be $2.5-3B.
Did they really only IPO less than 20% of the total company and keep the rest
private?
~~~
joshu
That isn't how it works. They create some new shares and sell them. The rest
of the shares are held by the current owners.
------
randomsearch
I gotta ask, why will Square succeed in a market that is surely going to be
controlled by device (phone/watch) manufacturers?
------
strongcrypto
From what I remember, IPO prices tend to aim about 15% below what the market
value should be. It's a big incentive to get the big players (institutional
investors) to get in on the IPO and get the stock moving. Pretty sure Square
might have offered a deeper discount to make the offering look successful.
Regardless of the actual prevalence of that strategy, I don't think we'll have
a meaningful narrative about Square's IPO until the market has it for at least
a few days.
~~~
noroom
What you remember from where?
~~~
strongcrypto
Financial coursework, not anything I can cite but this should work:
[http://www.crab.rutgers.edu/~yaari/Articles-PDF/IPO-
Hauser-Y...](http://www.crab.rutgers.edu/~yaari/Articles-PDF/IPO-Hauser-Yaari-
JLEweb.pdf). Estimates that this IPO discount is around 18.4% in the US.
"The price discount varies over time and across initial public offering (IPO)
mechanisms but remains economically significant everywhere. Spanning 4
decades and 38 countries, Ritter’s (2003) survey of international studies
reports a mean first-day return (commonly equated with the offer price
discount, or underpricing) ranging from 5.4 percent (Denmark) to 257 percent
(China), with a median return of 20.7 percent...[the] U.S. mean return [was]
18.4 percent [from] 1960–2001"
------
Bud
This is no surprise for anyone who is using Square Cash. Clearly the best
product on the market for its purpose.
~~~
natrius
It seems unlikely that they'll be able to transfer money for free
indefinitely. Their competitors hold on to balances for a reason: it's the
only way to build a sustainable business on top of free money transfers.
~~~
glass_of_water
Square Cash recently introduced the ability to send money w/ your credit card
(before it was just debit). Square takes 3% of those transactions. Also, I
think they charge 1.9% for non-personal use when sending money w/ a debit
card.
------
gchokov
Big shorting coming.
|
{
"pile_set_name": "HackerNews"
}
|
Boycotts or buycotts? The role of corporate activism - samizdis
https://phys.org/news/2020-08-boycotts-buycotts-role-corporate.html
======
samizdis
Corporate Sociopolitical Activism and Firm Value,
The Journal of Marketing,
[https://journals.sagepub.com/doi/10.1177/0022242920937000](https://journals.sagepub.com/doi/10.1177/0022242920937000)
|
{
"pile_set_name": "HackerNews"
}
|
Java (or JVM Lang) Based Startups - We Want To Hear From You - friendlytuna
http://java.dzone.com/articles/are-you-java-or-mobile-based
======
mindcrime
Here's one: Fogbeam Labs[1]. Our stack is JVM based, mostly being Groovy /
Grails, and some plain old Java. There may be a place for some Scala and/or
Clojure down the road, depending on how things shake out.
[1]: <http://www.fogbeam.com>
|
{
"pile_set_name": "HackerNews"
}
|
What It Means to Design for Growth at The New York Times - infodocket
https://open.nytimes.com/what-it-means-to-design-for-growth-at-the-new-york-times-2041e0f5e64a
======
Fjolsvith
Probably doesn't have anything to do with axing the political cartoon
department. [1]
1\. [https://www.mercurynews.com/2019/06/13/political-cartoons-
ne...](https://www.mercurynews.com/2019/06/13/political-cartoons-new-york-
times-cancels-editorial-cartoons/)
|
{
"pile_set_name": "HackerNews"
}
|
The coming shortage of helium and how government policy is creating it. - stretchwithme
http://www.scientificamerican.com/blog/post.cfm?id=the-coming-shortage-of-helium-2010-06-30
There's a mandate that sales cover the costs associated with the helium reserves, so they've lowered the price to increase the sales in the short run without regard to the long term strategic value of having a huge chunk of a limited world supply.
======
Robin_Message
That is a nasty linkbait-y retitling. As far as the Helium Privatisation Act
goes, the government was being accused of "waste" in 1996 when it was sitting
on a vast stockpile of helium (from prior military uses.) It decided to sell
it off over a controlled period so as to not depress the price excessively and
put private suppliers out of business. They gave themselves the deadline of
2015 to have it all sold, presumably so as to reduce the government debt in a
reasonable time frame. [1]
Now it seems that helium is much more valuable but the price is depressed
because this stockpile has to be sold by 2015. So something that made sense at
the time now doesn't, and a new act is needed to say they can take longer to
sell the helium. Government fail? Only if they don't change the policy.
Oh, and the HPA was almost unanimously approved by Congress (411-10, with 12 not
voting.) [2]
EDIT TO ADD: I am grumpy about this because titles like that generally go with
the articles along the lines of "nasty government interfering with the
market." But in this case, the report wants the government to be the
government, to act strategically, not to sell at the market price but above
it, and generally to interfere. The 1996 Act stopped the government doing its
job in order to cut perceived waste and debt. The act should never have passed
congress in the first place, or at least not in such a fixed way, but it did
because it fitted people's stupid, prejudiced narrative that the government is
too "big".
[1] [http://www.helium.com/items/874929-understanding-the-
helium-...](http://www.helium.com/items/874929-understanding-the-helium-
privitization-act-of-1996)
[2]
[http://projects.washingtonpost.com/congress/104/house/2/vote...](http://projects.washingtonpost.com/congress/104/house/2/votes/137/)
~~~
pbhjpbhj
>So something that made sense at the time now doesn't, and a new act is needed
to say they can take longer to sell the helium.
This sort of thing shouldn't require an act of congress/parliament.
A general rule of not selling >$Y of goods that are above X% of the yearly
market without treasury approval would seem more sensible.
~~~
stretchwithme
The post office can't even stop delivering mail on Saturday without an act of
Congress. The wheels of bureaucracy continue to turn despite whatever signals
the market sends.
------
binarymax
Is helium being used not only because it is inert but for its other properties
as well? I mean if an application just needs an inert gas could it use argon
instead?
~~~
phreeza
It plays a big role in cryogenics, too. Helium is the standard way of cooling
(non-HTC) superconductors down to operating temperature.
~~~
narkee
Right on. MRIs need liquid helium to cool their superconducting magnet, and
they need to be topped up every so often, to accommodate boil-off. It's
getting pricier to do so, increasing the cost of an already expensive
apparatus.
------
phreeza
OK so how does one profit from this? I have tried to find a way to buy Helium
futures before, but haven't been able to. Does anyone know where/how this is
possible?
~~~
mhb
Why would you think that it's a better idea to buy helium now than it was to
buy lithium before the recent discovery of large lithium deposits in
Afghanistan?
~~~
phreeza
I don't follow... More lithium was found, whereas helium is going to become
scarce. What do the two have in common?
~~~
harshpotatoes
Well, we could always find another large deposit of helium in some
undiscovered salt mine. Unlikely, but there are still parts of the earth
largely unsurveyed. So maybe. After that, maybe in 50 years we'll start mining
helium-3 from the moon, or producing it ourselves in a fusion reactor.
~~~
phreeza
From what I understand, He-3 would actually be consumed in plausible energy-
netting fusion reactions, which is the reason for the whole lunar mining
speculation. I think prices would need to be pretty steep before lunar mining
makes economic sense.
~~~
harshpotatoes
The reaction you're thinking of is: Deuterium + He3 -> proton + He4 + energy.
So the reaction also yields helium. Helium is a very common product from
fusion reactions. We can produce it now, just not in very large quantities.
But cryogenics have money, and a need for helium, meaning they might be
willing to pay for it being produced in such a way in the future.
~~~
evgen
Given the amount of energy that is produced for every atom of helium you are
getting this is an unlikely path to adding to the global helium reserves. We
would be able to power a small country with the energy that would be produced
in the course of getting enough helium to fill a small blimp.
------
cstuder
Dammit, what shall I fill my blimp with now? Hot air?
~~~
rmundo
I was just wondering earlier today how long world helium reserves could last.
For airships a 90% reduction in carbon emissions won't matter if helium
becomes the economic bottleneck. Sort of depressing how soon we seem to be
reaching a point where depletion of earth resources is inevitable.
Hydrogen would be interesting if we could make it safe enough. It could double
as a fuel source for the ships, too.
~~~
Groxx
Unless we find a way to make profitable fusion.
|
{
"pile_set_name": "HackerNews"
}
|
Children raised in greener areas have higher IQ, study finds - onetimemanytime
https://www.theguardian.com/environment/2020/aug/24/children-raised-greener-areas-higher-iq-study
======
dhosek
There is a well-known correlation between the greenness of an urban (or
suburban) space and income level of the residents.
[https://yaleclimateconnections.org/2020/06/low-income-
neighb...](https://yaleclimateconnections.org/2020/06/low-income-
neighborhoods-often-have-fewer-trees-than-wealthier-counterparts/)
[https://caseytrees.org/2017/11/d-c-s-poorer-neighborhoods-
fe...](https://caseytrees.org/2017/11/d-c-s-poorer-neighborhoods-fewer-trees/)
[https://www.audubon.org/news/in-los-angeles-rich-
neighborhoo...](https://www.audubon.org/news/in-los-angeles-rich-
neighborhoods-enjoy-more-street-trees-and-lot-more-birds)
|
{
"pile_set_name": "HackerNews"
}
|
Teen Solves Quantum Entanglement Problem for Fun - sunsu
http://www.wired.com/wiredenterprise/2012/06/ari-dyckovsky/
======
hello_asdf
"With his paper, Ari Dyckovsky has helped show that you can have quantum
entanglement with vastly different particles, not just particles that are
similar."
This was pretty much the summation of his research at least from what I
noticed in the article. Can someone explain this for me, please?
|
{
"pile_set_name": "HackerNews"
}
|
The Instagram Effect: Causing Silicon Valley to fund the wrong companies? - iProject
http://gigaom.com/2012/07/19/is-the-instagram-effect-causing-silicon-valley-to-fund-the-wrong-companies/
======
dkrich
“You see people graduating from top tech schools… and they’re starting
companies in internet and mobile,” said Corey Reese, CEO and co-founder at
Ness Computing. “They’re not starting companies to enhance the life
expectancy, by and large.”
This is what's known as the law of supply and demand, and isn't going to
change. Sure I can start a business trying to sell salads because that's what
people should be eating, but I'm not going to stay in business if people
really want cheeseburgers and pizza.
This whole article also kind of points the finger at people in Silicon Valley
and seems to say "Hey, you went to a good school and there are a lot of people
around you with a lot of money. Why aren't you curing cancer?"
Two things: first, curing cancer is really fucking hard. Has the author of
this article ever tried? Or is he assuming that some smart people somewhere
else should be trying? I highly doubt that the guys behind Instagram are
capable of curing cancer. So it's not as if for them it was interchangeable.
It's also not as if the money that went to them would have resulted in a cure
for cancer. To assume that it would have is to assume that there is somebody
out there in SV going door-to-door trying to get funding for a cancer cure
that would work.
Secondly, there is a lot of ongoing research in the "important" industries at
Universities and research laboratories that do not rely on VC money. VC's
couldn't afford to fund this research. It requires grants and donations on a
massive scale. The simplicity behind this article is absurd.
~~~
eshvk
I think there is this (from my limited experience rather naive) belief that if
a bunch of smart computer people get together, they can solve problems that
have flummoxed biologists for long. This is something pervasive which afflicts
even the smartest ones (See Steve Yegge's talk
<http://www.youtube.com/watch?v=vKmQW_Nkfk8> on how The Data Mining needs to
be used more to solve problems in biology).
Having spent some time in grad school foolishly believing that all biology
needed was a sprinkle of math and a generous dollop of python skills, I think
the problem is that in Computer Science we have problems that are sometimes
insanely hard but due to the several layers of abstraction, are inherently
rather deterministic and much less noisy than real-world systems. I think the
best comprehensible example comes from the physical layer of most electronics
where several of the modeling equations are naive approximations designed to
work under specific conditions. The complexity becomes even more overwhelming
when you look at biological systems: The nonlinearity especially in say
genetic hacking becomes apparent when you realize that each tiny modification
has so many unintended effects (making modeling in a computer environment
especially difficult) which is why biologists go through the grind of running
all those "boring" real world experiments in labs. I think there are quite a
few research labs that are using data mining and machine learning techniques
to try to solve problems, but at most quality places we are well beyond needing
"smart people with computer skills". After a while, fundamental research
requires, like you said, money on a massive scale which VC's really can't
afford.
~~~
Dn_Ab
You make good points but swing the pendulum too far the other way.
No, a bunch of smart computer people won't solve problems that have flummoxed
biologists for long. But a bunch of smart computer, biology, math, physics and
chemistry people will. You are right in that ignoring bench work + domain
knowledge and thinking one can come in and write a bunch of programs and solve
cancer is not only an insult but foolish.
Not all the problems have flummoxed biologists for long. The problem of
sequencing improving faster than computers is a new one. And one where
computer people can help. In fact modern sequencing techniques rely on
computers to be viable.
Modelling phenotype from gene sequences and predicting possible drug
interaction will become real soon.
Biology is complex yes but some of it is complexity introduced in an attempt
to avoid difficult but simplifying concepts like non linear PDEs. Funny that
you mention electronics. See what Yuri Lazebnik has to say about this.
[http://protein.bio.msu.ru/biokhimiya/contents/v69/pdf/bcm_14...](http://protein.bio.msu.ru/biokhimiya/contents/v69/pdf/bcm_1403.pdf)
Computer people can help with tools, modelling, analysis and prediction.
~~~
eshvk
I am sorry if I came off as disagreeing with your main points because I
actually don't. Just to say a couple of things: At least from looking at such
cross-functional bio/c.s. teams work here in Grad. school, I have seen two
types:
1\. Where the bio (in this case wildlife) people have extremely limited ideas
about what happens during a simulation or modeling experiment, this leads to
ridiculous attempts to say develop multi-agent models for hyena cooperation
which, if you read the paper, make so many ridiculous assumptions that it
isn't really clear what, if any, utility can be obtained as inferences
from those models.
2\. The other much rarer situation where the bio people have a keen
understanding of the basics of both C.S. and Math and are well aware of the
limitations of what they are getting into. Such collaborations usually turn
into using a C.S./Math (usually grad. student) person in order to delve deep
into the technical specifics which the bio guy could understand with some
help.
Yuri's article is interesting, and while I do agree that some cross pollination
of ideas (such as is happening in systems biology) is useful, I am not sure
I am as enthusiastic as he is about drawing so much hope from the
engineering analogy (because there is an element of silver bulletism to it:).
------
dasil003
It's a gold rush. We can't sit around and wring our hands over this.
Mobile/social are cheap and easy to get into and as long as Facebook is buying
them for a billion dollars, it's going to give better ROI than more
"important" ideas.
Now I'm pretty sure this will all wash out over the coming years as Facebook
works harder to monetize and the true value of this stuff becomes better
understood. The bottom line is that a lot more people will pay a lot more
money to live longer than they will to send instagrams. But in the meantime
you can't blame VCs for irrational exuberance when year-old, twenty-employee
companies are selling for _a fucking billion dollars_.
~~~
wpietri
I can blame them a little.
People who are chasing currently hot things are missing the chance to invest
in the next hot thing. VCs have a sweet deal: they get fat salaries, have nice
offices, and have plenty of support staff. If anybody can afford to play the
long game, it's them.
The only reason things are flipping for $1bn is that they are worth that to a
much bigger company. Instead of going after the quick $1 billion company, I'd
rather see them going for the "slow" $60 billion company (Facebook) or $200
billion company (Google).
~~~
aclimatt
Venture Capital doesn't generally work that way unfortunately. VCs typically
shoot for 10x over 5 years with the fund's total lifespan being approximately
10 years (from initial investment to return). So it's hard for VCs to make
longer-term investments in companies like Google which took 14 years to get
where they are today. (Sure, in the case of Google they could have flipped it
sooner, but even a few years ago it wasn't worth what it is today.)
~~~
lurker14
Google's IPO was in 2004 at age 6-years and returned 100x on investment, and
has had "regular" (on the high-end) big-cap growth since.
------
sreyaNotfilc
Is Instagram really worth $1 Billion? Honestly we don't know. I would say no,
but I'm wrong on a lot of things. I'm also right on a lot of things. Who
really knows?
As far as Facebook is concerned (well Mark is concerned) it is. But it begs
the question of "Why?".
* Surely Facebook has all the users in the world. Getting a few more million
really isn't that substantial.
* And even if getting more users is a good thing, I'm fairly certain that 90%
of those users are already on Facebook. So what are they really gaining?
* The technology behind Instagram isn't remarkable. From what I've read, they
are a picture app with filters. Yes, those effects are pretty cool, but I find
it hard to believe that a group of people able to build the social juggernaut
that is Facebook couldn't set aside 10 guys to create filters and effects
already built into Facebook.
Now perhaps it's something else. Mark probably realized that the system was
already in place through Instagram, and that all that was needed was to
purchase them and slap a big ol' Facebook sticker on the cover. Done... Next
task... I really don't know.
I believe that Mark is allowed to do whatever. After all, he did invent
Facebook. A potential money-making powerhouse. We are essentially the
outsiders looking in. We haven't built a multibillion dollar company, so we
can't really dictate whether this is a bad move or not. It's working so far for
Facebook (yes the stocks are not as great as they wanted, but hey at least
they are out there playing the game).
All I know is, it's not my money, so it doesn't affect me. I would've just
built the thing myself. But then again, I'm not the billionaire 28 year old.
I'm just the 28 year old trying to start my own company.
~~~
veyron
Instagram, if it was allowed to expand and develop more social features, could
have actually left Facebook in the dust. Seeing as how Facebook is perceived
to be worth far more than a billion dollars, 1B is relatively cheap.
~~~
sreyaNotfilc
You could be right. I have been reading a lot about startups as well as
listening to podcasts and whatnot. It seems to me that a lot of them exist
only to get acquired. Perhaps that was Instagram's reason for starting
in the first place.
The reasoning is probably that it's really hard to get acquired by a company.
But it's extremely hard to build a lasting company. So, pick your poison. I'm
not sure how I would react if I had to sell a company that I painstakingly
created. Would I want the "quick" cash or just say "No, I'm not done yet"? I
cannot say. I guess it depends on where you are when/if a suitor does come
along to purchase your company.
------
amirmc
_"13 percent of deals were in the mobile sector with 30 percent of those
companies involved in photo or video technology."_
So I work that out to be >95% of VC deals were with companies that were _not_
mobile/video/photo related.
Unless I'm missing something, I don't see a problem here.
------
paulsutter
“You see people graduating from top tech schools… and they’re starting
companies in internet and mobile,” said Corey Reese, CEO and co-founder at
Ness Computing. “They’re not starting companies to enhance the life
expectancy, by and large.”
Good for them. Internet and mobile are perfect for the first company you
start. Elon Musk's first company was a non-world-changing Internet company
(Zip2), but good for him that he started it. I'm sure he learned a ton about
building a company, hiring people, etc.
------
michaelochurch
The reason VCs are funding lightweight companies _isn't_ that VCs want to fund
lightweight companies. It's something else, and it's easy to see the reason if
you understand the sociology of VC-istan.
VC-istan has become a good-ol'-boy network like any other. You need
connections. You need the VCs to see you as a real person so that, instead of
getting an indefinite put-off or a horrible term sheet, you either get (a)
funding, quickly, on decent terms, or (b) a sit-down explanation of why your
idea was rejected and either (i) introductions to investors who might be
interested, or (ii) an EIR gig to pay the bills until you happen upon an idea
that has wings. If you're not on those kinds of terms with VCs, and I doubt
more a very small percentage of people are, then you shouldn't be wasting your
time in VC-istan.
If you're a real founder, not getting funding means that a VC will pop you in
to a high-level position at a portfolio company that one would never get on
the open job market, and that will make up for the 6 months you spent without
income. Actually, you're going to have more than one of these offers.
If you're not a real founder, not getting funding means that, congratulations,
you spent months hustling with no income and no results.
The reason why lightweight companies are getting all the funding is that
building up and maintaining those VC contacts is expensive and time-consuming.
It's easily as large a time commitment as a full-time job, if not worse, and
it takes months before you see any payoff.
VCs aren't preternaturally biased in favor of lightweight companies, but the
people who are doing heavyweight stuff don't have the time to become "real
founders" (i.e. "made men" in the VCs' eyes, who are entitled to sit-down
explanations and EIR gigs upon rejection). So they have to pursue other routes
to funding.
~~~
pg
This is simply false. VCs have a network among themselves, but they don't
expect founders to be part of it.
~~~
michaelochurch
I think it's pretty accurate to say that they'll readily fund a band of
outsiders if they think the idea's good, but still, they also put out terms
that make it easy to take over the company if they don't see the founders as
"real founders", "made men", "like us", or however you want to describe that
status.
Also, here's the difference for real founders vs. the rest of us. Real
founders get alternatives put in front of them when their ideas are rejected--
EIR gigs to pay the bills, spots in portfolio companies. The rest of us get
zilch; spent months with no income and nothing to show for it. Real Founder
social status doesn't mean that bad ideas get funded, but it does mean you
don't have to deal with the extreme income risk associated with startups for
non-Real Founders-- a risk that stops making sense to take after age 25.
~~~
pg
No, the reason experienced founders get better terms from VCs is that they
have more experience dealing with VCs.
------
indiecore
In addition to social apps being simpler and easier to start you also don't
have to deal with the reams and reams of paperwork and certifications that
getting into healthcare entails.
|
{
"pile_set_name": "HackerNews"
}
|
Research estimates vaping is at least 95% less harmful than smoking - bonif
https://www.gov.uk/government/news/phe-health-harms-campaign-encourages-smokers-to-quit
======
jostmey
How did they determine vaping is 95% less harmful than smoking? I don't see
references to the methods they used to make this assessment. Where did that
number come from? As far as I can tell, the number was just made up!
~~~
meikos
You do know the number 95% comes very frequently from statistical analysis
right?
~~~
JeremyBanks
You know putting "right" at the end turns your comment from helpful to
dickish, right?
------
kuhhk
I don’t see how many different types of “juice” they tested, maybe I missed
it? I always hear (on HN) that it is certain types of flavored juice that is
the most harmful, so I’m interested in _which one(s)_ they tested
------
tigershark
I call bullshit, they can’t just put out numbers like this. I remember that
one of the last times that I stopped smoking happened because I was literally
overdosing on nicotine from vaping almost continuously while drinking. I
stopped for almost one year at the time because I was completely nauseated by
vaping. I think that night was much worse than anything I smoked before,
probably comparable to a red Marlboro smoked cutting the filter out (thanks
Stephen king and the black tower.. -.- )
~~~
jostmey
Hey, big tobacco is dead but welcome to era of big-vape!
------
sosilkj
Heavily editorialized HN title. The "research" mentioned in the article is not
discussed in any detail, nor is any reference provided.
------
imchillyb
No sources or study data included with the article.
This is not news, this is propaganda.
------
Udik
I don't get why it is so difficult to check this. At this point there is
probably a good number of people who have been exclusively vaping for the past
10 or 20 years, after decades of smoking. The statistics for lung cancer and
other smoke-related illnesses 10 or 20 years after quitting smoking completely
should be already well known. It should be enough to compare the two to
understand the difference in harm between smoking and vaping?
------
perfmode
Phillip Morris owns Juul now, so it doesn’t surprise me that there might be
efforts to throw smoking under the bus globally.
~~~
LancerSykera
Altria, not Phillip Morris, bought a 35% stake, not full ownership.
~~~
gomox
Altria is Phillip Morris Inc's new name.
------
onetimemanytime
So it could be 99.99% but they are holding back...better to be pleasantly
surprised. :)
Isn't it ironic that Philip Morris owns a nice chunk of Juul that aims to get
rid of smoking. My guess is that they'll milk this for decades and then pay a
small % of their profit to settle claims. Then they'll jump on another
bandwagon, maybe buy El Chapo's Select. Rinse, repeat.
------
user5994461
Misleading title. That's just a filling sentence in the middle of the article.
------
jaybna
Now contains 100% less radium...
|
{
"pile_set_name": "HackerNews"
}
|
I Used to Be a Human Being - spking
http://nymag.com/selectall/2016/09/andrew-sullivan-technology-almost-killed-me.html
======
woodandsteel
I think this is a profound essay.
Let me just emphasize one point. I think that to find out what you really
want, what would be truly rewarding, often you have to sit with your feelings
for a good while, and without distraction.
------
gipp
Not a new sentiment, to be sure, but probably the best articulation of it I
have seen.
|
{
"pile_set_name": "HackerNews"
}
|
Litecoin nears $10 - up from $4 a day ago - samwilliams
http://www.ltc-charts.com/period-charts.php?period=5-days&resolution=hour&pair=ltc-usd&market=btc-e
======
oleganza
It's going to be interesting. There's inherently no technical difference
between LTC and BTC (block intervals are not drastically different in
practice, and nominal count of coins is meaningless). Network effect-wise,
Bitcoin has much more hashing power (and thus resistance to 51% attack) than
Litecoin, but it is also more valuable one, so it also does not matter much.
I don't buy there's any "diversification" in holding both LTC and BTC. The
technical and legal risks are mostly the same. To me, in the long run, it's
more profitable to put 100% of your money in the most liquid of them, which is
currently Bitcoin. Simply because people want one money: one most marketable,
most widely accepted asset.
LTC is still valuable because we are in a very early speculation phase and
most folks got used to presence of both gold and silver, and there's very
strong incentive for recent Bitcoin miners with video cards to pump up LTC (as
that's the only thing to mine on their equipment right now) and sell out for
Bitcoin while they can. Either Bitcoin or Litecoin must pop, there's no
economic reason for them to stay in some balanced relation. I bet even gold
will largely lose wealth as it's moved to Bitcoin, but then gold has very
different risks and features, so it may find its balance with Bitcoin.
~~~
neomindryan
> there's no economic reason for them to stay in some balanced relation.
Actually, there might be. The fee needed to get a bitcoin transaction into the
blockchain is determined by a market, and it is a flat number, not a
percentage of the transaction. If bitcoin succeeds, it may not be practical to
use it for smaller transactions.
~~~
oleganza
It's neither a flat number, nor a percentage. Fee is not hard-coded in the
protocol, everyone can decide on the fee himself. Users decide how much they
want to pay and miners decide how to prioritise transactions. The fee is
freely floating in both LTC and BTC, no difference here. The only practical
difference is liquidity and it is produced only by network effect which tends
to create "one of a kind" winner in a competition between very similar
networks.
~~~
neomindryan
If the maximum block size limit ever constrains the volume of transactions,
what will happen to the fee people decide to pay?
~~~
oleganza
Block size limit will be raised as soon as there's a pressure of transactions.
Miners are not interested in devaluing their savings because Bitcoin is too
expensive to use for newcomers.
[http://blog.oleganza.com/post/43849158813/this-is-how-
block-...](http://blog.oleganza.com/post/43849158813/this-is-how-block-size-
limit-will-be-raised)
------
fenollp
LTC is a Ponzi scheme, unlike BTC.
[https://en.bitcoin.it/wiki/Litecoin#Pump_and_Dump_Scheme](https://en.bitcoin.it/wiki/Litecoin#Pump_and_Dump_Scheme)
~~~
genericacct
No, they all are, and frankly, I am putting the following line in my hosts
file until you all grow up a little and go discuss your toy currencies
somewhere else
127.0.0.1 news.ycombinator.com
~~~
dingaling
> 127.0.0.1 news.ycombinator.com
That's quite inefficient. Your browser (and any other apps) will attempt to
connect to your legacy IPv4 localhost, time-out and fail, assuming you don't
have a web server listening there.
Better to black-hole with an unroutable address. Try [::].
~~~
kybernetyk
Will it really be a timeout? I thought the OS will send back a RST packet on
ports it isn't listening on.
------
jnbiche
Part of this rise is likely due to the proposal of Mike Hearn, a core
developer and Bitcoin Foundation officer, to start "redlisting" (marketing-
friendly word for blacklisting) wayward Bitcoin addresses.
How someone who has been around the Bitcoin world for so long doesn't
understand the importance of the currency's fungibility is beyond me. If you
make one person's Bitcoins worth less than another person's Bitcoins, then we
all lose.
~~~
primitivesuave
Hearn only proposed there be a discussion on the matter, he definitely didn't
seem like he was pushing either way.
~~~
jnbiche
Yes, he proposed the idea for the second time in a year "just for discussion".
And the first time he proposed it, he definitely and very explicitly argued in
favor of it. If he's more subtle this time, it's because he's learned this is
an issue that is deeply important to most long-time Bitcoiners, since it's an
assault on one of the most fundamental of Bitcoin's properties.
------
jamoes
In my opinion, a better way to price LTC is in BTC. So, I prefer the LTC/BTC
chart: [http://www.cryptocoincharts.info/period-
charts.php?period=1-...](http://www.cryptocoincharts.info/period-
charts.php?period=1-year&resolution=day&pair=ltc-btc&market=btc-e)
Looking at it this way: LTC is now up to 0.014 BTC, up from a recent low of
0.01 BTC, and down from a high of 0.0449 BTC back in April.
------
mrspeaker
Is it still worth unleashing a miner on my lappy to go spelunking for
Litecoins, or has that ship sailed too?
~~~
ck2
On a laptop forget it, difficulty is way too high.
------
QuasiAlon
Interesting to see spillover effects from the bitcoin drama over the past few
days
------
yitchelle
Noob here. Are the various crypto currencies floating around exchangeable via
a trading exchange of sorts?
~~~
ngpio
BTC-e is the most popular. I believe Vircurex[1] handles the largest number of
cryptocurrencies.
[1] [https://vircurex.com/](https://vircurex.com/)
------
brador
Litecoin could be the spark that causes the Bitcoin crash. Hear me out.
Method: litecoin gets major news push as the "new Bitcoin", people sell out of
bitcoins into litecoin causing Bitcoin prices to crash overnight.
Needed: a very fast way to move from bitcoin holdings to litecoin.
If someone makes that it's game over for bitcoin.
~~~
jnbiche
> Needed: a very fast way to move from bitcoin holdings to litecoin.
There are _many_ very easy ways to quickly move from Bitcoin to Litecoin. On
btc-e.com, for example, you can sign up and be trading Bitcoin>Litecoin within
an hour or less (the time needed for Bitcoins deposits to clear).
~~~
brador
Right, so now all we need is the mainstream news push.
------
jackgolding
Is there a cryptocurrency index anyone has made?
~~~
QuasiAlon
en.wikipedia.org/wiki/List_of_cryptocurrencies
------
hayksaakian
I love it.
Diversification of investments is awesome.
~~~
shubb
But Bitcoin is a gamble not an investment - its value may be 1000% or 1% of
its current value next year, and neither of us can really predict that. Sure I
own some (I made like 100% so far), but it's a lottery ticket.
I'd say that investing in a high growth startup is a gamble, but investing in
100 is an investment.
Which links to your diversity of investment thing.
If you invested in 100 startups in the same niche, for instance 500 bitcoin
exchanges, that would not be diversified. You would be betting on the same
business model.
If crypto currencies are discredited - the buyers today get burned by the big
crash and never come back, or laws change to make it too inconvenient to
bother investing in (all the exchanges are pushed into the black market), then
the litecoin price will dive like bitcoin and stay there.
~~~
mikro2nd
> its value may be 1000% or 1% of its current value next year, and neither of
> us can really predict that
True enough, but then we could say much the same thing for the fiat currency
of your choice. (Except maybe the CHF? ;)
~~~
sentenza
No. All significant 'real world' currencies are tied to the economy of a
nation-state or nation-state-like entity.
Bitcoin et al are free-floating in that regard. To see how this makes a
difference, consider what would happen if one of the early adopters were to
sell 100K bitcoins right now.
~~~
mikro2nd
(I'm not arguing that Bitcoin is somehow less volatile than fiat currencies,
here, but...)
The values of "Real world" currencies is quite (not totally) decoupled from
the intrinsic value of the issuing state's economy. See Zimbabwe for a recent
object lesson - still plenty of economic activity and trade going on in the
country, but hyperinflation happened anyway. "Real world" currencies are much
more tied to emotional attachments (or lack thereof) to the currency and faith
in its issuer.
If you've ever tried your hand at forex speculation, you'll be aware (perhaps
painfully) of just how much the relative values between fiat currencies are
driven by emotion (and the stop-loss orders of other speculators).
~~~
shubb
Shares and currencies...
In theory, analysts and professional traders incorporate everything they know
into bids for shares, so that the current share price 'prices in' everything
publicly known about a share at the current time.
I quite often talk to people who have a share portfolio, and buy mining shares
because they think international demand for mining is going to rise. But the
share price already reflects that because pro investors and analysts thought of
it first.
So really, when you buy a share, unless you have some extra information, you
are making a bet on variations due to things no one knows at the time - it's
simply a gamble.
Don't get me wrong, it's not a terrible idea to buy shares - if the economy
grows, the FTSE grows with it. If you buy luxury goods manufacturers and
insolvency companies, you can hedge. But if you, as a private individual,
think you can make money day trading (in your lunch break) against the highest
paid experts in the world, well you are kinda nuts.
A friend put together a database of tick data for every currency pair in the
world, for a spread of different brokers. He examined the data, and found that
at last in the 3 hour / day interval, it appeared statistically random. You
can bet on a random variable if you like, sometimes you will make money. I
prefer to put what little money I have in passive funds, and hope it beats
inflation^.
^That said, I'm cashed out and 400% ahead following this bitcoin bubble. lol
------
shomyo
Who cares.
|
{
"pile_set_name": "HackerNews"
}
|
Bill Simmons: A Sports Column Written Far From Print, and the Game - prakash
http://www.nytimes.com/2009/11/16/business/media/16simmons.html?_r=1&ref=basketball
======
zach
Simmons is just expounding, at the highest level, the fascinations, lifestyle
and thoughts of a guy who likes sports. So much so that he actually doesn't
cover sports as much as he covers being a sports fan.
~~~
timmaah
2 of my favorites from the past year..
The cheating Red Sox
[http://sports.espn.go.com/espn/page2/story?page=simmons/0905...](http://sports.espn.go.com/espn/page2/story?page=simmons/090507&sportCat=mlb)
and a ~3,000 word obituary of his dog
[http://sports.espn.go.com/espn/page2/story?page=simmons/0901...](http://sports.espn.go.com/espn/page2/story?page=simmons/090122)
~~~
nbroyal
I missed the one about his dog. Thanks for sharing. Awesome read.
------
scotch_drinker
It continues to fascinate me how someone who is willing to work hard and
follow his passion can use the Internet to do something previously thought
impossible.
I love Simmons and his writing. I probably never would have heard of him in
the days before he could easily publish on the Web. There are a multitude of
voices just like his out there that I look forward to discovering.
~~~
sgoraya
His Vegas columns are some of my favorites; my friends and I can relate to
them on so many levels. I look forward to his column every Friday (though he
has not been updating as often lately, probably due to his book signing
schedule). It's one of the few columns/articles that can literally make me
LoL...
Purchased his book but have not had a chance to dive into it yet :)
~~~
dhyasama
Here's his newest: [http://www.amazon.com/Book-Basketball-NBA-According-
Sports/d...](http://www.amazon.com/Book-Basketball-NBA-According-
Sports/dp/034551176X/ref=sr_1_1?ie=UTF8&s=books&qid=1258486548&sr=8-1)
I'm a ways in (it's big) and it's enjoyable so far.
------
drc1912
I'm going to his book signing tonight in Seattle. Any other HN people from
Seattle going?
|
{
"pile_set_name": "HackerNews"
}
|
Feed me data - robg
http://www.scribd.com/doc/17460297/Feed-me-data
======
kierank
Minor word of warning. Scrolling through that document caused major cpu usage
and my browser to lock for about 20 seconds. (Opera on Windows XP)
|
{
"pile_set_name": "HackerNews"
}
|
VC Trading Cards - gk1
https://vctradingcards.com/
======
johnhenry
At $12 per card, are they at least signed?
|
{
"pile_set_name": "HackerNews"
}
|
Hash function performance - glower
http://scripting.com/stories/2010/09/09/hashFunctionPerformance.html
======
gojomo
That's an awful hash function. The largest bucket has more than 4X as many
items as the smallest. He should pick another function that uses all the
characters in the object name. Whatever language or libraries he's already
using probably has a better function for strings handy.
~~~
acqq
I don't know in which language he implements this, but if it's in any
efficient language he should certainly use all the characters. Some
interesting benchmarks (and also some very interesting links in the comments!)
can be found at:
<http://www.strchr.com/hash_functions>
------
Xk
Looks like a case of premature optimization to me. Optimizing for hash
function speed when you end up with thousands of comparisons is just pathetic.
Who cares if your hash function takes a whole millisecond instead of a
fraction of a millisecond if it ends up that you get an equal distribution in
the different buckets?
(On average they're going to need to compare with 6324 other objects before
they've gotten the right one. A perfect hash function would end up 5686
checks. Willing to bet that the 639 fewer checks would make up for a better
hash function.)
But that's just the beginning of the problem! I mean, even with a hash
function that's that bad, ~100 buckets would get them nearly 10X better
performance! And realistically, why not use several thousand buckets? Come on
...
~~~
wiredfool
This is used for everything in his environment, and has been for 20 years or
so. The Odb. Local hash variables, stack frames. Both speed and size were
important when this was created. Why it's coming up now, I have no idea.
------
jallmann
The ignorance in this article is almost embarrassing. Why does Dave Winer keep
getting posts on HN?
~~~
codexon
I thought he was being sarcastic when he said it looked random. Isn't that why
he is asking for input?
~~~
Natsu
If he wants my input he should:
* Learn more about the theory behind hashes before designing one.
* Use a hash that distributes _evenly_ across buckets.
* Learn what random looks like.
If it was random, all the numbers would be chosen with close to the same
probability. So there would be a roughly equal number of items in each bucket. What we
have here shows that bucket 1 is rarely chosen and bucket 7 is really common.
The spread is so ridiculous (17.6k vs. 3.8k) that it cannot be due to chance.
In short, this is an absurd hash function. Do not use it in cryptography or
elsewhere. If you need simple, low-cost, non-cryptographic hashes, look here:
[http://www.cs.hmc.edu/~geoff/classes/hmc.cs070.200101/homewo...](http://www.cs.hmc.edu/~geoff/classes/hmc.cs070.200101/homework10/hashfuncs.html)
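For a sense of scale (a rough sketch of my own, not numbers from the article):
with 11 buckets and roughly 62,500 items (taking the ~5,686-per-bucket figure
quoted elsewhere in this thread), an unbiased hash would put about 5,680 items
in every bucket, give or take a couple hundred (the binomial standard deviation
sqrt(n*p*(1-p)) is only about 72 here). Something like this C toy shows what
"random" actually looks like:

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy illustration only: fill 11 buckets uniformly at random with
     * ~62,500 items (a count inferred from numbers in this thread) and
     * print the counts.  They all land near 5,680, nowhere near the
     * 3.8k-17.6k spread the real hash produces. */
    int main(void) {
        enum { BUCKETS = 11, ITEMS = 62500 };
        int counts[BUCKETS] = { 0 };
        srand(42);                          /* fixed seed, repeatable demo */
        for (int i = 0; i < ITEMS; i++)
            counts[rand() % BUCKETS]++;     /* stand-in for an unbiased hash */
        for (int b = 0; b < BUCKETS; b++)
            printf("bucket %2d: %d\n", b, counts[b]);
        return 0;
    }

Buckets at 3.8k and 17.6k are dozens of standard deviations away from that; no
amount of bad luck gets you there with a decent hash.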
------
wiredfool
Many of the design decisions in his software make a lot of sense for the
design of an environment that could fit on a floppy.
~~~
jemfinch
None of the ones in this post do, though.
Using an appropriate number of buckets would actually reduce memory usage.
Using an appropriate hash function (one that at least considered all the
characters in the key) wouldn't increase the time to hash a string read off a
floppy disk, because he's using the first and last characters anyway.
~~~
wiredfool
What's the appropriate number of buckets, taking into account everywhere it's
used?
~~~
jemfinch
Depends on your way of hashing. If you're using linear chaining (basically, a
linked list of entries hanging off each bucket) you can get away with N buckets
for N items if you have a good hash function. If you're using open addressing,
you'll usually want your load factor (the ratio of filled slots to total slots)
not to be much higher than 2/3 or so. It may appear on the surface that open
addressing uses more memory, but don't forget the overhead of linear chaining
(an extra pointer for every element in the hash table, many more cache misses
relative to a low-load open addressed table).
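To make the sizing concrete (a rough sketch, nothing from the original post):
a helper like this picks a power-of-two slot count that keeps the load factor
under a chosen ceiling, which is roughly what growable hash tables do on
insert:

    #include <stddef.h>

    /* Hypothetical sizing helper: grow the slot count by doubling until
     * filled/total stays under max_load (e.g. 2/3 for open addressing). */
    static size_t table_size_for(size_t n_items, double max_load) {
        size_t size = 16;                   /* arbitrary starting size */
        while ((double)n_items / (double)size > max_load)
            size *= 2;
        return size;
    }

For the ~62,500 items being discussed, table_size_for(62500, 2.0/3.0) comes out
to 131,072 slots (a load factor of ~0.48), versus the fixed 11 buckets here,
which works out to several thousand entries chained behind each bucket.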
------
16s
I read that headline and straight away thought of md4 versus md5 performance.
I need to take a break :)
------
watersco
Why only 11 buckets? That seems like the bigger issue. Converting an O(n)
lookup to O(n/10) seems silly. Why not have 1000 buckets and get two more
orders of magnitude improvement in lookup performance.
Of course that will expose how bad the hash function really is. Saving a few
cycles in the hash function and then chaining through 17k linked list entries
doesn't make any sense.
~~~
wiredfool
Because it was a design decision from 20 years ago for small n hash tables.
It's baked into the design of his object db files.
------
houseabsolute
I upvoted because I really hope to hear from someone who knows this stuff. Off
the top of my head (as a non-expert), this sounds like a horrible hash
function that is vulnerable to a plethora of attacks, but I don't know for
sure. Why not just use the built-in hash function, or some function someone
smarter than you has written?
~~~
drblast
I'm not a hash function expert, but I did sleep at a Holiday Inn Express last
night.
This is a horrible hash function for a number of reasons. You want your hash
function to use every bit of information from the data you're keying on
because that's more likely to give you a spread that doesn't contain a
collision. Consider this list of keys:
aaz abz axz
Those three keys would collide, and for no reason. Even adding up each letter
(another horrible hash function because it eliminates information about the
position of the character, so "the" and "eht" would collide) would be better
than this.
Information theory is important here; you want to preserve as much information
from your key as possible. Check out
<http://en.wikipedia.org/wiki/Entropy_(information_theory)> for a good
discussion on this.
Also, there is almost no earthly reason to use a hash function with 11
buckets, and you certainly wouldn't want to evaluate your hash based on that.
Assuming you'd have to search each bucket of 10,000 for your match, hashing it
into 11 sections buys you very little time.
Also, there's no reason not to do something more complicated; assuming your
key is in CPU cache because you're going to add the first and last letters,
why not at least add up all the letters? You're wasting free CPU cycles after
you've already loaded the key from memory.
Finally, you don't want output that "looks pretty random." You want output
that sorts exactly evenly between buckets. He's nowhere close.
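Something along the lines of djb2 would already address every complaint above
(a sketch of the general idea, not what the article's environment actually
uses; str_hash and n_buckets are just illustrative names):

    #include <stddef.h>

    /* Hypothetical replacement: every character contributes, and position
     * matters, so "the"/"eht" and aaz/abz/axz all hash differently. */
    static unsigned long str_hash(const char *key) {
        unsigned long h = 5381;
        for (; *key != '\0'; key++)
            h = h * 33 + (unsigned char)*key;   /* mix in each character */
        return h;
    }

    /* Bucket index: str_hash(key) % n_buckets, ideally with far more than
     * 11 buckets. */

That's one multiply and one add per character that's already sitting in cache,
so the two-character version isn't buying any meaningful speed.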
|
{
"pile_set_name": "HackerNews"
}
|
Voice, an invite-only social network where everyone is verified - aspenmayer
https://voice.com
======
emerged
There is a push against anonymity on the internet, but I'm not convinced that
is the best way. Not with the growing threat of cancel culture. I'd rather go
the other direction and provide additional assurances of privacy.
~~~
ahoy
cancel culture, as it's commonly understood ("woke" scolds running well-meaning
people out of public life) largely doesn't exist. There are a few high profile
cases of it happening to actual sexual predators, otherwise it's as ephemeral
as Fox News hosts hyping fear over the "knockout game".
~~~
danhak
I believed this until this year. Then I saw it happen to a friend who is
prominent in his niche industry but otherwise unknown.
He was disgraced via Twitter and forced to step down from the firm he created
for making some offensive jokes.
~~~
davidgerard
That's very nonspecific.
~~~
danhak
That’s deliberate. I don’t want to identify my friend on this forum. I am
disputing the parent’s assertion that “cancel culture” is limited to a few
high profile cases. It is far more prevalent than that. The high profile cases
are obviously just the ones that most people are aware of.
~~~
ChrisClark
Then without any more details we'll probably just assume "some offensive
jokes" are actually horrible, it's the most common defense by people like
that.
"But it was just a joke!" "No Grandpa, you're just a racist."
You're right though, it is a lot more prevalent than just high profile cases.
But I also assume your friend deserved it.
~~~
eska
> we'll probably just assume "some offensive jokes" are actually horrible
Don't speak for others.
My personal experience is that women I never had romantic relationships with
pretended that I raped them to get influence or money (once as a teen, once as
an adult). Talking about this openly 3 friends have told me that similar
things have happened to them. But I also won't give you specifics so you'll
probably assume that we deserved it.
~~~
newacct583
> Don't speak for others.
This subthread is _literally_ a response to someone speaking on behalf of a
friend.
And that logic is a little suspect: cancel culture is such a pervasive threat
that (1) no one can talk about it happening despite (2) _everyone_ talking
about it happening everywhere, pervasively, without evidence.
That doesn't seem off to you?
------
FryHigh
Main differences from the usual social network.
\- KYC based account registration. Legally verified names.
\- Blockchain powered (de-centralisation). Censorship resistant. Many
interfaces to this website.
\- Earn money. Users get paid for content directly. More likes, views
translate to payment.
Early stages of voice.com but the ideas have potential in a hard to crack
market.
~~~
faitswulff
> Earn money. Users get paid for content directly. More likes, views translate
> to payment.
Is this funded on an advertising model? Otherwise where does the money to pay
users come from?
~~~
davidgerard
magical crypto "money" \- Voice is basically EOS Twitter
------
philipkiely
I use my real name, a recent photo (where possible), links to my own domain,
and other personally identifiable information on every site that I use, even
default anonymous ones like Reddit. I do not have alt accounts. Being able to
live this way on the internet is a privilege that I recognize is not available
to many people, but for those who can I highly recommend it; it ensures that I
always think twice before posting and engage in good faith.
~~~
amoshi
What if you want to express a thought or opinion that is not politically
correct though, or might simply be controversial? If by "think twice" you mean
not expressing it at all, then it sounds like subconscious self-censorship (if
such a thing exists).
Just a reminder, politically incorrect != incorrect
~~~
aspenmayer
This reads like one wanting to keep their accumulated social capital and
continue making social capital outlays publicly, and likewise reap the
dividends of their own social capital investments, as long as the winds of
popular acclaim blow one’s way, while expecting to keep one’s shirt when the
winds change. It seems like one not having any skin in the game, or wanting to
have it both ways. Talk is cheap. Money talks, bullshit walks. Put your money
where your mouth is, or you’re freeloading, lacking the courage of your
convictions, or lacking the moral fortitude to proclaim your convictions
publicly.
~~~
aspenmayer
As a counterpoint to myself, I also agree with the link below. There are good
reasons for and against using real names. I do not mean to paint with too wide
a brush; all free people are and should always be able to decide for
themselves whether to use their name or a pseudonym, and readers are usually
able to tell why authors do so from context. It gets a bit muddy where social
media comes into play, which is why this debate will never be over, and why
there is no one right answer. I hope I didn’t come off as someone imposing my
choice on others. I simply think there are more downsides than upsides for
readers when it comes to pseudonymous authors; however, some texts will never
be written under an author’s real name for perfectly legitimate or no reason,
and the world would be diminished if those voices were not also heard, and
those stories not told.
[https://geekfeminism.wikia.org/wiki/Who_is_harmed_by_a_%22Re...](https://geekfeminism.wikia.org/wiki/Who_is_harmed_by_a_%22Real_Names%22_policy%3F)
------
8organicbits
Every site that does "verification" seems to also prohibit anonymity (like
this one seems to do). But I don't think that needs to be.
Imagine a site that does ID verification. Add OAuth and now other sites can
integrate (similar to login with Google). However, instead of passing name,
user ID, and email the verification site can pass an anonymous ID.
With that approach you get 1-to-1 guarantees for human-to-account, preventing
sock puppet accounts, users who create new accounts when banned, etc. But, the
other sites never know who the user actually is.
Its anonymzing proxy meets OAuth single sign on.
Anyone know of existing sites that do anything like that?
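One way that hand-off could look, sketched in Python (names, secret and sites are hypothetical; OpenID Connect's "pairwise" subject identifiers are an existing, standardized form of the same idea):

    import hashlib
    import hmac

    # Held only by the ID-verification service; relying sites never see it.
    VERIFIER_SECRET = b"server-side secret"

    def pseudonymous_subject(verified_user_id: str, relying_party: str) -> str:
        # Same human + same site -> same stable ID, so one-account-per-person and bans
        # still hold, but IDs can't be linked across sites or reversed to a real name.
        msg = f"{verified_user_id}|{relying_party}".encode()
        return hmac.new(VERIFIER_SECRET, msg, hashlib.sha256).hexdigest()

    print(pseudonymous_subject("user-42", "forum.example"))  # what forum.example receives
    print(pseudonymous_subject("user-42", "shop.example"))   # a different, unlinkable ID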
~~~
rglullis
[https://www.human-id.org/](https://www.human-id.org/)
------
Nextgrid
What's the business model? One of the problems with current social media is
that their advertising-dependent business model relies on "engagement" and
thus the platforms are built to encourage outrage, etc.
------
floatingatoll
The third post on the page is coinspam. Nope, I misread, this is a blockchain
social network.
@aspenmayer, what is interesting about this social network to you?
~~~
aspenmayer
I found it while on Twitter, and thought the space was really expanding
rapidly. I thought that their KYC integration combined with a kind of
micropayment tipping/boosting mechanism was interesting as a way of both
surfacing and rewarding quality content in a transparent, verifiable way.
I also posted some recent things about Paras[1], which uses NEAR Protocol[2],
both of which I find interesting for the same reasons. Paras allows what seems
to me like a self-hosted federated platform with built-in
tipping/mining/staking/royalties.
Do you find this interesting too? What are you looking at in this space? I’m a
learner and hold no cryptocurrency of any kind, nor do I develop for any
crypto. I’m just curious about it, and want to share what I’m seeing if it
seems like it will resonate with the audience, in this case HN.
[1]
[https://news.ycombinator.com/item?id=23737526](https://news.ycombinator.com/item?id=23737526)
[2]
[https://news.ycombinator.com/item?id=23737560](https://news.ycombinator.com/item?id=23737560)
~~~
floatingatoll
What is it about .. “surfacing and rewarding quality content” (I only write
like this at work?) .. that makes you excited to get out of bed in the morning
and research it and post about it?
Is there some personal experience that could help us understand your interest?
Have you worked as a librarian? Do you believe in blockchain as a way to
overcome discrimination? Have you worked as a newspaper editor and feel that
decentralized opinions are a solution to problems you encountered? If so, what
were those problems, and how did they make you feel?
I can’t tell from your reply if you have any emotions on the topic, or why.
You find the topic interesting because of a list of buzzwords, but your reply
- while a very well-written set of words - does not contain any emotion with
which others can resonate. As a result my initial takeaway is that you’re
somehow shilling for coin - profiting somehow from placing this content at HN,
If you aren’t, I apologize! But this is HN, and the coin spam is real, and so
if you can bring some life to why _you_ personally are jazzed about this - not
just referencing the technologies by keyword but actually explaining why they
matter to you using feelings words - then that would go a long way towards
establishing some authenticity here.
~~~
aspenmayer
Of course. I nearly mentioned in my previous disclosure or disclaimer that I
have no interest at all in these areas, financially or in any other kind of
quid pro quo. I don’t receive any benefit from posting, for any of my posts. I
just do it because I genuinely don’t learn or reason well in a vacuum, and I
don’t know enough about most of my fields of interest to be an expert; I’m a
generalist user in most advanced discussions and cutting edge areas of
computing at best, and my interests are usually cursory and self-edifying, if
not deep-dives.
I hope my post was not interpreted as shilling. That’s why I also mentioned
other topics I’ve posted about in this space, to show my own hand as well as
show that I’m not playing favorites. I’m just here to learn from those whose
experience and knowledge surpass my own, and whose skills I care to learn
myself.
In all honesty, I was having trouble sleeping and was just clicking around on
Twitter, found some interesting projects, so I thought, and shared them. There
really wasn’t a whole lot of reasoning on my part, beyond my perhaps-mistaken
belief that the content was of interest to readers of HN.
I’m open to further discussion on this or any other matter. My only motive is
to receive and share info, and if I can help, I want to do so.
~~~
floatingatoll
HN readers are hypersensitive to coin stuff — far moreso than most topics.
Thank you for taking the time to reply, it does lessen the concerns somewhat.
I think that I'm still very wary of this due to their dependence on blockchain
and lack of clear profit model. I wish they had a tech landing page that
explained why they're interesting (and not just 'woo blockchain') succinctly.
------
web-cowboy
I want this exact same thing (for many of the same reasons), but for gaming.
\- Toxic people can finally be identified and penalized
\- Better matchmaking (less "smurfing", less streamer accounts going from zero to hero)
I haven't looked too closely at Voice's setup, but does "verified" always have
to mean non-private, and non-anonymous?
~~~
marcinzm
And female gamers can be stalked to their home addresses by obsessed people.
Anonymity has some advantages for marginalized social groups as it lets them
avoid predators better.
edit: Actually I'd assume a lot of harassment of people via other channels
would happen including SWATing, emailing employers to get them fired, false
reports to get them banned on other sites, contacting spouses with fake
evidence of infidelity, etc. Only the victim loses anonymity, the attacker
keeps it. Competitive gaming brings out the worst in some people.
------
aspenmayer
Voice in the news
[https://decrypt.co/34598/crypto-social-media-platform-
voice-...](https://decrypt.co/34598/crypto-social-media-platform-voice-
finally-launches-on-eos)
Welcome post
[https://app.voice.com/post/@salah/join-
us-1593835144-1](https://app.voice.com/post/@salah/join-us-1593835144-1)
Help and FAQs
[https://help.voice.com/hc/en-us](https://help.voice.com/hc/en-us)
The tech which makes Voice work
[https://eos.io](https://eos.io)
------
davidgerard
With the real names thing, they might have the startling success of Google
Plus!
A real names policy is one of those perennial ideas people keep thinking will
fix everything - but there's zero evidence for this, and some evidence against
it.
The total examples I have:
* In 2007, South Korea required commenters on sites with over 100,000 users to supply their Resident Registration Number (national identity number), and this reduced malicious comments by ... 0.9%. They scrapped it in 2011. [http://english.chosun.com/site/data/html_dir/2011/12/30/2011...](http://english.chosun.com/site/data/html_dir/2011/12/30/2011123001526.html)
* UK, 2007: a study of students showed they were worse, not better. "There was four times as much flaming when they knew each other than when they didn't." [https://www.theguardian.com/technology/2007/jul/12/guardianw...](https://www.theguardian.com/technology/2007/jul/12/guardianweeklytechnologysection.privacy)
* In a now-unavailable post to Google+ by Yonatan Zunger, one of the people whose problem Google+ was, he said: [https://plus.google.com/+YonatanZunger/posts/WegYVNkZQqq](https://plus.google.com/+YonatanZunger/posts/WegYVNkZQqq)
> "While there was an expectation that people would behave better when their
> activity was tied to their own identity, as that identity is presumably a
> highly valuable and non-renewable resource to them, the evidence weighed
> against it: people seem quite willing to be jerks under their own
> identities."
Voice has put forth this policy that's completely lacking in evidence,
presumably because they just felt like it'd work out fine.
And that's before we get to Voice's core audience being the crypto crowd - who
have some reluctance to provide full KYC dox to join EOS Twitter.
If anyone ever tries to tell you that a Real Names policy is a good idea - ask
them for their numbers on this.
Also, did you know Voice paid $30m for the name voice.com? A historical record
amount - even sex.com only went for $13m.
[https://domainnamewire.com/2019/06/20/yes-voice-com-is-
the-m...](https://domainnamewire.com/2019/06/20/yes-voice-com-is-the-most-
expensive-publicly-announced-domain-ever-sold/)
------
crsv
Pretty expensive domain. Wonder if it was bought using ICO fugazi money, since
this looks like the content is oddly interested in fringe crypto content.
~~~
realbarack
Apparently they recently got a "$150 million cash injection":
[https://decrypt.co/34598/crypto-social-media-platform-
voice-...](https://decrypt.co/34598/crypto-social-media-platform-voice-
finally-launches-on-eos). And voice.com cost $30 MILLION!
------
ksec
_Error 1020
Access denied
What happened?
This website is using a security service to protect itself from online
attacks._
~~~
luckylion
The money was spent on buying the domain, there wasn't any budget to implement
caching.
------
im3w1l
So can someone explain what this is? It's not very clear to me.
------
empath75
Here’s what I’m surprised about — nobody has built a moderation focused social
network that has tried to poach Reddit’s best moderators by offering to pay
them.
------
fortran77
Ha! Blockchain!
> Voice uses the inherent characteristics of blockchain technology to promote
> trusted and transparent social interactions.
~~~
CharlesW
So based on that quote either they don't know what blockchains do and don't
do, or they're just straight-up lying.
Voice requires the use of EOS coins, the creator of which paid a $24 million
fine to the Securities and Exchange Commission over its 2017 ICO.
From [https://davidgerard.co.uk/blockchain/icos-magic-beans-and-
bu...](https://davidgerard.co.uk/blockchain/icos-magic-beans-and-bubble-
machines/)
_" The legal EOS Token Purchase Agreement is a frankly amazing document that
everyone should read.[1] US citizens or residents are not to buy the tokens
(though EOS assures us they totally don’t constitute a security – hear that,
SEC?); the tokens are defined as not being useful in any manner whatsoever;
forty-eight hours after the end of the distribution period, the tokens will no
longer be transferable; the buyer promises not to purchase them for
speculation or investment. If there’s any legal problems caused by you buying
these officially worthless things, you agree to indemnify EOS."_
[1] PDF:
[https://davidgerard.co.uk/blockchain/references/EOS%20Token%...](https://davidgerard.co.uk/blockchain/references/EOS%20Token%20Purchase%20Agreement%20-%20June%2022,%202017.pdf)
~~~
aspenmayer
You and David himself have posted about this, and I am glad you did. This
looks pretty shady considering the parties involved and their past actions. I
appreciate the context you bring to this post.
------
seemslegit
Sounds like a very useful way to identify people one would not wish to hear
from in real life either.
|
{
"pile_set_name": "HackerNews"
}
|
Box86, an x86 app player for ARM with native rendering performance - jdonald
https://github.com/ptitSeb/box86
======
ekianjo
See this for a lot more details:
[https://www.giantpockets.com/box86-run-x86-code-and-games-
on...](https://www.giantpockets.com/box86-run-x86-code-and-games-on-arm/)
~~~
jdonald
Thanks for linking that as well as your earlier HN posts about Box86:
* [https://news.ycombinator.com/item?id=19389120](https://news.ycombinator.com/item?id=19389120)
* [https://news.ycombinator.com/item?id=19400920](https://news.ycombinator.com/item?id=19400920)
I think there are a couple factors that have limited the awareness of Box86.
One is having the word "Box" in its title which reminds too much of existing
solutions DOSBox or Bochs that already do their job.
The second is occasionally headlining Box86 as an "emulator", taking into
account that most of the audience does not get very far into the article. Even
if Box86 does use x86 emulation it's important to highlight that libraries
like OpenGL and SDL run natively. Compare that framing to WINE, which is so
forthcoming on not being an emulator that it's in the name.
------
jbverschoor
How does this compare to the ish.app Linux emu for iOS?
I’d love tinker with using my iPhone as my workstation. Airplay or av-cable,
usb keyboard
~~~
jdonald
Good question. iSH appears to be a true emulator with syscall translation much
like qemu-user, so it's versatile but wouldn't run a game library like SDL
natively.
------
daneel_w
"App player". That's a really curious term. Is there any chance this will run
on 64-bit Arm cores in the near future? I may have gotten the wrong
impression, but from the past few years that I've perused the market of Arm
SBCs - including the incredible variety of highly affordable Chinese "Android
TV boxes" \- it seems that 32-bit Arm has more or less left the building
already.
~~~
jdonald
Good question. ptitSeb's work tends to target armv7 as that's the architecture
used by OpenPandora.
I don't know if this design currently depends on specifics of the armhf ABI vs
aarch64.
With regard to the market, 64-bit SBCs (such as Raspberry Pi 4) often run
32-bit operating systems such as Raspbian. Even 64-bit ARM operating systems
such as Ubuntu, Gentoo, and Manjaro are capable of running 32-bit software
such as this via multiarch, chroot, or containers.
------
vxxzy
Does this mean we can finally use 32bit wine on android??
~~~
mappu
Hangover kind-of works:
[https://github.com/AndreRH/hangover/releases](https://github.com/AndreRH/hangover/releases)
------
drudru11
Where does he get these games? Are they freeware?
~~~
voltagex_
[https://www.gog.com/game/airline_tycoon_deluxe](https://www.gog.com/game/airline_tycoon_deluxe)
[https://www.gog.com/game/world_of_goo](https://www.gog.com/game/world_of_goo)
(was also free on Epic Game Store a while ago, plus in a million different
bundles)
for example.
The games used as tests seem to be lower end indie titles that can be found
DRM-free (because you don't also want to have to work around that)
~~~
drudru11
Nice - thanks
------
d--b
Question: does anyone know if this can be used to run chrome / Netflix on a
raspberry pi?
~~~
jdonald
I guess it's theoretically possible to emulate the x86 Widevine DRM plugin
while running other browser code natively.
However, there's already an easier way to run Netflix on a Pi 4 natively.
Someone figured out to just to grab the armv7l libwidevine binary from Chrome
OS: [https://blog.vpetkov.net/2019/07/12/netflix-and-spotify-
on-a...](https://blog.vpetkov.net/2019/07/12/netflix-and-spotify-on-a-
raspberry-pi-4-with-latest-default-chromium/)
------
snvzz
Nice progress towards making x86 redundant.
Too bad it targets ARM and not rv64gc, but it's a start.
I'm curious why it isn't based on qemu-user.
~~~
daneel_w
I understand your perspective, but I think it's _GREAT_ that it targets Arm.
If anything finally can and finally should replace x86/64, to usher in a new
era of power-efficient computing, it should be Arm.
~~~
dTal
Just curious why you think that? ARM cores can sip power when idle, but the
story for performance-per-watt under load appears less clear.
~~~
jdonald
> the story for performance-per-watt under load appears less clear.
Is this referring specifically to their in-order cores (Cortex-A53, A35), out-
of-order cores (Cortex-A72, A73), or big.LITTLE configurations in general? *
Thinking more broadly, the next era of power-efficient computing may depend
more on heterogeneous architectures than the CPU alone. The Arm ML Processor
IP and corresponding offerings from competitors play a large role in this.
* Can be generalized to custom cores designed by NVIDIA, Qualcomm, Samsung, Apple, and other vendors.
|
{
"pile_set_name": "HackerNews"
}
|
Why’s “Try Ruby” Back Online - Hagelin
http://www.rubyinside.com/try-ruby-back-online-2413.html
======
oink
<http://tryruby.sophrinix.com/>
------
shadytrees
> _We will be back in a few hours. Someone discovered a security hole. They
> reported it, but not until someone else thought it would be cute to drop a
> rootkit in._
If Mister McElroy is reading this: Interesting! Was it a problem with _why's
sandbox? (I don't think the original Try Ruby was ever exploited in the short
time I kept a tab on it, and beyond that I haven't a clue; but I couldn't find
any security commits in a quick scan of the changesets.)
<http://github.com/whymirror/why_sandbox/commits/>
------
johnfn
Whenever I try to do 40.reverse, the thing errors out on me, and it wont let
me progress any more through the tutorial. Kind of disappointing. I remember
doing this thing before and it was quite fun.
~~~
carbon8
You can type _next_ to move to the next lesson.
------
quizbiz
Warning, a very "newb"ish question: I completed his tutorial inside my
browser. But what now? How do I do that permanently on my shared server which
allows me to create a rails application via cpanel. How do I "populate it with
[my] code"?
~~~
mikeryan
This isn't a small question and is largely going to depend on your shared
server.
First generally you're going to do most of your development locally and then
deploy it to your server. If you're just starting out then check out the
RadRails IDE. It does most of the heavy lifting for you and makes it really
easy to get going.
After that look at something like capistrano or vlad the deployer for getting
the site onto your webserver.
~~~
bts
Or Heroku instead of cap/vlad. Makes getting a quick rails app deployed and
up-and-running very easy.
------
bmelton
This was what I honestly considered the biggest loss with regards to _why's
disappearance. I am literally ecstatic to have this back online.
The only thing that could make this better would be if it were presented by
_why (him|her)self.
------
rogeriopvl
It's great that this is back online again. I got lots of people trying Ruby
thanks to this.
|
{
"pile_set_name": "HackerNews"
}
|
Richard Hamming: You and Your Research (1986) - apsec112
http://www.paulgraham.com/hamming.html
======
wwarner
I think this is good reading. I like the advice on drive and hard work, I
think the amortization example is perfect. I recommend Hamming's book _The Art
of Doing Science and Engineering_ which goes on in this vein for many
chapters, and ends powerfully by showing how little mathematicians understand
what they're doing when they do it.
|
{
"pile_set_name": "HackerNews"
}
|
Python and Machine Learning in Astronomy [audio] - privong
https://talkpython.fm/episodes/show/81/python-and-machine-learning-in-astronomy
======
tangue
I've just discovered this podcast thanks to HN. Thanks @privong
------
giancarlostoro
Great episode, there are also great mobile apps out there for viewing the Sky
through your phone if you never could afford that telescope and had no idea
what each star you're looking at is, using GPS for locating you, and revealing
even the stars you can't see (pointing downwards would reveal stars on the
other side). I would name some, but there are too many and I haven't settled
on any yet.
~~~
SpaceInvader
I like most the Stellarium app. It's cheap (free for desktop) and works great.
------
gigatexal
It's a pretty stellar podcast about python in general and all the episodes
have transcripts.
|
{
"pile_set_name": "HackerNews"
}
|
Show HN: A Node.js and electron based image viewer for Mac, Windows and Linux - sachinchoolur
https://github.com/sachinchoolur/lightgallery-desktop/tree/master
======
brudgers
Recently:
[https://news.ycombinator.com/item?id=11776321](https://news.ycombinator.com/item?id=11776321)
|
{
"pile_set_name": "HackerNews"
}
|
Engineers boost AMD CPU by 20% with software alone; no overclocking - 11031a
http://www.extremetech.com/computing/117377-engineers-boost-amd-cpu-performance-by-20-without-overclocking
======
wwalker3
It sounds like the NCSU guys are using the CPU as a prefetcher to speed up GPU
kernel execution, not using the GPU to speed up normal CPU programs as the
ExtremeTech article implies.
The CPU parses the GPU kernel and creates a prefetcher program that contains
the load instructions of the GPU kernel. This prefetcher runs on the CPU, but
slightly ahead of kernel execution on the GPU. This warms up the caches, so
that when the GPU executes a load instruction, the data is already there.
~~~
profquail
_It sounds like the NCSU guys are using the CPU as a prefetcher to speed up
GPU kernel execution, not using the GPU to speed up normal CPU programs as the
ExtremeTech article implies._
The article says the same thing you are -- that the CPU is used as a
prefetcher for the GPU; read the 3rd paragraph:
_To achieve the 20% boost, the researchers reduce the CPU to a fetch/decode
unit, and the GPU becomes the primary computation unit. This works out well
because CPUs are generally very strong at fetching data from memory, and GPUs
are essentially just monstrous floating point units. In practice, this means
the CPU is focused on working out what data the GPU needs (pre-fetching), the
GPU’s pipes stay full, and a 20% performance boost arises._
------
teamonkey
This is only tangentially related, but with a title like that I was expecting
a brainless regurgitation of a press release, or some kind of extrapolation
from a paper that wasn't claiming that meaning at all.
Instead, I see a news article with a clear description, caveats and
constraints clearly listed, and a portion of how this relates to the parent
company. It's a shame that I find this surprising.
------
bryanlarsen
The fact that it's only a 20% increase makes it sound promising. Normally
press releases will boast about "100x" increases in speed when they switch to
using the GPU. And you can get that sort of increase for highly parallel tasks
with low memory pressure. BitCoin mining, for example. But the low 20% speedup
implies that they're doing this for general purpose computing.
------
faragon
That's hilarious. Using a _whole_ CPU for prefetching data because of poor
shared bus performance for both the CPUs and GPUs (?!). Instead of such crazy
"software solution", I would rather prefer to use a portion of its L2 or L3
cache size (e.g. 1MB for a 3MB L2/L3 cache) for the GPU itself, and reduce the
bus saturation with DMA transfers (e.g. just like the SPE units of the Cell
CPU work).
------
pessimist
So by using a custom compiler someone speeded up an unspecified benchmark by
20%. Is this news?
~~~
elemeno
If that was all it was then no.
However, what they did was demonstrate a novel way of making use of two
different processing cores that exist on the chip (namely using both the CPU
and an integrated GPU) to improve the performance of their benchmark - which
certainly is both interesting and news.
Of course, a proof of concept is a long long way from it being of practical
benefit!
~~~
Someone
A very, very long way, I would guess. A 20% performance gain is nice, but
having to power a GPU to get it is not. I would expect that adding a second
CPU instead of that GPU almost always will give you more than that 20%
performsnce and less heat, for less money.
~~~
EvanKelly
It depends on the application. If the application, as the article puts it,
"pushes polygons around", then I imagine the APU concept may have the
advantage.
Though, as previously noted, this APU concept is highly dependent on tailored
software (compilers, etc.) and AMD has been banking their strategy on the fact
that these critical pieces will take advantage of the APU.
I think the NCSU research (co-sponsored by AMD) is a move in the right
direction for determining whether these APUs are an effective solution when
compared to the multi-CPU architectures.
------
afhof
GPUs are pretty tailored and aren't really good for general purpose computing.
Branching and cache coherency are much easier in the CPU compared to the GPU.
I doubt that any of the advertised gains would be realized by normal users.
~~~
cbsmith
It was the GPU that ran 20% faster by leveraging the CPU, not the other way
around.
------
nivertech
I hope this has something to do with HSAIL virtual ISA. For example general
purpose code in C compiled to HSAIL and then CPU makes intelligent decisions
which parts of code to JIT-compile to CPU and which to GPU ...
------
KeyBoardG
Hopefully we can get this into drivers sooner than later. AMD has already been
working with Microsoft to get a large performance gain out of BullDozer chips
in Windows 8 simply by the way threads are prioritized.
------
overshard
The title is deceiving as per usual. It's mostly using the GPU and using CPU
for prefetching. Nothing too new here, we know the GPU is faster.
~~~
sliverstorm
_we know the GPU is faster_
More specifically, the GPU is more parallel.
------
gcb
summary: they send all the instructions to the CPU to simply encode them for
the GPU, and let the GPU do the heavy lifting.
and end up saying that AMD is dying as the news love to do.
|
{
"pile_set_name": "HackerNews"
}
|
Painting with Math Formulas in Google Sheets [video] - dustmop
https://www.youtube.com/watch?v=JnCkF62gkOY
======
2pointsomone
This is so amazing!! Is there more of this math-driven animation or design on
the web? I constantly need to do math for web animations. And even though I
learned this in middle and high school, I struggle to translate to real world
problems.
~~~
vchak1
Check out Shadertoy. By the same person (Inigo).
|
{
"pile_set_name": "HackerNews"
}
|
"If APIs are copyrightable in U.S....startups will come to Europe." - msredmond
http://adtmag.com/articles/2012/05/08/api-implications-from-oracle-v-google.aspx
======
stewie2
but startups generally don't create APIs, they use ready-made APIs.
if apis are not copyright-able, what about assembly apis? does Qualcomm need
to license from ARM before creating its own implementation?
|
{
"pile_set_name": "HackerNews"
}
|
Ask HN: How to find a better job while being employed? - ishener
I wonder, did anyone else ever have this problem. I have a full time job, but I also want to look for a better job. I can't just quit because I have a family and honestly, we wouldn't last a month...
So how can I look for a job? I'll have to answer phone calls during working hours from my current workplace, and I will have to go to job interviews. Is this even possible without quitting or letting the employer know?
======
ColinWright
I'm one of four directors in a company with 25 employees. If one of my
colleagues wanted to look for another job, I'd want to help them. That might
be by making them happier in their current work, or it might be by helping
them find another position in our company, or, as a last resort, helping them
find a position they're happier with in another company.
Absolutely I'd want to keep them, but sometimes that's not possible. Even if
they leave, I'd help them remain productive, plan their exit, and control the
hand-over.
If finding another job is the path they'd take, I'd find a way to let people
take calls and attend interviews. Obviously we'd discuss either longer hours
to make up the time, or a reduction in pay, but we'd find a way to agree how
it would work.
If you can't talk to your current boss, moving out is the best thing to do,
but the least you can do it talk to them first and give them the chance to
make things better, or help you transition to a job where you're happier.
~~~
S4M
I wish all employers could be like you...
~~~
Gustomaximus
I agree! IMO I would be hesitant to tell an employer I was looking to exit.
Some egotistical people may react as though it is personal to them. Also it
puts you in some pressure to get a job in a reasonable time, when really you
might want to take six months to find the right company and remuneration. Job
hunting is best when you're not 100% sure you want to leave a company vs.
desperate to change. And the employer is likely to mentally write you off and
you don't know if new internal opportunities will open up. While I would chat
to someone if I had a long close relationship, I would err on the side of
caution.
As for calls and interviews, this is what voicemail is for. Call back on your
breaks. People understand the situation. Also interviews can usually be done
before or after work. If the new employer is not understanding it might be a
sign they're not a great company to work for.
------
MalcolmDiggs
You'd be surprised how accommodating a potential-employer can be. Just be
truthful: "I can't interview 9-5 because I have a job." If anything, that will
make you _more_ attractive. If they're eager to fill the position they'll work
with you; keep in mind you're probably not the only candidate who has a job
already, so it's likely that they're planning to conduct some interviews
during off-hours/weekends anyway.
~~~
Nicholas_C
>"I can't interview 9-5 because I have a job."
This is a little dishonest, but why not tell your employer you have a doctor's
appointment or jury duty? Or just say you'll be out of office for personal
reasons. Or take a day off and interview one half of the day and use the rest
to relax.
------
dkarapetyan
Why don't you just tell your employer you are looking for another job? Is your
fear that they would fire you on the spot? In my experience that is an
unfounded fear and most employers want to know the reasons that led you to
looking for a another job to begin with so that they can work with you to fix
any issues that led to your decision to look for work elsewhere.
~~~
ishener
he may not fire me on the spot, but he will definitely look for a replacement.
What if he finds a replacement before I find a new job?
~~~
dkarapetyan
That's a really weird set up. You should hold your cards to your chest then
and do interviews secretly by making up excuses.
------
munimkazia
Most interviewers/companies understand that you work during usual work hours.
That said, you should be able to step out for 15 minutes to take a call during
work. If you get questioned about it, just tell them that it was an important
personal call. And you could try to schedule these calls around lunch or your
usual afternoon break time.
You can always schedule job interviews on Saturdays. Again, most interviewers
understand the circumstances and usually do hiring on Saturdays to accommodate
those who are working.
------
loumf
Not only is it possible, but it's utterly normal. As others have said, tell
interviewers your constraints.
Hopefully, taking a vacation day (or half day) isn't impossible at your
current job. Having appointments in the middle of the day isn't unusual. Make
sure all phone calls are scheduled and step out to take them.
|
{
"pile_set_name": "HackerNews"
}
|
Job Security Through Code Obscurity - andrewacove
http://seven-degrees-of-freedom.blogspot.com/2011/01/job-security-through-code-obscurity.html
======
j_baker
I can't wait for the follow-up: "Ego Security Through Asinine, Passive-
Aggressive Blog Posts".
(And I can say this because I've written asinine, passive-aggressive blog
posts to boost the ego myself)
------
iwwr
When a programmer is adding ugly code to the codebase, he is making a marriage
proposal. Unless the prospect with him or her (till death dues you part) is
appealing, work to remove the offending code before children start coming in.
------
zdw
Reminds me of the "If you're not replaceable, you can't be promoted" adage...
------
tsotha
_Specialise via inheritance_
Ugh. That's the one I'm continually running across. Inheritance is so heavily
abused by novice programmers they shouldn't be allowed to use it without a
twenty page written justification.
------
impendia
Have I been out of the programming world for too long, or do at least some of
his complaints refer to using advanced features of programming languages in
essentially the way they were intended?
~~~
andrewacove
He's talking in the context of console game programming (though it applies to
everything). The current generation of consoles has pushed a lot of this stuff
to the forefront - with in-order processors and cache behavior in particular
exposing the flaws inherent in OOP/Inheritance based paradigms. I'd guess that
it's a bit of a perfect storm - inflow of PC developers who previously could
rely on hardware upgrades, massive increase in fresh/grad programmers (who are
now filling positions on enormous teams of programmers compared to previous
generations), and much more PC-like hardware that requires less specialized
programming.
It's hugely important in console game programming to know exactly how you're
using the system's resources - memory and clock cycles. The styles he's
criticizing aren't just bad for limiting resource usage, they're hard to
analyze. On top of that, it can be very difficult to silo components of game
code (and game engines) behind nice clean interfaces. When you're the engine
programmer tasked with optimizing game code because you're the one who knows
how to squeeze performance out of the console, you don't want to have to climb
up and down hierarchies to figure out where everything's hidden. I've been
there - it's especially prominent in 3rd party developers with cross-platform
game engines - and it's a nightmare.
------
rvirding
Yes, wonderful. Shows the true power of OO.
|
{
"pile_set_name": "HackerNews"
}
|
Google Explains The ‘Hotpot’ Name: “It’s About Community” - abraham
http://techcrunch.com/2010/11/29/google-hotpot-hotpotato/
======
mark_l_watson
Hotpot is cool. I just took 5 minutes to invite all my friends who live in the
same small tourist-rich town that I live in (Sedona Arizona) and also rated
some restaurants.
|
{
"pile_set_name": "HackerNews"
}
|
Esketamine Drug For Depression Treatment Nears FDA Approval - pseudolus
https://www.bloomberg.com/news/articles/2019-02-12/first-big-depression-advance-since-prozac-nears-fda-approval
======
copper_think
So, it's just enantiomerically pure S-ketamine - not really a new drug
discovery. The drug is already around as an anesthetic but it is almost
finished with the FDA's approval process for treatment-resistant depression.
When used as an anesthetic it is administered intravenously, but the anti-
depressant formulation is a nasal spray. Cool! That definitely makes it more
accessible.
I wonder if J&J got a patent on S-ketamine? The patent system is often abused
this way. First you sell the racemic mixture, then when that patent is about
to expire, you start selling a new product that is just the active enantiomer.
And you can tell customers that it's new and improved: that you only need to
take half as much! See Prilosec/Nexium, Celexa/Lexapro, etc.
Although, maybe it's difficult to manufacture just the one enantiomer, at
scale?
~~~
WalterSear
It's even more confusing: R-ketamine is the more effective enantiomer, with
less dissociative effects.
~~~
civilian
Sweet, I like that J&J is using S-ketamine then. I think the dissociative
effects of ketamine are fantastic and beneficial, even though they can be
surprising. At a low dose it likely won't be an issue.
~~~
WalterSear
Personally, I have no issues with the dissociative effects per se. However,
studies have determined that they aren't related to the antidepressant effect.
Fwiw, a full clinical dose feels like ~1/4 of a 'rail'. There's no
introspective trip here, just slight drunkenness.
(Incidentally, we have some evidence that the metabolite that causes the
effect can be blocked via concurrent CBD administration - without reducing the
dissociative effect)
The standing appointment to get slightly high for a couple of hours every
three days does get inconvenient (though it's not as problematic as the
resultant insomnia). I expect esketamine will be more so, for less
antidepressant effect. And, being a controlled substance, I'm concerned I'll
be unable to increase my dosage to make up the difference.
------
rdiddly
_" But Meisel said he was convinced by a patient survey Johnson & Johnson
conducted. 'We don’t take the patient voice into account enough,' he said."_
I suppose it didn't occur to him that the company trying to get their drug
approved might not have taken the patient voice into account enough either? Or
maybe under-reported certain patient voices? (The adverse ones. Hey, it's not
like it has never happened. At least take a closer look!)
~~~
torrance
This.
Evidence and studies like this should not be done by the company itself. The
OxyContin claims about low addictiveness should be a timely reminder for
independent clinical studies. Reading that set off alarm bells for me.
------
pizza
It might be a realy big milestone if it comes to market. Especially if it
could come out as soon as within this year.
Sometimes I hear people mention that ketamine’s special effectiveness (in
those whom it is effective, which isn’t 100% of people afaik) wears off over
the length of a treatment protocol. It’d be nice to hear more about that,
especially in comparison to the long-term effectiveness of traditional
antidepressants and SSRIs in particular.
~~~
indalo
This. I've read some horror stories about patients that received infusions
which for a time, gave them their life back, and suddenly for all intents and
purposes it became ineffective. Some people described going back to feeling
depressed after seeing the alternative to be even less bearable.
I'm eager to see it come to market, and for it to work, but I'm scared to work
with my doctor to try it based on those stories.
~~~
rincebrain
Can confirm that this happens.
To put it mildly, not the most pleasant experience of my life, but still
looking forward to seeing where the next iterations on rapid-acting
antidepressants go, and whether they turn out to be any more durable.
(Also quite curious to see what research crops up about why the effects are
sometimes not durable, but that's gonna be a decent wait.)
------
mark-r
So rather than approve the existing drug that's off patent, they approved a
knock-off that will undoubtedly cost 100x as much. How typical.
~~~
philwelch
Is there anything stopping doctors from prescribing or administering ordinary
ketamine off-label? I don't think the controlled substance issues would differ
between the two.
If the answer is "no, but the marginal differences between ordinary ketamine
and the new drug are enough that you'd want to prescribe the new drug
instead"\--well, that's new value added, isn't it?
~~~
dundercoder
Ketamine HCl can and is dispensed off label, both for in office IM/IV
treatment and at home nasal spray, or sublingual troche.
~~~
epmaybe
that requires the pharmacy to somehow obtain nasal spray. Unfortunately, no
manufacturer actually sells a nasal spray variety (yet) as there was no FDA-
approved use for intranasal ketamine. That being said, compounders have been
able to make intranasal ketamine formulations for a while now, since this
treatment for acute depression has been well recognized in the literature for
at least a decade.
------
ipunchghosts
Who were the 14 people who approved?
~~~
all_blue_chucks
I'm curious as to who voted against this. I mean, sure, abuse of this could be
bad. But suicide caused by depression is far, far worse.
People who want recreational drugs will find them anyway. Letting innocent
people die from treatment-resistant depression is a perverse priority.
~~~
martinald
It's slightly worrying the language from pharma is very similar to the early
opiod days.
"The voice of the pain patient is underrepresented! We should approve opioids"
Now is
"The voice of the depressed patient is underrepresented! We should approve
nasal ketamine".
I'm all for new therapies but ketamine can be highly addictive and causes
really nasty side effects in high quantities (bladder problems being horrific
from what I've heard). I hope we don't see a huge spike in ketamine abuse in
10 years time like what happened after overprescription of opioids.
I also think this will be extremely popular. Current antidepressants are not
very effective and take a long time to work. This seems to work very quickly.
Who would want to wait 8 weeks Vs hours to get better? It would not surprise
me if J&J have massively downplayed the potential addiction risk of this.
~~~
sjjshvuiajhz
Ketamine is plentiful on the street and you can legally buy ketamine analogs
online. I don’t see this new route of getting it from a psychiatrist as a huge
abuse risk.
Subjectively, it’s not as good a “take the pill and forget your problems” drug
as opioids, benzos, and alcohol are. It can be used in that way if you take a
huge “k-hole” dose and dissociate completely, and some people do get addicted.
But at common doses, certainly whatever they are going to prescribe, it’s more
of a “think about your problems and figure out how to solve or accept them”
drug like mushrooms and LSD.
~~~
martinald
You could use the same argument for Oxycontin surely. Heroin was always out
there on the street, but it's only when opioids were prescribed to the masses
via doctors that it reached ecedemic proportions.
I'm not saying it is addictive as heroin etc but it's definitely addictive.
And will people start increasing their doses as they become tolerant to the
antidepressant effects, etc?
~~~
sjjshvuiajhz
I think responding to this requires giving a summary of the effects of
different drugs, to build a simple mental model of how they could cause
addiction.
Group A: Drugs like benzos, alcohol, and opioids provide pleasant sensations
upfront, killing your pains and anxieties, but those problems return in even
worse form when the drug wears off. It’s as though your brain’s baseline for
what counts as suffering had been lowered by the experience of being coddled.
It’s very clear how this leads to compulsive redosing and addiction.
Group B: Psychedelic drugs like mushrooms and LSD induce unpleasant feelings
as they take effect, followed by a more positive (perhaps euphoric) experience
once the brain adjusts to the presence of the drug, and then less potent
pleasant after-effects when the drug wears off. It seems like the brain has
raised the bar for suffering - suddenly the fact that you can see things in
the correct color, gravity is pointing in the right direction, and you have
clarity of thought makes life feel easy. You might redose to extend the peak
effects, but you aren’t going to take any more for a while once it wears off.
It would be tough to get addicted to these. However, a depressed person might
have trouble with the come-up, and could experience a panic attack. You’d want
a very skilled therapist if you’re trying to treat depression this way.
IMO ketamine kind of straddles the line here. The onset of the dissociation
can be stressful, but it’s not that hard. You don’t forget your problems, but
they feel like the problems of somebody that you know closely and care a lot
about, so you can try to solve them from a different perspective. You can feel
that you are doing something great, which can lead to compulsive redosing, but
you keep the lessons you learn when it wears off. It’s not as though your
problems come back in even worse form like you get with group A. Although if
you really blast yourself, the dissociation can get so strong that you don’t
care about your problems at all, which gives you more of a group A experience
while you are peaking. I’m not a scientist and I may be wrong about this, but
I think the antidepressant effects are not caused directly by the drug or
metabolites that one could become tolerant of. I think that they are a result
of the brain’s recent experience of looking at life from a non-depressed
perspective.
------
eikenberry
"... lower than the abuse rates for other hallucinogens like ecstasy and LSD."
Ecstasy is a stimulant, not a hallucinogen. I thought bloomberg was generally
considered to be better at fact checking than that.
~~~
marcrosoft
Agreed, also LSD doesn't have a measurable abuse rate. It is literally anti-
abuse.
~~~
toomanybeersies
I think that in the article, abuse is used to mean illegal unintended use,
rather than abuse in the conventional sense.
------
ChildOfChaos
Hmm but Prozac was never that great anyway.
There is very little evidence that depression is caused by chemical imbalance
in the brain, it's something that big pharma have sold you so they can sell
their drugs.
Great book on the topic, check out lost connections.
~~~
aaaaaaaaaaab
There’s very little evidence for a broken leg being caused by the lack of
plaster around it, yet nobody questions its efficacy...
~~~
wu-ikkyu
How is that comparable? Seems like a non sequitur
~~~
aaaaaaaaaaab
The treatment of a condition need not be the “inverse” of its cause.
A broken bone can be fixed by immobilizing it via a plaster cast. Did the bone
break due to the lack of plaster?
Eczema can be fixed by applying topical steroids. Did eczema develop due to
the lack of topical steroids?
Depression can be fixed by inducing changes in neurochemistry. Did depression
arise due to an imbalance in neurochemistry?
|
{
"pile_set_name": "HackerNews"
}
|
Stripe now supports ACH payments - craigkerstiens
https://stripe.com/blog/accept-ach-payments
======
spenczar5
When I last worked on a billing system about three years ago, we looked _so_
hard to find an ACH provider. There was just nothing out there. This is an
enormous advance.
The problem we had was that we issued moderately large invoices monthly - they
could be from $1k to $50k. We wanted to be paid quickly, so we tried to
convince our customers to pay by credit card (through Stripe) and enable
automatic payments, but we had trouble when customers would bump up against
credit card limits.
So, for our bigger clients, we were relegated to asking for checks to be sent
by mail. This meant we couldn't automatically charge customers every month,
and instead needed to badger them to send their check - all of a sudden, we
needed an accounts receivable _team_. We couldn't just ignore the problem
since these were our _biggest_ accounts, too.
Stripe's pricing is almost comically friendly here - credit card transactions
are usually in the ballpark of 2.7% + $0.30, so even when card limits weren't
an issue, we'd be paying out the nose on the transaction - 2.7% of $10k is
$270. The new ACH payments would cost just $5.
Anyways, this is a serious accomplishment. The underlying banking regulations
and technologies around ACH are thorny. Good job, Stripe!
~~~
DrJokepu
It's insane how expensive financial transactions are in the United States. In
the UK, the equivalent of an ACH transaction would be a BACS payment, which
costs roughly 30p ($0.45) for a small business per transaction, although some
banks will do them for free. Many smaller businesses however would just use
Faster Payments Service for payments under £10K, which is like wire transfers
in America, except they rarely cost more than 25p ($0.35) and once again, many
banks will process FPS payments for free for small businesses.
~~~
tomschlick
It's because of all the fraud on the backend they have to fight. I'm sure
banks would like nothing more than to have a simple system where they only
charge half as much but net 2x the transactions. Lots of overhead with fraud.
~~~
toomuchtodo
It's not. It's because the largest banks are all stakeholders in the Automated
Clearing House system, they make far higher margins on faster out of band
methods (wire transfers), therefore it doesn't behoove them to improve.
There is some progress being made though, slowly and painfully.
[http://www.npr.org/sections/money/2013/10/04/229224964/episo...](http://www.npr.org/sections/money/2013/10/04/229224964/episode-489-the-
invisible-plumbing-of-our-economy)
~~~
jslampe
I should also note that there is powerful Fed-chartered initiative, called the
Faster Payments Task Force, going on right now. It combines +300 of the
nation's payment stakeholders (including big banks, networks, retailers, etc).
We've made a ton of meaningful progress on speed, standards, and more.
Big news coming in the next few weeks...
(learn more at
[https://fedpaymentsimprovement.org/](https://fedpaymentsimprovement.org/))
~~~
toomuchtodo
Thanks for the effort, and for posting this.
------
artursapek
In college I worked part-time for a music social network startup that sold
songs and paid artists a commission, kind of like iTunes. I had to write a
cron job that would issue the artist payouts every night by writing ACH files
in plaintext and SFTPing them to SVB.
3 years later it's great to see Stripe providing this, but I will always have
an oddly fond memory of working on that. It felt so obscure and unnecessarily
difficult.
~~~
protomyth
> It felt so obscure and unnecessarily difficult.
I earned a bit of money in the 90's doing a lot of those types of data
transfers. I get the feeling 70% of IT is just data munging.
~~~
IanCal
> I get the feeling 70% of IT is just data munging.
The remaining 30% is, as far as I can tell, fixing bugs in the data munging
code.
~~~
protomyth
Leave some room for the UI folks - although that is basically getting humans
to munge the data as a pre-step for the computer munging.
------
BHSPitMonkey
The UI component presenting the login form for major banks is troubling. It
seems like encouraging users to feel comfortable entering their online banking
credentials into anything other than their bank's web site is a bad move.
~~~
ltrcola
Until banks get developer friendly and support OAuth, I'm not sure what else
you could do if you want real time verification. I guess micro deposits could
still work if you had faster rails, but ACH is still a batch process.
~~~
jgalt212
> Until banks get developer friendly and support OAuth
We may wait a long time for that one.
~~~
jslampe
Not necessarily. BBVA has already done this with Dwolla's real-time payment tech,
FiSync. (Limited, I know, but it's the only cool proof of concept like it out
in the wild)
Also, there's been a lot of work done through the Fed's remittance coalition
on a supradirectory for destinations. Adding authentication capabilities would
be the next logical step and they've already committed to using open standards
(lots of ISO stuff to compete with though). Combined with whisperings of big
bank projects and JP Morgan's CEO very vocal hatred for screen scraping, Oauth
could be a powerful and quick-to-market alternative.
------
rwmurrayVT
I hate to see this honestly. Stripe is already notoriously vulnerable to
credit card fraud and this is likely not helping their cause.
I'd love to know how long you should expect an ACH payment to take to clear and
arrive in your bank account. The two-day period for credit cards is so small. Who
checks their statements every single day? This is obviously the benefit of
using Stripe if you're the merchant, but it leaves you vulnerable.
We can only pray that Stripe accepts some responsibility for verifying these
ACH transfers.
Fraud on Stripe is a two-fold problem. They've got fraudsters signing up for
Stripe accounts and running cards through them. They've also got fraudsters
making purchases on Stripe-based sites. Obviously, Stripe prefers that the
charges are made to legitimate users. That way they're not out all the money.
Edit: It appears it could take "up to 5 days" for the payments to be
processed. This is entirely on the bank's side of the equation. I guess from
there you'll only need to wait your 2 days (7 days for some users) to receive
the ACH into your bank account. I see massive fraud coming in here.
~~~
zachperret
Plaid co-founder here. When users connect their accounts via the Plaid instant
verification process, we actually allow developers to get a greater
understanding of the user. Via Plaid, you can do things like validate the
available balance and check the account owner's identity -- which can
significantly reduce the likelihood of ACH fraud.
~~~
mapgrep
Wait, so if I pay someone with ACH, they get to see how much money is in my
bank account? WTF?
~~~
zrail
No. Plaid instant verification uses your bank login info to get the ACH
numbers along with current balance, etc. You can't get that just with account
and routing number.
~~~
FireBeyond
Yeah, like I mentioned upstream, not sure why any developer should have access
to my account balance...
~~~
mateosu
I mean, these are fintech companies we're talking about here. If you're
signing up for these services you're generally already deciding to trust the
company with personal data. Plus, you still have to login to your online
banking with Plaid, it's not like developers have unlimited, unauthorized
access to your account balance and transaction information.
~~~
FireBeyond
There's a difference between - "I am paying you with ACH routing and account
information" and "The payment platform exposes, to the app, my account
balances". A huge difference.
------
fblp
Glad to see Stripe release this, but the $5.00 fee cap doesn't seem
competitive for frequent large transactions compared to other providers that
have flat fees from $0.25 like Dwolla, with more listed here
[http://www.merchantmaverick.com/need-know-accepting-ach-
paym...](http://www.merchantmaverick.com/need-know-accepting-ach-payments/).
Why the higher cost?
~~~
pc
Dwolla also charges monthly fees:
[https://www.dwolla.com/pricing](https://www.dwolla.com/pricing). You need to
pay $1,500 a month in order to charge more than 10 bank accounts.
In general, most providers have a lot of fine-print. We tried to come up the
simplest and fairest pricing we could.
~~~
jacobsimon
Still, it seems unusual to charge a percentage fee. From what I understand,
one of the main benefits of ACH over credit cards is the flat fee structure.
~~~
0xffff2
Compared to credit cards, $5 is pretty flat isn't it?
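For anyone squinting at the numbers, a tiny sketch of the arithmetic: 0.8% capped at $5 means the fee is effectively flat for anything over $625, while small payments pay well under typical card pricing. The 2.9% + 30¢ card figure below is the usual published card rate, used only as an assumed comparison point.

    // ACH fee per the announcement: 0.8% of the amount, capped at $5.00 (500 cents).
    function achFeeCents(amountCents: number): number {
      return Math.min(Math.round(amountCents * 0.008), 500);
    }

    // Typical card pricing (2.9% + 30 cents), an assumption used here purely for comparison.
    function cardFeeCents(amountCents: number): number {
      return Math.round(amountCents * 0.029) + 30;
    }

    for (const dollars of [1, 50, 625, 5000, 100000]) {
      const cents = dollars * 100;
      console.log(
        `$${dollars}: ACH fee $${(achFeeCents(cents) / 100).toFixed(2)}, ` +
        `card fee $${(cardFeeCents(cents) / 100).toFixed(2)}`
      );
    }
    // $625 is the crossover point where 0.8% first hits the $5 cap.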
------
lsh123
From a consumer perspective, paying via ACH is much more dangerous compared to
paying with a credit card. In case of fraud (e.g. hacked payment provider),
recovering from ACH fraud is much harder than recovering from credit card
fraud. While you will likely get your money back in the end, it is also likely
that you will spend many months talking to your bank (not to mention returned
checks or other payments in the meantime).
Overall, I strongly recommend to NEVER pay via ACH from an account that is
used for other purposes. Personally I have a special account at my bank just
for rare ACH payments I need to make. I can transfer money instantly from my
primary account when I need to make a payment. And of course this special
account has all the overdraft protections, etc. disabled.
------
wesleyfsmith
Making this system integrate with Plaid out of the box was a smart move. Plaid
is so much better for users than making them manually enter bank account and
routing numbers.
------
tomschlick
Looks like the TOS acceptance button on this page is spitting out a 500 error
[https://stripe.com/docs/guides/ach](https://stripe.com/docs/guides/ach)
~~~
pc
Sorry about that -- this only affects a small number of users, but we'll have
it fixed momentarily.
~~~
raycmorgan
This has now been fixed. Sorry again for the trouble!
~~~
tomschlick
Thanks! Just confirmed it's working.
------
Frozenlock
> "And so, today, we’re delighted to launch support for ACH payments for all
> U.S. Stripe users."
Any ETAs for international? More particularly Canadaland?
~~~
petercooper
Hopefully they'll get there. We're a UK based company with 90% of our revenue
from US based enterprises, so moving to this would be awesome :)
------
jblake
I wish the pricing was more competitive. In the US, I've used Beanstream -
which is $5/mo + $0.25 flat per transfer. In Canada I've used Versapay - $0/mo
+ $1 per transfer.
------
fideloper
This is super important for b2b companies that need to accept payments of
multiple thousands of dollars (e.g. Annual subscription / support), which
largerish companies typically will not hand over a cc for, either due to CC
limitations or (usually) internal process red tape.
~~~
tyingq
I would say both yes, and no.
My experience in B2B is that medium to large companies will want to pay you
via ACH. However they have little to no interest in entering their online
banking ID and password into something like what Stripe is providing.
They already make ACH payments, using their own tools, and all they want from
you is your routing and account number so they can do so. Assuming you are a
US company, with a US bank account, you've already been able to accept ACH
payments in this way for some time.
This setup from Stripe seems to be targeted at consumers, or maybe small
business owners, as the purchaser. I'm sure that's a need, but probably not a
huge one. Many of my customers that are medium/large businesses want to make
ACH payments, almost none of the small businesses have an interest in that.
~~~
pcunite
This is my experience too. When a customer wants to buy my product and the
cost is over $5K, I give them the following and it shows up in my bank in
about 3 days or so.
Bank Address, Swift Code, Account Number, Routing Number.
------
ck2
Wait, is there no minimum?
Because 0.8% on very small amounts is very practical for micro payments.
$1 payment would cost a fraction of a penny.
What am I missing?
This will be game changing if my observation is correct.
~~~
oddevan
I'd wager the only thing you're missing is the high friction involved in
setting up a bank account. I've got my credit card stored in my password
manager vault, and even if not, I've got that sucker memorized. Contrast that
with digging through the clutter on multiple horizontal surfaces in our house
to find a checkbook so I can find the routing number and account number,
waiting for the two deposits to show up (because I use a small bank that
hardly ever shows up on these "automatically authenticate" lists), and then
trying to remember what I was trying to spend $1 on...
But that's literally the only problem. The fee structure is AWESOME for micro-
payments, so for the right things (small subscriptions?) this could be
amazing.
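For reference, the two-deposit dance described above is only a couple of API calls on the merchant side. A hedged sketch using Stripe's published test routing and account numbers; the customer details and the [32, 45] amounts are placeholders for whatever the user actually reports back days later.

    import Stripe from "stripe";
    const stripe = new Stripe("sk_test_..."); // test-mode key; sketch only

    async function attachAndVerifyBankAccount() {
      // 1. Tokenize the routing/account number the customer typed in (Stripe's test values shown).
      const token = await stripe.tokens.create({
        bank_account: {
          country: "US",
          currency: "usd",
          account_holder_name: "Jane Doe",
          account_holder_type: "individual",
          routing_number: "110000000",
          account_number: "000123456789",
        },
      });

      // 2. Attach it to a customer; Stripe then sends two small deposits to that account.
      const customer = await stripe.customers.create({ source: token.id });

      // 3. A few days later the customer reports the two amounts (in cents) and the account is verified.
      await stripe.customers.verifySource(customer.id, customer.default_source as string, {
        amounts: [32, 45],
      });
    }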
~~~
jedberg
But once you've set up your bank account once, it will work on every site that
takes Stripe.
That's the big sell as a merchant -- chances are your user is already
configured for payments. And the larger their network grows the more likely
this is true.
~~~
nemothekid
I'm pretty sure this isn't true. If both Merchant A and Merchant B are using
stripe, and I authorize my bank with Merchant A, that doesn't mean my bank is
authorized with Merchant B.
Especially because in most cases, you can be totally transparent with the fact
that you are using Stripe.
------
whazor
Non-USA person here, I am amazed that this is even allowed. Giving your
username and password to a third party? Think about all the security problems,
like creating a fake Stripe. You are basically trusting every website you use.
In my opinion, there should be a free and open protocol between banks for
payments, where you get redirected after you click the pay button.
------
arohner
Can this be used, or set up, in a way that supports the standard invoicing
model?
i.e. I want an api call to email a .pdf to a customer, and then have them
"click this link and enter your info to pay via checking account". Several
kinds of customers, including consulting, would be much happier with a "push"
model of paying, than a "pull" model.
~~~
jaredtking
Our startup, Invoiced, does exactly this. You can easily send invoices through
our API [1] that customers can pay from the email. Now that Stripe ACH
payments have been released we will be supporting it in the near future.
[1] [https://invoiced.com/docs/api/#send-an-
invoice](https://invoiced.com/docs/api/#send-an-invoice)
------
dang
[https://news.ycombinator.com/item?id=10889918](https://news.ycombinator.com/item?id=10889918)
looks like another announcement about this. It was posted slightly later so
we'll treat it as the duplicate in this case.
Eventually we'll have some form of URL grouping when there are multiple
stories on a topic.
------
wesleyfsmith
I am trying to build a lending platform where we by definition cannot afford
to lose 3% on each transaction. I am thrilled to see this. We have had to
hand jam our current system with a local credit union (where we literally have
to print out forms and bring it to them), so being able to switch to a fully
programmatic system is awesome.
------
itsthisjustin
Just got my beta invite too. Got frustrated at the lack of an easy ACH
payments platform long ago though and went ahead and made my own. Launched
[http://paynote.io](http://paynote.io) on January 1st of this year.
~~~
niij
This looks very interesting. Tell me more! How are you offsetting the cost of
fraud when payments by fraudsters will cost them nothing?
~~~
itsthisjustin
Well it's not for ecommerce for one thing. So fraud will be a lot less just in
terms of use case here. This is individuals and companies paying invoices
for services rendered. There's not an area where a fraudster would gain a lot
from a fraudulent payment. The ACH Payment system I'm using is also FDIC
insured up to 250k.
~~~
0xEFF
Is this for real? I just started a consulting business and have been searching
for a good, inexpensive way to receive relatively large payments on my
invoices. How are you going to make money? Is this good up to $50K?
EDIT: I just tried to sign up, and this seems _really_ scammy. You need a
photo ID uploaded, along with my bank account details? How do I have any
assurance you're not going to drain my bank account?
~~~
itsthisjustin
Interesting you find that scammy. It is for real though. I'm a consultant and
it's been a real pain trying to collect large payments without getting killed
on fees. The information is required by law by the Patriot Act for any bank
transfers done over the internet. I can't get around it. It's required by law
and by my ACH provider.
~~~
0xEFF
I hear you. I'm not saying it _is_ scammy, just saying my scam-radar pinged
pretty loudly.
I'm all for ease-of use, but the flow went:
* Ah, a new service I could use!
* Sign up, cool, that was easy. Just needed my email.
* Verified my email with the 6 digit code, that's cool, sorta like Square.
* Sign in... Verify identity... OK, that makes sense. Hang on a tick, you want a copy of my ID? That's the first thing I'm asked for? Who's asking?
It's the who's asking bit that made me stop in my tracks.
I'm in the same boat and have eaten the high fees in order to get paid quickly
when I was first starting out. It's a great service, if it's for real...
------
kyriakos
Stripe should start supporting the rest of the world. I'd love to use their
services.
~~~
mcollier000
GoCardless is another ycombinator business providing a very similar platform
for the whole of the UK & Europe.
[https://gocardless.com/](https://gocardless.com/)
------
chadkruse
@pc - curious if you have plans to integrate ACH into your Checkout product.
Seems like an incredibly difficult UX problem, but I'm sure your folks are up
to the challenge :)
~~~
pc
No immediate plans. ACH is _mostly_ popular for B2B use-cases whereas Checkout
is optimized for "lightweight" transactions. You can obviously imagine cases
where an integration would be useful but we decided not to hold up the launch
in order to build that. We'll calibrate based on feedback going forward.
~~~
rwmurrayVT
@pc How do you feel about the assumed fraud risk you are taking on with this?
I'd love to have a conversation with you about it (in private). You can email
me at rwmurray (@) vt.edu. I know you all have quite a few vulnerabilities in
your system and can't ever find a way to share them.
~~~
pc
You should definitely email the fraud team! Try [email protected]. (Feel free to
CC me; [email protected].)
------
Eliezer
Did micropayments suddenly become possible for the first time? Would this be a
great service to charge 1000 users $0.10 each? Or for that matter, for selling
ebooks?
~~~
dtwhitney
I think it would need to support payments in the opposite direction too --
deposits into someone's account. I'd love to see this!
~~~
matthewarkin
Stripe Connect allows sending money to other peoples account
~~~
BillinghamJ
Other people's Stripe accounts yes, but not just arbitrary accounts.
~~~
matthewarkin
With Managed Accounts you can create accounts for other people via the API;
beyond a line in your terms of service, they don't need to be aware of Stripe
at all.
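A minimal sketch of what that can look like. Note that Stripe has since renamed managed accounts to "Custom" accounts (type: "custom"), and a real integration needs capability requests, terms-of-service acceptance, and identity details that are omitted here; the email, amounts, and IDs are made up.

    import Stripe from "stripe";
    const stripe = new Stripe("sk_test_..."); // the platform's secret key; sketch only

    async function payoutToPlatformManagedAccount() {
      // Create a platform-managed ("custom") account the recipient never has to see directly.
      // Real accounts also need capabilities, ToS acceptance, and identity info per Stripe's docs.
      const account = await stripe.accounts.create({
        type: "custom",
        country: "US",
        email: "[email protected]", // placeholder
      });

      // Move $10.00 from the platform's Stripe balance to that connected account.
      await stripe.transfers.create({
        amount: 1000,
        currency: "usd",
        destination: account.id,
      });
    }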
------
joshmn
Just an FYI to everyone accepting payments: Fighting chargebacks for ACH
payments is a lot harder than just calling up a cardholder's bank. With Stripe
now making ACH payments easier, I suspect the card shops will see a rise
in routing/account numbers for sale.
And as always, just a friendly reminder Stripe's "fraud protection" is beyond
trivial to circumvent, so don't think they're doing you a favor.
------
mschuster91
From what I gather from the comments, you as a customer have to log in to your
bank's online banking portal to initiate the transfers.
Don't you US guys have an equivalent to SEPA direct debits where I just give
the IBAN bank account number and the merchant will automatically deduct the
money from my account?
If so, then no wonder why US people are so dependent on credit cards...
------
Brushfire
This is great.
Next up: I'd love to see true Debit card transactions (i.e. With Pin) at Debit
prices (perhaps 10-30c regardless of transaction size). That could really
enable a ton of smaller items to be sold to a very large audience.
------
ROFISH
So is it possible to do simple ACH->ACH deposits? I'm looking to take money out
of my bank account and put it into another (royalty payments, for the curious).
Ideally without having to hold a Stripe balance, just a simple xfer.
~~~
matthewarkin
You could take a look at combining ACH and Stripe Connect.
------
kevindeasis
So with ACH I can directly bill my customers' accounts. ACH lets me bill them
for multiple or single payments. So, with this I can do person to person (p2p),
business to business (b2b), and business to consumer (b2c) payments.
That is cool.
------
coupdejarnac
Somewhat offtopic- is there a way to use Stripe to hold payment in escrow or
to authorize a payment more than 7 days in advance? I really like Stripe,
though I am not sure it can do what I need.
~~~
matthewarkin
Using Stripe Connect and Managed Accounts you can hold funds for 30 days. It's
not legally escrow though (escrow tends to have a specific legal definition
and set of rules).
------
staticautomatic
Praise Jesus I've been waiting for this.
------
thedogeye
We've been using the beta for about 6 months and find it to be incredibly
reliable. Thanks Stripe people.
------
calgaryeng
Is there an ETA on availability in Canada?
~~~
itsthisjustin
Definitely working on international. You will be able to accept payments (not
send) this week in the form of a paper check mailed to you. So when you claim,
just check the box for mail me a paper check and you'll get one in the mail
once the payment clears.
------
dubcanada
I wonder if this means Interac Online will come soon :O that's a very big deal
for the Canadian market.
~~~
tomschlick
Hopefully. I've had to integrate with that in the past and it was not a fun
project.
I faintly remember them passing back html to inject to the user that contained
iframes and a bunch of other junk javascript.
------
needusername
So does the ACH do settlement rather than clearing as the name suggests?
------
orliesaurus
Sooooo as someone who used to live in the UK, should GoCardless tremble ?
------
throway1234
n00b alert: Can someone ELI5 differences between the solutions/products
offered by Stripe and other payment related companies like Square, Braintree?
------
rw2
This opens up so much
Cypherpunk rising: WikiLeaks, encryption, and the coming surveillance dystopia - LoganCale
http://www.theverge.com/2013/3/7/4036040/cypherpunks-julian-assange-wikileaks-encryption-surveillance-dystopia
======
LoganCale
Relevant to this, where have the cypherpunks gone? The mailing lists fizzled
out in the early 2000s, but given recent developments, has anyone begun
organizing the movement again? (Or has it kept going via other means?)
Android 8.0 Oreo, thoroughly reviewed - Yossi_Frenkel
https://arstechnica.com/gadgets/2017/09/android-8-0-oreo-thoroughly-reviewed/
======
lukepothier
If it's true that Chrome on Oreo genuinely prevents picture-in-picture _for
YouTube specifically_ , that's a troubling precedent to set. Why not allow all
websites to enable/disable PiP (maybe via a meta tag, in the way that tab
theme colours already work)? If I query
[https://m.youtube.com](https://m.youtube.com) while emulating a Nexus 5X, the
response contains:
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no, target-densityDpi=medium-dpi">
Is that what is preventing PiP on YouTube or is it some sort of underhanded
inter-Google arrangement between Chrome and YouTube?
~~~
kinlan
Shared below too.
Just as a quick point and I will drop it in the article comments too. I'm the
lead for our Chrome Developer Relations team.
Chrome doesn't disable picture in picture for Youtube, Youtube disable it in
Chrome. They listen to resize events iirc and then exit fullscreen mode (the
only way to currently get to pip mode in Chrome).
~~~
kuschku
Then Chrome should have an option to stop the resize handler from firing when
entering PiP mode. The user should always have the right to override what the
website is doing.
~~~
kinlan
So you would like us to not tell the page what the renderer is doing? I'm not
sure how that would play out for any number of APIs that exist on the web.
Developers have consistently had the means to override the default actions of
the browser (think drag and drop), but I don't think hiding side-effects or
user actions helps anyone.
~~~
avaer
I'd like to be able to tell my browser what it should tell the pages it's
loading, especially if pages are leveraging it to do things against my will.
That applies to blocking Google ads, as well as fixing Youtube malfeatures.
Of course, it's understandable you won't see this from a browser paid for by
Google. But you can't paint in broad brush strokes like "I don't think hiding
side-effects or user actions helps anyone."
~~~
kinlan
I'm disputing the suggestion that this was a casual, offhand design decision,
and I don't think hiding the state of the renderer or the browser helps anyone,
least of all developers who need information about the state of their page.
I'm not saying that there can't be a meaningful response from the browser to
user-hostile actions; I don't think anyone disagrees.
There's a broader question about a user's will versus the site's intent,
especially when it comes to the site's business plans. I'm not sure whether
access to features native to the browser is aligned with, say, ad blocking or
tracking, etc... I don't know.
~~~
Aaargh20318
> There's a broader question about a user's will and the sites intent
> especially when it comes to business plans of the site
The site's business plans are not my problem. Basically, _my_ phone and _my_
computer should do what _I_ want. Why is there even an API to make a video
player enter/exit full-screen mode ? That's 100% a user decision and there is
no valid reason why that should ever be exposed to JS.
~~~
Touche
What if I want to present an X button that the user can click to exit
fullscreen? How would I do that without a programmable API?
~~~
pas
That should be a browser option. (Show exit full-screen button or not.)
Also, especially for video, the browser should be able to play it full screen
without any distractions.
Of course, there are optional enhancements (subtitles, or different audio
tracks) driven via JS. And for those the controls have to go somewhere.
Ideally, if there were a standard for those, the browser could handle it. (But
then we're at the problem of an ever bloating browser.)
~~~
Touche
Sounds like you are advocating for something very different than the current
APIs. You're asking that the browser define its own UI for an exit button. How
does it know where to put that? What if it is a game in a `<canvas>` element
and the button overlays some important UI in the game?
I think you're overreacting to one bad-actor. Inevitably your suggestion here
leads to good-actor pages having much less power to present good UI to its
users. The browser has to think of all use-cases and have options for that,
rather than defining lower-level hooks that pages can do what they want with.
Would it make you feel better that there already many other ways that pages
can do user-hostile things? Have you ever visited a page that blocks right-
click? Would you want to forbid Mouse Events because of this?
~~~
pas
I like the trust model that current browsers use. If I trust a page, it can
use the full viewport or screen, and a lot of keys, etc.
> You're asking that the browser define its own UI for an exit button.
Yes. Currently firefox puts a "to exit full screen press esc" OSD already on
videos, that also interferes with visual presentation of sites/directors. So
... directors already don't put shit there.
The same thing goes for walled gardens (like Apple's - they don't allow some
things), the problem is not that it's curated, the problem is that there are
insufficient tools available for users to put their walls where they want.
Yes, by default I don't want to allow blocking right click. (You might be
familiar with the saga of this bug
[https://bugzilla.mozilla.org/show_bug.cgi?id=78414](https://bugzilla.mozilla.org/show_bug.cgi?id=78414)
. )
------
TheAceOfHearts
How many people build their own ROMs? When I bought my Pixel I tried my hand
at building my own ROM, but I kept stumbling and eventually gave up.
Admittedly, I was running a newer version of Ubuntu than the one they
suggested in the docs, but I didn't expect it would break everything. Am I
right in guessing that using a VM would be the easiest path to success?
I've been a bit frustrated with android's sparse docs and unreliable build
tool. Whenever I try searching for additional information, all I find are
questionable tutorials and xda-developer threads. I don't have anything
against the forum itself, but I find it a bit questionable to see so many
people happily propagating and flashing random binaries. Is xda-developers
still the main place to go when looking for help, or have any other
communities started to overthrow them?
Now with 8.0 out, it seems like a good chance to retry building my own ROM.
I'm thinking of forking CopperheadOS [0], and applying some minor patches on
top. Is anyone here running their own ROM, or Copperhead in particular? I'd
love to hear about your experience, along with any pros and cons. F-Droid
seems to be capable of handling all of my requirements. My only big remaining
concern would be with Project Fi; I'm uncertain if the service will work if
Google Apps aren't installed.
[0] [https://copperhead.co/android/](https://copperhead.co/android/)
~~~
Veratyr
> Am I right in guessing that using a VM would be the easiest path to success?
I've been able to successfully build Android on Arch Linux. Depending on which
particular ROM you're building, much of the required build environment is
packaged with the source. I just needed repo's dependencies and to rebuild the
prebuilt Bison with the included source.
> I've been a bit frustrated with android's sparse docs and unreliable build
> tool.
The guide at
[https://source.android.com/source/](https://source.android.com/source/)
should result in a working AOSP build. I've followed it before without
problems. My guess is you're trying to do something weird that makes sense to
you that isn't really supported.
> Is xda-developers still the main place to go when looking for help, or have
> any other communities started to overthrow them?
XDA is still the primary place ROM development is carried out, yes. Finding a
ROM's IRC channel can be helpful too.
> Is anyone here running their own ROM, or Copperhead in particular?
I built Copperhead for my Pixel but unfortunately it resulted in a bootloop so
I gave up on it. I'm running PureNexus right now and might try Copperhead
later.
> My only big remaining concern would be with Project Fi; I'm uncertain if the
> service will work if Google Apps aren't installed.
I don't think it'll work totally bereft of Google services but you might be
able to manage it with MicroG: [https://microg.org/](https://microg.org/)
------
awjr
The summary from page 9
[https://arstechnica.com/gadgets/2017/09/android-8-0-oreo-
tho...](https://arstechnica.com/gadgets/2017/09/android-8-0-oreo-thoroughly-
reviewed/9/)
The Good
Project Treble isn't a silver bullet for Android's update problems, but it's
the first time in a long time Google has changed Android to make system update
development easier.
I love the smaller "by the way" notification section. It really cleans up the
notification panel, while still letting the user read less-important
notifications at their leisure. I just wish I could demote any app to "less
important," regardless of what version of Android it targets.
The automatically-colored media notifications look amazing! Sometimes I cycle
through songs with the notification panel just to see what it comes up with.
The background processing lockdown has been a long time coming. Finally, we'll
see the end of wakelocks.
Picture-in-picture on a phone is great for videos, and Google's experiments
with things like Google Maps look very promising.
EmojiCompat and downloadable fonts means Android users should get new emojis
super fast. You don't even need Android O for this to work—it will work on
Android 4.4 and up!
The Bad
Google's revamp of notification controls has the side effect of removing fine-
grained notification controls for most apps. We'll have to wait for every app
to upgrade to get the controls back.
The ambient notification display gets a huge downgrade, changing from showing
the full notification panel to only showing tiny status bar icons.
Snoozing notifications could be a great feature, but the timing options are so
limited that it's useless. A max of one hour? Seriously? Give me a time
picker.
The disabling of Chrome's picture-in-picture support specifically for
youtube.com is downright sleazy. That's not how Web browsers are supposed to
act.
The Ugly
Updates—they're still a huge problem. Here's hoping Treble actually helps.
~~~
kinlan
Just as a quick point and I will drop it in the article comments too. I'm the
lead for our Chrome Developer Relations team.
Chrome doesn't disable picture-in-picture for YouTube; YouTube disables it in
Chrome. They listen to resize events, iirc, and then exit fullscreen mode (the
only way to currently get to PiP mode in Chrome).
------
Jedd
After my Pixel updated recently to Oreo, being frustrated by a constant
notification for Twilight (a red-tinting background app) and starting to
search on google for a solution -- I discovered quite a lot of people have
been asking 'how to suppress notifications on oreo' recently.
FWIW you _can_ control the notification snooze timeout, but you're going to
need Tasker [1] (or similar). Interestingly I evidently bought this several
years ago - can't remember why - and haven't installed it on at least the last
two phones, since some version of Android a few years ago shipped with
whatever native functionality Tasker used to provide for me.
[1]
[https://play.google.com/store/apps/details?id=net.dinglisch....](https://play.google.com/store/apps/details?id=net.dinglisch.android.taskerm&hl=en)
~~~
discreditable
Were you aware that Android has built-in tinting capability? It's under
Display > Night Light. There's a quick tile for it too.
~~~
joshuamorton
That's only available on 7.0+ phones with hardware support (currently I think
only the pixels?)
~~~
sangnoir
Context (emphasis mine):
>> After my _Pixel_ updated recently to _Oreo_ , being frustrated by a
constant notification for _Twilight_
~~~
on_and_off
Twilight is a third party app.
It draws a red layer on top of everything else. This is why the system
displays a notification: it wants you to know that an app is drawing on top
of everything.
The system setting works very differently: it directly applies a matrix to
what needs to be displayed. This is way more efficient, since applying this
matrix is basically free instead of adding one layer with alpha composition.
It also allows other customizations, like adapting the screen for
deuteranomaly.
Ideally, Google should open the API behind this feature at some point.
------
AndrewDucker
I knew Project Treble was exciting, but this is really interesting:
>Treble promises to change everything. Malchev says that Treble standardizes
Android hardware support to such a degree that generic Android builds compiled
from AOSP can boot and run on every Treble device. In fact, these "raw AOSP"
builds are what will be used for some of the CTS testing Google requires all
Android OEMs to pass in order to license the Google apps—it's not just that
they should work, they are required to work.
Ron paints a rosy future here:
>Custom ROMs shouldn't need to be painstakingly hand-crafted for individual
devices anymore—a single build should be able to cover multiple Treble devices
from multiple manufacturers. Imagine the next time a major new version of
Android is released—on Day One of the AOSP code drop, a single build (or a
small handful of builds) could cover every Treble device with an unlocked
bootloader, with a "download Android 9.0 here" link on XDA or some other
technical website.
If this comes to fruition, the ROM community is going to go nuts. This is
enormously exciting and Oreo will turn out to be a real turning point for
Android.
One thing that is interesting though is the implication that Android updates
will get more iOS-y in the future. By that I mean certain features will be
missing from updated phones because the HAL layer doesn't support it.
(Copied over from previous discussion here:
[https://news.ycombinator.com/item?id=15167138](https://news.ycombinator.com/item?id=15167138))
~~~
epmaybe
> By that I mean certain features will be missing from updated phones because
> the HAL layer doesn't support it.
What features are you referring to? Also, will that be worse than the current
situation for Android devices?
------
haddr
Quick comment on notifications: it seems that Android has really worked them
out nicely. Now I even more prefer Android notifications to iPhone
notifications.
~~~
rovek
Having to confirm dismissal of a notification is one of my biggest of myriad
gripes with atrocious iOS design after 4 months.
------
wyclif
Perhaps slightly off-topic, but what are the best hardware/carrier options in
USA currently if you want vanilla Oreo that updates reliably instead of a
custom ROM (w/ no carrier or third-party overlays and skins)?
~~~
Navarr
Basically just pick up a Pixel and put it on whatever carrier you prefer.
I prefer T-Mobile.
------
maufl
Can somebody say something about what Treble means for other open source OS
for phones, e.g. Sailfish? Might it be possible to use the vendor provided
Treble core for other OSs?
~~~
pas
Yes. The question is access and licencing.
------
afro88
The music notifications with colours chosen from the artwork is really nice.
Does anyone know much about the method behind this?
~~~
pennaMan
I'm guessing they are using the Palette class from the support libs to extract
the colors from the artwork:
[https://developer.android.com/reference/android/support/v7/g...](https://developer.android.com/reference/android/support/v7/graphics/Palette.html)
------
mtgx
Keeping my fingers crossed that Project Treble is the real deal and would
actually make it possible for a "generic AOSP image" to be installed on
_virtually any mobile device_ that came with Android. That's the dream, isn't
it?
But I think we've been burned too many times by Google's promises to "make it
easier" for OEMs to update devices and other such promises, or at least these
projects always sounded much better than they turned out to be. Hopefully this
time it is different.
I would be curious to know when Project Treble started. I imagine something
like this, and if it was serious enough, would take 3-4 years of development
and thought put into it? If it's less than two years then I would probably be
worried about just how much thought and development Google put into it. I
would also be disappointed that Google only started taking such a project
seriously two years ago - or _seven years_ after Android officially launched.
Some could say this "feature" should have been enabled from day one.
~~~
vanderZwan
Is Project Treble "backwards compatible" with old models? If so I hope it will
work out for Fairphone users, especially the first model which is doesn't
receive as much support as the Fairphone 2
~~~
Mindwipe
No. In theory you could backport it to older phones, and OEMs allegedly might,
but I wouldn't expect that to be the norm and it's not mandatory from Google
in order to get Oreo.
------
sdotsen
Can anyone confirm if they brought back the ability to select a wifi/Bluetooth
connection via a drop-down from the swipe down option? You know, without
having to click and go into the screen to select the device you want to
connect to.
~~~
Mindwipe
They did (I don't think it was ever gone as such, they just messed up the
gesture, but it's back).
------
sahaskatta
This was particularly interesting:
>Google shared a fun statistic at I/O 2017: The company expects one-third of
Android devices shipped in 2017 to cost under $100.
Even more so considering that the US is the 2nd largest market for phones of
that price point.
> While the long term goal is to tackle the 5 billion users without internet
> access, immediately this helps the US market too. Google says the US will be
> the second most popular market for these sub-$100 phones.
------
edoceo
The upgrade reset/reenabled all the audio alerts for mail, calendar and SMS
that I had disabled. Why did Google reset my notification preferences!?
------
kijin
The third set of pictures on page 4 shows a "Rotate" notification that is 47
years old. A timestamp must have been set to zero somewhere...
------
Severian
I really hope they updated the audio stack to make latency lower. Right now
doing anything sub 10ms with audio is impossible.
~~~
eco
It says right in the article that there is a new low latency audio API.
~~~
Severian
Yeah, I saw that at the end of the article. A little short on details. I hope
it really is low latency though, and can be used for real-time synths,
trackers, dj mixers, etc. It would be nice in the future to have a small
attachable audio interface we could hook up to our phones or tablets and work
out some tunes on the go. I hope Android can start doing the same things as
iOS in this regard.
Hopefully we'll see some reviews once hardware and apps make a foray into this
new API.
------
hateful
They lost me at "light grey notification panel"? I spent some time trying to
get the dark grey one to black with no luck - and now it's even worse.
~~~
harrygeez
You can probably achieve it by modifying framework-res.apk. At least it worked
on Nougat; I copied the colors from the Pixel to get its blue theme on my Nexus 5.
------
zeep
A bit off-topic but will Android P/9.0 be the last version before you have to
switch to Fuchsia?
~~~
on_and_off
There are no official plans for Fuchsia.
If anything O makes you think that Google plans to keep Android around for the
long run since it aims at putting it in a good architectural position for the
next 5/10 years.
------
wavesplash
Oreo Battery life is atrocious on Nexus 5X. Do not upgrade if you can avoid
it.
~~~
bostand
Stop that!
You cannot judge battery life after only a few days, specially after an
update.
~~~
oelmekki
Would you care to explain why? (provided that when we speak of battery life,
we mean how much time it can go without recharging, and not after how many
years you have to change it)
~~~
philjohn
Because when you install a new OS there are things that are optimized in the
background, and you also tend to use it more because of the "new and shiny"
factor.
~~~
oelmekki
Sounds legit, thanks for explaining.
------
richardwhiuk
The link here is to page 3 - any chance a mod can fix it to go to the start of
the article?
~~~
sctb
Thanks, updated!
------
xbmcuser
Duplicate
[https://news.ycombinator.com/item?id=15171605](https://news.ycombinator.com/item?id=15171605)
Remote Procedure and Event Protocol - billytetrud
https://github.com/Tixit/RPEP
======
billytetrud
I wrote and implemented a similar protocol to RPEP in 2013 and have been using
it personally, but haven't released it. In my recent search of something
similar I found the unfortunately named WAMP (Web Application Messaging
Protocol). Despite its goal of being simple and easy to implement, I found the
spec complex and bloated. It doesn't stick to being a messaging protocol, also
defining routing and brokerage - unnecessarily so in my opinion. I subscribe
to the single-responsibility principle, and WAMP seems to violate that. So I
wrote up this RPEP spec and want to know what people think about it.
If anyone wants to contribute to RPEP by either making implementations or
discussing changes to the spec, please let me know.
Ask HN: Asynchronous customer service - swah
I'd like to know if this already exists or if this is stupid.<p>All customer services in my country operate in "Blocked IO" mode: you call them, you wait 30 minutes for someone to answer you, and sometimes they even hang-up.<p>The alternative I'm thinking is: you call, automatically your number is queued, and someone will call you back on your cellphone.<p>Apart from the calling costs, is there any other reason the latter system isn't a win-win solution?
======
Travis
Both Southwest Airlines and Amazon.com do this in the U.S. They've both called
my cell phone before to do it.
With amazon, it's actually even easier. IIRC, you can click a button on the
site to queue you, and that's what gets you in the queue.
Other corporate phone tress I've called before have had a similar option.
After a given time they give you the option to leave a voicemail, and they'll
return your call.
I think the major obstacle is psychological. People don't actually believe in
their lizard brains that they won't lose their spot in line unless they're
standing there to preserve it. Even though being on hold isn't the same (after
all, it's not like you can _see_ the people in front of you, to ensure no
cutting in line), I think people interpret it the same.
That, and, async like that would probably make people feel like they're being
put off. Whereas with blocked IO, you think, "it's not that they don't care,
it's just that they're busy right now."
But I agree -- I love it. I think people's attitudes will shift towards the
system you're suggesting. It'll just take time.
~~~
swah
Thanks. I felt this should exist somewhere already. Here, some systems say you
have an "waiting time: 10 minutes" every 10 minutes.
But what can I do with this, startup-wise? :) (It's a problem to myself also)
------
seven
IBM (in Germany) did this at least some years ago.
Called the hotline, a nice computer voice told me that all lines are blocked
and that they will call me back in 12 minutes.
After 12 minutes my phone rang and a friendly lady apologised for letting me
wait and I told her about my problem.
I am still impressed.
------
pierrefar
I've heard of exactly this from US and UK companies, but their names escape me
right now. And this is a biased sample because I don't read much news in other
languages.
------
inerte
Calling a cellphone is pretty expensive...
But anyway, the consumer might be busy, I guess? Maybe that's a pretty big
reason, but I never worked in call centers / telemarketing.
Show HN: Globlog – One post a day. Anyone can submit their post for publication - mapehe
http://globlog.xyz/
======
mapehe
This is a format I thought about a long time ago. No idea if it makes any
sense, but decided to give it a shot and put it out there anyway. Making it
was also a good few hour exercise.
Maybe there would be room for something like this these days when you are
overwhelmed by information all the time.
------
nitemice
Site is down. Showing Apache2 Ubuntu Default Page.
How the Internet Is Broken: Big Questions and Bad Answers - rhema
https://nextbison.wordpress.com/2019/01/07/how-the-internet-is-broken/
======
k9s9
I am going to keep posting the ledger of harms -
[https://ledger.humanetech.com/](https://ledger.humanetech.com/)
Until we see all the issues in one place like a github issue tracker,
solutions will always be piece meal and confined to personal experience.
We need more issue trackers like this, so that people with solutions to one
issue or another have some sense of the big picture. Also Big Tech cant weasel
out of one issue by talking selectively about another.
Who knows, maybe one day we will get to proper integration/system wide testing
for fixes to social issues...
~~~
DyslexicAtheist
I love this concept. I think it could benefit by branching out in a way that
tracks different companies, then create different branches for these companies
to track the people making these decisions. There is a huge problem with
greedy and horrible individuals hiding behind a corporate structure. A company
is by definition a group of people and since we treat companies now legally
the same as people the same rules must apply. The reason why this works is the
same as why shaming individuals who rape people (Weinstein, Stacey etc) works.
The only way to improve things is by moving the goalposts in corporate ethics
the same way it has been moved by the #metoo movement for sexual violence.
the reason why I think this works is that ethics is an individual thing that
can only be applied on an individual level. If a CxO suggests to set up
offshore structures to channel profits away then burn them to the ground
(figuratively).
the tech to do it is already there: the darknet!
~~~
k9s9
"The only way" you are suggesting is not the only way.
Changing people's thinking and behavior, especially that of highly misguided
characters in power, requires as Psychologist Marshall Rosenberg would say,
choosing between Violence and Compassion.
Ideally for progress we want people to change the way they think about what
they are doing, not spend their time defending it or how to get away with it.
The latter being what we manage to keep doing.
This is where the choice of approach we take makes a huge difference.
Our Natural instinct is to choose violence, punishment, judgement, name and
shame etc. We want to see heads role for suffering caused.
But this approach takes the focus off the suffering of the victim, and puts
focus on how to cause pain to the perpetrator. It doesn't get perpetrators to
change the way they think - to reflect. Instead they react - play defense, and
spend their time and resources on avoiding and skirting punishment.
Think about this. Think about what Gandhi, MLK, Mandala did by not choosing
this route. They didn't allow perpetrators to play that game. They showed us
there is a big difference in outcomes when we tell a man - look at the pain
you have caused VS you are <LABEL> and will be punished for the pain you have
caused.
This ledger of harms concept is great, because it keeps focus on the suffering
produced. It makes people in power squirm and reflect. Which is what plants
the seeds of change. They will feel the need to change, just as the British
Empire, US Govt or the South African govts did.
But as soon as you add punishment to each issue their focus will shift to self
defense at all cost.
------
packet_nerd
> We need to teach our children to observe the world accurately and debunk
> pseudo-science.
I'm not a scientist and don't have the depth of insight in many fields that I
feel I could always accurately debunk pseudo-science. Most people aren't
experts, and most kids certainly aren't. I've spent my career getting the
measly amount of domain expertise I do have, and that's only in a very narrow
IT field.
Rather, we need to focus on learning to better evaluate authorities. Who's the
most qualified source of information? Which of several competing theories have
the most qualified proponents? What incentives might they have to color what
they believe?
On the web its really hard to know who's who. Mostly we have nothing better
than the tone and look of the site and domain name. That's really pretty weak
if you think about it! I'd definitely be interested in a system of some kind
(browser extension?) that added a little more information.
~~~
drKarl
That's interesting. I always struggle to understand why people believe in a
god (or many), or that the Earth is flat and dismiss Science and evidence on
the other direction. From the point of view of something that you can't
understand, and you choose to delegate to an Authority... and you choose which
Authority to trust... if you've been indoctrinated into religion you'll be
more prone to choose a priest/pastor or a magic book as your source of
Authority, even in things like the origin of the universe and where does life
and animals and humans come from. Other people choose David Ike as Authority
and believe there's a reptilian master race ruling humanity for millennia, and
so on.
------
ohiovr
A lot of people could benefit from more skepticism. Thinking things through is
not just work but it can be a hell of a lot of work. People naturally don’t
like to work. It is easy and fun to think someone’s pipe dream will provide
energy without cost. And it is easy to think that the such and such ethnic
group is the cause of all our problems instead of thorough analysis of the
issues.
The internet is mostly ok (besides the occasional bgp attack)
The web is really mostly rubbish though.
There is no way to fix the fact that a lot of people are wrong on the web.
Anymore I don’t feel a huge need to try to correct people.
~~~
Droobfest
_> A lot of people could benefit from more skepticism. Thinking things through
is not just work but it can be a hell of a lot of work. People naturally don’t
like to work. It is easy and fun to think someone’s pipe dream will provide
energy without cost. And it is easy to think that the such and such ethnic
group is the cause of all our problems instead of thorough analysis of the
issues._
Could you maybe also benefit from more scepticism of your own opinions?
I ask this, because of the way you project the opinion of a 'wrong' group of
people with two simplified strawmen. I don't think many people exist in the
way presented. At least not in a quantity that makes the web rubbish.
It's easy to think other people are simplifying things too much, but maybe
that's just an oversimplification itself..
~~~
ohiovr
We are allowed to be wrong. That is why I don’t care to correct random people
anymore.
------
coldcode
Any mechanism that amplifies information also amplifies noise, something I
think we didn't really understand when the web/internet started. Understand
the provenance of information we see is still difficult, its not as if there
is git log of changes in some concept or news item in some random website that
goes back to whenever it was created globally. Even "blockchain" doesn't
really fix the idea that it was garbage when it was inserted. Simple ideas
like someone saying some words in a public context seems difficult to validate
or verify sometimes much less complex ideas like climate change. Information
is a messy concept made worse by the freeform nature of the internet/web.
~~~
AnimalMuppet
Hmm. I seem to recall AT&T figuring out, about 1927, that they could amplify
long-distance signals better (with less noise) by adding some negative
feedback.
I know it's only an analogy, but maybe the Internet amplifies too well, and
therefore the noise gets amplified much too well. Maybe the internet
(especially the echo chamber parts) need some negative feedback.
------
renholder
>We need to teach school children to be better at interrogating evidence and
deciding (for themselves) what to believe.
_And..._
>We need to teach our children to observe the world accurately and debunk
pseudo-science.
These two ideas directly conflict with one another and create a dichotomy: If
I have the freedom to beleive what I want, how does this translate into having
the initiative to observe the world accurately and debunk pseudo-science?
The better approach would be to understand that a lot of the laws that govern
our universe (or whichever other science) has been peer-reviewed. While this
doesn't mean that they're infallible (see: ether), it does mean that we should
still shouldn't question them, if there's something that doesn't align with
our thought-processes on it.
The harder part, of course, is proving/disproving these ideas that they may
have because of limitations - whether it be levels of education (such as in
the states) or the time available to go through the scientific method until
they have arrived at 'x' conclusion regarding it.
People working 40+ hour work-weeks, with their own families, etc. are -
generally - less inclined to take-up the mantles of science because, at the
end of the day, the cost versus rewards are too askewed for them.
Instead, what I think should happen is that the sciences should have bounty
programs, much in the same way that technology does, to incentivise these
ventures.
At the end of the day, people still need to put food on the table and deal
with day-to-day life. While some are hobbyists (e.g.: hobbyist astronomers), I
don't see such large communities around the heavy sciences (e.g.: hobbyist
cosmologists, hobbyist theoretical physicists, etc.) and I think that would
change if either people had more free time to donate to such ventures and/or
it was incentivised.
Even the old precepts of the scientific minds gathering weekly for banterings
about (might be stuff of media to blame for this, if untrue) to challenge and
expand our understandings isn't something openly lauded - which, to me, should
be. (I want to say this was something practiced by Einstein and
friends/colleagues?)
Let's face it: Despite the advancements of the last 50 years, the word "nerd"
is still a pejorative.
~~~
FigmentEngine
> These two ideas directly conflict with one another and create a dichotomy:
> If I have the freedom to beleive what I want, how does this translate into
> having the initiative to observe the world accurately and debunk pseudo-
> science?
no these two ideas work well. be curious, but question what you see, and
question yourself.
------
deckar01
No matter how educated you are about the scientific method, there are going to
be hypothesis that you don't have the technical skills to test yourself.
Scientific knowledge has advanced, in part by accepting consensus. Not until a
new idea fails do the assumptions it is built upon really need to be
scrutinized.
------
stunt
I think the proposed solution in this article also covers educating people to
find a qualified source of information.
I was prepared to read an article talking about a problem without giving a
proper solution for it. But it seems proposed solution is covering (not all)
but a lot of the ground.
------
vfclists
_It’s easy to see how people might wonder if vaccines cause autism—many
children regress developmentally right after receiving vaccinations. It’s been
conclusively proven that this is correlation, not causation. There is
absolutely no doubt: vaccines do not cause autism. But it’s emotionally easier
to for parents to believe they have a reason and someone to blame than the
stark truth that their child’s autism is not yet understood_
Tell that to the parents of the afflicted children. If the autism was not a
genetic condition then the vaccines must be the cause. Simply because the
mechanism by which autism develops is not understood doesn't mean the vaccines
were not the cause.
_post hoc, ergo propter hoc_
It is similar to the argument of those who maintained although there was a
correlation between the presence of the HIV virus and the AIDS disease, it was
only a correlation because the mechanism by which the HIV virus caused AIDS
was not understood. I for one not being in the medical profession cannot say
for sure that the actually mechanism is well understood.
I say again, tell that to the afflicted.
And what has this got to do with the Internet? If the Internet gives the
opportunity to spread falsities as well as truths at Internet speed is it an
inherent fault?
~~~
SketchySeaBeast
Sorry, I'm a bit confused - are you trying to lay out the faulty reasoning, or
are you saying that vaccines COULD cause autism?
~~~
vfclists
Wasn't there a link between the MMR vaccine and autism in African-American
boys that the researchers suppressed in the initial findings?
As far as autism was concerned research showed a correlation between autism
and African American boys and the parents of those boys noticed that it began
soon after the boys were adminstered the vaccine, and that was way before the
researchers themselves admitted that there was a correlation.
As far as I am concerned vaccines cause autism in some cases.
Just because the actual mechanism has not been discovered doesn't mean they
are not the cause, for the simple reason that you cannot simply vaccinate
human beings with the intention of finding the causal link between vaccines
and autism, which assumes that the neurological basis of autism is clearly
defined.
[https://www.ageofautism.com/2014/08/whistleblower-says-
cdc-k...](https://www.ageofautism.com/2014/08/whistleblower-says-cdc-knew-
in-2003-of-higher-autism-rate-among-african-american-boys-receiving-mmr-shot-
earlier-than-36-mont.html)
[https://thevaccinereaction.org/2017/05/african-americans-
vac...](https://thevaccinereaction.org/2017/05/african-americans-vaccines-and-
a-history-of-suspicion/)
_On the other side are numerous horror stories involving vaccinated children
like that of Harvard-educated attorney George Fatheree, who was pressured by a
pediatrician to resume vaccination despite seizures his infant, Clayton,
experienced after a previous round of vaccines. That night, Clayton’s seizures
returned and he stopped speaking for three years. He grew into a severely
disabled teen, suffering dozens of seizures a day. Because of similar vaccine-
related injuries and deaths, the National Vaccine Injury Compensation
Program—a fund under the U.S. Department of Health and Human Services set up
to shield vaccine manufacturers from liability—has paid out over $3.6 billion
in compensation to affected families._
If the blogger wants to give an example of false notions propagated at
Internet speed, they can do a lot better than using the link between autism
and vaccinations as an example.
~~~
SketchySeaBeast
>Wasn't there a link between the MMR vaccine and autism in African-American
boys that the researchers suppressed in the initial findings?
> As far as autism was concerned research showed a correlation between autism
> and African American boys and the parents of those boys noticed that it
> began soon after the boys were adminstered the vaccine, and that was way
> before the researchers themselves admitted that there was a correlation.
You're going to have to find some articles that aren't from suuuppppeeerrr
biased sites. Only the age of autism has any sort of scientific paper link,
and that paper they referenced was withdrawn due to Brian Hookers conflict of
interest
([https://en.wikipedia.org/wiki/Brian_Hooker_(bioengineer)](https://en.wikipedia.org/wiki/Brian_Hooker_\(bioengineer\))).
> Just because the actual mechanism has not been discovered doesn't mean they
> are not the cause
That's a weird thing to assume. In that case turning 1 is the greatest cause
of autism.
An attempt to upgrade to Webpack 4 - isuckatcoding
https://gist.github.com/gricard/e8057f7de1029f9036a990af95c62ba8
======
andrew_
Hello. Webpack team member here. The webpack@4 release was, well, interesting.
I can't add much that hasn't already been said - but I can say that we're
working to make sure there will never be another webpack release that suffers
from the same kind of documentation shortcomings. There's a metric poopton of
good stuff in the release, and we're still working to get all of it written
down. That said, look for future major releases to be far more proactive than
reactive with regard to documentation, announcements, and migration details.
~~~
tbrock
Hey thanks for all the hard work!
> _I can say that we 're working to make sure there will never be another
> webpack release that suffers from the same kind of documentation
> shortcoming_
We still have time to make sure this release has good docs. Please don’t give
up on it just yet!
Can we make some task issues on github that ask people to document a portion
of the code/process?
~~~
shakna
Have a look at this issue [0], I think it covers most of the docs for v4.
[0]
[https://github.com/webpack/webpack.js.org/issues/1861](https://github.com/webpack/webpack.js.org/issues/1861)
------
btown
IMO, Webpack (and React Router [0]) have been in this weird and honestly toxic
place, where there was very little regard held for backwards compatibility and
gradual deprecation by the core teams _at first_ , so businesses with larger
apps became marooned on previous versions as soon as the first major version
change... thereby not contributing their expertise and bug reports (they spent
that time fighting against the lack of documentation), and thereby making it
even more difficult for the core teams to get feedback on people's real-world
migration woes.
It's fair to say that there were problems in the earlier APIs, but if that's
the case, make changes gradually, even create an RFP process, tease/market
your API change proposals to get feedback from people who might say "this is
not something we can adopt easily, but you could add this param for backwards
compatibility." It'll take longer to get to an API that feels "perfect," but
it makes for a stronger culture.
[0] [https://github.com/ReactTraining/react-
router/blob/master/pa...](https://github.com/ReactTraining/react-
router/blob/master/packages/react-router/docs/guides/migrating.md)
~~~
bfred_it
I remember reading of Babel maintainers releasing betas and asking for
feedback but not many helped; most just came to complain after the release.
Webpack 4 has been in beta for a while but not enough people tested it on real
projects, evidently.
~~~
sho
> Webpack 4 has been in beta for a while
Perhaps the naming is sending the wrong message? "Beta" sounds incomplete and
subject to change; I certainly don't make a habit of introducing betas to
serious projects unless there's something in them I'm desperate for.
An extended period of Release Candidate versions could give end users more
confidence in adoption. This naming implies that the product is finalised and
will be released as-is barring serious bugs. It's a small thing, but maybe
giving end users the feeling of being early adopters rather than beta testers
could increase uptake in this final critical stage.
~~~
rdsubhas
Or React's way of avoiding major version cliffs seems to have grown out of
this problem.
[https://reactjs.org/blog/2016/02/19/new-versioning-
scheme.ht...](https://reactjs.org/blog/2016/02/19/new-versioning-scheme.html)
(skip to the major versions and avoiding cliffs part)
------
lmcardle
As someone who has worked with many of the various tools out there, I have a
lot of respect for the way the EmberJS team handles upgrades. I personally
wish more teams learned from their success. Their mantra is “stability without
stagnation.” While it is not perfect, the team appears to try hard to move the
framework forward without breaking backwards compatibility. When it comes to a
major release, i.e. the just-released 3.0, the team released no new features,
but rather just removed the features that had been deprecated in earlier
releases.
Like or hate Ember as a framework, the team has developed some very good
release processes and practices a lot of us could learn from.
------
stevebmark
According to Webpack's donation site,
[https://opencollective.com/webpack](https://opencollective.com/webpack), the
creator of Webpack, "Sokra," made $44,164 off of Webpack development in 2017
(and $9,000 so far in 2018).
We're in a new world of sustainable open source software development, enabled
by services like Patreon and this one.
Maybe it's petty, but as someone who works on several open source projects
without reimbursement, I hope that such "hefty" funding leads to better
documentation, better community support, and cleaner upgrade paths. Webpack is
notoriously fickle and confusing when it comes to configuration, and at a
glance Webpack 4 doesn't seem to have helped with that. Although I certainly
appreciate performance improvements.
Can I request that funds be diverted into community support and enforcing
clean writing across documentation pages? That's what Webpack needs most, and
has always needed, by far over any core API changes.
~~~
sho
> the creator of Webpack, "Sokra," made $44,164 off of Webpack development in
> 2017
While I certainly welcome this new trend I wouldn't exactly call sub-45k
"hefty". That's far below what the author could presumably make working for
one of the big shops, and doesn't exactly leave much to "divert" if the author
has any plans to pay rent, buy food, etc.
As stated it's a great and promising trend, but I wouldn't be too hasty in
calling for the author to start redistributing a fairly minimal income. That's
barely enough for one person, let alone "staff".
------
ericmcer
Is it weird that I miss gulp and even grunt sometimes? Webpack does some
really cool stuff with basic configuration, but gives me a scary black box
feeling, like if it really wanted to it could stop working and I would be
unable to fix it. I never got the same feeling with gulp/grunt, maybe the
bulky config files give a sense of control.
~~~
unclebucknasty
> _gives me a scary black box feeling, like if it really wanted to it could
> stop working and I would be unable to fix it_
This was verbatim my thinking when evaluating Vue 2 a while back. Its reliance
on Webpack or Browserify for optimal use really gave me pause. I could just
see myself losing days trying to figure out which bit got flipped to stop its
working.
Vue 2 was new at the time as well, so I wasn't sure about adoption, ecosystem,
and/or the maintainers' commitment. Looking back, I would've rolled the dice.
But, all of this to say I'm glad I'm not alone in that black boxes give me pause
in a world where everyone seems to rush headlong into the next big thing,
ceding ever more control.
~~~
CharlesW
> _Its reliance on Webpack or Browserify for optimal use really gave me
> pause._
Mind elaborating on what you mean by "for optimal use"? For example, I use
Brunch with Vue 2 and it seems to work fine.
~~~
LukeShu
There is no complete stand-alone Vue SFC compiler; every SFC compiler is
written as a plugin for a specific bundler. The 3 official implementations are
the Webpack, Browserify, and Rollup plugins. There's no significant code-
sharing between them (and I'm _sure_ there are no subtle differences that will
bite you), and the Browserify and Rollup plugins can lag months behind the
Webpack one--they still don't support functional SFCs, which were added in Vue
2.5.0 back in October.
So, how does vue-brunch work? It declares the Browserify plugin as a
dependency, and wraps it.
~~~
unclebucknasty
Exactly this. And Rollup wasn't available (or at least well-publicized) when I
was evaluating about a year ago.
------
monaghanboy
Another major version was again released? Seems like literally just a few
months ago when, just after upgrading my team's codebase to Webpack 2, Webpack
3 came out.
Why do front-end application frameworks (e.g. Angular, and even React) and
build tools constantly feel the need to reinvent what they just reinvented?
Don't people (users and developers alike) recognize and tire of the churn?
Stepping back though, a thread like this appears on HN at least once every few
months, so I guess this is just par for the course.
~~~
franciscop
Because they learn from mistakes and can make it better the next time! That
happens everywhere; except JS world (at least with npm) is very, very new
(relatively speaking). Also now there are many more devs than when the _older_
languages came out, so the experimentation is a lot wider.
~~~
vanderZwan
> Because they learn from mistakes and can make it better the next time!
Well, except seemingly learning from the mistakes made when introducing
breaking changes.
I'm not being a cynical jerk bashing on WebPack in particular here. I use it
every day and am grateful for its existence, and look forward to the point
where migrating to version 4 is going to be relatively painless, at which
point I'll actually give that another try.
But on the whole, it's like nobody in the JS world seems to learn from the
mistakes of other JS projects and they have to make them again themselves.
------
Zalastax
I know that this is tongue in cheek , but shouldn't we give the maintainers
time to solve the documentation before we start complaining too much? The
changes they are making seems to make sense to me. They are adopting patterns
that the community uses and provide solutions to hacks that have emerged from
real needs. Webpack was in my opinion already simple, and it's getting simpler
every release. I can't imagine the horrible mess I would have if I went back
to gulp.
~~~
krankthat
Just like git tags, release notes, makefiles, etc ... Documentation is one of
those things that needs to be updated as part of a new release.
~~~
Zalastax
Yes, it could have been handled better. I guess you need to consider whether
it's best to let newcomers in on the v4 goodness or to avoid alienating users of
v3. The pragmatic approach is to at least mention somewhere that v3 users
should wait on upgrading.
------
alexdong
I keep telling myself: "embrace changes. an upgrade is great; new defaults are
cool; performance improvements are great; just do it; just do it".
Yeah.
One day.
PS. I do miss the days when you can just read a book and `man` your way around
the dev environment.
~~~
monaghanboy
I think your ideal exists in backend/SRE roles.
------
GabeRicard
Author of the gist here. Please don't take this as me complaining about
webpack 4 in any way. This was me detailing my process of migrating to the
webpack 4 BETA in order to provide feedback to the webpack team, and
potentially help anyone else out who got stuck on a few things like I did.
They're updating the documentation. Webpack is awesome and v4 is a great
update and I highly recommend people try it out.
~~~
disgruntledphd2
I think that this style of blog/info dump is really, really useful and speaks
to how I have spent many, many weekends.
Even though I don't really use much JS, I still got a real kick out of reading
this.
~~~
GabeRicard
Haha. Thanks! I am trying to do this more, just for personal notes about
projects. It makes it easier to go back later and remember what the pitfalls
were.
------
blueside
I've used Webpack on and off since v2 and I'm still no fan.
If I'm being honest, there's been times that it's perhaps been the worst dev
experience I've endured in the last decade.
Can we use Parcel as a replacement yet?
~~~
thelarkinn
Sure, but since your experience is not configuring anything, how are you going
to know what it's actually doing to your code? And what are the tradeoffs?
Seems like many don't have this answer because they don't have good
architectural reasons for choosing it.
~~~
BigJono
For what it's worth, my experience with Parcel so far has been a show stopping
bug that makes CSS Modules unusable with it, was quite difficult to diagnose
(as with any bug in a dependency, I spent yonks looking at my own codebase),
and has so far gone un-fixed.
[https://github.com/parcel-
bundler/parcel/issues/378](https://github.com/parcel-
bundler/parcel/issues/378)
I don't see how this is really possible given the relatively high usage level
of CSS Modules, but whatever. I'm forced to stick with Webpack for the time
being.
------
jacquesc
Haha, pretty much my life every 3 months. Welcome to the new world of frontend
development.
Luckily the bad also comes with the good. Adding and upgrading components off
npm in a webpack project is almost always a breeze.
~~~
6t6t6t6
Another option is to wait just a couple of months to upgrade, so you give time
to the library developers to put their s*** together, fix bugs, and write
some documentation.
At the end, from all the features your users could care about, your webpack
version is not one of them.
~~~
slig
> Another option is to wait just a couple of months to upgrade
I did that when webpack 3 was released and now I'm two versions behind.
------
thelarkinn
PS: I submitted this response also in Reddit for those who come across it
twice. It's not meant to be disingenuous (as I'll likely respond to individual
comments also), just meant to show what our statement is.
Hi all,
We appreciate the feedback. So whether you care to know or not I'll explain
where we stand as a project for our docs.
So in this major release we want to try different ways to impact as many users
as possible. So this time we consiously put priority on working directly with
framework and tooling authors like angular, react, preact, vue to ensure they
were unblocked on implementing v4.
With this we decided: since plugins and loaders will also need time, we will
give them a month and release our final version without breaking changes from
RC. However, as a maintainer orchestrating all of our separate ecosystem,
docs, and teams, I didn't realise we had fallen behind on contributions and work
on our docs, and when the 30-day window elapsed, v4 shipped as promised.
To not hold back v4 with docs is an organizational and communication error on
my part and maybe even a side-effect of juggling my full-time job at Microsoft
and the 120-140 hours I spend a month on webpack.
I'm more than willing to own this, and for anyone frustrated, it's on me and
I'm sorry. However we grow and improve after any mistakes we make. So, we have
some takeaways that we will build on for our next Major Release.
A complete migration guide will be a primary shipping dependency for any
future Major version of webpack going here forward. We learned that instead of
focusing on frameworks as heavily to provide impact and upgrade paths, (which
not have all done this yet), that instead focusing on content accessible to
everyone is a far more impactful result. For now, we have the PR pending for
our docs upgrade and I'll likely spend the majority of my free time ensuring
we have it ready to release as soon as possible. Thank you all for being
reasonable and patient with this.
Major version releases like webpack 4 are much less likely to happen. This
major release included the rewrite in our plugin system, which is the cause of
incompatible loaders and plugins. We have no plans to rewrite it again as it
is exactly where we need it to be in terms of performance, capabilities, etc.
Therefore, look to most future Major versions to be less severe and impactful
to loaders and plugins.
I plan to spend much more focus on ensuring all of our separate teams are
communicating for releases by coordinating all hands ship meetings. This
ensures that we are on the same page and everyone is aware what each team has
a dependency on.
We actually only have one full-time engineer, and have attempted to fully
fund a documentation team to work on it full time! It takes a considerable
amount of effort to do right. And so thanks for understanding while we
continue to work on it. Contributions are always welcome along with
constructive feedback.
Regardless, we appreciate, as always, the support and positive comments we've seen
here, but also the actionable criticisms. It probably sounds redundant but we
really stress the high bar and expectations that are set for us in the
ecosystem as so many of you rely on webpack to ship your products and
solutions to the web. We will keep on improving as we have since 2012 and hope
you all continue to help us along the way.
Sean + webpack Team (@TheLarkInn)
~~~
isuckatcoding
Hi Sean.
I certainly didn't mean to bring you down or disparage the work of the webpack
team/project in anyway. For me, the way the gist itself was written was just
too funny.
Anyways, thank you for all your hard work.
~~~
thelarkinn
I honestly don't feel you did! In fact, we got this gist from a
contributor kind enough to document his experience so we could identify
exactly what hot path migration would look like for users. So we were really
happy to have this gist created. And we have it tied to our docs PR/issues :)
------
simon04
There's a big webpack 4 documentation update PR in the pipeline:
[https://github.com/webpack/webpack.js.org/pull/1856](https://github.com/webpack/webpack.js.org/pull/1856)
------
knackers
Went through this exact process last week. Did they really need to remove
CommonsChunkPlugin? Feels like breaking stuff for a slightly nicer API. If
this were React we would've had a deprecation warning for a few versions.
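
For anyone landing here with the same question: the role of CommonsChunkPlugin
is now covered by webpack 4's built-in optimization.splitChunks and
optimization.runtimeChunk. A minimal sketch of the mapping -- the chunk names,
entry path, and node_modules test below are illustrative assumptions, not taken
from this gist or thread:

    // webpack.config.js
    // webpack 3 (removed in v4):
    //   new webpack.optimize.CommonsChunkPlugin({ name: 'vendor', minChunks: isFromNodeModules })
    module.exports = {
      mode: 'production',
      entry: './src/index.js',
      optimization: {
        splitChunks: {
          cacheGroups: {
            vendor: {
              test: /[\\/]node_modules[\\/]/, // pull everything from node_modules...
              name: 'vendor',                 // ...into a single "vendor" chunk
              chunks: 'all',
            },
          },
        },
        runtimeChunk: 'single', // roughly the old CommonsChunkPlugin({ name: 'runtime' }) trick
      },
    };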
~~~
shakna
> If this were React we would've had a deprecation warning for a few versions.
If this were React, you would have got docs with a release.
Choosing to release before documentation is done, for a tool developers are
supposed to use as a black box, which has a lot of strange configuration that
is required, is a really strange choice for something with so many breaking
changes.
~~~
Zalastax
They haven't released webpack 4 yet.
~~~
shakna
They have [0][1] ("beta" was dropped from the v4 releases at v4.0.0), and are
actually in the process of updating the documentation [2]. It's just odd to do
that after a release.
[0]
[https://www.npmjs.com/package/webpack](https://www.npmjs.com/package/webpack)
[1]
[https://github.com/webpack/webpack/releases](https://github.com/webpack/webpack/releases)
[2]
[https://github.com/webpack/webpack.js.org/pull/1905](https://github.com/webpack/webpack.js.org/pull/1905)
~~~
Zalastax
Whoops, guess it's a good idea to check things before blurting something out.
In that case I'm not sure what's best, to let newcomers in on the new goodies
of v4 or to avoid confusing previous users.
------
Vinnl
The upgrade seemed relatively straightforward with the release blog post.
Really, it's the fact that many plug-ins will have to be updated that is
holding the upgrade back, but that's not _that_ much of a problem: I've simply
documented the status of each [0], am following the respective
issues/preparing to submit PRs myself if needed, and am meanwhile remaining on
v3.
[0]
[https://gitlab.com/Flockademic/Flockademic/issues/306#note_6...](https://gitlab.com/Flockademic/Flockademic/issues/306#note_60571871)
------
isuckatcoding
So I found this gist as I was googling for my webpack error. Originally was
trying to get the uglify plugin to work. Then I said screw uglify and STILL
couldn't get it to work.
I gave up and ended up using Rollbar.
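
For reference, the usual webpack 4 shape for custom minification is to override
optimization.minimizer rather than listing the plugin under plugins. A minimal
sketch, assuming uglifyjs-webpack-plugin is installed (exact options are
version-dependent, and this is not necessarily what went wrong above):

    // webpack.config.js
    const UglifyJsPlugin = require('uglifyjs-webpack-plugin');

    module.exports = {
      mode: 'production', // production mode already minifies by default in v4
      optimization: {
        // only override the default minimizer if you need custom options
        minimizer: [
          new UglifyJsPlugin({
            sourceMap: true,
            uglifyOptions: { compress: { drop_console: true } },
          }),
        ],
      },
    };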
------
mratzloff
OK, so faster builds—and what else? Why not wait until documentation catches
up and spare yourself the trouble?
~~~
GabeRicard
Because I was just trying it out? Are you saying it's wrong to test a beta and
provide feedback to the authors?
~~~
mratzloff
Sure, that's perfectly fine. I interpreted your original post as a public
warning or complaint to other users. You've updated the Gist since then with a
big disclaimer, so perhaps I wasn't the only one.
More broadly, I often see developers stay on the cutting edge when they don't
have a security or feature need to, then complain about it.
------
pacifika
Stop guessing, start learning. It sounds like the person doesn’t understand
what each configuration command does.
~~~
GabeRicard
I'm the author of the gist. You're exactly right. This was me guessing when I
didn't understand the configuration options, because this was my attempt at
learning the BETA of webpack 4 without the new docs available.
------
mnm1
Angular, brunch, babel, webpack. While not limited exclusively to the JS
community, it seems that much of the JS community especially does not care
about backwards compatibility or long term support. Piling on new features and
re-architecting everything with every major version number, and sometimes even
with minor version numbers, seems to have taken over actually creating and
maintaining useful products to the detriment of all software that uses such
products. Now we end up with software that's dependent on literally un-
upgradable components that are no longer supported and in some cases do not
even get security patches. With certain breakages, the software cannot be
built at all. Perhaps it might not be a bad idea for open source developers to
include a small section in their documentation that details their intended
philosophy for the future of the software. That way, people don't get duped
into using software written by teams that plan to abandon it completely (I'm
looking at Angular here) or partially (everything else).
Show HN: My personal project landing page - hhog
http://intro.daloops.com/
======
gabeochoa
Here is the list that hides from you:
Make your art richer and interactive
Get collaborators to work with you
Use the internet as a canvas
Record, stream and share your sessions
Remix many different kinds of media
Use your external controllers on the web
Work offline and sync online
Opensource and developer friendly
------
wodenokoto
When I opened the page, I started reading the long introduction text. Then
that disappeared and some hexagons appeared. I can click them, but for what
purpose I couldn't figure out.
There's a little plus-icon I can click to see the first 4 lines of the former
text, but that is it.
I have no idea what your project is, but it seems to be user hostile.
~~~
hhog
Sorry for the bad experience (that was not the point of it). The initial list
is my failed attempt at making it also available to anyone without JS. I just
removed it since it is present on the +.
I'm going to change the site to make the purpose evident, thanks for the
feedback (and sorry again for the hostility).
------
hhog
Thank you for the feedback, I changed a few things and now it starts with a
small presentation of what it is.
Any new feedback? (Do I have to do another Show HN for new feedback?)
------
wingerlang
I have no idea what it is about.
------
eevilspock
Far too vague if you're trying to find interested early adopters. The only
thing clear to me: The photo and video were taken in West Seattle.
~~~
hhog
Thanks for the feedback. I'll try to make it clearer :)
The photos and video are from Lisbon
Positive Lexicography – List of untranslatable words from other languages - cromulent
https://hifisamurai.github.io/lexicography/
======
Festro
This is always oversold as 'untranslatable', when it just means 'can be
translated, but not in a single word'.
Really nicely presented though! :P
Must-know url hash techniques (2011) - jfaucett
http://blog.mgm-tp.com/2011/10/must-know-url-hashtechniques-for-ajax-applications/
======
StavrosK
With pushState, isn't this now irrelevant (and, as it has always been, ugly)?
~~~
jfaucett
Well, if you do not have to support older browsers, which I think a lot of
webdevs still have to do, then I guess it's probably irrelevant. But the
history API implementations are (of course) different across
browsers/platforms, so I'd recommend taking a look at history.js, if you want
cross-browser RIA with good hashchange fallbacks.
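
A minimal hand-rolled sketch of that fallback idea (history.js wraps
essentially this plus a pile of browser quirks); render() here is a placeholder
for whatever the application does with a route:

    function render(path) {
      // application-specific: load/show the view for `path`
      console.log('rendering', path);
    }

    function navigate(path) {
      if (window.history && typeof window.history.pushState === 'function') {
        window.history.pushState({ path: path }, '', path); // clean URL in modern browsers
        render(path);
      } else {
        window.location.hash = '#' + path; // older browsers fall back to the hash
      }
    }

    // keep the app in sync on back/forward and on manual hash edits
    window.addEventListener('popstate', function (e) {
      render((e.state && e.state.path) || window.location.pathname);
    });
    window.addEventListener('hashchange', function () {
      render(window.location.hash.slice(1));
    });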
~~~
Rygu
Good one, didn't know about history.js!
I think the best pushState fallback is just allowing page reloads. Hashtags
are probably just annoying and confusing for the user experience. Besides,
there are other very effective ways to improve page load time.
~~~
alexchamberlain
I agree; the correct fallback is to load the webpage the "traditional" way.
------
jacques_chester
This reminds me of the URLs generated by Oracle's Application Express (which
is my main source of income at the moment).
And you wind up using cookies anyhow to cross-check (ApEx does), because real
applications tend to require authorisation as well as application state.
So, to recap:
1\. Uglier URLs
2\. Cookies anyway.
------
hising
I think the best solution for handling older browsers is to allow server
rendering via loading the page the traditional way. This solves two problems:
keeping your URLs looking like they should, and verifying that your site is
accessible to software that does not rely on JavaScript, such as some search
engines and other types of machines / software. I see little reason for
messing up URLs with hashchange; it adds complexity when recreating the correct
state from bookmarks.
------
wizard_2
The author glosses over the fact that encoding of special characters in the
hash is broken in Firefox. As an application there's no way to know if you're
getting the data you want or URL-encoded data. Base64 encoding (replacing
chars like $, which most IM and email clients fail to autodetect as being part
of the URL) seems to be the only bet at the moment. Putting the data in the
GET request and ignoring it on the server also seems to work well.
~~~
jacques_chester
Specifically, Base64Url. It's described in RFC4648 and is URL-safe.
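
A rough browser-side sketch of what that looks like for hash payloads (assumes
ASCII-safe input; real code would UTF-8 encode first):

    function toBase64Url(str) {
      return btoa(str)
        .replace(/\+/g, '-')  // '+' is not URL-safe
        .replace(/\//g, '_')  // neither is '/'
        .replace(/=+$/, '');  // strip padding so '=' never lands in the URL
    }

    function fromBase64Url(b64u) {
      var b64 = b64u.replace(/-/g, '+').replace(/_/g, '/');
      while (b64.length % 4) b64 += '='; // restore padding before decoding
      return atob(b64);
    }

    // e.g. location.hash = '#' + toBase64Url(JSON.stringify(state));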
How Sun Tzu Would Outflank Patent Trolls - wslh
http://businessmodelvalidation.com/HowSunTzuWouldOutflankPatentTrolls.aspx
======
WalterSear
Actually, I think he would have probably set fire to their homes, flushing them
out and into the range of archers. Then sent his chariots in to rout those
still standing. I know it's what I want to do.
------
btilly
Personally one of my favorite responses is in
[http://www.audioholics.com/news/industry-news/blue-jeans-
str...](http://www.audioholics.com/news/industry-news/blue-jeans-strikes-
back).
I particularly like the line, _Not only am I unintimidated by litigation; I
sometimes rather miss it._
Eric Schmidt talks about AI at MIT conference - giacaglia
https://www.youtube.com/watch?v=WFgPGwH9Oec
======
ghostcluster
It's nice to hear him talk about demographic challenges and urban fertility.
We are entering an idiocracy sort of situation in the Western democracies, as
well as Asian countries including China, where the people who will have
the kinds of children who can perform these elite jobs aren't having children.
He doesn't seem to take the step to imagine what that means for the wide
swaths of people in the future.
------
leuty
Eric is such a legend, he always has something smart to say
------
Lreiy123
Super interesting conversation. I wonder why Google bought and then resold
Boston Robotics...
Ask YC: How important is it to be friends with your co-founder? - pxigorth
At least twice I've had the opportunity to co-found a startup with someone whom I've respected, who I know does good work, and has complementary skills. But I chose not to, because of some other aspect of their personality. Something bothered me about them, but I couldn't put my finger on it. It definitely seemed like at some level we didn't like something about the other, and we weren't the kind of people who would end up being friends.

At normal jobs, I've never had a problem working with people that I wasn't friends with, and sometimes actively disliked, but my view is that in a startup situation, this is much more important. How important is liking or being friends with the other person? Did I pass up good opportunities or make the right decision?
======
pg
When speakers at YC dinners take questions at the end, there's one I ask most
of them: what do you know now that you didn't know when you were 24? The most
common answer is to trust your instincts about people. If you're thinking of
working with someone but something seems iffy about them, don't.
So you probably did the right thing.
Friends can help you screen people. Especially female friends, who often have
a more sensitive social radar than men.
------
gigamon
I agree with comments from other posters. The only two things that I would add
are that ...
1) you don't need to like someone to work with them in a startup.
In fact, it is very important that you surround yourself with people who have
complementary skills and have different lifetime experience and perspective.
As a result, it is difficult to truly like someone who is that different.
But you must respect someone in order to work with them in a startup.
2) Trust is very important.
But trust has a different meaning in a startup. Imagine yourself having to
jump off a baloney and one of your co-founders is the one who needs to catch
you to save your life. In this case, trust means trusting their ability to
catch you and their assessment of their ability to catch you. So if you ask
"can you catch me?" and they say "yes" but in reality they can't, then you
would rather that they say "no". That way, at least you would try to find
an alternative.
~~~
derefr
Oddly, to "jump off a baloney" seems to somehow more accurately represent the
startup process than jumping off a balcony, which you _possibly_ meant to
type. (Oddly similar to "living on a smile and a shoeshine", as well.)
------
pius
From my experience, I would recommend finding someone with whom you are
_friendly_ , but not _friends_.
The pressure of the startup will polarize your relationship, so you'll either
end up pretty solid friends or pretty solid enemies. Someone whom you already
dislike somewhat probably won't become any more palatable. Someone with whom
you've already established a friendship leaves you in the position of risking
the friendship and/or not having the luxury of candor when they screw up.
------
DarrenStuart
go with your gut I say. Only you know what ticks the boxes for you. Also never
look back at what could have been...
I have worked with people that I have disliked but over time I have ended up
getting on with them quite well in the end. Sometimes first impressions can be
wrong.
------
sspencer
I would argue that it is perhaps the most important quality of all.
Always remember that this is someone you are _starting a business with_.
Anything less than true friendship is likely to cause grief later on when the
going gets tough.
------
willphipps
I think it's really important. Not only from a trust perspective, but also the
amount of time you are going to spend with them means it's vital you can have a
laugh together when things get a bit tough. The risk is that you may fall out
over something related to the business - this is something only you can
decide, but if you address the possible scenarios early on and have agreed
upon a way of dealing with possible outcomes (good and bad), then I think this
risk can be limited.
Proto-internet trolls: Struensee and press freedom in 18th century Denmark - Pausanias
https://blogs.bl.uk/european/2018/06/johann-friedrich-struensee.html
======
kingofhdds
The piece is interesting, but the author's conclusions left a very strange
aftertaste. So there was a king with psychiatric problems, and his doctor who,
abusing influence over his patient, usurped powers and had a kid with the
queen. And when the aforementioned doctor decreed freedom of the press, the
people of Denmark immediately started to express their dissatisfaction with the
weird political situation. And the author calls it depressing, not
constructive, and calls them proto-internet-trolls. Seriously?
~~~
nowarninglabel
I believe the subtle point is that Struensee did these actions for the good of
the people, only for the people to then use these new tools against him.
I'm not sure we know of Struensee's motivations, but we can see that he
advocated for and temporarily obtained some outcomes that can be seen as
almost universally good for the people.
Regardless, maybe one takeaway can be, just because you have given your life
towards something good for others, don't expect to be thanked.
~~~
AnimalMuppet
Giving the people freedom of the press was good for the people? Probably.
Having a kid with the queen was good for the people? Probably not.
So, was the criticism for giving them freedom of the press? Or for having the
kid with the queen? Or was it for using his position as a doctor to put
himself in political power in the first place?
~~~
oh_sigh
Having a kid with the queen seems neutral, especially if the alternative is
for a severely mentally ill man to be the father.
~~~
AnimalMuppet
The queen having kids that didn't come from the king is capable of setting off
a civil war once the king dies and someone has to succeed him. That's not
neutral for the people.
~~~
oh_sigh
What was important at that time was that the child be a recognized heir to the
king and queen. Actual birth father is mostly irrelevant.
~~~
vict00ms
> Soon, not only had the doctor risen to become the King’s most trusted
> advisor in the Danish court, he had also become Queen Caroline Matilda’s
> lover – which fast became common knowledge.
Perhaps if it had not become common knowledge it would have remained
irrelevant. I think your argument has fallen flat and you should reconsider
your opinion before further entrenching yourself.
~~~
oh_sigh
No, it has not fallen flat. Something being common knowledge, and yet still
held to be irrelevant was exactly my point. You are free to peruse information
about Louis Auguste if you'd like
[https://en.wikipedia.org/wiki/Princess_Louise_Auguste_of_Den...](https://en.wikipedia.org/wiki/Princess_Louise_Auguste_of_Denmark)
> Though officially regarded as the daughter of King Christian VII, it is
> widely accepted that her biological father was Johann Friedrich Struensee,
> the king’s royal physician and de facto regent of the country at the time of
> her birth.[1] She was referred to sometimes as "la petite Struensee"; _this
> did not, however, have any effect on her position._ [2]
------
mongol
The book referenced, Swedish title "Livläkarens besök", is one of the best I
have read. Reviews do not seem to agree entirely through. On Amazon:
[https://www.amazon.com/Royal-Physicians-Visit-Olov-
Enquist/d...](https://www.amazon.com/Royal-Physicians-Visit-Olov-
Enquist/dp/1585671967)
~~~
dmlorenzetti
Enquist is one of my favorite authors, and Royal Physician's Visit among the
many reasons why. That said, I have almost stopped recommending his work to
others, as it seems to be hit-and-miss.
Nobody I know hates his work, but nobody takes to him quite the way I have.
Royal Physician is fairly conventional in structure and tone, but Enquist's
more personal work -- the work that feels like it comes from his deep interior
life, like Downfall, and Captain Nemo's Library -- seem even harder for a lot
of people to get into.
------
peter303
These played a role during the Enlightenment and the American and European revolutions.
A person of strong political views and middle-class means could write, publish
and distribute an anonymous political pamphlet. Establishment governments
often didn't like this and hunted them down and destroyed them.
~~~
all2
This is, in part, why the USA constitution calls out freedom of the press
(right after freedom of speech).
------
FlowNote
The printing press was subsidized by the Catholic Church as a way of reducing
costs and increasing spread of their primary asset: the Bible.
They didn't realize this would drive down startup costs of presses by creating
more press techniques and technicians looking to acquire new customers. With
Church backing, the bar-to-entry for spreading information dropped until
anyone could spread information, including anti-Church factions such as the
Protestants. This did not end up well for the Church.
The same is true for America, their obsession with spreading democracy, and
the transistor AKA the American Printing Press.
Once information propagation becomes cheap, noise floods everything, and
trolling is the only effective way to create lasting context.
Rules for Choosing Nonfiction Books - rollinDyno
http://herman.asia/how-i-choose-nonfiction-books
======
tptacek
John Carreyrou wasn't an expert on phlebotomy or biochemistry. Roger
Lowenstein wasn't an expert on hedge funds or technical finance. Eric
Schlosser on nuclear command and control? Nope. Woodward and Bernstein and
constitutional law? Not so much!
I don't think these are very good rules.
I thought it was especially amusing that this person doesn't trust journalists
to write about anything but journalism. If you believe that, why are you
interested in reading about journalism in the first place?
HN has an unfortunate fixation on Michael Crichton's supposed Gell-Mann
amnesia effect (wherein you read something in the news that pertains to your
field, spot errors, get angry, and then forget that happened when you go on to
read the next story). I propose the countervailing Dijkstra amnesia effect,
wherein a technical professional produces workmanlike output with all the
attendant errors and omissions that attach to any work produced by humans
(Christ knows our field is intimately acquainted with errors and omissions),
then forgets they did that, and expects every other professional to measure up
to the standard they themselves failed to meet.
~~~
Emma_Goldman
I don't know any of the authors you've mentioned, but in my own experience,
not reading books written by journalists is a good _rule of thumb_. That
doesn't mean, obviously, that it's impossible for a journalist to write a good
book. By profession, journalism is both shallow and narrow: it focuses on the
surface of events in the given moment. That tends not to equip someone with
the knowledge or expertise to write a truly insightful book.
~~~
claudiawerner
As a funny counter-point, Marx was a journalist and _Capital_ is one of the
most insightful books I'm reading at the moment.
~~~
kortilla
Insightful maybe but drastically wrong about the whole LTV thing.
~~~
claudiawerner
I think that's a point if we argued about we'd agree to disagree.
~~~
kortilla
I’m honestly not sure what there is to disagree with. There has been no
quantitative evidence supporting it. “Things should cost based on labor” is a
nice thought for laborers, but it’s completely unrealistic. Nobody is going to
pay twice as much for something because one manufacturer used a really shitty
process that took twice as many man hours.
There’s a reason the “criticisms” section of the wiki is one of the largest
and links to a full article on it. It’s a religion, not an evidence-driven
theory. Prescriptive, not descriptive.
I suppose we would agree to disagree if you believe in ideas not grounded in
reality (a.k.a. backed by empirical evidence).
------
jawns
I am an author of several nonfiction (pop-sci) books. I am also a former
journalist.
I readily acknowledge that I am not an expert in the fields I write about.
That is why the bulk of my work is finding out who the experts are and
presenting their work in an accessible way. (Granted, once you've been writing
about a certain area for long enough, you do tend to become educated about
it.)
One of the things that makes me feel good about what I do is that I am able to
expose readers to intriguing information they might otherwise never encounter,
unless they've got subscriptions to a bunch of academic journals in fields of
study outside of their own.
Keep in mind, it's not a given that people who are the most well-respected
experts in their field are also talented writers. People like Douglas
Hofstadter certainly are both, but not every expert is like him. And if you
follow Rule #1 religiously, it sounds as if you're limiting yourself to
experts who are both. In the process, you're likely missing out on a lot of
cool information, just because you are only willing to read first-hand
accounts.
~~~
specialist
I'm a big fan of science writers.
There's a lot of day light between journalists and celebrity pundits.
------
hoytech
"When I'm in a book store nowadays, or when I'm at a conference looking at a
table full of new books, I try to gauge how good the books are by picking them
up and -- yes, I guess it's okay to reveal the secret -- I turn to page 316. I
try to read that page rather carefully, and this gives me an impression of the
whole book. (Often the book is too short; then I use page 100. Authors, take
note if you expect me to buy anything you've written.) This system works quite
well, I think."
\-- Donald Knuth in "Things a Computer Scientist Rarely Talks About"
------
minouye
Here's what I do. If I get a book recommendation, I immediately buy it and put
it on a bookshelf. Every time I walk by, I scan the shelf and pick what speaks
to me at that given time: fiction, non-fiction, cookbooks, science, history,
classics, short-story collections, biographies, etc. If you match your what
you read to your mood and frame of mind, you can consume and retain
information much more quickly and enjoyably. And having the book on hand is
really important since my interests and mood change from day to day.
I also try to follow some other general rules that work for me:
* Read several books at once, esp. across disciplines.
* Read paper books.
* You don't need to finish books. Stopping mid-way is fine (still have problems with this!)
* Seek out durable works over bestsellers ([https://en.wikipedia.org/wiki/Lindy_effect](https://en.wikipedia.org/wiki/Lindy_effect))
* Read across disciplines
* Write in books and make notes. Write up notes a couple of weeks after finishing (create your own commonplace book)
* Avoid audiobooks (if you want to retain the content). I just can't retain when I listen while driving/multitasking, but like listening to fiction for fun.
* Tag interesting books/papers cited in the books you like. Look them up and read them too.
* Find interesting/prolific readers on Goodreads. Lookup the books they read, esp. the ones you've never heard of.
* Let other people know that you like reading, and ask what they've read recently. When they read interesting books, they'll recommend them to you.
~~~
madcaptenor
One problem with buying books when they're recommended is that it's a lot
easier to _buy_ books than _read_ them. I recently went through my shelves and
counted the books I owned but have not read and was shocked.
So my current method is now to read the books that past-me thought sounded
interesting.
~~~
minouye
Yeah that's true. And having a small apartment makes that problematic as well.
I like this Umberto Eco anecdote: [https://fs.blog/2013/06/the-
antilibrary/](https://fs.blog/2013/06/the-antilibrary/). Having a lot of
unread books around is a good reminder of how much there is to read, learn,
and experience. And it makes reading instead of turning to Netflix an easier
decision.
------
jcoffland
Like the article's #3, I have my own silly rule, but it's a good one.
No books with the author's name written in a larger font than the title.
This eliminates books who's main merit is the author's fame. It's especially
good at filtering out crappy New York Times best sellers and it works for
fiction too.
~~~
kakarot
100% agree when dealing with non-fiction. It shows a lack of respect for the
book as an art form or information resource. It shows the author is writing
for exposure and money and not to produce something of true value.
> it works for fiction too.
Nonfiction authors usually write on a small handful of subjects, and chances
are you're looking for a book on a particular subject and will compile a list
of contenders, then do some light research on each author to gauge their
authority on a subject.
But with fiction, you aren't likely to know the general contents of a book
before you read it. You may be looking for a particular genre, but not a
particular story. If someone like Stephen King or Isaac Asimov have proven
that they are capable writers within their genres, it's actually beneficial
sales-wise for these names to stand out in a book rack. If I see a bunch of
books on a rack and one of them says _Asimov_ then I'm honing in on that book
first.
------
fasteddie
In recent years I've switched a lot to watching a non-fiction author's 30
minute talk on youtube about their book rather than reading the book entirely.
This is especially true for business/self-improvement type books, where I've
found its almost never really time efficient when I'm really just looking for
the list of 5 things I should be doing and skipping the extraneous pages of
anecdotes.
~~~
e1g
For me it's the opposite: for example, I had watched countless videos on
"disruption" but did not fully grasp it until I read The Innovator's Dilemma.
The same holds true for theory of constraints vs reading The Goal, Robert
Cialdini's works on influence, Richard Dawkin's views etc. In all cases, I
thought I understood the material before reading the original source, and in
every time the books blew me away.
Perhaps there is a dimension about how rigorous the thinking is behind the
book. I struggle to imagine a YouTube video that could effectively and
convincingly unpack ideas from The Intelligent Investor, The Sovereign
Individual, Sapiens, etc. Other topics like "How to get rich with x" or pop-
sci covered by the likes of Kurzgesagt are simplistic enough for a video
essay, but those are seldom worth consuming regardless of medium.
~~~
tynpeddler
I think you're hitting at the dichotomy of knowing vs grokking. A lot of
powerful concepts have high level summaries that can trick the audience into
thinking that's all there is to it. Chris Voss's excellent "Never Split the
Difference" comes to mind. A lot of his negotiating strategy can be boiled
down to "develop a mutual and deep empathy with your negotiating partner". On
some level this makes intuitive sense; people who empathize with you will help
you solve your problems, and empathizing with other will help you tailor your
solutions to their concerns. But when you sit down to pull this off, you run
into two major problems. What is the strategy for doing this? Some phrases
tend to antagonize people, while others help build rapport. There are often
emotional stages that relationships have to go through, so you need to behave
appropriately at each stage, but also you need to move the relationship to the
point that is beneficial for you. The second major hurdle is actually
implementing these strategies. Even with the head knowledge, when the pressure
to perform is on, you may not behave appropriately.
~~~
scott_s
Put another way: it can be easy to gain an understanding of a particular goal,
but very difficult to develop a deep understanding of the systems that allow
you to achieve that goal.
------
kakarot
_It’s also a good idea to read some of the lowest-rated reviews first, to see
what the main complaints about the book are, and whether they would put you
off reading it._
This goes for basically anything you purchase online. Read a couple highly
positive reviews, read a couple highly negative reviews, then read some
somewhat positive and somewhat negative reviews. Get the entire spectrum.
Emotions or lack of consideration cause some people to leave inaccurate
ratings but still give useful information in the reviews themselves. Therefore
you should never trust a 5-star or 1-star rating, but you should still
consider why the reviewer felt compelled to leave such a rating.
At the same time, less extreme ratings might provide a fair and comprehensive
assessment but the reviewer might have overlooked a particular edge case or
issue.
~~~
jodrellblank
The positive reviews are always "I bought this for my grandson, I'm sure he
will love it, 5 stars" or "I liked it". Negative reviews are a mix of "I hated
it" _and_ actionable information "can't use both sections at the same time,
wobbles, and edge is too sharp"
I contend that only negative reviews have information; positive ones are
propaganda you read with excitement to reinforce your emotional feeling of "
_I want this thing to enhance my life, I want to be part of the people
experiencing this 5-star feeling, I 'm dreaming of who I can be if I own this
product, let me join in!_", they don't tell you useful things.
If you read the negative reviews looking for dealbreakers, and think you can
live with the defects described, then it might be a good enough buy. If you
want to read the negative reviews with an emotional view, you can use it to
reject the dream and stop wanting to buy the item or anything like it
entirely, but that's not mandatory.
~~~
kakarot
That's simply wrong. In order to make an educated purchase, you need to know
the weaknesses _as well_ as the strengths. Reading 1- or 2-star reviews is
unlikely to uncover all of the strengths of a product. Comparing products
solely based weaknesses is just dumb.
~~~
jodrellblank
Uh, if you're an alien, maybe. Most people want to buy a particular thing - a
wood chipper, a paintbrush, a laptop - first. They already know the strengths
without reading the review - that's what brought them to these specific
products and what the marketing bullet points are.
If it has a "surprise strength" which the company who made it didn't notice,
and didn't advertise, that's also something which probably brought you to it
by referral (like a DVD player which is region unlocked - the reason you're
looking at that one is because it was linked on a forum), and now you read the
weaknesses.
You want a bluetooth speaker, you find all the ones you can, then look for the
negative reviews of which have poor battery life and which have weak suckers
for glass. You don't look in the positives to see if one is secretly really
loud, because the negative reviews will tell you by complaining if it's too
quiet, or too loud.
~~~
kakarot
You're not understanding. Wood chippers don't have "strengths" compared to
other objects. It's a wood chipper, you can only meaningfully compare it with
other wood chippers.
Some are better than others. Why? Because each product will have its strengths
and weaknesses. This wood chipper has a great coat of paint which is
impervious to scratches! But this wood chipper is much more fuel efficient!
And this wood chipper is fully electric!
Those are all examples of a product's strength, not a weakness.
> If it has a "surprise strength" which the company who made it didn't notice,
> and didn't advertise ...
The whole point of reviews is that we can't trust the advertiser.
Especially on Amazon where it's likely from a reseller who may be ignorant or
straight up lie about the product.
> Want a bluetooth speaker, you find all the ones you can, then look for the
> negative reviews of which have poor battery life and which have weak suckers
> for glass.
Lol. I also care about how good they sound. I don't want to see an absence of
reviews saying how bad it sounds. I want to see motivated reviews by
enthusiastic users claiming how _great_ the sound is, and then taper my
expectations by checking the negative reviews to make sure someone more
educated about speakers hasn't made a more in-depth analysis of the sound
stage and quality of drivers, cables, etc. Using either source alone provides
an incomplete picture.
You're arguing for arguing's sake. You clearly don't have a good method of
making an educated purchase, so consider improving it before evangelizing it
over well-established, comprehensive methods of making an educated purchase
which are undeniably superior.
The idea that "negative reviews by themselves will always contain the full
amount of information needed to make an informed purchase" is an axiom of
online shopping is laughably preposterous.
------
maceurt
The idea that only 4+ star rating books are worth reading is laughable.
Reading only very popular books is a recipe to read only books that fit into
your own preconceived worldview.
~~~
gnclmorais
Quality assessment is not necessarily the same as popularity, surely.
~~~
maceurt
Most people rate books not based on whether they are good books, but rather on
whether they enjoyed the books and believed in what they were saying. Furthermore, there
is a certain demographic of people who uses and rates nonfiction books on
goodreads, which will skew the rating.
------
nearbuy
I found The Lean Startup relied heavily on anecdotes. The article gives it 5
stars, despite this being one of their main complaints about other books.
To give one example, the book talks about how in their 3d virtual chat game,
IMVU, as a shortcut, they initially had the characters teleport around because
they didn't have time to implement walking animations. They got some positive
feedback from users about the teleporting "feature", and concluded teleporting
was a great selling point and they shouldn't implement walking. The book then
starts teaching what lessons you should take from this story.
However, around the same time they were making IMVU, Linden Labs made Second
Life which did have walking animations and I believe was more successful than
IMVU.
It was a common pattern in The Lean Startup to point to a single example that
the author thinks worked out well for IMVU and extrapolate advice from it
while ignoring any counter-examples.
~~~
onlyrealcuzzo
The point of The Lean Startup isn't to give you advice to run a successful
VR/gaming company, though. It's to teach you that your assumptions are
probably wrong, and you should figure out ways to test them and get feedback
as quickly as possible. It's easy to run a bad test or misinterpret your
results -- which is I think what you're getting at -- but it's still important
to test things, and at least for me, it seemed like that was the main takeaway
from the book.
~~~
nearbuy
I think part of why I didn't enjoy The Lean Startup was because I read it too
late. By the time I read it, MVPs, "fail fast", and "make something users
want" had been the motto of startups for years, and the points the book was
making just seemed obvious. Maybe that wasn't obvious in 2011. (Although Paul
Graham's essays have been saying this since at least 2005 and are a much
better read IMO.)
But the other part was that it was just poor science, relying almost entirely
on anecdotes for its claims. Which is not to say his claims are necessarily
wrong, just that for any claim that isn't obviously true, there's no
convincing data to back it up. There's usually just one success story where it
seemed to work, and even within the story, it's hard to know if the strategy
was actually successful. Like, as I mentioned before, the book passes off
learning the users preferred teleporting to walking in IMVU as a success
story, but it may have in reality been the inferior choice. We can't know for
sure, but we do at least know other more successful virtual worlds have
walking. IMVU never even tested walking. It undermines his credibility when he
seems oblivious to his own possible failures. It's a lot of, "this is what we
did, and I think it worked out well". Despite advocating split testing, he
never split tested the techniques he advocated. There's no control - no
baseline to compare to.
Or when he prefaces the chapter on small batches with a third-hand story of a
father and two daughters who had to address, stuff, and seal a stack of
envelopes. The daughters, aged 6 and 9, felt it would be faster to address
them all first, then stuff them all, then seal them all. The father thought it
would be faster to do them one at a time. So they each took half the envelopes
and the father won.
I enjoy stories, but a 3rd hand anecdote about a father stuffing envelopes
faster than two children is just... why? That's not going to convince anyone.
He then calls back to this example several times when explaining how to apply
this at a software startup (release frequent small updates, use continuous
deployment). But even if one-at-a-time envelope stuffing is faster, and the
startup advice is right, one does not imply the other. Just cite an actual
study, or do an analysis of what was successful at other companies.
------
kilo_bravo_3
My rule: no index, no sale.
If the author didn't care enough to generate an index, what else did they not
care enough about?
A surprising number of books, especially "pop sci" books, don't include an
index.
~~~
StavrosK
What percentage of "good" books include an index?
~~~
marcosdumay
For non-fiction, I think I have always unconsciously followed the GP rule,
because I don't remember any book, good or bad, that lacked one.
Fiction, of course, rarely gets one.
~~~
madcaptenor
I can't think of any fiction that has an index. Do you have examples?
~~~
madcaptenor
Answering my own question: the American Society for Indexers has a list.
[https://www.asindexing.org/about-indexing/indexes-and-
indexe...](https://www.asindexing.org/about-indexing/indexes-and-indexers-in-
fiction/)
------
miloshadzic
Here's a rule: read stuff that you think is interesting.
~~~
jodrellblank
And what's your answer to " _But how can we identify those, without first
reading them?_ "
~~~
axlprose
by previewing/skimming them first?
I don't get how this is some big black/white issue. You don't have to finish
every book you start, nor do you only have to read a single book at a time,
and you can listen to audiobooks at high speeds if time is your main concern.
Because in the time it takes you to "research" whether a book is worth your
time or not, you could've already read a couple chapters of it and gotten
enough of a broad overview of what it contains to decide for yourself.
I've managed to read about 60-90 books a year for the last 6 years, and that's
only counting the ones I actually finish. Yet I'm sure the tally of books I've
only started/skimmed would be significantly higher than that if I bothered to
keep track of it.
~~~
jodrellblank
It's a big issue because " _There are somewhere between 600,000 and 1,000,000
books published every year in the US alone, depending on which stats you
believe._ " \- Forbes.
You aren't going to preview / skim literally millions of books which exist to
find the ones worth reading. You can't.
_I 've managed to read about 60-90 books a year for the last 6 years_
That's, what, 8 hours worth of book publishing in the USA, in 6 years?
~~~
axlprose
I clearly stated:
> _in the time it takes you to "research" whether a book is worth your time or
> not, you could've already read a couple chapters of it_
Meaning that there would be no actual trade-off in the time invested for you,
to take a slightly less superficial approach when evaluating books.
To compare the approach I proposed to a literally impossible task of
evaluating every single book ever written (as if anybody would actually even
be interested in reading all that) is quite disingenuous. This entire HN
thread and article are already excluding most books in existence anyway, by
the simple fact that everyone here has been mostly talking about "english non-
fiction" books specifically. They may still tally up to a large number, but
there's no point in exaggerating their quantity when we're all still just as
hopeless in trying to read them all anyway. All you did by pointing out this
impossibility is just needlessly restate the obvious, and has nothing to do
with what I said.
I'm already more than satisfied with the amount of books I read, and get a lot
out of them without experiencing any existential anguish over it, so I don't
understand why you're trying to throw my reading habits back at me as if
they're "not good enough" or something all of a sudden. Or is that what
reading is about these days? a mere measuring contest? Am I supposed to feel
ashamed I didn't meet some internet stranger's arbitrary criteria for a habit
that's supposed to benefit _me_? I proposed a viable alternative for
evaluating books that works more efficiently for me than the one presented in
the article, in case others are unsatisfied with their current reading habits.
I didn't claim it was something that was going to win you the "reading
olympics". If anything, I clearly supported the opposite by emphasizing that
people should feel less obligated to finish the books they start, because
feeling like you have to finish them is just going to cause needless anxiety
about the way you've invested your time.
Besides, who cares if your entire lifetime of reading could've been published
in a day? Is your goal in life to out-pace authors and publishers, or is it to
get fulfillment out of books? Because in my experience, reading a single
chapter out of a bad/mediocre book is a lot more fulfilling than reading a
bunch of amazon/goodreads reviews.
------
dlkf
> Malcolm Gladwell’s books are enjoyable—there is no denying he tells
> compelling stories—but ultimately they rely heavily on anecdotal evidence.
> In general, I find that books by journalists cherry-pick examples to fit the
> argument, and gloss over (or completely ignore) evidence to the contrary.
The irony.
------
veddox
Addendum to rule 1: Browse through the bibliography.
Quick but effective heuristic to gauge how serious the book is, and how well
the author knows his field. If there isn't one, don't bother buying it. If
it's a 20 page list of citations from reputable journals (or original sources,
if you're reading a history book), then the author doesn't have to be a
professor to be believable.
------
yoran
I would add #4: books that are at least 10 years old. I feel like knowledge
has to pass the test of time. This rule would rule out "Why We Sleep" for
instance, but you'll be able to read it in 10 years if the science behind it
still stands.
~~~
jodrellblank
Terence McKenna's suggestion was "if a book is less than 100 years old, don't
read it".
I think it's a combination of "contemporary books will be from the swirl of
life around you that you already know, old books will be from a different culture
and time and that difference is important" and "knowledge lost, from people
who are no longer living".
It would also have the test of time - Darwinian natural selection of
information. Something we're missing when we want the internet to keep data
forever: we should be letting data rot and be lost. Taking ongoing action to
keep it in the present is a vote for its importance; setting up a system which
preserves information without effort is cheating the system and leaving us
swimming in e-waste.
------
munificent
I like reading non-fiction. I think my rough heuristics for picking books are:
1\. Aim for topics I was already interested in before I knew the book existed.
There is something to be said for books so good they draw attention to the
topic, but that also increases the odds of the book's rating being based on
popularity and not quality. I'd rather read a good book about niche topic that
is particularly meaningful to me than a better book that just happens to hit
the zeitgeist. In other words, I pick a topic and hunt for books.
My most recent non-fiction books were Derek Wu's book on Spelunky, Chapman
Piloting & Seamanship, and Thinking with Type. None of those are going to make
Oprah's Book Club, but all were very enriching for me because they aligned
with areas that matter to me.
2\. Aim for books that are "canonical" according to people in the field.
Signals for this are lots of reviews, especially many reviews over a period of
years because that shows consistent relevance. When people I respect that I
share interests with mention a book, that's a strong signal. When reviews of
other similar books mention as a point of reference, that's a strong signal.
3\. Read a few pages and judge the quality of writing. I have read very very
few books where the underlying concepts were valuable enough to be worth
wading through bad writing. On the contrary, my experience is that deep, clear
thinkers produce quality at all levels of exposition. They wrap their good
ideas in good chapters with good paragraphs full of good sentences. Life is
too short for shitty prose.
I also have a simple rule for how to increase the _quantity_ of non-fiction I
read: Put it in the bathroom and don't bring my phone in there. You'd be
surprised how much text you can get through one poop at a time. This is great
for books where reading it is not super engaging but I want to _have read it_.
------
pcprincipal
Worth noting that Nassim Nicholas Taleb has some interesting explanations
about why practitioners write better and more accurate books than journalists
in "Skin in the Game". As you probably guessed from the title, practitioners
have a lot to lose by writing a bad book about their practice because it
impacts their livelihood. Journalists - while they do face some consequences
for bad books - don't pay the same price, and are more likely appealing to a
general audience when they write a non-fiction book.
------
baddspellar
I would argue that goodreads is _not_ a particularly good way to determine
whether a non-fiction book is worth reading. goodreads reviewers are just
regular people who evaluate books based on how much they enjoyed them, not how
accurate or useful the books are.
If I'm looking for a book I want to learn from, I scout recommendations from
journals, magazines, blogs, respected radio programs/podcasts. End of year
"best of" lists are also a good source of ideas. Then I read some in-depth
reviews of books that strike my fancy by reviewers who have reason to know
what they're talking about. Many of these have 4+ star ratings in goodreads,
some don't.
Also, a blanket rejection of books by journalists or other non-experts is
going to lead you to miss some really good books. That rule immediately called
to mind Tracy Kidder. So, "The Soul of a New Machine" is off limits. Really?
No thanks. I can think of many others.
~~~
hermanschaaf
Although I haven't read Tracy Kidder's "The Soul of a New Machine", Google
categorizes it as Biography, so if you buy that then by the rule of thumb it
would not be off limits. And thanks, I have added it to my to-read list!
------
jcoffland
A trick I use to "read" books much faster is to feed them through Google's TTS
but set a much higher than normal read speed. I call it robotting a book.
It takes some getting used to and requires more focus but it's very efficient
in terms of information dump. I've been able to increase the speed over time.
My wife thinks I'm crazy. I use headphones.
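A minimal sketch of the idea in Python, using the cross-platform pyttsx3
library as a stand-in for Google's TTS (the library choice, file name and
speed factor here are illustrative assumptions, not what the commenter
actually uses):

    # Read a text file aloud at roughly 1.8x the default speaking rate.
    # Assumes pyttsx3 is installed (pip install pyttsx3).
    import pyttsx3

    SPEED_FACTOR = 1.8  # raise this gradually as you get used to it

    engine = pyttsx3.init()
    base_rate = engine.getProperty("rate")   # default words per minute
    engine.setProperty("rate", int(base_rate * SPEED_FACTOR))

    with open("book.txt", encoding="utf-8") as f:
        text = f.read()

    engine.say(text)      # queue the whole text for speech
    engine.runAndWait()   # block until it finishes speaking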
~~~
Benjammer
I do this with informational youtube videos. I crank the speed to 1.5-2x
depending on how fast the presenter is talking, but I push it to the point
where I feel like I'm just barely keeping up and I try to get faster over time
where I'm setting more and more videos to 2x rather than 1.5x.
~~~
codemac
Use youtube-dl and you can train yourself to go past 2x pretty easily as well.
~~~
the_jeremy
I use the "video speed controller" extension to set the speed in increments of
0.1x. Works on any HTML5 player (netflix, youtube, most random websites).
~~~
Benjammer
Nice, I figured I'd probably get some of these suggestions posting this on HN,
thanks! I'll check these out. I've been meaning to find something that will
let me go past 2x.
------
misiti3780
Interesting post, but i would say you can easily read more than 3000 books if
you retire at 65 and live to be 90+. Also there are great non-fiction books by
journalists that are not about journalism, a few that come to mind are Bad
Blood, The Chickenshit Club, Angler, All The President's Men, books by Matt
Taibbi, etc.
~~~
cfmcdonald
For similar reasons to the "time value of money" concept, books read now are
worth more than books read in the future. e.g. reading GEB at 20 and having it
change the course of your life to become a Computer Science Ph.D. is very
different from reading it on your death bed.
Moreover, 3000 books is still only scratching the surface of all non-fiction
ever published in English (let alone other languages), so strong heuristics
will still be needed.
~~~
misiti3780
good point
------
Emma_Goldman
In my experience, the best way of finding good books has been:
1) Find authors and intellectual subcultures that you are interested in, and
follow what they are doing.
2) Read intelligent long-form reviews of books. Most good intellectual
magazines and journals have a review section.
3) Read whatever you can on Amazon preview and Google Books before buying a
book.
------
padolsey
For lack of ideal heuristics, I recently built myself a tool to find books
that "people like me" read most. This is similar to the rating heuristic,
except it relies on a collaborative filtering & clustering of sorts. The
internet is full of people who have reviewed and curated lists of books, books
that can thus be associated with each-other. And so now I can ask my tool the
simple question: For people who enjoyed these N books, what other books did
they most enjoy? This is something I've found incredibly lacking on the likes
of Amazon/GR/etc. recommendation mechanisms, hence why I built the tool. I
won't plug it here since I don't want to be spammy but I can share directly
with people if they'd like.
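As a rough illustration of the co-occurrence idea described above, here is a
minimal sketch in Python (the data format, weighting and names are made up for
the example; the actual tool presumably adds clustering and smarter scoring on
top of this):

    # "People who enjoyed these N books also enjoyed..." via co-occurrence
    # counts over reader histories / curated lists.
    from collections import Counter

    def recommend(seed_books, reader_lists, top_n=10):
        seeds = set(seed_books)
        counts = Counter()
        for books in reader_lists:        # one entry per reader or curated list
            books = set(books)
            overlap = len(seeds & books)
            if overlap == 0:
                continue                  # shares nothing with "people like me"
            for b in books - seeds:
                counts[b] += overlap      # weight by how similar the list is
        return [book for book, _ in counts.most_common(top_n)]

    lists = [
        ["GEB", "The Language Instinct", "Thinking, Fast and Slow"],
        ["GEB", "The Selfish Gene", "The Language Instinct"],
        ["Why We Sleep", "The Selfish Gene"],
    ]
    print(recommend(["GEB"], lists))      # toy data, for shape only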
------
publicfig
I do think these are some good guidelines that are definitely worth at least
considering if you're struggling with choice. One other method I've used to
great effect is to read books that I see cited at least 3 times in 3 different
places. So if I'm interested in or researching a topic and I see a book or
work cited (or recommended) by multiple independent sources, I'll add it to my
list. It really helps me find a lot of non-fiction/research-based works with
generic titles that I may not come across otherwise.
Obviously, YMMV. This can lead to issues (for example, ideological hegemony in
a lot of the fields I'm researching in) and it's by no means the only method I
use to pick works. But it has been really helpful for me!
------
gruppen
#2 shouldn't be a hard and fast rule, especially, with respect to textbooks
and, especially, when it comes to subjects like math. A student rating a
phenomenal book 2 stars because "...it was assigned and we had to buy our
book. But the book is too hard to understand, explains jack shit and totally
worthless. I give it two stars only because I bought it at 1/4 the price on
sale, otherwise this book barely deserves 1/2 star at most. Believe me, I am
not dumb - I sometimes even read straight out the textbook." is not rare.
I read comments only after I'm done with the book.
------
scandox
Wait. Good books stick around. 10 years is a reasonable minimum delay. What
are you going to miss?
And yes this does not apply to books about JavaScript.
------
amai
A simpler rule: Read what nobel prize winners read:
[https://www.theguardian.com/books/2015/apr/03/steven-
weinber...](https://www.theguardian.com/books/2015/apr/03/steven-
weinberg-13-best-science-books-general-reader)
------
awillen
I think the journalist rule is good for books that go deep into a topic, like
many of the ones you've cited here. On the other hand, I would expand your
caveat around biographies to say that journalists are generally very good when
it comes to writing about events.
Barbarians at the Gate is a good example of this - it required authors who
were able to dig deep into _what_ happened. There's obviously some info on why
things happened there as well, but the primary purpose of the book is to
inform the reader of what occurred, which is a good use of the journalistic
skill set.
------
fitzroy
Paul Fussell's book, Class: A Guide Through the American Status System is one
of my favorite non-fiction books. I consider myself lucky to have discovered a
copy at a used book store 20+ years ago. Unfortunately, it only has a 3.95 on
Goodreads, so I guess it's going in the trash.
The Empathy Exams by Leslie Jamison only has a 3.63 and I don't think she's
really an "expert" in any of the specific topics she writes about. It's
brilliant ...but maybe I can still return it?
This system is a great way to hyper-optimize for narrow-mindedness.
------
User23
I pick the shortest one on the shelf for the subject. That’s how I found
Einstein’s Relativity and Feynman’s QED. Omit needless words!
~~~
tasty_freeze
Then check out the "A very Short Introduction To ..." series from Oxford
University Press.
[https://www.amazon.com/s?k=oxford+very+short+introduction+se...](https://www.amazon.com/s?k=oxford+very+short+introduction+series)
~~~
stuxnet79
My experience with the Very Short Introduction series is mixed. I kind of wish
there was more editorial oversight to ensure that the style, content and
pacing are more consistent from book to book.
Some books I've read don't really follow the spirit of "Very Short
Introduction" and are rather dense. The quality of writing also varies quite a
bit.
Overall though, they are a good first stop if you want to get basic
familiarity with a topic.
~~~
User23
I'm actually a fan of dense. If I have to reread a well written paragraph five
times to understand that's fine by me. When I'm reading serious non-fiction
(currently Predicate Calculus and Program Semantics by Dijkstra and Scholten)
I consider it normal to just make it through a handful of pages per day.
------
thisrod
The problem I see with these rules is that you're only going to read new
books.
I use a heuristic that the best book about a given topic was most likely
written half as long ago as people have been thinking about that topic. People
write more books now, but they get less interested in old topics as time goes
by, and those effects tend to cancel out.
------
amai
I know a simpler rule: Just have a look at my list of recommended nonfiction
books:
[https://github.com/asmaier/littlelists/blob/master/books_non...](https://github.com/asmaier/littlelists/blob/master/books_nonfiction.md)
;-)
------
mindcrime
Ratings aren't fungible from person to person, so I reject the whole "4.0
rating" thing. A book could have a 2.0 rating and I might still find it
useful. Generic ratings like that are, at best, a VERY weak signal, and
certainly not something I'd ever incorporate into an ironclad rule.
------
yoz-y
For non fiction I only read what was directly recommended by somebody whose
taste I already know. It takes discovery out of the equation and I definitely
miss good books, but since I do not read that many non-fiction books this
method is highly efficient.
------
kieckerjan
I guess many journalists view themselves as the ideal outsider. If you deny
them a place on your reading list (or demand that they only write about their
job) you deny that whole vocation. The world would be a poorer place if we all
did that.
------
int_19h
#1 is kinda tricky to apply to books that take down pseudo-science. In those
cases, who is and isn't an "expert in the field" (and whether the field even
meaningfully exists) is exactly the question.
------
jcoleh
I would recommend still reading books written by journalists since some can be
very good. But for these books adjust your Goodreads rating threshold to
something more like 4.2 or 4.3 for journalist-written books.
------
EastSmith
I am using the 4 star Goodreads rule and it works pretty well alone (without
the rest of the nice suggestions in the article).
I also use the 6.5 IMDB rule for movies and the 8.0 IMDB rule for TV shows.
------
CharlesColeman
My rules: be interested in some topic, ask around for recommendations, then
pick one or two of the recommended titles based on how people described them
in the responses you got.
------
thecleaner
Do we really need to read a book a week? I mean, just reading a book once won't
really help much. Isn't it better to read less but read those books deeply?
------
urubu
Rule #1 should probably be complemented with Tyler Cowen's proviso: If the
book lists PhD after the author's name, run the other way.
------
meijer
Wrt to "How the Mind works": The beginning is really pretty dry, but it gets a
lot more interesting in the later chapters.
------
leopoldsw
Just follow Bill Gates' blog. Lol.
------
cryptosteve
I love reading nonfiction. But, as I am sure you can relate, I only have
limited time. Even if we were able to make enough time to read one book a week
on average—certainly not the case for me right now—we would still only be able
to read around 3,000 books in our entire adult lives. At my current pace, the
real number will likely end up being far lower.
------
droithomme
> no books by journalists, unless it’s about journalism
Charles Mann's 1491 and 1493 remain the best books on the topic.
Robert Whitaker's Mad in America and Anatomy of an Epidemic likewise.
------
crimsonalucard
Rules restrict the diversity of your learning and restrict your knowledge.
You don't have time for everything, so it's good to have rules. But always
remember that on occasion you need to break your own rules otherwise your
knowledge becomes biased.
My extra rule to overcome this bias is: if anyone recommends me their favorite
book, I will read it, no matter how stupid I believe the recommendation is and
no matter how many rules it violates.
I found the DaVinci code this way. That book was a huge mistake, but still I
now have actually read the book and have the definitive knowledge to know it's
a mistake.
------
jamesrcole
[EDIT: the post says "Prefer books by experts in the field" and says these are
people who have spent their lives researching that field. It gives GEB as an
example of such a book. That claim is factually incorrect and calls into
question the idea of requiring books to meet that criteria. Does anyone of the
many people who've downvoted my comment care to explain why you find it
objectionable?]
> _The best nonfiction books I have read have invariably been by folks who
> spent their lives researching that particular issue. A couple of books in
> this category immediately come to mind: Why We Sleep, The Language Instinct,
> Gödel Escher Bach._
Hofstadter was only 34 when GEB was published
~~~
Retra
That doesn't refute the fact that he spent his life researching the content in
GEB.
~~~
jamesrcole
"The idea that changed Hofstadter’s existence, as he has explained over the
years, came to him on the road, on a break from graduate school in particle
physics. Discouraged by the way his doctoral thesis was going at the
University of Oregon, feeling “profoundly lost,” he decided in the summer of
1972 to pack his things into a car he called Quicksilver and drive eastward
across the continent. Each night he pitched his tent somewhere new (“sometimes
in a forest, sometimes by a lake”) and read by flashlight. He was free to
think about whatever he wanted; he chose to think about thinking itself. Ever
since he was about 14, when he found out that his youngest sister, Molly,
couldn’t understand language, because she “had something deeply wrong with her
brain” (her neurological condition probably dated from birth, and was never
diagnosed), he had been quietly obsessed by the relation of mind to matter.
The father of psychology, William James, described this in 1890 as “the most
mysterious thing in the world”: How could consciousness be physical? How could
a few pounds of gray gelatin give rise to our very thoughts and selves?
Roaming in his 1956 Mercury, Hofstadter thought he had found the answer—that
it lived, of all places, in the kernel of a mathematical proof. In 1931, the
Austrian-born logician Kurt Gödel had famously shown how a mathematical system
could make statements not just about numbers but about the system itself.
Consciousness, Hofstadter wanted to say, emerged via just the same kind of
“level-crossing feedback loop.” He sat down one afternoon to sketch his
thinking in a letter to a friend. But after 30 handwritten pages, he decided
not to send it; instead he’d let the ideas germinate a while. Seven years
later, they had not so much germinated as metastasized into a 2.9‑pound,
777-page book called Gödel, Escher, Bach: An Eternal Golden Braid, which would
earn for Hofstadter—only 35 years old, and a first-time author—the 1980
Pulitzer Prize for general nonfiction."
[https://www.theatlantic.com/magazine/archive/2013/11/the-
man...](https://www.theatlantic.com/magazine/archive/2013/11/the-man-who-
would-teach-machines-to-think/309529/)
~~~
Retra
What is your point? When Hofstadter was 35, he had spent his adult life
studying a subject which culminated in the publication of a book about it. He
wasn't passively throwing words on pages to get anything published, he was
publishing his life's work.
You seem to be trying to argue that -- because he continued to live afterward,
he somehow didn't spend his _entire_ life on the book. But that's _not_ what
was being asserted -- only that he had spent "his life" up to the time of the
work's publication on the work. And it's not even relevant if it were meant as
you seem to be assuming, because Hofstadter has _continued_ to study the very
same subject ever since. He's also published more books on the very same
matter in the following years. It was his life's work then, and it continues
to be today.
~~~
jamesrcole
> When Hofstadter was 35, he had spent his adult life studying a subject
First, a nitpick - the book was published when he was 34, and he was even
younger when he wrote it.
But the main point is that the post is clearly talking about people who have
spent the entirety of a lifetime studying an area, not someone who has spent
their adulthood _so far_ studying the subject.
And Hofstadter clearly hadn't devoted the entirety of his adulthood up to age
34 studying the subjects of GEB. The quoted passage says that his formal study
had been in particle physics.
~~~
Retra
I disagree with your assessment of what the main point is. I don't think
anyone else is interpreting it that way, and can confidently assert that the
error is entirely on your end.
Moreover, Hofstadter's study of physics isn't unrelated to his study of
intelligence. I personally got a degree in physics to study language and
intelligence, because the way physicists use language, analogy, and simple
concepts to understand the world is particularly effective and interesting. So
I can tell you first-hand that they are related. In fact, anyone who studies
philosophy is probably making a serious mistake to not study physics first.
~~~
jamesrcole
> _I don 't think anyone else is interpreting it that way, and can confidently
> assert that the error is entirely on your end._
The post is very clear that it's talking about expertise in the sense referred
to in my comments (which, as also indicated in my comments, I don't fully
agree with):
_" Rule #1: Prefer books by experts in the field The best nonfiction books I
have read have invariably been by folks who spent their lives researching that
particular issue. A couple of books in this category immediately come to mind:
Why We Sleep, The Language Instinct, Gödel Escher Bach.
Positive indicators of this in a blurb may include “Professor in [field
directly related to the book’s topic]”, “Long-time researcher in [field
directly related to the book’s topic]”._
Note how they say "Professor in" and "Long-time researcher in".
The way you're using a term, a 21 year old can have "spent their life
researching the topic" if they've been focused on it over the previous three
years.
~~~
Retra
You don't need to focus on these kinds of semantic quibbles if you want to
miss the point. You can do that directly.
Albert Einstein was 21 when he published his first paper, and 26 when he
published his best. Please tell me that he didn't spend his life researching
physics, even then.
~~~
jamesrcole
You've completely missed the point of my comments. Go back and read them more
carefully.
~~~
Retra
I've reread your comments and stand by what I said. Perhaps you should confer
with someone else.
------
AbyormPiranha
Read papers on arxiv and forget books.
~~~
mathnmusic
Papers are written for peers, not beginners.
|
{
"pile_set_name": "HackerNews"
}
|
Ask HN: “Pay to turn off ads” SaaS - lenomad
Scenario:
An author/website publishes quality content and makes money with ads. There are some regular readers who prefer to remove ads (or who use an ad-blocker but would like to support the site).

The website would get a cost-per-visit (capped to a maximum per month from one user).

The service would support multiple websites, and it should be super-easy, like clicking a button "Remove Ads" to remove ads and start the micropayments.

I have a feeling that such a service will be popular soon (but do not plan on building it myself, if it doesn't exist)
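A back-of-the-envelope sketch of the billing rule described above (a per-visit
charge capped per site per month); the prices and function here are purely
illustrative assumptions:

    # Charge a reader per ad-free visit, capped per site per month.
    def monthly_charges(visits_by_site, cost_per_visit=0.02, monthly_cap=1.00):
        """visits_by_site: dict mapping site -> ad-free visits this month."""
        charges = {
            site: min(visits * cost_per_visit, monthly_cap)
            for site, visits in visits_by_site.items()
        }
        return charges, sum(charges.values())

    charges, total = monthly_charges({"blog-a.example": 80, "news-b.example": 3})
    print(charges, total)  # blog-a hits the $1.00 cap, news-b owes $0.06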
======
masonicb00m
Tried it.
See
[https://www.google.com/contributor/welcome/](https://www.google.com/contributor/welcome/).
Users who don't like ads can just run an ad-blocker, so this hypothetical
service would be competing with free, and would only convert the fraction who
feel guilty enough about running ad-block to pay what the publisher charges.
The bigger sites will already have set up such a system, so you're left
chasing smaller sites. The smaller ones are probably already on AdSense, so
Google's experiment with Contributor will address them.
~~~
iMerNibor
> The bigger sites will already have set up such a system
The main issue what that is you need to pay a lot of money if you visit a lot
of big sites and dont like ads
5$ a month for one service vs. 10 or so is a big difference
|
{
"pile_set_name": "HackerNews"
}
|
Flavors of programming ... - RiderOfGiraffes
http://pozorvlak.livejournal.com/135255.html
======
gruseom
I find it interesting that you (implicitly) like Data Munging better than
Clever Algorithms. I agree that the former is orders of magnitude more common.
I haven't encountered a Clever Algorithms problem very often at all on
commercial projects. My current project is an outlier insofar as we've been
doing algorithmic work for months. It's fun, in the way any good hacker would
expect such a challenge to be, but it's also stressful to work on something
for a long time and not know for sure whether you'll end up with a solution.
(Especially when the context is a startup and not a research project.) It's
much easier when you know for sure you can get there, you just have to figure
out how. Thankfully, the Algorithms category gets more and more fun as you
knock off significant chunks of the problem. Once you see the light at the end
of the tunnel, you're home free. Unless, of course, you got it wrong. :)
Edit: by the way, do you actually use APL/J/K in the way you mentioned? I
really like that paradigm. I think it deserves a place alongside OO and the
others as a fundamentally different approach to programming. However, I
haven't used it nearly enough.
Edit 2: I would distinguish two different kinds of Data Munging. One is Data
Transformation, where you have data in some original context and it needs to
go through a series of transformations or passes until you've extracted or
computed what's suitable for your problem. That is way fun. The other type,
though, sucks rocks. I call it Meat Grinding. That's when you have the _exact
data you need_ in a form that you can't use because of some purely technical
limitation. A common example is you have data in a database or a DTO when what
your code needs is a domain object, so you have to copy the properties over
like this:
myStupidObject.CustomerName = stupidDataObject.CustomerName
This isn't so much a programming problem as an integration one. (You sort of
touch on it at the end.) It's also one of the worst things about programming
in less powerful languages where you can't easily write a program to do the
gruntwork for you.
Edit 3: I'd also argue that your "API spelunking" nightmare is at its nastiest
in OO systems. In poorly designed pre-OO APIs (like Win32), it might take a
long time to figure out how to call a function or what to do next; there might
have been some ugly one-off struct you had to populate, etc.; but at least you
didn't have to figure out how the hell to make a Bar out of a Foo so you can
pass it as the third argument to Baz, when neither Foo nor Bar has anything to
do with your problem. But with poorly designed OO APIs, these absurd hurdles
seem endless. I used to compensate for this by studying such code
archaeologically (if I couldn't avoid it altogether), in order to learn how
the absurdities had arisen over time. This was a way to escape Dostoevsky's
definition of the easiest way to drive a man insane: make him move a pile of
dirt from one end of a prison yard to another _and then move it back_.
~~~
RiderOfGiraffes
I understand your use of the pronoun "you", but it's not me. I work almost
exclusively on algorithmic stuff, and when I do work on data munging it's the
direct application of algorithms just developed. So I'm lucky.
~~~
gruseom
Oops, I assumed from your other comments that the OP was yours. Sounds like
you've got a good situation though.
------
david
I don't think one of my favorite (OOPish) flavors was included. I think of it
as something like designing an organism out of various independent parts,
figuring out how they will work together and respond to input.
For me a lot of more complicated UI design is like this, I have objects
representing each part of the interface and spend a lot of time sending
messages between objects and figuring out how to handle user events. I guess
the focus isn't so much on algorithms or if-statements so much as high-level
architecture.
------
diN0bot
At first I thought this would be good. Data munging is definitely a type of
programming that occurs.
But then the list just gets ridiculous, almost poking fun at the different
jobs one might do. More importantly, real tasks are left off.
I flagged this submission for being boring.
~~~
jacquesm
The list is meant with a wink, the references to the maze are really a funny
way to get the point across. Grow a sense of humour and flag stuff that really
needs flagging, such as this: <http://news.ycombinator.com/item?id=943367>
In case you missed, it, let me spell it out for you:
Programming is idiomatic at a higher level than you'd think by casual
inspection of the code, when you zoom out to the level of individual programs,
maybe even projects there is a very limited set of classes that you can group
most if not all programs in to.
I'm missing one for web programming, so let me supply that one here:
WebApps: data gets moved from some datastore, through some processing
machinery to produce html, then gets sent to the user, the interactions get
captured in forms or ajax and pass a bunch of processing machinery on the way
to some datastore. Repeat until machine failure.
~~~
RiderOfGiraffes
Have you put that in the comments on the site? It fits ...
|
{
"pile_set_name": "HackerNews"
}
|
The asteroid that wiped out most life on Earth allowed microbacteria to thrive - Tomte
https://edition.cnn.com/2020/02/04/world/asteroid-bacteria-killed-dinosaurs-scn-trnd/index.html
======
masonic
"almost took all of life on Earth along with it"
Absolutely false. The _Permian_ extinction was far closer to this, but the K-T
event didn't even purge _mammalian_ life from _North America_ , let alone
"almost all" life in all forms.
|
{
"pile_set_name": "HackerNews"
}
|
Riecoin breaks world record for prime sextuplets, twice [pdf] - gatra
http://riecoin.org/Press%20release%202014-11-26.pdf
======
gatra
Please learn from the project at riecoin.org. Latest news: "Last week, Riecoin
– a project that doubles as decentralized virtual currency and a distributed
computing system - quietly broke the record for the largest prime number
sextuplet. This happened on November 17, 2014 at 19:50 GMT and the calculation
took only 70 minutes using the massive distributed computing power of its
network. This week the feat was outdone and the project beat its own record on
November 24, 2014 at 20:28 GMT achieving numbers 654 digits long, 21 more than
its previous record."
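For context, a prime sextuplet is usually taken to mean six primes of the form
p, p+4, p+6, p+10, p+12, p+16 (the densest admissible pattern). A minimal
Python sketch for checking a candidate, using sympy's isprime, might look like
this - purely illustrative, since the record numbers above are hundreds of
digits long and are found with sieving techniques, not a naive scan:

    # Check whether p starts a prime sextuplet (p, p+4, p+6, p+10, p+12, p+16).
    from sympy import isprime

    OFFSETS = (0, 4, 6, 10, 12, 16)

    def is_sextuplet(p):
        return all(isprime(p + d) for d in OFFSETS)

    # Sanity check against the smallest sextuplet starts:
    found = [p for p in range(5, 20000) if is_sextuplet(p)]
    print(found)   # expected (per the standard tables): [7, 97, 16057, 19417]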
------
brey
very neat. it's bothered me just how many GWh we've collectively poured into
bitcoins.
is this of any practical use (cryptography?), or is it for pure mathematical
interest?
(not saying the latter is inferior ...)
~~~
tacotime
I think the answer might be that for now it's mathematical interest but no one
really knows what it might or might not turn into. I think the more of these
sextuplet, twin etc. primes that we discover the closer it may be bringing us
to developing a grand theory of primes and solving difficult problems like the
twin prime conjecture and whatever else is out there. And if we could do that
then that would be where the practical ramifications would start to possibly
emerge. So for now I see it as just more data that might or might not help
spark something in the mind(s) of some mathematical genius(es) one day.
I am in no way an expert and this is just amateur speculation but I wanted to
write it anyways because I was (am) wondering the same thing.
~~~
xamuel
Mathematician here, have to disagree. Knowledge about individual primes (or
n-tuples) is pretty much useless for understanding the big picture of them.
And if things like "solving the twin prime conjecture" had practical
applications, we wouldn't need to solve them to reap them: if there's a
crypto-technique that hinges on TPC being true, you can start using it today,
and if it somehow breaks, congratulations, you disproved the conjecture.
~~~
wbhart
I absolutely agree with that. But there can be exceptions. For example the
ternary Goldbach conjecture boiled down in the end (after a lot of analytic
"pencil sharpening" as the author called it) to an exhaustive computer search
up to some limit.
Of course that was an exceptional case. There were some very committed
computational people who had been working on related stuff for years prior who
just happened to have the code that could just about reach the required limit
with sufficient hardware, which we just happened to have sitting around in
between working on projects paid for by our grant.
By the time the real theoretical work was done with pencil and paper, the
computational task was just completing, meaning the result could be stated
unconditionally.
Whether you would say that the computer search allowed us to learn anything
mathematically useful, though is a state of mind. Mathematicians care much
more about techniques than knowing large lists of otherwise random looking
numbers, even if conjectures about such lists do motivate looking for new
techniques, and even if there is some prestige in having established a long-
lived conjecture.
~~~
Someone
I would say that, for now, the statement
*"any odd integer > 5 can be written as the sum of three primes"*
is at best a tiny bit better than the
N _" any odd integer > exp(3100) can be written as the sum of three primes"_
that we had before or even than the even weaker
*"any sufficiently large odd integer can be written as the sum of three primes"*
That would change (for me) if we ever link that magical constant 5 to another
magic constant 5 in math, or if we create a series of related constants (sums
of 4 primes? Sums of three numbers of the form pq with p and q prime? Who
knows?) and show a pattern in them.
That, I think, is another good reason to try and pinpoint the exact values of
these limits. The values themselves are dull, but knowing them may give
mathematicians ideas about why they have the values they have. It may often be
(relatively) dumb work, but still worth doing.
------
lappa
This scamcoin is uninteresting. Bitcoin has come up with the smallest sha2
hashes (which is also pretty uninteresting). The hope behind Riecoin is that
some breakthrough may come about for finding prime sextuplets. The hope for
Bitcoin is that no breakthrough will come about for finding small sha256
values.
Basically the purpose for Riecoins PoW is to be broken which doesn't make
invest from the perspective of someone using it as a currency.
~~~
lappa
s/invest/sense/
|
{
"pile_set_name": "HackerNews"
}
|
Don’t Use Icon Fonts - yread
https://cloudfour.com/thinks/seriously-dont-use-icon-fonts/
======
whywhywhywhy
It's not mentioned much but these are also a nightmare for maintainability. If
your designer uses these and leaves the project then the next designer opens
the file and it's just words and boxes where icons should be.
At least if SVGs are used then even if the previous designer leaves the
project in a shambolic state with no sources then you can still salve the
vectors from the live site.
Complete nightmare trying to do that with icon fonts.
|
{
"pile_set_name": "HackerNews"
}
|
Top 40 Static Code Analysis Tools - DmitryNovikov
https://www.softwaretestinghelp.com/tools/top-40-static-code-analysis-tools/
======
hideo
PSA: I'm a fan of static analysis tools but if you are in the position of
making decisions about technology at your organization, please be aware that
they are NOT a substitute for nuanced and detailed design.
In my personal experience, combining strongly-typed compiled languages with
extensive static analysis has helped "eliminate" bugs at the syntax level, and
to some extent at the semantic level. But the stuff that really causes issues
is often at the level of the pragmatics of your software. (Going vaguely off
the definitions here [http://www.cs.sfu.ca/~cameron/Teaching/383/syn-sem-prag-
meta...](http://www.cs.sfu.ca/~cameron/Teaching/383/syn-sem-prag-meta.html) )
I think given the overwhelmingly large number of frameworks around, people
tend to make snap judgments around how to use these tools (and the names of
these tools don't help - "Findbugs" is a bit overkill :).
Make sure your software has a real set of designs before you start writing
code (i.e. block, sequence, control/data flow diagrams and use-cases) and
it'll be worth more than any static analysis tool, and it takes far less time.
Static analysis can be layered in later if you have time.
~~~
jciochon
On that note, do you have any links with tutorials/guidance on (perhaps
typical/common) designs, and their related diagrams? I haven't written proper
design docs since school--perhaps it's time to revisit the idea :)
|
{
"pile_set_name": "HackerNews"
}
|
IE6 Cheatsheet: How To Fix Internet Explorer 6 Bugs - amackera
http://www.virtuosimedia.com/tutorials/ultimate-ie6-cheatsheet-how-to-fix-25-internet-explorer-6-bugs
======
billybob
"The best strategy for dealing with Internet Explorer 6 is not to support it."
Amen.
~~~
jsm386
That is the _easiest_ strategy, but the author doesn't mean it, and the entire
collection of info backs that up. He even says so a few lines later:
"This isn't one of those rants about IE6 or a campaign to try to kill it.
There are enough of those around the web, but they don't help if you need to
support IE6 because it still has a significant enough marketshare that you
can't ignore it for business reasons. No, this is the resource you've been
hoping for."
But yes, I'd love not to support IE6 in all of my work, but that's just not
happening...yet.
~~~
trebor
I agree, jsm386.
It's often simpler, though not necessarily easier, to code a design with
progressive enhancement in mind rather than graceful degradation. Designing a
structure that functions properly in IE6 is not difficult.
(Not that I _like_ IE6, I just have to support it.)
------
warfangle
Noticed this site was causing firefox to hang for about a minute and peg my
CPU, so I profiled it:
The Addthis script you're using has an onReady method that gets called 94
times on pageload and takes an average 12 seconds to run each time. During
this time Firefox gets the dreaded beachball/umbrella.
I'm having similar issues on a site we're using sharethis on, but not quite
that bad on page load - it does increase the constant load on the cpu by about
15% though.
Be careful when using these widgets...
(macbook pro; 2.4ghz; 4gb ram; ff 3.0)
~~~
NathanKP
I didn't have any problem opening it Safari. ;)
Edit: I just tried it in Firefox 3.5.2 and no problem either.
I'm running Mac OS X Snow Leopard on a Macbook 2ghz duo, 2gb ram.
Possibilities: It could be Firefox 3.0, try updating. Or it might be the 64
bit power of Snow Leopard that makes it faster for me. But then again Firefox
is still a 32 bit application.
~~~
warfangle
Even so, it's pretty egregious that something so simple would be so heavy.
Works fine in safari for me, too, but according to statcounter about 15% of
people are still using FF 3.0.
~~~
NathanKP
Yes, I personally removed Addthis from my website. Social bookmark sharing
isn't worth the extra overhead. And Sharethis is even worse than Addthis.
------
makecheck
If there's a culture that refuses to upgrade software, then the solution
requires _both_ the education of those who are not upgrading, _and_ the
solutions to let them do it. For example, present the relative costs of
staying put (mainly risk to the business, e.g. several million dollars a few
years from now if everything breaks), against the costs of hiring someone to
fix it. Bring in the people who could fix it, and find some quotes.
In other words, the people who made the irresponsible decision to stick with
old software should have to cover the cost of fixing it. Every site on the
Internet should not continue to cover this cost for them.
------
Jasber
Another interesting one I've stumbled on is the underscore hack. This is a way
to apply CSS only to IE 6: [http://designpepper.com/2008/01/09/defeating-
ie6-with-emphas...](http://designpepper.com/2008/01/09/defeating-ie6-with-
emphasis-the-underscore-hack/)
    #page { background-color: white; _background-color: black; }
This applies background-color black to IE 6 only, and white to everything
else.
~~~
run4yourlives
You shouldn't really be including hacks straight into your main css. The main
reason being that you have no idea how future browsers will handle the hack
because is basically undocumented, and this might cause you - or your client -
grief in the future.
Include a new stylesheet via conditional comments and you can isolate any IE
browser while not needing any hacks. (i.e., you can just declare the
background color directly)
~~~
pbhjpbhj
Most clients don't want to pay you to create an IE-only CSS file. It takes
minutes, I agree, but if they won't pay a few quid extra then why should I do
it?
TBH I usually do this sort of thing (add an ie6.css file instead of a hack) as
I tend to over-engineer rather than keeping to the agreed spec and work
schedule.
------
sanj
Has anyone explored using user agent sniffing to serve up completely different
pages for IE6? Create a set of simple vanilla pages (much like you might for
mobile) that are independent so that you don't mess up modern browsers with
the hackery.
~~~
warfangle
Don't need to. You can include completely different stylesheets and javascript
files simply with IE conditional comments (IE parses what's in them based on
simple rules like, version >= 7; all other browsers ignore them as comments):
<http://www.javascriptkit.com/howto/cc2.shtml>
If you code to standards and follow unobtrusive / degradable javascript
patterns, your site should work without stylesheets and javascript anyway :)
------
NathanKP
What a useful cheatsheet! It's just too bad that it is so useful. IE6 should
have died long ago, forever replaced by Firefox.
~~~
RyanMcGreal
At this point I'd be willing to live with "...forever replaced by IE8".
~~~
NathanKP
I agree. Even IE8 would be better than nothing.
~~~
alagu
Actually, the debugging tools in IE8 are on par with Firebug.
------
theone
Really a nice compilation.. thanks for the effort
------
mitko
Why?
------
_ck_
Best single IE6 CSS cheat for 90% of my problems:
zoom:1
Learn it, love it, use it.
It makes IE use its old renderer calculations (hasLayout) for whatever element
you put it on, so it often fixes disappearing items, incorrect overflow,
floats, padding/spacing etc. If your style looks correct in other browsers,
give zoom:1 a shot on IE6 and see if it fixes things before wasting another
minute.
ps. For those that are validation crazy, there are other triggers for
hasLayout that will validate (certain padding, margins, etc.) but they vary
from element to element, whether they have ancestors, etc. and you'll waste a
great deal of time figuring them out.
~~~
pbhjpbhj
Zoom is not a W3C standard though ...
~~~
sam_in_nyc
Is this a joke? After reading about how zoom:1 is essentially a miracle, you'd
be hesitant to use it because it doesn't pass a very arbitrary and often
stupid set of rules?
I'm not meaning to put you down, but I just don't understand this validation
craze -- beyond the fact that you can say "it validates, does yours?" It seems
as though it is some sort of psychological/sociological game... and I was
hoping us developers were beyond that sort of thing.
I would be more impressed if there were a validation that said: "This
_actually_ looks like it should in X, Y, and Z browsers."
~~~
pbhjpbhj
Whether something "looks like it should" in the browsers is a function of the
browser makers. Whether code is valid and meets the spec is a function of the
coders. If the browsers are well programmed then designing to spec means that
the pages will appear as expected in those browsers.
The MSIE team basically created an undocumented system called hasLayout and
didn't bother to tell anyone until the fixes had been found - if I add
"zoom:1" there's nothing to say that the next update of MSIE won't balk at
that or that W3C won't adopt zoom using % making my pages potentially (only in
later IE I'd warrant) zoom to 1%.
As for "looks like it should" I'm sure you know that (X)HTML & CSS are not
intended to convey the same visual expression to every browser. If you want
identical visuals then web pages aren't what you're after.
Yes I have used zoom on pages when clients haven't been bothered about web
standards. I contend that accessibility of pages by _all_ well-made browsers
can only be assured by coding to the agreed standards.
MS: _whacks man on head_
man: "hey, wtf?"
MS: _whacks man on head_
man (putting on helmet): "please don't do that it's not right" (doesn't meet
the arbitrary standards of public decency!)
MS: "Put a helmet on or when we whack you it will hurt bad"
MS: _whacks_
man: "Stop hitting me and it won't hurt at all"
------
joubert
1\. Upgrade.
~~~
DrJokepu
I must mention that the main reason IE6 is still around is because XP is still
around.
~~~
joubert
Isn't IE 6 still around because many companies still standardize on it for
their intranets? I would think the answer is Yes as evidenced by MSFT's recent
life extension to 2014.
~~~
DrJokepu
Let me put it like this: Microsoft is only supporting (issuing security
updates to, etc) IE6 because it was shipped with XP and Microsoft is still
supporting XP. XP can't enter into the legacy version phase due to the large
number of customers refusing to upgrade. If Microsoft could phase out XP, it
could phase out IE6 as well. If Microsoft phased out IE6, companies would be
forced to phase out IE6 as well. Hence, everyone who's refusing to upgrade to
Vista (or Windows 7 soon) is helping keeping IE6 alive.
~~~
trebor
How did you come to this conclusion? IE4 and IE5 were swiftly phased out by
the "new, improved, IE6!" if I remember rightly, and I think ActiveX was
seriously improved and overhauled with IE6 which many companies use(d) in
their intranet.
I don't mind Vista any longer, and this was written on Vista, but I don't
_like it_. If you want to talk about refusing to upgrade... I still like
Windows 98 better than all "improvements" upon it, because it never crashed on
me.
:-)
~~~
jsm386
last week i booted an old box with ie6 as the only browser around, windows
update was pushing ie8 as a critical update. i didn't apply it, because a real
install of ie6 (not the stand alone collection) is what that system is used
for...but it is nice to know that msft is pushing users to upgrade with a bit
more emphasis than in the past (an optional upgrade, though critical upgrades
are optional)
~~~
trebor
Jsm, you might be happy to know, but any Windows system with IE can go back to
IE6. I just found IETester recently, and it's been an awesome pack to test
compatibility with IE5.5 - IE8!
Check it out here: <http://www.my-debugbar.com/wiki/IETester/HomePage>
(I was mainly asking DrJokepu.)
|
{
"pile_set_name": "HackerNews"
}
|
On Max Perkins, One of America's Greatest Editors - samclemens
http://lithub.com/on-max-perkins-one-of-americas-greatest-editors/
======
Finnucane
I read Berg's biography back when I was a noob editor, and I recall it being a
good read. It is not an exaggeration to say that several generations of
editors were influenced by him (an example that would be almost impossible to
emulate today). The 'new' authors mentioned--Paton (Cry the Beloved Country)
and Jones (From Here to Eternity)--ended up being the last books he edited
(spoilers: he dies at the end).
|
{
"pile_set_name": "HackerNews"
}
|
CHIP-8 Interpreter, Assembler and Disassembler - ageofwant
https://github.com/wernsey/chip8
======
ageofwant
The Author has a nice blog here
[http://wstoop.co.za/chip8.php](http://wstoop.co.za/chip8.php)
|
{
"pile_set_name": "HackerNews"
}
|
The Longest Possible Chess Game - monort
https://www.chess.com/blog/kurtgodden/the-longest-possible-chess-game
======
tromp
The article itself doesn't arrive at the correct number of 5898 moves, but
some of the commenters do. At my own page
[http://tromp.github.io/chess/longest.html](http://tromp.github.io/chess/longest.html)
you can play through a 3-moves-shorter game.
~~~
ramshorns
It seems like the alternating of irreversible moves makes the 5870 shorter
than it could be. The pattern of (49.5 + b) + (49 + w) + (49.5 + w) + (49 + b)
+ … could be replaced with several moves in a row by black, then several in a
row by white, allowing some of the 49s to be 49.5 instead. Is that where the
discrepancy comes from?
~~~
tromp
Yes, that would explain most if not all of the 56-ply discrepancy. In the game
on my page you can indeed see black making irreversible moves on move
50,100,150, ... 450 and white's first irreversible moves comes only at move
500.
------
schoen
A user who is banned asked in this thread why you can't have an unlimited-
length game where kings just move back and forth. The answer is
[https://en.wikipedia.org/wiki/Fifty-
move_rule](https://en.wikipedia.org/wiki/Fifty-move_rule)
[https://en.wikipedia.org/wiki/Threefold_repetition](https://en.wikipedia.org/wiki/Threefold_repetition)
~~~
dfan
All that the 50-move and threefold repetition rules do is permit one player to
claim a draw; the game may continue if both players wish to play on. (It has
happened, particularly threefold repetition where the repetitions are spaced
far apart and the players don't have much time.)
There are actually new FIDE rules as of 2014 or so, where after 75 moves, or a
fivefold repetition, the game is immediately declared drawn without the need
for a player to take action.
~~~
schoen
This is a good point in that it means players could have collaborated to
deliberately achieve an arbitrarily long game (and indeed, in some of the long
reported games it looks like nobody chose to claim a draw for a while). I
guess such collaboration isn't sportsmanlike in a competitive setting, but it
wouldn't have been precluded by the rules.
Thanks for that clarification.
~~~
sanderjd
At the highest levels it probably does imply collaboration, but where I play
at (much) lower levels, I might very reasonably choose not to take a draw
because I think my opponent is likely to make a mistake that will result in a
win for me. My opponent may reasonably think the same about me.
------
Someone
I thought the 50 move rule has been lifted for a few cases where a position is
known to require more moves to win.
However, [http://www.fide.com/component/content/article/1-fide-
news/10...](http://www.fide.com/component/content/article/1-fide-
news/10115-fide-laws-of-chess.html) doesn't mention that, and even has the in
some sense stronger _" a game is drawn if any series of at least 75 moves have
been made by each player without the movement of any pawn and without any
capture."_, which doesn't require either player to claim the draw, and, thus,
cannot be avoided.
So, even if the players wanted it, a game can't be longer than somewhere
around 96*75 or 7200 moves.
~~~
bluGill
It was up to 75 for a while, and then someone worked out a winning sequence
that required more than 75 moves, and there is every reason to believe longer
sequences are possible - to work them out is a brute force problem (chess is
not mathematically solved).
Note that by winning sequence I mean perfect play on both sides - there are
lots of sequences that allow the game to continue longer than it has to.
~~~
schoen
In fact, there are already solved positions that are known to require over 500
moves to mate:
[https://en.wikipedia.org/wiki/Endgame_tablebase#Tables](https://en.wikipedia.org/wiki/Endgame_tablebase#Tables)
It seems like different parts of the chess world are a little bit divided
about the best way to integrate this knowledge into the game.
------
mikeash
From 2007, for whatever that's worth. I only noticed because the sudden
mention of a "PDA" made me think, "what year is it?!"
------
schoen
One thing that I wondered about when looking at the longest recorded
professional games that occurred in real play was at what point they reached a
position that is actually solved in a tablebase. In particular, chess endgames
involving only 7 pieces were explicitly solved by computer search in 2012.
(But I also wonder if bringing those solutions to bear in a real game would
involve ignoring the 50-move rule, since tablebase solution for these
positions may often require more than 50 moves to complete.)
On tablebase-informed endgame theory, some of the professionals in these
ultra-long games should probably have resigned long before the actual end of
the games, except that they can also assume that their opponents don't know
the full solutions.
~~~
Someone
Professionals may try to get a draw by getting their opponent to make an error
under time pressure, even if they know their position theoretically is a lost
one.
That even happened back in the time when games got adjourned after x moves
(sort of an example: [https://www.chess.com/blog/NimzoRoy/a-famous-bishop-vs-
knigh...](https://www.chess.com/blog/NimzoRoy/a-famous-bishop-vs-knight-
ending-or-smyslov-blows-botvinnik-off-the-board3)), so should be more popular
nowadays, if the losing party has more time on the clock, or thinks to know
the particular endgame better.
------
ouid
>Otherwise, we have nothing further to discuss here regarding the longest
possible game.
The 50 move rule is there to ensure that the players don't have to occupy
every possible position twice before making progress, but it is not the only
possible way to put an upper bound on the game's length. Since the position
will eventually repeat, the threefold repetition rule can be invoked instead.
|
{
"pile_set_name": "HackerNews"
}
|
Ask HN: How about a rental focused credit union - bowyakka
I recently moved to San Francisco to work at a startup; part of this was finding and renting a place to live. In short time I found a nice place in the city, and, whilst the rent was a bit high, I figured that it was part of the growing cost of living in SF.<p>I set up an online cheque from my current account, paying the first payment at the start of December (for December), the second on the 15th (a forward payment for January). Somehow the first cheque went missing, and the letting agency assumed that the second cheque was for December and /silently/ cashed it despite it being late.<p>On the 10th of January (Friday) I get a phone call claiming that my account was delinquent. Given the rules of rent control, as of Monday a three day eviction notice will be issued. Naturally this shocked me; I had not received notice of any issues (there was a letter dated the 6th, received in the post on the 12th). I phoned the bank to query the cheques, a task made more challenging by the fact that the rental agency does not record cheque numbers.<p>With the missing cheque cancelled I am in a situation where it's either a payday loan or a legal notice. This sounds like a whine, and for that I apologise; rather, I am using it to illustrate an industry that needs to be disrupted.<p>How hard would it be to create a startup that was a credit union, bank or clearing house, providing accounts to renters and issuing payments under a personal name? The B2C for renters could use any sane payment method, with the backend dealing with this legacy industry. This startup could also cover bonds, offer better legal protection for renters, offer late payment insurance, have decent alerting and so forth.<p>Technically I see this as somewhat easy to manage; legally perhaps it's a bit harder to do.<p>Does anyone here have any insights on how such a startup could be created?
======
bowyakka
So it looks like this exists, [https://www.cozy.co/for-
renters/](https://www.cozy.co/for-renters/)
|
{
"pile_set_name": "HackerNews"
}
|
How Apple Is Working from Home - laktak
https://www.theinformation.com/articles/how-apple-is-working-from-home
======
kn8
Dupe, discussion:
[https://news.ycombinator.com/item?id=22730478](https://news.ycombinator.com/item?id=22730478)
|
{
"pile_set_name": "HackerNews"
}
|
Ask HN: Good book to start programming for 15 yo? - pplonski86
I'm looking for a good book for my cousin (15 yo) to infect him with the programming bug. What would you recommend?
======
goldenbeet
Why a book? I don't know about you or your cousin, but learning from a book is
so slow and dry. Why not a video series or online tutorial? Hell, maybe teach
him the basics and build a small project with him.
Codecademy is a great way to learn a bit of syntax and then build something.
'How to make a website' is a good intro course for getting into web dev.
There's also a lot of good python basic videos on YouTube, then he could dive
into ML/AI with:
Machine Learning Recipes from Josh Gordon at Google
[https://www.youtube.com/watch?v=cKxRvEZd3Mw](https://www.youtube.com/watch?v=cKxRvEZd3Mw)
Siraj Raval's channel on ML/AI/Programming
[https://www.youtube.com/channel/UCWN3xxRkmTPmbKwht9FuE5A/vid...](https://www.youtube.com/channel/UCWN3xxRkmTPmbKwht9FuE5A/videos)
All that said, if you reallllly want a book: "Learn Python the hard way" by
Zed Shaw
[https://learnpythonthehardway.org/](https://learnpythonthehardway.org/)
~~~
pplonski86
Thank you! I'm looking for materials in Polish, so I was thinking it would be
best to look for a book and its Polish translation. I haven't seen English
videos translated into Polish.
|
{
"pile_set_name": "HackerNews"
}
|
Ask HN: How much traffic to expect if your project hits HN front page? - willismichael
I have no idea what the population of HN is, nor the global distribution thereof. I'm working on a pet project that I would like to show off at some point, and in the event that it actually hits the front page (unlikely as it may be), I would like to know if I have the budget to spin up enough servers to handle the load, or if I should just point people at the github repo.<p>When other people have had their project show up on the front page, is there any pattern of how many concurrent users you topped out at, and how long most of them stuck around?
======
USNetizen
I submitted a blog post a while back that made it to the front page and it hit
about 11,000 visitors from HN and ancillary HN feed sites alone in 6-8 hours.
The traffic was elevated for the next couple weeks and even my website, which
was barely linked from the blog, saw a 50-60% increase in traffic lasting
about a week after the post. I still saw traffic from this spike up to a full
month later.
------
joepie91_
My total hits on / of when PDFy hit the frontpage
([https://news.ycombinator.com/item?id=8034431](https://news.ycombinator.com/item?id=8034431)),
and remained there for some 24 hours, iirc:
root@debian:/var/log/lighttpd/pdf.cryto.net# cat access.log | grep "GET / " | grep "news.ycombinator.com" | wc -l
19387
That said, the Hacker News post led to a bunch of other places writing about
it a day or so later, the most notable of which was Gigazine:
root@debian:/var/log/lighttpd/pdf.cryto.net# cat access.log | grep "GET / " | grep "gigazine.net" | wc -l
544
But most of their traffic came from the document viewer they embedded in their
article:
root@debian:/var/log/lighttpd/pdf.cryto.net# cat access.log | grep "GET /d/C8gHjDOxTLdunq1a/embed" | grep "gigazine.net" | wc -l
41609
And this is what the bandwidth usage looked like during those few days:
root@debian:/var/log/lighttpd/pdf.cryto.net# vnstat -d
eth0 / daily
day rx | tx | total | avg. rate
------------------------+-------------+-------------+---------------
[...]
07/13/14 767.19 MiB | 949.01 MiB | 1.68 GiB | 162.72 kbit/s
07/14/14 1.27 GiB | 9.13 GiB | 10.40 GiB | 1.01 Mbit/s
07/15/14 7.05 GiB | 106.73 GiB | 113.79 GiB | 11.05 Mbit/s
07/16/14 2.89 GiB | 48.73 GiB | 51.62 GiB | 5.01 Mbit/s
07/17/14 2.22 GiB | 24.21 GiB | 26.43 GiB | 2.57 Mbit/s
07/18/14 1.23 GiB | 11.90 GiB | 13.13 GiB | 1.27 Mbit/s
07/19/14 1.31 GiB | 11.88 GiB | 13.19 GiB | 1.28 Mbit/s
07/20/14 1.38 GiB | 7.73 GiB | 9.11 GiB | 884.50 kbit/s
07/21/14 1.44 GiB | 9.55 GiB | 10.99 GiB | 1.07 Mbit/s
[...]
If I recall correctly, my HTTPd was hit with some 50-100 reqs/sec total (for
static + dynamic). It didn't really have any issues with it, despite running
on a cheap VPS with 512MB of RAM, on a non-optimal stack (lighttpd + PHP +
MySQL).
I've noticed a significant increase of recurring traffic since (it still
hovers at about 5-15GB of traffic a day as opposed to the 2GB before, and
there's a steady stream of uploads).
As long as you don't run something obscenely heavy like WordPress or Joomla,
and you don't use Apache, you'll probably be fine.
~~~
frik
Do you mind to mention the VPS hosting company? (you mentioned it is cheap:
[https://news.ycombinator.com/item?id=8039666](https://news.ycombinator.com/item?id=8039666))
~~~
joepie91_
It's RamNode ([http://ramnode.com/](http://ramnode.com/)) :)
I got a DDoS-mitigated VPS in Seattle. I believe the plan I have is normally
$15, but I used a coupon code so I pay $9.30 recurring.
I can definitely recommend them - however, I should add that their DDoS
mitigation appears to suffer from the same issues as all other cheap VPS DDoS
mitigation proxies; speeds are not always reliable, and connections
occasionally break halfway through. That's not a problem with RamNode though,
but with the mitigation provider (CNServers in this case) and/or proxy setup -
their own connectivity is rock solid.
Some other hosts I can recommend in a similar vein are RAM Host
([http://ramhost.us/](http://ramhost.us/)) and VPS-Forge ([http://vps-
forge.com/](http://vps-forge.com/)), in case you want to set up a redundant
system of sorts. I've hosted with both for years, and they're both rock solid
and very helpful as well. (Relatively) small operations like RamNode, but very
reliable.
------
binarymax
I had a project that was number 3 on the front page for a good part of a
Saturday.
I had 7500 unique visitors from HN including the traffic coming from linkbots
that re-serve HN links.
With a single node.js (express) app on an EC2 medium instance, I was fine.
I got about 10% conversion rate. It was a game with a signup page that
required you to register first.
The single instance held up the static content and the app itself for a while.
In hindsight I should have used an nginx reverse proxy for the homepage.
--EDIT-- here is the post:
[https://news.ycombinator.com/item?id=7364927](https://news.ycombinator.com/item?id=7364927)
--EDIT2-- changed conversion rate typo from 1% to 10%
~~~
willismichael
So by "1% conversion rate" you mean that around 75 people actually played your
game? I see some discussion on your thread about how your server got hammered
- was it the people hitting the landing page, or in-game requests?
~~~
binarymax
OOPS! I meant 10% of 7500!
About 750 people signed up to play and there were about 500 games played (a
game needs exactly 2 players).
The homepage stayed up the whole time. The stuff that got hurt was some actual
gameplay due to an exception that was getting thrown, and I hotfixed.
~~~
willismichael
Ha! Nothing like recruiting a whole swarm of HNers to find some obscure bug :)
I'm encouraged by your metrics of about 750 people and about 500 games - I
think I could probably afford enough compute power to support that for a
limited amount of time.
------
coldcode
I've gotten as much as 60,000 in 24 hours if it stays on the front page.
Recently I had one that hit 4000 in about 20 minutes but some circuit breaker
hit (which I have no clue about, not really controversial or anything) and it
dropped instantly to page 5 so the traffic died quickly. Generally at the top
you might see as much as 10/s peak. I've seen 500+ concurrent but of course
that is based on whatever Google considers concurrent.
Never had any trouble serving it (my own code, LAMP, on Amazon micro
instance). programmer.reddit.com is a little lighter. Ancillary traffic (other
sites) from a front page post might add 10% or so over time.
~~~
kapkapkap
> I've seen 500+ concurrent but of course that is based on whatever Google
> considers concurrent.
Almost sure that it means at least one hit within the last 5 minutes.
------
bubblicious
My website usually has very little activity (about 100-200 visits per day for
an open source project) and out of nowhere I went #1 on HN last week-end for a
blog article.
From saturday to monday: 80,935 unique visits with peaks of 600 simultaneous
people on site. Out of those 81K, 24,661 came from HN. The average time spent
on site was 2:11. No real pattern, things started going viral as soon as the
article hit the front page (which took a couple of hours). Things died off
very quickly after 3 days of intense load.
Now I had never expected that type of load... In fact my blog is hosted on the
_cheapest_ shared hosting service NearlyFreeSpeech. I want to mention that
they held the charge perfectly. I wrote a quick article about it with more
numbers: [http://www.nicolasbize.com/blog/and-the-best-shared-
hosting-...](http://www.nicolasbize.com/blog/and-the-best-shared-hosting-
service-is-nearlyfreespeech/)
------
diggan
I submitted a pet-project called ngProgress[0], a progressbar provider for
AngularJS. It's not a project per se, more a smaller library I decided to
share here. It was on the frontpage for almost one day and on the second page
for a day as well. I got about 10'000 visits during the first day and about
half, 5000 during the second day. This also includes shares on Twitter that
came after posting it here.
Most people just opened the page and closed it within ten seconds. The second
largest group had the page opened for about one minute before closing. Please
note the landing page for ngProgress[1] is very simple though and has almost
no engagement except demo for the library.
[0] -
[https://news.ycombinator.com/item?id=6250112](https://news.ycombinator.com/item?id=6250112)
[1] -
[http://victorbjelkholm.github.io/ngProgress/](http://victorbjelkholm.github.io/ngProgress/)
------
viggity
The link to my data visualization was on front page for ~8 hours. I got 10K
hits first day, 2500 second day, 400 third day. The post was not specifically
pimping my side project, but was a visualization of Seed Funding. That being
said we had about 500 people from HN request beta access.
link:
[https://www.machete.io/board/view/seed_db_funding_rounds/157...](https://www.machete.io/board/view/seed_db_funding_rounds/157a518b-cbf2-4bde-84b4-98cfa0bc15ba)
If you want to track visits from HN, MAKE SURE YOU ENABLE HTTPS BY DEFAULT AND
LINK TO AN HTTPS LINK. It is in the http spec that no referrer info is passed
from an https site (like HN) to an http site.
One last bit - we are hosted on a small Azure instance, we used loader.io to
test what kind of a load we could handle and it shit out pretty quickly. We
implemented some output caching and it handled the HN flood just fine (200-300
concurrent users).
------
villek
My post [1] made it to #8 on the front page last week and stayed there about 5
hours. I haven't made proper analysis yet, but here are rough numbers. The
page got ~3500 visits, most of which within the first 24 hours. Most
concurrent users was ~80. The first 24 hours got the linked app ~500
downloads, so that would give a pretty good conversion rate. I don't have data
on how much of that came through the website or HN, though.
[1]
[https://news.ycombinator.com/item?id=8070131](https://news.ycombinator.com/item?id=8070131)
------
dkriesel
When my Xerox Story was hit last year
([http://www.dkriesel.com/en/blog/2013/0802_xerox-
workcentres_...](http://www.dkriesel.com/en/blog/2013/0802_xerox-
workcentres_are_switching_written_numbers_when_scanning)) I got about 200k
hits in two days, at peaks reaching about 10k per hour. All numbers are from
google analytics so real values might be higher. Also, it's not only hn,
consider also the follow-ups (in my case, shortly after hn, the front pages of
slashdot and reddit hit me as well).
~~~
dkriesel
Addition: I use Dokuwiki as a blogging system, which was accompanied by a
varnish cache. The combination of both was able to cope with the load (and
additional load from mass media taking up the story) very well on a standard
hosting server.
------
mattybrennan
Apparently, more than GrandArmy can handle with this USPS redesign post...
------
zeratul
I also saw ~7000 visits first day but that was in 2012. Here are some
conclusions of my web traffic analysis:
[https://github.com/entaroadun/hnpickup/wiki/Hacker-News-
Pick...](https://github.com/entaroadun/hnpickup/wiki/Hacker-News-Pickup-Rate-
Web-App-Analytics)
Here are multiple screen shots of the google web traffic analytics interface:
[http://hnpickup.appspot.com/hnpickup_web_app_statistics_snap...](http://hnpickup.appspot.com/hnpickup_web_app_statistics_snapshot.png)
------
davidgerard
My WordPress blogs have hit the HN front page a coupla times.
I got about 6000 hits in an hour the first time and 10,000 the second time.
WP-SuperCache coped admirably in both cases. The mod_rewrite caching was
enough to cope with HN. (I sent the developer, Donncha O Caoimh, £10 with
gratitude!)
But what really made the server cry: being on HN led directly to being on
Reddit, where the second popular post got 80,000 hits in a day. In this case I
had to put WP SuperCache into direct-cache mode. Then it was fine.
------
3stripe
A blog article of mine (hosted on Squarespace) was on the top of the frontpage
for most of a day back in January and received approx 25,000 pageviews.
(And also, incidentally, 42 article comments and counting, without a single
nasty/sarky/snarky one.)
The same article has since received 500 Facebook likes and was tweeted around
360 times.
[https://news.ycombinator.com/item?id=7075537](https://news.ycombinator.com/item?id=7075537)
| 334 points
------
aith
This time last year my project VimSnake.com was in the top 3 for most of the
day. I made the stats public:
[http://statcounter.com/p9177631/summary/daily-rpu-labels-
bar...](http://statcounter.com/p9177631/summary/daily-rpu-labels-
bar-20130816_20130823/?guest=1)
[https://news.ycombinator.com/item?id=6223946](https://news.ycombinator.com/item?id=6223946)
------
dronehire
We've had a couple of blog posts hit #1. From memory, we received around
35,000 visitors on each occasion, spaced over the course of several hours.
~~~
danesparza
What was your converstion percentage, and what kind of hardware (virtual or
otherwise) did you run to support this?
------
wellboy
I had this Android app
[https://play.google.com/store/apps/details?id=com.hour.chat](https://play.google.com/store/apps/details?id=com.hour.chat)
on the front page last week. It got 45 upvotes and also around 450 downloads,
pretty good.
So for an app you could probably say 10 downloads per upvote.
------
mdewinter
Been on the frontpage a few times, mostly weekends. 2/3 days got me 30,000 to
60,000 hits/day (24h) per time. My website is static, optimized and hosted on
a cluster of geo-spread nodes so they were still doing nothing, the statistics
server (piwik) did have a load of 4 at peak...
------
kimburgess
When a few readers collectively had the poor judgement to vote a post I wrote
to the front page it brought in around 12k uniques from HN and associated
parasitic sites over the space of 6 hours or so. This was peaking around 200 -
250 simultaneous visitors.
------
dirtyaura
Unless it goes viral outside of HN, based on my experience of a few front page
hits, you can get 10K-50K visitors. As most visitors will bounce, serve a
static landing page and you can handle the traffic easily with a single server
------
kephra
My site made it several times to the HN main page. Rule of thumb is that every
upvote equals about 100 visitors while the site is on HN, and that about the
same number of visitors come during the next days via tweets and Facebook shares.
------
talles
I got 26k pageviews when I managed to reach first page
([http://blog.talles.me/my-hacker-news-front-page-
day.html](http://blog.talles.me/my-hacker-news-front-page-day.html)).
------
olalonde
Happened to me a few times (for technical articles/code). If I recall
correctly it was around 3000-7000 unique visitors. My site is hosted on Github
pages so I didn't have to worry about the load.
------
shogunmike
I've had a few posts hit the front page over the last couple of years. I had
between 5,000 and 10,000 unique visitors (as Google Analytics defines them)
over the following 24hrs for each post.
------
stangeek
10-20k total if it doesn't go viral outside of HN. Maybe 100-200 concurrent at
peak. Don't worry about your server, unless it's a box at your home it should
handle the traffic...
------
viach
I've got my project on the front page #20, it led to 500 visitors in 3 hours.
Not a huge success I would say :) The interesting question is how much
_conversion_ to expect.
------
kanakiyajay
My blog [http://jquer.in](http://jquer.in) was briefly (about 3 hours) on the
front page. I got around 1500 Unique Visitors according to GA.
------
masukomi
If I recall correctly, I seemed to top out around 300 visitors per minute.
Load stayed close to that for about 2 days (dunno about US evening hours)
------
new299
About 10 to 20,000 in my experience for a site that's on the front page for
about a day.
------
duiker101
I agree ~10,000 visits for a standard front page story, with ~200 simultaneous
visitors.
------
stasy
I got ~26k from 12 hours as number 2.
|
{
"pile_set_name": "HackerNews"
}
|
Spam Peaked at 200 Billion per Day in 2008 - peter123
http://www.circleid.com/posts/20081217_spam_200_billion_per_day_2008_cisco/
======
SwellJoe
I totally stopped counting when my junk folder got to 100 billion.
|
{
"pile_set_name": "HackerNews"
}
|
Armin Ronacher: Collections in C - jgalvez
http://lucumr.pocoo.org/2010/11/24/collections-in-c/
======
tptacek
Armin seems to have reinvented the BSD queue macros.
Having once had the pleasure of inheriting a codebase animated by CPP-macro
collections, allow me to be a voice in favor of running, not walking, from
programmers who embrace them. They're tricky, they blow up, they're extremely
noisy in the code, and (worst of all) they don't encapsulate, offering new
devs a myriad of ways to write tangly hard-to-understand subtly broken code
that works directly with the collection structure.
Type safety is entirely overrated. C requires so much deliberation to deploy
even the simplest collection that the likelihood of you picking up a _foo_
where a _bar_ was what was provided is minimal, and easily diagnosed. Just use
voidstar.
One of the first things I did at that job was to port STLport's Red-Black tree
(from <map>) to C code, specialized on voidstar. It worked beautifully. If
there are times when you don't want to use a void* collection (or a gossamer-
thin wrapper around one), those are also times when you don't want a generic
collection library to begin with.
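(For the curious, a minimal sketch of what the voidstar style looks like; the
names and signatures here are my own for illustration, not STLport's or that
codebase's:)

    #include <stdlib.h>

    /* A deliberately dumb void*-keyed list: the caller knows what the
       pointers really point at, the collection does not care. */
    struct vlist {
        void *item;
        struct vlist *next;
    };

    /* Push returns the new head, or NULL on allocation failure. */
    static struct vlist *vlist_push(struct vlist *head, void *item) {
        struct vlist *n = malloc(sizeof *n);
        if (!n) return NULL;
        n->item = item;
        n->next = head;
        return n;
    }

    /* Lookup takes a caller-supplied equality callback, so the same code
       serves any element type. */
    static void *vlist_find(struct vlist *head,
                            int (*eq)(const void *, const void *),
                            const void *key) {
        for (; head; head = head->next)
            if (eq(head->item, key))
                return head->item;
        return NULL;
    }

The red-black tree case is the same idea scaled up: void* keys and values, presumably with a comparison callback handed in when the tree is created.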
~~~
strlen
There's also a much cleaner way to have interfaces and implementations in C,
without relying on macro hackery.
Why, there's even a whole book on that: [http://www.amazon.com/Interfaces-
Implementations-Techniques-...](http://www.amazon.com/Interfaces-
Implementations-Techniques-Creating-Reusable/dp/0201498413/)
Like Norvig's _Paradigms of Artificial Intelligence: Case Studies in Common
Lisp_ and Joshua Bloch's _Effective Java_ , it's one of those books which
despite having a specific programming language in the title, is really about
programming in general.
~~~
tptacek
This is my favorite C book ever, I recommend it wholeheartedly, and it
recommends voidstars. =)
~~~
nkurz
Respectful of you but not of the book, I have to ask: what do you like about
it? My initial reactions were very negative
(<http://news.ycombinator.com/item?id=1705332>). I love K&R, but recoiled in
horror at every page I managed to read of this book.
I've written entire macro-based systems similar to those recommended by the
author here, and find them very useful. Void pointers are great where they can
be used, but I love the efficiency and clarity of the generated code approach.
~~~
tptacek
I found your objections to this book pedantic.
For instance, in a chapter on string atoms (symbols) --- a concept which
virtually no C program in the world takes advantage of, despite the centrality
of the concept to Lisp, Python, and Ruby --- and this is _the first chapter in
the book_ \--- you were shocked by his use of a 43 byte string to hold a
number string... because someone might be using that code on a 192 bit
machine?
You missed the forest for the trees. If you don't want your code tainted by
the number 43, don't write that code. The point of the book is how you
structure your code, divide it into subsystems, and present coherent
interfaces to the rest of your program.
~~~
nkurz
I'm not bothered by the idea of a fixed length for strings, rather the
explanation that it is best to use the number 43 instead of a named constant
(MAX_STRING_LEN) to avoid namespace pollution. If the first paragraph I quoted
had been standalone, or better as a short comment, I would have been in full
approval.
Anyway, I appreciate your response. It's good try to appreciate what others
see that I do not. Yes, if you ignore the details of the code and the
explanations, there are some good parts. If you look _only_ at the big
picture, it's probably a fine book. And I love his clearly prefixed naming
conventions. But I think you'd do a lot better reading something like the
SQLite source code rather than this book if you want to see examples of good
C.
I guess I have to ask: do you feel that chapter 4 on using setjmp() and
longjmp() plus some brittle macros for error handling is also good for
learners? I thought it was technically very clear but about 40 years out of
date as to good practice. Is this a forest or a tree?
------
btmorex
Why not just pick and choose what you need from C++? For all the hate that C++
gets, one nice thing is it doesn't force anything down your throat. Between
simple not using features, and compile time options (disabling exceptions for
example), you NEVER have to pay for a feature that you don't want or need.
He could have literally written C-style code except used C++ just to create a
few generic data stores instead of using all of this preprocessor magic which
I guarantee is more fragile/harder to debug than basic C++ templates.
~~~
jedbrown
FWIW, C++ does force some things on you, such as automatically typedefing
structs instead of giving you a separate namespace, not warning about multiple
declarations with different arguments (because function overloading is a
feature), not warning about multiple definitions in different compilation
units (linker just picks one, assumes they are equivalent), having to cast
assignment of void* to a typed pointer (ugly and redundant), slower
compilation, and generally later errors/warnings (thus usually more cryptic
and less useful, although it does stricter type checking in some ways, so this
point is not completely clear-cut).
~~~
shin_lao
Hi, just want to bounce on the "cast your void* to a pointer" point: actually you
don't need to do that if you don't use an object oriented paradigm à la Java,
but go generic full speed and use return by value. No pointers, little or no
casting.
~~~
jedbrown
> go generic full speed
What does this mean?
As a simple example, suppose I use malloc to allocate memory (because I don't
want to allocate with new which would oblige me to handle exceptions and
prevent me from handing ownership over to a C client or using an allocator
provided by a C client (without extra work)). I have to cast the return value
as in
struct Foo *x = (struct Foo*)malloc(n*sizeof(*x));
instead of
struct Foo *x = malloc(n*sizeof(*x));
This looks purely cosmetic, but in C, I can write the macro
#define MallocA(n,p) (!(*(p) = malloc((n)*sizeof(**(p)))))
which then works with a standard error checking convention
struct Foo *arr;
err = MallocA(n,&arr);CHK(err);
This doesn't work in C++ without non-portable typeof or evil and less safe
(not conforming for function pointers)
*(void**)(p) = malloc((n)*sizeof(**(p)))
Similarly, if a client registers a callback with a context, I store their
context in a void* and pass it back to them
int UserCallback(void *ctx,...) {
struct User *user = (struct User*)ctx;
instead of
int UserCallback(void *ctx,...) {
struct User *user = ctx;
I understand that this is just cosmetic. I don't see how "going generic full
speed" helps with this. Also note that aggregate returns are slower for all
but trivially small structures, and downright bad for big structures.
------
zedshaw
If you really want to learn how to do this, and do it really really well,
check out sglib:
<http://sglib.sourceforge.net/>
It's got nearly every data structure you can imagine, all implemented as CPP
meta hackery. Brilliant.
~~~
chancho
I used this library a while back, and it was nice at first, until I noticed
that his quicksort always picked the first element as the pivot, which makes
it O(n^2) on already-sorted input. Then I started second guessing everything,
and when you do that with a library you may as well write it yourself without
the CPP hackery. At least then the compiler errors will make sense and you can
debug it.
I went to go check out the code to see if this issue had been fixed, but the
download link has gone bad.
I mentioned this library on HN some years back, when I was still excited about
it, and the reply I got was something like "No! Not that macro boneyard!" He
was right.
------
roel_v
I don't agree with one of his premises:
"I normally like to avoid tools that generate new C files for the very simple
reason that these usually generate ugly looking code I then have to look at
which is annoying or at least require yet another tool in my toolchain which
causes headaches when compiling code on more than on operating system. A
simple Python script that generates a C file sounds simple, but it stops being
simple if you also want that thing to be part of your windows installation
where Python is usually not available or works differently."
I have moved most of my macro code generation to using another language, with
a proper template engine, to generate code (in my case C++ but the same holds
for C). If he doesn't trust Python across platforms, he can take a fixed
version with known properties and code around it, or take another language
(which presumably will have the same issues). I use PHP, I'm a bit careful in
cross-platform features and it works great.
Apart from this, he can still write his code generator in C, so that on a new
platforms it can be bootstrapped with a regular C compiler, then process his
templates, then compile his actual code. It's painful to do string processing
in C, but this generator only needs to be written once anyway, and it's not
much work. Plus if he uses a small template engine, that'll take most of the
pain away (most of the work will be in modifying the templates, not the code
generation engine).
------
mrb
A good C library implementing common data structures (lists, trees, hash
tables, etc) is GLib:
<http://library.gnome.org/devel/glib/>
It does many other things, provides abstractions for threads, files, etc. It
is a general-purpose utility library originally written for GTK+. I never
really understood why GLib is not more often used.
------
sukuriant
"This worked really well up to the point where I needed two kinds of lists.
One that accepted floats and another one that works with arbitrary pointers."
Had he never heard of unions?
Even without knowledge of unions, you could treat a pointer as a sequence of
(depending on the architecture) 32 bits. That sequence can be cast to whatever
you want. So long as you have a clear way of describing what you've stored in
that 32 bit sequence, you should never get confused.
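(A minimal sketch of the tagged-union version, my own illustration rather than
anything from the article:)

    /* One node type that can hold either a float or an arbitrary pointer;
       the tag records which member of the union is currently valid. */
    struct node {
        enum { AS_FLOAT, AS_PTR } tag;
        union {
            float f;
            void *p;
        } value;
        struct node *next;
    };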
------
kvs
Another way is probably to use a Linux Kernel-like list:
<http://isis.poly.edu/kulesh/stuff/src/klist/>
~~~
pmjordan
The kernel list approach also has the advantage that there's only one
implementation of the list operation functions, whereas the one outlined in
the article generates a new set of functions for each type.
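(Roughly, the trick is that the link node is embedded inside your own struct,
so one shared set of functions operates on the links, and you recover the
containing struct with an offsetof-based macro. A sketch of the idea, not the
actual kernel headers:)

    #include <stddef.h>

    struct list_head { struct list_head *next, *prev; };   /* the generic link */

    struct job {
        int id;
        struct list_head link;   /* the list is threaded through this member */
    };

    /* Recover the containing struct from a pointer to its embedded link. */
    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    /* e.g. while iterating over a struct list_head *pos:
       struct job *j = container_of(pos, struct job, link); */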
------
heresy
There is something to be said for not taking DRY to extremes. This appears to
be one of those extremes.
------
rbranson
Oh man, with that kind of macro "metaprogramming" (if it can be called that),
you're bringing on a world of error message pain. It's almost not worth it.
~~~
burgerbrain
Painful as it may look, it's used in the wild _a lot_.
Most of the time it actually works well.
------
jerf
I find myself reminded of the famous Dr. Sagan quote for some reason: “If you
want to make an apple pie from scratch, you must first create the universe.”
(From "cloning Minecraft" to "implementing a list" in only five sentences!)
------
st3fan
This is both awesome and very terrible at the same time.
This proves that you can do magic in C but it also proves that C is really a
low level assembler :-)
For me, stuff like this has been one of the prime reasons to prefer C++ over
C. You can totally get the same performance and compiled code when using
templates. But you gain compile-time checking and much more robust code.
(These preprocessor tricks are also popular in the BSD kernel)
------
endgame
`#define _CAT(A,B) A##B` is going to invoke undefined behaviour, I think:
""" None of these macro names, nor the identifier `defined`, shall be the
subject of a #define or a #undef preprocessing directive. Any other predefined
macro names shall begin with a leading underscore followed by an uppercase
letter or a second underscore. """ (s6.10.8)
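(For what it's worth, the easy way out is to stay off the reserved names
entirely, e.g. with a project prefix; a sketch:)

    /* A leading underscore followed by an uppercase letter is reserved for
       the implementation, so prefix instead of underscoring. */
    #define AR_CAT_(A, B) A##B
    #define AR_CAT(A, B)  AR_CAT_(A, B)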
~~~
zb
Not really, all that says is that it risks the name colliding with a built-in
macro defined by the compiler. At worst that seems like implementation-defined
behaviour.
~~~
endgame
I was too hasty and confused implementation-defined with undefined.
------
angrycoder
I give thanks every day that there are people like the author and others who
can write and understand code like that so that I don't have to. I'll be
damned if that doesn't look like trying to build a boat using a chainsaw and
popsicle sticks with instructions written in Farsi.
------
lukesandberg
this kind of stuff has always scared me. It seems like it would be really
easy to get confused just trying to use this stuff. Generally if I need a list
I either use the void* interface and cast as necessary, or if I want to do
something like store a list of floats, or anything where I want to store the
value of the item in the list (not just a pointer), then I use a 'fat'
data structure like this
[https://github.com/lukesandberg/Regex/blob/master/src/util/f...](https://github.com/lukesandberg/Regex/blob/master/src/util/fat_stack.h)
when you construct it you just tell it how big each item is, then it copies the
value into the stack element for storage.
Sometimes it's a few extra copy/cast statements, but at least it's very clear
what you are doing.
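(A compressed sketch of that style; a hypothetical interface of my own, not the
actual fat_stack.h: the element size is fixed at creation time and push/pop
copy whole values in and out.)

    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        char  *data;
        size_t item_size, len, cap;
    } fat_stack;    /* e.g. fat_stack s = { NULL, sizeof(float), 0, 0 }; */

    /* Push copies item_size bytes into the stack; 0 on success, -1 on failure. */
    static int fat_push(fat_stack *s, const void *item) {
        if (s->len == s->cap) {
            size_t ncap = s->cap ? s->cap * 2 : 8;
            char *p = realloc(s->data, ncap * s->item_size);
            if (!p) return -1;
            s->data = p;
            s->cap = ncap;
        }
        memcpy(s->data + s->len * s->item_size, item, s->item_size);
        s->len++;
        return 0;
    }

    /* Pop copies the top element back out into caller-provided storage. */
    static int fat_pop(fat_stack *s, void *out) {
        if (s->len == 0) return -1;
        s->len--;
        memcpy(out, s->data + s->len * s->item_size, s->item_size);
        return 0;
    }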
------
srparish
<http://uthash.sourceforge.net/utarray.html>
<http://www.openbsd.org/cgi-bin/man.cgi?query=queue>
------
jchonphoenix
For those who are curious, this solution has a name:
Variable Queue
|
{
"pile_set_name": "HackerNews"
}
|
Lean Usability Testing: Current Best Practices and Resources - carterac
http://www.astatespacetraveler.com/best-of-lean-usability-testing-practices-and-resources-2/
======
fhirzall
Another option is to collect a group of your friends, family, and various non-
technical users and just give them a few tasks to do on your website. If the
site is usable then most people in the group will be able to complete the
tasks.
As a technical user, I'm finding that some users struggle with the most basic
things on the web so you should always base your usability tests on your
target market. (seems obvious but frequently forgotten)
------
tbgvi
I've just recently gotten into using usertesting.com to get quick feedback on
the usability of my product. It's extremely valuable to get an idea of how
other users are interacting with software, and helps guide where we need to
make improvements.
As the article mentions, it also helps our team to get on the same page. If
left to ourselves we'd debate for a long time on how certain things should
work. Once we see the user testing videos it's painfully obvious what needs to
be done, and then we all get to work making it better. Down at the bottom
there are some good tips for user testing that I'm looking forward to giving a try.
~~~
carterac
Thanks for the helpful feedback. I keep hearing positive things about
usertesting.com and today our team decided that we're definitely going to
incorporate it into our testing plan.
However, easyusability.com has a much more powerful call to action and seems
to be a newer competitor to usertesting.com. I'm excited to try both.
100% on having the whole team see the videos live. Makes the necessary changes
obvious to everyone without getting egos involved.
|
{
"pile_set_name": "HackerNews"
}
|
Ask HN: How do you assess teamwork skills during interview? - koopuluri
The standard technical interview process: 4 technical interviews that cover data structures, algorithms, system design, etc., with perhaps a conversation over food, is not enough to assess how well a candidate would fit into a team.<p>Do you disagree with this premise? If so, why?<p>How do you gauge team skills during your interview process? What do you look for? How has it worked for you?
======
HeyLaughingBoy
We used to have a two-person team provide a standardized programming exercise,
clearly telling the candidate that they could choose how to do it: preferably
discussing their approach with us, get help/feedback as they progressed, -or-
completely silently if they were more comfortable, or anywhere in between.
Generally we'd spend about 5 minutes just explaining how it could work,
assuaging their nervousness, etc.
Most people opted to do the exercises while thinking out loud. If they paused
for a few minutes or seemed lost, we'd offer hints or ask questions, etc. Let
them take the discussion where ever they wanted.
All in all worked very well. Candidates seemed more comfortable than just
doing whiteboard exercises and we got a really good feel for how they worked
with others. We didn't really care much about the code; we wanted to see _how_
they approached problem solving, how well they responded to feedback or
questions about potential optimizations or offers of help. Or what happened if
they were shown that their approach was a dead-end, but they still doggedly
kept pursuing it.
It revealed personality issues very quickly. Nervousness was normal. Some
people were hostile when bugs were pointed out, some simply couldn't get
started even with extensive help, and others went heads down for half an hour
and then came back up with the answer.
It wasn't perfect, but of all the interview techniques we used over the years,
it was the one tool that we never got rid of because it proved to be so
useful.
~~~
dasmoth
_others went heads down for half an hour and then came back up with the
answer_
How did people who took this approach fare in your scheme? Thumbs up for
solving the problem or penalised for “not a team player?”
~~~
HeyLaughingBoy
Thumbs up of course.
We're hiring them to be developers first and foremost, so if that's how they
work best, so be it. Then we move on to discussing their solution and if they
thought it could be improved in any way. The rationale for this phase of
interviewing was mainly to accomplish the following:
1) Can you take a problem statement and show progress in writing code in the
language of your choice to solve it? You don't have to complete the entire
exercise, but you do need to show us that given enough time, you'd probably
get the right answer and that you can explain your logic.
2) Attempt to weed out the personalities who can't take negative feedback or
refuse to accept guidance when they're wrong.
------
kostarelo
A bit out of topic and not really an answer to your question.
Once in an interview for a startup company(~10 people), when I got into the
office they had an uncompleted coat hunger from IKEA and there was a person
trying to assemble it. They intentionally let me wait for a bit next to him
and he started to look confused with the manual so naturally, I offered to
help him and got on my knees and started to read the manual.
They told me much later that that was a test and I had successfully passed it.
I am not sure about the validity of such a test and maybe someone else can
verify it.
~~~
tebugst
I love to help other people fix their coding problems. But in this situation I
may hold back unless I really feel the person needs help. In my experience,
offering help sometimes hurts egos (particularly a senior brogrammer's).
~~~
kostarelo
Sensitive egos aren't exactly proof of healthy team players, right?
------
core-questions
For me, this kind of assessment is based on a number of factors:
1\. Personality and language skills - are they sufficiently outgoing and
conversant to be able to interface well with others? Introversion is common in
our profession but in order to work on a team you've got to be able to talk to
people in person. Panel interviews are good for this; anyone who can keep up
decently, seems personable, not stressed by the situation, and manages to
leave an imprint is likely a good candidate by this measure.
2\. Written communication skills - while arranging the interview, you're going
to be emailing back and forth a few times. Ask some questions and see how they
respond.
3\. Asking about previous teams and their views on team composition, dynamics,
and work distribution; handling large workloads, dealing with absence due to
vacation/sickness/etc. This is a good kind of question because unlike asking
about what their previous teams _did_, which someone can embellish, asking
about the actual process of dealing with different scenarios shows their own
understanding. Present a few scenarios and ask the candidate what they think a
team lead / manager should do in that situation.
It's going to be hard to get a quantitative metric for this, you have to rely
on your intuition.
~~~
koopuluri
> ...anyone who can keep up decently, seems personable, not stressed by the
> situation, and manages to leave an imprint is likely a good candidate by
> this measure.
I'm not sure high stress environment test is a good measure. I've worked with
some amazing teammates before that might have been nervous if in such a
situation.
>Present a few scenarios and ask the candidate what they think a team lead /
manager should do in that situation.
I like this approach.
Regarding the question examples you gave, do you look mostly for how they
reason through those situations, and their ability to put themselves in their
teammates' / managers' shoes, or do you also have a spectrum of [right -->
wrong] answers for those questions.
~~~
core-questions
> I'm not sure high stress environment test is a good measure.
Talking with a few of your future coworkers in a room where nothing is on fire
shouldn't be "high stress" \- interviews are always a bit of a stress inducing
thing, but ultimately if you're hiring someone who may have to deal with tough
situations from time to time on a team, this sort of situation should be easy
enough for them to handle.
If they're not in so much of an ops role and won't be dealing with things that
are on fire, you can always reduce this stress by starting 1 on 1, and then
adding people as the interview goes along in order to give them a more
comfortable base.
> Regarding the question examples you gave, do you look mostly for how they
> reason through those situations, and their ability to put themselves in
> their teammates' / managers' shoes, or do you also have a spectrum of [right
> --> wrong] answers for those questions.
For me, it's more of a matter of determining whether they've just worked on a
team as a member, or if they've actually thought about the dynamics of teams
and have their own opinions and ideas about what makes for a high performance
team vs. a dysfunctional one. Getting a feeling for what sorts of business
processes they like, how they like to communicate, what level of tracking
artifacts they require/feel comfortable with, how they think work should be
broken down, etc. is a good way to understand if they're going to be someone
who works well with the way your team is already composed, and if they're
going to be someone who is going to help evolve the team in a better
direction.
If they say something that makes you bristle, pay attention to your intuition.
If you think, "gee, person X on my team isn't going to like that", consider
it, because you want person X to be happy and productive and introducing a new
team member who is going to potentially have conflict with them may not be
worth it. Every aspect of your evaluation forms a heuristic in your mind that
you can use to decide if they're the right blend of worker for your particular
team.
I really like trusting a lot of this sort of thing to my hindbrain. There's a
lot of magic happening back there that combines things in a big-data sort of
way that we don't consciously understand, but when it comes to interactions
with other humans, we evolved to be good at this kind of judgement - since
it's life or death, in the state of nature - and we should trust our
instincts, provided we've tuned them to not be irrationally prejudiced.
------
im_down_w_otp
Why not simultaneously ditch the weak proxy for capability and ditch the weak
proxy for what working with them would be like all at the same time?
Instead, just work with them. Have them pair with 1 or 2 people for a few
hours. Maybe even engage them in a small team meeting if one comes up
naturally. There's no assessment as good as actually doing the things they'd
be doing and working with the people they'd actually be working with.
It's a better, more accurate assessment in both directions. For them to want
to join your team and for you to want to have them on it. Its also a ton
easier to train your team how to execute this method effectively. The standard
alternatives are spurious generally, and what little value can be derived from
them requires well-prepared, expertly-trained assessors.
Why contrive a weird edifice?
Disclaimer: am currently a person responsible for reforming and building a
high-performing team in a very demanding field across hardware and software
(both internal and external) disciplines.
~~~
koopuluri
I agree that working with the candidate is the greatest signal for competence
and team fit.
My concern is that oftentimes, there's a lot of context to absorb before being
able to contribute, and a few hours wouldn't be enough. Don't want to extend
interviews to multiple days --> biases against candidates that have packed
interview schedules.
I'm thinking we could structure a problem to take a few hours, and pair
program with them, with the interviewers swapping out every hour and half
(which would also help test how well the candidate brings someone who doesn't
have context, up to speed).
Or perhaps, have multiple candidates work together to solve a problem, and
observe how they work together.
~~~
deepaksurti
>> My concern is that oftentimes, there's a lot of context to absorb before
being able to contribute,
And that exactly is the problem with interviews where we can't measure what we
should really measuring, so despite these marathons and what not, ending up
with misfits. I can't explain it better than below: source [0]:
Your scenarios are different, the normal scenario inside a company is: a) Your
boss tells the new problem to the team at Monday morning. b) You have your
first approach in your mind to the problem. c) The team discuss the problem
for 50 minutes. You still don't have a clear idea about the problem and for
sure you don't have any idea about how to solve it. But who cares? Is pancake
day ! d) Lunch, you think about the problem by your own, for first time in a
"serious" way. e) One day later you write an email to clear some doubts about
the problem. f) You talk with another member of the team. He notices some
potential problems with your approach but also you realize that, after all,
your approach is better than his. You walk in the supermarket alley with your
wife thinking about how to explain why your approach is better using a
metaphor and trying to foresee objections from other team members. g) Thursday
you finish those TDD pending upgrades, do the pull request and then you think
an hour about the problem meanwhile you are still in the "zone". You have
those beautiful "I got it! " moments. h) Friday you "play" with some library
that you find googling and you read some code from github from a guy who did
something similar. You then notice that there is a part or a step that you
didn't consider before, that could change your design to solve the problem in
a deep or shallow way. i) After redditing for an hour, yo do a minimal code
that solve the functionality. At this time the problem has been processed in
your mind for a week from different angles and using different tools. j)
Monday at lunch time you take the marker and you write in the whiteboard
trying to explain to the team your solution. The team decides to follow your
approach. The other scenario is: a) The interviewer tells you an isolated and
descontextualized problem. b) Ten seconds later you have a marker in your hand
in front of a empty whiteboard. My problem is: how the real cognitive process
can be replicated in an interview? Notice that in the real scenario is not the
most smart developer but the best "architect" who gets the best solution after
many trails and error experiments and dead-ends paths. Of course companies
must have a selection process, but the whiteboard kind is just producing a lot
of false negatives.
[0]:
[https://www.reddit.com/r/programming/comments/4h15a1/why_is_...](https://www.reddit.com/r/programming/comments/4h15a1/why_is_hiring_broken_it_starts_at_the_whiteboard/)
------
goldenbeet
So we just went through the process of interviewing candidates for summer
internships. Our interview process is far less technical than most and is
almost all about determining how the candidate will fit into the team.
I believe it's called Behavior Based Interviews. But basically the
interviewing team looks at a list of qualities and picks a few (~3 per person)
that they think are valuable for someone joining the team. For example, I
chose to look for: Priority Setting, Compassion, and Self Development. Once we
have some qualities chosen, we develop questions that are specifically meant
to test for those qualities.
Overall I really enjoy this kind of interview process. I feel like these
qualities are more important than technical ability, so it's nice to put more
emphasis on them during the interview process (We still do assess technical
ability of course). And for us, it results in new team members that I'm
excited to work with!
~~~
koopuluri
Nice!
> I chose to look for: Priority Setting, Compassion, and Self Development.
> Once we have some qualities chosen, we develop questions that are
> specifically meant to test for those qualities.
What are some questions that you used?
~~~
goldenbeet
So all of the questions had some kind of context/primer that I'd talk about
before asking them, but the actual questions were:
Priority Setting - Have you ever felt overwhelmed by the work you've been
given? How do you manage the work so that it gets done on time?
Self Development - How do you go about seeking feedback from those around you to
facilitate your personal growth? What is something that you've learned about
yourself recently that you found helpful?
Compassion - Tell me about a time where you worked with someone who was
struggling, for whatever reason. What were some of the challenges? How did you
go about helping them deal with their struggle?
------
w4tson
I spoke to a guy once who was researching what makes a high functioning team
for some large (unnamed) company. His insight over the various studies he’d
looked at was as follows.
1. High functioning teams are in the order of 5–10 people.
2. Everyone in the team is respected for what they bring to the group
regardless of experience. If you’ve 20 years’ experience you’re valued for your
knowledge of pitfalls etc. If you’ve 2 years you might be valued for your
enthusiasm. Or if you’re a fresh graduate you could be valued on your ability
to question conventions that others do not.
My question is: given this, how would an interview ever hope to sift out a
suitable candidate? Especially when the qualities of the final candidate aren’t
(couldn’t ever be?) properly specified relative to the team they were due to go
into.
~~~
koopuluri
> 2. Everyone in the team is respected for what they bring to the group
> regardless of experience.
I agree with this.
I think there are ways to notice this during interviews. I'm a fan of a junior
engineer interviewing a candidate for a senior position - as a way to see how
well they interact. Perhaps the question could be more open ended (e.g.
explain x,y,z concepts - maybe even ones that the interviewer doesn't know but
wants to learn). If the junior feels like they had a good interaction, were
able to learn, and felt respected through the process, great. If the candidate
seemed offended by the lack of experience of the interviewer, then not great.
A highly subjective signal though.
~~~
w4tson
That might be a start. But there’s even more subtle things going on.
What about the person who can resolve conflict. The person who always takes on
a task to the bitter end regardless how bad it smells. The person who thinks
ahead. The developer who loves process and documentation. The developer who
can code you out of a tricky situation. The person who’s always trying new
things.
Personally I think it’s a lottery. Like putting together a football team but
we don’t even know the positions yet. We just know they can play football
------
bjourne
Should the candidate fit into the team, or should the team fit into the
candidate? Given that most companies want to have as diverse a workforce as
possible, it seems reasonable to argue that the _least fitting_ person should
get the job. :)
~~~
koopuluri
I think you're talking about culture fit. Even when striving for diversity, I
want to ensure competency in the role they'll perform at the company, which I
feel depends on a combination of technical experience, understanding, ability
as well as communication skills, and ability to function well in a team.
I'm trying to think through how to assess team skills well that would also fit
into the constraints of an interview.
------
sidcool
Code pairing rounds usually give a good picture. How they respond to
disagreement from someone lesser in experience than them? How they deal with
conflicting ideas etc.
|
{
"pile_set_name": "HackerNews"
}
|
Rakuten buys yet another ecommerce company - RivalHound
http://blog.rivalhound.com/rakuten-acquires-ebates/
======
RivalHound
So who is next on Rakuten's buying spree?
|
{
"pile_set_name": "HackerNews"
}
|
NASA Overspends on Bagels, Soda at Conferences - markbnine
http://www.space.com/news/nasa-conference-catering-costs-100325.html
======
DanielStraight
So they spent $60k on snacks. Let's say they could have gotten by with a third
of that budget. So they're saving $40k now. Not only is that only 0.0002% of
their budget (the equivalent of someone who makes $50k a year wasting a dime),
it probably cost more than that to put together and present the report. Small
problems should be solved with small solutions (such as telling the conference
planners that things seem to be costing too much, so they should try to be
more budget-conscious: a solution which would take 15 seconds and cost
nothing). Solving a wasted dime with a 26 page report is like spending 2 weeks
improving the speed of a function by a few milliseconds when the function is
only called once.
|
{
"pile_set_name": "HackerNews"
}
|
Airlines Urge Passengers to Wear Face Masks - bookofjoe
https://www.wsj.com/articles/airlines-urge-passengers-to-wear-face-masks-11588039204
======
interestica
For security, won't one be required to remove the mask? And isn't the act of
donning/doffing in itself the most dangerous and critical part in
transmission?
------
bookofjoe
[https://archive.vn/i5FnX](https://archive.vn/i5FnX)
|
{
"pile_set_name": "HackerNews"
}
|
Eleven Years of Erlang - khingebjerg
http://prog21.dadgum.com/64.html
======
pragmatic
I wonder what he does with Erlang? Does he do work for clients, is it a
company, is it just private projects.
I have a lot of languages that I love (python, et al) that I don't get to use
nearly as much as I would like.
[unqualified assumption] I would think it would be hard to find jobs in
Erlang.
~~~
dlsspy
> I would think it would be hard to find jobs in Erlang.
I've got a few cold calls over the years from companies in my area
specifically asking for erlang experience.
I'm currently primarily writing erlang code at work. I've use it at my two
jobs before this, though not primarily.
It's good technology, but I don't typically hire people who know good
technology. I hire people who can recognize and create good technology. One of
the erlang rock stars I hired at my current company did not know erlang a
couple months ago.
|
{
"pile_set_name": "HackerNews"
}
|
Things Android Does Better Than iPhone OS - Ghost_Noname
http://www.maximumpc.com/article/features/10_things_android_does_better_iphone
======
Irfaan
Most of the author's complaints are completely valid. That said, a jailbroken
iPhone can install software to mitigate a number of these issues. Not a sure-
fire solution, of course - it's such a pain chasing after jailbreaks, and you
never know if-and-when the magic update happens that'll stop jailbreaking
forever.
Still, for those curious...
_1: Android can Run Multiple Apps at the Same Time_
"Backgrounder" will give you basic multitasking. And "ProSwitcher" builds on
it to give you wonderful, Pre-like task switching. Caveat - there just isn't
enough memory in anything less than the 3GS to comfortably multitask.
_2: Android Keeps Information Visible on Your Home Screen_
It'll cost a few dollars, but I've been very happy using "LockInfo" to provide
a useful lock screen. It's a must-have application for me - it goes a long way
to making my iPhone feel like an actual PDA rather than just a toy.
_4: Android Gives You Better Notifications_
Not nearly as nice as the Android's, but "LockInfo" does a pretty good job of
managing my notifications once I told it to suppress pop-ups when my phone's
locked. Paired with "Notifier" to continue pestering me if I miss an important
update, I'm pretty content.
_8: Android Lets You Change Your Settings Faster_
"SBSettings" makes changing settings a snap - just slide my finger across the
status bar at the top of the screen to access it. Though I'm not a fan of the
default appearance - do yourself a favor and install the "Deep HUD" theme for
a cleaner interface.
\--
And while I'm advocating bits and bobs to match up with the Android user
experience:
"Simple Background" is a nice bit of eye candy, if you want your iPhone to
have the same sort of parallax scrolling wallpaper that Android enjoys.
"Inspell" gives you red squiggly misspelling highlighting and intuitive
spelling correction. It's a _fantastic_ addition. Honest question - does the
Android have something like this?
~~~
thwarted
_Most of the author's complaints are completely valid. That said, a jailbroken
iPhone can install software to mitigate a number of these issues. Not a sure-
fire solution, of course - it's such a pain chasing after jailbreaks, and you
never know if-and-when the magic update happens that'll stop jailbreaking
forever._
You've just pointed out that the general purpose CPU in the iPhone can run
other software. This isn't what is at issue, what's at issue is the
default/sanctioned install that comes with the iPhone. Of course you can get
other android-like features on your iPhone if you replace or supplement the
core iPhone software with something else. But then, your jailbroken iPhone
isn't really an iPhone anymore, is it?
_"Inspell" gives you red squiggly misspelling highlighting and intuitive
spelling correction. It's a fantastic addition. Honest question - does the
Android have something like this?_
The default Android keyboard, and other keyboards, corrects as you type and/or
gives you spelling suggestions and word completions. This has been a much
better experience than having to go back and try to correct misspellings after
the fact.
~~~
Irfaan
_You've just pointed out that the general purpose CPU in the iPhone can run
other software._
Actually, the gist of my message was to point out _specific bits_ of software
to help folks who feel hindered by some of the limitations the article's
author mentioned. I know they help me feel more comfortable using the iPhone.
I didn't _think_ I was being particularly subtle, but I guess I was mistaken.
<shrug>
_But then, your jailbroken iPhone isn't really an iPhone anymore, is it?_
That's a... strange claim to make. And the only way it seems even related to
my message is if you ignore my explicit caveat about how annoying and fretful
jailbreaking is. Which I'm fairly certain you read - why, it's right there, in
the bit you quoted.
_The default Android keyboard, and other keyboards, corrects as you type
and/or gives you spelling suggestions and word completions. This has been a
much better experience than having to go back and try to correct misspellings
after the fact._
Given the tone of your response, I seem to have really irked you. I'm asking
an honest question here - this is a feature I find really useful (enough that
I gladly paid for this bit of jailbreak-only software that could vaporize at
Apple's whim). I'm not asking for your opinion on how text input should be
done (because frankly, I don't agree), nor for platform advocacy. Really, I
thought my question was rather straight forward.
~~~
warfangle
The point was about how things come out of the box - for the consumer, not the
prosumer :). Of course it's annoying to jailbreak. That's why, for all intents
and purposes, it doesn't matter. It's like Δx^2 while you take the first
derivative: in the end, jailbreaking doesn't really affect the sales of a
product.
Anyway, red squiggly lines are good for checking a mass of text after the
fact. But only if your input method does not allow for checking and correcting
_at time of input._ When this type of thing happens at input time instead of
scan-before-hitting-reply-time, you're probably less likely to miss something.
Also, being able to manually add words to the phone's dictionary is something
my iPhone never had (another win for Android).
~~~
Irfaan
_The point was about how things come out of the box - for the consumer, not
the prosumer :)_
And that's a completely valid concern... if I were engaged in platform
advocacy or arguing these iPhone shortcomings are inconsequential or non-
existent.
But I'm not.
For the record: I don't like jailbreaking my phone. I don't like installing
jailbreak software. I'm annoyed that Apple will neither provide this
functionality that I enjoy nor expose a legitimate mechanism for 3rd parties
to create it.
But such is the state of things. And given I have an iPhone, I darn well am
going to make the most of it. And _that_ matters. I'm not trying to affect
software sales - I'm trying to help people like me. People who (for better or
worse) have an iPhone, are hamstrung by some of the more valid critiques the
article lists, and are looking for solutions.
As for the red squiggly text editing - let's consider that a red herring. I
don't agree with your text editing philosophy, but that's neither here-nor-
there. This wasn't meant as a dig on Android. I just assumed Android could do
this, and was hoping to find out more.
Warfangle - I don't mean this to sound harsh. While I read your reply as
assuming platform advocacy on my part, beyond that it was even handed. But
golly, I _shouldn't_ have to defend information. Opinions, yes. Information,
no. :(
------
flyosity
Maybe the Android Marketplace is "better" than the App Store because anybody
can put an app up there, but the apps certainly aren't better, not by a long
shot.
~~~
jbrennan
I think it's really hard to call one store "better" than the other. I'm an App
Store developer, but I can see advantages to both:
App Store: Far more apps (and thus a stronger platform, attracts more users,
attracts more developers). The review process generally controls quality
(ignoring taste, for example fart apps), in the sense of no virus or battery
drainers or network abusers or completely unusable apps. This is a plus and I
think it's severely under-appreciated.
Android: No review process means you can potentially get any kind of app.
Alternative app stores (or you can download right from the web). Ever-
improving number of apps in the store (it's lagging behind App Store but
quickly catching up).
So it really depends on what you're after and what you value. I don't think
it's really fair for the article to call one store "better" than the other.
------
czhiddy
Many of the points are valid (I really, really, _really_ would like widgets or
something on the lockscreen), but the author's tone makes it seem like he's
short AAPL and posting on Yahoo Finance. (Or more likely, inciting fanboys on each
side to generate site traffic)
~~~
bad_user
Those points are valid, and many of them pulled me back from buying an iPhone
since before there were Android widgets on the market.
I am pretty sure though that one can find 10 things the iPhone does better
than Android ... like fixed screen resolution, better dev tools (although
that's debatable, as I just want to work on my OS of choice), more polished
design for the hardware, etc...
Competition is such a wonderful thing ... those guys at Google that decided to
throw Android in the marketplace to increase competition (such that their
search engine remains in business) are pure geniuses.
------
nooneelse
The date says June 3rd, but then the text says "the Android Marketplace has
only just broken the 50,000 mark". That number is, like, so last month.
<http://www.androlib.com/appstats.aspx>
Ordinarily a month-old fact would probably be fine, but things are moving a
bit fast in the smartphone world these days. So in this case it leads to an
error of 26%.
------
awolf
Could someone elaborate on how Android's multitasking is better than iPhone OS
4's?
I was under the impression that the mechanics of each were nearly identical.
~~~
masklinn
> I was under the impression that the mechanics of each were nearly identical.
They're actually very different: with Android you create what's basically a
daemon server, with no UI but able to do everything it wants, while with OS4
you register against a service (there are 7 service types to choose from) and
the OS itself runs and manages it, calling your registered service hooks.
Android gives far more control, at the cost of more (programmatic) complexity
and more battery life risks (a badly implemented daemon/service will kill your
battery life quick, whereas on the OS4 since the OS keeps full control of the
execution it has a much better shot at managing battery/power issues, and it
can more efficiently handle multiple applications registering for the same
service).
The part that is similar is the handling of non-multitasking-aware
applications (regular applications in OS4, applications with no daemon in
Android): they're simply frozen in RAM, and killed if memory pressure becomes
an issue.
~~~
warfangle
A poorly implemented daemon sure can kill your battery life quickly.
Thank goodness there's a tool that lets us figure out what applications use
the most battery! :)
~~~
masklinn
Yes but that puts the onus of battery-care on the user of the phone. It's fine
for geeks (nb: not a put-down, I'm in that category myself) who are going to
spend half their time between that and the task killer anyway, but that's not
the demographic Apple targets, and that's not the kind of tradeoffs they'd
find acceptable. Hence their selection of a different solution, which (they
believe) gets you 90% of the way with 10% of the costs.
~~~
markkanof
Not necessarily. It could also be the developer of the application that is
taking advantage of the battery monitor functionality to see if their
application needs improvement.
I agree that no end user should have to be keeping a watch for applications
that use too much battery, but it sure is nice to have that information easily
accessible for developers.
------
loewenskind
I wrote a medium sized post about why I like iPhone OS4's approach to
multithreading, but once again when I submit it I get "this link is expired"
and go back to find my text gone.
Seaside used to use continuations for everything but realized that the only
place you really need it is wizard-like work flows, so they've removed it
everywhere else. This allows them to not need a session unless you're in a
workflow and things like "expired link" for pretty static looking pages
doesn't happen.
Hint, hint, hint....
------
obeattie
A lot of the points are certainly valid, though somewhat debatable. The #1
one-up (Multitasking) is set to become an iPhone feature in about a month. As
for no notifications on the lock screen, have the authors never received a
push notification? Again though, local notifications are coming in iPhone OS
4.
~~~
ZeroGravitas
I was under the impression that both the Android and Apple multi-tasking
implementations were limited in various ways, but that the Apple one was more
limited, so you could probably still count that if you wanted to.
~~~
masklinn
The Android way has less (if any) limitations (you register a service/daemon
which behaves as an application server, it's headless but other than that does
whatever it wants). Apple's solution is much more limited (you register tasks
against the OS and the OS runs it, which limits what you can do to what the OS
itself provides) but the tradeoff is that "debatable" programming has lower
chances of eating your battery alive (and the OS can more efficiently handle
some services e.g. if 5 different applications asked for GPS/movement
notification, the OS can poll the chip once and it will then dispatch that one
message to everybody, whereas with the Android solution you'd have 5 different
services polling the location API)
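A very rough sketch of that polling-versus-dispatch difference, written in
Python purely as illustrative pseudocode (neither platform's real API looks
like this, and the fake location chip and app names are invented for the
example):

    import itertools

    class FakeLocationChip:
        # Stand-in for the GPS hardware; each poll returns a new "fix".
        def __init__(self):
            self._counter = itertools.count()

        def poll(self):
            return next(self._counter)

    # Android-style: each app runs its own headless service loop and polls for itself.
    def android_style_service(chip, app_name, ticks=3):
        for _ in range(ticks):
            fix = chip.poll()
            print(app_name, "got fix", fix)

    # OS4-style: apps only register callbacks; the OS owns the loop, polls the
    # chip once per tick, and fans the result out to every registered app.
    def os_managed_loop(chip, callbacks, ticks=3):
        for _ in range(ticks):
            fix = chip.poll()
            for cb in callbacks:
                cb(fix)

    chip = FakeLocationChip()
    android_style_service(chip, "app-1")
    os_managed_loop(chip, [lambda f: print("app-A", f), lambda f: print("app-B", f)])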
------
watty
The background on this website makes my monitor flash...
|
{
"pile_set_name": "HackerNews"
}
|
Your body wasn’t built to last: a lesson from human mortality rates - aespinoza
http://gravityandlevity.wordpress.com/2009/07/08/your-body-wasnt-built-to-last-a-lesson-from-human-mortality-rates/
======
reasonattlm
The body can be considered as a system of many redundant components, with
aging as the result of progressive unrepaired damage to those components. This
is a model that works very well. For further reading, you might look at the
application of reliability theory to aging:
[http://www.fightaging.org/archives/2010/05/applying-
reliabil...](http://www.fightaging.org/archives/2010/05/applying-reliability-
theory-to-aging.php)
[http://en.wikipedia.org/wiki/Reliability_theory_of_aging_and...](http://en.wikipedia.org/wiki/Reliability_theory_of_aging_and_longevity)
Once you start to think along the lines of damage and repair, you inevitably
end up in the SENS camp. It's the logical place to be.
[http://www.fightaging.org/archives/2006/11/the-engineers-
vie...](http://www.fightaging.org/archives/2006/11/the-engineers-viewpoint-
treat-change-as-damage.php)
Bodies are complex systems and all complex systems can be prolonged in their
period of prime operation by sufficiently diligent incremental repair.
Developing a toolkit to do that for humans is the point of SENS the research
program, with the point of SENS the advocacy program being to help people
understand that the scientific community well understands in detail what needs
repairing.
For more on the biochemistry of damage-that-causes-aging, explained for
laypeople, you might look here:
<http://www.sens.org/sens-research/research-themes>
~~~
hallman76
We need to "gamify" this research. Farmville carrot-growers could be solving
humanity's greatest problem!
(I'm totally serious)
~~~
possibilistic
I'm a computational biochemistry student, and I don't think this is possible.
We can gamify protein folding because this is a well-understood, well-
characterized problem. What we don't have the slightest clue about is how to
restore original cell state, eg. rid cells of aggregate intracellular waste,
repair non-trivial DNA damage, restore the extracellular metabolome, etc.
I think we should stop funding/granting scholarships to liberal arts majors.
Let them become STEM majors.
~~~
ebiester
There are really intelligent people out there who could never pass a calculus
class, much less thermodynamics. Do you want them in your program taking up
all the time of the teacher, stopping those proficient in math from getting
the education they deserve?
~~~
hessenwolf
No there aren't; that's daft. What are you basing that on?
Sure, it might be less easy for some than others, and less motivating, but if
they can learn A, they can learn B and vice versa.
~~~
ebiester
I'm basing it on both my time as a math tutor for Pima Community College and
on people I have known.
People like my mother: if she can quantify the data she can effectively do
algebra, but as soon as X and Y appear she shuts down. She cannot make the
jump to the abstract thinking of math, and it curtailed her ability to go back
to college. It didn't help that her math teacher in 9th grade told her that
she would never be good at math, much like many women from poor backgrounds.
(I have no study, only the experience in tutoring on how many women said that
a teacher told them not to bother, and could never make it over that hump.)
Now you say that it's merely psychological for them and that they could. I'm
telling you that for as much work as some of these people I tutored put in,
they had a mental block that they simply could not overcome, fighting their
way to a C in college algebra so they could get to where they're going.
I can give as an example my boyfriend, who can speak and teach two languages
better than most people here can in their native language, and is a promising
Ph.D candidate in his program. He worked for a month straight (with my help)
to raise his GRE math to a minimum score. His limit may be calculus, but
certainly not higher math, and not upper level sciences. Yet I have seen this
man wake up, read, write, sleep, and repeat for weeks straight.
He's a gifted writer and academic, but even if he could muddle through
(say...) managed information systems, he'd never be more than mediocre because
his brain simply does not work that way. (He still asks for help with his
Mac.) Square peg, round hole.
It doesn't benefit those who have a passion for the sciences to put him and
dozens of others in the same class, wearing down the professor because they
struggle to grasp concepts that future scientists understood in fifth grade.
~~~
hessenwolf
1\. I worked for a few years in the remedial math centre for mature students
in my university, and one of my best friends did his doctorate in teaching
mature students mathematics. Based on my experience, had I taken your attitude
then I would have just not shown up for work.
2\. I disagree that it doesn't benefit the stronger students. It was awfully
hard for me to learn to teach mathematics, because I had never really had to
learn it in a step by step way myself. However, when I did learn to teach an
area, my understanding was orders of magnitude higher because I had the
understanding of somebody gifted in the area but the method and the attention
to detail of somebody who has learned it the hard way. If you mix classes with
high and low skilled students, you just have to make sure you rely on the high
skilled students as a teaching resource.
~~~
ebiester
1\. Just because I didn't believe they could do differential equations doesn't
mean that I didn't believe they could learn college-level algebra with some
coaching. I had more faith in them than they did of themselves, in many cases.
2\. There have been many studies that have tried what you say. The problem is
that the class must go slower to accommodate the slower students. Engineering
degrees already have so much packed into them that they often can't take
classes of interest -- are we going to make it even longer?
------
panic
You can find the original, less ad-encrusted version of this article at
[http://gravityandlevity.wordpress.com/2009/07/08/your-
body-w...](http://gravityandlevity.wordpress.com/2009/07/08/your-body-wasnt-
built-to-last-a-lesson-from-human-mortality-rates/).
------
willchang
> Anyone who paid attention during introductory statistics will recognize that
> your probability of survival to age t would follow a Poisson distribution,
> which means exponential decay (and not super-exponential decay).
Small correction: survival to time t under the lightning bolt scenario follows
an exponential distribution. A Poisson distribution, besides being
discrete, has factorial decay, not exponential decay.
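To make the distinction concrete, here is a small numerical sketch in Python;
the baseline hazard and the 8-year mortality-doubling time are illustrative
assumptions, not figures quoted from the article. A constant hazard gives
plain exponential decay of survival, while a Gompertz-style hazard that itself
grows exponentially with age makes survival collapse super-exponentially:

    import math

    def survival_constant_hazard(t, rate=0.001):
        # "Lightning bolt" scenario: a constant yearly hazard gives exponential survival.
        return math.exp(-rate * t)

    def survival_gompertz(t, base_rate=0.0001, doubling_years=8.0):
        # Gompertz law: the hazard grows exponentially with age, so the
        # cumulative hazard (and hence the decay of survival) is super-exponential.
        growth = math.log(2) / doubling_years
        cumulative_hazard = base_rate / growth * (math.exp(growth * t) - 1)
        return math.exp(-cumulative_hazard)

    for age in (30, 60, 90, 120):
        print(age, survival_constant_hazard(age), survival_gompertz(age))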
------
angdis
The longevity of the body is one thing; it is debatable whether or not it can
be extended by a lot or a little (or perhaps until the MTBF of a freak
accident).
What I rarely see discussion of, however, are philosophical and psychological
implications of "living indefinitely".
Even if the body stays relatively youthful, what about diseases of the mind?
In other words, I am saying that in the same way that increasing life-span has
uncovered a plethora of diseases that were previously unknown like cancer, is
it possible that further increasing life span may uncover conditions (perhaps
purely psychological) that we can't even imagine. Will people _want_ to live
100's of years?
~~~
JoshTriplett
To the extent we find bugs that cause the brain to stop working properly over
time, we'll need to find and fix those. That falls under "problems we'd love
to have".
As for the question of whether people want to live for hundreds of years: if
you don't want to live longer, you can easily stop. A surprisingly large
number of people seem to rationalize the lack of immortality by claiming
people won't want to live forever, which strikes me as sour grapes. Given an
actual solution that allows people to live forever, the question becomes "do
you want to die?", and I seriously doubt many people will say "yes".
(Also, if you think of immortality as "hundreds of years", I think you need to
recalibrate your scale. I'd like a lifetime measured on a cosmological time
scale, and I have no problem conceiving of ways to spend that time.)
~~~
jonp
As someone who does have difficulty conceiving of ways to spend a
cosmological-scale lifespan, how might you spend it?
~~~
JoshTriplett
Mostly, I'd never stop learning, and I'd apply everything I learned.
Consider the sum total of human knowledge today. Consider how small a fraction
of it any one person knows.
Within the next month, I'll have completed a PhD in computer science. It took
me years to learn the fundamentals of _one_ field, plus years more to get
practical experience by tinkering in numerous areas, plus years more to become
an expert in one narrow area (scalable concurrent data structures) and advance
the state of the art in that area. Take a look at
<http://matt.might.net/articles/phd-school-in-pictures/> to get a clearer
picture of scale; now consider what a few million or billion lifetimes could
produce, between research, practical work, exploration, and just good old-
fashioned tinkering.
How many of those narrow areas exist in one field alone? How many more fields
exist to explore? How many more will exist by that point? What happens when
someone with expert-level knowledge in a pile of those fields starts applying
them to each other? And most importantly, do you really think it ever stops?
Apart from that, I'd have plenty of time between learning everything and
creating new things to enjoy the enormous amount of available entertainment
created over the aeons, in all its various forms.
I think the future sounds awesome, and I want to see _all_ of it. :)
------
drumdance
My dad used to joke that if he made it to 80 he was going to take up smoking
again. Alas, he only made it to 79.
~~~
libraryatnight
A friend used to say that most things he was warned about started with "Men
over the age of 35..." and so once he hit 35 he would stop smoking, drinking,
eating horribly etc. We would usually discuss such things on a smokey patio
with beers.
He's just shy of 35 now, so we'll see ;) Sounds like your dad had a good sense
of humor :)
------
jessriedel
The chance-of-death plots should be logarithmic, so we can tell if this
exponential is really a good fit. On linear plots, it's hard to distinguish
exponential decays from 1/x^n decays.
------
kingkawn
That graph showing survival probability as near 1 for age 0 can't be correct,
since mortality is significantly higher at birth and immediately after, then
drops for a long time, then shoots up again in old age. This is at least true
in westernized countries that have medical care available.
~~~
ArchD
Obviously, this is a simplification. Infants die for other reasons that you
can think of to be different from the reasons for which old people die (e.g.
cancer), and these reasons are being factored out of the graph.
~~~
ryusage
I thought the point of the graph was that it didn't matter how or why the
people died though? Isn't that part of why it's so surprising?
~~~
yelsgib
The point of the graph is that it doesn't matter how or why people die, after
a certain age. This is clear from context, though not explicitly stated.
------
fragsworth
Evolutionary theory would also suggest that we have some mechanism to ensure
our deaths. Longer lifespans cause fewer generations per time period,
resulting in less adaptability as a species.
~~~
JoachimSchipper
Why would evolution need that? If grandpa is badly adapted, he'll starve/get
eaten/etc; no need for a built-in kill switch. In fact, if there were a kill
switch getting rid of it would be highly adaptive, if only because you could
be around to defend your great-grandchildren.
(Of course, we do die. But the explanation looks more like "growing and
reproducing quicker beats longevity" than like "planned obsolescence".)
~~~
JoeAltmaier
Grandpa wears out, yet competes for resources. Kill him off, more for the
healthy youngsters. Certainly it's selective, at the family/community level,
choosing to keep more-efficient members.
I also think, making grandparents less mobile means they are around the
campfire teaching the youngsters. It makes sense it would be selected for in a
race of communicators.
~~~
onemoreact
Mice and rabbits also age.
~~~
JoeAltmaier
Sure. But rats quickly die when they become less mobile or arthritic.
Humans can live for decades beyond their most-productive years. There has to
be a Darwinist reason for this.
~~~
onemoreact
The Darwinist reasoning behind rates of aging goes something like this. An
adult mouse has a high chance of being killed in a random year (above 20%); a
parrot has a low chance (below 3%). Maintaining a body into old age has a cost
that reduces reproductive capability in a given year and a benefit of
increasing the number of years of reproductive capability. There are also
minimums of capability in the wild where vision and mobility link to survival
rates such that there are thresholds below which rates of survival
dramatically decrease.
Thus, the number of healthy years in the wild relates to both the probability
of an external death AND internal health issues. For a mouse this suggests a
minimum of internal maintenance for maximum reproduction where a parrot can
make significant trade-offs in reproduction in order to live 10x as long and
have more long term reproductive chances.
_However_ , that's in the natural setting. A pet (mouse, cat, parrot) can
live slightly longer in captivity by surviving past the point where it can find
food for itself. If you look at human vision decline, people are
significantly less capable of surviving on their own before they lose
reproductive capability. And in that "unnatural" old age it's not uncommon for
various species to have increasing reproductive issues.
------
ggwicz
One of the sharpest changes (I guess a "point of inflection" it looks like?)
seems to be at about 65, or the most common retirement age (here in the states
at least). I wonder if there's a connection?
In hunter-gatherer societies, elders older than 60 have been observed as 1)
looking healthy and 2) still being able to hunt, fish, trap, build, and pretty
much everything else along with their younger counterparts. Perhaps at a
slower pace, but they're generally far more fit than the modern world's old.
_Pampered bodies grow sluggish through sloth, movement and their own weight
exhausts them._ \- Seneca
~~~
kiba
_In hunter-gatherer societies, elders older than 60 have been observed as 1)
looking healthy and 2) still being able to hunt, fish, trap, build, and pretty
much everything else along with their younger counterparts. Perhaps at a
slower pace, but they're generally far more fit than the modern world's old._
Survivorship bias.
------
j_baker
Silly question: If I'm understanding this correctly, doesn't this essentially
mean that statistically it's possible to live forever? Or is there a point
when you statistically have 100% probability of dying?
~~~
jbri
Sort of, depending on the model. For most models, if you take your
"probability of dying in year X", and sum that over all the years from 0 to
infinity, you'll get a 100% probability of dying at some point. The
interesting thing is that there's no individual year with a 100% probability
of dying - the certainty of death is just because "forever" is a _really long
time_.
There are hypothetical distributions, though, where the sum total of
probabilities is less than 100% - where some proportion of the population
will, statistically, never die.
Of course this brings us to the _real_ issue, which is that what the model
says doesn't really matter - if the model disagrees with reality in extreme
cases, reality wins.
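A tiny Python sketch of that summing argument, with per-year hazards made up
purely for illustration: when the hazard keeps growing, the yearly death
probabilities sum to essentially 1 even though no single year is certain
death, whereas a hazard that shrinks fast enough yields a "defective"
distribution whose total stays below 1 - the case where some fraction
statistically never dies.

    def total_death_probability(hazard, years=500):
        # Sum P(die in year t) = P(alive at start of year t) * P(die during year t).
        alive = 1.0
        total = 0.0
        for t in range(years):
            p_die = hazard(t)
            total += alive * p_die
            alive *= 1.0 - p_die
        return total

    # Hazard that keeps growing but never reaches 1 in any single year:
    growing = lambda t: min(0.001 * 2 ** (t / 8.0), 0.999)
    # Hazard that shrinks so fast that the total probability of ever dying stays < 1:
    vanishing = lambda t: 0.5 ** (t + 2)

    print(total_death_probability(growing))    # ~1.0
    print(total_death_probability(vanishing))  # noticeably less than 1.0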
~~~
usaar333
On high ends it definitely is. It predicts no one could have ever lived past
116, but clearly some have:
<http://en.wikipedia.org/wiki/Oldest_people>
------
ifearthenight
Fun and interesting read. Personally, though, I think by only examining
mortality rates, half the story has been missed, i.e. life expectancy.
While intuitively we can see that the probability of dying in any given year
increases the more years you live, perhaps slightly more counter-intuitively,
the longer you live, the longer your life expectancy is (a rolling average,
obviously).
Would love to see the two put together somehow and charted.
------
scotty79
Are there any other scenarios that lead to Gompertz Law like distribution?
|
{
"pile_set_name": "HackerNews"
}
|
What is the best way to protect your internet activity from an intrusive ISP - suprgeek
All:
With bills like these [1] looming over the horizon, what is the best* way to protect your internet activity from your intrusive ISP?
Yes, VPN is the one-word answer - if so, please be specific - Which provider? Which plan? Does Google et al work properly without captchas?
*Here BEST means - very little speed loss (after using whatever it is) and easy to install/set-up (think aged parents), US specific (end users are in US)
[1] https://arstechnica.com/tech-policy/2017/03/gop-senators-new-bill-would-let-isps-sell-your-web-browsing-data/
======
deftnerd
I was using AT&T gigabit in Austin, TX. Their regular price included them
monitoring all my internet traffic to profile my browsing habits. I could pay
another $30 or $40 a month for my privacy to stay intact (at least they
promised that they would respect my privacy if I paid extra).
Rather than spending that money, I just rented a small server at a colo in
town and set up a VPN server. I then configured my router so all traffic went
through the VPN. All traffic through AT&T's networks remained encrypted
through the VPN. It also had the benefit of hiding my real IP address from
nosy sites.
Using a VPN service would also be acceptable, just make sure they have enough
bandwidth.
------
savethefuture
Very difficult to avoid ISP intrusion since they are the ones allowing you
access to the internet. As you've said already a VPN is the way to solve this
but even that is still going to have its own problems related to privacy. I
would never trust a VPN provider; I would recommend you learn to set up your
own VPS box (Linode, DigitalOcean, AWS, etc.) and run an SSH VPN through it.
Then you have absolute confidence that your activities are not being monitored
by the provider. But again there is an ISP giving your vps access to the net
now which can monitor you, and if THEY try hard enough they can correlate your
activities back to you.
~~~
savethefuture
Unfortunately all this leads to self censorship and adds fear into speaking
freely and doing as you please, but adding additional layers of protection will
make it more difficult for them to discover things about you. But there is no
total safety or total privacy on the internet ever.
------
wazanator
Have you looked into Tor? If you don't want to use a VPN that's an
alternative.
|
{
"pile_set_name": "HackerNews"
}
|
I'm Hosting Remote Work Summit-26 Speakers from MS,WordPress,Buffer,GitHub(Phew) - nishchaldua
Quick Overview: It's a virtual summit. Free to attend. Spread over 5 days from March 5-9th. With 27 speakers from - Buffer, Evernote, Microsoft, Trello, Github, Appirio, Zapier, Toptal, Automattic (WordPress), Helpscout, Treehouse, WomenWhoCode, Mailbird, Flexjobs and others!
https://www.theremoteworksummit.com
Why remote work?
I'm a big supporter of flexible work, remote policies, the sharing economy, and diversity & gender balance at work. Imagine if all of us could work from anywhere, anytime, however we want to, without any barriers.
Remote work was supposed to be the ONE TRUE BENEFIT OF THE INTERNET. We got Snapchat instead.
Remote organizations are able to hire people for their skill & value instead of who's based where and how they or their resumes look. I'm not saying that every job & company can go remote today. But those who can, should. I think of remote work as a big equalizer that will finally break down the last few geographic walls we face.
To make sure there is real value in this online conference, we got together a phenomenal panel of speakers. See & judge for yourself.
Speakers:
* Director of Partnerships, WordPress
* Director of People, Buffer
* General Manager, Evernote
* Program Manager, Microsoft (Scott Hanselman, the ASP.Net guy if you know?)
* Marketing Head, Trello
* COO, Treehouse
* CEO, Liquidspace
* CEO, Mailbird
* CEO, FreeUp
* CEO, WomenWhoCode
* CEO, Tortuga
* CEO, NinjaOutreach
* CEO, Outpost
* CEO, Rype
* Customer Support, Kayako
* Digital Entrepreneur/Nomad
* Freelance Marketer (Digital Nomad)
* Freelance Designer (Digital Nomad)
* Director, Nomad Capitalist
* Director, InMarketingWeTrust
* Director of Marketing, Github
* Director of Engineering, Toptal
* Head of People, Helpscout
* Talent Head, Appirio
* CFO, Zapier
* Director of People, Flexjobs
Background Story:
Me & my team of 3 spent the last 3 months putting the whole event together and getting the right set of questions for each of the speakers.
There's tons of value for entrepreneurs, freelancers, people managers & those already working remotely.
Remember, it's free to attend with an optional upgrade for those who want lifetime, on-demand access (I still have to eat & pay bills)!
Try and attend at least a few sessions and give me your feedback. I'm here all day so ask me anything. _Ignore typos, I'm running on coffee for the last 36 hours. Cheers!_
======
nishchaldua
Here's the link -
[https://www.theremoteworksummit.com](https://www.theremoteworksummit.com)
|
{
"pile_set_name": "HackerNews"
}
|
The Building Blocks of Ruby - wycats
http://yehudakatz.com/2010/02/07/the-building-blocks-of-ruby/
======
kscaldef
I don't feel like any of these are particularly compelling examples,
particularly if you are familiar with more languages beyond Java and Python
which are used for comparison in the article.
In both the file handling and mutex example, blocks seem to be serving as a
substitute for proper lexically scoped variables. In Perl, for example, we
would use a lexically-scoped file handle to ensure the file is closed when the
variable goes out of scope. The same technique is the standard way of
implementing mutexes in C++.
As for respond_to, I've never understood how it wasn't just a baroque
rewriting of a case-statement, but perhaps someone can enlighten me as to why
blocks are superior in this situation.
~~~
wedesoft
Blocks are objects themselves and they preserve their access to the local
variable scope (i.e. they are "lexical closures"). This allows you to write
control structures yourself. E.g. if-statement:
def my_if( cond )
  if cond
    yield
  end
end
x = 3
my_if x < 5 do
  puts "#{x} is lower than 5"
end
# prints '3 is lower than 5'
Unfortunately the syntax for a control structure accepting more than one block
is less elegant. But you don't need to compromise on the semantics:
def my_if_else( cond, a, b )
  if cond
    a.call
  else
    b.call
  end
end
x = 2
my_if_else( x < 0, proc do
  puts "#{x} is lower than zero"
end, proc do
  puts "#{x} is greater or equal zero"
end )
# prints '2 is greater or equal zero'
~~~
kscaldef
Sure and I understand all that. My comment was just that the examples in the
article are perhaps not the best ones to use because they are simply
replicating what other languages do with lexically scoped variables, and users
of those languages view the lack of such as a deficiency of Ruby.
~~~
swannodette
How does a lexically scoped variable close the file? I couldn't find any
documentation about this while Google searching. Is this a real language
feature where you can specify what happens to a variable when it goes out of
scope?
In other words, is it a baked-in convenience or a real extensible abstraction?
~~~
wedesoft
#include <fstream>
using namespace std;
int main()
{
  {
    fstream f( "test.txt", ios::out );
    f << "Hello" << endl;
  } // File gets closed
  // ...
  return 0;
}
~~~
weaksauce
I haven't done C++ in a while but is it possible to allocate an fstream object
on the heap instead of the stack? If so and you do not explicitly call delete
on it then will it still close the file when going out of scope?
Something like:
#include <fstream>
using namespace std;
int main()
{
  {
    fstream *f = new fstream( "test.txt", ios::out );
    *f << "Hello" << endl;
  } // memory leak and non-closed file.
  // ...
  return 0;
}
~~~
scott_s
You can allocate an fstream (or any) object on the heap, and if you do not
explicitly call delete it will not close the file. Resource acquisition is
initialization (RAII) is a common C++ idiom, and it only works on stack
variables.
~~~
altano
It only works with stack variables, but you can wrap the construction and
destruction of anything, including heap-allocated objects, with a stack
variable. You tie the allocation of the heap object to the constructor of the
stack variable, and the reverse for de-allocation/destructor.
The premise of all this is that the construction and destruction of stack
variables both happen at well known times and are guaranteed to occur.
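As an aside (this is not from the original post), the same "tie the resource's
lifetime to a scope" idea can be sketched in Python with a context manager:

    from contextlib import contextmanager

    @contextmanager
    def opened(path, mode="w"):
        # Acquire the resource when the with-block is entered...
        f = open(path, mode)
        try:
            yield f
        finally:
            # ...and release it when the block exits, even on exceptions,
            # much like the destructor of a stack-allocated fstream.
            f.close()

    with opened("test.txt") as f:
        f.write("Hello\n")
    # The file is guaranteed to be closed here.
    # (Python's built-in open() already supports "with" directly; the wrapper
    # just makes the acquire/release mechanics explicit.)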
------
j_baker
I've always felt that blocks were Ruby's gimmick. They're neat, and I'd like
to have them in Python. But I get the feeling that this is just a bikeshed
issue. People show them off because they're easy to understand. There's
absolutely _nothing_ wrong with that.
However, I get the feeling that they're not Ruby's strongest feature. Python's
coolest features (metaclasses, descriptors, and other things) wouldn't make
sense if you'd just read a blog post on them. I suspect Ruby is the same way.
~~~
bad_user
It's not a bike-shed issue. Surely the other features are great, and Python
has lots of powerful abstractions.
But once you have closures with a light-weight syntax, as Ruby has, the APIs
start to look a lot more different.
For example, take this example ...
v = [ x for x in collection if x % 2 == 0 ]
for item in v:
    print item
In Ruby the equivalent would be ...
collection.find_all{|x| x % 2 == 0}.each do |item|
  puts item
end
Yes, the Python example is elegant, but Ruby doesn't need extra baked-in
features like list-comprehensions. It doesn't need a bunch of other features
as well, like generators, or generator expressions, or with statements.
There are a lot of PEPs in Python that cover use-cases for Ruby's blocks,
trouble is there are still use-cases that aren't covered.
Guido is partially right though ... adding blocks in Python wouldn't be
pythonic because blocks wouldn't be orthogonal with lots of other Python
features. They should've been added from the start, and now it's kind of late.
------
tptacek
_pedantic:_
Digest::MD5.digest(x), not Digest::MD5.hexdigest(x). If humans aren't reading
it, don't convert it to hex.
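The same digest-versus-hexdigest distinction exists in Python's hashlib, shown
here only to illustrate the size difference:

    import hashlib

    h = hashlib.md5(b"some cache key")
    print(len(h.digest()))     # 16 raw bytes
    print(len(h.hexdigest()))  # 32 hex characters encoding the same 16 bytes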
------
subwindow
Blocks are also an extremely powerful tool when it comes to building DSLs, and
if done correctly are a great alternative to complicated options
files/hashes/etc. Rails' routes and config/initializers come to mind.
~~~
chromatic
> _Blocks are also an extremely powerful tool when it comes to building
> DSLs..._
I can't read this as anything more insightful than _functions are also an
extremely powerful tool when it comes to building APIs_. I first used Ruby in
2000. What am I missing?
~~~
Vitaly
ruby block syntax is clean, elegant and nest-able which allows creating good
looking DSLs.
foo do
  bar do
    ..
  end
  baz 123
end
now try to do the same with some other language which supports something-kind-
of-like-ruby-blocks but with a different syntax. it will not look nearly as
good, so in those languages instead of creating DSL people usually implement
some kind of config file format instead. or just use XML :)
if you'd have to use 'lambda' to define a block for example, it would make a
much worse DSL with lots of extra syntax noise.
~~~
chromatic
The lack of punctuation characters makes this a DSL and not bog-standard Ruby
code?
> ... instead of creating DSL people usually implement some kind of config
> file format instead. or just use XML.
Writing a grammar or a parser means you _haven't_ created a DSL?
Unless by "DSL" you mean "Ruby syntax and Ruby semantics with symbol names
chosen by the programmer", I have no idea what you mean by "DSL".
~~~
jimbokun
How about this:
Ruby's syntactic flexibility and semantic model mean that many cases handled
by a change to the language spec or delegating to another language (XML config
files, for example) can be cleanly handled in Ruby itself. The end result is
often something that looks like a little language for a specific task, such as
Rake, Rails, etc. (I'm not a Ruby programmer, so I hope those are good
examples.)
So with Ruby, there are very few cases where writing a custom grammar or
parser is necessary. Ruby's flexibility gives you the ability to do things
that might require a custom grammar or parser in other languages.
~~~
draegtun
However even Jim Weirich doesn't like Rake being called a "DSL".
I think Piers Cawley was dead on by calling things like this a _Pidgin_
refs:
* <http://www.infoq.com/interviews/jim-weirich-discusses-rake>
* <http://www.bofh.org.uk/2007/08/08/domain-specific-pidgin>
------
adelevie
Yehuda Katz is always able to remind me that there is so much Ruby I don't
know.
|
{
"pile_set_name": "HackerNews"
}
|
DNS-over-HTTPS causes more problems than it solves, experts say - sgnork
https://www.zdnet.com/article/dns-over-https-causes-more-problems-than-it-solves-experts-say/
======
ohiovr
If for some reason you need secrecy in dns access, why not install bind9 on a
raspberry pi?
|
{
"pile_set_name": "HackerNews"
}
|
Show HN: Collaborative todo list app – to reduce the email load in my office - nikodunk
https://nikodunk.github.io/simple-to-do-react/
======
aldo712
Sorry if I'm missing something, but how is this collaborative?
~~~
nikodunk
Ah sorry if that’s unclear. It’s not password protected, so I just log in to
my coworker Emma’s list to add stuff to her todo list (and she to mine too)
instead of sending her an email asking her to do something.
~~~
bernardhalas
Hi, how can I do that technically?
Other than that I like this approach. It looks simple and intuitive. It would
be interesting though to see if there's a possibility to add more details
behind each todo item. If you send emails to people asking them to do
something, often one line is not enough to explain that.
BTW, if you'd like to get more UX feedback, please feel free to visit our free
UX community platform at
[https://usability.testing.exchange](https://usability.testing.exchange).
|
{
"pile_set_name": "HackerNews"
}
|
Ask HN: What machine learning approach to use if you only got positive examples? - sparkpluglabs
Are there some well known Machine Learning algorithms when you only have positive examples? I did some research but most I could find was when you have positive and unlabeled examples.
======
mswen
Maybe I don't understand your situation but unless you have variation you
don't stand a chance of developing a predictive model. In the dependent
variable you need at least 0, 1. And in independent variables you need
variation. If you are doing time series you need variation over time in your
dependent variable.
~~~
sparkpluglabs
Maybe I am not thinking right about it.
Let me give an example. If given a list of all cocktails, I have a lot of
positive examples on what ingredients go well together. Say Gin and Vermouth.
So I have a big list of positive examples. But it is hard to get negative examples.
I want to build a model where given some ingredients the model tells me
whether they go well together or not.
~~~
mswen
What I would do is build a crowd-sourced data set of what ingredients go well
together and which ingredients people don't like together (rate it from 1 to
10, where 1 means these ingredients should never be used together and 10 means
this is a heavenly combination). After lots of rating by many different people
you could have an average quantitative score for each pair of ingredients.
You now have variation and can use a variety of predictive approaches
~~~
sparkpluglabs
Thanks! Your comment gave me another idea. What if I take each cocktail recipe
as a vote? I take all the ingredients and generate all possible pairs, then
look at how many times each pair appears across cocktails and treat those as
votes. More votes means they go together really well.
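A minimal sketch of that pair-counting idea in Python; the recipes below are
made up purely for illustration:

    from itertools import combinations
    from collections import Counter

    # Hypothetical recipes; in practice these would come from a real cocktail database.
    recipes = [
        {"gin", "dry vermouth", "olive"},
        {"gin", "sweet vermouth", "campari"},
        {"rye", "sweet vermouth", "bitters"},
    ]

    votes = Counter()
    for ingredients in recipes:
        # Every pair appearing together in one recipe counts as one "vote".
        for pair in combinations(sorted(ingredients), 2):
            votes[pair] += 1

    def compatibility(a, b):
        # More co-occurrence votes -> stronger evidence the pair goes well together.
        return votes[tuple(sorted((a, b)))]

    print(votes.most_common(3))
    print(compatibility("gin", "sweet vermouth"))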
~~~
mswen
Seems like a logical approach to me. Also once you get the data set built you
could algorithmically generate new recipes. If one type of machine learning
algorithm generates more pleasing new cocktails you have a winner. Just don't
do all the testing yourself. We don't want to read the HN post explaining how
'sparkpluglabs' ended up in detox from unit testing!
~~~
sparkpluglabs
:) a "surprise me" button for
[http://www.kickstarter.com/projects/monsieur/monsieur-the-
ar...](http://www.kickstarter.com/projects/monsieur/monsieur-the-artificially-
intelligent-robotic-bart)
|
{
"pile_set_name": "HackerNews"
}
|
GitHire Swamped After Promising 5 Hire-Worthy Programmers for $1k - tswicegood
http://www.nytimes.com/2012/02/03/jobs/githire-a-headhunter-is-swamped-after-promising-5-hire-worthy-bay-area-programmers-for-1000.html?_r=1&scp=1&sq=githire&st=cse
======
guywithabike
GitHire is the sleaziest, scammiest spam setup I've come across in a long
time. It shocks me that it's treated with legitimacy by the New York Times.
Additionally, it saddens me to see so many people uncritically praising the
idea as if it was anything other than a spam house.
GitHire does nothing more than scrape GitHub profiles and spam users with
unsolicited junk mail -- regardless of the users' qualifications or desire to
be recruited. It's spray-and-pray spam, and companies wishing to keep their
respect among developers would do well to avoid them. Any company willing to just
resort to spam is neither a company I'd want to work for nor a company I'd
wish to be a customer of.
~~~
dpritchett
Really interesting PR hack in play here. At first glance the article is
published at NYT.com and that fact _will_ be milked endlessly. The really
curious part is that the article is syndicated up from the SF-area nonprofit
"Bay Citizen" [1].
Note to PR-hungry founders: Figure out which blogs are syndicated by the NYT
and other big players and then pitch to them directly. They should be much
more receptive audiences than first-party NYT staff.
[1] <http://www.baycitizen.org/about/>
~~~
natrius
It's not really syndication. The New York Times has partnerships with The Bay
Citizen and several other regional news nonprofits to publish articles in
their regional editions on Fridays and Sundays. The articles go through The
New York Times's editorial process, not a completely separate process like the
Associated Press.
In theory, The New York Times won't lend its imprimatur to articles it doesn't
deem worthy, so I don't see why the newsroom that the reporter sits in is an
issue.
~~~
NinetyNine
There's a pretty big effort jump between writing an article and approving it
for editing; in addition, writing involves intently thinking about the story
for a significant amount of time. This means that any ethical, financial, or
PR-related concerns apply longer for the writer. NYT
writers are highly motivated and competitive people, and would worry about
even the occasional negative piece. Bay Citizen writers aren't held to the
same requirements, since they aren't viewed in the same critical way.
Editors, since they simply have to review the piece and what gets on the page
is whatever is the best to choose from, are a lot more likely to approve a
piece.
~~~
natrius
_Editors, since they simply have to review the piece and what gets on the page
is whatever is the best to choose from, are a lot more likely to approve a
piece._
Your view of the role of an editor is incorrect. My observations of the
newsroom I work in (as a developer) are quite different. My organization has a
similar partnership with the New York Times, and some of our stories have made
the front page of the national edition, not just the regional edition.
Your view of The New York Times's willingness to publish pieces that don't
meet its standards is based on speculation.
------
gkoberger
They were practically begging my co-workers and me to interview. They were
offering $10 Amazon gift cards for an interview, which was just insulting. It
wasn't even the amount; it turned what could have been a good fit into a cheap
business transaction.
Here's how I saw it: If I invited you to dinner, it would be considered a nice
gesture. But what if I offered you $10 to come to my house for dinner? It
changes everything.
I feel bad for the company, who we blamed for the "stunt". Turns out, the
company had no knowledge they were doing that.
~~~
AgentConundrum
Joel Spolsky on motivating factors:
> _But when you offer people money to do things that they wanted to do,
> anyway, they suffer from something called the Overjustification Effect. "I
> must be writing bug-free code because I like the money I get for it," they
> think, and the extrinsic motivation displaces the intrinsic motivation.
> Since extrinsic motivation is a much weaker effect, the net result is that
> you’ve actually reduced their desire to do a good job. When you stop paying
> the bonus, or when they decide they don’t care that much about the money,
> they no longer think that they care about bug free code._
<http://www.joelonsoftware.com/items/2006/08/09.html>
~~~
gkoberger
I think the awesome book Predictably Irrational does a better job of
explaining it:
> What’s going on here? Why does an offer for direct payment put such a damper
> on the party? As Margaret Clark, Judson Mills, and Alan Fiske suggested a
> long time ago, the answer is that we live simultaneously in two different
> worlds - one where social norms prevail, and the other where market norms
> make the rules. The social norms include the friendly requests that people
> make of one another. Could you help me move this couch? Could you help me
> change this tire? Social norms are wrapped up in our social nature and our
> need for community. They are usually warm and fuzzy. Instant paybacks are
> not required: you may help move your neighbor’s couch, but this doesn’t mean
> he has to come right over and move yours. It’s like opening a door for
> someone: it provides pleasure for both of you, and reciprocity is not
> immediately required.
> The second world, the one governed by market norms, is very different.
> There’s nothing warm and fuzzy about it. The exchanges are sharp-edged:
> wages, prices, rents, interest, and costs-and-benefits. Such market
> relationships are not necessarily evil or mean - in fact, they also include
> self-reliance, inventiveness, and individualism - but they do imply
> comparable benefits and prompt payments. When you are in the domain of
> market norms, you get what you pay for - that’s just the way it is.
~~~
AgentConundrum
Once you mentioned it, I realized I knew this was in Predictably Irrational.
I've never read it, but I have heard it mentioned in this context.
I first heard of this through Joel, so I quickly found a citation from his
blog. I know he's mentioned it repeatedly on the SO/SX podcasts as well, so
it's likely the association was solidified there.
That said, your quote doesn't make the point I was trying to highlight.
Specifically, I was trying to show that moving from intrinsic to extrinsic
motivation is not only damaging, but also hard to reverse as well.
Thanks for the quote though. I've renewed my mental note to someday maybe
finally get around to reading the book.
------
latchkey
These guys are spammers, there is no way around it.
<http://lookfirst.com/2012/01/githirecom-is-spammer.html>
Here is my post about an HN post that they _deleted_ after people started
ripping into them:
<http://lookfirst.com/2012/01/githire-spam-again.html>
Please make them go away. Mark all emails from them as spam. Do not buy their
services. Do not respond to their emails.
~~~
X-Istence
I was not even aware that they create profiles from my public data on Github,
and it seems the only way to remove my profile is to log in using OAuth on
Github.
That just seems even more sleazy.
~~~
latchkey
Yea, I'm starting to get upset at Github for allowing them to continue. I've
emailed [email protected] multiple times about this. One email said that they
would take care of these guys and then all further support emails have been
ignored.
Sadly, they are giving Github a bad name now too as it appears to me that they
are supporting these guys by allowing them to continue.
------
almightygod
Seems like hype. This page <http://githire.com/job_board> shows only 19 job
requests and only 3 intros made so far. Also if these intros are made by
programmatically scraping emails on github, then they could be in some trouble
for violating spam laws.
~~~
Aqua_Geek
I would hope that, at minimum, they're respecting the hirable? flag on
profiles. Their about page is full of marketing speak, but short on actual
details. (I understand that they don't want to give away their "secret sauce,"
but is there really that much to it?)
Edit: Also, their algorithm seems pretty off - a search around me returns some
candidates in the "top 10%" group who only have one repo and it's a fork of
someone else's with a small bugfix. (Nothing wrong with that - I appreciate
those who take the time to submit patches. But surely a single fix of a couple
of lines shouldn't rocket someone into the top 10% of GitHub.)
~~~
jeremymcanally
They do not (or at least, did not) respect that flag nor do they (or did they
previously, at least) respect an explicit opt-out of their service. Great that
they're successful, but for a while there they were well on their way to
pissing off the developers they so desperately need to actually make money.
------
idan
Shameless plug: our startup, Skills (<http://skillsapp.com>) is attacking the
resume problem from a different angle. On one hand we don't do candidate
sourcing (yet), on the other hand we don't spam potential hires.
The way our service works:
1\. You create a position. Right now we offer two kinds: Django and Frontend
2\. You get a link from us. Paste that link in your job board adverts, or
anywhere you'd otherwise say "send your resumes to…"
3\. Candidates apply through the link, we crunch their data and produce a
clean, concise report that helps you decide whether they're worth talking to,
without wading through the resume keywordfest.
You can see a sample brief here: <http://skillsapp.com/sample/brief/>
Check us out; we've also just launched, and we're hungry for feedback.
------
rpwilcox
Sounds like businesses can smell a deal a mile away.
I thought normal recruiter practice was some percentages (10%, 20%?) of a
hire's first year pay. So, like $10,000 to $20,000.
If you can get an interview with a potential employee for one _tenth_ of
that... _and_ you know these people are famous in some nerd circles (buying
your company more geek cred)... it's a deal.
~~~
praptak
On the other hand, if it looks too good to be true then it probably is. See
other comments.
~~~
rpwilcox
Oh, I have no doubt. Many things are funny about Githire, including just how
many coders are in the top 10-20% of Github users (including those with one
repo and no activity).
------
iambot
Seriously, how badly are they using the GitHub API, or scraping? Because when I
find my profile (I'm not listed in my area, "Edinburgh", even though people
with a lower score are), the projects on it are so severely out of
date it's embarrassing.
Update: In my opinion, even though it's not for "Hiring",
<http://coderwall.com> does a way better job. Perhaps they should up their
game and do what gitHire is doing - but properly.
------
dpritchett
The quote at the end made my hair stand on end:
_"Top programmers are like a race car. Once you get them you don't want to
lose them and you want to get as many as you can."_
~~~
angersock
yay im a commodity
gotta catch'em all zuck
~~~
talmand
even worse, you're treated as such by a large number of people who don't seem
to understand how to properly invest in that type of commodity
------
mixonic
A snippet:
“Top programmers are like a race car,” he said. “Once you get them you don’t
want to lose them and you want to get as many as you can.”
It is a little off topic, but I think comments like this give me an idea of
what it feels like to be objectified, like a woman might be in American media.
I'm only now realizing that's why the "rock star" or "race car" label makes me
feel so shitty. From Wikipedia:
"Some feminists and psychologists argue that such objectification can lead to
negative psychological effects including depression and hopelessness, and can
give women negative self-images because of the belief that their intelligence
and competence are currently not being, or will never be, acknowledged by
society."
Now, I know this is only my career and not my body, but in spite of myself I
get so intimidated by all these C-suites looking for such hyperbolically able
and single-minded individuals that I start worrying about whether I can cut it.
Whether I will be recognized for anything except my ability to program.
I'm not comparing this problem to the objectification of women in scope, and
I'm not saying this is consuming my whole life or anything. It is far from
that, but something about the "race car" statement brought this into focus for
me. I don't want to be collected like race cars or Pokemon by some dipshit.
This was not intended to troll; I just wanted to share my realisation about
being compared to expensive/flashy stuff.
------
rudiger
This article is in the print edition of the New York Times with the headline "
_A New Resource for Hiring Programmers Has Become Entirely Too Successful._ "
GitHire must have some good PR.
~~~
rhizome
It may be all they have.
------
ChuckMcM
And this is surprising how? Great to hear they are being successful, though.
$200/lead is probably an order of magnitude too low.
------
X-Istence
Did they get permission from Github to scrape their website for the content
they have put up?
From the terms of service:
4. You must not modify, adapt or hack the Service or modify another website
so as to falsely imply that it is associated with the Service, GitHub, or any
other GitHub service.
5. You agree not to reproduce, duplicate, copy, sell, resell or exploit any
portion of the Service, use of the Service, or access to the Service without
the express written permission by GitHub.
Although I guess if the GitHire guys never used Github they aren't bound to
those terms.
~~~
latchkey
They are in clear violation of those terms. Github is knowingly allowing them
to continue to use their services.
------
tosseraccount
The NYTimes and the WashPost are probably gearing up to get Congress to raise
the guest worker quotas again, after the election. They need a lot of "supply
and demand and capital mobility don't work anymore" stories. Just raise wages
and labor will adjust to the incentives. Most of this new stuff is not a labor
supply problem; it's a vision and management problem. "If I could just hire
the best engineers for real cheap, then..." Yeah, yeah, right.
------
rokhayakebe
I have no idea why companies that suddenly become wildly successful without
press start to go out and make themselves public, hence waking up every smart
guy in the game. Keep a low profile and milk it, maan.
~~~
jeremymcanally
Perceived legitimacy. In the recruiting game, you need a leg up on the next
200 mindless email drones. Saying "We're so good the NY Times covered us!"
goes a long way towards convincing both sides of the transaction that you're
worth the time.
------
shareme
Here is what I do not get:
1. Recruiters expect me to spend my free time at their disposal and use my dev
contacts for them? My dev contacts remain my dev contacts because they know
that if I ever turn over a contact to a 3rd party, I first do some review of
the opportunity to see if it's a scam or recruiter BS, and I only turn over
contacts to that 3rd party if they are a founder, co-founder, or the actual
manager the person will work under.
2. To me, relationships are formed over a good meal and a good drink; no
offense to you HN'ers, but that is what it comes down to. If someone wants to
recruit me, then consider taking me out for a meal. It's not necessarily the
cost of the meal, it's the fact that you took some time to spend on forming a
relationship with me.
|
{
"pile_set_name": "HackerNews"
}
|
Show HN: Automatically create a rest API for Rails - miquelb
https://github.com/miquelbarba/instant-api
======
miquelb
A gem that automatically generates a REST API from the models and the routes
of a Rails project. Please give me your feedback!
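To make the idea concrete, the effect is roughly equivalent to hand-writing
something like this in plain Rails (a simplified sketch only, not the gem's
actual internals; the route paths and controller name here are hypothetical):
a catch-all route plus one generic controller that derives the model class
from the URL and renders it as JSON.

    # config/routes.rb -- hypothetical catch-all routes for the generated API
    get "/api/:model",     to: "instant_api#index"
    get "/api/:model/:id", to: "instant_api#show"

    # app/controllers/instant_api_controller.rb
    class InstantApiController < ApplicationController
      # GET /api/users -> every User record as JSON
      def index
        render json: model_class.all
      end

      # GET /api/users/1 -> a single User record as JSON
      def show
        render json: model_class.find(params[:id])
      end

      private

      # Derives the model class from the route segment, e.g. "users" -> User
      def model_class
        params[:model].classify.constantize
      end
    end

A real gem would of course also cover create/update/destroy, strong
parameters, and a whitelist of which models may be exposed.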
|
{
"pile_set_name": "HackerNews"
}
|
Chip wars: China, America and silicon supremacy - sbuccini
https://www.economist.com/leaders/2018/12/01/chip-wars-china-america-and-silicon-supremacy
======
40acres
Full disclosure, I work for Intel.
We have a fabrication plant in Chengdu; it's public knowledge that this fab is
helping to manufacture products built on the latest process technology. As
I've come to learn more about China's tactics when dealing with foreign
companies, it's become of great concern to me what this plant means for our
future. I don't think it would be far-fetched to assume that some very
protected and valuable IP has leaked through our doors and into China's
hands. In all honesty, I really can't fathom how the American government let
this deal occur.
EDIT: To add more information [which is all public knowledge, so if you're
reading this, Intel folks, don't track me down! :)]: it's a packaging/assembly
plant working on Coffee Lake, which is 14nm++; CPUs are NOT fabbed there, as
Congress forbids it. My concern is more about the ability of the Chinese to
potentially reverse engineer these products at assembly and derive IP. Also,
our packaging technology is pretty advanced, so I'm concerned about even
having an assembly plant there at all.
~~~
vokep
From people I've talked to who do manufacturing in China...if you aren't
literally watching every single employee, every single moment, your IP is
being stolen.
~~~
setquk
I’ve considered leveraging this in electronic manufacturing. Send design off
to be cheaply fabricated, wait for it to appear on aliexpress as a clone with
all BOM cost optimisation done free. Buy bulk lots and resell in local
country. Bingo!
~~~
moflome
I agree. Doing this now for 5G "mesh" based BTLE devices which I hope will
drop dramatically in price. Working out low-cost PMIC, crystal and passive
component compatibilities for US based chipsets is a bit of a regulatory
issue, but the "clones" iterate so fast I expect them to have complete designs
available by mid-2019 and hope to make the money on the software / data.
~~~
setquk
Interesting. I suppose your value add could be EMC, packaging, support and
infrastructure on that.
------
jerf
"America has legitimate concerns about the national-security implications of
being dependent on Chinese chips and vulnerable to Chinese hacking."
I've lately been having this fantasy, or day dream perhaps, where China or the
US declares war on the other. The world cowers in fear at the impending
catastrophic conflict, yet as the hours drag on, it becomes clear that nothing
is happening. It turns out that the only net effect of the declaration of war
is that 10 minutes later, every piece of military equipment either country has
was bricked by the other's hackers and both militaries are now more-or-less
sitting on their butts, twiddling their thumbs.
In which case the victory goes to Russia, I suppose.
~~~
jamesjyu
Cixin Liu (of Three Body Problem fame) has a new novel called Ball Lightning
that talks about this exactly. Not a great book in terms of character
development or plot, but is chock full of intriguing military + technology
scenarios. Worth a read.
~~~
notSupplied
Fantastic read. The scale of the problem escalates so suddenly; I always love
how ambitious the scope of his stories is.
~~~
sfgunn
He makes me not care about character development, the science is so
titillating on its own.
------
11thEarlOfMar
I've personally observed this 'war' from a couple of angles. First is in the
process equipment required to fabricate the chips. China has endeavored to
create its own chip equipment industry, the way that Japan did in the 80s and
Korea did in the 90s. I was personally involved in the development of 3 product
lines by a Chinese capital equipment firm that have been in development for
10 years. They were a customer for our system component. To kick-start the
effort, they paid expert, targeted process consultants from the US. To date,
none of the 3 have cracked into volume shipments to the fabricators. Ten years
in, both Japan and Korea had built equipment that could at least compete
domestically. In Japan's case, internationally. The Chinese progress was
stifled by ineffective engineering management that ultimately drove our
program manager to resign.
Recently, I visited Korea and was told that Chinese semiconductor companies
are recruiting semiconductor process engineers from Korea, paying them 3-5x
their Korean salary under 5 year contracts, effectively buying out their
career. This could be an effective approach to garner the technical expertise
needed, but I wonder whether it will simply repeat the outcome of the
equipment effort: Experts were brought in and paid well for their knowledge,
but it could not be realized due to business cultural or management style
constraints.
This is my personal and obviously limited experience, but in the end,
fabricating semiconductors is a multidisciplinary endeavor of great
complexity. Success depends as much on commitment, pragmatism and effective
collaboration as it does on technical excellence. That is the price of entry
to leading edge technical achievement and I'm not sure the Chinese industrial
firms are there yet.
~~~
fspeech
You need customers who use your equipment to close the product improvement
cycle. Both Japan and South Korea have (or had) competitive chip manufacturers
who could help their domestic suppliers improve. Chinese equipment makers
don't have such luxury. Their potential fab customers are hobbled by US export
control and are surviving on thin margins. So they couldn't or at least
haven't been able to get into the positive feedback loop.
------
NicoJuicy
I'll repeat it again, my solution would be: Support for Made in India 2025.
Greed won't disappear, but you can shift the balance to a more western
friendly country.
They are already destroying a lot of countries. They lend them money so that
the Chinese can build their infrastructure. But almost no locals are
involved, so all the artificial inflation goes to China. The Belt is just an
excuse for colonizing the world (or at least the harbors and strategic
airports that they themselves have built; see Sri Lanka, the first to comply).
I think a lot of those countries that built a dam with Chinese money forget
that maintenance costs a lot also. Reference: FIFA football stadiums
PS: Feel free to share your concerns.
~~~
KSS42
They are adopting the USA's playbook.
See
[https://en.wikipedia.org/wiki/Confessions_of_an_Economic_Hit...](https://en.wikipedia.org/wiki/Confessions_of_an_Economic_Hit_Man)
According to Perkins, his role at Main was to convince leaders of
underdeveloped countries to accept substantial development loans for large
construction and engineering projects that would primarily help the richest
families and local elites, rather than the poor, while making sure that these
projects were contracted to U.S. companies. Later these loans would give the
U.S. political influence and access to natural resources for U.S. companies
India would do the same if it had the capital resources.
~~~
NicoJuicy
The USA's playbook is only 30% of China's; they are literally bankrupting them
at every turn.
Plus, there is a difference: companies can have PR issues now without just
paying off some key figures (the internet). Not saying that it's not possible,
but it should be (and is) harder than in the pre-internet age.
China hasn't got that problem, because they control their internet. And
they're trying to control yours (free internet through satellites).
------
Animats
This is a big deal. The US already lost its consumer electronics industry.
China's plan for 2025 is to have no dependence on foreign sources in certain
technology areas, including semiconductors. Who will buy from Intel then?
Someday Foxconn will figure out how to do without Apple in phones.
~~~
tynpeddler
I'm always fascinated by all the economists who write articles detailing the
cost of protectionism to the American economy but I've never seen an article
discussing how much Chinese economic policy costs them. Maybe I'm reading the
wrong journals.
~~~
RobertoG
Protectionism is only bad when you have the upper hand.
The way to develop a powerful industry is protectionism. When you have
achieved it, the way to avoid others develop powerful industries is free
trade.
It was not by way of free trade that the USA, Germany or Japan developed.
~~~
pzone
The costs of protectionism are typically diffuse and difficult to pinpoint.
They take the form of higher prices and reduced choice.
Here's one example. Chinese consumers use Alibaba, not Amazon. They use QQ,
not Twitter. They use Baidu, not Google. In all of those cases, the foreign
brands are higher quality products.
------
kappi
The story that all Chinese technology is stolen is not true. They have a lot
of talent and hardworking folks in cutting-edge technology. The vast majority
of employees in US chip companies doing cutting-edge work are also Chinese.
Most US engineers chase easy money in web technology etc. The same concern was
raised by Peter Thiel about VCs not investing in hard tech. Y Combinator is
also a classic example of investing tons of money only in short-term tech.
Most silicon tech needs 5-7 years or even more.
~~~
NicoJuicy
You are Chinese? Raw guess.
But the entire setup of China is made for stealing tech from companies who
invest there; this doesn't mean anyone isn't hardworking or hasn't got any
talent. I know the Chinese are hardworking.
Also, hardware needs a lot of capital in Western countries; workers are
expensive.
Starting for yourself (with software) and gradually growing is a lot
"easier" if you aren't rich.
~~~
dang
> You are Chinese?
Do not go down that road, please.
~~~
casefields
Oh please. The Chinese always sprout up to defend and pretend that the
unsavory aspects of Chinese industrial espionage are American/Western
propaganda. Happens all over the place on the Internet.
~~~
dang
We have a lot of experience with this, and I can tell you what is far more
common: people projecting manipulation and astroturfing onto legit users who
in fact are merely expressing an opposing point of view. Hauling out this
accusation without evidence is poisonous to discussion. That's why the site
guidelines ask you not to post such insinuations. Please read and follow them
from now on:
[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)
Following the site guidelines also means not using flamewar tropes like "Oh
please", and refraining from nationalistic flamewar in general. "The Chinese
always sprout up" is a pretty gross thing to see here.
------
doctorpangloss
When your industrial base depends on cheap labor, like in China, it's always
possible to compete by legislating wages down. You can solve your economic
woes with a stroke of a pen.
But people have to live shittily.
When your industrial base depends on IP, like in the US, it's very challenging
to defend secrets, especially in a free society. Competing on IP alone is very
risky. You have to spend exorbitant amounts of money on security.
But at least I can live wherever I want, freely vote for who represents me,
don't get sent to prison camps to make living space for the ethnic majority,
have pretty clean air and water, aspire to treat the downtrodden humanely,
treat people who aren't my blood relatives humanely, trust my social systems,
travel freely, say whatever I want, and do something with my life other than
engineering, making money or achieving subsistence survival. The investment in
human capital, that makes American labor so expensive, makes it durably
valuable regardless of technology, government or foreign relations.
Wait isn't that supposed to be the tradeoffs part?
~~~
40acres
Great response, dude. I'm so concerned because China is moving up the economic
ladder when it comes to production. China is no longer a place where the
British own the land and the people make T-shirts. These guys are stepping it
up: China is making phones... cars... they have their eyes set on aerospace,
they are building islands in the sea and infrastructure on other continents.
Look at what China is about to do to Africa; it seems downright colonial.
China has taken blue-collar jobs in America. Are they now about to start
making microprocessors? We have one of America's greatest sources of
IP-generating talent (Google) ready to risk it all and re-invest in the
country, and China is about to become an equal power. We don't seem to be
prepared.
~~~
kaveh_h
What is China doing to Africa? And how is that any different from what the US
does to any other region in the world? Even though America is a democracy, it
has not really promoted democracy overseas. Apart from earlier being directly
involved in wars itself, nowadays many bad foreign leaders are supported by US
politicians and leaders, allies which have been behaving badly for a long time.
The US's greatest talent is (perhaps was) its ability to attract the talent of
the world and create great value out of it. This happened primarily during and
after the world wars, as many nations were becoming more hostile places and
American open culture and values became a refuge for many. The US will only
truly lose its leadership position when it gives up on moral leadership.
~~~
surferbayarea
"attract the talent"...this is no longer the case. I talk to 50+ (new) senior
engineers/month (for the past 6 years), and there has been a serious mindset
shift in the last couple of years. I now hear things like "hasn't immigration
to the US become impossible", and with increasing remote work, people are
happy to (or prefer to) work from their home countries (better tax structures;
they actually make more money). While this won't hurt the US in the short
term, the next generation of top startups might not be in Silicon Valley.
------
_cs2017_
From a global perspective (rather than US- or China-specific), what exactly is
the cost of IP theft? Obviously, the reduced incentives to do R&D (if you
cannot monetize it as well, you'll invest less). That by itself could be
pretty devastating.
On the other hand, what are the advantages of IP theft? One that I can see is
that it creates more competition. I don't know if it helps much in the long
run though: if the competitors are just copying the leader, it's probably not
adding any value.
Any other pros and cons?
~~~
zanny
The more general cost is that if your country ignores IP while another
doesn't, you have an immense competitive advantage, provided the country you
are "stealing" from can't in some way punish you economically for it.
I'm personally an IP abolitionist but can recognize that China is abusing its
position here. It doesn't hurt the US nearly as much as it does Southeast
Asia, India, Japan, Korea, etc., which by and large obey US IP law. That was
in part what the TPP was meant to enshrine in more formalized, consistent, and
persistent terms: the Asian nations agree to obey US IP imperialism while the
US gives them advantageous trade opportunities over China. A large part of why
it was so heinous was how biased it was in favor of the US, precisely because
the countries surrounding China so desperately wanted a competitive edge over
China's ability to ignore US IP.
In many ways Trump's China tariffs are giving a lot of these nations what they
wanted without getting anything in return from them.
~~~
_cs2017_
I agree that IP theft hurts countries that don't partake in it.
However, I was asking about the effect on the world as a whole. Taking some
wealth from one country and giving it to another isn't directly changing the
amount of wealth in the world. But some second order effects may come into
play that do change the global wealth.
~~~
pzone
The reduced incentive for innovation from US firms hurts global wealth just as
much whether the IP copycats are in the US or if they're in China.
------
xvilka
A bit unrelated, but since it is important for chip development I'm posting it
here. There is an initiative [1] to make FPGA/ASIC design tooling more
universal and interconnected by sharing an LLVM-like low-level intermediate
representation (HDL), which is more powerful than Verilog or VHDL. At this
point many people see FIRRTL [2][3] as a viable candidate for this position,
with some enhancements.
[1]
[https://github.com/SymbiFlow/ideas/issues/19](https://github.com/SymbiFlow/ideas/issues/19)
[2]
[https://bar.eecs.berkeley.edu/projects/firrtl.html](https://bar.eecs.berkeley.edu/projects/firrtl.html)
[3]
[https://github.com/freechipsproject/FIRRTL](https://github.com/freechipsproject/FIRRTL)
------
halhod
HN may be interested in the longer briefing that this editorial is based on -
[https://www.economist.com/briefing/2018/12/01/the-
semiconduc...](https://www.economist.com/briefing/2018/12/01/the-
semiconductor-industry-and-the-power-of-globalisation)
------
jshowa3
For all the comments I see about China stealing IP, just remember, IP can be
stolen anywhere, even in America.
We have a couple production plants in China. I haven't seen any concrete
evidence that the employees are stealing IP. And if they are, they do a poor
job of reproducing it because they're almost never as good as the real thing.
Not to say it doesn't happen, but I just find it humorous that a lot of buzz
is around China precisely because most manufacturing is from there and not
other nations. There are also a lot of claims in reports, but no hard proof.
There's also no examination of the victim's practices, or it's done very
poorly. Not practicing good IP protection (obfuscation in your code,
encryption, anti-tamper mechanisms), which, from my experience, many American
companies don't practice, is just asking for trouble. They may be doing this,
but it's never mentioned in news articles.
Of course, given enough time and resources, most things can be cracked.
However, that's no excuse to make it easy.
~~~
ciupicri
The last part sounds like blaming the victims.
~~~
LMYahooTFY
Pointing out something a victim can do to secure themselves isn't "blaming the
victim".
------
known
Reverse engineering may not always work:
When the LS 400 was disassembled for engineering analysis, Cadillac engineers concluded that the vehicle could not be built using existing GM methods
[https://en.wikipedia.org/wiki/Lexus_LS#Industrial_significan...](https://en.wikipedia.org/wiki/Lexus_LS#Industrial_significance)
------
wangii
Hardworking, ambitious, and honest: you can have only two. Now pick.
|
{
"pile_set_name": "HackerNews"
}
|