Dataset columns:
- query: string (lengths 1 to 13.4k)
- pos: string (lengths 1 to 61k)
- neg: string (lengths 1 to 63.9k)
- query_lang: string (147 classes)
- __index_level_0__: int64 (0 to 3.11M)
Given $n$ different univariate non-normal sample sets, calculate for a new sample $x$ which one it most likely belongs to. Say you have $n$ different, non-normal, potentially overlapping data sets of samples. Maybe their densities look something like [plot omitted]. If you are given a new sample $x$, how would you decide to which of these sample sets it most likely belongs? I would assume it is possible to do a (kernel) density estimation of each of the sample sets, interpolate them to functions $p_n$, and then calculate $p_n(x)$ for each, selecting the one with the highest probability. However, both the decision rule and the method seem, well, off. Can anybody help me get a better intuition for this problem? EDIT: Say the data comes from a continuous scoring system, like a depression scale, and I have annotated data for many different subjects. So I can get the density plots for "severe", "mildly" and "non-depressed" subjects. Now I have a new sample and wish to know (based on the score) which group the person most likely belongs to.
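One concrete way to picture the density-based decision rule described above is a small kernel-density sketch. Everything below is illustrative: the three score arrays are made-up stand-ins for the annotated "severe" / "mild" / "none" groups, and the optional prior weighting is an assumption added here, not part of the original question.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Sketch only: the three arrays below are made-up stand-ins for the annotated
# "severe" / "mild" / "none" score samples mentioned in the question.
rng = np.random.default_rng(0)
samples = {
    "severe": rng.normal(40, 5, 200),
    "mild": rng.normal(25, 6, 200),
    "none": rng.normal(10, 4, 200),
}
kdes = {label: gaussian_kde(obs) for label, obs in samples.items()}

def classify(x, priors=None):
    # Score each group by prior * estimated density at x and take the argmax;
    # with equal priors this is exactly the "pick the highest p_n(x)" rule.
    if priors is None:
        priors = {label: 1.0 / len(kdes) for label in kdes}
    scores = {label: priors[label] * kde(x)[0] for label, kde in kdes.items()}
    return max(scores, key=scores.get)

print(classify(22.0))  # e.g. expected to land in the "mild" group here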
Which Distribution Does the Data Point Belong to? I have two distributions which are derived from 2 separate sets of data. These distributions are not normal, and it is not clear at this point if they belong to any family of known pdfs (they are not symmetric either). Given a data point, I need to decide which distribution it is most likely to belong to. If these were normal distributions, I could do the usual parametric tests and go from there, but in this case I am not sure how to proceed. I searched around a bit, but I couldn't find anything, probably because I am not using the right keywords. Any help is much appreciated. Edit to clarify: I should have mentioned that it was univariate. I might as well explain the actual problem here as well. We have time-spent data for users on websites. We also have information on whether a user liked or disliked some of those sites (about 4% of all the sites). So we know the distribution of time spent for like and dislike. An obvious question, then, is: for a random user who spent x amount of time, are they more likely to have liked the page or disliked the page? Our time-spent information is based on seconds, so the distributions are really discrete, but in real life, they are continuous.
What regression model is the most appropriate to use with count data? I am trying to get a little into statistics, but I am stuck with something. My data are as follows: Year / Number_of_genes: 1990: 1; 1991: 1; 1993: 3; 1995: 4. I now want to build a regression model to be able to predict the number of genes for any given year based on the data. I did it with linear regression until now, but I have done some reading and it does not seem to be the best choice for this kind of data. I have read that Poisson regression might be useful, but I am unsure what to use. So my question is: Is there a general regression model for this kind of data? If not, what do I have to find out about the data to decide which method is the most appropriate to use?
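If Poisson regression does turn out to be a reasonable family for these counts, a minimal fit could look like the sketch below. The tiny data frame just reproduces the four (year, count) pairs quoted in the question, and statsmodels is one possible tool among several, not a prescription.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Sketch only: the data frame reproduces the four (year, count) pairs quoted
# in the question; statsmodels is one possible tool, not the only one.
df = pd.DataFrame({"year": [1990, 1991, 1993, 1995],
                   "genes": [1, 1, 3, 4]})

# Poisson GLM with the default log link: log E[genes] = b0 + b1 * year
fit = smf.glm("genes ~ year", data=df, family=sm.families.Poisson()).fit()
print(fit.summary())

# Predicted count for an unseen year (purely illustrative extrapolation)
print(fit.predict(pd.DataFrame({"year": [1997]})))
```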
eng_Latn
34,500
A games/matches schedule. Suppose I have 8 teams $A_1,\ldots, A_8$, competing in 8 different games $X_1,\ldots, X_8$. For example $X_1 =$ soccer, $X_2 =$ basketball, $\ldots$ In the morning, 4 games $X_1,X_2,X_3,X_4$ take place, with 16 matches. In the afternoon, another 4 games $X_5,X_6,X_7,X_8$ take place, again with 16 matches. Can I schedule the matches (I can freely choose any game) such that: a) Every team participates in 4 different games in the morning and 4 different games in the afternoon. b) Every team has a chance to meet the other 7 teams? c) Each game needs two teams to play in one match?
Scheduling a team based activity with 10 different teams and X different games. Every team must meet each other ONCE but never twice. Hi there, I am planning a "Scavenger hunt"-like activity (I have no other words for this event), and I am having trouble creating the schedule. The idea of this hunt/run/game/competition is that 10 teams will battle each other at X different mini games. The games are team vs. team based, meaning that there will always be two teams on each post and one winner and one loser. I require that every team fights every other team, meaning that team 1 will have to meet teams 2, 3, 4, 5, 6, 7, 8, 9, and 10, and team 2 will have to meet teams 1, 3, 4, 5, 6, 7, 8, 9, 10 (and so forth). The number of mini games must be such that every team meets each of the other teams exactly ONCE, but still meets every team. How many posts do I need to create, and how could I schedule the overall activity? (A table of rounds and teams would be much appreciated.)
How often do you have to roll a 6-sided die to obtain every number at least once? I've just played a game with my kids that basically boils down to: whoever rolls every number at least once on a 6-sided die wins. I won, eventually, and the others finished 1-2 turns later. Now I'm wondering: what is the expectation of the length of the game? I know that the expectation of the number of rolls till you hit a specific number is $\sum_{n=1}^\infty n\frac{1}{6}(\frac{5}{6})^{n-1}=6$. However, I have two questions: How many times to you have to roll a six-sided die until you get every number at least once? Among four independent trials (i.e. with four players), what is the expectation of the maximum number of rolls needed? [note: it's maximum, not minimum, because at their age, it's more about finishing than about getting there first for my kids] I can simulate the result, but I wonder how I would go about calculating it analytically. Here's a Monte Carlo simulation in Matlab mx=zeros(1000000,1); for i=1:1000000, %# assume it's never going to take us >100 rolls r=randi(6,100,1); %# since R2013a, unique returns the first occurrence %# for earlier versions, take the minimum of x %# and subtract it from the total array length [~,x]=unique(r); mx(i,1)=max(x); end %# make sure we haven't violated an assumption assert(numel(x)==6) %# find the expected value for the coupon collector problem expectationForOneRun = mean(mx) %# find the expected number of rolls as a maximum of four independent players maxExpectationForFourRuns = mean( max( reshape( mx, 4, []), [], 1) ) expectationForOneRun = 14.7014 (SEM 0.006) maxExpectationForFourRuns = 21.4815 (SEM 0.01)
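For reference, the simulated 14.7014 matches the standard coupon-collector calculation for a fair six-sided die:

$$E[T]=\sum_{i=1}^{6}\frac{6}{6-i+1}=6\left(\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+\frac{1}{6}\right)=\frac{147}{10}=14.7.$$

The maximum over four independent players is messier (it can be attacked through the distribution function of a single run rather than its mean), which is why simulation is a reasonable route for the second question.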
eng_Latn
34,501
Given time series forecast and associated confidence interval, how to calculate the total return's confidence interval Suppose I bought 12 widgets for a set price, and I am able to sell one widget per month for the next 12 months. Also imagine that I have a monthly ARIMA forecast with confidence intervals for the price of a widget for this time period. How would I calculate the confidence interval around the expected profit from my widget sales? I don't think I can treat this probability distribution of a sum of 12 independent normals because each month has dependence on the prior months. Is there a good way to calculate the total profit's confidence interval given these inputs and the historical time series? Simple graph for reference:
Forecast total for a year given monthly time series I have a monthly time series (for 2009-2012 non-stationary, with seasonality). I can use ARIMA (or ETS) to obtain point and interval forecasts for each month of 2013, but I am interested in forecasting the total for the whole year, including prediction intervals. Is there an easy way in R to obtain interval forecasts for the total for 2013?
Compute $P(X_1+\cdots+X_k\lt 1)$ for $(X_i)$ i.i.d. uniform on $(0,1)$ Consider $N=\min\{n: S_n>1\}$, where $S_n=X_1+\cdots+X_n$ and $(X_i)_{i=1}^\infty$ is i.i.d. uniform on $(0,1)$. So, $N$ is the first time that $(S_n)_{n=1}^\infty$ crosses $1$. I'd like to calculate $E(N)$. To this end I'd like to calculate $\Pr(N\gt k)=\Pr(S_k\lt 1)$. Is there a way to actually do this for any arbitrary $k$? I mean, certainly there is as I know about the distribution of $S_n$ but the formula is just formidable when you move with $k$ toward $\infty$ and even though tractable with a computer, I don't think it's tractable without it. Is there maybe a better way to calculate the expectation? Thanks for any help.
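A worked sketch of the calculation being asked for: the event $\{S_k<1\}$ is the standard $k$-dimensional simplex, whose volume gives

$$\Pr(S_k<1)=\frac{1}{k!},\qquad E(N)=\sum_{k=0}^{\infty}\Pr(N>k)=\sum_{k=0}^{\infty}\frac{1}{k!}=e\approx 2.718.$$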
eng_Latn
34,502
Characteristic function of uniform random variable. I am trying to find the expectation of a function of a uniform random variable. I am given a random variable $x$ that is uniformly distributed over the interval $[0, a]$. I want to find the expectation $E[e^{-i2\pi (m-n)\frac{x}{a}}]$, where $m$, $n$ are integers and $i=\sqrt{-1}$. I saw the derivation in a few places, and it looks like the expectation evaluates to $\delta_{mn}$. It is not clear to me how this derivation is carried out. When I look at the expectation, it appears I am trying to calculate the characteristic function of a uniform random variable. This function is not a delta function. So what am I missing here?
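A short check of where the Kronecker delta comes from (this only evaluates the expectation at the specific frequencies $2\pi(m-n)/a$, not the full characteristic function):

$$E\left[e^{-i2\pi(m-n)X/a}\right]=\frac{1}{a}\int_{0}^{a}e^{-i2\pi(m-n)x/a}\,dx=\begin{cases}1, & m=n,\\[4pt] \dfrac{e^{-i2\pi(m-n)}-1}{-i2\pi(m-n)}=0, & m\neq n,\end{cases}$$

since $e^{-i2\pi k}=1$ for every integer $k$; hence the value $\delta_{mn}$.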
Consider the sum of $n$ uniform distributions on $[0,1]$, or $Z_n$. Why does the cusp in the PDF of $Z_n$ disappear for $n \geq 3$? I've been wondering about this one for a while; I find it a little weird how abruptly it happens. Basically, why do we need just three uniforms for $Z_n$ to smooth out like it does? And why does the smoothing-out happen so relatively quickly? $Z_2$: $Z_3$: (images shamelessly stolen from John D. Cook's blog: ) Why doesn't it take, say, four uniforms? Or five? Or...?
Why are these estimates to the German tank problem different? Suppose that I observe $k=4$ tanks with serial numbers $2,6,7,14$. What is the best estimate for the total number of tanks $n$? I assume the observations are drawn from a discrete uniform distribution with the interval $[1,n]$. I know that for a $[0,1]$ interval the expected maximum draw $m$ for $k$ draws is $1 - (1/(1+k))$. So I estimate $\frac {k}{k+1}$$(n-1)≈$ $m$, rearranged so $n≈$ $\frac {k+1 }{k}$$m+1$. But the frequentist estimate from is defined as: $n ≈ m-1 + $$\frac {m}{k}$ I suspect there is some flaw in the way I have extrapolated from one interval to another, but I would welcome an explanation of why I have gone wrong!
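Plugging the numbers in makes the discrepancy concrete (this assumes sampling without replacement from $\{1,\dots,n\}$, the usual German tank setup):

$$E[\max]=\frac{k(n+1)}{k+1}\;\Rightarrow\;\hat n=\frac{k+1}{k}\,m-1=\frac{5}{4}\cdot 14-1=16.5=m-1+\frac{m}{k},$$

whereas the rearrangement in the question, $\hat n\approx\frac{k+1}{k}m+1$, gives $18.5$. The gap comes from how the $[0,1]$ result maps onto $\{1,\dots,n\}$: the discrete maximum relates to $n+1$, so the correction is $-1$, not $+1$.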
eng_Latn
34,503
I want to be hit, but my AC is too high One of my players has a build that grants advantages when he is hit, especially when hit by an Opportunity Attack. The problem is that his AC is very high, so straight rolls miss. My natural inclination is that the AC numbers shouldn't matter. If a player wants to be hit, he could move into the incoming attack. So, as long as the monster doesn't roll a 1, all Opportunity Attacks should land if the Players wants them to. ("Look at me! I'm very slowly running away! I hope all these big bad monsters don't strike me as I turn my back and trim my fingernails!") Yes, I know that plate armor, for instance, will deflect any blow if hit in the center of the plate, but there has to be some way to expose/open oneself to damage. I'm having trouble finding rules for when a players wants to be hit. What's the mechanic?
Can I choose not to defend against an attack? Playing D&D4e, I have a situation in mind where a character could reasonably WANT to get hit with a normally harmful attack. Is there anything in the rules allowing the character to purposefully take the hit, not even bothering to attempt to defend or dodge out of the way of it? For example, a dragonborn character makes a breath attack that includes an ally in its blast. The dragonborn has the Nusemnee's Atonement feat, which allows a player whose AOE damages an ally to take the ally's damage instead. This can be paired with a dragonborn feat that recharges your racial dragonbreath power when you take damage of the type it deals. This combination lets the dragonborn recharge their dragonbreath as long as they hit an ally with it. Normally you'd want your ally to at least have a chance to dodge your AOE first, but in this combo, it's best if they can allow themselves to get hit! Can the ally choose not to defend against the attack, purposefully taking the hit, knowing that the character making the attack will take the damage in his place?
How often do you have to roll a 6-sided die to obtain every number at least once? I've just played a game with my kids that basically boils down to: whoever rolls every number at least once on a 6-sided die wins. I won, eventually, and the others finished 1-2 turns later. Now I'm wondering: what is the expectation of the length of the game? I know that the expectation of the number of rolls till you hit a specific number is $\sum_{n=1}^\infty n\frac{1}{6}(\frac{5}{6})^{n-1}=6$. However, I have two questions: How many times to you have to roll a six-sided die until you get every number at least once? Among four independent trials (i.e. with four players), what is the expectation of the maximum number of rolls needed? [note: it's maximum, not minimum, because at their age, it's more about finishing than about getting there first for my kids] I can simulate the result, but I wonder how I would go about calculating it analytically. Here's a Monte Carlo simulation in Matlab mx=zeros(1000000,1); for i=1:1000000, %# assume it's never going to take us >100 rolls r=randi(6,100,1); %# since R2013a, unique returns the first occurrence %# for earlier versions, take the minimum of x %# and subtract it from the total array length [~,x]=unique(r); mx(i,1)=max(x); end %# make sure we haven't violated an assumption assert(numel(x)==6) %# find the expected value for the coupon collector problem expectationForOneRun = mean(mx) %# find the expected number of rolls as a maximum of four independent players maxExpectationForFourRuns = mean( max( reshape( mx, 4, []), [], 1) ) expectationForOneRun = 14.7014 (SEM 0.006) maxExpectationForFourRuns = 21.4815 (SEM 0.01)
eng_Latn
34,504
Can two events be mutually exclusive but not independent? So, two events are mutually exclusive if their intersection is empty, and two events are independent if they do not affect each other. However, is it possible for two events to be mutually exclusive but not independent? For example, let A = {a card less than 5 is drawn} and B = {a face card is drawn}. Then P(A&B) = 0, P(A) = 12/52, P(B) = 12/52. The test for independence fails since P(A&B) does not equal P(A)*P(B). Therefore, this must mean that the events are not independent but are mutually exclusive. Is this logically possible?
What is the difference between independent and mutually exclusive events? Two events are mutually exclusive if they can't both happen. Independent events are events where knowledge of the probability of one doesn't change the probability of the other. Are these definitions correct? If possible, please give more than one example and counterexample.
Probability of complete coupon set in $K$ boxes where a box has $N$ distinct coupons? Say there is a contest to collect a full set of $M$ coupons. Each box of product has $N$ distinct coupons of the $M$ possible coupons in it, selected uniformly without replacement from the $M$ possible coupons, the same $N$ in number for all boxes. I'm interested in the probability that after opening $K$ boxes of product I have at least one complete set of the $M$ coupons. My attempt has been to take the probability of a coupon not being in a box, use that to determine the probability of that coupon not being in any of the $K$ boxes, and then... I get flummoxed, and don't think this is the correct approach. I searched the site for coupon collector problems, there are a couple that deal with similar situations (multiple in a box), but with the box having coupons selected with replacement, so duplication in a box is possible. Right now I'm using a CAS to get probability of sums of hypergeometric samples having all bins covered to check my ideas, but that's utterly impractical for other than tiny cases. Is this not as trivial as I think it is, or is there a direct way to calculate this?
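One way to get a direct (if not exactly tiny) formula is inclusion-exclusion over which of the $M$ coupon types are still missing after $K$ boxes. Since each box is a uniformly chosen $N$-subset, a fixed set of $j$ types is missed by one box with probability $\binom{M-j}{N}/\binom{M}{N}$, and the boxes are independent, giving

$$\Pr(\text{complete set after }K\text{ boxes})=\sum_{j=0}^{M-N}(-1)^{j}\binom{M}{j}\left(\frac{\binom{M-j}{N}}{\binom{M}{N}}\right)^{K}.$$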
eng_Latn
34,505
Hat problem and probability. There are 7 prisoners in a room. At the entrance, each of them gets a hat in one of 2 random colors: white or black. They sit in a circle with the light on. Each of them can see the hat colors of the others but cannot see his own. They will be freed if at least one of them names his own hat color correctly and no one is wrong (each prisoner may either give an answer or stay silent). The other prisoners can't hear the answers. How do we find the strategy with the best probability of success?
A riddle about guessing hat colours (which is not among the commonly known ones) This is a riddle I heard recently, and my question is if someone happens to know the solution. I'm asking this out of curiosity more than anything else. So here it is. The riddle is one of the countless variations of the "prisoners have to guess their hat colour" puzzle. $n$ prisoners are put a hat on top of their head, which can be red or blue. The colours are chosen at random by $n$ independent fair coin tosses. Then each prisoner can guess their own hat colour (red or blue) or pass. The prisoners can see each other, but not hear each other's calls and of course they have no other means of communication. This means that each call can only depend on the other prisoners' hat colours. However, before the distributing of hats begins, the prisoners are told the rules and can agree on a strategy. The prisoners win iff no prisoner guesses wrong and at least one prisoner guesses right. Which strategy should the prisoners use so that the winning probability becomes maximal? Some remarks: A simple strategy is that one player just guesses and all other players pass, so that the maximal probabilty is at least 1/2. For $n=2$ this strategy is optimal. For $n=3$, there is a strategy that wins in 6 out of 8 cases: When a player sees (red,red) he guesses blue, for (blue,blue) he guesses red, and otherwise he passes. More generally this shows that the maximal probability is at least 3/4 for $n\ge 3$. It's possible to show that any strategy fails for at least 2 hat colour configurations (unless $n=1$), which shows that the above strategy is optimal for $n=3$. For $n=4$ there are more than $10^{15}$ strategies, and for $n=5$ it's about $10^{38}$ strategies, making it quite infeasible to just use a brute-force computer program (maybe for $n=4$ it's possible when exploiting the obvious symmetries). When changing the rules slightly by forbidding players to pass, then the maximal winning probability is always 1/2. This is a nice little exercise. Actually I heard the riddle only for $n=3$ and then thought about the general $n$. So it's entirely possible that there is no nice solution.
What's the best way to deal with hot/stuck pixels in long exposure night photographs? I discovered red/blue/white pixels on my night photographs tonight. Trying to confirm this, I took a black image with the lens on and indeed I saw a whole lot of them (I didn't count them but at least a good fifty on the whole picture). I'm not exactly sure if they are hot pixels, stuck pixels or anything else. I quickly gave a check to my last exported JPEG files (I get RAW out of the camera) and couldn't find anything. After a quick search, I've seen that Lightroom is caring about those pixels after RAW conversion. I've read that, with Canon DSLRs, there is a software process that could take care of that directly on my camera. I don't know if I can use it without getting things even more messy. My question is: Considering my crazy pixels are not visible on my pictures, should I get too emotional about them? My camera is under warranty but is it really worth it to return it to Canon? If it's only to receive it back 1 month later after Canon just used the very same menu to map out the pixels, it's a bit ridiculous. Could this be a sign that my camera has a bigger problem? It seems related to shutter speed. When shooting a black image at 1/250s I get nothing. When shooting the same setup but 1s, I get a few of them. The more I increase the shutter speed, the more crazy pixels I get.
eng_Latn
34,506
Making up groups of Coins In how many ways can a group of 100 coins be made up from 50,20,10,5,2 or 1 coin(s) respectively? An alternative way of phrasing this would be how many ways can a group of 100 coins be made from choosing coins of value 50,20,10,5,2,1? I have tried the problem, and I've realised it is quite difficult to keep track of the arrangements checked (I know the answer is around 4000 though). Can this be solved without a computer?
Making Change for a Dollar (and other number partitioning problems) I was trying to solve a problem similar to the "how many ways are there to make change for a dollar" problem. I ran across a that said I could use a generating function similar to the one quoted below: The answer to our problem (293) is the coefficient of $x^{100}$ in the reciprocal of the following: $(1-x)(1-x^5)(1-x^{10})(1-x^{25})(1-x^{50})(1-x^{100})$ But I must be missing something, as I can't figure out how they get from that to $293$. Any help on this would be appreciated.
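A sketch of how the coefficient of $x^{100}$ can actually be extracted: the reciprocal of that product is the generating function whose $x^k$ coefficient counts the ways to make $k$ cents, and the same number falls out of a small dynamic program over the coin values. The function name below is just illustrative.

```python
# Sketch: the coefficient of x^100 in
#   1 / ((1-x)(1-x^5)(1-x^10)(1-x^25)(1-x^50)(1-x^100))
# equals the number of ways to make 100 cents from the listed coin values,
# which this dynamic program counts directly.
def ways_to_make(total, coins=(1, 5, 10, 25, 50, 100)):
    ways = [0] * (total + 1)
    ways[0] = 1                      # one way to make 0: use no coins
    for c in coins:                  # multiplies in one factor 1/(1-x^c) at a time
        for value in range(c, total + 1):
            ways[value] += ways[value - c]
    return ways[total]

print(ways_to_make(100))             # 293, matching the figure quoted above
```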
Probability of complete coupon set in $K$ boxes where a box has $N$ distinct coupons? Say there is a contest to collect a full set of $M$ coupons. Each box of product has $N$ distinct coupons of the $M$ possible coupons in it, selected uniformly without replacement from the $M$ possible coupons, the same $N$ in number for all boxes. I'm interested in the probability that after opening $K$ boxes of product I have at least one complete set of the $M$ coupons. My attempt has been to take the probability of a coupon not being in a box, use that to determine the probability of that coupon not being in any of the $K$ boxes, and then... I get flummoxed, and don't think this is the correct approach. I searched the site for coupon collector problems, there are a couple that deal with similar situations (multiple in a box), but with the box having coupons selected with replacement, so duplication in a box is possible. Right now I'm using a CAS to get probability of sums of hypergeometric samples having all bins covered to check my ideas, but that's utterly impractical for other than tiny cases. Is this not as trivial as I think it is, or is there a direct way to calculate this?
eng_Latn
34,507
Probability of a fraction $a/b$ that cannot be simplified Let $a$ and $b$ be random integers chosen independently from the uniform distribution on $\{1, 2,\dotsc, N\}$. As $N \rightarrow \infty$, what is the probability that the fraction: $$\frac{a}{b}$$ cannot be simplified? Note: As specified in the comments, the question is the same as .
Probability that two random numbers are coprime is $\frac{6}{\pi^2}$ This is a really natural question for which I know a stunning solution. So I admit I have a solution, however I would like to see if anybody will come up with something different. The question is What is the probability that two numbers randomly chosen are coprime? More formally, calculate the limit as $n\to\infty$ of the probability that two randomly chosen numbers, both less than $n$ are coprime.
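For reference, the limiting answer and the heuristic behind it: requiring that no prime divides both numbers, and treating divisibility by different primes as independent, gives

$$\Pr(\gcd(a,b)=1)\;\longrightarrow\;\prod_{p\ \text{prime}}\left(1-\frac{1}{p^{2}}\right)=\frac{1}{\zeta(2)}=\frac{6}{\pi^{2}}\approx 0.6079.$$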
Coupon collector's problem: mean and variance in number of coupons to be collected to complete a set (unequal probabilities) There are $n$ coupons in a collection. A collector has the ability to purchase a coupon, but can't choose the coupon he purchases. Instead, the coupon is revealed to be coupon $i$ with probability $p_i=\frac 1 n$. Let $N$ be the number of coupons he'll need to collect before he has at least one coupon of each type. Find the expected value and variance of $N$. Bonus: generalize to the case where the probability of collecting the $j$th coupon is $p_j$ with $\sum\limits_{j=1}^n p_j=1$ I recently came across this problem and came up with/ unearthed various methods to solve it. I'm intending this page as a wiki with various solutions. I'll be posting all the solutions I'm aware of (4 so far) over time. EDIT: As mentioned in the comments, this question is different from the one people are saying it is a duplicate of since (for one thing) it includes an expression for the variance and it covers the general case where all coupons have unequal probabilities. The case of calculating the variance for the general case of coupons having unequal probabilities has not been covered anywhere on the site apart from , which this one intends to consolidate along with other approaches to solve this problem. EDIT: Paper on the solutions on this page submitted to ArXiv:
eng_Latn
34,508
Statistical question - conditional probability. I would like to ask the question below: I am trying to detect a kind of event by experiment. I repeated the experiment 21 times, but the event of interest never occurred. My question is: based on the current results (21 repeats without any occurrence), what is a reasonable estimate of the probability of the event? (Or, what is the posterior distribution of the probability of the event?) Looking forward to your great ideas.
Confidence interval around binomial estimate of 0 or 1 What is the best technique to calculate a confidence interval of a binomial experiment, if your estimate is that $p=0$ (or similarly $p=1$) and sample size is relatively small, for example $n=25$?
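As a quick worked illustration for the 21-trials-with-no-occurrences case in the query above (standard results, stated without derivation): the exact one-sided 95% Clopper-Pearson upper bound solves $(1-p)^{21}=0.05$, and the "rule of three" approximates it as $3/n$:

$$p_{\text{upper}}=1-0.05^{1/21}\approx 0.133,\qquad \frac{3}{n}=\frac{3}{21}\approx 0.143.$$

With a uniform prior, the posterior is $\mathrm{Beta}(1,22)$, with mean $1/23\approx 0.043$.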
Why are these estimates to the German tank problem different? Suppose that I observe $k=4$ tanks with serial numbers $2,6,7,14$. What is the best estimate for the total number of tanks $n$? I assume the observations are drawn from a discrete uniform distribution with the interval $[1,n]$. I know that for a $[0,1]$ interval the expected maximum draw $m$ for $k$ draws is $1 - (1/(1+k))$. So I estimate $\frac {k}{k+1}$$(n-1)≈$ $m$, rearranged so $n≈$ $\frac {k+1 }{k}$$m+1$. But the frequentist estimate from is defined as: $n ≈ m-1 + $$\frac {m}{k}$ I suspect there is some flaw in the way I have extrapolated from one interval to another, but I would welcome an explanation of why I have gone wrong!
eng_Latn
34,509
Why isn't $0.999$... the largest number before 1? Why isn't it called that? It seems fair: $1$ is called $1$, while $0.999$... would be the largest number before $1$ and not be called $1$, since it doesn't look like it is. Let's say it isn't that number; what would the largest number before $1$ look like?
Is it true that $0.999999999\ldots=1$? I'm told by smart people that $$0.999999999\ldots=1$$ and I believe them, but is there a proof that explains why this is?
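Two of the standard short arguments, for reference:

$$x=0.999\ldots\;\Rightarrow\;10x=9.999\ldots\;\Rightarrow\;10x-x=9\;\Rightarrow\;x=1,\qquad 0.999\ldots=\sum_{n=1}^{\infty}\frac{9}{10^{n}}=\frac{9/10}{1-1/10}=1.$$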
Discrete Probability: Four dice are thrown, what's the probability that... Four dice are thrown, what's the probability that: a) None of them fall higher than three? b) None of them fall higher than four? c) That four is the highest number thrown? So for a, I wanna think about the denominator first. There are 6 possible outcomes for each dice, and we have four dice. So we're technically picking r out of n objects, aka picking 4 possible values out of 6. So the denominator/sample space should be $6^4$ right? Now this is where I'm getting tripped up. Order technically shouldn't count, because we only care about the quadruplets that don't have a value higher than 3. I might be wrong, maybe I am, but could someone explain a bit more? My professor stated that typically in the sample space/denominator, we want order to count. So I'm trying to think of the numerator now, so we want to find the probability that no values appear higher than 3, so some events that can occur are: $(1, 1, 2, 3), (1, 2, 2, 3), (1, 1, 1, 1)$ etc. Again, I can't see why order should count here, because if we only are concerned about what appears rather than how they appear, then we can say $(1, 1, 2, 3)$ is equal to $(1, 2, 1, 3)$? So if order does not count, and we have replacement, then does this lead the case where we use $$\binom{n-1+r}{r}$$to find out the probability? I'm pretty sure if I can do a), I could prob do b), but if someone could maybe lead me in the right direction for c) I would appreciate that too!
eng_Latn
34,510
Mean of fractions or fraction of means? When to use the mediant and when to use the mean? When would we use the mediant? Clarifying the problem: I was working on some numbers. The data had the duration of a job, the number of steps to complete a job and the starting date of a job. All rows had similar data. I wanted to know the average duration per step. Somehow I got into the process of overthinking. What I was doing is what I would normally do when computing an average, so in my case: $$ \frac{1}{n}\sum_{i=1}^{n} \frac{d_i}{s_i} $$ In which $d$ is the duration of a job, $s$ the number of steps to complete a job and $i$ the job (I am neglecting the optional time binning here). But why would I not just compute the average duration and the average number of steps, divide those and use that number, so: $$ \frac{\frac{1}{n}\sum_{i=1}^n d_i}{\frac{1}{n}\sum_{i=1}^n s_i} = \frac{\sum_{i=1}^n d_i}{\sum_{i=1}^n s_i} $$ also called the mediant. I understand enough of the math to know these do not compute to the same value in general. So the question is more fundamental, I guess? Why can't we use the last one, simply because an average divided by an average is not an average? When would we use the last expression (the mediant)?
Difference between $\sum_{i=1}^{k}{\frac{s_i}{kn_i}}$ and $\frac{\sum_{i=1}^{k}{s_i}}{\sum_{i=1}^{k}{n_i}}$ I am getting confused at how to calculate the average probability. Suppose we repeat a kind of binary survey $k$ times each of which was done on a completely separate sample group. For each $i^{th}$ group, $i=1,2,3,...,k$, Let $n_i$ be the number of samples and $s_i$ be the number of positive results. With this we know that the probability $p_i$ of the positive result for the $i^{th}$ group is $s_i/n_i$ With my ignorance, I happened to use two ways to calculate the average probability in my work wrongly assuming that they are the same: $$\frac{p_1+p_2+p_3+...+p_k}{k}$$ and $$\frac{s_1+s_2+s_3+...+s_k}{n_1+n_2+n_3+...+n_k}$$ I don't know which is the correct way to calculate the average probability. So could you please explain the difference of these two and when to use which?
Combinatorics Distribution - Number of integer solutions, concept explanation. I am reading my textbook and I don't understand the concept of distributions or the number of solutions to an equation. It's explained that this problem is one of 4 types of sampling/distribution problems. An example is provided to illustrate: In how many ways can 4 identical jobs (indistinguishable balls) be distributed among 26 members (urns) without exclusion (since one member can do multiple jobs)? A sample outcome might be a row of 26 urns labelled A through Z with circles marking the jobs, e.g. one job in urn A, two in urn B, and one in urn Z. Thus, the question is reduced to, "How many $(26-1+4)$ letter words are there consisting of four circles and $(26-1)$ vertical lines?" Therefore, the solution is: $\binom{26-1+4}{4}$. I really don't understand why it's $(26-1+4)$. There are only 26 different spots or "urns" to place the 4 jobs. Can someone please explain? I looked through another text to try and understand and I found it explained as such: There are $\binom{n+r-1}{r-1}$ distinct nonnegative integer valued vectors $(x_1,\ldots,x_r)$ satisfying the equation $x_1 + \cdots + x_r = n$ with $x_i\ge0$. $\spadesuit$ How in the world are they deriving this? For distinct positive integers I understand: Assume I have 8 balls ($n=8$) and I have 3 urns ($r=3$): o^o^o^o^o^o^o^o, where o represents a ball and ^ represents a place holder where an urn divider could be placed. For this scenario: There are $\binom{n-1}{r-1}$ distinct positive integer valued vectors $(x_1,\ldots,x_r)$ satisfying the equation $x_1 + \cdots + x_r = n$, $x_i>0$, $i=1,\ldots,r$. $\clubsuit$ It's clear that I could have this specific case ooo|ooo|oo. Here the bar represents a divider for the urns and you see I have 3 sections. So that case is clear. Can anyone please explain this problem to me? I don't understand the nonnegative integer case. Also, people who post tend to be crazy smart and explain things in a very complicated manner. I'd appreciate it if it could be explained in layman's terms as much as possible. Thank you!!!
eng_Latn
34,511
Prove that the expectation of the number of black balls preceding the first white ball is $\frac {b}{w+1}$ Balls are taken one by one out of an urn containing $w$ white and $b$ black balls until the first white ball is drawn. Prove that the expectation of the number of black balls preceding the first white ball is $\frac {b}{w+1}$ Attempt: Let $X_i$ be the random variable that denotes the number of black balls that are drawn at the $i_{th}$ step before a white ball is drawn. Then, the total number of such balls $ X= X_1 + \cdots+X_n \implies E(X)=\sum E(X_i).$ $E(X_i)= 1 \cdot \dfrac {^bC_r}{^{b+w}Cr}\cdot \dfrac {^wC_1}{^{b+w-r}C_1}$ Thus, $\sum E(X_i) = \sum_{i=1}^{b} ~ 1 \cdot \dfrac {^bC_r}{^{b+w}Cr}\cdot \dfrac {^wC_1}{^{b+w-r}C_1}$ Could someone please tell me if I attempted this correctly? Because I get a very complicated answer in the end after evaluating the above. Thanks a lot!
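A symmetry argument that sidesteps the combinatorial sum entirely (one standard route, sketched): for each black ball $i$, let $I_i$ indicate that it is drawn before every white ball. Among that black ball and the $w$ white balls, all relative orderings are equally likely, so

$$\Pr(I_i=1)=\frac{1}{w+1},\qquad E(X)=E\!\left[\sum_{i=1}^{b}I_i\right]=\frac{b}{w+1}.$$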
Expected value in an urn problem In an urn there are $n$ red balls and $m$ blue balls. I extract them without replacement. Let $X$=time of first blue. What is $E(X)$? I found PMF of $X$ and it is, if $k > n+1$ $$P(X=k) =\frac{n(n-1) \dots (n-k+1)m}{(m+n)(m+n-1) \dots (n+m-k+1)}$$ How could I evaluate $E(X)$? Edit: I'm looking for an explicit form of $E(X)$
Coupon collector's problem: mean and variance in number of coupons to be collected to complete a set (unequal probabilities) There are $n$ coupons in a collection. A collector has the ability to purchase a coupon, but can't choose the coupon he purchases. Instead, the coupon is revealed to be coupon $i$ with probability $p_i=\frac 1 n$. Let $N$ be the number of coupons he'll need to collect before he has at least one coupon of each type. Find the expected value and variance of $N$. Bonus: generalize to the case where the probability of collecting the $j$th coupon is $p_j$ with $\sum\limits_{j=1}^n p_j=1$ I recently came across this problem and came up with/ unearthed various methods to solve it. I'm intending this page as a wiki with various solutions. I'll be posting all the solutions I'm aware of (4 so far) over time. EDIT: As mentioned in the comments, this question is different from the one people are saying it is a duplicate of since (for one thing) it includes an expression for the variance and it covers the general case where all coupons have unequal probabilities. The case of calculating the variance for the general case of coupons having unequal probabilities has not been covered anywhere on the site apart from , which this one intends to consolidate along with other approaches to solve this problem. EDIT: Paper on the solutions on this page submitted to ArXiv:
eng_Latn
34,512
Quantum uncertainty affecting classical object? As far as I know, the probability of a quantum object being in a certain position depends on the wave function value for each position. That raises a question: Is this probability strictly greater than 0 for all points? If I place an electron in a box, it can be anywhere on the box, or anywhere on the universe? For example, there is always a small possibility of finding a value however far from the mean, while that is not the case for a triangular distribution. Also, slightly related: Is this property maintained when studying classical objects? Is there any possibility, even if unimaginably small, that all of the particles of a cat will simply move somewhere else at the same time, "teleporting" it?
Probability, quantum physics, and why (can't it/does it) apply to macroscale events? Quantum physics dictates that there are probabilities that determine the outcome of an event, ie: the probability of a quark passing through a wall is X, due to the size of the quark in comparison to the wall), but couldn't macroscale events be predicted this same way, assuming all variables were accounted for (lets hypothetically say we have a computer powerful enough to take into account all factors). My understanding is that as the scale of an object increases, the probability of it doing anything other than what classical physics dictates is almost 0%, but could conceivably do something not predicted (Example: a human being cannot pass through a wall, but given an infinite timescale, eventually s/he would because of probability not being 0). Is this accurate? If so, then shouldn't we, (except as a tool for conceptualization, like how we round 9.8 m/s^2 or pi to 3.14), use only quantum mechanics to explain events in any scale (for more accuracy)?
Probability of complete coupon set in $K$ boxes where a box has $N$ distinct coupons? Say there is a contest to collect a full set of $M$ coupons. Each box of product has $N$ distinct coupons of the $M$ possible coupons in it, selected uniformly without replacement from the $M$ possible coupons, the same $N$ in number for all boxes. I'm interested in the probability that after opening $K$ boxes of product I have at least one complete set of the $M$ coupons. My attempt has been to take the probability of a coupon not being in a box, use that to determine the probability of that coupon not being in any of the $K$ boxes, and then... I get flummoxed, and don't think this is the correct approach. I searched the site for coupon collector problems, there are a couple that deal with similar situations (multiple in a box), but with the box having coupons selected with replacement, so duplication in a box is possible. Right now I'm using a CAS to get probability of sums of hypergeometric samples having all bins covered to check my ideas, but that's utterly impractical for other than tiny cases. Is this not as trivial as I think it is, or is there a direct way to calculate this?
eng_Latn
34,513
Python function complexity I was asked on the best and worst case scenario for this function: def is_palindromic(the_list): result = True the_stack=Stack() for item in the_list: the_stack.push(item) for item in the_list: item_stack=the_stack.pop() if item != item_stack: result = False return result This function determines if a list is the same as its reverse using a stack. I thought the time complexity was the same for every case but when I tested, it took longer to run if the list was indeed the same as its reverse. Can anyone explain why? I am a bit confused.
Big O, how do you calculate/approximate it? Most people with a degree in CS will certainly know what . It helps us to measure how well an algorithm scales. But I'm curious, how do you calculate or approximate the complexity of your algorithms?
Is there a simple reason why the expected number of coin flips till getting $m$ more heads than tails or $n$ more tails than heads should be $mn$? I flip a coin until I get $m$ more heads than tails, or $n$ more tails than heads. Let the expected number of flips of the coin before stopping be $f(m,n)$. I obtained $f(m,n)=mn$ from the recursion $f(m,n)=1+\frac{f(m-1,n+1)+f(m+1,n-1)}2$ with $f(k,0)=f(0,k)=0$ for all $k$. Other than going through this recursion (and either solving by inspection or by writing as linear recurrence in single variable and solving brute force), is there an intuitive reason you should expect this process to take $mn$ flips? I was thinking about the more general problem with probability $p$ of getting heads and was struck by how simple the formula became when handling, what turned out to be a special case (general formula broke down) of $p=\frac12$.
eng_Latn
34,514
Calculating the probability of a quadratic having real roots when its coefficients are random variables. I'm trying to calculate the probability of the roots of $ax^2+bx+c$ being real numbers when the values of the variables $a,$ $b,$ and $c$ are all randomized by throwing a standard die. I got to the point where I can get the probability by calculating the chance that $b^2-4ac>0$, but I'm not sure how I can conveniently carry on from here, and my attempts at doing it by hand (finding every possible real occurrence and dividing by the total number of outcomes) have failed me. In other words, the values of $a$, $b$ and $c$ are within $\{1,2,3,4,5,6\}$ and each of the three variables is randomly picked from that list with no special weighting (so a $\frac 16$ chance to get any of the $6$ values).
Probability for roots of quadratic equation to be real, with coefficients being dice rolls. I really need help with this question. The coefficients $a,b,c$ of the quadratic equation $ax^2+bx+c=0$ are determined by throwing $3$ dice and reading off the value shown on the uppermost face of each die, so that the first die gives $a$, the second $b$ and and third $c$. Find the probabilities that the roots the equations are real, complex and equal. I was thinking about using the fundamental formula but i'm not sure how to go about doing it. Help would be greatly appreciated.
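Since there are only $6^3=216$ equally likely triples, a brute-force count settles it; the sketch below simply enumerates them (the strict/non-strict distinction only matters for the repeated-root cases).

```python
from itertools import product

# Enumerate all 6**3 = 216 equally likely (a, b, c) triples and count how many
# give a nonnegative (real roots) or strictly positive (distinct real roots)
# discriminant b^2 - 4ac.
triples = list(product(range(1, 7), repeat=3))
real = sum(1 for a, b, c in triples if b * b - 4 * a * c >= 0)
distinct = sum(1 for a, b, c in triples if b * b - 4 * a * c > 0)
print(real, real / 216)          # 43 of 216
print(distinct, distinct / 216)  # 38 of 216 (the remaining 5 give equal roots)
```

This should report $43/216$ for real roots, $5/216$ for equal roots and $38/216$ for two distinct real roots.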
Ways of getting a number with $n$ dice, each with $k$ sides Assume the dice are numbered from $1$ to $k$. My hunch is that this will form a normal distribution with a median at $n\cdot\frac{k}{2}$. However, I have no idea as to turn this fact into an answer (I have a minimal knowledge of stats, but I know that I am missing the standard distribution) and this is probably the wrong approach How can I approach and solve this problem? (Aside, this is not for a class, stats or other, so any and all approaches welcome). *Edit: * I want to find the number of ways that the sum of the numbers that are rolled has a particular value, if $n$ dice are rolled, and each has $k$ sides, numbered $1$ to $k$.
eng_Latn
34,515
Indexing all combinations without making a list. What is the most efficient way to find the i'th combination of all combinations without repetition, without first creating all combinations up to i? K is fixed (number of elements in each combination) and N is fixed (number of elements to be combined). The order does not matter, although extra kudos for finding i in the following order: 1: 1,2,3,4; 2: 1,2,3,5; 3: 1,2,3,6; 4: 1,2,4,5; 5: 1,2,4,6; 6: 1,2,5,6; 7: 1,3,4,5; 8: 1,3,4,6; 9: 1,3,5,6; 10: 1,4,5,6; 11: 2,3,4,5; 12: 2,3,4,6; 13: 2,3,5,6; 14: 2,4,5,6; 15: 3,4,5,6
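One way this is usually done is lexicographic unranking via the combinatorial number system: at each position, count how many combinations begin with a candidate value and skip whole blocks until the remaining rank fits. A minimal sketch (the function name is illustrative, not from any library; the rank is 1-based to match the listing above):

```python
from math import comb

# Sketch of lexicographic unranking: return the i-th (1-based) k-subset of
# {1, ..., n} without generating the earlier ones.
def unrank_combination(i, n, k):
    result = []
    rank = i - 1                       # work with a 0-based rank internally
    low = 1                            # smallest value still available
    for remaining in range(k, 0, -1):
        for v in range(low, n - remaining + 2):
            # number of k-subsets whose next element is v
            block = comb(n - v, remaining - 1)
            if rank < block:
                result.append(v)
                low = v + 1
                break
            rank -= block
    return result

print(unrank_combination(8, 6, 4))     # [1, 3, 4, 6], matching the listing above
print(unrank_combination(15, 6, 4))    # [3, 4, 5, 6]
```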
A positional number system for enumerating fixed-size subsets? Many combinatorial objects have some associated positional number system. For example, the subsets of a (finite) set S can be listed off by observing the bits of all $|S|$-bit numbers, treating the 1s and 0s as indicators for whether to pick a particular element or not. Permutations of a finite set S can be found by examining the representations of the first $|S|!$ natural numbers. Is there an analogous positional number system for encoding k-element subsets of an n-element set? Thanks!
How often do you have to roll a 6-sided die to obtain every number at least once? I've just played a game with my kids that basically boils down to: whoever rolls every number at least once on a 6-sided die wins. I won, eventually, and the others finished 1-2 turns later. Now I'm wondering: what is the expectation of the length of the game? I know that the expectation of the number of rolls till you hit a specific number is $\sum_{n=1}^\infty n\frac{1}{6}(\frac{5}{6})^{n-1}=6$. However, I have two questions: How many times to you have to roll a six-sided die until you get every number at least once? Among four independent trials (i.e. with four players), what is the expectation of the maximum number of rolls needed? [note: it's maximum, not minimum, because at their age, it's more about finishing than about getting there first for my kids] I can simulate the result, but I wonder how I would go about calculating it analytically. Here's a Monte Carlo simulation in Matlab mx=zeros(1000000,1); for i=1:1000000, %# assume it's never going to take us >100 rolls r=randi(6,100,1); %# since R2013a, unique returns the first occurrence %# for earlier versions, take the minimum of x %# and subtract it from the total array length [~,x]=unique(r); mx(i,1)=max(x); end %# make sure we haven't violated an assumption assert(numel(x)==6) %# find the expected value for the coupon collector problem expectationForOneRun = mean(mx) %# find the expected number of rolls as a maximum of four independent players maxExpectationForFourRuns = mean( max( reshape( mx, 4, []), [], 1) ) expectationForOneRun = 14.7014 (SEM 0.006) maxExpectationForFourRuns = 21.4815 (SEM 0.01)
eng_Latn
34,516
Pascal triangle - how to derive row-specific formula Is there any way I can intuitively demonstrate or remember the formula listed at for the Pascal triangle? I'm talking of $$ {n\choose k}= {n\choose k-1}\times \frac{n+1-k}{k} $$ I tried with a piece of paper and it definitely works.. but I don't know why. There is no proof or demonstration.. so: why does it work?
How can I prove the formula for calculating successive entries in a given row of Pascal's triangle? I've found in Wikipedia the formula for calculating an individual row in Pascal's Triangle: $$v_c = v_{c-1}\left(\frac{r-c}{c}\right).$$ where $r = \mathrm{row}+1$, $c$ is the column starting from $0$ on left and $v_0 = 1$. Now, I've tried to do by hand and it works, but I don't understand how to find this magic formula when I don't have access to Wikipedia...:)
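The formula drops straight out of the factorial form, which may be easier to reconstruct than to memorize:

$$\binom{n}{k}=\frac{n!}{k!\,(n-k)!}=\frac{n!}{(k-1)!\,(n-k+1)!}\cdot\frac{n-k+1}{k}=\binom{n}{k-1}\cdot\frac{n-k+1}{k},$$

which matches the quoted $v_c=v_{c-1}\frac{r-c}{c}$ with $r=n+1$ and $c=k$.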
Probability Problem with $n$ keys A woman has $n$ keys, one of which will open a door. a)If she tries the keys at random, discarding those that do not work, what is the probability that she will open the door on her $k^{\mathrm{th}}$ try? Attempt: On her first try, she will have the correct key with probability $\frac1n$. If this does not work, she will throw it away and on her second attempt, she will have the correct key with probability $\frac1{(n-1)}$. So on her $k^{\mathrm{th}}$ try, the probability is $\frac1{(n-(k-1))}$ This does not agree with my solutions. b)The same as above but this time she does not discard the keys if they do not work. Attempt: We want the probability on her $k^{\mathrm{th}}$ try. So we want to consider the probability that she must fail on her $k-1$ attempts. Since she keeps all her keys, the correct one is chosen with probability $\frac1n$ for each trial. So the desired probability is $(1-\frac{1}{n})^{k-1} (\frac1n)^k$. Again, does not agree with solutions. I can't really see any mistake in my logic. Can anyone offer any advice? Many thanks
eng_Latn
34,517
Say I am interested in predicting the TOTAL number of people that survive the Titanic disaster, NOT whether each individual survived or died. Is it possible to run a probabilistic classifier on the data to get a value between 0 and 1 for each individual, then sum those values for a best guess?
Say I have the titanic kaggle competition, but I'm not interested in the competition for predicting survival for each individual. Instead I want the most accurate estimate of total survivors on the titanic. Would this be achieved by using a probabilistic model, then adding the probabilities for each individual? For example, if I have 3 people and 1 survived, but my model produced 0.4, 0.4, and 0.4 probabilities for each person to survive, I calculate 0 survived. But if I add 0.4 for each person, I get 1.2, which is closer to the actual. Does this make sense?
Say I have the titanic kaggle competition, but I'm not interested in the competition for predicting survival for each individual. Instead I want the most accurate estimate of total survivors on the titanic. Would this be achieved by using a probabilistic model, then adding the probabilities for each individual? For example, if I have 3 people and 1 survived, but my model produced 0.4, 0.4, and 0.4 probabilities for each person to survive, I calculate 0 survived. But if I add 0.4 for each person, I get 1.2, which is closer to the actual. Does this make sense?
eng_Latn
34,518
A toy is randomly put in a given cereal box as a promotional gift. There are N different types of toys, and each box's toy is equally likely to be any of the N types, independently (IID). (a) Find the expected number of cereal boxes one has to buy before she has at least one toy of each type. (b) If she has already collected m toys, what is the expected number of different toy types she has collected? Can someone explain how to model this problem using indicator random variables?
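A sketch of the modelling being asked for (standard coupon-collector reasoning): part (a) splits the wait into the stretch needed for each new type, using geometric stage lengths rather than indicators; part (b) is the pure indicator argument, with $I_j$ indicating that type $j$ shows up among the $m$ boxes.

$$E[T]=\sum_{i=1}^{N}\frac{N}{N-i+1}=N\sum_{k=1}^{N}\frac{1}{k}=N H_N,\qquad E[\text{distinct types after }m]=\sum_{j=1}^{N}\Pr(I_j=1)=N\left(1-\left(1-\frac{1}{N}\right)^{m}\right).$$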
What is the average number of times it would it take to roll a fair 6-sided die and get all numbers on the die? The order in which the numbers appear does not matter. I had this questions explained to me by a professor (not math professor), but it was not clear in the explanation. We were given the answer $(1-(\frac56)^n)^6 = .5$ or $n = 12.152$ Can someone please explain this to me, possibly with a link to a general topic?
The entire site is blank right now. The header and footer are shown, but no questions.
eng_Latn
34,519
Say I have a table of numbers 1-6. I throw a 6 sided die a number of times. Each time I get a number I have not already had, I mark it in the table. What is the expected number of times to throw the dice to fill out the table?
What is the average number of times it would it take to roll a fair 6-sided die and get all numbers on the die? The order in which the numbers appear does not matter. I had this questions explained to me by a professor (not math professor), but it was not clear in the explanation. We were given the answer $(1-(\frac56)^n)^6 = .5$ or $n = 12.152$ Can someone please explain this to me, possibly with a link to a general topic?
The entire site is blank right now. The header and footer are shown, but no questions.
eng_Latn
34,520
I am asked to use the central limit theorem to solve this question. In an election between two candidates, A and B, one million individuals cast their vote. Among these, 2000 know candidate A from her election campaign and vote unanimously for her. The remaining 998000 voters are undecided and make their decision independently of each other by flipping a fair coin. Approximate the probability pA that candidate A wins up to 3 significant figures. I got my mean = np = (1000000-2000) * 0.5 = 499000, my sd = $\sqrt {np*(1-p)}$ = $\sqrt {998000*0.5*(1-0.5)}$, then apply CLT, my new sd = $\frac{sd}{\sqrt{998000}}$, and X = $\frac{1000000}{2}$+1-2000 = 498001. Then apply normal distribution. Am I right?
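One way the CLT route can be set up (a sketch, ignoring the possibility of an exact tie): let $X\sim\mathrm{Bin}(998000,\tfrac12)$ be the number of undecided votes for A. Candidate A wins when $2000+X>500000$, i.e. $X\ge 498001$, so with $\mu=499000$ and $\sigma=\sqrt{998000\cdot\tfrac14}\approx 499.5$,

$$p_A\approx\Pr\!\left(Z\ge\frac{498000.5-499000}{499.5}\right)=\Pr(Z\ge-2.00)=\Phi(2.00)\approx 0.977.$$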
I am asked to solve the following question using the central limit theorem. In an election between two candidates, A and B, one million individuals cast their vote. Among these, 2000 know candidate A from her election campaign and vote unanimously for her. The remaining 998000 voters are undecided and make their decision independently of each other by flipping a fair coin. Approximate the probability pA that candidate A wins up to 3 significant figures. It's easy to solve directly: PA = $\frac{0.5(1000000-2000)+2000}{1000000}$ = 0.501. However, I am quite confused about how to solve this problem by the central limit theorem.
I am asked to solve the following question using the central limit theorem. In an election between two candidates, A and B, one million individuals cast their vote. Among these, 2000 know candidate A from her election campaign and vote unanimously for her. The remaining 998000 voters are undecided and make their decision independently of each other by flipping a fair coin. Approximate the probability pA that candidate A wins up to 3 significant figures. It's easy to solve directly: PA = $\frac{0.5(1000000-2000)+2000}{1000000}$ = 0.501. However, I am quite confused about how to solve this problem by the central limit theorem.
eng_Latn
34,521
How to solve the following probability question?
0.30 * 0.40 + 0.70 * 0.10 = 0.12 + 0.07 = 0.19 = 19%
99.23% cheat, the rest 0.77% are faithful
eng_Latn
34,522
How to estimate the total number of raffle tickets sold based on the serial numbers of the ones that were drawn?
Why are these estimates to the German tank problem different?
Direct proof that nilpotent matrix has zero trace
eng_Latn
34,523
Probability in a Dice Game
Probability of first actor winning a "first to roll seven with two dice" contest?
probability that there is at least one defective part
eng_Latn
34,524
Expected number of variables that are at least n
n tasks assigned to n computers, what is the EX value of a computer getting 5 or more tasks?
Not including stdlib.h does not produce any compiler error!
eng_Latn
34,525
Extension to Classical Coupon Collectors Problem
Coupon Collector Problem with Batched Selections
The coupon collectors problem
eng_Latn
34,526
How to decide bootstrap number of runs?
Rule of thumb for number of bootstrap samples
How do I ensure a piece of code runs only once?
eng_Latn
34,527
How many draws to collect all items?
The coupon collectors problem
There are no interfaces on which a capture can be done
eng_Latn
34,528
Trying to understand a probability question/concept. So I made a post and there were a lot of incorrect answers, and two people were able to provide the correct answers, but I cannot understand why or how they got it. So for this particular question: A die is thrown four times; what's the probability that "four" is the highest number thrown? Okay, so what I understand is that the sample space is $6^4$. Now we need to find the probability that $4$ is the highest number thrown. So that eliminates 2 other possible outcomes, 5 and 6. So we have 4 possible outcomes now. So $\frac{4^{4}}{6^{4}}$. But now what I cannot understand is why on earth we need to subtract the number of possibilities where none of the numbers fall higher than 3? Like why does it matter? What if 4 is rolled all the time? Who cares?
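The subtraction is what pins the maximum at exactly 4: $(4/6)^4$ counts every outcome where nothing exceeds 4, including those where nothing reaches 4 either, so those have to come back out:

$$\Pr(\max=4)=\Pr(\text{all}\le 4)-\Pr(\text{all}\le 3)=\frac{4^{4}-3^{4}}{6^{4}}=\frac{175}{1296}\approx 0.135.$$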
Discrete Probability: Four dice are thrown, what's the probability that... Four dice are thrown, what's the probability that: a) None of them fall higher than three? b) None of them fall higher than four? c) That four is the highest number thrown? So for a, I wanna think about the denominator first. There are 6 possible outcomes for each dice, and we have four dice. So we're technically picking r out of n objects, aka picking 4 possible values out of 6. So the denominator/sample space should be $6^4$ right? Now this is where I'm getting tripped up. Order technically shouldn't count, because we only care about the quadruplets that don't have a value higher than 3. I might be wrong, maybe I am, but could someone explain a bit more? My professor stated that typically in the sample space/denominator, we want order to count. So I'm trying to think of the numerator now, so we want to find the probability that no values appear higher than 3, so some events that can occur are: $(1, 1, 2, 3), (1, 2, 2, 3), (1, 1, 1, 1)$ etc. Again, I can't see why order should count here, because if we only are concerned about what appears rather than how they appear, then we can say $(1, 1, 2, 3)$ is equal to $(1, 2, 1, 3)$? So if order does not count, and we have replacement, then does this lead the case where we use $$\binom{n-1+r}{r}$$to find out the probability? I'm pretty sure if I can do a), I could prob do b), but if someone could maybe lead me in the right direction for c) I would appreciate that too!
How often do you have to roll a 6-sided die to obtain every number at least once? I've just played a game with my kids that basically boils down to: whoever rolls every number at least once on a 6-sided die wins. I won, eventually, and the others finished 1-2 turns later. Now I'm wondering: what is the expectation of the length of the game? I know that the expectation of the number of rolls till you hit a specific number is $\sum_{n=1}^\infty n\frac{1}{6}(\frac{5}{6})^{n-1}=6$. However, I have two questions: How many times to you have to roll a six-sided die until you get every number at least once? Among four independent trials (i.e. with four players), what is the expectation of the maximum number of rolls needed? [note: it's maximum, not minimum, because at their age, it's more about finishing than about getting there first for my kids] I can simulate the result, but I wonder how I would go about calculating it analytically. Here's a Monte Carlo simulation in Matlab mx=zeros(1000000,1); for i=1:1000000, %# assume it's never going to take us >100 rolls r=randi(6,100,1); %# since R2013a, unique returns the first occurrence %# for earlier versions, take the minimum of x %# and subtract it from the total array length [~,x]=unique(r); mx(i,1)=max(x); end %# make sure we haven't violated an assumption assert(numel(x)==6) %# find the expected value for the coupon collector problem expectationForOneRun = mean(mx) %# find the expected number of rolls as a maximum of four independent players maxExpectationForFourRuns = mean( max( reshape( mx, 4, []), [], 1) ) expectationForOneRun = 14.7014 (SEM 0.006) maxExpectationForFourRuns = 21.4815 (SEM 0.01)
eng_Latn
34,529
Why is this argument incorrect for the Envelope Paradox? Consider the envelope paradox problem: There are two envelopes, both of which contain some money. One envelope contains twice the amount of the other (but other than that, you do not know how much is in them). You select one of the envelopes randomly and see the amount of money inside. You can opt to either keep the money in this envelope or switch envelopes; which do you choose? A (false) argument for switching envelopes: Let the amount of money in the envelope you selected be $X$. There is a $50\%$ chance that the envelope with more money is selected, and a $50\%$ chance that the envelope with less money is selected. Therefore, there is a $50\%$ chance that there is $0.5X$ in the other envelope, and a $50\%$ chance that there is $2X$ in the other envelope. So the expected payout for switching is ${2X + 0.5X\over2} = 1.25X$, which is better than the payout of $X$ for not switching. Therefore you should switch. This argument is wrong... where exactly does it fail?
If you have two envelopes, and ... Suppose you're given two envelopes. Both envelopes have money in them, and you're told that one envelope has twice as much money as the other. Suppose you pick one of the envelopes. Should you switch to the other one? Intuitively, you don't know anything about either envelope, so it'd be ridiculous to say that you should switch to the other envelope to maximize your expected money. However, consider this argument. Let $x$ be the amount of money in the envelope you picked. If $y$ is the amount of money in the other envelope, then the expected value equals $$E(y) = \frac{1}{2}\left(\frac{1}{2}x\right) + \frac{1}{2}\left(2x\right) = \frac{5}{4} x$$ But $5x/4 > x$, so you should switch! The Wikipedia article says that $x$ stands for two different things, so this reasoning doesn't work. I say this is not a valid resolution. Consider opening up the envelope that you pick, and finding $\$10$ inside. Then you can run the expected value calculation to get $$E(y) = \frac{1}{2} \cdot \$5+\frac{1}{2} \cdot \$20 = \$12.50$$ This means that if you open one of the envelopes and find $\$10$, you should switch to the other envelope. The $\$10$ doesn't stand for two different things, it literally just means $\$10$. But you don't have to open up the envelope to run this calculation, you can just imagine what's inside, and run the calculation based on that. This is what "Let $x$ be the amount in the envelope" means. The problem with the argument is not that $x$ stands for two different things. So what is the problem? Previous questions on stack exchange have given the resolution that I just said I wasn't satisfied by, so please don't mark this as a duplicate. I want a different resolution, or a more satisfying explanation of why $x$ does stand for two different things. Apparently there is still research being published about this problem - maybe it isn't so obvious? I think there's something subtle wrong with the premise. Because there's no uniform probability distribution on $\mathbb{R}$, statements like "random real number" are not well-defined. Likewise, I think "one envelope has twice as much money as the other" assumes some probability distribution on $\mathbb{R}$, and perhaps our expected value calculation assumes that this distribution is uniform, which it cannot be ...
Coupon collector's problem: mean and variance in number of coupons to be collected to complete a set (unequal probabilities) There are $n$ coupons in a collection. A collector has the ability to purchase a coupon, but can't choose the coupon he purchases. Instead, the coupon is revealed to be coupon $i$ with probability $p_i=\frac 1 n$. Let $N$ be the number of coupons he'll need to collect before he has at least one coupon of each type. Find the expected value and variance of $N$. Bonus: generalize to the case where the probability of collecting the $j$th coupon is $p_j$ with $\sum\limits_{j=1}^n p_j=1$ I recently came across this problem and came up with/ unearthed various methods to solve it. I'm intending this page as a wiki with various solutions. I'll be posting all the solutions I'm aware of (4 so far) over time. EDIT: As mentioned in the comments, this question is different from the one people are saying it is a duplicate of since (for one thing) it includes an expression for the variance and it covers the general case where all coupons have unequal probabilities. The case of calculating the variance for the general case of coupons having unequal probabilities has not been covered anywhere on the site apart from , which this one intends to consolidate along with other approaches to solve this problem. EDIT: Paper on the solutions on this page submitted to ArXiv:
eng_Latn
34,530
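One concrete way to see where the $1.25X$ argument above breaks down: for any fixed pair of amounts $(a, 2a)$, keeping and switching have the same expected payout, $1.5a$; the fallacy is treating the other envelope as holding $0.5X$ or $2X$ with equal probability for every observed $X$. A small simulation sketch (the amount $a$ and the trial count are my own choices):

```python
import random

def play(a=10.0, n=200_000):
    # one envelope holds a, the other 2a; you pick one uniformly at random
    keep_total = switch_total = 0.0
    for _ in range(n):
        envelopes = [a, 2 * a]
        random.shuffle(envelopes)
        picked, other = envelopes
        keep_total += picked
        switch_total += other
    return keep_total / n, switch_total / n

print(play())  # both averages are close to 1.5 * a -- switching gains nothing
```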
Rummikub, replacing a joker When replacing a joker from the board state, I must replace the joker gained from the board with 2 tiles from my hand. A joker that has been replaced must be used in the player's same turn with 2 or more tiles from his rack to make a new set. My opponent interprets this as meaning that I can only use the tiles from my hand, for example J,4,5 or J,6,6. Though I agree that I must use at least 2 tiles from my hand, I find nothing that prohibits me from using another tile from the board along with the joker and the 2 tiles from my hand. For example, I took a joker replaced from the board, a 9 and 12 from my hand, and an 11 from another run to create the following: 9,J,11,12. My opponent thinks this is an illegal move. Is this move legal?
Rules when using a Joker I had on my rack among other tiles, a black 9, 1 x blue 10, and a black 12. On the table was a set of 3 x 10s including a Joker(no blue 10), and other sets of tiles including an available black 11. Can I replace the joker with my blue 10 and then set out : my black 9, the joker(representing a black 10), a black 11 from the table, and my black 12?
How often do you have to roll a 6-sided die to obtain every number at least once? I've just played a game with my kids that basically boils down to: whoever rolls every number at least once on a 6-sided die wins. I won, eventually, and the others finished 1-2 turns later. Now I'm wondering: what is the expectation of the length of the game? I know that the expectation of the number of rolls till you hit a specific number is $\sum_{n=1}^\infty n\frac{1}{6}(\frac{5}{6})^{n-1}=6$. However, I have two questions: How many times to you have to roll a six-sided die until you get every number at least once? Among four independent trials (i.e. with four players), what is the expectation of the maximum number of rolls needed? [note: it's maximum, not minimum, because at their age, it's more about finishing than about getting there first for my kids] I can simulate the result, but I wonder how I would go about calculating it analytically. Here's a Monte Carlo simulation in Matlab mx=zeros(1000000,1); for i=1:1000000, %# assume it's never going to take us >100 rolls r=randi(6,100,1); %# since R2013a, unique returns the first occurrence %# for earlier versions, take the minimum of x %# and subtract it from the total array length [~,x]=unique(r); mx(i,1)=max(x); end %# make sure we haven't violated an assumption assert(numel(x)==6) %# find the expected value for the coupon collector problem expectationForOneRun = mean(mx) %# find the expected number of rolls as a maximum of four independent players maxExpectationForFourRuns = mean( max( reshape( mx, 4, []), [], 1) ) expectationForOneRun = 14.7014 (SEM 0.006) maxExpectationForFourRuns = 21.4815 (SEM 0.01)
eng_Latn
34,531
Weighting a List to make items with a lower index have a higher chance of being randomly chosen? I am receiving a List<Fruit> of unknown size, usually between 4-10 items: { Apples, Orange, Pear, ?, ?, ... } How can I weight the List in a way that Apples have the highest % chance of being selected, then Orange, then Pear? The effect should essentially be the same as picking a random item from a List that looks like this: { Apples, Apples, Apples, Apples, Orange, Orange, Pear } If the List was of a fixed size, I would've done: generate a float between 0.0-1.0; if < 0.4, return Apples; else if < 0.75, return Orange; etc. Note that this question is not about but about weighting an existing List of arbitrary length in a way that a random pick will result in an Item's probability of being picked proportional to its position in the List.
Random weighted choice Consider the class below that represents a Broker: public class Broker { public string Name = string.Empty; public int Weight = 0; public Broker(string n, int w) { this.Name = n; this.Weight = w; } } I'd like to randomly select a Broker from an array, taking into account their weights. What do you think of the code below? class Program { private static Random _rnd = new Random(); public static Broker GetBroker(List<Broker> brokers, int totalWeight) { // totalWeight is the sum of all brokers' weight int randomNumber = _rnd.Next(0, totalWeight); Broker selectedBroker = null; foreach (Broker broker in brokers) { if (randomNumber <= broker.Weight) { selectedBroker = broker; break; } randomNumber = randomNumber - broker.Weight; } return selectedBroker; } static void Main(string[] args) { List<Broker> brokers = new List<Broker>(); brokers.Add(new Broker("A", 10)); brokers.Add(new Broker("B", 20)); brokers.Add(new Broker("C", 20)); brokers.Add(new Broker("D", 10)); // total the weigth int totalWeight = 0; foreach (Broker broker in brokers) { totalWeight += broker.Weight; } while (true) { Dictionary<string, int> result = new Dictionary<string, int>(); Broker selectedBroker = null; for (int i = 0; i < 1000; i++) { selectedBroker = GetBroker(brokers, totalWeight); if (selectedBroker != null) { if (result.ContainsKey(selectedBroker.Name)) { result[selectedBroker.Name] = result[selectedBroker.Name] + 1; } else { result.Add(selectedBroker.Name, 1); } } } Console.WriteLine("A\t\t" + result["A"]); Console.WriteLine("B\t\t" + result["B"]); Console.WriteLine("C\t\t" + result["C"]); Console.WriteLine("D\t\t" + result["D"]); result.Clear(); Console.WriteLine(); Console.ReadLine(); } } } I'm not so confident. When I run this, Broker A always gets more hits than Broker D, and they have the same weight. Is there a more accurate algorithm? Thanks!
Accurately simulating the lots of dice rolls without loops? OK so if your game rolls lots of dice you can just call a random number generator in a loop. But for any set of dice being rolled often enough you will get a distribution curve/histogram. So my question is there a nice simple calculation I can run that will give me a number that fits that distribution? E.g. 2D6 - Score - % Probability 2 - 2.77% 3 - 5.55% 4 - 8.33% 5 - 11.11% 6 - 13.88% 7 - 16.66% 8 - 13.88% 9 - 11.11% 10 - 8.33% 11 - 5.55% 12 - 2.77% So knowing the above you could roll a single d100 and work out an accurate 2D6 value. But once we start with 10D6, 50D6, 100D6, 1000D6 this could save a lot of processing time. So there must be a tutorial / method / algorithm that can do this fast? It is probably handy for stock markets, casinos, strategy games, dwarf fortress etc. What if you could simulate the outcomes of complete strategic battle that would take hours to play with a few calls to this function and some basic maths?
eng_Latn
34,532
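For the position-weighted pick asked about above, a common approach is to derive each item's weight from its index and sample against the cumulative weights; Python's random.choices does the cumulative-sum step internally. The linear $n-i$ weighting below is just one possible bias, not something the question prescribes. Incidentally, the C# broker snippet in the neighbouring question appears to have an off-by-one: since Next(0, totalWeight) already excludes totalWeight, the test should probably be randomNumber < broker.Weight rather than <=, which would explain broker A being picked slightly more often than D. A Python sketch of the weighted pick:

```python
import random

def pick_biased(items):
    # weight the item at index i by (n - i): the first item is n times as likely as the last
    n = len(items)
    weights = [n - i for i in range(n)]
    return random.choices(items, weights=weights, k=1)[0]

fruits = ["Apples", "Orange", "Pear", "Kiwi", "Mango"]
counts = {f: 0 for f in fruits}
for _ in range(100_000):
    counts[pick_biased(fruits)] += 1
print(counts)  # roughly proportional to 5:4:3:2:1
```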
Different number of return values in Python function Is it okay/good practice to return a different number of values from a function in Python? If yes, how should the caller handle the return values? e.g. def func(): if(condition_1): return 1,2 if(condition_2): return 1 if(condition_3): return 1,2,3,4 Note: condition_1, condition_2 and condition_3 are local to the function func, so the caller has no idea how many values will be returned.
Is it pythonic for a function to return multiple values? In python, you can have a function return multiple values. Here's a contrived example: def divide(x, y): quotient = x/y remainder = x % y return quotient, remainder (q, r) = divide(22, 7) This seems very useful, but it looks like it can also be abused ("Well..function X already computes what we need as an intermediate value. Let's have X return that value also"). When should you draw the line and define a different method?
How to calculate the exact probability that the second player wins? Consider a game that uses a generator which produces independent random integers between 1 and 100 inclusive. The game starts with a sum S = 0. The first player adds random numbers from the generator to S until S > 100 and records her last random number 'x'. The second player, continues adding random numbers from the generator to S until S > 200 and records her last random number 'y'. The player with the highest number wins, i.e. if y > x the second player wins. Is this game fair? Write a program to simulate 100,000 games. What is the probability estimate, based on your simulations, that the second player wins? Give your answer rounded to 3 places behind the decimal. For extra credit, calculate the exact probability (without sampling). import random CONST_TIMES = 100000 CONST_SMALL = 100 CONST_LARGE = 200 def playGame(): s = 0 while s <= CONST_SMALL: x = random.randint(1, CONST_SMALL) s = s + x; while s <= CONST_LARGE: y = random.randint(1, CONST_SMALL) s = s + y if x < y: return 's' elif x == y: return 'm' else: return 'f' fst = sec = 0 for i in range(CONST_TIMES): winner = playGame() if winner == 'f': fst = fst + 1 elif winner == 's': sec = sec + 1 secWinPro = round(float(sec) / CONST_TIMES, 3) print secWinPro The simulation probability is about 0.524. I want to know how to calculate the exact probability.
eng_Latn
34,533
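On the variable-arity return question above: Python will happily return one, two or four values (they are just tuples of different lengths), but the caller then cannot unpack the result without knowing which branch ran. The usual advice is to keep the returned shape fixed, or to return a single list or dict whose size the caller inspects. A small sketch of both options (names are illustrative only):

```python
def divide(x, y):
    # a fixed, predictable shape: always exactly two values
    return divmod(x, y)

quotient, remainder = divide(22, 7)

def lookup(flag):
    # if the number of results genuinely varies, return a list (or a dict)
    # so the caller can check len() instead of guessing how to unpack
    return [1, 2] if flag else [1, 2, 3, 4]

values = lookup(True)
print(quotient, remainder, len(values), values)
```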
Maximum number of even entries in a $3\times 3$ matrix $A$ is a $3\times 3$ matrix with integer entries such that $\det(A)=1$. So what is the maximum possible number of entries of $A$ that are even? So, I thought about the $3\times 3$ identity matrix and wrote the answer $6$ (since $0$ is even). Was it by any chance correct? Or was it wrong? How to prove that? Thanks.
$A \in M_3(\mathbb Z)$ be such that $\det(A)=1$ ; then what is the maximum possible number of entries of $A$ that are even ? Let $A \in M_3(\mathbb Z)$ be such that $\det(A)=1$ ; then what is the maximum possible number of entries of $A$ that are even ?
How often do you have to roll a 6-sided die to obtain every number at least once? I've just played a game with my kids that basically boils down to: whoever rolls every number at least once on a 6-sided die wins. I won, eventually, and the others finished 1-2 turns later. Now I'm wondering: what is the expectation of the length of the game? I know that the expectation of the number of rolls till you hit a specific number is $\sum_{n=1}^\infty n\frac{1}{6}(\frac{5}{6})^{n-1}=6$. However, I have two questions: How many times to you have to roll a six-sided die until you get every number at least once? Among four independent trials (i.e. with four players), what is the expectation of the maximum number of rolls needed? [note: it's maximum, not minimum, because at their age, it's more about finishing than about getting there first for my kids] I can simulate the result, but I wonder how I would go about calculating it analytically. Here's a Monte Carlo simulation in Matlab mx=zeros(1000000,1); for i=1:1000000, %# assume it's never going to take us >100 rolls r=randi(6,100,1); %# since R2013a, unique returns the first occurrence %# for earlier versions, take the minimum of x %# and subtract it from the total array length [~,x]=unique(r); mx(i,1)=max(x); end %# make sure we haven't violated an assumption assert(numel(x)==6) %# find the expected value for the coupon collector problem expectationForOneRun = mean(mx) %# find the expected number of rolls as a maximum of four independent players maxExpectationForFourRuns = mean( max( reshape( mx, 4, []), [], 1) ) expectationForOneRun = 14.7014 (SEM 0.006) maxExpectationForFourRuns = 21.4815 (SEM 0.01)
eng_Latn
34,534
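The parity argument behind the answer 6 to the matrix questions above: reducing mod 2, $\det(A)=1$ forces the 0/1 matrix of parities to have odd determinant, but any $3\times 3$ matrix with at most two odd entries has a zero row mod 2 and hence even determinant, so at least three entries must be odd and at most six can be even; the identity matrix attains six. A small brute-force check of the "at most two odd entries" step (my own sketch, not from the posts):

```python
from itertools import combinations

def det3(m):
    # determinant of a 3x3 matrix given as a flat 9-sequence (row-major)
    a, b, c, d, e, f, g, h, i = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# If 7 or more entries of an integer matrix are even, then mod 2 at most two
# entries are 1; check that every such 0/1 matrix has even determinant.
ok = True
for k in range(3):                         # 0, 1 or 2 odd entries
    for ones in combinations(range(9), k):
        m = [1 if i in ones else 0 for i in range(9)]
        if det3(m) % 2 != 0:
            ok = False
print(ok)  # True, so det(A) = 1 needs at least three odd entries
```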
Average Rounds for a Game Two people are playing a game, one round after another. Let the probability of person A winning be p, and the probability of person B winning be q, wherein 0<p<1, p+q=1. There is no draw, and person A will gain 1 point when they win one round, person B will gain 1 point when they win one round, neither will lose points. The game will stop when one of the two has two more points than the other player. What's the average number of rounds before the game stops? The answer I have is $2/(p^2+q^2)$, I thought that I should try to make it into a probability distribution since that's the topic I'm currently on, but I'm not sure where to start... Thank you!
Expected number of games played in a game with 'win by two' rule. Calvin and Hobbes play a match consisting of a series of games, where Calvin has probability p of winning each game (independently). They play with a “win by two”” rule: the first player to win two games more than his opponent wins the match. What is the expected number of games played? My attempt: I use Markov chains. Let $S$ denote that starting state, $W$ and $L$ denote the states where Calvin and Hobbes win respectively. Then, I have $$\mu_S = 1+ p\mu_W+q\mu_L $$ $$\mu_W = 1+ q\mu_S $$ $$\mu_L = 1+ p\mu_S $$ where $\mu_j$ denote the expected number of games from state $j$. Solving the system, I get the solution $\mu_S = \frac{2}{1-2pq}$ which I think is correct. I however would like to know how to approach it with geometric distribution instead of Markov chains. I read one solution where the number of games is given as $1+\text{Geom}(p^2+q^2)$ but I do not understand it. My understanding is that for the game to stop, the required probability is $p^2 + q^2$ since this is the probability that either Calvin or Hobbes win two games in a row. How to use this information in a Geometric random variable? Thanks.
Is there a simple reason why the expected number of coin flips till getting $m$ more heads than tails or $n$ more tails than heads should be $mn$? I flip a coin until I get $m$ more heads than tails, or $n$ more tails than heads. Let the expected number of flips of the coin before stopping be $f(m,n)$. I obtained $f(m,n)=mn$ from the recursion $f(m,n)=1+\frac{f(m-1,n+1)+f(m+1,n-1)}2$ with $f(k,0)=f(0,k)=0$ for all $k$. Other than going through this recursion (and either solving by inspection or by writing as linear recurrence in single variable and solving brute force), is there an intuitive reason you should expect this process to take $mn$ flips? I was thinking about the more general problem with probability $p$ of getting heads and was struck by how simple the formula became when handling, what turned out to be a special case (general formula broke down) of $p=\frac12$.
eng_Latn
34,535
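Both win-by-two questions above share the same closed form: group the games into consecutive pairs; each pair either ends the match (probability $p^2+q^2$) or returns the score to even (probability $2pq$), so the number of pairs is geometric and $E[N] = 2/(p^2+q^2) = 2/(1-2pq)$. A short simulation sketch to corroborate it (the parameter values are arbitrary):

```python
import random

def average_length(p, trials=200_000):
    # play "win by two": stop as soon as one player leads by 2 points
    total = 0
    for _ in range(trials):
        lead, games = 0, 0
        while abs(lead) < 2:
            lead += 1 if random.random() < p else -1
            games += 1
        total += games
    return total / trials

p = 0.5
q = 1 - p
print(average_length(p), 2 / (p * p + q * q))  # both close to 4.0 when p = 0.5
```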
Less common probabilities and expected values related to the coupon collector problem A company brings out a new set of $n$ collectible cards. The cards are made in equal number, so there is the same probability of any given card being in a random set. They are purchased one-by-one. Question 1: If I buy $x$ cards, on average how many unique cards $(k)$ will I have? $k$ does not need to be a whole number. Question 2: How large does $x$ need to be so that $k/n$ is over $y\%$? Question 3: If I buy $x$ cards, what is the probability that I have $j$ unique cards? $j$ is a whole number. EDIT: Thank you to all who have replied! As a first-time poster, it is amazing to get so many helpful responses so quickly. I have learnt that this is known as the coupon collector's problem. There is a very straightforward answer to the question, "What is the expected number of cards needed to complete the set of $n$ cards": \begin{align} E[x]={}&n\sum_{i=1}^n\frac1i\;. \end{align} I am struggling to find equations as simple as this for questions 1 and 2, so will post a second edit once I have them. On question 3, "user 4" kindly gave what I think is the right answer: \begin{align} \frac{n!}{(n-j)! n^x}{x \brace j} \end{align}
Expected time to roll all 1 through 6 on a die What is the average number of times it would it take to roll a fair 6-sided die and get all numbers on the die? The order in which the numbers appear does not matter. I had this questions explained to me by a professor (not math professor), but it was not clear in the explanation. We were given the answer $(1-(\frac56)^n)^6 = .5$ or $n = 12.152$ Can someone please explain this to me, possibly with a link to a general topic?
Given an infinite set of cardinality $X$, is there a chain in $\mathcal P(X)$ of the same cardinalty of $\mathcal P(X)$? $P(\mathbb N)$ = power set of $\mathbb N$. $A \subset P(\mathbb N)$ is a chain if $a,b \in A \implies$ either $a \subseteq b$ or $ b \subseteq a$ That is, we have something like this: $$\cdots a \subseteq b \subseteq c \subseteq\cdots$$ where $a,b,c \in A$ are distinct. We can show easy enough that there is an uncountable chain - this is done by noting $\mathbb N\sim\mathbb Q$ then using Dedekind cuts in $\mathbb Q$ to define $\mathbb R$ we see that a family of (nearly arbitrary) cuts satisfy the condition. For instance the family $L_r=\{q \in \mathbb Q : q < r \}$ for $r > 0$ gives us the sets we need and obviously we can pick others. I tried doing this for $ \mathbb R$ and don't seem to be getting anywhere. To be more specific, does there exist a chain in $P(\mathbb R)$ with cardinality $2^ \mathbb R$? Given a set of cardinality $X$ (necessarily non-finite), is there a chain in $P(X)$ of the same cardinalty of $P(X)$?
eng_Latn
34,536
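For the partial-collection questions above: linearity of expectation gives $E[k] = n\bigl(1-(1-1/n)^x\bigr)$ for question 1; requiring that fraction of $n$ to reach $y$ and solving gives $x \ge \ln(1-y)/\ln(1-1/n)$ for question 2; and question 3 is the Stirling-number expression quoted in the edit, equivalently $\binom{n}{j}\,\mathrm{surj}(x,j)/n^x$ with surjections counted by inclusion-exclusion. A Python sketch of those formulas (the example numbers are mine):

```python
from math import comb

def expected_unique(n, x):
    # E[number of distinct types seen after x uniform draws from n types]
    return n * (1 - (1 - 1 / n) ** x)

def prob_exactly(n, x, j):
    # P(exactly j distinct types in x draws); surjections by inclusion-exclusion
    surj = sum((-1) ** i * comb(j, i) * (j - i) ** x for i in range(j + 1))
    return comb(n, j) * surj / n ** x

n, x = 45, 60
print(expected_unique(n, x))                              # question 1 for an example x
print(sum(prob_exactly(n, x, j) for j in range(n + 1)))   # sanity check: sums to 1.0
```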
Split a list into sublists in C# I'd like to split a list into multiple lists; the list contains GUIDs. For example, if the main list contains 130 GUIDs and I have set the threshold at 50, it should return 3 lists: the 1st and 2nd lists should contain 50 GUIDs each, and the third list will have the remaining 30 GUIDs. How can we do it? Please help me out!!
Split List into Sublists with LINQ Is there any way I can separate a List<SomeObject> into several separate lists of SomeObject, using the item index as the delimiter of each split? Let me exemplify: I have a List<SomeObject> and I need a List<List<SomeObject>> or List<SomeObject>[], so that each of these resulting lists will contain a group of 3 items of the original list (sequentially). eg.: Original List: [a, g, e, w, p, s, q, f, x, y, i, m, c] Resulting lists: [a, g, e], [w, p, s], [q, f, x], [y, i, m], [c] I'd also need the resulting lists size to be a parameter of this function.
Coupon collector problem with partial collection of a specific set of coupons I am very new in probability and combinatorics and have a naive question around a variation on the coupon collector problem with partial collection. Lets assume we have a box with 45 coupons labeled 1-45. Now in this case I would like to adjust the CCP such that I can calculate the expected value (amount of draws necessary) to collect 10 specific items. For example item 1-10. How do I adjust my model such that I can calculate the amount of draws necessary to collect each item n times. I assume that I have to adjust CCP2 in following post () to include the probability that I catch one item is 10/45. All tips and tricks are welcome! Thanks for your help
eng_Latn
34,537
Why we don't need to cover all possibilities in calculating expected value We choose 10 cards at random from a standard deck of 52 cards. Find the EV of the number of aces. If we pick a card and then replace it, the EV is 10*1/13. Why is the EV also 10*1/13 when we get an ace and don't put it back? Why don't we need to cover all possibilities here? EV: expected value
If you draw two cards, what is the probability that the second card is a queen? We had this question arise in class today and I still don't understand the answer given. We were to assume that drawing cards are independent events. We were asked what the probability that the second card drawn is a queen if we take two from the deck. The answer given was 4/52, which seems counter-intuitive to me. How is the probability still 4/52 if there was a card drawn before it? What if the first card drawn was a queen?
Sampling with replacement or without replacement I'm writing a program in R that simulates bank losses on car loans. Here is the questions I'm trying to solve: You run a bank that has a history of identifying potential homeowners that can be trusted to make payments. In fact, historically, in a given year, only 2% of your customers default. You want to use stochastic models to get an idea of what interest rates you should charge to guarantee a profit this upcoming year. A. Your bank gives out 1,000 loans this year. Create a sampling model and use the function sample() to simulate the number of foreclosure in a year with the information that 2% of customers default. Also suppose your bank loses $120,000 on each foreclosure. Run the simulation for one year and report your loss. B. Note that the loss you will incur is a random variable. Use Monte Carlo simulation to estimate the distribution of this random variable. Use summaries and visualization to describe your potential losses to your board of trustees. C. The 1,000 loans you gave out were for 180,000. The way your bank can give out loans and not lose money is by charging an interest rate. If you charge an interest rate of, say, 2% you would earn 3,600 for each loan that doesn't foreclose. At what percentage should you set the interest rate so that your expected profit totals 100,000. Hint: Create a sampling model with expected value 100 so that when multiplied by the 1,000 loans you get an expectation of 100,000. Corroborate your answer with a Monte Carlo simulation. I'm confused about how to set up this simulation up from a high level point of view and have the following questions: 1. For part A, Should I create a pool of 1000 customers or should I create a larger pool of customers? 2. For part A, when sampling, do I sample with or without replacement? 3. For part B, I'm confused about how to set up the monte carlo simulation. Am I varying the size of the customer pool? 4. For part C, I'm not sure how to set up a sampling model that involves the interest rate. Any advice or guidance would be appreciated. I'm also thinking that if I fully understood the high level concepts for parts A and B, part C might not be such a mystery.
eng_Latn
34,538
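On the ace question above: linearity of expectation does not require independence, and each of the 10 draws is marginally equally likely to be any of the 52 cards whether or not cards are replaced, so the expectation is $10\cdot 4/52$ in both cases without enumerating joint outcomes. A quick simulation sketch:

```python
import random

def mean_aces(replace, trials=100_000):
    deck = ["ace"] * 4 + ["other"] * 48
    total = 0
    for _ in range(trials):
        if replace:
            hand = [random.choice(deck) for _ in range(10)]   # with replacement
        else:
            hand = random.sample(deck, 10)                    # without replacement
        total += hand.count("ace")
    return total / trials

print(mean_aces(True), mean_aces(False), 10 * 4 / 52)  # all approximately 0.769
```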
Probability of tossing an even/odd number of heads Let's say I toss a coin $n$ times where $n \geq 1$. Is it always the case that the probability of getting an even number of heads is $\frac{1}{2}$? If so, can someone explain mathematically why? Edit: as an additional question, if I introduce some subset of the $n$ coins as unfair coins (i.e. the probability of getting heads may not be 0.5), how does this affect the probability of getting an even number of heads?
A coin is tossed $n$ times. What is the probability of getting an odd number of heads? A coin is tossed $n$ times. What is the probability of getting an odd number of heads? I started this chapter some time ago and came up against a tough problem. At first I started considering cases. Case I: the probability of getting 1 head. Case II: the probability of getting 3 heads, and so on. But there are many cases. So how can I solve this? Please help me. Thank you!
What's wrong with this equal probability solution for Monty Hall Problem? I'm confused about why we should change door in the Monty Hall Problem, when thinking from a different perspective gives me equal probability. Think about this first: if we have two doors, and one car behind one of them, then we have a 50/50 chance of choosing the right door. Back to Monty Hall: after we pick a door, one door is opened and shows a goat, and the other door remains closed. Let's call the door we picked A and the other closed door B. Now since 1 door has already been opened, our knowledge has changed such that the car can only be behind A or B. Therefore, the problem is equivalent to: given two closed doors (A and B) and one car, which door should be chosen (we know it's a 50/50 thing)? Then, not switching door = choosing A, and switching door = choosing B. Therefore, it seems that switching should be equally likely, instead of more likely. Another way to think: no matter which door we choose from the three, we know BEFOREHAND that we can definitely open a door with a goat in the remaining two. Therefore, showing an open door with a goat reveals nothing new about which door has the car. What's wrong with this thinking process? (Note that I know the argument why switching gives advantage, and I know experiments have been done to prove that. My question is why the above thinking, which seems legit, is actually wrong.) Thanks.
eng_Latn
34,539
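For the even-heads questions above, the identity $P(\text{even}) - P(\text{odd}) = E[(-1)^X] = \prod_i (1-2p_i)$ gives $P(\text{even}) = \tfrac12\bigl(1 + \prod_i (1-2p_i)\bigr)$; with fair coins every factor is zero, so the answer is exactly $\tfrac12$ for any $n\ge 1$, and it stays exactly $\tfrac12$ with unfair coins as long as at least one coin in the set is fair. A sketch checking the formula against brute force:

```python
from itertools import product

def p_even_exact(probs):
    # probs[i] = P(heads) for coin i; closed form for P(even number of heads)
    prod_term = 1.0
    for p in probs:
        prod_term *= 1 - 2 * p
    return (1 + prod_term) / 2

def p_even_brute(probs):
    # brute-force over all head/tail patterns, as a sanity check
    total = 0.0
    for pattern in product([0, 1], repeat=len(probs)):
        if sum(pattern) % 2 == 0:
            pr = 1.0
            for heads, p in zip(pattern, probs):
                pr *= p if heads else 1 - p
            total += pr
    return total

fair = [0.5] * 6
biased = [0.7, 0.3, 0.9, 0.6]
print(p_even_exact(fair), p_even_brute(fair))       # 0.5 and 0.5
print(p_even_exact(biased), p_even_brute(biased))   # equal to each other, not 0.5
```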
JavaScript returns unexpected values when calculating numbers Please see the code snippet; JavaScript returns unexpected values when calculating numbers. I'm using the newest Firefox. Is there a way I could always get the correct value? console.log(100+59.59, "expected 159.59"); console.log(200+59.59, "expected 259.59"); console.log(300+59.59, "expected 359.59"); console.log(400+59.59, "expected 459.59"); console.log(500+59.59, "expected 559.59"); console.log(600+286.59, "expected 886.59"); console.log(700+286.59, "expected 986.59");
Is floating point math broken? Consider the following code: 0.1 + 0.2 == 0.3 -> false 0.1 + 0.2 -> 0.30000000000000004 Why do these inaccuracies happen?
Maximizing expected value of coin reveal game I was asked this question today in an interview and am still not sure of the answer. The question is as follows. Part one: Say I have flipped 100 coins already and they are hidden to you. Now I reveal these coins one by one and you are to guess if it is heads or tails before I reveal the coin. If you get it correct, you get $\$1$. If you get it incorrect, you get $\$0$. I will allow you to ask me one yes or no question about the sequence for free. What will it be to maximize your profit? My approach for this part of the problem was to ask if there were more heads than tails. If they say yes, I will just guess all heads otherwise I just guess all tails. I know the expected value for this should be greater than 50 but is it possible to calculate the exact value for this? If so, how would you do it? Part two: Same scenario as before but now I will charge for a question. I will allow you to ask me any amount of yes or no questions as I go through this process for $\$1$. What is your strategy to maximize your profit? I was not sure about the answer to this part of the question. Would the best option be to guess randomly? I think the expected value of this should be 50. I am not sure about the expected value of part one but if it is greater than 51, I think I could also use that approach. Anyone have a good idea for this part?
eng_Latn
34,540
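The behaviour in the JavaScript question above is not specific to JavaScript: JS numbers and Python floats are both IEEE-754 binary doubles, and 59.59 has no exact binary representation, so some sums pick up a tiny representation error that shows when printed. The usual remedies are comparing with a tolerance, rounding only for display, or doing money arithmetic in decimal. A Python sketch (the exact printed digits may vary by operation):

```python
from decimal import Decimal
from math import isclose

print(0.1 + 0.2 == 0.3)                    # False: the classic binary-float artifact
print(isclose(0.1 + 0.2, 0.3))             # True: compare with a tolerance instead
print(300 + 59.59)                         # may print a long tail such as ...000000003
print(round(300 + 59.59, 2))               # 359.59: round only when displaying
print(Decimal("300") + Decimal("59.59"))   # 359.59 exactly: decimal arithmetic for money
```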
How to find the total number of exchanges? Given n objects held by n people, how to find the total number of valid exchanges, where a valid exchange means that all persons hold different objects after the exchange? E.g. for 4 objects 1 2 3 4, the valid exchanges are: 2 1 4 3 2 3 4 1 2 4 1 3 3 1 4 2 3 4 1 2 3 4 2 1 4 1 2 3 4 3 1 2 4 3 2 1 Therefore the total number of valid exchanges is 9.
I have a problem understanding the proof of Rencontres numbers (Derangements) I understand the whole concept of Rencontres numbers but I can't understand how to prove this equation $$D_{n,0}=\left[\frac{n!}{e}\right]$$ where $[\cdot]$ denotes the rounding function (i.e., $[x]$ is the integer nearest to $x$). This equation that I wrote comes from solving the following recursion, but I don't understand how exactly the author calculated this recursion. $$\begin {align*} D_{n+2,0} & =(n+1)(D_{n+1,0}+D_{n,0}) \\ D_{0,0} & = 1 \\ D_{1,0} & = 0 \end {align*} $$
Maximizing expected value of coin reveal game I was asked this question today in an interview and am still not sure of the answer. The question is as follows. Part one: Say I have flipped 100 coins already and they are hidden to you. Now I reveal these coins one by one and you are to guess if it is heads or tails before I reveal the coin. If you get it correct, you get $\$1$. If you get it incorrect, you get $\$0$. I will allow you to ask me one yes or no question about the sequence for free. What will it be to maximize your profit? My approach for this part of the problem was to ask if there were more heads than tails. If they say yes, I will just guess all heads otherwise I just guess all tails. I know the expected value for this should be greater than 50 but is it possible to calculate the exact value for this? If so, how would you do it? Part two: Same scenario as before but now I will charge for a question. I will allow you to ask me any amount of yes or no questions as I go through this process for $\$1$. What is your strategy to maximize your profit? I was not sure about the answer to this part of the question. Would the best option be to guess randomly? I think the expected value of this should be 50. I am not sure about the expected value of part one but if it is greater than 51, I think I could also use that approach. Anyone have a good idea for this part?
eng_Latn
34,541
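The count 9 in the exchange question above is the derangement number $D_4$, and the recursion quoted in the Rencontres question generates it directly; the rounded $n!/e$ formula follows because $D_n/n!$ is a partial sum of the alternating series for $1/e$ whose tail is below $\tfrac12$ for $n\ge 1$. A small sketch checking both against each other:

```python
from math import factorial, e

def derangements(n):
    # D(0) = 1, D(1) = 0, D(k) = (k - 1) * (D(k - 1) + D(k - 2))
    d = [1, 0]
    for k in range(2, n + 1):
        d.append((k - 1) * (d[k - 1] + d[k - 2]))
    return d[n]

for n in range(2, 9):
    print(n, derangements(n), round(factorial(n) / e))  # the two columns agree

print(derangements(4))  # 9, matching the count of valid exchanges for 4 objects
```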
Is it better to spawn your first overlord at 9 or 10 supply? Assuming you're not doing some early cheese build, I'd think this would be something you could just do some math on to figure out which is better. Has this problem been solved? Which is better, and what is the math behind it?
Why does everyone do 9-overlord, not 10-overlord? What is the advantage on saving the money for the overlord when you have 9 drones, and not filling your supply first and then getting the overlord? In my theory, in the first case you get to wait the saving time with 9 drones, and get to wait almost the same time with 10 drones in the second case. Second case should be better, since 10 drones mine faster than 9? So why does everybody go with the first case? All this is assuming you go 12 pool or something like that, where you will get more drones after the overlord. 6- and 8-pools are obviously different.
Why is the last digit of $n^5$ equal to the last digit of $n$? I was wondering why the last digit of $n^5$ is that of $n$? What's the proof and logic behind the statement? I have no idea where to start. Can someone please provide a simple proof or some general ideas about how I can figure out the proof myself? Thanks.
eng_Latn
34,542
Need to create a random number between 10 and 20 I am trying to generate a random number between 10 and 20 for my program, but the numbers being generated are less than 1 and are to 2 decimal places, e.g. 0.64, 0.34, etc. Dim TrigB As Random Dim numberb As Integer TrigB = New Random numberb = TrigB.Next(10, 20) TrigRdmb.Text = numberb.ToString What do I need to change so that it produces a number between 10 and 20? Thanks
Random integer in VB.NET I need to generate a random integer between 1 and n (where n is a positive whole number) to use for a unit test. I don't need something overly complicated to ensure true randomness - just an old-fashioned random number. How would I do that?
Poisson / exponential distribution Next weekend you will be participating in 12km cross country race on a mountain.The average time between two successive wild animal sightings on the mountain is reported to be 5 minutes (a) What is the probability that you see at least one wild animal in the 11th minute of the race given that you will see 3 wild animals since the start of the race? (b) What is the probability that it will take more than a quarter of a hour before you see a wild animal after ten minutes of running? Now I have attempted (a) , but I don't know whether my thinking is correct. (b) on the other hand makes little sense to me. My attempt: (a)$ X~ Poisson(\frac{1}{5})$ and $Y~Exponential(\frac{1}{5})$ $Pr(X>4) = Pr(X>1)$ (I am thinking that this is some variation of the memoryless property) $Pr(X>1) =Pr(Y<1)$ $Pr(Y<1)= 1 - (e^{(\frac{-1}{5})})$ Perhaps, I was a bit to vague in my attempt of (a). So here goes my attempt of(b).From my understanding of the question , it is asking what it the probability that the time between events(in this case animal sightings) is more than 25 minutes given you have been running for ten minutes Now from what I know the fact that you have been running ten minutes is irrelevant this is due to memory-less property of the exponential distribution so without further ado I present my attempt at (b) let $X $ be exponentially distributed random variable with $\lambda = 1/5$ then $Pr(X >15) = 1 - Pr(X <= 15)$
eng_Latn
34,543
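A side note on the VB.NET range question above: .NET's Random.Next(minValue, maxValue) treats the upper bound as exclusive, so Next(10, 20) yields 10-19 and Next(10, 21) is needed for an inclusive 20; fractional outputs like 0.64 suggest the displayed value came from something like Rnd() rather than from the Next call shown, though that is only a guess. For comparison, a Python sketch of the same inclusive range:

```python
import random

print(random.randint(10, 20))     # inclusive of both endpoints: 10..20
print(random.randrange(10, 21))   # randrange, like .NET's Next, excludes the stop value
```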
Expected value question Say I am rolling a 20-sided die. What is the expected number of rolls to have rolled each number? I haven't taken statistics in a while so my syntax might be off but this question has been bugging me for a while. Any help or direction is much appreciated.
How often do you have to roll a 6-sided die to obtain every number at least once? I've just played a game with my kids that basically boils down to: whoever rolls every number at least once on a 6-sided die wins. I won, eventually, and the others finished 1-2 turns later. Now I'm wondering: what is the expectation of the length of the game? I know that the expectation of the number of rolls till you hit a specific number is $\sum_{n=1}^\infty n\frac{1}{6}(\frac{5}{6})^{n-1}=6$. However, I have two questions: How many times to you have to roll a six-sided die until you get every number at least once? Among four independent trials (i.e. with four players), what is the expectation of the maximum number of rolls needed? [note: it's maximum, not minimum, because at their age, it's more about finishing than about getting there first for my kids] I can simulate the result, but I wonder how I would go about calculating it analytically. Here's a Monte Carlo simulation in Matlab mx=zeros(1000000,1); for i=1:1000000, %# assume it's never going to take us >100 rolls r=randi(6,100,1); %# since R2013a, unique returns the first occurrence %# for earlier versions, take the minimum of x %# and subtract it from the total array length [~,x]=unique(r); mx(i,1)=max(x); end %# make sure we haven't violated an assumption assert(numel(x)==6) %# find the expected value for the coupon collector problem expectationForOneRun = mean(mx) %# find the expected number of rolls as a maximum of four independent players maxExpectationForFourRuns = mean( max( reshape( mx, 4, []), [], 1) ) expectationForOneRun = 14.7014 (SEM 0.006) maxExpectationForFourRuns = 21.4815 (SEM 0.01)
What is syntax highlighting and how does it work? I noticed that sometimes my code gets highlighted in different colors when rendered. What is syntax highlighting? How does it work? Why isn't my code being highlighted correctly? How do I report a bug or request a new language? How do I use syntax highlighting? What languages are currently available on Stack Exchange?
eng_Latn
34,544
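The 20-sided question above is the coupon collector problem answered in the duplicate: when $i$ faces are still unseen, the wait for a new one is geometric with mean $n/i$, so the total expectation is $n\,H_n$. A two-line sketch:

```python
from fractions import Fraction

def expected_rolls(n):
    # coupon collector: expected rolls to see every face of an n-sided die
    return n * sum(Fraction(1, i) for i in range(1, n + 1))

print(float(expected_rolls(6)))    # 14.7, matching the 6-sided answer
print(float(expected_rolls(20)))   # about 71.95 for a 20-sided die
```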
Expected number of variables that are at least n I wasn't sure what to title this. We have n objects distributed uniformly and independently at random among n people. Calculate the expected number of people that receive at least 5 objects. I know how to calculate the probability that one person receives at least 5 objects, but I do not know how to calculate the expected number, as it seems that the probability of the second person receiving 5 objects depends on the first, and so on. Is this just a matter of using an indicator variable for each person, or is there more involved?
n tasks assigned to n computers, what is the EX value of a computer getting 5 or more tasks? Say a central server assigns each of n tasks uniformly and independently at random to n computers connected to it on a network. Say a computer is ‘overloaded’ if it receives 5 or more tasks. Q: Calculate the expected number of overloaded computers. I thought of doing [1 - Pr(a computer is not overloaded)] but that leads me to a complicated expression of: $$1 - PR(NotOver) = 1 - \sum_{i=0}^4 \left( \frac{1}{n} \right)^{i} { \left( \frac{n-1}{n} \right)}^{n-i}$$ multiplying this by n would(hopefully) give the Expected value. But the answer seems not very elegant atall, is there something I'm missing or an easier way to tackle this? Thanks!
How do you calculate the likelihood of drawing certain cards in your opening hand? In Magic, at the start of the game, you draw 7 cards. How would you calculate the likelihood of drawing a specific card in your opening hand? For example, let's say I have a 60 card deck, and I'm running 4 . What is the percent chance that I will have at least one Bird in my opening hand?
eng_Latn
34,545
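For the two load-balancing questions above: yes, it is exactly an indicator-variable argument. The count a fixed person (or computer) receives is Binomial(n, 1/n); the counts are dependent across people, but linearity of expectation ignores that, so the answer is $n\cdot P(\mathrm{Bin}(n,1/n)\ge 5)$. A sketch (the n = 100 example is my own choice):

```python
from math import comb

def p_at_least(n, k):
    # P(a fixed person receives at least k of the n objects), Binomial(n, 1/n)
    p = 1 / n
    return 1 - sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k))

def expected_overloaded(n, k=5):
    # linearity of expectation over one indicator per person; dependence is irrelevant
    return n * p_at_least(n, k)

print(expected_overloaded(100))   # example value for n = 100
```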
Stuck on an assignment for conditional probability So I'm working on an assignment in probability regarding conditional probability that states the following: Each of 25 exam papers has 2 questions written on it. None of the 50 questions repeats itself. The student knows the answers to 44 questions. In order for the student to pass the exam, he must either correctly answer the two questions on the paper he has chosen, or correctly answer one question on the first paper he has chosen and the first question on the second paper he has chosen. What is the probability that the student will pass the exam? Any ideas? Any sort of help will be appreciated. Thank you in advance!
Probability of student passing an exam There're $25$ tests, each with $2$ questions. There're total of $50$ different questions (the questions do not repeat). The student knows the answer of $44$ questions. To pass the exam, the student needs to correctly answer both questions on the first test he drew, or answer $1$ question from the test he drew first, and $1$ question from the test he drew second. (meaning he draws twice if he answers a question incorrectly on his first try) What is the probability that the student will pass the exam? I have never faced a problem like this one so any help is appreciated. I was thinking about approaching it "manually" with some applied combinatorics but I feel like there's a much easier way that I am not familiar with.
Game Theory and Uniform Distribution question? In an Auction , two players are bidding. Their bids will be a unknown fraction of their valuations. The valuations come from a uniform distribution $$[0,1] $$ If Player 2 bids $$ v/2 $$ and Player 1 bids $$b1<1/2$$ What is the probability player 1 wins ? Clearly for player 1 to win,players 2 bid has to be less than player 1 bids. $$P(v/2 < b1)$$ $$P(v < 2b1)$$ I follow the question up to this stage. Now it says since its uniformly distributed the probability player 1 wins is $$2b1$$ Im confused how can you just get 2b1 from the inequality ? Thanks in advance
eng_Latn
34,546
How to generate combinations from arrays? Consider the case of having three arrays: X = {A , B , C}; Y = {D , E , F}; Z = {G , H , I}; How to generate all the possible combinations from these three arrays ( C++ or Python ), that's like C1 = {A , D , G}; C2 = {A , D , H}; ... C4 = {A, E , G}; ... C10 = {B , D , G}; ... ...
Get the cartesian product of a series of lists? How can I get the Cartesian product (every possible combination of values) from a group of lists? Input: somelists = [ [1, 2, 3], ['a', 'b'], [4, 5] ] Desired output: [(1, 'a', 4), (1, 'a', 5), (1, 'b', 4), (1, 'b', 5), (2, 'a', 4), (2, 'a', 5) ...]
Probability distribution in the coupon collector's problem I'm trying to solve the well known Coupon Collector's Problem by explicitly finding the probability distribution (so far all the methods I read involve using some sort of trick). However, I'm not having much luck getting anywhere as combinatorics is not something I'm particularly good at. The Coupon Collector's Problem is stated as: There are $m$ different kinds of coupons to be collected from boxes. Assuming each type of coupon is equally likely to be found per box, what's the expected amount of boxes one has to buy to collect all types of coupons? What I'm attempting: Let $N$ be the random variable associated with the number of boxes one has to buy to find all coupons. Then $P(N=n)=\frac{|A_n|}{|\Omega _n|}$, where $A_n$ is the set of all outcomes such that all types of coupons are observed in $n$ buys, and $\Omega _n$ is the set of all the possible outcomes in $n$ buys. I think $|\Omega _n| = m^n$, but I'm not even sure about that anymore, as all my attempts so far led to garbage probabilities that either diverged or didn't sum up to 1.
eng_Latn
34,547
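Both array-combination questions above are asking for the Cartesian product; in Python that is itertools.product, and in C++ the same result comes from three nested loops (or an odometer-style index walk for arbitrarily many arrays). A sketch using the letters from the question:

```python
from itertools import product

X = ["A", "B", "C"]
Y = ["D", "E", "F"]
Z = ["G", "H", "I"]

combos = list(product(X, Y, Z))      # 27 tuples: (A, D, G), (A, D, H), ...
print(combos[:4])

# equivalent nested-loop version, handy when porting to C++ as three for-loops
manual = [(x, y, z) for x in X for y in Y for z in Z]
assert manual == combos
```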
Find the probability that more than 50 of the observations of the random sample are less than 3 Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n = 72$ from a distribution with pdf $f(x) = \begin{cases} 1/x^2, & 1 < x < \infty \\ 0, & \text{otherwise} \end{cases}$ Find the probability that more than 50 of the observations of the random sample are less than 3.
Compute approximately the probability that more than 50 of the observations of the random sample are less than 3. Let $X_1,X_2,...,X_n$ be a random sample of size $n = 72$ from a distribution with probability density function $f(x) =\begin{cases} 1/x^2,& 1 < x < \infty\\0,&\text{otherwise}\end{cases}$ Compute approximately the probability that more than $50$ of the observations of the random sample are less than $3$.
Coupon collector's problem: mean and variance in number of coupons to be collected to complete a set (unequal probabilities) There are $n$ coupons in a collection. A collector has the ability to purchase a coupon, but can't choose the coupon he purchases. Instead, the coupon is revealed to be coupon $i$ with probability $p_i=\frac 1 n$. Let $N$ be the number of coupons he'll need to collect before he has at least one coupon of each type. Find the expected value and variance of $N$. Bonus: generalize to the case where the probability of collecting the $j$th coupon is $p_j$ with $\sum\limits_{j=1}^n p_j=1$ I recently came across this problem and came up with/ unearthed various methods to solve it. I'm intending this page as a wiki with various solutions. I'll be posting all the solutions I'm aware of (4 so far) over time. EDIT: As mentioned in the comments, this question is different from the one people are saying it is a duplicate of since (for one thing) it includes an expression for the variance and it covers the general case where all coupons have unequal probabilities. The case of calculating the variance for the general case of coupons having unequal probabilities has not been covered anywhere on the site apart from , which this one intends to consolidate along with other approaches to solve this problem. EDIT: Paper on the solutions on this page submitted to ArXiv:
eng_Latn
34,548
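For the two binomial questions above: each observation falls below 3 with probability $p=\int_1^3 x^{-2}\,dx = 2/3$, so the number of such observations is Binomial(72, 2/3) with mean 48 and standard deviation 4, and the requested probability is $P(\text{count} > 50)$. A sketch computing it exactly and with the continuity-corrected normal approximation the exercise presumably intends:

```python
from math import comb, erf, sqrt

n, p = 72, 2 / 3        # p = P(X < 3) = integral of x**-2 from 1 to 3 = 2/3
exact = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(51, n + 1))

mu, sd = n * p, sqrt(n * p * (1 - p))                      # 48 and 4
approx = 0.5 * (1 - erf((50.5 - mu) / (sd * sqrt(2))))     # continuity correction at 50.5

print(exact, approx)    # both are roughly 0.26-0.27
```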
Changing Pixels Per Unit First off, excuse my ignorance. I am just getting started in game dev. and ran into a little issue. So I had these weird rippling-lines going across my screen, and saw in a couple forums that this means I don't have what is called a "pixel perfect" game. To fix this, I am supposed to change the pixels-per-unit of each sprite down to 1. Currently I had it on 32 because that is my sprite size and I am a complete noob. Now, I change it down to one but it completely destroyed my game. Everything that is generated on my map/grid is sucked into the center in this giant orb of sprites (I procedurally generate my map off a grid of 1/0s). I am guessing there is just a setting or some such thing that I am missing. Any ideas? Also would be happy with an alternative to remove the rippling lines.
Pixel Perfect 2D tiles I'm trying to make an 64x64 tilemap using Super Tilemap Editor & Unity but my tiles looks terrible. I did my research about pixel perfect 2D and all of the solutions were for pixel art, they didn't work for me. I have the following tileset I made in Adobe Illustrator as an example Tiles looks like this when placed in game (1x zoom) I'm pretty sure they're not 64x64 in game, they also look pixelated probably because they're smaller than they actually are on camera. So my question is, how can I calculate the correct camera size for 64x64 tiles and do I have to make any changes in order to make sprites look pixel perfect? I'm currently using 1080/64/2=8.4375 (vertical resolution / PPU / 2) as camera size.
How often do you have to roll a 6-sided die to obtain every number at least once? I've just played a game with my kids that basically boils down to: whoever rolls every number at least once on a 6-sided die wins. I won, eventually, and the others finished 1-2 turns later. Now I'm wondering: what is the expectation of the length of the game? I know that the expectation of the number of rolls till you hit a specific number is $\sum_{n=1}^\infty n\frac{1}{6}(\frac{5}{6})^{n-1}=6$. However, I have two questions: How many times to you have to roll a six-sided die until you get every number at least once? Among four independent trials (i.e. with four players), what is the expectation of the maximum number of rolls needed? [note: it's maximum, not minimum, because at their age, it's more about finishing than about getting there first for my kids] I can simulate the result, but I wonder how I would go about calculating it analytically. Here's a Monte Carlo simulation in Matlab mx=zeros(1000000,1); for i=1:1000000, %# assume it's never going to take us >100 rolls r=randi(6,100,1); %# since R2013a, unique returns the first occurrence %# for earlier versions, take the minimum of x %# and subtract it from the total array length [~,x]=unique(r); mx(i,1)=max(x); end %# make sure we haven't violated an assumption assert(numel(x)==6) %# find the expected value for the coupon collector problem expectationForOneRun = mean(mx) %# find the expected number of rolls as a maximum of four independent players maxExpectationForFourRuns = mean( max( reshape( mx, 4, []), [], 1) ) expectationForOneRun = 14.7014 (SEM 0.006) maxExpectationForFourRuns = 21.4815 (SEM 0.01)
eng_Latn
34,549
For a one-shot, how do I handle starting equipment above level 1? I'm looking to do a one-shot to try my hand at DM-ing. In addition to this, I'm also building the setting and objective by myself as well. I've decided for everyone to be level 5 and have built encounters around it; however, I want to be more accurate to the equipment a level 5 group might acquire. I'm assuming they have the starting equipment from level 1, plus whatever types of items they can be reasonably expected to acquire in 4 levels. Obviously, characters with magic items will fare much better against monsters with certain resistances to non-magic items. I want everyone to roll characters just before the session starts. How do I handle their starting equipment when they are above first level (specifically level 5)? Do I give them gold and let them buy directly from the PHB? How much then?
What's the starting wealth for higher levels? The PHB only gives starting wealth for Level 1 PCs. Based on my experience in other games, specifically Pathfinder, I would expect to get more at higher levels. Where can I find an appropriate table or calculation?
How often do you have to roll a 6-sided die to obtain every number at least once? I've just played a game with my kids that basically boils down to: whoever rolls every number at least once on a 6-sided die wins. I won, eventually, and the others finished 1-2 turns later. Now I'm wondering: what is the expectation of the length of the game? I know that the expectation of the number of rolls till you hit a specific number is $\sum_{n=1}^\infty n\frac{1}{6}(\frac{5}{6})^{n-1}=6$. However, I have two questions: How many times to you have to roll a six-sided die until you get every number at least once? Among four independent trials (i.e. with four players), what is the expectation of the maximum number of rolls needed? [note: it's maximum, not minimum, because at their age, it's more about finishing than about getting there first for my kids] I can simulate the result, but I wonder how I would go about calculating it analytically. Here's a Monte Carlo simulation in Matlab mx=zeros(1000000,1); for i=1:1000000, %# assume it's never going to take us >100 rolls r=randi(6,100,1); %# since R2013a, unique returns the first occurrence %# for earlier versions, take the minimum of x %# and subtract it from the total array length [~,x]=unique(r); mx(i,1)=max(x); end %# make sure we haven't violated an assumption assert(numel(x)==6) %# find the expected value for the coupon collector problem expectationForOneRun = mean(mx) %# find the expected number of rolls as a maximum of four independent players maxExpectationForFourRuns = mean( max( reshape( mx, 4, []), [], 1) ) expectationForOneRun = 14.7014 (SEM 0.006) maxExpectationForFourRuns = 21.4815 (SEM 0.01)
eng_Latn
34,550
Statistical reasoning in board game In each round of the board game "The Resistance" three players are randomly and secretly chosen to be spies while the rest of the players are part of the resistance. The spies are then made aware of each other while the players do not know the identity of the rest. Imagine that 8 persons are playing the game and that three rounds have been completed. A new round has just begun. Among the players are Ted and Bob. Bob thinks that Ted is a spy and tries to convince the other players: "I think that Ted is a spy", he says. Ted replies: "I have been a spy in the last three rounds. It is very unlikely that I am a spy four times in a row. It is more likely that I am good in this round. If you let $X$ denote the number of times I am a spy in the four rounds, then X will be binomially distributed. The expected value of $X$ is then $3/8\cdot 4=1.5$. So in two of the rounds I should be good and in two of the rounds I should be a spy. Thus I should be good in this round." Bob then says: "Your status in each round of the game is independent of your status in the other rounds, so your reasoning is wrong. The probability of you being a spy in this round is $3/8$." Which player is correct?
Does 10 heads in a row increase the chance of the next toss being a tail? I assume the following is true: assuming a fair coin, getting 10 heads in a row whilst tossing a coin does not increase the chance of the next coin toss being a tail, no matter what amount of probability and/or statistical jargon is tossed around (excuse the puns). Assuming that is the case, my question is this: how the hell do I convince someone that is the case? They are smart and educated but seem determined not to consider that I might be in the right on this (argument).
Covering ten dots on a table with ten equal-sized coins: explanation of proof Note: . I have moved it here because: I am curious about the answer The OP has not shown any interest in moving it himself In the Communications of the ACM, , Peter Winkler asked the following question: On the table before us are 10 dots, and in our pocket are 10 $1 coins. Prove the coins can be placed on the table (no two overlapping) in such a way that all dots are covered. Figure 2 shows a valid placement of the coins for this particular set of dots; they are transparent so we can see them. The three coins at the bottom are not needed. In the , he presented his proof: We had to show that any 10 dots on a table can be covered by non-overlapping $1 coins, in a problem devised by Naoki Inaba and sent to me by his friend, Hirokazu Iwasawa, both puzzle mavens in Japan. The key is to note that packing disks arranged in a honeycomb pattern cover more than 90% of the plane. But how do we know they do? A disk of radius one fits inside a regular hexagon made up of six equilateral triangles of altitude one. Since each such triangle has area $\frac{\sqrt{3}}{3}$, the hexagon itself has area $2 \sqrt{3}$; since the hexagons tile the plane in a honeycomb pattern, the disks, each with area $\pi$, cover $\frac{\pi}{2\sqrt{3}}\approx .9069$ of the plane's surface. It follows that if the disks are placed randomly on the plane, the probability that any particular point is covered is .9069. Therefore, if we randomly place lots of $1 coins (borrowed) on the table in a hexagonal pattern, on average, 9.069 of our 10 points will be covered, meaning at least some of the time all 10 will be covered. (We need at most only 10 coins so give back the rest.) What does it mean that the disks cover 90.69% of the infinite plane? The easiest way to answer is to say, perhaps, that the percentage of any large square covered by the disks approaches this value as the square expands. What is "random" about the placement of the disks? One way to think it through is to fix any packing and any disk within it, then pick a point uniformly at random from the honeycomb hexagon containing the disk and move the disk so its center is at the chosen point. I don't understand. Doesn't the probabilistic nature of this proof simply mean that in the majority of configurations, all 10 dots can be covered. Can't we still come up with a configuration involving 10 (or less) dots where one of the dots can't be covered?
eng_Latn
34,551
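Under the usual assumption that the three spies are drawn uniformly and independently in each round, Bob's figure is the right one: conditioning on Ted having been a spy in the first three rounds changes nothing about the fourth, so the probability is still 3/8, and Ted's appeal to the binomial mean of 1.5 is the gambler's fallacy. A quick simulation sketch of that conditional probability (the independence assumption is mine, matching the question's setup):

```python
import random

def simulate(players=8, spies=3, rounds=4, trials=300_000):
    ted = 0                                   # Ted is just player 0
    spy_first_three = spy_fourth = 0
    for _ in range(trials):
        history = [ted in random.sample(range(players), spies) for _ in range(rounds)]
        if all(history[:3]):                  # condition on spy in rounds 1-3
            spy_first_three += 1
            spy_fourth += history[3]
    return spy_fourth / spy_first_three

print(simulate(), 3 / 8)   # the conditional probability is still about 0.375
```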
Is this a correct use of Bayesian statistics when choosing a box A-E? Occasionally my friends and I attend a local pub quiz. In the final round of the quiz, the winning team is allowed to select one box from five, labelled: A, B, C, D, E In one of these boxes is a £250 prize. The others contain the following prizes: £1, £10, £20, £50 Once a decision has been made, the contents of the other boxes will be revealed one by one (complete with drum roll for dramatic effect). This continues until there are only two boxes left on the screen: The one that definitely has £250, and the box they picked. They are now given the opportunity to swap their box for the other box, and open that one instead. So let's say the team picked B. The first "reveal" has been done and now on the screen only boxes A and B are left. One definitely has the £250 in it. The team can now either open B (their first choice), or swap and open A. At this point, a friend of mine (who has a university degree in mathematics) always says the following: "Bayesian statistics says that they should swap boxes." I have no idea what "Bayesian statistics" is but, from a common-sense point of view, I see this as a straight 50-50 choice, and swapping should make no difference to one's chances of winning the top prize. So my questions are: 1) Is this a case where my friend should even be applying Bayesian statistics? 2) Does the Bayesian model say he is right that the team should swap their box at the end? Thanks in advance. This should help solve a rather acrimonious and pointless debate.
Two envelope problem revisited I was thinking of this problem. I believe the solution and I think I understand it, but if I take the following approach I'm completely confused. Problem 1: I will offer you the following game. You pay me \$10 and I will flip a fair coin. Heads I give you \$5 and Tails I give you \$20. The expectation is \$12.5 so you will always play the game. Problem 2: I will give you an envelope with \$10, the envelope is open and you can check. I then show you another envelope, closed this time and tell you: This envelope either has \$5 or $20 in it with equal probability. Do you want to swap? I feel this is exactly the same as problem 1, you forgo \$10 for a \$5 or a \$20, so again you will always switch. Problem 3: I do the same as above but close the envelopes. So you don't know there are $10 but some amount X. I tell you the other envelope has double or half. Now if you follow the same logic you want to switch. This is the envelope paradox. What changed when I closed the envelope?? EDIT: Some have argued that problem 3 is not the envelope problem and I'm going to try and provide below why I think it is by analysing how each views the game. Also, it gives a better set up for the game. Providing some clarification for Problem 3: From the perspective of the person organising the game: I hold 2 envelopes. In one I put \$10 close it and give it to the player. I then tell him, I have one more envelope that has either double or half the amount of the envelope I just gave you. Do you want to switch? I then proceed to flip a fair coin and Heads I put \$5 in and Tails I put \$20. And hand him the envelope. I then ask him. The envelope you just gave me has twice or half the amount of the envelope you are holding. Do you want to switch? From the perspective of the player: I am given an envelope and told there is another envelope that has double or half the amount with equal probability. Do I want to switch. I think sure I have $X$, hence $\frac{1}{2}(\frac{1}{2}X + 2X) > X$ so I want to switch. I get the envelope and all of a sudden I am facing the exact same situation. I want to switch again as the other envelope has either double or half the amount.
How can I get a significant overall ANOVA but no significant pairwise differences with Tukey's procedure? I performed with R an ANOVA and I got significant differences. However when checking which pairs were significantly different using the Tukey's procedure I did not get any of them. How can this be possible? Here is the code: fit5_snow<- lm(Response ~ Stimulus, data=audio_snow) anova(fit5_snow) > anova(fit5_snow) Analysis of Variance Table Response: Response Df Sum Sq Mean Sq F value Pr(>F) Stimulus 5 73.79 14.7578 2.6308 0.02929 * Residuals 84 471.20 5.6095 --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 df<-df.residual(fit5_snow) MSerror<-deviance(fit5_snow)/df comparison <- HSD.test(audio_snow$Response, audio_snow$Stimulus, df, MSerror, group=FALSE) > comparison <- HSD.test(audio_snow$Response, audio_snow$Stimulus, df, MSerror, group=FALSE) Study: HSD Test for audio_snow$Response Mean Square Error: 5.609524 audio_snow$Stimulus, means audio_snow.Response std.err replication snow_dry_leaves 4.933333 0.6208034 15 snow_gravel 6.866667 0.5679258 15 snow_metal 6.333333 0.5662463 15 snow_sand 6.733333 0.5114561 15 snow_snow 7.333333 0.5989409 15 snow_wood 5.066667 0.7713110 15 alpha: 0.05 ; Df Error: 84 Critical Value of Studentized Range: 4.124617 Comparison between treatments means Difference pvalue sig LCL UCL snow_gravel - snow_dry_leaves 1.9333333 0.232848 -0.5889913 4.455658 snow_metal - snow_dry_leaves 1.4000000 0.588616 -1.1223246 3.922325 snow_sand - snow_dry_leaves 1.8000000 0.307012 -0.7223246 4.322325 snow_snow - snow_dry_leaves 2.4000000 0.071587 . -0.1223246 4.922325 snow_wood - snow_dry_leaves 0.1333333 0.999987 -2.3889913 2.655658 snow_gravel - snow_metal 0.5333333 0.989528 -1.9889913 3.055658 snow_gravel - snow_sand 0.1333333 0.999987 -2.3889913 2.655658 snow_snow - snow_gravel 0.4666667 0.994348 -2.0556579 2.988991 snow_gravel - snow_wood 1.8000000 0.307012 -0.7223246 4.322325 snow_sand - snow_metal 0.4000000 0.997266 -2.1223246 2.922325 snow_snow - snow_metal 1.0000000 0.855987 -1.5223246 3.522325 snow_metal - snow_wood 1.2666667 0.687424 -1.2556579 3.788991 snow_snow - snow_sand 0.6000000 0.982179 -1.9223246 3.122325 snow_sand - snow_wood 1.6666667 0.393171 -0.8556579 4.188991 snow_snow - snow_wood 2.2666667 0.103505 -0.2556579 4.788991
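(Side note added here, not part of the original post.) The same comparison can also be run in base R, assuming the same audio_snow data frame and column names:
fit <- aov(Response ~ Stimulus, data = audio_snow)  # same model as the lm() fit above
summary(fit)                                        # omnibus F test
TukeyHSD(fit, conf.level = 0.95)                    # Tukey's HSD pairwise comparisons
A significant omnibus F with no individually significant Tukey pair is possible because the two procedures control error in different ways, so the output above is not necessarily contradictory.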
eng_Latn
34,552
Random placement of rooks on a chessboard $8$ rooks are placed randomly on an $8\times 8$ chess board. What is the probability of having exactly one rook each row and each column? I guess there is no meaningful order here?
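(A quick check added for reference; this is the standard counting argument, not text from the thread.) Treating the 8 rooks as indistinguishable and the squares as distinct, the favourable placements are exactly the permutation matrices, so in R:
favourable <- factorial(8)    # one rook per row and per column: 8! = 40320 placements
total      <- choose(64, 8)   # all ways to choose 8 of the 64 squares
favourable / total            # about 9.1e-06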
What is the probability that when you place 8 towers on a chess-board, none of them can beat the other. What is the probability that when you place 8 towers on a chess-board, none of them can beat the other. Attempt: ${64 \choose 8}^{-1} \approx1$ in $4\ 400\ 000\ 000$ Correct answer: ${64 \choose 8}^{-1} \cdot 8! \approx 1$ in $9\ 000\ 000$. I disagree with the $8!$. If there's combinations (binomial coefficient) in the denominator, why would there be permutations i.e. the order counts, in the numerator?
Sampling with replacement or without replacement I'm writing a program in R that simulates bank losses on car loans. Here is the questions I'm trying to solve: You run a bank that has a history of identifying potential homeowners that can be trusted to make payments. In fact, historically, in a given year, only 2% of your customers default. You want to use stochastic models to get an idea of what interest rates you should charge to guarantee a profit this upcoming year. A. Your bank gives out 1,000 loans this year. Create a sampling model and use the function sample() to simulate the number of foreclosure in a year with the information that 2% of customers default. Also suppose your bank loses $120,000 on each foreclosure. Run the simulation for one year and report your loss. B. Note that the loss you will incur is a random variable. Use Monte Carlo simulation to estimate the distribution of this random variable. Use summaries and visualization to describe your potential losses to your board of trustees. C. The 1,000 loans you gave out were for 180,000. The way your bank can give out loans and not lose money is by charging an interest rate. If you charge an interest rate of, say, 2% you would earn 3,600 for each loan that doesn't foreclose. At what percentage should you set the interest rate so that your expected profit totals 100,000. Hint: Create a sampling model with expected value 100 so that when multiplied by the 1,000 loans you get an expectation of 100,000. Corroborate your answer with a Monte Carlo simulation. I'm confused about how to set up this simulation up from a high level point of view and have the following questions: 1. For part A, Should I create a pool of 1000 customers or should I create a larger pool of customers? 2. For part A, when sampling, do I sample with or without replacement? 3. For part B, I'm confused about how to set up the monte carlo simulation. Am I varying the size of the customer pool? 4. For part C, I'm not sure how to set up a sampling model that involves the interest rate. Any advice or guidance would be appreciated. I'm also thinking that if I fully understood the high level concepts for parts A and B, part C might not be such a mystery.
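(A minimal sketch of part A, my own illustration rather than a known solution; variable names are made up, and the 2% default rate and $120,000 loss are the figures quoted above.)
set.seed(1)
n_loans  <- 1000
defaults <- sample(c(0, 1), size = n_loans, replace = TRUE, prob = c(0.98, 0.02))
loss     <- sum(defaults) * 120000   # total simulated loss for the year
loss
Wrapping this in replicate() gives the Monte Carlo distribution of losses asked for in part B.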
eng_Latn
34,553
Expected number of siblings in a family The question goes as follows: Given a family where the number of children is a random variable $(\mu=1.8, \sigma=0.36)$, if one selects a child at random, what is the expected number of siblings the child has? I stumbled upon this problem while reading Sheldon Ross's A First Course in Probability; I'm a physics undergraduate, still early in my studies (I just entered my 2nd year), and I'm trying to get a grasp on probability. Any guidance towards the solution will be deeply appreciated. Thanks in advance for the help. Cheers!
Expected number of siblings, given the expectation of children in a family The number of children in a family is a random variable $X$, where $ \mathbb E(X) = 1.8 $ and $Var(X) = 0.36 $. If we randomly choose a child, which is the expected number of its siblings? (it is given that the answer $> 0.8$) Let the number of a child's siblings be a random variable $Y$, so we are looking for $ \mathbb E(Y) $. If found that, $$ \mathbb E(Y) = \sum_{k=0}^{\infty}k \; \mathbb P(Y=k) = \sum_{k=0}^{\infty}k \; \mathbb P(X=k+1) = $$ $$ = \sum_{k=0}^{\infty}k \; \mathbb P(X=k)-\sum_{k=0}^{\infty}\mathbb P(X=k) +\mathbb P(X=0)= \mathbb E(X)-1+\mathbb P(X=0) $$ which is indeed greater than $0.8$, if $\; \mathbb P(X=0)>0$. But, I don't know how to make use of the variance to find the final answer. Thank you in advance
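(One standard way to bring the variance in, added here as a hint; it assumes the child is picked uniformly at random among all children, so larger families are sampled proportionally more often.) Writing $Y$ for the number of siblings of the sampled child, $$\mathbb{E}(Y)=\frac{\mathbb{E}[X(X-1)]}{\mathbb{E}[X]}=\frac{\operatorname{Var}(X)+\mathbb{E}[X]^2-\mathbb{E}[X]}{\mathbb{E}[X]}=\frac{0.36+1.8^2-1.8}{1.8}=1,$$ which is indeed greater than $0.8$.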
What kind of distribution is this "almost" uniformly distributed data for calls/week? My supervisor asked me to find out which distribution represents a particular situation. I have a VoIP generator that generates calls "uniformly" distributed between callers. This means that the volume per caller distribution is "almost" uniformly distributed between a minimum and maximum. So by running a test with 10000 users and a min value equal to 30 calls per week and a max value equal to 90 calls/week i obtain that not all the users respect this limits: we have some users that generate <30 calls and some other users that generate >90 calls. It is clear that the obtained distribution is not uniform. The situation is this: He said that i have to perform a sort of numerical process in order to find some formulas that could define this distribution. Initially, as wrote before,we wanted to obtain a uniform (min,max) distribution (the green area in the figure) but this is not the case as proved with chi-square test. Moreover the curve in figure is not symmetric, the probability of call generation below 30 call/week and greater that 90 call/week are not identical (it is high for 90calls/week). The variability of the number of generated call increases with the increasing call generation rates. "Actually implementation of this distribution is nothing but assigning different call rates in a range for users in domain which indicates implementation of several delta functions. As the call rates increases the variability of the generated calls also increases with the average call rate and this leads to the asymmetric behavior of the curve." [cit. from the Voipgenerator documentation] Someone can help me?I think that now i cannot use Q-Q plot because i don't know which theoretical distribution i have to use in order to compare it with my empirical data. Sorry if I have stressed with a similar problem a few weeks ago, but initially we thought we could change the implementation, but now we cannot. Hence i have to discover the type of the distribution i obtained and i don't know how can i do this.
eng_Latn
34,554
Finding the Probability that the Expected Outcome is the Actual Outcome
probability of getting 50 heads from tossing a coin 100 times
Direct proof that nilpotent matrix has zero trace
eng_Latn
34,555
Can we predict throwing a dice?
Is it really impossible to calculate in advance the result of throwing dice?
Is it really impossible to calculate in advance the result of throwing dice?
eng_Latn
34,556
A quick count in the college car park tells me that out of 40 cars parked there, 15 are hatch-backs. Assuming?
I don't really understand your question, but I would say if 15 are hatch-backs, then 15 people at college need a good bit of boot space. I would also deduce that 25 cars would not be hatchbacks, so I would say there are 25 teachers and 15 students (with their own cars).
Like the first answer, but let's expand on it a little. The reorder level is the point at which you need to order more of a specific item in order to maintain production or sales. Taking the muffler example a bit further: say you have 100 mufflers on hand, you average selling 50 mufflers per month (so you have a two-month supply on hand, given the average sales of that item), and it takes 2 weeks for any supplier to fulfill your order. Now the question remains: do you need to keep 100 mufflers on hand? Of course not. You could maintain a 4-week supply on hand (50) and set a reorder level of 25 (and then reorder 50), thereby reducing your stock on hand and saving your company some money in the process. To expand further, you also must take into account such things as product returns and defects. Hopefully the company you order from keeps these to a minimum, but that number should be added on as a "buffer" to ensure no empty shelves or lost production time while waiting for the part to arrive. Hope this helps.
eng_Latn
34,557
Probability question involving quadratic equation
Probability of Obtaining the Roots in a Quadratic Equation by Throwing a Die Three Times
Probability of Obtaining the Roots in a Quadratic Equation by Throwing a Die Three Times
eng_Latn
34,558
Can multiple doom tokens be removed in one hit in the final battle? If I have two players and my investigator is blessed and has a resultant fight of 11 and rolls 8 successes, do I outright take 4 doom tokens off the ancient one? We spent 2-3 hours slogging through the game and getting hurt at every turn, but when the final battle came, it was quick and short due to this situation, surprising me that it was so easy to defeat the ancient one after all that.
In the Final Battle, when you remove doom tokens for rolling successes against an Ancient One do you stop counting successes for that roll? In Arkham Horror you have to roll a cumulative number of successes equal to the number of players to knock off a doom token from the Elder God's health. If you reach the number of players worth of successes do you then stop counting for the player and move on or do you count the extra successes against the next doom token? We played it so that you kept on adding them but it meant that we turned Nyalothotep into mince meat quite quickly (One Character was blessed had a fight of 6 a shotgun and Fight +1 skill card though)
Is this a grammatically correct line in a poem: “Will he roll the dice, and follow it to Vegas?”? I want to use the following line in a poem: "Will he roll the dice, and follow it to Vegas?" A couple of things to note; firstly, obviously I'm using "roll the dice" in both a figurative/idiomatic sense (as in taking a risk), and in a somewhat more literal sense (painting the image of the person actually following the dice that have been rolled to Las Vegas). That being said, my confusion here is if I can get away with using "follow it" or if I need to say "follow them" .... As I've read online "dice" traditionally refers to more than one die (i.e., the plural of die), but in the modern usage, "dice" can also be used to refer to the singular (i.e., just one die). So, it seems like if I was referring to the image of just one die, then the way I phrased it could be correct. However, the image I'm trying to paint is the one usually associated with the phrase "roll the dice" (i.e., rolling two dice)... So could "follow it" still be considered acceptable in this sense... or would I need to use "follow them"?... And if not referring to the dice themselves, could using, could the "it" be taken to refer to the act of rolling the dice, or would that not work either? Thanks so much in advance!!
eng_Latn
34,559
My web host seems to have some trouble with DDoS attacks and routing overload. This makes my IP unavailable at times, and I'd like to add a failover IP for the domain. However, two A records act like a load balancer, which is NOT what I am looking for: I do not want the fallback machine to be accessed at any time except in case of real trouble or downtime of the first-choice IP. Is there an out-of-the-box solution that allows me such a failover IP? If not, and it requires a DNS reconfiguration: is there a recommended script or similar that can be run on the secondary machine (which is, of course, hosted with a different provider)? It would have to check whether the first-choice IP is down and then overwrite the A record, right? This requires my domain hoster to provide an API for the DNS settings...
This is a canonical question about DoS and DDoS mitigation. I found a massive traffic spike on a website that I host today; I am getting thousands of connections a second and I see I'm using all 100 Mbps of my available bandwidth. Nobody can access my site because all the requests time out, and I can't even log into the server because SSH times out too! This has happened a couple of times before, and each time it lasted a couple of hours and went away on its own. Occasionally, my website has another distinct but related problem: my server's load average (which is usually around 0.25) rockets up to 20 or more, and nobody can access my site, just the same as in the other case. It also goes away after a few hours. Restarting my server doesn't help; what can I do to make my site accessible again, and what is happening? Relatedly, I found once that for a day or two, every time I started my service, it got a connection from a particular IP address and then crashed. As soon as I started it up again, this happened again and it crashed again. How is that similar, and what can I do about it?
When a divination wizard character uses their Portent ability: Starting at 2nd level when you choose this school, glimpses of the future begin to press in on your awareness. When you finish a long rest, roll two d20s and record the numbers rolled. You can replace any attack roll, saving throw, or ability check made by you or a creature that you can see with one of these foretelling rolls. You must choose to do so before the roll, and you can replace a roll in this way only once per turn. ... to replace a roll, can a player with the Lucky Feat: Whenever you make an attack roll. an ability check, or a saving throw, you can spend one luck point to roll an additional d20. You can choose to spend one of your luck points after you roll the die, but before the outcome is determined. You choose which of the d20s is used for the attack roll, ability check, or saving throw. You can also spend one luck point when an attack roll is made against you. Roll a d20, and then choose whether the attack uses the attackers roll or yours. If more than one creature spends a luck point to influence the outcome of a roll, the points cancel each other out; no additional dice are rolled. ... use the Luck Roll to replace the result? Or does the Portent trump all rolls and simply give you the final outcome? This question spun off from on . Some relevant points: . Mike Mearls tends to believe that luck can triumph in some cases, but admits this is , and further seems to conflate the question with . I would suggest reading the linked discussion before formulating an answer. There's some good thought there.
eng_Latn
34,560
How do I find out how many people use a particular PPA? On one particular PPA's page I see: Package build summary: A total of 108 builds have been dispatched for this PPA. Completed builds: 108 successful, 0 failed. Does that mean only 108 people have added this PPA to their sources list and installed the package?
I'd like to know how many downloads of a given package in a PPA there have been since it was first published. I remember there was some discussion about getting these metrics into the web UI, but as far as I know, it never got implemented. But I think the number of downloads can nevertheless be obtained programmatically if I'm the owner of that PPA. Any pointers?
Say I have the titanic kaggle competition, but I'm not interested in the competition for predicting survival for each individual. Instead I want the most accurate estimate of total survivors on the titanic. Would this be achieved by using a probabilistic model, then adding the probabilities for each individual? For example, if I have 3 people and 1 survived, but my model produced 0.4, 0.4, and 0.4 probabilities for each person to survive, I calculate 0 survived. But if I add 0.4 for each person, I get 1.2, which is closer to the actual. Does this make sense?
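(A tiny illustration of the idea, with made-up numbers rather than actual competition data.)
p_hat <- c(0.4, 0.4, 0.4)   # predicted survival probabilities for three passengers
sum(round(p_hat))           # thresholding at 0.5 predicts 0 survivors
sum(p_hat)                  # summing probabilities gives the expected total, 1.2
If the predicted probabilities are reasonably well calibrated, the sum of probabilities is the model's expected count and will usually track the true total better than counting thresholded labels.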
eng_Latn
34,561
3 steps to find one of 12 balls Imagine you have 12 balls, all of the same color. We know that one ball differs in weight from the others, but we don't know whether it is heavier or lighter. You have a beam balance without measurements on it, just two pans on which you can put any number of balls. In three weighings, find the ball that differs in weight from the other balls.
A logic puzzle involving a balance. Possible Duplicate: You have 12 balls and you know that they all weigh the same except for 1 which is heavier or lighter than all the others (you don't know which though). How can you make sure you know which ball is the heaviest/lightest in only 3 weighings? The way I approached it was to split up the 12 balls into three sets of 4 and weigh two of the sets. If the sets balanced the scale, then I know the ball I am looking for must be in the set of 4 balls not weighed, else, I disregard said set and arbitrarily choose the heaviest set of 4 (as opposed to choosing the lightest set). I split the heaviest set of 4 balls into 2 and weigh that... etc. Repeating this process until all 3 tries have been "used up", even if everything just so happened to be in your favor (the arbitrary choice you have in choosing the heaviest or lightest set happens to be the correct choice) in the end you still end up having to choose between 2 balls. A 50% chance is good, but I am wondering, is there a way to make sure 100%?
Is this a grammatically correct line in a poem: “Will he roll the dice, and follow it to Vegas?”? I want to use the following line in a poem: "Will he roll the dice, and follow it to Vegas?" A couple of things to note; firstly, obviously I'm using "roll the dice" in both a figurative/idiomatic sense (as in taking a risk), and in a somewhat more literal sense (painting the image of the person actually following the dice that have been rolled to Las Vegas). That being said, my confusion here is if I can get away with using "follow it" or if I need to say "follow them" .... As I've read online "dice" traditionally refers to more than one die (i.e., the plural of die), but in the modern usage, "dice" can also be used to refer to the singular (i.e., just one die). So, it seems like if I was referring to the image of just one die, then the way I phrased it could be correct. However, the image I'm trying to paint is the one usually associated with the phrase "roll the dice" (i.e., rolling two dice)... So could "follow it" still be considered acceptable in this sense... or would I need to use "follow them"?... And if not referring to the dice themselves, could using, could the "it" be taken to refer to the act of rolling the dice, or would that not work either? Thanks so much in advance!!
eng_Latn
34,562
If some respondents are skipping certain questions because they do not apply to them, how would I address these missing values? If there's a questionnaire with instructions like "if you answered yes to this question, then skip the next 2 questions", how would I address these missing values? Would I ignore them when making a model for this data?
Can I still use logistic regression if some variables do not apply to all the observational units? Some of the variables are questions that are not asked of every single person in the questionnaire, depending on how they answered previous question(s). So would I still be able to use a logistic regression if some of the questions (the independent variables) only measure the responses of SOME of the respondents?
Calculate E[X] from incomplete data? The exercise I'm doing describes the random variable $X$ as follows: | Number of cars | 0 | 1 | 2 | 3 | 4+ | | % of families | 15 | 45 | 25 | 13 | 2 | It then asks me to evaluate $E[X]$. But if there could be cases with 4+ cars, wouldn't that make it impossible to calculate the expectation exactly?
eng_Latn
34,563
What does the "Dice" package look like? Looking at a of an inverting regulator, I found something peculiar: I've never heard of a Pin-Package called "Dice" before, and it doesn't even specify the pin-count. On the of the manufacturer, this package isn't mentioned either. What does "Dice" mean in this context and why would I want to use it?
What is a "DIE" package? In a list of ICs, along with the familiar package names such as QFN32, LQFP48, etc., I've seen a few ICs to be listed as DIE for the package size. I've never seen that description before as an IC package size, and does not list it either. What can it be? I assume it's some kind of chip-scale package, but it does not reveal the silicon size or any other properties, like number of pins, etc.
Is this a grammatically correct line in a poem: “Will he roll the dice, and follow it to Vegas?”? I want to use the following line in a poem: "Will he roll the dice, and follow it to Vegas?" A couple of things to note; firstly, obviously I'm using "roll the dice" in both a figurative/idiomatic sense (as in taking a risk), and in a somewhat more literal sense (painting the image of the person actually following the dice that have been rolled to Las Vegas). That being said, my confusion here is if I can get away with using "follow it" or if I need to say "follow them" .... As I've read online "dice" traditionally refers to more than one die (i.e., the plural of die), but in the modern usage, "dice" can also be used to refer to the singular (i.e., just one die). So, it seems like if I was referring to the image of just one die, then the way I phrased it could be correct. However, the image I'm trying to paint is the one usually associated with the phrase "roll the dice" (i.e., rolling two dice)... So could "follow it" still be considered acceptable in this sense... or would I need to use "follow them"?... And if not referring to the dice themselves, could using, could the "it" be taken to refer to the act of rolling the dice, or would that not work either? Thanks so much in advance!!
eng_Latn
34,564
Probability that Amy gets more heads than Bob? Say that Amy tosses a coin 6 times, and Bob tosses a coin 5 times. What's the probability that Amy gets more heads than Bob does?
A probability theory question about independent coin tosses by two players Say Bob tosses his $n+1$ fair coins and Alice tosses her $n$ fair coins. Lets assume independent coin tosses. Now after all the $2n+1$ coin tosses one wants to know the probability that Bob has gotten more heads than Alice. The way I thought of it is this : if Bob gets $0$ heads then there is no way he can get more heads than Alice. Otherwise the number of heads Bob can get which allows him to win is anything in the set $\{1,2,\dots,n+1\}$. And if Bob gets $x$ heads then the number of heads that Alice can get is anything in the set $\{0,1,2,..,x-1\}$. So\begin{align}P(\text{Bob gets more heads than Alice})&= \sum_{x=1}^{n+1} \sum_{y=0}^{x-1} P( \text{Bob gets x heads }\cap \text{Alice gets y heads }) \\[0.2cm]&= \sum_{x=1}^{n+1} \sum_{y=0}^{x-1} \left(C^{n+1}_x \frac{1}{2}^{x} \frac{1}{2}^{n+1-x}\right)\left( C^n_y \frac{1}{2}^y \frac {1}{2}^{n-y}\right)\\[0.2cm]& = \sum_{x=1}^{n+1} \sum_{y=0}^{x-1} \frac{C^{n+1}_x C^n_y}{2^{2n+1}}\end{align} How does one simplify this? Apparently the answer is $\frac{1}{2}$ by an argument which looks like this, Since Bob tosses one more coin that Alice, it is impossible that they toss both the same number of heads and the same number of tails. So Bob tosses either more heads than Alice or more tails than Alice (but not both). Since the coins are fair, these events are equally likely by symmetry, so both events have probability 1/2.
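(A quick numerical check, added here, of the same symmetry argument for the 6-toss versus 5-toss version asked above.)
p_amy <- dbinom(0:6, size = 6, prob = 0.5)   # Amy: 6 tosses
p_bob <- dbinom(0:5, size = 5, prob = 0.5)   # Bob: 5 tosses
sum(outer(0:6, 0:5, ">") * outer(p_amy, p_bob))   # P(Amy > Bob) = 0.5 (up to floating point)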
Asking feedback/rating for a mobile app? Fig.1: A typical prompt, asking user for feedback/rating in iOS Background Any serious smartphone users must have seen a message similar to Fig.1 above at least once. Variations of such prompt include: "Please give us 5 star ratings if you love this app!", "Rate us :)", "Your rating helps improve this app!", etc. Basically, Fig.1 asks user to leave a (nice) feedback/rating for the app on its AppStore(or equivalents). The Shorter Question When and how often should a mobile app user be asked to leave a feedback/ratings? The Longer Question #1. Which User-statistics? What are the significant user's behavioral statistics that should be based on to generate Fig.1? For example: Number of runs since install/reinstall of the application Time elapsed since current application run Time elapsed since first application run Time elapsed since first application install Certain events (e.g. after certain number of stages are cleared for a game app) #2. When to Ask? If multiple user-statistics are chosen to be calculated to generate Fig.1, specifically how should the user-statistics satisfy the condition of the event? For example: Form a unified equation (e.g. X^2+Y^2=Z^2) with all significant user-statistics partaken fairly to trigger the event. Create (multiple of) "absolute" event triggers (e.g. every 10th app run; elapsing 2 hours since current application run). Both of the two. Generate Fig.1 whichever comes true first. Neither of the two. "There's a better way!" #3. How often? What would be the optimal min/max frequency of Fig.1 generation? By optimal, I mean users are not to be bothered too much, yet developers gain maximum output of great feedbacks from users? #4. "Don't bother me again :(" Should users have an option to opt-out from receiving Fig.1 at all? #5. Universal User-statistics for All Mobile Apps? Should apps under different categories be based on different user-statistics? There are significant differences in size and type of user-statistics in apps under different categories. For example, a "Flashlight" app is not typically run for one hour straight, while a music/movie streaming app may be run for over one hour. #6. "This app does not need more feedback." Should every app of all kinds of user base ask this message? If any, should any mobile app stop generating Fig.1 when target conditions are met (e.g. downloads count, active user count, feedback count)? #7. Feedback/Rating Before Deletion Should users be asked to leave a feedback before deleting as seen in Fig.2 below? Users are more likely to leave very critical review on app deletion, but this kind of feedback will be very helpful for the developers to strengthen their app's weakness for next version release.                                                                                                 Fig.2: Asking user for rating before app deletion
eng_Latn
34,565
Server Sizing Methodology
Can you help me with my capacity planning?
Is "TABLESAMPLE BERNOULLI(1)" not very random at all?
eng_Latn
34,566
Expected number of triangles with random lines
Probability of area being greater than 0.5 with random lines
Proof about Graph with no triangle
eng_Latn
34,567
What does the Rule of Two actually mean? Does it mean "A master has only one apprentice, if he becomes a master we part and search for new apprentices each" or always "enter random number here" masters with apprentice, or even "There can only be two" in Highlander fashion?
Does the Rule of Two Serve any Actual Purpose? I know Darth Bane came up with the Rule of Two to ensure the survival of the Sith, but, in reality (or in the Star Wars reality), does it (or any Sith rule) serve a true purpose? And why does each successive Sith bother with it? Due to the nature of the Sith, Sith Lords tend toward arrogance and self-importance. Both the Sith master and Sith apprentice seem to regularly break the rule by training others. Deceit and personal interest over loyalty is a way of life for the Sith. What is to keep any Sith from following the Rule of Two (or any rule, for that matter) if it is against their own self interest? If a Sith Master can stay in power longer by training a 2nd apprentice and pitting the two against each other, is he really going to worry about the Rule of Two more than his own plans and survival? The Rule of Two also assumes that every Sith Master would place the survival of the Sith over his own survival, as opposed to attempting to ensure his own immortality in whatever way he could (like Plagueis). When you have a group that, by default, is arrogant and self-serving, as well as deceptive, why would they want the Sith to survive themselves? I can see how the Rule of Two, and any other Sith beliefs would be possible guidelines, but it terms of the nature of ambitious people (and the Sith are ambitious), it's hard to believe that each new Sith Master would follow this rule faithfully. So does this rule, or any Sith rule, actually work? Or is it more of a suggestion that is broken when convenient?
Magic 8 Ball Problem This problem is probably simple enough to have an analogous problem, I just don't know the name so I'm going to describe it and hopefully somebody can point me in the right direction. The problem is this: estimating the number of sides of a . Let's say you perform the following process: Shake the "Magic 8 Ball" and mark down the result If the result has not previously been seen, add it to the set of possible results and label the trial "N" (for "new") If the result has been previously seen, label the trial "O" (for "old") Repeat all steps I imagine the following kind of sequence occurring: NNNNONNOONOONOOONOOOONONNOONOOONNOONOOOOOONOOONOOOOOOONOOOOOOOONOOOOOOOO... Now imagine we don't know there are twenty sides to a Magic 8 Ball. Or imagine that you have a Magic 8 Ball with 1000 sides. As the number of "new" trials approaches the actual number of sides, we'll get increasingly more "old" trials showing up in the mix. Once we've seen all of the possible results, we'll always get "old" results. But we're never 100% sure we've seen every single possible result. So here are the questions I'm interested in: As we proceed with trials, can we estimate the total number of "sides" of the Magic 8 ball based on the number of "new" and "old" trials up to this point? Can we calculate a probability that our current estimate is correct, or a probability that some bounded estimate is correct?
eng_Latn
34,568
The Prisoner's Release Probability Problem The release of two out of three prisoners has been announced. but their identity is kept secret. One of the prisoners considers asking a friendly guard to tell him who is the prisoner other than himself that will be released, but hesitates based on the following rationale: at the prisoner's present state of knowledge, the probability of being released is $\frac{2}{3}$, but after he knows the answer, the probability of being released will become $\frac{1}{2}$, since there will be two prisoners (including himself) whose fate is unknown and exactly one of the two will be released. What is wrong with this line of reasoning?
Understanding a probability paradox Three prisoners are informed by their jailer that one of them has been chosen at random to be executed and the other two are to be freed. Prisoner A asks the jailer to tell him privately which of his fellow prisoners will be set free, claiming that there would be no harm in divulging this information because he already knows that at least one of the two will go free. The jailer refuses to answer the question, pointing out that if A knew which of his fellow prisoners were to be set free, then his own probability of being executed would rise from 1/3 to 1/2, because he would then be one of two prisoners. What do you think of the jailer's reasoning? If the jailer refuses to say anything, then the probability that prisoner $A$ is executed is $\frac{1}{3}$. If the jailer says to prisoner $A$ that prisoner $B$ will walk free, then $2$ prisoners remain to be considered, $A$ and $C$. One dies, one does not; heads or tails, essentially. $\frac{1}{2}$ ought to be the conditional probability that $A$ dies given that $B$ walks free, no? Apparently not, though; allegedly the correct answer is still $\frac{1}{3}$. Even my attempt to calculate the correct answer yielded the result $\frac{1}{2}$. Let $A_D$ and $C_D$ respectively denote the events of $A$ and $C$ dying. Let $B_F$ denote the event that $B$ walks free. Assume that the jailer tells prisoner $A$ that prisoner $B$ will walk free. Here's my attempt. $$P(A_D\mid B_F)=\frac{P(A_D\cap B_F)}{P(B_F)}=\frac{P(A_D\cap B_F)}{P((B_F\cap A_D)\cup (B_F\cap C_D))}=\frac{P(A_D)P(B_F\mid A_D)}{P(A_D)P(B_F\mid A_D)+P(C_D)P(B_F\mid C_D)}=\frac{\frac{1}{3}\times 1}{\frac{1}{3}\times 1+\frac{1}{3}\times 1}=\frac{1}{2}$$ What am I doing wrong? Edit 1: Intuitively I am still troubled, but I understand now that $B_F$ may occur even though the jailer does not necessarily say $B$. Edit 2: I suppose that it makes some sense if the fates of the prisoners had already been decided before prisoner $A$ asked the jailer the question.
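(A small simulation, added as a sketch; the key modelling assumption is that the jailer picks uniformly between B and C when both could be named, i.e. when A is the one to be executed.)
set.seed(42)
n <- 100000
executed <- sample(c("A", "B", "C"), n, replace = TRUE)
named <- ifelse(executed == "B", "C",
         ifelse(executed == "C", "B",
                sample(c("B", "C"), n, replace = TRUE)))  # A executed: name B or C at random
mean(executed[named == "B"] == "A")   # close to 1/3, not 1/2
Conditioning on the jailer's statement (which depends on his choice rule) is not the same as conditioning on the event that B goes free, which is where the calculation above arrives at 1/2.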
A probability theory question about independent coin tosses by two players Say Bob tosses his $n+1$ fair coins and Alice tosses her $n$ fair coins. Lets assume independent coin tosses. Now after all the $2n+1$ coin tosses one wants to know the probability that Bob has gotten more heads than Alice. The way I thought of it is this : if Bob gets $0$ heads then there is no way he can get more heads than Alice. Otherwise the number of heads Bob can get which allows him to win is anything in the set $\{1,2,\dots,n+1\}$. And if Bob gets $x$ heads then the number of heads that Alice can get is anything in the set $\{0,1,2,..,x-1\}$. So\begin{align}P(\text{Bob gets more heads than Alice})&= \sum_{x=1}^{n+1} \sum_{y=0}^{x-1} P( \text{Bob gets x heads }\cap \text{Alice gets y heads }) \\[0.2cm]&= \sum_{x=1}^{n+1} \sum_{y=0}^{x-1} \left(C^{n+1}_x \frac{1}{2}^{x} \frac{1}{2}^{n+1-x}\right)\left( C^n_y \frac{1}{2}^y \frac {1}{2}^{n-y}\right)\\[0.2cm]& = \sum_{x=1}^{n+1} \sum_{y=0}^{x-1} \frac{C^{n+1}_x C^n_y}{2^{2n+1}}\end{align} How does one simplify this? Apparently the answer is $\frac{1}{2}$ by an argument which looks like this, Since Bob tosses one more coin that Alice, it is impossible that they toss both the same number of heads and the same number of tails. So Bob tosses either more heads than Alice or more tails than Alice (but not both). Since the coins are fair, these events are equally likely by symmetry, so both events have probability 1/2.
eng_Latn
34,569
Intuition about a coupon problem where we ask for the distribution of the unique coupons when the number of draws is fixed Alternative viewpoint of the coupon collector's problem: In the coupon collector's problem we draw from a collection of $n$ coupons, with replacement, and ask how many draws $K$ it takes to collect all of the coupons, or a subset of size $l$ (i.e., draw $l$ of them at least once). Thus $K$, the number of draws, is the random variable for which a distribution is derived. In this problem we instead fix the number of draws, $k$, and ask how many unique coupons, $L$, we have selected in those draws. Thus $L$, the number of unique coupons, is the random variable. Relation between the two: We can relate these two quantities via the cumulative distribution functions. The probability that we obtain $l$ or fewer coupons, given $k$ draws, is equal to the probability that we need more than $k$ draws to obtain $l+1$ coupons: $$P(L \leq l|k) = 1 - P(K \leq k| l+1)$$ Question: So basically this question can be answered indirectly with the answer to the coupon collector's problem. That problem has an intuitive and straightforward derivation, since the number of draws needed to collect $l$ coupons is a sum of $l$ geometrically distributed variables. Question: Can we also give a similarly direct derivation of $P(L=l|k)$, without using the equation $P(L \leq l|k) = 1 - P(K \leq k| l+1)$?
How often do you have to roll a 6-sided die to obtain every number at least once? I've just played a game with my kids that basically boils down to: whoever rolls every number at least once on a 6-sided die wins. I won, eventually, and the others finished 1-2 turns later. Now I'm wondering: what is the expectation of the length of the game? I know that the expectation of the number of rolls till you hit a specific number is $\sum_{n=1}^\infty n\frac{1}{6}(\frac{5}{6})^{n-1}=6$. However, I have two questions: How many times to you have to roll a six-sided die until you get every number at least once? Among four independent trials (i.e. with four players), what is the expectation of the maximum number of rolls needed? [note: it's maximum, not minimum, because at their age, it's more about finishing than about getting there first for my kids] I can simulate the result, but I wonder how I would go about calculating it analytically. Here's a Monte Carlo simulation in Matlab mx=zeros(1000000,1); for i=1:1000000, %# assume it's never going to take us >100 rolls r=randi(6,100,1); %# since R2013a, unique returns the first occurrence %# for earlier versions, take the minimum of x %# and subtract it from the total array length [~,x]=unique(r); mx(i,1)=max(x); end %# make sure we haven't violated an assumption assert(numel(x)==6) %# find the expected value for the coupon collector problem expectationForOneRun = mean(mx) %# find the expected number of rolls as a maximum of four independent players maxExpectationForFourRuns = mean( max( reshape( mx, 4, []), [], 1) ) expectationForOneRun = 14.7014 (SEM 0.006) maxExpectationForFourRuns = 21.4815 (SEM 0.01)
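(Added for reference: the exact expectation behind the simulated 14.7014 above is the standard coupon-collector sum.) $$\mathbb{E}[T]=\sum_{k=1}^{6}\frac{6}{k}=6\left(1+\tfrac{1}{2}+\tfrac{1}{3}+\tfrac{1}{4}+\tfrac{1}{5}+\tfrac{1}{6}\right)=14.7,$$ matching expectationForOneRun. The maximum over four independent players is most easily handled via the distribution function or by simulation, as done above.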
Proving AM-GM with induction I am trying to prove AM-GM with the following steps: Prove that AM-GM holds for two variables Prove that if AM-GM holds for $k$ variables then it holds for $2k$ variables Prove that if AM-GM holds for $k$ variables then it holds for $k-1$ variables If I prove all of these, I will have proved AM-GM. I have proved (1) and (2), but I am still struggling with (3). Proof for (1) By the Trivial Inequality, we know that the first statement is true. Therefore \begin{align} (a-b)^2 \ge 0 \implies\\ a^2-2ab+b^2 \ge 0 \implies \\ a^2+2ab+b^2 \ge 4ab \implies\\ a+b \ge 2\sqrt{ab} \implies\\ \frac{a+b}{2} \ge \sqrt{ab}\hspace{3mm}\blacksquare\\ \end{align} Proof for (2) Let our numbers be $a_1, a_2, a_3, \ldots, a_{2k}$. Denote $$x = \frac{\displaystyle\sum_{i=1}^{k}a_i}{k}, y = \frac{\displaystyle\sum_{i=k+1}^{2k}a_i}{k}$$ We know that by AM-GM for 2 variables, that $$\frac{x+y}{2}\ge \sqrt{xy}$$ By our definition of $x, y$ this means that $$\frac{x+y}{2} = \frac{\frac{\displaystyle\sum_{i=1}^{2k}a_i}{k}}{2}=\frac{a_1+a_2+a_3+\ldots+a_{2k}}{2k}$$ Again by definition, $$\sqrt{xy} = \sqrt{\left(\frac{x_1+x_2+\ldots+x_k}{k}\right)\left(\frac{x_{k+1}+x_{k+2}+\ldots+x_{2k}}{k}\right)}$$ By AM-GM for $k$ variables, $$\sqrt{\left(\frac{x_1+x_2+\ldots+x_k}{k}\right)\left(\frac{x_{k+1}+x_{k+2}+\ldots+x_{2k}}{k}\right)} \ge \sqrt{\sqrt[k]{x_1x_2\ldots x_k}\sqrt[k]{x_{k+1}x_{k+2}\ldots x_{2k}}}$$ This is simply equal to $$\sqrt[2k]{x_1x_2x_3\ldots x_{2k}}\hspace{3mm}\blacksquare$$ However, I have no idea how to proceed on (3). I tried using the definition of AM and GM but did not come up with anything useful. Feel free to point out any errors or flaws on my proofs for (1) and (2).
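(Added hint for the missing step (3); this is one standard trick, not the only route.) Assume AM-GM for $k$ variables and let $a_1,\dots,a_{k-1}>0$ with $A=\frac{a_1+\cdots+a_{k-1}}{k-1}$. Applying the $k$-variable inequality to $a_1,\dots,a_{k-1},A$ gives $$A=\frac{a_1+\cdots+a_{k-1}+A}{k}\ \ge\ \sqrt[k]{a_1\cdots a_{k-1}\,A},$$ so $A^{k}\ge a_1\cdots a_{k-1}A$, hence $A^{k-1}\ge a_1\cdots a_{k-1}$, which is AM-GM for $k-1$ variables.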
eng_Latn
34,570
Imagine that I've got a big $N \times N$ boolean matrix (all entries are $0$ or $1$). At the beginning, the matrix is full of zeros. Then, I execute the following algorithm Matrix M empty of type N x N while (there is a 0 in M) { Position pos; do { pos = random position in M } while (pos is 1); // Here pos is 0 mark pos as 1; } I want the cost of the previous algorithm (suppose that "there is a $0$ in $M$" has no cost, and that "random position in M" has constant, i.e. $\Theta(1)$, cost). My intuition says that it is $\Theta(\cdot)$ of $$ 1 + \textrm{Geom}\left(\frac{N^2 -1}{N^2}\right) + \textrm{Geom}\left(\frac{N^2 -2}{N^2}\right) + \cdots + \textrm{Geom}\left(\frac{1}{N^2}\right) $$
What is the average number of times it would it take to roll a fair 6-sided die and get all numbers on the die? The order in which the numbers appear does not matter. I had this questions explained to me by a professor (not math professor), but it was not clear in the explanation. We were given the answer $(1-(\frac56)^n)^6 = .5$ or $n = 12.152$ Can someone please explain this to me, possibly with a link to a general topic?
Prove that there are no integers $a,b \gt 2$ such that $a^2{\mid}(b^3 + 1)$ and $b^2{\mid}(a^3 + 1)$.
eng_Latn
34,571
I have been thinking about one problem, which is not understandable to me. Consider the following: I have a deck of cards (52 cards). I pick card after card (no replacement), but I stop when the current card belongs to the same family (Aces, Kings, etc.) as the card picked before. The number of observations is an event, so we can say in our sample space there are 50 events (2-52), BUT we should add another, which is that nothing happens at all. This last part gives me a huge headache and I don't know how to consider even start thinking about a problem and create a formula for this problem, such as find probability for the formula for an event. I would really really appreciate it if someone could help me understand this problem. Thank you. edit: the current card must match the card picked before, not any card picked before.
What is the probability that a shuffled standard deck of 52 cards has no two cards of the same rank together ? I am unable to get a handle on this problem, and wonder whether there is an analytical solution ?
The new Top-Bar does not show reputation changes from Area 51.
eng_Latn
34,572
When should there be a roll for an Ability check? Should a player make a roll for things like: He is travelling and wants to play a flute Wanting to carve a goblin's teeth as a trophy Wanting to play on a drum that he found Wanting to split through of a captured enemy or can they be considered as automatic success?
When should I make my players actually roll for stuff? I've tried to focus my group heavily on Role Playing, perhaps too well given some of my recent problems, which are Orthagonal, and linked mostly to provide historical context. A lot of the answers to these questions revolve around "have off the rail's multiple angles" and "don't make people roll for information they must have". That seems well understood to me at this juncture, in fact I have been doing just that, and I think it's causing some other problems. I have only rarely made people roll for anything (combat aside) anymore, unless it's absolutely non critical. I try to get the players to Role Play their investigations and give them information when they look in the right places, and their are multiple right places. This results in skills, especially social/mental, being completely inessential, and players not gaining important information because they aren't poking around the right area at all, or asking people who aren't interested in talking to them because they're currently a bunch of random Joe's. The players aren't stuck at this time, I'm just trying to get better and not have functionally dead scenes, that don't rely on the players to play "what is the GM thinking" nor do I want to just give them every answer. The reality is, I'm not sure when to make them roll to find things out. what criteria? or other advice, can I use to decide that the players are better off making a Roll to move forward than to purely use Role Playing? update This question focuses on the Social and to a lesser degree Mental aspects, as it's usually pretty easy to tell when a Physical Roll is needed, even out of combat. It's harder to tell when to make people roll for things they can talk their way through. I'm playing World of Darkness 2nd Edition, but answers not specific to that are welcome.
How do I model the fighter's Great Weapon Fighting fighting style in Anydice? I was trying to create an AnyDice function to model the Great Weapon Fighting fighting style (which lets you reroll 1s and 2s), but I couldn't get it to work on any arbitrary dice. I've found this one: function: reroll R:n under N:n { if R < N { result: d12 } else {result: R} } output [reroll 1d12 under 3] named "greataxe weapon fighting" And it works fine. But I don't know how to make the function generic so i don't need to change the d12 every time i want a different dice to reroll. I've tried function: reroll R:n under N:n { if R < N { result: d{1..R} } else {result: R} } output [reroll 1d12 under 3] named "greataxe weapon fighting" but it is not giving the right probabilities. Maybe if I could fetch the die size inside the function...
eng_Latn
34,573
The way my DM wants to try it is to roll 3d6 and reroll any 1s or 2s once. If you roll a 2 and the reroll ends up being a 1, you have to take the 1.
I was trying to create an AnyDice function to model the Great Weapon Fighting fighting style (which lets you reroll 1s and 2s), but I couldn't get it to work on any arbitrary dice. I've found this one: function: reroll R:n under N:n { if R < N { result: d12 } else {result: R} } output [reroll 1d12 under 3] named "greataxe weapon fighting" And it works fine. But I don't know how to make the function generic so i don't need to change the d12 every time i want a different dice to reroll. I've tried function: reroll R:n under N:n { if R < N { result: d{1..R} } else {result: R} } output [reroll 1d12 under 3] named "greataxe weapon fighting" but it is not giving the right probabilities. Maybe if I could fetch the die size inside the function...
When a divination wizard character uses their Portent ability: Starting at 2nd level when you choose this school, glimpses of the future begin to press in on your awareness. When you finish a long rest, roll two d20s and record the numbers rolled. You can replace any attack roll, saving throw, or ability check made by you or a creature that you can see with one of these foretelling rolls. You must choose to do so before the roll, and you can replace a roll in this way only once per turn. ... to replace a roll, can a player with the Lucky Feat: Whenever you make an attack roll. an ability check, or a saving throw, you can spend one luck point to roll an additional d20. You can choose to spend one of your luck points after you roll the die, but before the outcome is determined. You choose which of the d20s is used for the attack roll, ability check, or saving throw. You can also spend one luck point when an attack roll is made against you. Roll a d20, and then choose whether the attack uses the attackers roll or yours. If more than one creature spends a luck point to influence the outcome of a roll, the points cancel each other out; no additional dice are rolled. ... use the Luck Roll to replace the result? Or does the Portent trump all rolls and simply give you the final outcome? This question spun off from on . Some relevant points: . Mike Mearls tends to believe that luck can triumph in some cases, but admits this is , and further seems to conflate the question with . I would suggest reading the linked discussion before formulating an answer. There's some good thought there.
eng_Latn
34,574
How difficult is it to collect the full set of toys from McDonald's Happy Meal?
Probability associated with experiencing all outcomes
What the #$@&%*! is that called?
eng_Latn
34,575
When to Stop Rolling and Bail
Would you ever stop rolling the die?
WITH ROLLUP WHERE x IS NULL
eng_Latn
34,576
Probability last passenger on plane sits in correct seat when the first two passengers choose seats randomly There are 100 passengers with assigned seats. The first two people to board choose a seat uniformly at random. The remaining passengers sit in their assigned seat if it is still open; if it is not open, they choose an open seat at random. What is the probability that the last person sits in his assigned seat? The one-person variant is posted in a related question, where the solution was found to be $0.5$. I am struggling with adapting that approach to the current problem. My intuition is that in general, the $i$-th person, if their seat was taken by someone who boarded earlier, has exactly $100 - i + 1$ choices. So if there was a person with exactly 2 choices, it must be that (a) it is the 99th person and (b) the 99th person's seat has been taken. (c) The first and second persons' seats must not both be occupied already, because if they were, the 99th person's seat would be open. (i) If neither of those two seats is occupied, then the 2 choices for the 99th person must be those two seats, and the seat assigned to the 100th person must have been taken. (ii) If exactly one of the first two persons' assigned seats is unoccupied, then the 99th person has that seat as one choice and the 100th person's seat as the other. So I think maybe the key to the problem is figuring out the probability of (i) and (ii) occurring?
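(Not an answer, just a way to get a numerical target for the two-random-passengers variant; an R sketch of my own, with made-up names, where seat 100 belongs to the last passenger.)
set.seed(7)
sim_one <- function() {
  open <- rep(TRUE, 100)                      # seat i belongs to passenger i
  for (p in 1:100) {
    if (p <= 2 || !open[p]) {                 # first two, or own seat taken: choose at random
      pool <- which(open)
      s <- pool[sample.int(length(pool), 1)]  # safe even when only one seat remains
    } else {
      s <- p                                  # otherwise take the assigned seat
    }
    open[s] <- FALSE
  }
  s == 100                                    # TRUE when the last passenger got seat 100
}
mean(replicate(20000, sim_one()))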
Taking Seats on a Plane This is a neat little problem that I was discussing today with my lab group out at lunch. Not particularly difficult but interesting implications nonetheless Imagine there are a 100 people in line to board a plane that seats 100. The first person in line realizes he lost his boarding pass so when he boards he decides to take a random seat instead. Every person that boards the plane after him will either take their "proper" seat, or if that seat is taken, a random seat instead. Question: What is the probability that the last person that boards will end up in his/her proper seat. Moreover, and this is the part I'm still pondering about. Can you think of a physical system that would follow this combinatorial statistics? Maybe a spin wave function in a crystal etc...
How do I get my players to form a PC party without just forcing them to? I believe that it is generally a good thing that all the PCs travel together, as a "party". This way the GM has to manage one environment – not, say, five. The problem is how the GM can ensure such a relationship between the characters. One simple and really great solution is to require the players to design their characters so that they know each other. This has the added advantage of cross-backstories – as the players try to adjust their stories to fit together, they get and create hooks for more narrative! However, I am now experimenting with giving the players complete freedom at character creation and later meeting in-game. No luck so far. It is not easy, as trusting someone with your life (what happens during combat) is not a trivial thing. Furthermore, people generally like others with the same behaviour/interests/skin colour. Fantasy is all about diversity – you do not character-generate a strong man with a sword, no, you generate a half-ogre with an axe never before seen! How can the GM get players, with diverse characters who are strangers to each other, decide to have their PCs act as a coherent group. I absolutely want to avoid asking my players to just solve this through metagaming ("Hey Joe, think of a way to get together with those, I can't story-tell two separate groups at the same time!"). I'm not having the stereotypical problem where a bunch of PCs meet for the first time and are automatically so paranoid of strangers that they start trying to kill each other. My players have just created a bunch of characters who have their own legitimate interests, and are faithfully following those in completely different directions that don't result in a classic "party". I put the PCs in great peril together and everyone escaped by their own devices, but now I have to figure out get the guy who ran for the hills back together with the couple that entered the city. And, I want to do it by resorting only to reasonable in-game events, not out-of-game metagame suggestions.
eng_Latn
34,577
predicting the color of the next card drawn [edited based on comment and answer offered] Note: This question is different from a related question, as that one asks for the payoff of a slightly different game; this question asks whether there is a strategy to win. One comment gave as a strategy, "Wait until the last card is to be drawn, then predict it with 100% accuracy". A similar strategy would be to wait until more cards of one color have been overturned, then predict that the next card will be of the other color. But is there an optimal or best number for the value of "ahead", the difference in the counts of the two colors dealt so far? This problem feels similar in some ways to the Secretary Problem, but I can't provide reasoning for that assertion. -- Cards are drawn one at a time from a shuffled deck. At each draw, you can either pass (draw the card with no prediction), or predict the color of the next card. At the start, you can predict red with a 50% chance of success. What is a strategy to make a better prediction? My strategy is to watch some draws, wait till one color is "ahead" by a certain amount, then predict the other color on the next draw. That entails knowing how far "ahead" I should allow one color to get before making the prediction. I ran a simulation to discover what a reasonable amount of "ahead" to wait for would be, assigning -1 to red and +1 to black and taking a cumulative sum. My questions are: How do I determine the right value of "ahead" to make the prediction? In R, how do I find the location and value of that value in the cum vector? set.seed(100) sim <- replicate(1000000, {cum <- cumsum(sample(c(-1,1), size=52, replace=TRUE)); max(abs(cum))}) summary(sim) > summary(sim) Min. 1st Qu. Median Mean 3rd Qu. Max. 2.000 6.000 8.000 8.567 11.000 34.000
guess the color of the next card There are 26 red cards and 26 black cards, randomly shuffled and facing down on the table. The host turns up the cards one at a time. You can stop the game at any time (even at the beginning of the game). Once you stop the game, the next card is turned up: if it is red, you get $1; otherwise you pay the host one dollar. What is the payoff of this game?
How many ways are there to select a pair of cards from a standard deck of cards such that one of the cards is red and the other one is black? Suppose that a pair of cards is selected from a standard deck of 52 cards. Since there are 26 black cards and 26 red cards, I'm wondering if my thinking is right: $\frac{26!}{25!} \cdot \frac{26!}{25!}$, so it would be $26 \times 26 = 676$.
eng_Latn
34,578
A woman goes to the funeral of her relative. She sees a man there she is attracted to. The next day she goes to her sister's house and kills her. Why?... NOTE (there is no wrong answer BUT there is an answer that tells you something very interesting about yourself)... I'll let you know that when I pick the best one.
Because her sister is her twin and she doesn't want the man being attracted to her twin? Haha... I don't know if this is right, just a guess
Your logic is flawed. Say you always pick card #2. The first time, you'll have a 14.2857...% chance (that is, one in seven) of winning the car.\n\nThe second time, with a new set of cards, your chances that card #2 is the winner are again 14.2857...%, and so it will be every new time, because your chances are always 1 out of 7. You are picking one card out of seven cards that you are given.\n\nYour reasoning would be right if you could be assured that the car card will always be a different one (that, for example, it will be in the first case #4, then #6, and so on). In that case, after the 7th guess (if not earlier), you'd get the car. But the way you explained the game, nothing prevents the car from being always, for example, at card #5. If that is the case, always trying card #2 will make you fail every time.
eng_Latn
34,579
how to play bonus video poker correctly
Bonus Poker. Introduction. This page shows my strategy for 8/5 Bonus Poker. With optimal strategy, the expected return of 8/5 Bonus Poker is 99.17%. Often, 8/5 Bonus Poker is the best available video poker game in a given casino, so it is a valuable strategy to know. The following table shows the probability and return for each hand.
Uploaded 1 May 2009. Watch more How to Play Card Games videos: http://www.howcast.com/videos/274-How... Impress your poker buddies by learning how to shuffle and deal like you do it for a living. Step 1: Split the deck in half. Learn how to do the classic shuffle, which is called the riffle shuffle. Split the deck into approximate halves and put them face down horizontally on a table. Step 2: Position your hands.
eng_Latn
34,580
Number of restricted ways to two-color a necklace There are $n$ beads placed on a circle, $n\ge 3$. They are numbered in random order as viewed clockwise. Beads for which the number of the previous bead is less than the number of the next bead are painted white, and the others black. Two colorations that can be made equal by rotation are considered identical. How many different colorations can occur?
Black and white beads on a circle There are $n$ beads placed on a circle, $n\ge 3$. They are numbered in random order as viewed clockwise. Beads for which the number of the previous bead is less than the number of the next bead are painted white, and the others black. Two colourations that can be made equal by rotation are considered identical. How many different colourations can occur? I've written a program, and for $n=3,\dots,11$ I got the answers $2, 1, 6, 7, 18, 25, 58, 93, 186$
Experimental data for asymmetric Newton cradle Using a "successive impact model" (as if each ball were separated from the other ones), I produced the following animations: You can see any combination of balls with masses of 1 or 2 (left) or 1 and 4 (right). Unfortunately, I do not have any Newton cradle to make some experiments with, and I'd like to compare my results with observations. I contacted the author of , which is the most instructive one I found about asymmetric Newton cradles. In particular, he writes that the "OoO" configuration behaves as the "ooo" one, which is not what I simulate. But the author is not sure who is wrong, because he did the experiments a long time ago. I am especially interested in asymmetric Newton cradles with simultaneous impacts: left and right balls colliding at the same time. Please let me know if you have any idea on where to find such experimental data. Edit Just a few general remarks to avoid extending the comments. Energy and momentum are conserved. However, they are sufficient to ensure uniqueness of the post-impact velocities only when there are 2 balls. The successive impact model precisely consists in propagating the impact ball after ball, and therefore leads to a unique solution (which conserves momentum and energy globally). From the waves point of view, changing the mass of a ball can be done by increasing the diameter, or the density (or a combination of both). This simple model cannot account for such subtleties of course, but maybe it does not matter when the balls are "small enough". Even if I think it's quite basic, following WetSavannaAnimalakaRodVance's comment, I am ready to hand out my Mathematica code to anybody who wants it. I'll check for good ways to share it.
eng_Latn
34,581
problem statement 5 1 . 2 problem statement .
Hyperparameters: optimize, or integrate out?
Stable Sampling Equilibrium in Common Pool Resource Games
nno_Latn
34,582
How to calculate the optimal number of bins for severely skewed data
Calculating the optimal number of bins for severely skewed data
Calculating the optimal number of bins for severely skewed data
eng_Latn
34,583
How can I find the following Poisson distribution when I have a non-integer value
Poisson distribution (solution check)
Integer partitions in all orderings
eng_Latn
34,584
Show the expected number of packets to buy to collect a whole set of minifigures
Calculating the expected number of packets.
Subset of $C[0,1]$ is nowhere dense
eng_Latn
34,585
How to determine this sampling distribution?
You must have left something out. I haven't a clue where you got that answer; it is not what I got based on the information given.
Does your school give you access to SPSS?\n\nCheck with one of the librarians at your campus library....they can really help you out with this type of stuff.\n\nGood Luck!
eng_Latn
34,586
How to find a 95% confidence interval for sample proportion p (sample size = 5) without normal approximation?
For n = 5? Seriously?\n\nIt is not clear what you are asking.\n\nSuppose that:\n\n1. Your model is of a sequence of independent Bernoulli trials each of which succeeds with probability p.\n\n2. Your null hypothesis H0 is p = 0.3\n\n3. Your alternative hypothesis H1 is p is not 0.3.\n\n4. You plan to observe 5 trials with the intent of accepting or rejecting the null hypothesis with 95% confidence.\n\nThen from p + q = 1 (i.e., q = 1 - p) and 1^5 = (p+q)^5 = p^5 + 5p^4q + 10p^3q^2 + 10p^2q^3 + 5pq^4 + q^5 you know that the probabilities are\n\n5 successes in 5 trials: p^5 = 0.3^5 = 0.00243\n4 successes in 5 trials: 0.02835\n3 successes: 0.13230\n2 successes: 0.30870\n1 success: 0.36015\n0 successes: 0.16807\n\ni.e., 5 or 4 successes occur <= 5% of the time, so you accept H0 on 3 or fewer successes and reject H0 on 4 or 5 successes.\n\nAnother interpretation of your question:\n\nSuppose your model is of independent identical Bernoulli trials each with unknown probability p of success and you would like to estimate p from observing the outcomes of 5 independent trials. Given k out of 5 successes as the observed outcome, for which p can we say that "k" or "k or fewer" or "k or more" successes out of 5 has probability <= 5% of being observed (those are three different questions, by the way)? For each of k = 0, 1, ..., 5 you plot the probability of that outcome given p and exclude those p for which the plot drops to 5% or less.
There are several ways to go about this:\n\n1. You can hire a market research professional\n2. Or do it yourself, which if you are not a market research professional may have a significant learning curve\n\nYou did not indicate whether you have the budget to hire someone (usually cost about a thousand dollars or way more) or you will do it yourself.\n\nYou may want to do a focus group discussion, where an experienced facilitator can talk with about 10-15 of the target audience you want. Questions could include where do they purchase their dog food, what are the factors they consider when deciding what treats to buy their dogs, how much they typically spend on pet food, and others. Focus groups are candid, no holds barred discussions and a good facilitator would know how to control the free wheeling discussion.\n\nYou can also do a survey (say, about 100 respondents to give you a good sample). You have to work carefully in crafting the questions, striking a balance between closed questions and open ended questions without making the questionnaire too long affecting your response rate. The question of course is where to send your questionnaire, and this is where you need help.
eng_Latn
34,587
how to plot a binomial
How to plot a binomial or Poisson distribution. 1. Download the Prism file. 2. To modify this file, change the value of lambda (for Poisson) or the probability, n, and cutoff (Binomial) in the Info sheet. Enter new values there, and the graph updates. 3. If you want to recreate graphs like these, keep in mind these points:
1. Put the binomial terms in order. While 4 + x is a valid binomial, binomials, like other polynomials, are normally written with the highest-order term first. Thus, you would reorder 16 + 4x as 4x + 16 and 27 + x^3 as x^3 + 27.
eng_Latn
34,588
Probability to pick last four pages
Chance of selecting the last k pages in correct order from a set of n pages
Chance of selecting the last k pages in correct order from a set of n pages
eng_Latn
34,589
Expected number of tosses until winner in game with two players with two different coins
How to solve this probability problem analytically (instead of using simulation)? Probability that Bill wins the game if he goes first
Keras - no prediction probability for multiple output models?
eng_Latn
34,590
Generating Functions to design nonstandard dice
More Generating Functions problems
How to unprotect GeneratingFunction
eng_Latn
34,591
calculating the mean for each subj in r
calculate the mean of trials for each subject in R
R.Java not generated
eng_Latn
34,592
Probability that no balls go in the right boxes
Number of permutations of $n$ elements where no number $i$ is in position $i$
Probability that $k$ boxes contain exactly $1$ ball
eng_Latn
34,593
Recurrence problem with a game of probability
Gambler's ruin and coin toss
Did I correctly derive this recurrence equation formula
eng_Latn
34,594
Calculate Multiple Dice Probabilities
Lots of people think that if you roll three six-sided dice, you have an equal chance of rolling a three as of rolling a ten. This is not the case, however, and this article will show you how to calculate the mean and standard deviation of a dice pool.
Tired of the regular Nuzlocke? Looking for a challenge? Then try winning the game with a single Pokémon! Same thing, with one difference: you only have one chance, and if your Pokémon dies, it's game over!
eng_Latn
34,595
What should n be so that the probability is less than 0.5
Understanding the "Birthday Problem"
Probability of area being greater than 0.5 with random lines
eng_Latn
34,596
Probability of drawing balls with different colours
Probability distribution in the coupon collector's problem
What is the probability that a man likes pink?
eng_Latn
34,597
If I'm sampling from N items, how many samples are needed to get all N items at least once?
Expected time to roll all 1 through 6 on a die
Direct proof that nilpotent matrix has zero trace
eng_Latn
34,598
I need to make a chart calculating the probability of rolling two heads out of seven flips. I have lined up 0, 1, 2, 3, 4, 5, 6, 7 and next to it I need to put the probability... I don't know how to calculate it... please help me!
In order to measure probabilities, mathematicians have devised the following formula for finding the probability of an event.\n\nProbability of an event: P(A) = (the number of ways event A can occur) / (the total number of possible outcomes).\n\nThe probability of event A is the number of ways event A can occur divided by the total number of possible outcomes.
I would suggest purchasing books like Forgotten Calculus and Forgotten Algebra by Barbara Lee. I used those for Calc I, and for Calc II I used Schaum's 3000 Solved Problems in Calculus and the site http://tutorial.math.lamar.edu/ for when I wanted instructions or a quick understanding of algebra, trigonometry and calculus. There are things called cheat sheets, which will give you the most common rules. Don't forget, you need a really good understanding of algebra before you can tackle calculus. Trigonometry is not too bad. Good luck.
eng_Latn
34,599