Columns: query (string, 1 to 13.4k chars) | pos (string, 1 to 61k chars) | neg (string, 1 to 63.9k chars) | query_lang (string, 147 classes) | __index_level_0__ (int64, 0 to 3.11M)
---|---|---|---|---
flexible random-effects models using bayesian semi-parametric models: applications to institutional comparisons.
|
Bayesian Density Estimation and Inference Using Mixtures
|
A Bayesian Analysis of Some Nonparametric Problems
|
eng_Latn
| 34,300 |
Why can't we compute the integral of P(x) in Bayesian inference analytically?
|
Why is the evidence sometimes said to be too complex to compute, and at other times neglected because it is fixed?
|
Every principal ideal domain satisfies ACCP.
|
eng_Latn
| 34,301 |
Non-universal usability?: a survey of how usability is understood by Chinese and Danish users
|
Meta-analysis of correlations among usability measures
|
Bayesian optimization and attribute adjustment
|
eng_Latn
| 34,302 |
Sparse Multinomial Logistic Regression via Bayesian L1 Regularisation
|
Fast Marginal Likelihood Maximisation for Sparse Bayesian Models
|
Workflow mining: A survey of issues and approaches
|
eng_Latn
| 34,303 |
A mathematical model of the finding of usability problems
|
Heuristic evaluation of user interfaces
|
fast bayesian hyperparameter optimization on large datasets.
|
eng_Latn
| 34,304 |
Bayesian Attack Model for Dynamic Risk Assessment
|
Dynamic Security Risk Management Using Bayesian Attack Graphs
|
Human spinal locomotor control is based on flexibly organized burst generators
|
kor_Hang
| 34,305 |
Identifying cyber risk hotspots: A framework for measuring temporal variance in computer network risk
|
Network vulnerability assessment using Bayesian networks
|
RUBER: An Unsupervised Method for Automatic Evaluation of Open-Domain Dialog Systems
|
eng_Latn
| 34,306 |
A Survey of Role Mining
|
Generative models for access control policies: applications to role mining over logs with attribution
|
A Review on Bilevel Optimization: From Classical to Evolutionary Approaches and Applications
|
eng_Latn
| 34,307 |
Game-Theoretic Approach to Feedback-Driven Multi-stage Moving Target Defense
|
Dynamic Security Risk Management Using Bayesian Attack Graphs
|
A graph-based system for network-vulnerability analysis
|
kor_Hang
| 34,308 |
Cognitive computation: A Bayesian machine case study
|
how to grow a mind: statistics, structure, and abstraction.
|
B-APT: Bayesian Anti-Phishing Toolbar
|
eng_Latn
| 34,309 |
Bayesian Statistics in Software Engineering: Practical Guide and Case Studies
|
A practical guide for using statistical tests to assess randomized algorithms in software engineering
|
Extractive Summarisation Based on Keyword Profile and Language Model
|
eng_Latn
| 34,310 |
A Novel Attack Graph Posterior Inference Model Based on Bayesian Network
|
automated generation and analysis of attack graphs.
|
Arrested development: The effects of incarceration on the development of psychosocial maturity
|
kor_Hang
| 34,311 |
Diagnosing Root Causes and Generating Graphical Explanations by Integrating Temporal Causal Reasoning and CBR
|
Diagnosing the root-causes of failures from cluster log files
|
Bayesian Compressive Sensing
|
eng_Latn
| 34,312 |
define bayesian inference
|
Bayesian statistics 1. Bayesian Inference. Bayesian inference is a collection of statistical methods which are based on Bayes' formula.
|
An inference is usually made about something with a degree of certainty, based on facts like statistics, calculations, observations or generalizations. 'Infer' is the verb form of 'inference', having the same meaning: to form an opinion or reach a conclusion based on known facts. For example: We can infer that it is cold outside based on what we see people wearing.
|
eng_Latn
| 34,313 |
P-value and Bayesian Inference
|
How can one express frequentist p-value in terms of Bayesian concepts?
|
Any subgroup of index $p$ in a $p$-group is normal.
|
eng_Latn
| 34,314 |
Enumeration of Selected Salivary Bacterial Groups
|
The proportional distribution of Streptococcus salivarius and other streptococci in various parts of the mouth.
|
Counting without sampling: new algorithms for enumeration problems using statistical physics
|
eng_Latn
| 34,315 |
Optimal dividend-payout in random discrete time
|
On optimal periodic dividend strategies for Lévy risk processes
|
The deceiving simplicity of problems with infinite charge distributions in electrostatics
|
eng_Latn
| 34,316 |
Improving Covariate Balance in 2^K Factorial Designs via Rerandomization
|
Rerandomization to improve covariate balance in experiments
|
Rerandomization to improve covariate balance in experiments
|
eng_Latn
| 34,317 |
We study the mixing time of the unit-rate zero-range process on the complete graph, in the regime where the number $n$ of sites tends to infinity while the density of particles per site stabilizes to some limit $\rho>0$. We prove that the worst-case total-variation distance to equilibrium drops abruptly from $1$ to $0$ at time $n\left(\rho+\frac{1}{2}\rho^2\right)$. More generally, we determine the mixing time from an arbitrary initial configuration. The answer turns out to depend on the largest initial heights in a remarkably explicit way. The intuitive picture is that the system separates into a slowly evolving solid phase and a quickly relaxing liquid phase. As time passes, the solid phase dissolves into the liquid phase, and the mixing time is essentially the time at which the system becomes completely liquid. Our proof combines meta-stability, separation of timescale, fluid limits, propagation of chaos, entropy, and a spectral estimate by Morris (2006).
|
EXAMPLE 1. Top-in-at-random shuffle. Consider the following method of mixing a deck of cards: the top card is removed and inserted into the deck at a random position. This procedure is repeated a number of times. The following argument should convince the reader that about n log n shuffles suffice to mix up n cards. The argument depends on following the bottom card of the deck. This card stays at the bottom until the first time (T_1) a card is inserted below it. Standard calculations, reviewed below, imply this takes about n shuffles. As the shuffles continue, eventually a second card is inserted below the original bottom card (this takes about n/2 further shuffles). Consider the instant (T_2) that a second card is inserted below the original bottom card. The two cards under the original bottom card are equally likely to be in relative order low-high or high-low. Similarly, the first time a third card is inserted below the original bottom card, each of the 6 possible orders of the 3 bottom cards is equally likely. Now consider the first time T_{n-1} that the original bottom card comes up to the top. By an inductive argument, all (n-1)! arrangements of the lower cards are equally likely. When the original bottom card is inserted at random, at time T = T_{n-1} + 1, then all n! possible arrangements of the deck are equally likely.
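The waiting times sketched above can be checked numerically. Below is a short sketch (not from the excerpt; all names are mine) that simulates the strong uniform time T for a 52-card deck and compares it with the closed form E[T] = n·H_{n-1} + 1 obtained by summing the geometric waits described in the argument.

```python
import random

def top_in_at_random_time(n, rng):
    """Simulate shuffles until the original bottom card reaches the top
    and is then inserted once more (the strong uniform time T)."""
    below = 0  # cards currently below the original bottom card
    t = 0
    while below < n - 1:
        t += 1
        # The removed top card is inserted into one of n uniformly random
        # slots; it lands below the tracked card with probability (below+1)/n.
        if rng.random() < (below + 1) / n:
            below += 1
    return t + 1  # one final shuffle inserts the tracked card at random

def expected_time(n):
    # E[T] = n * H_{n-1} + 1, summing the geometric waiting times
    return n * sum(1.0 / k for k in range(1, n)) + 1

rng = random.Random(0)
n = 52
sims = [top_in_at_random_time(n, rng) for _ in range(2000)]
avg = sum(sims) / len(sims)
```

For n = 52 the formula gives roughly 236 shuffles, consistent with the n log n heuristic.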
|
With the development of the shipping industry, the Automatic Identification System (AIS) used for ship communication becomes more and more important. Aiming at the problem of estimating the mixing position of AIS mixed signals, this paper improves the double sliding-window detection algorithm to estimate the mixing position of the mixed signals accurately. There is a significant difference in energy between the aliased and unmixed parts of the mixed signal. When the unmixed part just enters one of the energy detection windows, the decision function reaches its peak value; by establishing a proper decision function, the positions of the beginning and end of the mixed part can be estimated. The simulation results show that the proposed algorithm, compared with the frequency and amplitude detection algorithm, can achieve mixing position estimation with low complexity and strong robustness, and the estimation accuracy is close to the Cramer-Rao bound under the condition of a high signal-to-noise ratio.
|
eng_Latn
| 34,318 |
Generally, we price options by calculating the expected value of future cash flows, discounted with the appropriate risk-free interest rate. However, the closed-form solutions for many multi-asset options don't exist. In this paper we consider the pricing of the multi-asset options by Monte Carlo method. As a test case, we take the quanto option for example, which is a typical multi-asset option. At the same time, we use the antithetic variates technique, a variance reduction technique, to increase simulation efficiency.
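The antithetic variates technique mentioned above can be illustrated with a minimal single-asset sketch (a plain European call rather than the paper's quanto option, and all parameter values are illustrative): each standard normal draw z is paired with -z, so the two discounted payoffs are negatively correlated and their average has lower variance than two independent draws.

```python
import math
import random

def bs_call(s, k, r, sigma, t):
    """Black-Scholes price, used here only as a reference value."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    cdf = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return s * cdf(d1) - k * math.exp(-r * t) * cdf(d2)

def mc_call_antithetic(s, k, r, sigma, t, n_pairs, rng):
    """Monte Carlo price of a European call with antithetic variates:
    every normal draw z is reused as -z before averaging payoffs."""
    disc = math.exp(-r * t)
    drift = (r - 0.5 * sigma**2) * t
    vol = sigma * math.sqrt(t)
    total = 0.0
    for _ in range(n_pairs):
        z = rng.gauss(0.0, 1.0)
        for zz in (z, -z):  # the antithetic pair
            st = s * math.exp(drift + vol * zz)
            total += max(st - k, 0.0)
    return disc * total / (2 * n_pairs)

rng = random.Random(42)
price = mc_call_antithetic(100, 100, 0.05, 0.2, 1.0, 100_000, rng)
```

The same pairing idea carries over to the multi-asset case by negating the whole vector of correlated normal draws.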
|
An approximate method is developed for computing the values of European options on the maximum or the minimum of several assets. The method is very fast and is accurate for parameter ranges that are often of the most interest. The approach casts the problem in terms of order statistics and can be used to handle situations where the initial asset prices, the asset variances, and the covariances are all unequal. Numerical values are given to illustrate the accuracy of the method.
|
ABSTRACT: UNC-45A is a ubiquitously expressed protein highly conserved throughout evolution. Most of what we currently know about UNC-45A pertains to its role as a regulator of the actomyosin system...
|
eng_Latn
| 34,319 |
A central problem in Microeconomics is to design auctions with good revenue properties. In this setting, the bidders' valuations for the items are private knowledge, but they are drawn from publicly known prior distributions. The goal is to find a truthful auction (no bidder can gain in utility by misreporting her valuation) that maximizes the expected revenue. Naturally, the optimal-auction is sensitive to the prior distributions. An intriguing question is to design a truthful auction that is oblivious to these priors, and yet manages to get a constant factor of the optimal revenue. Such auctions are called prior-free. Goldberg et al. presented a constant-approximate prior-free auction when there are identical copies of an item available in unlimited supply, bidders are unit-demand, and their valuations are drawn from i.i.d. distributions. The recent work of Leonardi et al. [STOC 2012] generalized this problem to non i.i.d. bidders, assuming that the auctioneer knows the ordering of their reserve prices. Leonardi et al. proposed a prior-free auction that achieves a $O(\log^* n)$ approximation. We improve upon this result, by giving the first prior-free auction with constant approximation guarantee.
|
We give a simple analysis of the competitive ratio of the random sampling auction from [10]. The random sampling auction was first shown to be worst-case competitive in [9] (with a bound of 7600 on its competitive ratio); our analysis improves the bound to 15. In support of the conjecture that random sampling auction is in fact 4-competitive, we show that on the equal revenue input, where any sale price gives the same revenue, random sampling is exactly a factor of four from optimal.
|
ABSTRACT: UNC-45A is a ubiquitously expressed protein highly conserved throughout evolution. Most of what we currently know about UNC-45A pertains to its role as a regulator of the actomyosin system...
|
eng_Latn
| 34,320 |
The fitness value of a knowledge base (KB) can be unknown and only some imprecise information about it can be obtained. In some cases this information is given by means of an interval where we know the fitness is contained. Thus, the comparison of two randomly distributed intervals is necessary in this context in order to be able to determine the preferences among individuals. This contribution is a first approach to the use of statistical preference as a tool to compare this kind of intervals. We consider the probabilistic relation associated to the stochastic comparison of every pair of intervals and we study the cycle-transitivity of this relation. The defuzzification of this probabilistic relation, that is, the statistical preference relation, is studied and some properties are obtained. Our studies are particularly detailed for the case of the uniform distribution.
|
In some cases the fitness value of a knowledge base is not completely determined, but just bounded in an interval. In this case the fitness value is modelled by a random variable. Thus the comparison of random variables allows to compare the fitness values when they are not completely determined. In this contribution we consider a quite new proposal in stochastic comparison: statistical preference. We recall the advantages of this method with respect to (the classical) stochastic dominance. We also order by statistical preference two fitness values modelled by beta distributions with some special parameters.
|
Distillation at an infinite reflux ratio in combination with an infinite number of trays has been investigated.
|
eng_Latn
| 34,321 |
Inferences for the inflation parameter in the ZIP distributions: The method of moments
|
Abstract The zero-inflated Poisson (ZIP) distribution is widely used for modeling a count data set when the frequency of zeros is higher than the one expected under the Poisson distribution. There are many methods for making inferences for the inflation parameter in the ZIP models, e.g. the methods for testing Poisson (the inflation parameter is zero) versus ZIP distribution (the inflation parameter is positive). Most of these methods are based on the maximum likelihood estimators which do not have an explicit expression. However, the estimators which are obtained by the method of moments are powerful enough, easy to obtain and implement. In this paper, we propose an approach based on the method of moments for making inferences about the inflation parameter in the ZIP distribution. Our method is also compared to some recent methods via a simulation study and it is illustrated by an example.
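The method-of-moments idea can be sketched as follows (these are the standard moment equations for the ZIP model; the paper's exact estimators and tests may differ). Using E[X] = (1-π)λ and Var(X)/E[X] = 1 + πλ, matching sample moments gives closed-form estimates with no likelihood maximisation.

```python
def zip_mom(sample):
    """Method-of-moments estimates (lambda_hat, pi_hat) for a
    zero-inflated Poisson sample, from the moment identities
    E[X] = (1 - pi) * lambda and Var(X) / E[X] = 1 + pi * lambda."""
    n = len(sample)
    m = sum(sample) / n                               # sample mean
    s2 = sum((x - m) ** 2 for x in sample) / (n - 1)  # sample variance
    c = s2 / m - 1.0                                  # estimate of pi * lambda
    lam = m + c                                       # lambda_hat
    pi = c / lam                                      # pi_hat, the inflation parameter
    return lam, pi

lam, pi = zip_mom([0, 0, 0, 1, 2, 3, 0, 4, 2, 0])
```

By construction the estimates reproduce the two sample moments exactly, which is what makes them easy to obtain and implement, as the abstract notes.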
|
Abstract We present a convenient analytical parametrization, in both configuration and momentum spaces, of the deuteron wave-function calculated with the Paris potential.
|
eng_Latn
| 34,322 |
Collecting Coupons with Random Initial Stake
|
Motivated by a problem in the theory of randomized search heuristics, we give a very precise analysis for the coupon collector problem where the collector starts with a random set of coupons (chosen uniformly from all sets).

We show that the expected number of rounds until we have a coupon of each type is $nH_{n/2} - 1/2 \pm o(1)$, where $H_{n/2}$ denotes the $(n/2)$th harmonic number when $n$ is even, and $H_{n/2} := (1/2) H_{\lfloor n/2 \rfloor} + (1/2) H_{\lceil n/2 \rceil}$ when $n$ is odd. Consequently, the coupon collector with random initial stake is by half a round faster than the one starting with exactly $n/2$ coupons (apart from additive $o(1)$ terms).

This result implies that the classic simple heuristic called \emph{randomized local search} needs an expected number of $nH_{n/2} - 1/2 \pm o(1)$ iterations to find the optimum of any monotonic function defined on bit-strings of length $n$.
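The claimed expectation can be sanity-checked by simulation (a sketch under the stated model; the choice of n and run count is mine): start with a uniformly random subset of the n coupon types and count rounds until all types are present.

```python
import random

def harmonic(k):
    return sum(1.0 / i for i in range(1, k + 1))

def H_half(n):
    # H_{n/2} as defined in the abstract: for odd n, average the
    # floor and ceiling harmonic numbers
    if n % 2 == 0:
        return harmonic(n // 2)
    return 0.5 * (harmonic(n // 2) + harmonic(n // 2 + 1))

def collect_from_random_stake(n, rng):
    """Coupon collector starting from a uniformly random subset of coupons."""
    have = {i for i in range(n) if rng.random() < 0.5}
    t = 0
    while len(have) < n:
        t += 1
        have.add(rng.randrange(n))
    return t

rng = random.Random(0)
n = 20
runs = 20000
avg = sum(collect_from_random_stake(n, rng) for _ in range(runs)) / runs
predicted = n * H_half(n) - 0.5
```

Even at n = 20 the simulated mean lands close to the asymptotic formula, since the -1/2 term absorbs the binomial fluctuation of the initial stake.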
|
Let D_j be a domain in n_j-dimensional Euclidean space, for j = 1, ..., k. Suppose that for each j = 1, ..., k, N_j points are chosen independently at random in D_j. A theorem, which is an extension of a theorem of Crofton, is
|
eng_Latn
| 34,323 |
Optimal Dynamic Pricing and Ordering Policies for Products with Time and Price Sensitive Demand
|
In this paper, the optimal dynamic pricing and ordering policies for products with time- and price-sensitive demand are considered, and an inventory model for maximizing the retailer's total profit is developed. We then present an analysis of the model. A stage algorithm is developed to determine the optimal order policy, and a numerical example is given. Some managerial insights are obtained.
|
Consider a randomized load balancing problem consisting of a large number n of server sites each equipped with K servers. Under the greedy policy, clients randomly probe a site to check whether there is still a server available. If not, d -- 1 other sites are probed and the task is assigned to the site with the fewest number of busy servers. If all the servers are also busy in each of these d -- 1 sites, the task is lost. This short paper analyzes a set of policies, i.e., (L, d) policies, that will occasionally probe additional sites even when there is still a server available at the site that was probed first. Using mean field methods, we show that these policies, that preventively probe other sites, can achieve the same loss probability while requiring a lower overall probe rate.
|
eng_Latn
| 34,324 |
The Public Policy System to Prompt the Development of Cyclic Economy
|
On the basis of foreign experience in developing a cyclic economy, this article proposes basic strategies to promote the development of the cyclic economy in China. The government should not take part in the specific process of gaming, but should work out and construct public policies to adjust, norm, and guide the activities of the main actors of the cyclic economy. The public policy system includes development plans, the legal system to support the promotion of the cyclic economy, the economic policy system, and the social management policy system.
|
Consider a randomized load balancing problem consisting of a large number n of server sites each equipped with K servers. Under the greedy policy, clients randomly probe a site to check whether there is still a server available. If not, d -- 1 other sites are probed and the task is assigned to the site with the fewest number of busy servers. If all the servers are also busy in each of these d -- 1 sites, the task is lost. This short paper analyzes a set of policies, i.e., (L, d) policies, that will occasionally probe additional sites even when there is still a server available at the site that was probed first. Using mean field methods, we show that these policies, that preventively probe other sites, can achieve the same loss probability while requiring a lower overall probe rate.
|
eng_Latn
| 34,325 |
On the Effectiveness of Evolution Compared to Time-Consuming Full Search of Optimal 6-State Automata
|
The Creature's Exploration Problem is defined for an independent agent on regular grids. This agent shall visit all non-blocked cells in the grid autonomously in the shortest time. Such a creature is defined by a specific finite state machine. The literature shows that the optimal 6-state automaton has already been found by simulating all possible automata. This paper tries to answer the question of whether it is possible to find good or optimal automata by using evolution instead of time-consuming full simulation. We show that it is possible to achieve 80% to 90% of the quality of the best automata with evolution in much shorter time.
|
Motivated by a problem in the theory of randomized search heuristics, we give a very precise analysis for the coupon collector problem where the collector starts with a random set of coupons (chosen uniformly from all sets).

We show that the expected number of rounds until we have a coupon of each type is $nH_{n/2} - 1/2 \pm o(1)$, where $H_{n/2}$ denotes the $(n/2)$th harmonic number when $n$ is even, and $H_{n/2} := (1/2) H_{\lfloor n/2 \rfloor} + (1/2) H_{\lceil n/2 \rceil}$ when $n$ is odd. Consequently, the coupon collector with random initial stake is by half a round faster than the one starting with exactly $n/2$ coupons (apart from additive $o(1)$ terms).

This result implies that the classic simple heuristic called \emph{randomized local search} needs an expected number of $nH_{n/2} - 1/2 \pm o(1)$ iterations to find the optimum of any monotonic function defined on bit-strings of length $n$.
|
eng_Latn
| 34,326 |
Controlled random sequences and Markov chains
|
CONTENTS
Introduction
Chapter I. Foundations of the general theory of controlled random sequences and Markov chains with the expected reward criterion
§ 1. Controlled random sequences, Markov chains, and models
§ 2. Necessary and sufficient conditions for optimality
§ 3. The Bellman equation for the value function and the existence of (ε-)optimal strategies
Chapter II. Some problems in the theory of controlled homogeneous Markov chains
§ 4. Description of the solutions of the Bellman equation, a characterization of the value function, and the Bellman operator
§ 5. Sufficiency of stationary strategies in homogeneous Markov models
§ 6. The lexicographic Bellman equation
References
|
The sequential analogue of the Behrens-Fisher problem is considered. The pooled-variance two-sample sequential t test is modified to account for unequal variances. Operating characteristic and average-sample-number curves are calculated for both the pooled-variance and the modified t tests by computer simulations. An example is given using data from a tissue assay for breast cancer tumors.
|
eng_Latn
| 34,327 |
As the discount level (on the horizontal axis) gets up to e of a cent, 20 billion pieces become presorted.
|
The discount level is shown on the horizontal axis.
|
On the horizontal axis, the discount level remains flat at 0 and no pieces are presorted.
|
eng_Latn
| 34,328 |
A stitch in time... : management
|
Succession planning is the planning of how certain events will occur while the owner of assets is still alive to ensure a smooth and financially sustainable transition of the current business to a younger generation who may or may not be a family member.
|
A shuttle loom runs irregularly, noisily, and sometimes inefficiently due, in part, to the nature of the conventional picking mechanism. The fact has been demonstrated by measuring the instantaneous speed and power values over a number of loom cycles. The probable reasons for the behavior are discussed from a theoretical viewpoint and some supporting data are presented to show that the simple classical theories do not hold good under actual dynamic conditions.
|
eng_Latn
| 34,329 |
In ecology, which Q is a rectangular frame laid on the ground to define an area for study?
|
How to carry out ecological sampling page 1. The estimation can be improved by dividing the quadrat into a grid of 100 squares each representing 1% cover. This can either be done mentally by imagining 10 longitudinal and 10 horizontal lines of equal size superimposed on the quadrat, or physically by actually dividing the quadrat by means of string or wire attached to the frame at standard intervals. This is only practical if the vegetation in the area to be sampled is very short, otherwise the string/wire will impede the laying down of the quadrat over the vegetation. Quadrats are most often used for sampling, but are not the only type of sampling units. It depends what you are sampling. If you are sampling aquatic microorganisms or studying water chemistry, then you will most likely collect water samples in standard sized bottles or containers. If you are looking at parasites on fish, then an individual fish will most likely be your sampling unit. Similarly, studies of leaf miners would probably involve collecting individual leaves as sampling units. In these last two cases, the sampling units will not be of standard size. This problem can be overcome by using a weighted mean, which takes into account different sizes of sampling unit, to arrive at the mean number of organisms per sampling unit. There are three main ways of taking samples. 1. RANDOM SAMPLING Random sampling is usually carried out when the area under study is fairly uniform, very large, and or there is limited time available. When using random sampling techniques, large numbers of samples/records are taken from different positions within the habitat. A quadrat frame is most often used for this type of sampling. The frame is placed on the ground (or on whatever is being investigated) and the animals, and/ or plants inside it counted, measured, or collected, depending on what the survey is for. This is done many times at different points within the habitat to give a large number of different samples. 
In the simplest form of random sampling, the quadrat is thrown to fall at random within the site. However, this is usually unsatisfactory because a personal element inevitably enters into the throwing and it is not truly random. True randomness is an important element in ecology, because statistics are widely used to process the results of sampling. Many of the common statistical techniques used are only valid on data that is truly randomly collected. This technique is also only possible if quadrats of small size are being used. It would be impossible to throw anything larger than a 1m2 quadrat and even this might pose problems. Within habitats such as woodlands or scrub areas, it is also often not possible to physically lay quadrat frames down, because tree trunks and shrubs get in the way. In this case, an area the same size as the quadrat has to be measured out instead and the corners marked to indicate the quadrat area to be sampled. A better method of random sampling is to map the area and then to lay a numbered grid over the map. A (computer generated) random number table is then used to select which squares to sample in. ( Random number Table ). For example, if we have mapped our habitat , and have then laid a numbered grid over it as shown (Figure - below) , we could then choose which squares we should sample in by using the random number table . A numbered grid map of an area to be sampled If we look at the top of the first column in the random number table , our first number is 20. Moving downwards, the next two numbers in the random number table would be 74 and 94, but our highest numbered square on our grid is only 29 (Figure above). We would therefore ignore 74 and 94 and move on to the next number which is 22. We would then sample in Square 22. Continuing down the figures in this column, we would soon come across the number 20 again. As we have already se
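The random-number-table procedure described above is straightforward to reproduce with a pseudo-random generator (a sketch; grid size and sample count are illustrative): drawing numbered squares without replacement also avoids having to discard out-of-range table entries like the 74 and 94 in the worked example.

```python
import random

def choose_squares(n_squares, n_samples, rng):
    """Select quadrat positions by simple random sampling without
    replacement from a grid numbered 1..n_squares."""
    return rng.sample(range(1, n_squares + 1), n_samples)

rng = random.Random(7)
squares = choose_squares(29, 5, rng)  # e.g. 5 squares from a 29-square grid
```

Because the generator is seeded, the selection is reproducible, which matters when the sampling design has to be reported.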
|
hanging hardware included choice of 3 wood frames (see Frame Details tab below) This panorama is printed on pH neutral, heavy art paper and officially licensed. The Houston Texans playing to their home crowd at Reliant Stadium was photographed by James Blakeway. Reliant Stadium opened for business in 2002, and the Texans made their regular season debut against the Dallas Cowboys. On that day, they unveiled a gameday atmosphere that would become a staple of Texans football games. The first NFL stadium with a retractable roof, Reliant Stadium has earned a reputation as one of the premier big-event venues in American sports, hosting Big 12 Championship games, NCAA Men's Basketball tournaments, the annual Texas Bowl and Super Bowl XXXVIII. Previously the Houston Astrodome, Reliant Stadium features the largest HD video/scoreboard in pro sports in 2013. Two 52 feet high and 277 feet wide video boards replaced the existing one during this offseason. One of the most notable aspects of the design is the unique operable fabric roof. Posters are not available unframed. The print images on our website don't do justice to the actual finished piece that you will receive. Why? Because though the print images shown on our website are taken from the actual print or photo, the mat and frame shown around the image is an illustration created in Photoshop meant to represent the actual mat and frame. TICKET FRAMING Send us your game tickets or photos to frame with this panoramic print! see samples Note: The matted option must be selected to include ticket framing.
|
eng_Latn
| 34,330 |
Imagine the following problem: you listen to a radio station and take notes on how often each song was played. How can you estimate, based on your notes (e.g. 30 songs played once, 2 played twice, one song three times), how many songs the radio station has available, assuming all the songs are played with a uniform distribution?
|
If shuffle-playing a playlist ×100 resulted in [10 13 10 3 2 2] different songs being repeated [1 2 3 4 5 6] times, what is the estimate for the total number of songs? (assuming shuffle play was completely random)

Update: (R code)

```r
k <- 50       # k number of songs on the disk indexed 1:k
n <- 100      # n number of random song selections
m <- 20       # m number of repeat experiments
colnum <- 10
mat <- matrix(data = NA, nrow = m, ncol = colnum)
df <- as.data.frame(mat)
for (i in 1:m) {
  played <- 1 + floor(k * runif(n))  # actual song indices (1:k) selected
  freq <- sapply(1:k, function(x) { sum(played == x) })  # number of times song x is played
  histo <- sapply(1:colnum, function(x) { sum(freq == x) })
  for (j in 1:colnum) {
    df[i, j] <- histo[j]
  }
}
df
```

Resulting in, e.g., 20 distributions (V1 = number of single plays, V2 = number of double plays, etc.):

```
   V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
1  15 13 11  1  2  2  0  0  0  0
2  15 12 10  1  4  0  1  0  0  0
3  12 14  7  6  3  0  0  0  0  0
4  17 16  6  4  2  0  1  0  0  0
5  17 10 12  5  0  0  1  0  0  0
6  13 15 11  6  0  0  0  0  0  0
7  10 14  9  3  2  1  1  0  0  0
8  12 17  5  6  3  0  0  0  0  0
9   9 19  8  3  1  2  0  0  0  0
10 13  9 11  6  1  0  1  0  0  0
11 16  9 12  5  2  0  0  0  0  0
12 15  9 11  6  2  0  0  0  0  0
13 19  9  7  4  4  1  0  0  0  0
14 17 11  4  7  3  1  0  0  0  0
15 11 20  8  1  3  1  0  0  0  0
16 14 12 10  5  0  2  0  0  0  0
17  9 12  8  7  3  0  0  0  0  0
18 10 15  9  4  2  0  1  0  0  0
19 14 11 12  7  0  0  0  0  0  0
20 16 14 11  3  1  1  0  0  0  0
```

Now I need to get from here to the Poisson modelling--my R is a bit rusty (?lmer)...--Any help would be appreciated...

Attempted Poisson modelling: disappointing fit?!

```r
plot(1:colnum, df[1, 1:colnum], ylim = c(0, 30), type = "l", xlab = "repeats", ylab = "count")
for (i in 1:m) {
  clr <- rainbow(m)[i]
  lines(1:colnum, df[i, 1:colnum], type = "l", col = clr)
  points(1:colnum, df[i, 1:colnum], col = clr)
}
df.lambda <- data.frame(lambda = seq(1, 5, 0.1), ssq = c(NA))
df.lambda
for (ii in 1:dim(df.lambda)[1]) {
  l <- df.lambda$lambda[ii]
  ssq <- 0
  for (i in 1:20) {
    for (j in 1:10) {
      ssq <- ssq + (df[i, j] - n * dpois(j, l))^2
    }
  }
  print(ssq)
  df.lambda$lambda[ii] <- l
  df.lambda$ssq[ii] <- ssq
}
df.lambda
lambda.est <- df.lambda$lambda[which.min(df.lambda$ssq)]
lambda.est  # 2.4
points(x <- 1:10, n * dpois(1:10, lambda.est), type = "l", lwd = 2)
100 * dpois(1:10, 3)
n / lambda.est
```

The estimated lambda stays around 2.3, with an n estimate of around 43; the fitted curve seems very discrepant, and seems to worsen with rising n!? Doesn't this have to do with the fact that our repeats are different from the 'classical' Poisson distribution: it's not just ONE event that repeats itself x number of times, but the sum of repeats of different items (songs)?!
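A simpler route than fitting a Poisson to the repeat histogram (a sketch of my own, not from the thread) is to match only the number of distinct songs heard: with k songs and n uniform plays, the expected number of distinct songs is k(1 - (1 - 1/k)^n), which increases in k and can be inverted by bisection.

```python
def estimate_num_songs(n_plays, n_distinct):
    """Invert E[distinct] = k * (1 - (1 - 1/k)**n_plays) for k by bisection.
    If every play was distinct, the estimate diverges toward the upper cap."""
    def expected_distinct(k):
        return k * (1 - (1 - 1 / k) ** n_plays)
    lo, hi = float(n_distinct), 1e7  # expected_distinct(n_distinct) < n_distinct
    for _ in range(200):
        mid = (lo + hi) / 2
        if expected_distinct(mid) < n_distinct:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# 100 plays, 10+13+10+3+2+2 = 40 distinct songs, as in the question above
k_hat = estimate_num_songs(100, 40)
```

A single noisy draw of the distinct count naturally gives a noisy k estimate; averaging the distinct counts over repeated experiments before inverting would tighten it.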
|
Please use UK pre-uni methods only (at least at first). Thank you.
|
eng_Latn
| 34,331 |
Okay, this is dumb, but what is an ERA and how is it calculated?
|
With the exception of wins and losses, earned run average (ERA) is the most important baseball statistic for pitchers. ERA says how many earned runs a pitcher gives up per nine innings pitched.

Steps:
1. Add up the total innings pitched. For every out that is recorded while you are pitching, you get one-third of an inning.
2. Add up the total number of earned runs given up. If there are no errors in the inning, all the runs are earned runs. If there are errors, reconstruct the inning without the errors to see how many runs would have scored if the fielding had been perfect.
3. Multiply the earned runs by 9.
4. Divide by the total innings pitched.
5. Round the number to the second decimal place. For example, 3.2051 is 3.21.

Tips:
When calculating earned runs, always give the benefit of the doubt to the pitcher.
If an error occurs with two outs, all runs scored after that error are unearned runs.
Any runner who scores is the responsibility of the pitcher who let him or her get on base, even if that pitcher has left the game when the runner scores.
If you leave the game while a batter is still at bat and the pitching count is in the batter's favor (that is, there are more balls than strikes in the count), then the batter is your responsibility.
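The five steps translate directly into code; a minimal sketch (the function name and the extra_outs argument are mine):

```python
def era(earned_runs, full_innings, extra_outs=0):
    """Earned run average: 9 * earned runs / innings pitched,
    where each recorded out is one third of an inning (step 1),
    rounded to two decimals (step 5)."""
    innings = full_innings + extra_outs / 3.0
    return round(9.0 * earned_runs / innings, 2)

# 5 earned runs over 12 innings plus one extra out: 45 / (37/3)
print(era(5, 12, extra_outs=1))
```

Deciding which runs are earned (steps involving errors and inherited runners) still has to happen before calling this.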
|
Eric is just going through a difficult phase right now. I think the kettle is ready. Pick up the phone, it's my friend Dara, who says it's up to you.
|
eng_Latn
| 34,332 |
I need more points! I've just looked at the leader board and 12 points per answer seems typical and some users make over 15 points per answer. I need to get my average up how is it done?
|
I know, I know! It is possible to get lots of points per answer. It is simple (no one shall copy my solution, nia!).\n\nAnyway, just vote 20 times a day and answer 1 question. You have averaged 22 points an answer.
|
A batting average of .350 means that Kyler hits an average of 350 out of 1000 at bats. This is equivalent to 350/1000 or 35%.\n\nAssuming he keeps this rate, it is a simple multiplication to estimate the number of times he will hit.\n\n20 times at bat x 0.350 batting average = 7 hits\n\nYou can also do this as a pair of ratios:\n350 ..... x\n------ = ----\n1000 ... 20\n\nCross multiplying:\n1000x = 7000\nx = 7000/1000\nx = 7 hits
|
eng_Latn
| 34,333 |
Why do Dwellers return from the Wasteland by themselves? In Fallout Shelter why do some Dwellers return from the wasteland by themselves?
|
Dwellers return automatically after finding a low amount of items in Wasteland I have a strange thing happening - dwellers I send to explore wasteland are automatically returning to vault after finding a little amount of items (4 or 5). The exploration log says: "That's as much as I can carry. I better head back and show the Overseer what I found". Before today they successfully carried up to 30 items and never returned to vault on their own. I don't know is it relevant or not, but I have a lot of free storage space available. What caused this change? What can I do to prevent this?
|
What happens in these end-game situations? This is intended to be a canonical question regarding possible endings of the game. For each question, assume a single proper strike was made, and note where an improper strike would change the game outcome (beyond an additional point for the opponent). If the Queen has been covered: What happens if I pocket my last C/m and the Striker? What happens if I pocket my opponent's last C/m? What happens if I pocket my opponent's last C/m and the Striker? What happens if I pocket both players' last C/m? What happens if I pocket both players' last C/m and the Striker? If the shot covers the Queen: What happens if I pocket my last C/m and the Queen? What happens if I pocket my last C/m, the Queen and the Striker? What happens if I pocket my opponents' last C/m and the Queen? What happens if I pocket my opponents' last C/m, the Queen and the Striker? What happens if I pocket both players' last C/m and the Queen? What happens if I pocket both players' last C/m, the Queen and the Striker? If the Queen is left on the board: What happens if I pocket my last C/m, leaving the Queen? What happens if I pocket my last C/m and the Striker, leaving the Queen? What happens if I pocket my opponent's last C/m, leaving the Queen? What happens if I pocket my opponent's last C/m and the Striker, leaving the Queen? What happens if I pocket both players' last C/m, leaving the Queen? What happens if I pocket both players' last C/m and the Striker, leaving the Queen?
|
eng_Latn
| 34,334 |
Running top in BusyBox When I ran the top command in BusyBox, I just wanted to know if VSZ% is MEM%; if not, how can I get MEM% with the top command in BusyBox?
|
How to interpret busybox "top" output? I am using BusyBox on a small embedded ARM system. I'm trying to read the "top" output, in particular for the Python process listed. How much real memory is this process using? Also what does VSZ stand for? The system only has 64MB of RAM. Mem: 41444K used, 20572K free, 0K shrd, 0K buff, 18728K cached CPU: 3% usr 3% sys 0% nic 92% idle 0% io 0% irq 0% sirq Load average: 0.00 0.04 0.05 1/112 31667 PID PPID USER STAT VSZ %VSZ %CPU COMMAND 777 775 python S 146m 241% 3% /usr/bin/python -u -- dpdsrv.py
|
Expected value of subset of variables in Bayesian setting Assume we have $N$ random variables $X_1, \ldots, X_N$ with known (posterior) distributions that are easy to sample from. For simplicity, assume that I am interested in the expected value of the ten largest of these random variables. There will be some uncertainty surrounding this expected value as there is uncertainty with respect to the rank of a given $X_i$ in case the distributions of $X_1, \ldots, X_N$ overlap. I want to use simulation based methods. My approach is to simulate $M$ samples from the distributions of $X_1, \ldots, X_N$, where in each iteration, I construct a binary indicator of length $N$ indicating whether random variable $i$ is within the set of the ten largest samples or not. Averaging over these $M$ draws gives me the probability that random variable $i$ is member of the top ten group. Denote this probability as $\pi_i$. Define a binary random variable that indicates membership in the top ten group as $S_i$. $S_i$ follows a Bernoulli distribution s.t. $S_i \sim Ber(\pi_i)$. In a second step, I draw $L$ samples from the distribution of $S_i$ and from the distributions of $X_1, \ldots, X_N$. Given a draw of $S_i$, I can select the top ten out of the samples of $X_1, \ldots, X_N$ in each iteration and average the respective samples within the top ten group. I save this average, so the result after $L$ iterations will be the distribution of the average outcome within the top ten group, taking into account all relevant sources of uncertainty. My questions are: Is this simulation approach valid in general? From a Bayesian perspective, is this simulation valid or do I need to treat additional quantities (e.g. $S_i$ or $\pi_i$) as random variables that need a prior?
|
eng_Latn
| 34,335 |
Suggesting Policies and Practices for Increasing Justice and Assuring the Sustainability of the U.S. Healthcare System
|
The goal of this chapter is to suggest policies and practices that would render the U.S. health care system just and sustainable by: (1) Curbing current unethical and scientifically unsound medical practices that are unjustifiably harmful and of little or no benefit; (2) Financing healthcare by drawing upon the profits earned by all those individuals, organizations, and manufacturers that provide healthcare. To have this happen, the healthcare system must be purged of conflicts of interest, especially the FDA. Dealing justly with costs requires that excess profits that amount to gouging be mandated to be used for medical interventions.
|
Consider a randomized load balancing problem consisting of a large number n of server sites each equipped with K servers. Under the greedy policy, clients randomly probe a site to check whether there is still a server available. If not, d -- 1 other sites are probed and the task is assigned to the site with the fewest number of busy servers. If all the servers are also busy in each of these d -- 1 sites, the task is lost. This short paper analyzes a set of policies, i.e., (L, d) policies, that will occasionally probe additional sites even when there is still a server available at the site that was probed first. Using mean field methods, we show that these policies, that preventively probe other sites, can achieve the same loss probability while requiring a lower overall probe rate.
|
eng_Latn
| 34,336 |
On the bias of recombination fractions, Kosambi's and Haldane's distances based on frequencies of gametes
|
The estimation of recombination frequencies is a crucial step in genetic mapping. For the construction of linkage maps, nonadditive recombination fractions must be transformed into additive map distances. Two of the most commonly used transformations are Kosambi’s and Haldane’s mapping functions. This paper reports on the calculation of the bias associated with estimation of recombination fractions, Kosambi’s distances, and Haldane’s distances. I calculated absolute and relative biases numerically for a wide range of recombination fractions and sample sizes. I assumed that the ratio of recombinant gametes to the total number of gametes can be adequately represented by a binomial function. I found that the bias in recombination fraction estimates is negative, i.e., the estimator is an underestimate. However, significant values were only obtained when recombination fractions were large and sample sizes were small. The relevant estimates of recombination fractions were, therefore, nearly unbiased. Haldane’s ...
|
We point out that aging occurs for the following simple model of fragmentation-coagulation inspired by Pitman's coalescent random forests. For every $n\in \N$, we consider a uniform random tree with $n$ vertices, and at each step, depending on the outcome of an independent fair coin tossing, either we remove one edge chosen uniformly at random amongst the remaining edges, or we replace one edge chosen uniformly at random amongst the edges which have been removed previously. The process that records the sizes of the tree-components evolves by fragmentation and coagulation. It exhibits aging in the sense that when it is observed after $k$ steps in the regime $k\sim tn+s\sqrt n$ with $t>0$ fixed, it seems to reach a statistical equilibrium as $n\to\infty$; but different values of $t$ yield distinct pseudo-stationary distributions. The approach owes much to the construction by Aldous and Pitman of the standard additive coalescent via Poissonian cuts on the skeleton of a Continuum Random Tree.
|
eng_Latn
| 34,337 |
Surface Trap for Cs atoms based on Evanescent-Wave Cooling
|
We demonstrate a gravito-optical surface trap for Cs atoms which exploits cooling in an evanescent light wave. About 10^5 atoms were cooled down to 3 μK and formed a sample with a mean height of approximately 20 μm above the surface of a dielectric prism. The trap does not use a magnetic field and leads to very small atomic level perturbations. The excited-state population of the stored atoms is approximately 1.5×10^-6 and collisional losses are strongly suppressed. © 1997 The American Physical Society
|
Cellular manufacturing system facilitates lean manufacturing in terms of production flexibility and control simplification. The paper presents a case study on a newly constructed cellular manufacturing system adopted by an electronic assembly factory as the back end process. The original rabbit chase is infeasible in this case because products handled are multi-types and multi-paths. Further, the cycle times are largely imbalance. The application of two proposed rabbit chase models was investigated through computer simulations enhanced with ANOVA and surface response methodology. The allocation of operators and the impact of changing lot size to the performances of the cell are investigated. For the findings, there are clear indications of the effects of the number of operators and the lot size for the performances of the system, regardless which rabbit chase model used.
|
eng_Latn
| 34,338 |
Quadruple systems containing AG(3,2)
|
A quadruple system of order v, denoted QS(v), is an ordered pair (X, Q) where X is a set of cardinality v and Q is a set of 4-subsets of X called blocks, with the property that every 3-subset of X is contained in a unique block. The points and planes of the affine geometry AG(3, 2) form a QS(8). We prove that a QS(v) containing a proper subsystem isomorphic to AG(3, 2) exists if and only if v ≥ 16 and v ≡ 2 or 4 (mod 6).
|
Abstract We study the learning behavior of a population of buyers and a population of sellers whose members are repeatedly randomly matched to engage in a sealed bid double auction. The agents are assumed to be boundedly rational and choose their strategies by imitating successful behavior and adding innovations triggered by random errors or communication with other agents. This process is modelled by a two-population genetic algorithm. A general characterization of the equilibria in mixed population distributions is given and it is shown analytically that only one price equilibria are attractive for the GA dynamics. Simulation results confirm these findings and imply that in cases with random initialization with high probability the gain of trade is equally split between buyers and sellers.
|
eng_Latn
| 34,339 |
Roulette Model of Systematic Sampling
|
Considering a natural model of systematic sampling, we can calculate the expectation of statistic exactly as it is without regarding it as any other sampling methods. I will demonstrate the details of calculation and get some results. It can be shown that the sample mean of systematic sampling is an unbiased estimator of population mean. Variance of sample mean can be described explicitly and depends on how to sort a population list. Sum of the variance of sample mean and the mean of sample variance is kept constant between random and systematic sampling. As the sample size becomes larger, the variance of sample mean converges to 0.
|
We consider a single-server retrial system with one and several classes of customers. In the case of several classes, each class has its own orbit for retrying customers. The retrials from the orbits are generated with constant retrial rates. In the single class case, we are interested in finding an optimal retrial rate. Whereas in the multi-class case, we use game theoretic framework and find equilibrium retrial rates. Our performance criteria balance the number of retrials per retrying customer with the number of unhappy customers.
|
eng_Latn
| 34,340 |
Finding an acceptable upper limit on a random generator with an unknown distribution
|
Multi armed bandit for general reward distribution
|
Not including stdlib.h does not produce any compiler error!
|
eng_Latn
| 34,341 |
Efficient evaluation of integrals with kernel 1/rχ for quadrilateral elements with irregular shape
|
Abstract In this paper, integrals with kernel 1 / r χ are concerned with the following three aspects: a). the near singularity caused by distorted element shape; b). the near singularity derived from the angular direction; c). the singularity/near singularity in the radial direction. A conformal polar coordinate transformation (CPCT) is proposed to eliminate the shape effect of elements, which can keep the shape characteristic of distorted elements, and an improved sigmoidal transformation is introduced to alleviate the near singularity in the angular direction. By combination of the two strategies with existing methods, such as singularity subtraction method and distance transformation method utilized in this paper, an efficient and robust numerical integration approach can be obtained for various orders of singular/nearly singular integrals, and a distorted curved quadrilateral element extracted from a cylinder surface is provided to demonstrate the efficiency and robustness of the proposed method.
|
Motivated by a problem in the theory of randomized search heuristics, we give a very precise analysis for the coupon collector problem where the collector starts with a random set of coupons (chosen uniformly from all sets). We show that the expected number of rounds until we have a coupon of each type is $nH_{n/2} - 1/2 \pm o(1)$, where $H_{n/2}$ denotes the $(n/2)$th harmonic number when $n$ is even, and $H_{n/2}:= (1/2) H_{\lfloor n/2 \rfloor} + (1/2) H_{\lceil n/2 \rceil}$ when $n$ is odd. Consequently, the coupon collector with random initial stake is by half a round faster than the one starting with exactly $n/2$ coupons (apart from additive $o(1)$ terms). This result implies that the classic simple heuristic called \emph{randomized local search} needs an expected number of $nH_{n/2} - 1/2 \pm o(1)$ iterations to find the optimum of any monotonic function defined on bit-strings of length $n$.
|
eng_Latn
| 34,342 |
Det individuella samlandet : från längtan till begär
|
This essay deals with the individual collecting of four individuals with different backgrounds and lives. The essay investigates what, how and why people collect. It provides a stepping-stone in mo ...
|
Motivated by a problem in the theory of randomized search heuristics, we give a very precise analysis for the coupon collector problem where the collector starts with a random set of coupons (chosen uniformly from all sets). We show that the expected number of rounds until we have a coupon of each type is $nH_{n/2} - 1/2 \pm o(1)$, where $H_{n/2}$ denotes the $(n/2)$th harmonic number when $n$ is even, and $H_{n/2}:= (1/2) H_{\lfloor n/2 \rfloor} + (1/2) H_{\lceil n/2 \rceil}$ when $n$ is odd. Consequently, the coupon collector with random initial stake is by half a round faster than the one starting with exactly $n/2$ coupons (apart from additive $o(1)$ terms). This result implies that the classic simple heuristic called \emph{randomized local search} needs an expected number of $nH_{n/2} - 1/2 \pm o(1)$ iterations to find the optimum of any monotonic function defined on bit-strings of length $n$.
|
swe_Latn
| 34,343 |
Designing distribution patterns for long-term inventory routing with constant demand rates
|
This paper proposes a practical solution approach for the challenging optimization problem of minimizing overall costs in an integrated distribution and inventory control system. Constant customer demand rates are assumed and therefore a long-term, cyclic planning approach is adopted. The concept of distribution patterns, consisting of vehicles performing multiple tours with possibly different frequencies, is used to extend the traditional concept of a single tour per vehicle. A heuristic is proposed that is capable of solving a cyclical distribution problem involving real-life features, such as customer capacity restrictions, loading and unloading extra times and prespecified minimum times between consecutive deliveries.
|
Motivated by a problem in the theory of randomized search heuristics, we give a very precise analysis for the coupon collector problem where the collector starts with a random set of coupons (chosen uniformly from all sets). We show that the expected number of rounds until we have a coupon of each type is $nH_{n/2} - 1/2 \pm o(1)$, where $H_{n/2}$ denotes the $(n/2)$th harmonic number when $n$ is even, and $H_{n/2}:= (1/2) H_{\lfloor n/2 \rfloor} + (1/2) H_{\lceil n/2 \rceil}$ when $n$ is odd. Consequently, the coupon collector with random initial stake is by half a round faster than the one starting with exactly $n/2$ coupons (apart from additive $o(1)$ terms). This result implies that the classic simple heuristic called \emph{randomized local search} needs an expected number of $nH_{n/2} - 1/2 \pm o(1)$ iterations to find the optimum of any monotonic function defined on bit-strings of length $n$.
|
eng_Latn
| 34,344 |
[Envelope containing a letter from Chandler, Carleton & Robertson to John L. Haynes regarding several current cases]
|
This item is an envelope which enclosed a letter from Chandler, Carleton & Robertson to John L. Haynes regarding several current cases.
|
Motivated by a problem in the theory of randomized search heuristics, we give a very precise analysis for the coupon collector problem where the collector starts with a random set of coupons (chosen uniformly from all sets). We show that the expected number of rounds until we have a coupon of each type is $nH_{n/2} - 1/2 \pm o(1)$, where $H_{n/2}$ denotes the $(n/2)$th harmonic number when $n$ is even, and $H_{n/2}:= (1/2) H_{\lfloor n/2 \rfloor} + (1/2) H_{\lceil n/2 \rceil}$ when $n$ is odd. Consequently, the coupon collector with random initial stake is by half a round faster than the one starting with exactly $n/2$ coupons (apart from additive $o(1)$ terms). This result implies that the classic simple heuristic called \emph{randomized local search} needs an expected number of $nH_{n/2} - 1/2 \pm o(1)$ iterations to find the optimum of any monotonic function defined on bit-strings of length $n$.
|
eng_Latn
| 34,345 |
LibGuides: Boolean, Truncation, and Wildcards: Home
|
This guide walks you through how to make your search more effective by using boolean operators, truncation and wildcards.
|
Motivated by a problem in the theory of randomized search heuristics, we give a very precise analysis for the coupon collector problem where the collector starts with a random set of coupons (chosen uniformly from all sets). We show that the expected number of rounds until we have a coupon of each type is $nH_{n/2} - 1/2 \pm o(1)$, where $H_{n/2}$ denotes the $(n/2)$th harmonic number when $n$ is even, and $H_{n/2}:= (1/2) H_{\lfloor n/2 \rfloor} + (1/2) H_{\lceil n/2 \rceil}$ when $n$ is odd. Consequently, the coupon collector with random initial stake is by half a round faster than the one starting with exactly $n/2$ coupons (apart from additive $o(1)$ terms). This result implies that the classic simple heuristic called \emph{randomized local search} needs an expected number of $nH_{n/2} - 1/2 \pm o(1)$ iterations to find the optimum of any monotonic function defined on bit-strings of length $n$.
|
eng_Latn
| 34,346 |
A and B throws a Fair dice one after another. Whoever throws 6 first wins. What is the probability that A wins?
|
A and B throws a Fair dice one after another. Whoever throws 6 first wins. A Start's first. What is the probability that B wins?
|
How many medals is India expected to win at the 2016 Rio Olympics?
|
eng_Latn
| 34,347 |
In a party of 5 persons, compute the probability that at least 2 have the same birthday (month/day); assume a 365-day year.
|
$n$ people attend the same meeting, what is the chance that two people share the same birthday? Given the first $b$ birthdays, the probability the next person doesn't share a birthday with any that went before is $(365-b)/365$. The probability that none share the same birthday is the following: $\prod_{b=0}^{n-1}\frac{365-b}{365}$. How many people would have to attend a meeting so that there is at least a $50$% chance that two people share a birthday? So I set $\prod_{b=0}^{n-1}\frac{365-b}{365}=.5$ and from there I manipulated some algebra to get $\frac{364!}{(364-n)!365^{n}}=.5\iff (364-n)!365^{n}=364!/.5=.....$ There has to be an easier way of simplifying this.
|
I was wondering if someone could critique my argument here. The problem is to find the probability where exactly 2 people in a room full of 23 people share the same birthday. My argument is that there are 23 choose 2 ways times $\displaystyle \frac{1}{365^{2}}$ for 2 people to share the same birthday. But, we also have to consider the case involving 21 people who don't share the same birthday. This is just 365 permute 21 times $\displaystyle \frac{1}{365^{21}}$. To summarize: $$\binom{23}{2} \frac{1}{365^2} \frac{1}{365^{21}} P(365, 21)$$
|
eng_Latn
| 34,348 |
N balls are randomly dropped into k boxes (k ≤ N). What is the probability that no box is empty?
|
If 5 balls are randomly distributed over 3 boxes, what is the probability that none of the boxes is empty?
|
A bat and ball cost $1.10. The bat costs one dollar more than the ball. How much does the ball cost?
|
eng_Latn
| 34,349 |
This method of solving problems with a random number generator is named after the casino city of Monaco
|
Monte Carlo Simulation - Introduction to Programming in Java Jan 31, 2009 ... Named after famous casino in Monaco. ... Generating random numbers. ... The pseudo-random number generator will use 1234567 as the seed. ... Simulate this process in 2D using Monte Carlo methods: Create a 2D grid and .... (In 1925, Ising solved the problem in one dimension - no phase transition.
|
Jeopary Questions page 1274 - ODDS - TriviaBistro.com ODDS: If a coin shows heads on 99 straight tosses, these are the odds on heads for the 100th toss LET'S TALK .... MISS MANNERS' MANNERS: "Miss Manners always believes in sending" these "notes. It encourages people to give more".
|
eng_Latn
| 34,350 |
EDITED John is playing a game on $n$ days, each day being independent. On each day $i$, his probability of success is $p_i$. We have $\frac{1}{n} \sum_{i=1}^n p_i = p$, and typically, the standard deviation $\sigma$ of these $p_i$ is small. $\sigma$ is known. So I have a succession of $Bernoulli(p_i)$ trials. I want to model the probability of having $k$ successes after $n$ trials, using $p$ and $\sigma$ for instance. I can approximate this with a Binomial distribution, but even though it's close it still leads to biases. Any thoughts on how to improve this?
|
If 20 independent Bernoulli trials are carried out each with a different probability of success and therefore failure. What is the probability that exactly n of the 20 trials was successful? Is there a better way of calculating these probabilities rather than simply summing together the combinations of success and failure probabilities?
|
If $2^p-1$ is a prime, (thus $p$ is a prime, too) then $p\mid 2^p-2=\phi(2^p-1).$ But I find $n\mid \phi(2^n-1)$ is always hold, no matter what $n$ is. Such as $4\mid \phi(2^4-1)=8.$ If we denote $a_n=\dfrac{\phi(2^n-1)}{n}$, then $a_n$ is , but how to prove it is always integer? Thanks in advance!
|
eng_Latn
| 34,351 |
How to accurately, unambiguously and concisely say something in the following cases: Case 1. The predictor is significant, with 1.5615 times as much likely to get higher scores when it is true. Case 2. Subjects with A are 1.5615 times as much likely to get higher scores as those without A. Maybe can I use "chance" instead of "likely" to form the sentence better?
|
Suppose John has 5 sweets. Is there any difference between the following two sentences? Jack has 3 times as many sweets as John. Jack has 3 times more sweets than John. I prefer the first construction and would know unambiguously that Jack has 15 sweets in this case. However in the second construction I would be inclined to think that Jack has 20 sweets, since it seems to suggest 15 sweets in addition to the original 5.
|
The entire site is blank right now. The header and footer are shown, but no questions.
|
eng_Latn
| 34,352 |
You've got a discrete uniform distribution - what is the expected number of trials until each point is hit at least once? I started my thinking with maybe a Geometric distribution representing each individual point - if your probability of success is 1/100 in the uniform distribution, then would the number of trials until first success be 100? That is for one point so to have 100 points = 10000 trials until you hit 100 points?? That doesn't sound right, because if you fail to hit one specific point, you've succeeded in hitting a different point, so it's not exactly a matter of first success at one point. What am I missing? Follow up question: how about after 100 trials, what percentage of points are expected to have been hit?
|
What is the average number of times it would it take to roll a fair 6-sided die and get all numbers on the die? The order in which the numbers appear does not matter. I had this questions explained to me by a professor (not math professor), but it was not clear in the explanation. We were given the answer $(1-(\frac56)^n)^6 = .5$ or $n = 12.152$ Can someone please explain this to me, possibly with a link to a general topic?
|
The entire site is blank right now. The header and footer are shown, but no questions.
|
eng_Latn
| 34,353 |
I ran across an apparent paradox which I then located in the paper as such: Imagine that you are shown two identical boxes. You know that one of them contains \$b and the other \$2b. Picking one at random and opening it, you must decide whether to keep it (and its contents), or exchange it for the other box. In short, when you find $x dollars in a box, the expected value of the other box is .5*.5x + .5*2x = 1.25x, meaning that it's always better to switch. This appears to violate the symmetry of the problem and the fact that you still know nothing meaningful about either box. The paper goes another direction with it, talking about how having prior knowledge of expected values gives a more meaningful analysis (along with other discussions). However, what if there's no prior knowledge, and we have the original problem as stated. Can anyone give me some intuition to make sense of this? EDIT: Someone found which asks a slightly different formulation, but an identical problem. The accepted answer there just points to , and I'm having difficulty understanding the paper. It explains away the paradox by noting the expectation is based on an infinite sum, and the value depends on the order the sum is evaluated. I'm not familiar with how the order of a sum can change a value, and I also don't see how talking about a different way to evaluate the expectation explains the strange result outlined above. My math understanding is primarily based on reading textbooks as a hobby, and I haven't yet worked up to fully understanding math academic papers, so a simpler explanation would be helpful.
|
Suppose there are two face down cards each with a positive real number and with one twice the other. Each card has value equal to its number. You are given one of the cards (with value $x$) and after you have seen it, the dealer offers you an opportunity to swap without anyone having looked at the other card. If you choose to swap, your expected value should be the same, as you still have a $50\%$ chance of getting the higher card and $50\%$ of getting the lower card. However, the other card has a $50\%$ chance of being $0.5x$ and a $50\%$ chance of being $2x$. If we keep the card, our expected value is $x$, while if we swap it, then our expected value is: $$0.5(0.5x)+0.5(2x)=1.25x$$ so it seems like it is better to swap. Can anyone explain this apparent contradiction?
|
The new Top-Bar does not show reputation changes from Area 51.
|
eng_Latn
| 34,354 |
Each Chocolate Frog comes with one collectable illustrated wizard card (very cool and not dorky at all, honest). There are equal odds of each card being in a pack (i.e., they have all been produced and distributed evenly). How many packs must we buy in order to have an 80% chance of having obtained all 12 cards? How about 90%? Thanks.
|
I'm trying to solve the well known Coupon Collector's Problem by explicitly finding the probability distribution (so far all the methods I read involve using some sort of trick). However, I'm not having much luck getting anywhere as combinatorics is not something I'm particularly good at. The Coupon Collector's Problem is stated as: There are $m$ different kinds of coupons to be collected from boxes. Assuming each type of coupon is equally likely to be found per box, what's the expected amount of boxes one has to buy to collect all types of coupons? What I'm attempting: Let $N$ be the random variable associated with the number of boxes one has to buy to find all coupons. Then $P(N=n)=\frac{|A_n|}{|\Omega _n|}$, where $A_n$ is the set of all outcomes such that all types of coupons are observed in $n$ buys, and $\Omega _n$ is the set of all the possible outcomes in $n$ buys. I think $|\Omega _n| = m^n$, but I'm not even sure about that anymore, as all my attempts so far led to garbage probabilities that either diverged or didn't sum up to 1.
|
The entire site is blank right now. The header and footer are shown, but no questions.
|
eng_Latn
| 34,355 |
There are $n$ seats in a classroom and $n$ students who have been assigned seats on small slips of paper. Unfortunately, the first student to enter the classroom has lost his slip, so he just chooses a seat at random and sits in it. Each of the remaining students enters one at a time and either sits in their assigned seat if it is empty or if someone is sitting in their seat, chooses a seat at random from those that are empty. For $n\geq 2,$ show that the probability that the last student to enter will sit in their assigned seat is $\frac{1}2.$ I tried a few small examples and drew some tree diagrams to list out the possibilities, and got $\frac{1}2$ each time. I did make a few obvious observations though. Firstly, if the first student sits in the right spot, everyone else will too, by the premise. There is a $\frac{1}n$ chance for this. Secondly, if the first student chooses the last person's seat, the last person won't be able to sit in their assigned seat. And thirdly, the only way the last person can sit in their assigned seat is if everyone before him/her did not choose his/her seat. We can label the students $S_1,\cdots, S_n.$ I'm not quite sure how to determine the sample space for this problem (i.e. the total set of points or possibilities to consider).
|
This is a neat little problem that I was discussing today with my lab group out at lunch. Not particularly difficult but interesting implications nonetheless Imagine there are a 100 people in line to board a plane that seats 100. The first person in line realizes he lost his boarding pass so when he boards he decides to take a random seat instead. Every person that boards the plane after him will either take their "proper" seat, or if that seat is taken, a random seat instead. Question: What is the probability that the last person that boards will end up in his/her proper seat. Moreover, and this is the part I'm still pondering about. Can you think of a physical system that would follow this combinatorial statistics? Maybe a spin wave function in a crystal etc...
|
The entire site is blank right now. The header and footer are shown, but no questions.
|
eng_Latn
| 34,356 |
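Both rows state the same classic problem, whose answer is $\frac12$ for every $n \ge 2$. A Monte Carlo sketch (it assumes, per the classroom wording, that the first student may also happen to pick their own seat at random):

```python
import random

def last_gets_own_seat(n: int, rng: random.Random) -> bool:
    # Seats and students are numbered 0..n-1; student i is assigned seat i.
    free = set(range(n))
    for student in range(n - 1):               # everyone except the last student
        if student == 0 or student not in free:
            choice = rng.choice(sorted(free))  # lost slip, or seat already taken
        else:
            choice = student
        free.remove(choice)
    return (n - 1) in free  # the last student takes the one remaining seat

rng = random.Random(0)
trials = 20000
p = sum(last_gets_own_seat(10, rng) for _ in range(trials)) / trials
print(p)  # close to 1/2
```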
A gambler rolls two unbiased dice and stands to lose $\$2$ if he fails to throw a six, to win $\$4$ if he throws one six, and to win $\$10$ if he throws two sixes. Is the game fair? I can find the probability for all three cases, but how do I decide whether the game is fair or not? Could someone help me with this?
|
Suppose $X_n$ is the fortune of a gambler after the $n$-th game. Then the game is called fair (Breiman 1968) if $$E[X_{n+1} \mid X_1, \dots, X_n] = X_n \quad \forall n.$$ My question is why a fair game is not instead defined by $$E[X_{n+1}] = E[X_n] \quad \forall n,$$ i.e. $$E[X_{n+1} - X_n] = 0.$$ This seems like the proper definition, since a fair game is one where the average gain is zero; no conditioning should be needed.
|
There should be infinitely many primes of the form $5+6n$. How do you prove it? The same should be true for $7+6n$.
|
eng_Latn
| 34,357 |
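For the gamble in the first question, "fair" in the elementary sense means the expected winnings are zero. With $P(\text{no six}) = 25/36$, $P(\text{exactly one six}) = 10/36$ and $P(\text{two sixes}) = 1/36$, the expectation works out to exactly zero:

```python
from fractions import Fraction

# outcome probabilities for two fair dice, paired with the stated payoffs
payoffs = {Fraction(25, 36): -2, Fraction(10, 36): 4, Fraction(1, 36): 10}
expected = sum(p * v for p, v in payoffs.items())
print(expected)  # 0
```

So by the expected-value criterion, this particular game is fair.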
This is a question from Sheldon M Ross' Intro to Probability Models. Suppose that each coupon obtained is, independent of what has been previously obtained, equally likely to be any of $m$ different types. Find the expected number of coupons one needs to obtain in order to have at least one of each type. Hint: Let $X$ be the number needed. It is useful to represent $X$ by $X = \sum_{i=1}{X_i}$ where each $X_i$ is a geometric random variable. I did it using this hint and I now get $m^2$ as $E[X]$. I'm not sure if this is right and more importantly, I'm not able to understand why/how it can be modeled as the sum of many geometric distributions. My initial approach was to define the indicator random variable $$X_i = \begin{cases} 1 & \textrm{if ith coupon is new} \\ 0 & \textrm{otherwise} \end{cases} $$ and then take $X = \sum_{i=1}{X_i}$ but I'm not sure how to proceed with that.
|
What is the average number of times it would it take to roll a fair 6-sided die and get all numbers on the die? The order in which the numbers appear does not matter. I had this questions explained to me by a professor (not math professor), but it was not clear in the explanation. We were given the answer $(1-(\frac56)^n)^6 = .5$ or $n = 12.152$ Can someone please explain this to me, possibly with a link to a general topic?
|
Prove that there are no integers $a,b \gt 2$ such that $a^2{\mid}(b^3 + 1)$ and $b^2{\mid}(a^3 + 1)$.
|
eng_Latn
| 34,358 |
Suppose the mean noon-time temperature for September days in San Diego is $24^{\circ}$ and the standard deviation is $3.2$ (temperature in this problem is measured in degrees Celsius). A. Using Chebyshev's theorem, what is the minimal probability (in percent) that the noon-time temperature of a September day is between $17.6^{\circ}$ and $30.4^{\circ}$? B. On September 26, 1963, the all-time record noon-time temperature in San Diego of $44^{\circ}$ was hit. Assuming the temperature distribution is symmetric around the mean, what is the Chebyshev bound for the probability of breaking (or tying) this record?
|
Suppose the mean noon-time temperature for September days in San Diego is 24∘ and the standard deviation is 4.9. (Temperature in this problem is measured in degrees celsius) On September 26, 1963, the all-time record of noon-time temperature in San Diego of 44∘ was hit. Assume the temperature distribution is symmetric around the mean, what is the Chebyshev bound for the probability of breaking (or tieing) this record? I am having a hard time understanding this. Could someone explain to me how to do this?
|
Suppose the mean noon-time temperature for September days in San Diego is 24∘ and the standard deviation is 4.9. (Temperature in this problem is measured in degrees celsius) On September 26, 1963, the all-time record of noon-time temperature in San Diego of 44∘ was hit. Assume the temperature distribution is symmetric around the mean, what is the Chebyshev bound for the probability of breaking (or tieing) this record? I am having a hard time understanding this. Could someone explain to me how to do this?
|
eng_Latn
| 34,359 |
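For the $\sigma = 3.2$ version of the problem, both parts are one-line Chebyshev computations; the record part halves the two-sided bound using the stated symmetry assumption:

```python
mu, sigma = 24.0, 3.2
# (a) 17.6 to 30.4 degrees is mu +/- 2 sigma, so at least 1 - 1/2^2 = 75%
k = (30.4 - mu) / sigma
p_within = 1 - 1 / k**2
# (b) tying the 44-degree record needs a deviation of 20 degrees
bound = (sigma / (44 - mu)) ** 2        # two-sided: P(|X - mu| >= 20)
bound_symmetric = bound / 2             # one tail only, by symmetry
print(p_within, bound, bound_symmetric)  # 0.75 0.0256 0.0128
```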
100 participants have a fair coin each, on a given round, the not already discarded participants flip their coins, those who flip a tail are discarded from the game, the remaining ones continue to play until nobody is left (everyone has been discarded). What would be the average number of trials (where each trial consists of a tossing and removing the tails) one would expect from doing this experiment? Does conditional expectation work for something like this? I know that each individual coin follows a Geometric distribution, but I am trying to figure out the sum of them to determine the average number of trials for a game like this. My Logic/Thought Process: I started out trying to think of the probability that a particular coin makes it to round $r$ which is $\frac{1}{2^m}$. I then realized that each coin outcome can be modeled by a Geometric random variables with $p = 0.5$. I am just now unsure how to take the leap from this single case to a case with 100 coins. I presume it has to do with summing the geometric random variables, but I am not sure.
|
Given $n$ independent geometric random variables $X_n$, each with probability parameter $p$ (and thus expectation $E\left(X_n\right) = \frac{1}{p}$), what is $$E_n = E\left(\max_{i \in 1 .. n}X_n\right)$$ If we instead look at a continuous-time analogue, e.g. exponential random variables $Y_n$ with rate parameter $\lambda$, this is simple: $$E\left(\max_{i \in 1 .. n}Y_n\right) = \sum_{i=1}^n\frac{1}{i\lambda}$$ (I think this is right... that's the time for the first plus the time for the second plus ... plus the time for the last.) However, I can't find something similarly nice for the discrete-time case. What I have done is to construct a Markov chain modelling the number of the $X_n$ that haven't yet "hit". (i.e. at each time interval, perform a binomial trial on the number of $X_n$ remaining to see which "hit", and then move to the number that didn't "hit".) This gives $$E_n = 1 + \sum_{i=0}^n \left(\begin{matrix}n\\i\end{matrix}\right)p^{n-i}(1-p)^iE_i$$ which gives the correct answer, but is a nightmare of recursion to calculate. I'm hoping for something in a shorter form.
|
The new Top-Bar does not show reputation changes from Area 51.
|
eng_Latn
| 34,360 |
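For the expected maximum discussed above, the survival-sum identity $E[\max] = \sum_{k \ge 1} P(\max \ge k)$ sidesteps the recursive Markov-chain computation entirely; a truncated-sum sketch (the truncation length is an arbitrary choice):

```python
def expected_max_geometric(n: int, p: float, terms: int = 500) -> float:
    # E[max] = sum_{k>=1} P(max >= k), and P(max >= k) = 1 - (1 - q^(k-1))^n
    q = 1.0 - p
    return sum(1.0 - (1.0 - q ** (k - 1)) ** n for k in range(1, terms + 1))

print(expected_max_geometric(1, 0.5))  # 2.0, the plain Geometric(1/2) mean
print(expected_max_geometric(2, 0.5))  # 8/3: E[X1] + E[X2] - E[min] = 2 + 2 - 4/3
```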
In a gambling game you can win a very huge amount of money if you can get 10 consecutive heads tossing a coin. Each toss costs you 1$. How much money will you bet, on average, in order to get 10 consecutive heads?
|
A fair coin is tossed repeatedly until 5 consecutive heads occurs. What is the expected number of coin tosses?
|
I have to determine how many six-integer subsets of $\{1, 2, 3, \ldots, 20\}$ are possible if no two consecutive integers are in a set.
|
eng_Latn
| 34,361 |
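The expected toss count until $k$ consecutive heads satisfies $E_k = E_{k-1} + 1 + \frac12 E_k$, i.e. $E_k = 2E_{k-1} + 2$ with $E_0 = 0$, which gives $2^{k+1} - 2$. Since each toss costs $1, the average amount bet for 10 consecutive heads is $2046:

```python
def expected_tosses(k: int) -> int:
    # E_k = E_{k-1} + 1 + (1/2) * E_k  =>  E_k = 2 * E_{k-1} + 2, with E_0 = 0
    e = 0
    for _ in range(k):
        e = 2 * e + 2
    return e  # closed form: 2**(k + 1) - 2

print(expected_tosses(5), expected_tosses(10))  # 62 2046
```

The value 62 for five heads matches the linked question about 5 consecutive heads.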
I recently came across this interesting fact: Take some (pseudo)random numbers between $0$ and $1$. Now sum these numbers and count how many are needed in order for the sum to be greater than $1$. If you repeat the experiment $n$ times, you will see that, on average, you need $\mathrm{e}$ (Euler's number) numbers! My mind is blown! But why? How does this work?
|
I define $X_i$ as a random variable that is uniformly distributed between (0,1). What is the expected number of such variables I require to make the sum go just higher than 1. Thanks
|
Please use UK pre-uni methods only (at least at first). Thank you.
|
eng_Latn
| 34,362 |
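The surprising fact above follows from $P(N > n) = P(U_1 + \cdots + U_n < 1) = 1/n!$, so $E[N] = \sum_{n \ge 0} 1/n! = e$; a numeric check of the series:

```python
import math

# P(N > n) = P(U_1 + ... + U_n < 1) = 1/n!, so E[N] = sum_{n>=0} 1/n! = e
expectation = sum(1 / math.factorial(n) for n in range(25))
print(expectation, math.e)  # both ~2.718281828...
```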
$s$ balls are sampled and then replaced from/into a box with $N$ balls. On average, how many samplings are to be made until all of the balls have been taken at least once? ps: this is not a homework problem. I’m trying to reduce the number of updates on parameters during a neural network’s optimization phase. I’m able to simulate this experiment and get a good numerical approximation, but doing so for every different value of $s$ and $N$ seems a little unclever.
|
I am trying to solve a variation on the . In this scenario, someone is selecting coupons at random with replacement from n different possible coupons. However, the person is not selecting coupons one at a time, but instead, in batches. Here's an example problem formulation: There are 100 distinct coupons. A person makes selections in 10-coupon batches at random (each coupon with replacement). What is the expected number of batches necessary to have selected 80 unique coupons? I have been able to determine the expected number of selections necessary to have selected k unique coupons when selecting one at a time (much like Henry's ), but I'm a bit stumped as to how to go about solving it with this particular wrinkle. Any tips/guidance would be greatly appreciated.
|
When clicking help, you get this: First, it shouldn't have a button to return to the main site, as this site isn't attached to any other site. Second, What's Meta's wording should be changed, because again, there is no main site.
|
eng_Latn
| 34,363 |
I have yet to play RfS, mostly because I'm concerned about difficulty: how do I know if that door's lockpicking DC of 5 is too much or too little? Should I use passive or active opposition? I'm very confused as to how I should set DCs in a microsystem like this. Edit: I don't think this is a duplicate because I'm also asking about passive opposition, not just "how many dice do I roll" — rather, "is a pre-established DC of X too much?"
|
I'd like to run a game of , but I'm not sure how many dice I roll when the characters face various challenges. I'm under the impression that it's supposed to be the same number of dice the character uses, but that seems to make impossible tasks far too easy to perform. Is there a general rule saying how many dice the GM rolls against a character's attempt at a given task? Should the GM's dice equal the character's dice, or should the GM's dice vary depending on the task's difficulty?
|
The entire site is blank right now. The header and footer are shown, but no questions.
|
eng_Latn
| 34,364 |
Problem statement: Given two identical unbiased dice, determine the probability of getting a sum of 7. Event $E$ = the sum of dots on the top faces of both dice is 7: $E = \{(1,6),\ (2,5),\ (3,4),\ (4,3),\ (5,2),\ (6,1)\}$, and $|S| = 36$. Hence, $P(E) = 1/6$. I have a doubt here. As the two dice are identical, why do we have to consider ordered pairs? Shouldn't the outcomes be unordered, with $E$ consisting of only 3 possible pairs $\{(1,6),\ (2,5),\ (3,4)\}$? Then $|S| = 21$ and $P(E) = 3/21$.
|
In either case, one coin flip resulted in a head and the other resulted in a tail. Why is {H,T} a different outcome than {T,H}? Is this simply how we've defined an "outcome" in probability? My main problem with {H,T} being a different outcome than {T,H} is that we apply binomial coefficients (i.e. we count subsets of sets) in some common probability problems. But if we take {H,T} and {T,H} to be different outcomes, then our "sets" are ordered, but sets are by definition unordered... I feel as though the fact that I'm confused about something so basic means that I am missing something fundamental. Any help or insight whatsoever is greatly appreciated!
|
The entire site is blank right now. The header and footer are shown, but no questions.
|
eng_Latn
| 34,365 |
To go with the Lego Movie, Lego sell minifigures of the characters from the movie. They are sold in packets, where each packet contains one minifigure, and from the outside of the packet it is impossible to tell which minifigure is inside. There are $n$ minifigures to collect. Assuming that each packet that my son buys is equally likely to contain any one of the minifigures, show that the expected number of packets that he needs to buy to collect the whole set is approximately $n \log n$. [Hint: express the random variable that is the total number of packets as a sum of simpler random variables, and use linearity of expectation.]
|
What is the average number of times it would it take to roll a fair 6-sided die and get all numbers on the die? The order in which the numbers appear does not matter. I had this questions explained to me by a professor (not math professor), but it was not clear in the explanation. We were given the answer $(1-(\frac56)^n)^6 = .5$ or $n = 12.152$ Can someone please explain this to me, possibly with a link to a general topic?
|
The entire site is blank right now. The header and footer are shown, but no questions.
|
eng_Latn
| 34,366 |
Suppose I have a circular markov chain. At each state, you are equiprobable to transition to the state immediately to your left or right. I'm interested in the probability of visiting the jth state last? That is to say, what is the probability of reaching all the other states at least once before the jth state?
|
For a graph $G$, let $W$ be the (random) vertex occupied at the first time the random walk has visited every vertex. That is, $W$ is the last new vertex to be visited by the random walk. Prove the following remarkable fact: For the random walk on an $n$-cycle, $W$ is uniformly distributed over all vertices different from the starting vertex.
|
The new Top-Bar does not show reputation changes from Area 51.
|
eng_Latn
| 34,367 |
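The claimed uniformity of the last-visited vertex on a cycle can be spot-checked by simulation (a sketch, not a proof; note the starting vertex can never be last):

```python
import random

def last_new_vertex(n: int, rng: random.Random) -> int:
    # symmetric walk on an n-cycle from vertex 0; return the last new vertex
    pos, seen, last = 0, {0}, 0
    while len(seen) < n:
        pos = (pos + rng.choice((-1, 1))) % n
        if pos not in seen:
            seen.add(pos)
            last = pos
    return last

rng = random.Random(7)
n, trials = 5, 40000
counts = [0] * n
for _ in range(trials):
    counts[last_new_vertex(n, rng)] += 1
freqs = [c / trials for c in counts]
print(freqs)  # ~[0, 0.25, 0.25, 0.25, 0.25]
```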
I am having an issue with understanding how to calculate a specific case of the Coupon Collector's Problem. Say I have a set of 198 coupons. I learned how to find the estimated amount of draws to see all 198 coupons, using the following equation: $$n \sum_{k=1}^n\frac1k$$ It turns out that for $n = 198$, the expected number of draws is approximately 1162. Let's assume, however, that I already have some of the coupons, say 50. How should I go about solving the same problem, given that I've already collected $X$ of them?
|
What is the average number of times it would it take to roll a fair 6-sided die and get all numbers on the die? The order in which the numbers appear does not matter. I had this questions explained to me by a professor (not math professor), but it was not clear in the explanation. We were given the answer $(1-(\frac56)^n)^6 = .5$ or $n = 12.152$ Can someone please explain this to me, possibly with a link to a general topic?
|
So what I decided to do is to start a small case with only 3 people. There are three possible combinations that I could pair up the people, using the symbols A, B, and C to represent the persons involved: A&B, A&C, B&C. The probability of each pair having the same birthday is $1\over365$ (the first guy gets any birthday, and the second guy only gets one birthday to choose from). What my intuition directed me to do next is to find the probability that any of the three events are true, or the union of the three events, which involved adding up the probabilities. To put this generally, the equation for the probability SHOULD be: ${n \choose 2} \div 365$ However, clearly this is not the case since $n \choose 2$ gets over 365 when $n = 28$. And with 28 people, there is obviously still a chance that they all have different birthdays (Person 1 has Jan. 1, Person 2 has Jan. 2. ... Person 28 has Jan. 28). Could anyone tell me what's wrong with my intuition? I don't want to know what's the right solution, I just want to know what's wrong with my solution .. it makes sense to my intuition, even though it's incorrect.
|
eng_Latn
| 34,368 |
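For the head-start variant in the query, only the uncollected coupons contribute: the expected number of further draws is $\sum_{i=k}^{n-1} n/(n-i) = n H_{n-k}$. A sketch (function name illustrative):

```python
def expected_remaining(n: int, have: int) -> float:
    # the next new coupon after i distinct takes Geometric((n - i)/n) draws
    return sum(n / (n - i) for i in range(have, n))

print(expected_remaining(198, 0))   # ~1162, the full coupon-collector time
print(expected_remaining(198, 50))  # expected extra draws with 50 already in hand
```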
Two grocers agree that the daily demand for a particular item has Poisson distribution. However, grocer $A$ claims that the mean demand is $3$ items per day, while grocer $B$ claims that the mean demand is $6$ items per day. They agree to resolve the disagreement by observing the demand on one particular day: $B$ agrees to accept $A$’s claim if the observed demand is $4$ or less, and $A$ agrees to accept $B$’s claim if the observed demand is $5$ or more. (a) Calculate the probability that $A$’s claim is accepted when, in fact, $B$’s claim is correct
|
I want to find the probability that mean is less than 5 given that poisson distribution states it is 6 ie find p(x<5|x~po(6)) Here is the actual question: Two grocers agree that the daily demand for a particular item has Poisson distribution. However, grocer A claims that the mean demand is 3 items per day, while grocer B claims that the mean demand is 6 items per day. They agree to resolve the disagreement by observing the demand on one particular day: B agrees to accept A’s claim if the observed demand is 4 or less, and A agrees to accept B’s claim if the observed demand is 5 or more. (a) Calculate the probability that A’s claim is accepted when, in fact, B’s claim is correct
|
I want to find the probability that mean is less than 5 given that poisson distribution states it is 6 ie find p(x<5|x~po(6)) Here is the actual question: Two grocers agree that the daily demand for a particular item has Poisson distribution. However, grocer A claims that the mean demand is 3 items per day, while grocer B claims that the mean demand is 6 items per day. They agree to resolve the disagreement by observing the demand on one particular day: B agrees to accept A’s claim if the observed demand is 4 or less, and A agrees to accept B’s claim if the observed demand is 5 or more. (a) Calculate the probability that A’s claim is accepted when, in fact, B’s claim is correct
|
eng_Latn
| 34,369 |
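Part (a) asks for the Poisson CDF at 4 evaluated under B's mean of 6; a direct computation (helper name illustrative):

```python
import math

def poisson_cdf(k: int, lam: float) -> float:
    # P(X <= k) for X ~ Poisson(lam)
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

# A's claim is accepted when observed demand is 4 or less; under B's mean of 6:
print(poisson_cdf(4, 6.0))  # ~0.2851
```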
Since this is a uniform distribution, can we just use symmetry and say that our four numbers are expected to be evenly spaced, so they will sit on the number line at the following locations: $\frac{1}{5}$, $\frac{2}{5}$, $\frac{3}{5}$, $\frac{4}{5}$? Therefore, the smallest number is expected to be $\frac{1}{5}$.
|
$X_1, X_2, \ldots, X_n$ are $n$ i.i.d. uniform random variables. Let $Y = \min(X_1, X_2,\ldots, X_n)$. Then, what's the expectation of $Y$(i.e., $E(Y)$)? I have conducted some simulations by Matlab, and the results show that $E(Y)$ may equal to $\frac{1}{n+1}$. Can anyone give a rigorous proof or some hints? Thanks!
|
My supervisor asked me to find out which distribution represents a particular situation. I have a VoIP generator that generates calls "uniformly" distributed between callers. This means that the volume per caller distribution is "almost" uniformly distributed between a minimum and maximum. So by running a test with 10000 users and a min value equal to 30 calls per week and a max value equal to 90 calls/week i obtain that not all the users respect this limits: we have some users that generate <30 calls and some other users that generate >90 calls. It is clear that the obtained distribution is not uniform. The situation is this: He said that i have to perform a sort of numerical process in order to find some formulas that could define this distribution. Initially, as wrote before,we wanted to obtain a uniform (min,max) distribution (the green area in the figure) but this is not the case as proved with chi-square test. Moreover the curve in figure is not symmetric, the probability of call generation below 30 call/week and greater that 90 call/week are not identical (it is high for 90calls/week). The variability of the number of generated call increases with the increasing call generation rates. "Actually implementation of this distribution is nothing but assigning different call rates in a range for users in domain which indicates implementation of several delta functions. As the call rates increases the variability of the generated calls also increases with the average call rate and this leads to the asymmetric behavior of the curve." [cit. from the Voipgenerator documentation] Someone can help me?I think that now i cannot use Q-Q plot because i don't know which theoretical distribution i have to use in order to compare it with my empirical data. Sorry if I have stressed with a similar problem a few weeks ago, but initially we thought we could change the implementation, but now we cannot. 
Hence I have to discover the type of the distribution I obtained, and I don't know how I can do this.
|
eng_Latn
| 34,370 |
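The order-statistics intuition in the first question is sound: for $n$ i.i.d. Uniform(0,1) draws, $E[X_{(k)}] = k/(n+1)$, so the minimum of four has mean $1/5$. A quick simulation check:

```python
import random

rng = random.Random(42)
trials = 100000
avg = sum(min(rng.random() for _ in range(4)) for _ in range(trials)) / trials
print(avg)  # close to E[min] = 1/(4 + 1) = 0.2
```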
"What is the probability of getting a sum of exactly 420 when throwing 100 dice?" is the actual question, though perhaps I can shorten it for the title. Just give me directions by which I can solve the problem. How can we calculate such a probability for a large number of dice? Do I need to go with theoretical probability to solve it?
|
The probability of two dice summing to k is simple enough, make a diagram of the throws, $$ \begin{array}{c} &|&1&2&3&4&5&6\\\hline 1&|&2&3&4&5&6&7\\ 2&|&3&4&5&6&7&8\\ 3&|&4&5&6&7&8&9\\ 4&|&5&6&7&8&9&\color{green}{10}\\ 5&|&6&7&8&9&\color{green}{10}&11\\ 6&|&7&8&9&\color{green}{10}&11&12\\ \end{array} $$ and the probability of a sum is just the length of the corresponding diagonal divided by $36$. For instance, $\mathrm{P}(\mathrm{sum} = 10) = \frac{3}{36} = \frac{1}{12}$. However, it's not as easy to draw a diagram for three dice or more, so how would a general formula look?
|
The entire site is blank right now. The header and footer are shown, but no questions.
|
eng_Latn
| 34,371 |
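For many dice, the two-dice diagram generalizes to repeated convolution of the single-die distribution with itself. A dynamic-programming sketch in exact integer arithmetic (function name illustrative):

```python
def dice_sum_prob(n: int, target: int) -> float:
    counts = {0: 1}                      # counts[s]: ways to reach sum s so far
    for _ in range(n):
        nxt = {}
        for s, c in counts.items():
            for face in range(1, 7):
                nxt[s + face] = nxt.get(s + face, 0) + c
        counts = nxt
    return counts.get(target, 0) / 6**n

print(dice_sum_prob(2, 7))      # 1/6, matching the 36-outcome table
print(dice_sum_prob(100, 420))  # the exact 100-dice probability
```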
how would I be able to answer this question? The first box contains 3 white and 7 black balls, and the second box contains 6 white and 3 black balls, A ball is chosen at random from the first box, and, without looking at its colour, put into the second box. Then a ball is chosen at random from the second box, and it is white. Is it more likely that the ball moved from the first box to the second was black?
|
Bag A has $3$ white and $2$ black marbles. Bag B has $4$ white and $3$ black marbles. Suppose we draw a marble at random from Bag A and put it in Bag B. After doing this, we draw a marble at random from Bag B, which turns out to be white. Given this information, what is the probability that the marble we moved from Bag A to Bag B is white? At first since the probability that white is $1/2$ is $2/5$, and the probability that white is $5/8$ is $3/5$, I thought a sufficient answer would be $$\begin{align}\tfrac 1 2 \cdot \tfrac 2 5 + \tfrac 5 8 \cdot \tfrac 3 5 & = \tfrac 1 5 + \tfrac 3 8 \\ & = \tfrac 8 {40} + \tfrac {15}{40} \\ & = \tfrac {23}{40}\end{align}$$, but that was wrong. Can someone provide me with an answer? Thanks
|
The entire site is blank right now. The header and footer are shown, but no questions.
|
eng_Latn
| 34,372 |
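The fix for the attempted solution above is Bayes' rule: the question asks for a posterior, whereas the attempt computed a total probability. In exact arithmetic:

```python
from fractions import Fraction

prior_white = Fraction(3, 5)   # the marble moved from Bag A is white
# P(draw white from B | color moved); B holds 8 marbles after the transfer
like = {True: Fraction(5, 8), False: Fraction(4, 8)}
posterior = (prior_white * like[True]) / (
    prior_white * like[True] + (1 - prior_white) * like[False]
)
print(posterior)  # 15/23
```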
I have a collection of values and associated probabilities for each value (all positive, but far from being equal). I would like to sample from this distribution many times. One technically correct way to do this is linear search, which would be like this: def sample(xs,ps,p): for i in xrange(len(ps)): if ps[i] <= p: return xs[i] in Python notation. Then for sampling you would take p=random.random(). This linear search approach is very slow if xs and ps are very large. Accordingly, I want to do something along the lines of binary search. My first idea was to build a binary tree, and sample by traversing the tree using a random bit sequence (go left if a bit is zero, right if it is one). I would build the tree by splitting the list of cumulative sums of probabilities in a way which resembles quicksort: the left part of the tree is the values <= 1/2, the right part of the tree is the values >1/2, and then I recurse, so that the LL part of the tree is the values <= 1/4, the LR part is the values >1/4 but <=1/2, etc. I actually did implement this on a toy example, where the cumulative sum of the probabilities was [0.26,0.91,0.99,1]. (So 26% of the time you get the first value, 65% of the time you get the second, 8% the third, 1% the last). I wind up with two problems. One problem I already fixed: there are certain nodes that only have one child based on the sorting mechanism above, such as the first movement to the right in my example. This was easy enough to fix: I just don't change the tree and appropriately update the splitting mechanism, applying it to what I already have. But by doing it that way, all values that are located at a given level of the tree become equally likely. So the first value corresponds to L (1/2), the second value corresponds to RL (1/4), the third value corresponds to RRL (1/8), and the last value corresponds to RRR (1/8). These are very different from the probabilities that I wanted! 
So my question is: how can I build an efficient data structure and traversal algorithm for a sampling procedure like the one above?
|
Recently I needed to do weighted random selection of elements from a list, both with and without replacement. While there are well known and good algorithms for unweighted selection, and some for weighted selection without replacement (such as modifications of the resevoir algorithm), I couldn't find any good algorithms for weighted selection with replacement. I also wanted to avoid the resevoir method, as I was selecting a significant fraction of the list, which is small enough to hold in memory. Does anyone have any suggestions on the best approach in this situation? I have my own solutions, but I'm hoping to find something more efficient, simpler, or both.
|
The new Top-Bar does not show reputation changes from Area 51.
|
eng_Latn
| 34,373 |
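For the sampling question above, a binary search over the cumulative sums already gives $O(\log n)$ draws with the correct probabilities, with no tree needed; a sketch using Python's bisect:

```python
import bisect
import random
from itertools import accumulate

def build_sampler(xs, ps):
    cum = list(accumulate(ps))  # cumulative sums, e.g. [0.26, 0.91, 0.99, 1.0]
    total = cum[-1]
    def sample(rng):
        # binary search: O(log n) per draw instead of a linear scan
        return xs[bisect.bisect_left(cum, rng.random() * total)]
    return sample

sample = build_sampler(["a", "b", "c", "d"], [0.26, 0.65, 0.08, 0.01])
rng = random.Random(1)
draws = [sample(rng) for _ in range(100000)]
print(draws.count("a") / 100000)  # ~0.26
```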
There is a long line of people waiting outside a theatre to buy tickets. The theatre owner comes out and announces that the first person to have a birthday same as someone standing anywhere before him in the line gets a free ticket. Where will you stand to maximize your chance?
|
At a movie theater, the whimsical manager announces that a free ticket will be given to the first person in line whose birthday is the same as someone in line who has already bought a ticket. You have the option of choosing any position in the line. Assuming that you don't know anyone else's birthday, and that birthdays are uniformly distributed throughout the year (365 days year), what position in line gives you the best chance of getting free ticket?
|
At a movie theater, the whimsical manager announces that a free ticket will be given to the first person in line whose birthday is the same as someone in line who has already bought a ticket. You have the option of choosing any position in the line. Assuming that you don't know anyone else's birthday, and that birthdays are uniformly distributed throughout the year (365 days year), what position in line gives you the best chance of getting free ticket?
|
eng_Latn
| 34,374 |
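The win probability for position $k$ factors as $P(\text{first } k-1 \text{ birthdays distinct}) \times (k-1)/365$; maximizing it numerically recovers the classic answer, position 20:

```python
def win_probability(k: int) -> float:
    # position k wins iff the k-1 people ahead all have distinct birthdays
    # and person k matches one of them
    p_distinct = 1.0
    for i in range(1, k - 1):
        p_distinct *= (365 - i) / 365
    return p_distinct * (k - 1) / 365

best = max(range(2, 100), key=win_probability)
print(best, win_probability(best))  # 20, ~0.0323
```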
A friend of mine (an I.T. guy) gave me this tricky probability problem, that I couldn't solve. $n$ persons go to a cinema that has $n$ seats, and everyone of them has an assigned position written on the ticket. The first person $p_1$ commits an error and chooses the wrong seat. Then $p_2,\ldots,p_n$ arrive (in order) and they choose their seat according to the following rule: "If the assigned seat is free, go there. Otherwise, choose a random seat among the free ones." What is the probability for the last person $p_n$ to sit in his assigned seat?
|
This is a neat little problem that I was discussing today with my lab group out at lunch. Not particularly difficult but interesting implications nonetheless Imagine there are a 100 people in line to board a plane that seats 100. The first person in line realizes he lost his boarding pass so when he boards he decides to take a random seat instead. Every person that boards the plane after him will either take their "proper" seat, or if that seat is taken, a random seat instead. Question: What is the probability that the last person that boards will end up in his/her proper seat. Moreover, and this is the part I'm still pondering about. Can you think of a physical system that would follow this combinatorial statistics? Maybe a spin wave function in a crystal etc...
|
Have a seat, guys I think "have a seat" is a fixed phrase that does not change whether you address someone or a group of people. Is that correct? If you have more to add to the post, please do so as I would love to know more.
|
eng_Latn
| 34,375 |
When I run apt-get update it can't find anything: 403 and 404 errors. Going to us.archive.ubuntu.com/ubuntu/dists/, there is no disco directory there.
|
Recently I have installed an older version of Ubuntu on my old machine. Whenever I try to install any software, I get an error saying it couldn't be found: $ sudo apt-get install vlc Reading package lists... Done Building dependency tree Reading state information... Done E: Couldn't find package vlc
|
Say you take a dice and you roll it twice so that you have a pair (X,Y) where X represents the first roll, Y represents the second. When you have a distribution like max(X,Y), what type of distribution is that? It's not uniform, obviously, because the outcomes of max(X,Y) are clearly more favorable towards the higher numbers. I'm being asked to "describe" the distribution. I know that we can look at each distinct outcome, like: max(1,1) = 1 max(1,2) = 2 max(1,3) = 3 max(1,4) = 4 max(1,5) = 5 max(1,6) = 6 max(2,1) = 2 max(2,2) = 2 max(2,3) = 3 max(2,4) = 4 max(2,5) = 5 max(2,6) = 6 max(3,1) = 3 max(3,2) = 3 max(3,3) = 3 max(3,4) = 4 max(3,5) = 5 max(3,6) = 6 max(4,1) = 4 max(4,2) = 4 max(4,3) = 4 max(4,4) = 4 max(4,5) = 5 max(4,6) = 6 max(5,1) = 5 max(5,2) = 5 max(5,3) = 5 max(5,4) = 5 max(5,5) = 5 max(5,6) = 6 max(6,1) = 6 max(6,2) = 6 max(6,3) = 6 max(6,4) = 6 max(6,5) = 6 max(6,6) = 6 But is that the most effective way to look at a distribution like this? Is there a faster and more concise way to describe this distribution other than saying the probability of each outcome? i.e., P(max = 1) = 1/36 P(max = 2) = 3/36 = 1/12 P(max = 3) = 5/36 P(max = 4) = 7/36 P(max = 5) = 9/36 = 1/4 P(max = 6) = 11/36 So, obviously, I understand how to LOOK at a discrete distribution and see what the probabilities are, but not until I look at each possible outcome. But what does it mean to "describe" a distribution? This isn't uniform or any sort of discrete distribution we're taught. Can someone provide clarity?
|
eng_Latn
| 34,376 |
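A concise description of the max-of-two-dice distribution goes through the CDF: $P(\max \le k) = (k/6)^2$, hence $P(\max = k) = (2k-1)/36$, matching the 36-outcome enumeration above:

```python
from fractions import Fraction

# P(max <= k) = (k/6)^2, so P(max = k) = (k^2 - (k-1)^2)/36 = (2k - 1)/36
pmf = {k: Fraction(2 * k - 1, 36) for k in range(1, 7)}
print(pmf)                # 1/36, 3/36, 5/36, 7/36, 9/36, 11/36
print(sum(pmf.values()))  # 1
```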
We roll a die until each side appears at least once, then we stop. What is the probability of rolling exactly $n$ dice? I guess the answer is $$6-6\left(\dfrac{5}{6}\right)^n\;,$$ but this may be wrong. In trying to solve this problem, I read , but I couldn't solve the problem.
|
What is the average number of times it would it take to roll a fair 6-sided die and get all numbers on the die? The order in which the numbers appear does not matter. I had this questions explained to me by a professor (not math professor), but it was not clear in the explanation. We were given the answer $(1-(\frac56)^n)^6 = .5$ or $n = 12.152$ Can someone please explain this to me, possibly with a link to a general topic?
|
We roll a die until each side appears at least once, then we stop. What is the probability of rolling exactly $n$ dice? I guess the answer is $$6-6\left(\dfrac{5}{6}\right)^n\;,$$ but this may be wrong. In trying to solve this problem, I read , but I couldn't solve the problem.
|
eng_Latn
| 34,377 |
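Inclusion-exclusion gives $P(\text{all six faces seen within } n \text{ rolls})$, and the stopping-time pmf asked for is the difference of consecutive values (uses math.comb, available in Python 3.8+):

```python
import math

def p_all_seen(n: int, m: int = 6) -> float:
    # inclusion-exclusion over which faces are still missing after n rolls
    return sum((-1)**j * math.comb(m, j) * ((m - j) / m)**n for j in range(m + 1))

def p_exactly(n: int, m: int = 6) -> float:
    # the collection finishes on roll n iff done by n but not by n - 1
    return p_all_seen(n, m) - p_all_seen(n - 1, m)

print(p_exactly(6))  # 6!/6^6 ~ 0.01543, the earliest possible finish
```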
Let's say I toss a coin $n$ times, where $n \geq 1$. Is it always the case that the probability of getting an even number of heads is $\frac{1}{2}$? If so, can someone explain mathematically why? Edit: as an additional question, if I make some subset of the $n$ coins unfair (i.e. the probability of getting heads may not be 0.5), how does this affect the probability of getting an even number of heads?
|
A coin is tossed $n$ times. What is the probability of getting an odd number of heads? I started this chapter some time ago and ran into a tough problem. At first I started considering cases. Case-I: the probability of getting 1 head. Case-II: the probability of getting 3 heads, and so on. But there are many cases. So how can I solve this? Please help me. Thank you!
|
The entire site is blank right now. The header and footer are shown, but no questions.
|
eng_Latn
| 34,378 |
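For the fair coin in this record there is a one-line argument: condition on the last flip — whatever the first n − 1 flips did, the final flip makes the head count even with probability 1/2. For a biased coin, P(even) = (1 + (1 − 2p)ⁿ)/2, which is no longer 1/2. An exact enumeration sketch confirming both:

```python
from itertools import product
from fractions import Fraction

def p_even_heads(n, p):
    """Exact P(even number of heads) in n independent tosses with P(heads)=p."""
    total = Fraction(0)
    for seq in product((0, 1), repeat=n):
        heads = sum(seq)
        if heads % 2 == 0:
            total += p**heads * (1 - p)**(n - heads)
    return total

fair = Fraction(1, 2)
for n in range(1, 8):
    assert p_even_heads(n, fair) == Fraction(1, 2)

biased = Fraction(1, 3)
for n in range(1, 8):
    # closed form follows from E[(-1)^heads] = (1 - 2p)^n
    assert p_even_heads(n, biased) == (1 + (1 - 2 * biased)**n) / 2

print(p_even_heads(5, fair), p_even_heads(5, biased))  # 1/2 122/243
```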
So here is the deal: There is a team of 16 people. There are 16 rooms and there is another room with a box with gold in it. The team knows that the box is closed at the beginning. After that, each team member is locked in a room and they do not have any contact with each other. Randomly one player is picked; he can go to the room with the box in it and he can decide whether he wants to open it, close it, or leave it as it was when he entered the room. After that this player has to go back to his room and the next player is picked randomly. The procedure repeats itself until somebody claims that all of the team members have been in the room with the box at least once. If it is true they win the gold from the box; if not, they don't. Which strategy should the team apply to win? Can anybody give me an idea how I should start to think? Is it possible to solve it as a model of a Turing machine?
|
100 prisoners are imprisoned in solitary cells. Each cell is windowless and soundproof. There's a central living room with one light bulb; the bulb is initially off. No prisoner can see the light bulb from his or her own cell. Each day, the warden picks a prisoner equally at random, and that prisoner visits the central living room; at the end of the day the prisoner is returned to his cell. While in the living room, the prisoner can toggle the bulb if he or she wishes. Also, the prisoner has the option of asserting the claim that all 100 prisoners have been to the living room. If this assertion is false (that is, some prisoners still haven't been to the living room), all 100 prisoners will be shot for their stupidity. However, if it is indeed true, all prisoners are set free and inducted into MENSA, since the world can always use more smart people. Thus, the assertion should only be made if the prisoner is 100% certain of its validity. Before this whole procedure begins, the prisoners are allowed to get together in the courtyard to discuss a plan. What is the optimal plan they can agree on, so that eventually, someone will make a correct assertion? I was wondering does anyone know the solution for the 4,000 days solution. Also, anyone got any ideas on how to solve this? Just wanted to know if anyone got any ideas how to solve this. is offering prize money for a proof of a optimal solution for it. I suppose to make it clear the people want a plan that would mean on average it would take least amount of time to free the prisoner. The point is the average run time is as low as possible.
|
It shows no question owner, not even a simple name (without link). What happened?
|
eng_Latn
| 34,379 |
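The 16-room box puzzle in this record is the light-bulb/100-prisoners problem in disguise (box open = light on), so the standard "designated counter" strategy applies: one player counts, every other player opens the box exactly once (the first time they find it closed), the counter closes it each time and announces after 15 closings. A simulation sketch — the variable names and the ~300-day figure are my own, not from the source:

```python
import random

def days_until_announcement(n, rng):
    """Designated-counter strategy: player 0 counts box openings; each other
    player opens the box once, the first time they find it closed."""
    box_open = False
    has_signaled = [False] * n
    counted = 0
    days = 0
    while True:
        days += 1
        p = rng.randrange(n)
        if p == 0:
            if box_open:
                counted += 1
                box_open = False
            if counted == n - 1:
                return days          # safe to announce: everyone has visited
        elif not has_signaled[p] and not box_open:
            box_open = True
            has_signaled[p] = True

rng = random.Random(0)
avg = sum(days_until_announcement(16, rng) for _ in range(300)) / 300
print(round(avg))  # roughly 300 days on average for 16 players
```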
I have a printer Hp Laserjet Mfp 436 nda, I am unable to install the printer drivers in hplip-gui ubuntu 16.04, 14.04 & 12.04 because the mfp 436 model is not available while installing the drivers. I need help how to install the drivers.
|
I have an HP MFP127fw. It's "supported" by Ubuntu 17.10 - except print jobs will not release from the print queue because a special copyrighted driver for HP is not installed. I went through this with 16.10 and had to install a small HP driver that I found with Synaptic. Then the jobs would release. It isn't part of HPLIPS. Apparently the Ubuntu 17.10 version doesn't run Synaptic due to Wayland. Where can I find this driver?
|
Say you take a dice and you roll it twice so that you have a pair (X,Y) where X represents the first roll, Y represents the second. When you have a distribution like max(X,Y), what type of distribution is that? It's not uniform, obviously, because the outcomes of max(X,Y) are clearly more favorable towards the higher numbers. I'm being asked to "describe" the distribution. I know that we can look at each distinct outcome, like: max(1,1) = 1 max(1,2) = 2 max(1,3) = 3 max(1,4) = 4 max(1,5) = 5 max(1,6) = 6 max(2,1) = 2 max(2,2) = 2 max(2,3) = 3 max(2,4) = 4 max(2,5) = 5 max(2,6) = 6 max(3,1) = 3 max(3,2) = 3 max(3,3) = 3 max(3,4) = 4 max(3,5) = 5 max(3,6) = 6 max(4,1) = 4 max(4,2) = 4 max(4,3) = 4 max(4,4) = 4 max(4,5) = 5 max(4,6) = 6 max(5,1) = 5 max(5,2) = 5 max(5,3) = 5 max(5,4) = 5 max(5,5) = 5 max(5,6) = 6 max(6,1) = 6 max(6,2) = 6 max(6,3) = 6 max(6,4) = 6 max(6,5) = 6 max(6,6) = 6 But is that the most effective way to look at a distribution like this? Is there a faster and more concise way to describe this distribution other than saying the probability of each outcome? i.e., P(max = 1) = 1/36 P(max = 2) = 3/36 = 1/12 P(max = 3) = 5/36 P(max = 4) = 7/36 P(max = 5) = 9/36 = 1/4 P(max = 6) = 11/36 So, obviously, I understand how to LOOK at a discrete distribution and see what the probabilities are, but not until I look at each possible outcome. But what does it mean to "describe" a distribution? This isn't uniform or any sort of discrete distribution we're taught. Can someone provide clarity?
|
eng_Latn
| 34,380 |
"To run is good." "Running is good." What is the difference in meaning?
|
I am wondering whether there is any difference between a gerund acting as subject and an infinitive acting as a subject.
|
A coin with heads probability $p$ is flipped $n$ times. A "run" is a maximal sequence of consecutive flips that are all the same. For example, the sequence HTHHHTTH with $n=8$ has five runs, namely H, T, HHH, TT,H. Show that the expected number of runs is $$1+2(n-1)p(1-p).$$ I have tried to use some generating function on this but calculus got pretty messy and didn't work.
|
eng_Latn
| 34,381 |
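The runs formula in this record follows from linearity of expectation with no calculus: runs = 1 + Σ 1[flip i ≠ flip i−1], and each boundary indicator has mean 2p(1 − p), giving 1 + 2(n − 1)p(1 − p). An exact small-n enumeration sketch confirming it:

```python
from itertools import product
from fractions import Fraction

def expected_runs(n, p):
    """Exact E[number of runs] in n coin flips with P(heads)=p."""
    total = Fraction(0)
    for seq in product('HT', repeat=n):
        prob = Fraction(1)
        for c in seq:
            prob *= p if c == 'H' else 1 - p
        runs = 1 + sum(seq[i] != seq[i - 1] for i in range(1, n))
        total += prob * runs
    return total

p = Fraction(1, 3)
for n in range(1, 9):
    assert expected_runs(n, p) == 1 + 2 * (n - 1) * p * (1 - p)

print(expected_runs(8, Fraction(1, 2)))  # 9/2 for n = 8 fair flips
```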
I know the binomial distribution is used to calculate the probability of N heads in M coin tosses. But what if I want to calculate the number of occurrences of a particular pair of events, such as 'HH', in a random string?
|
I'm trying to find the probability of getting 8 trials in a row correct in a block of 25 trials, you have 8 total blocks (of 25 trials) to get 8 trials correct in a row. The probability of getting any trial correct based on guessing is 1/3, after getting 8 in a row correct the blocks will end (so getting more than 8 in a row correct is technically not possible). How would I go about finding the probability of this occurring? I've been thinking along the lines of using (1/3)^8 as the probability of getting 8 in a row correct, there are 17 possible chances to get 8 in a row in a block of 25 trials, if I multiply 17 possibilities * 8 blocks I get 136, would 1-(1-(1/3)^8)^136 give me the likelihood of getting 8 in a row correct in this situation or am I missing something fundamental here?
|
Please use UK pre-uni methods only (at least at first). Thank you.
|
eng_Latn
| 34,382 |
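The 1 − (1 − (1/3)⁸)^136 idea in this record overcounts, because the 17 windows inside a block overlap and are far from independent. The per-block probability can be computed exactly with a small dynamic program over the current run length — a sketch; the ≈0.19% and ≈1.5% figures below are my own computation, not from the source:

```python
from fractions import Fraction

def p_run(n, r, p):
    """Exact P(at least one run of r consecutive successes in n Bernoulli(p)
    trials), via a DP on the length of the current trailing success run."""
    states = {0: Fraction(1)}
    hit = Fraction(0)
    for _ in range(n):
        nxt = {}
        for run, prob in states.items():
            if run + 1 == r:                 # a success completes the run
                hit += prob * p
            else:                            # a success extends the run
                nxt[run + 1] = nxt.get(run + 1, Fraction(0)) + prob * p
            nxt[0] = nxt.get(0, Fraction(0)) + prob * (1 - p)  # failure resets
        states = nxt
    return hit

one_block = p_run(25, 8, Fraction(1, 3))
print(float(one_block))                # ~0.0019 per 25-trial block
print(float(1 - (1 - one_block)**8))   # ~0.015 across 8 independent blocks
```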
A friend of mine is working on analysing 2000 tweets per day and categorizing them as positive, negative or neutral. This is a really boring task but the algorithms that do this classification are not very good because they can't detect sarcasm. A simple solution to make this task easier is to take a subsample of the original $N = 2000$ data points. Doing some tests we saw that with $30\%$ of the data the normalized histograms of the subsample and the original data points look very similar, but we need a better estimate of the error introduced by this subsampling. Theoretically the data points are an i.i.d. sequence $(X_i)_{i=1}^N$ (big assumption) in the space $A = \{0,1,2\}$ (positive, negative, neutral). Let $(X_{(i)})_{i=1}^n$ be a subsample of size $n \leq N$ (draw $n$ elements uniformly without replacement). In some sense I want to characterize the distribution of $(X_{(i)})_{i=1}^n$ in order to choose an $n$ such that the empirical distribution of $(X_{(i)})_{i=1}^n$ is close to the empirical distribution of $(X_i)_{i=1}^N$. Any help will be appreciated.
|
I have a population of phone calls - 200,000. There are different reasons for each call, but lets assume the number of reasons is known. i.e. 7 different call reasons: 1) Check on order 2) Cancel order 3) Billing information 4) Account information etc... My question is, I want to get a statistically significant sample size so the number of calls I listen to in my sample are representative of the distribution of the reasons in the overall population of 200,000 calls. What should the sample size be? Which methods to use to calculate the sample size?
|
How to find all positive integers $n$ such that $(n-1)!+1$ can be written as $n^k$ for some positive integer $k$?
|
eng_Latn
| 34,383 |
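For the "how many points do I need so the subsample's empirical distribution is close to the full one" question in this record, the Dvoretzky–Kiefer–Wolfowitz inequality gives a distribution-free bound: P(supₓ |F̂ₙ(x) − F(x)| > ε) ≤ 2e^(−2nε²). Inverting it yields a sample size — a sketch, with illustrative ε and α choices of my own:

```python
from math import ceil, log

def dkw_sample_size(eps, alpha):
    """Smallest n such that the empirical CDF is uniformly within eps of the
    true CDF with probability >= 1 - alpha (DKW inequality bound)."""
    return ceil(log(2 / alpha) / (2 * eps**2))

# e.g. within 0.05 everywhere, 95% of the time:
print(dkw_sample_size(0.05, 0.05))   # 738
print(dkw_sample_size(0.03, 0.05))   # 2050
```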
If 2 players toss a fair die, one tosses it 999 times and the other tosses it 1000 times, what is the probability that one gets more even numbers than the other? It's clearly comparing binomial distributions but I'm not sure how to do it.
|
First player rolls a 6-sided die 100 times and the second player 101 times. What are the odds of the second player getting more odd numbers than the first one? I tried solving this via listing the options which led to a nasty sum that I wasn't able to solve neither by hand nor by using wolfram alpha. Clearly there has to be a more clever solution using the properties of a binomial distribution, but so far I haven't come across anything useful.
|
What is the best term to describe the fact that most users have an initial dislike of any change to a UI even though it may really be net better over time? Note: Question originally appeared
|
eng_Latn
| 34,384 |
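The clever property both questions in this record circle around: with one extra toss, "player 2 has more evens" and "player 2 has more odds" are mutually exclusive and exhaustive, and by fairness equally likely — so the answer is exactly 1/2 for every n. An exact check for small n (a sketch):

```python
from fractions import Fraction
from math import comb

def p_second_more(n):
    """Exact P(Bin(n+1, 1/2) > Bin(n, 1/2))."""
    half = Fraction(1, 2)
    px = [comb(n, k) * half**n for k in range(n + 1)]
    py = [comb(n + 1, k) * half**(n + 1) for k in range(n + 2)]
    total, cum = Fraction(0), Fraction(0)   # cum = P(X < j)
    for j in range(n + 2):
        total += py[j] * cum
        if j <= n:
            cum += px[j]
    return total

for n in (1, 2, 5, 20, 100):
    assert p_second_more(n) == Fraction(1, 2)
print("1/2 for every n")
```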
Suppose you are a prisoner. The king gives you $100$ marbles, $50$ black and $50$ white. You have to put all the marbles into $2$ urns such that none of the urns is empty. The king will then choose an urn at random and from that a marble at random. If the marble is white, you will be set free, otherwise you will be sentenced to death. So you have to distribute the marbles in such a way so as to maximize the chance of survival. Basically you have to maximize the probability that the king chooses white marble. My attempt : Let there be $b$ black and $w$ white marbles in the 1st urn . Therefore, there will be $50-b$ black and $50-w$ white marbles in the 2nd urn. Therefore P(Survival) = P(Choosing white marble) = $\frac{1}{2}\cdot\frac{w}{w+b} +\frac{1}{2}\cdot\frac{50-w}{100-w-b}$ . Now I need to maximize the above equation but I don't know how. Any ideas?
|
I've tried searching for this question but couldn't find it on stackexchange. This is a common type of interview question; I ran into it doing brain teasers on a probability puzzles app, and if you fine people agree with my logic, I will inform the app developer that his/her answers are incorrect. The problem is essentially this: You are sentenced to death for thievery. The King is magnanimous and decides to put your fate in the hands of chance. You are given $100$ white marbles and 100 black marbles, and $2$ urns. The king will choose an urn at random and pull out a single marble at random; if the marble is white, you live, if its black, you die. If you place the marbles in the best way possible, what is your probability of survival? I started with the base case: $100$ white marbles in one urn, $100$ black marbles in the other. This comes down to a $50$-$50$ chance of survival. I then worked my way to deciding that placing $1$ white marble in one urn and $99$ white marbles + $100$ black marbles in the other urn would be the "best way possible", which yields the following: $$P(\text{Survival}) = \frac{1}{2}(1+\frac{99}{199}) \approx .749$$ Selecting $1$ of $2$ urns at random gives $\frac{1}{2}$, the urn containing $1$ marble gives $1$, and the other that contains $99$ white marbles and $100$ black marbles gives $\frac{99}{199}$ because there are $99$ possible white marbles to select out of $199$ total marbles. The app claims that the correct answer is $\frac{1}{2}(1+\frac{99}{200}) \approx .748$ I see where the $200$ comes from, but I do not think it is right to say that there are $200$ marbles in the other urn. Who is correct?
|
I've tried searching for this question but couldn't find it on stackexchange. This is a common type of interview question; I ran into it doing brain teasers on a probability puzzles app, and if you fine people agree with my logic, I will inform the app developer that his/her answers are incorrect. The problem is essentially this: You are sentenced to death for thievery. The King is magnanimous and decides to put your fate in the hands of chance. You are given $100$ white marbles and 100 black marbles, and $2$ urns. The king will choose an urn at random and pull out a single marble at random; if the marble is white, you live, if its black, you die. If you place the marbles in the best way possible, what is your probability of survival? I started with the base case: $100$ white marbles in one urn, $100$ black marbles in the other. This comes down to a $50$-$50$ chance of survival. I then worked my way to deciding that placing $1$ white marble in one urn and $99$ white marbles + $100$ black marbles in the other urn would be the "best way possible", which yields the following: $$P(\text{Survival}) = \frac{1}{2}(1+\frac{99}{199}) \approx .749$$ Selecting $1$ of $2$ urns at random gives $\frac{1}{2}$, the urn containing $1$ marble gives $1$, and the other that contains $99$ white marbles and $100$ black marbles gives $\frac{99}{199}$ because there are $99$ possible white marbles to select out of $199$ total marbles. The app claims that the correct answer is $\frac{1}{2}(1+\frac{99}{200}) \approx .748$ I see where the $200$ comes from, but I do not think it is right to say that there are $200$ marbles in the other urn. Who is correct?
|
eng_Latn
| 34,385 |
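The maximization the asker in this record is stuck on can be done by brute force over all admissible urn contents. For the 50 + 50 version the optimum is the classic "one white marble alone", giving 74/99 ≈ 0.747 (the 100 + 100 version in the near-duplicate gives (1/2)(1 + 99/199)). A sketch:

```python
from fractions import Fraction

def survival(w, b):
    """P(drawn marble is white) with w white + b black in urn 1 (of 50 + 50)."""
    if w + b == 0 or w + b == 100:
        return None                      # one urn would be empty
    return (Fraction(w, w + b) + Fraction(50 - w, 100 - w - b)) / 2

candidates = [(survival(w, b), w, b) for w in range(51) for b in range(51)]
best = max((c for c in candidates if c[0] is not None), key=lambda t: t[0])
print(best)  # (Fraction(74, 99), 1, 0): one lone white marble, ~0.747
```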
Two players, $A$ and $B$, alternately and independently flip a coin and the first player to obtain a head wins. Player $A$ flips first. What is the probability that $A$ wins? Official answer: $2/3$, but I cannot arrive at it. Thought process: Find the probability that a head comes on the $n$th trial. That's easy to do, it's just a Geometric random variable. Then find the probability that $n$th turn is player's A turn. Finally, multiply both probabilities. When I came up with each probability, both of them depended on the amount of trials $n$, so my answer was a non-constant function of $n$. However, what I find quite fantastic is that the answer is a constant, so the amount of tries until a head comes doesn't seem to matter.
|
Two players, $A$ and $B$, alternately and independently flip a coin and the first player to get a head wins. Assume player $A$ flips first. If the coin is fair, what is the probability that $A$ wins? So $A$ only flips on odd tosses. So the probability of winning would be $$ P =\frac{1}{2}+\left(\frac{1}{2} \right)^{2} \frac{1}{2} + \cdots+ \left(\frac{1}{2} \right)^{2n} \frac{1}{2}$$ Is that right? It seems that if $A$ only flips on odd tosses, this shouldn't matter. Either $A$ can win on his first toss, his second toss, ...., or his $n^{th}$ toss. So the third flip of the coin is actually $A$'s second toss. So shouldn't it be $$P = \frac{1}{2} + \left(\frac{1}{2} \right)^{2} + \left(\frac{1}{2} \right)^{3} + \cdots$$
|
Please use UK pre-uni methods only (at least at first). Thank you.
|
eng_Latn
| 34,386 |
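The "fantastic constant" in this record appears because the dependence on n is summed away: A wins on flips 1, 3, 5, …, so P(A wins) = Σ p((1 − p)²)ᵏ = p/(1 − (1 − p)²) = 1/(2 − p), which is 2/3 at p = 1/2. A sketch of the geometric series:

```python
from fractions import Fraction

p = Fraction(1, 2)
# To reach A's (k+1)-th turn, both players must have gotten tails k times.
closed = p / (1 - (1 - p)**2)            # = 1/(2 - p)
assert closed == Fraction(2, 3)

partial = sum(p * ((1 - p)**2)**k for k in range(60))
assert abs(float(closed - partial)) < 1e-15
print(closed)  # 2/3
```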
I was posed the following question recently: Suppose a brand of cereal has four different toys included in their cereal boxes, each with an equal probability of being found: On average, how many boxes of cereal do you need to buy to get all four toys? Since getting a toy in each box is an independent event and the probability stays constant, I guessed that you could model this using the binomial distribution: \begin{align} X &= \text{number of unique toys}\\ X &\sim B(n, 0.25) \end{align} Where $n$ is the number of trials. The probability function for getting four unique toys would therefore be: $$P(X = 4) = {n \choose 4} \cdot 0.25^4 \cdot 0.75^{n-4}$$ So, to find the mean number of trials, I perform the infinite sum: $$\sum \limits_{n=4}^\infty \left[n\cdot P(X=4)\right] = \sum \limits_{n=4}^\infty \left[n\cdot {n \choose 4} \cdot 0.25^4 \cdot 0.75^{n-4}\right]$$ Which, when I plug this into Wolfram Alpha returns $76$. So, according to my calculation, on average you need to buy $76$ boxes of cereal to get all four toys. Is this right?
|
What is the average number of times it would take to roll a fair 6-sided die and get all numbers on the die? The order in which the numbers appear does not matter. I had this question explained to me by a professor (not a math professor), but it was not clear in the explanation. We were given the answer $(1-(\frac56)^n)^6 = .5$ or $n = 12.152$ Can someone please explain this to me, possibly with a link to a general topic?
|
I'd prefer as little formal definition as possible and simple mathematics.
|
eng_Latn
| 34,387 |
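A caution on this record: X = "number of unique toys after n boxes" is not Binomial(n, 0.25), and Σ n·P(X = 4) is not the expectation of a waiting time, because those probabilities do not form a distribution over n. The classic coupon-collector answer is 4·(1 + 1/2 + 1/3 + 1/4) = 25/3 ≈ 8.33 boxes, which simulation supports (a sketch; seed and trial count are mine):

```python
import random
from fractions import Fraction

def boxes_until_all_toys(rng, n_toys=4):
    seen, boxes = set(), 0
    while len(seen) < n_toys:
        seen.add(rng.randrange(n_toys))
        boxes += 1
    return boxes

exact = 4 * sum(Fraction(1, k) for k in range(1, 5))   # 25/3
rng = random.Random(42)
trials = 200_000
avg = sum(boxes_until_all_toys(rng) for _ in range(trials)) / trials
print(float(exact), round(avg, 2))   # 8.333... vs. a simulated value near it
```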
If you choose an answer to this question at random, what is the chance you will be correct? A) $25\%$ B) $50\%$ C) $60\%$ D) $25\%$
|
I found this math "problem" on the internet, and I'm wondering if it has an answer: Question: If you choose an answer to this question at random, what is the probability that you will be correct? a. $25\%$ b. $50\%$ c. $0\%$ d. $25\%$ Does this question have a correct answer?
|
I found this math "problem" on the internet, and I'm wondering if it has an answer: Question: If you choose an answer to this question at random, what is the probability that you will be correct? a. $25\%$ b. $50\%$ c. $0\%$ d. $25\%$ Does this question have a correct answer?
|
eng_Latn
| 34,388 |
I have a continuous distribution that I was thinking of binning for computing MI and H. I often arbitrarily decide on bin size. Is there a general consensus on how to set bin size and number? Thanks for your input!
|
I want to quantify the relationship between two variables, A and B, using mutual information. The way to compute it is by binning the observations (see example Python code below). However, what factors determines what number of bins is reasonable? I need the computation to be fast so I cannot simply use a lot of bins to be on the safe side. from sklearn.metrics import mutual_info_score def calc_MI(x, y, bins): c_xy = np.histogram2d(x, y, bins)[0] mi = mutual_info_score(None, None, contingency=c_xy) return mi
|
The entire site is blank right now. The header and footer are shown, but no questions.
|
eng_Latn
| 34,389 |
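There is no universal consensus on bin count; commonly used defaults are Sturges (⌈log₂ n⌉ + 1 bins), the Freedman–Diaconis rule, or simply √n equal-width bins — and for MI specifically, more bins inflate the plug-in estimate upward. A self-contained sketch of binned MI in pure Python (equal-width bins; all names are mine):

```python
from collections import Counter
from math import log2

def mutual_information(x, y, bins):
    """Plug-in MI (bits) between two equal-length samples after equal-width
    binning. Biased upward for small samples / many bins."""
    def binned(v):
        lo, hi = min(v), max(v)
        w = (hi - lo) / bins or 1.0
        return [min(bins - 1, int((t - lo) / w)) for t in v]

    bx, by = binned(x), binned(y)
    n = len(x)
    cx, cy, cxy = Counter(bx), Counter(by), Counter(zip(bx, by))
    return sum(
        c / n * log2((c / n) / ((cx[i] / n) * (cy[j] / n)))
        for (i, j), c in cxy.items()
    )

x = [i / 99 for i in range(100)]
y = [t * t for t in x]
print(round(mutual_information(x, y, 5), 3),
      round(mutual_information(x, y, 20), 3))  # the estimate grows with bins
```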
I've read that the answer to such a problem is the inverse probability. So here getting a one has probability 1/6, so the number of tries you would be expected to run in order to get a 1 is 6. I'm not sure I understand this.
|
On average, how many times must I roll a dice until I get a $6$? I got this question from a book called Fifty Challenging Problems in Probability. The answer is $6$, and I understand the solution the book has given me. However, I want to know why the following logic does not work: The chance that we do not get a $6$ is $5/6$. In order to find the number of dice rolls needed, I want the probability of there being a $6$ in $n$ rolls being $1/2$ in order to find the average. So I solve the equation $(5/6)^n=1/2$, which gives me $n=3.8$-ish. That number makes sense to me intuitively, where the number $6$ does not make sense intuitively. I feel like on average, I would need to roll about $3$-$4$ times to get a $6$. Sometimes, I will have to roll less than $3$-$4$ times, and sometimes I will have to roll more than $3$-$4$ times. Please note that I am not asking how to solve this question, but what is wrong with my logic above. Thank you!
|
A coin is tossed three times. The probability of zero heads is 1/8 and the probability of zero tails is 1/8. What is the probability that there is at least one head and at least one tail? So, if P(zero heads)= 1/8 , then that should be the same of p(all tails)? We would use the complement rule and Multiplication Rule? P(at least one head) = 1 - P(no heads) = 1 - 1/8= 7/8 P(at least one tail) = 1 - P(no tails) = 1 - 1/8= 7/8 would I multiply the two values?
|
eng_Latn
| 34,390 |
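The flaw the asker in this record is probing: the (5/6)ⁿ = 1/2 equation locates (roughly) the *median* of the waiting time, not its *mean*. The mean of a geometric(p) waiting time is 1/p = 6; the median is the smallest n with 1 − (5/6)ⁿ ≥ 1/2, which is 4 — matching the 3.8-ish intuition. A sketch:

```python
from fractions import Fraction

p = Fraction(1, 6)
# Mean: sum of n * (5/6)^(n-1) * (1/6) converges to 1/p = 6.
partial_mean = sum(n * (1 - p)**(n - 1) * p for n in range(1, 400))
assert abs(float(partial_mean) - 6) < 1e-12

# Median: smallest n with P(first six within n rolls) >= 1/2.
n = 1
while 1 - (1 - p)**n < Fraction(1, 2):
    n += 1
print("mean = 6, median =", n)   # median = 4
```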
Players of a certain TRPG have characters with 6 ability scores, each ability score ranging from 3-18. One method of generating those is by rolling 4d6 drop lowest. That means four six-faced-dice are rolled, and the three highest results are added. What's the probability that, given 5 players, one player will have a highest ability score equal or lower than the lowest ability score of another player? The related question shows how to get the distribution of 4d3 drop lowest, but how do I get from there to an answer of my question above? A good answer would explain the result in a way that a statistics novice can follow.
|
Repeatedly rolling a six sided die four times and summing the highest three results gives you a distribution with what mean and standard deviation? I've only taken AP statistics, but I would like to learn how to do this.
|
The entire site is blank right now. The header and footer are shown, but no questions.
|
eng_Latn
| 34,391 |
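The 4d6-drop-lowest distribution in this record can be enumerated exactly — only 6⁴ = 1296 ordered outcomes — which directly gives the mean and standard deviation asked for (≈12.24 and ≈2.85; my computation) and the pmf needed for the 5-player comparison. A sketch:

```python
from itertools import product

counts = {}
for rolls in product(range(1, 7), repeat=4):
    s = sum(rolls) - min(rolls)          # 4d6, drop the lowest
    counts[s] = counts.get(s, 0) + 1

total = 6**4
mean = sum(s * c for s, c in counts.items()) / total
var = sum(s * s * c for s, c in counts.items()) / total - mean**2
print(round(mean, 4), round(var**0.5, 2))   # ~12.2446 and ~2.85
```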
estimate confidence interval of empirical percentile Suppose $n$ samples are generated from an unknown distribution and the empirical percentile is estimated. In python: import numpy as np n_samples = 30 desired_percentile = 90 x = np.random.uniform(0, 1, n_samples) # data from an unknown distribution perc = np.percentile(x, desired_percentile) How can I estimate how close my estimated percentile is to the real percentile of the distribution? I would like to estimate something like $p=P(|perc- real~perc| > \delta)$ that in this example is equivalent to $P(|perc-0.9|>\delta)$. Note that in general the $real~perc$ value is not known and that I expect $p \rightarrow 0 ~if ~ n \rightarrow\infty$ UPDATE 1 All the answers at the moment seem to suggest estimating the confidence interval using the bootstrap. Unfortunately the bootstrap will work only if enough samples are available. Suppose using the bootstrap we estimate $l$ and $u$ such that: $P(l < real~perc < p) > 0.9$ this will not take into account the fact that the data is not enough to correctly approximate the original distribution. Indeed for example in the case of $n=30$ and $x_i\sim U(0, 1)$ we have: $P(x_1<0.95,\dots x_n<0.95)=0.21$ as a consequence the $P(l < real~perc < p)$ should be less than $0.79$. I think an estimate of the CI needs to explicitly take the number of samples $n$ as a parameter.
|
How to obtain a confidence interval for a percentile? I have a bunch of raw data values that are dollar amounts and I want to find a confidence interval for a percentile of that data. Is there a formula for such a confidence interval?
|
Sampling with replacement or without replacement I'm writing a program in R that simulates bank losses on car loans. Here are the questions I'm trying to solve: You run a bank that has a history of identifying potential homeowners that can be trusted to make payments. In fact, historically, in a given year, only 2% of your customers default. You want to use stochastic models to get an idea of what interest rates you should charge to guarantee a profit this upcoming year. A. Your bank gives out 1,000 loans this year. Create a sampling model and use the function sample() to simulate the number of foreclosures in a year with the information that 2% of customers default. Also suppose your bank loses $120,000 on each foreclosure. Run the simulation for one year and report your loss. B. Note that the loss you will incur is a random variable. Use Monte Carlo simulation to estimate the distribution of this random variable. Use summaries and visualization to describe your potential losses to your board of trustees. C. The 1,000 loans you gave out were for 180,000. The way your bank can give out loans and not lose money is by charging an interest rate. If you charge an interest rate of, say, 2% you would earn 3,600 for each loan that doesn't foreclose. At what percentage should you set the interest rate so that your expected profit totals 100,000? Hint: Create a sampling model with expected value 100 so that when multiplied by the 1,000 loans you get an expectation of 100,000. Corroborate your answer with a Monte Carlo simulation. I'm confused about how to set up this simulation from a high-level point of view and have the following questions: 1. For part A, should I create a pool of 1000 customers or should I create a larger pool of customers? 2. For part A, when sampling, do I sample with or without replacement? 3. For part B, I'm confused about how to set up the Monte Carlo simulation. Am I varying the size of the customer pool? 4. For part C, I'm not sure how to set up a sampling model that involves the interest rate. Any advice or guidance would be appreciated. I'm also thinking that if I fully understood the high-level concepts for parts A and B, part C might not be such a mystery.
|
eng_Latn
| 34,392 |
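A percentile-bootstrap interval is the usual first answer to this record's question — and the asker's UPDATE 1 caveat is real: with n = 30 the bootstrap can never produce values beyond the sample maximum, so for extreme percentiles an order-statistic (binomial) interval is safer. A pure-Python sketch (all names and parameter choices are mine):

```python
import random

def bootstrap_percentile_ci(data, q, n_boot=5000, alpha=0.10, seed=0):
    """Percentile-bootstrap CI for the q-th percentile (nearest-rank)."""
    rng = random.Random(seed)
    n = len(data)

    def nearest_rank(sample):
        return sorted(sample)[min(n - 1, round(q / 100 * (n - 1)))]

    stats = sorted(
        nearest_rank([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]

rng = random.Random(1)
data = [rng.random() for _ in range(200)]
lo, hi = bootstrap_percentile_ci(data, 90)
print(lo, hi)   # an interval around the true 90th percentile, here ~0.9
```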
cumulative distribution functions (CDFs) I want to know why it is important to use CDFs for this analysis of a TMY (typical meteorological year), because I have the month's data to compare with the long-term mean. This is the example from the manual. I have 2 CDFs with an equal number of points that I want to compare. These are from: Temperature of 1 month from 2012 Mean temperature across months What can I do to obtain this difference, using the formula of Finkelstein-Schafer? the manual is this manual
|
can you help me to do a difference of CDF? I have 2 CDF's with equal number of points that I want to compare. These are from: Temperature of 1 month from 2012 Mean temperature across months What can I do to obtain this difference, this is from the formula of Finkelstein-Schafer
|
Coupon collector problem with partial collection of a specific set of coupons I am very new in probability and combinatorics and have a naive question around a variation on the coupon collector problem with partial collection. Lets assume we have a box with 45 coupons labeled 1-45. Now in this case I would like to adjust the CCP such that I can calculate the expected value (amount of draws necessary) to collect 10 specific items. For example item 1-10. How do I adjust my model such that I can calculate the amount of draws necessary to collect each item n times. I assume that I have to adjust CCP2 in following post () to include the probability that I catch one item is 10/45. All tips and tricks are welcome! Thanks for your help
|
eng_Latn
| 34,393 |
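The Finkelstein–Schafer statistic referred to in this record is just the mean absolute difference between the two empirical CDFs, evaluated at the data points: FS = (1/N) Σ |F_month(xᵢ) − F_longterm(xᵢ)|. A sketch — the exact conventions (evaluation points, tie handling) vary between TMY manuals, and the sample data below is invented:

```python
import bisect

def ecdf(sample):
    """Right-continuous empirical CDF of a sample."""
    s = sorted(sample)
    n = len(s)
    return lambda x: bisect.bisect_right(s, x) / n

def finkelstein_schafer(month, longterm):
    """FS statistic: mean |F_month(x) - F_longterm(x)| over the month's values."""
    f_m, f_l = ecdf(month), ecdf(longterm)
    return sum(abs(f_m(x) - f_l(x)) for x in month) / len(month)

month = [20.1, 21.4, 19.8, 22.0, 20.6, 21.1]
longterm = [18.9, 19.7, 20.2, 20.8, 21.3, 21.9, 22.4, 23.0]
print(round(finkelstein_schafer(month, longterm), 3))
# identical samples give 0; larger distribution shifts give larger FS
```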
How do I estimate probability of success with no successes? My $6$ friends and I tried buying tickets to a popular event. Everyone who wanted a ticket got a random number and if your number is less than or equal the number of tickets available, you can buy a ticket. None of us $7$ was selected to buy tickets. How can I estimate the probability of successfully buying a ticket? Using maximum likelihood on a binomial model the answer is $0$, but I know tickets were sold, so it can't be $0$.
|
How to tell the probability of failure if there were no failures? I was wondering if there is a way to tell the probability of something failing (a product) if we have 100,000 products in the field for 1 year and with no failures? What is the probability that one of the next 10,000 products sold fail?
|
Coupon Collector Problem with Batched Selections I am trying to solve a variation on the . In this scenario, someone is selecting coupons at random with replacement from n different possible coupons. However, the person is not selecting coupons one at a time, but instead, in batches. Here's an example problem formulation: There are 100 distinct coupons. A person makes selections in 10-coupon batches at random (each coupon with replacement). What is the expected number of batches necessary to have selected 80 unique coupons? I have been able to determine the expected number of selections necessary to have selected k unique coupons when selecting one at a time (much like Henry's ), but I'm a bit stumped as to how to go about solving it with this particular wrinkle. Any tips/guidance would be greatly appreciated.
|
eng_Latn
| 34,394 |
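The standard quick answer to this record's question is the "rule of three": after n trials with zero successes, an approximate 95% upper confidence bound for p is 3/n. For n = 7 that is ≈0.43 — very loose, as expected for tiny n; the exact one-sided bound solves (1 − p)ⁿ = 0.05. A sketch:

```python
def upper_bound_after_zero_successes(n, conf=0.95):
    """Exact one-sided upper bound on p after n failures:
    the largest p with P(0 successes) = (1 - p)^n >= 1 - conf."""
    return 1 - (1 - conf) ** (1 / n)

n = 7
print(round(upper_bound_after_zero_successes(n), 3), round(3 / n, 3))  # 0.348 0.429
```

The rule of three is just the large-n approximation of the exact bound: 1 − 0.05^(1/n) ≈ −ln(0.05)/n ≈ 3/n.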
Uniform distribution on $\{1,\dots,7\}$ from rolling a die This was a job interview question someone asked about on Reddit. () How can you generate a uniform distribution on $\{1,\dots,7\}$ using a fair die? Presumably, you are to do this by combining repeated i.i.d draws from $\{1,\dots,6\}$ (and not some Rube-Goldberg-esque rig that involves using the die for something other than rolling it).
|
Simulate repeated rolls of a 7-sided die with a 6-sided die What is the most efficient way to simulate a 7-sided die with a 6-sided die? I've put some thought into it but I'm not sure I get somewhere specifically. To create a 7-sided die we can use a rejection technique. 3-bits give uniform 1-8 and we need uniform 1-7 which means that we have to reject 1/8 i.e. 12.5% rejection probability. To create $n * 7$-sided die rolls we need $\lceil log_2( 7^n ) \rceil$ bits. This means that our rejection probability is $p_r(n)=1-\frac{7^n}{2^{\lceil log_2( 7^n ) \rceil}}$. It turns out that the rejection probability varies wildly but for $n=26$ we get $p_r(26) = 1 - \frac{7^{26}}{2^{\lceil log_2(7^{26}) \rceil}} = 1-\frac{7^{26}}{2^{73}} \approx 0.6\%$ rejection probability which is quite good. This means that we can generate with good odds 26 7-die rolls out of 73 bits. Similarly, if we throw a fair die $n$ times we get number from $0...(6^n-1)$ which gives us $\lfloor log_2(6^{n}) \rfloor$ bits by rejecting everything which is above $2^{\lfloor log_2(6^{n}) \rfloor}$. Consequently the rejection probability is $p_r(n)=1-\frac{2^{\lfloor log_2( 6^{n} ) \rfloor}}{6^n}$. Again this varies wildly but for $n = 53$, we get $p_r(53) = 1-\frac{2^{137}}{6^{53}} \approx 0.2\%$ which is excellent. As a result, we can roll the 6-face die 53 times and get ~137 bits. This means that we get about $\frac{137}{53} * \frac{26}{73} = 0.9207$ 7-face die rolls out of 6-face die rolls which is close to the optimum $\frac{log 7}{log6} = 0.9208$. Is there a way to get the optimum? Is there a way to find those $n$ numbers as above that minimize errors? Is there relevant theory I could have a look at? P.S. Relevant python expressions: min([ (i, round(1000*(1-( 7**i ) / (2**ceil(log(7**i,2)))) )/10) for i in xrange(1,100)], key=lambda x: x[1]) min([ (i, round(1000*(1- ((2**floor(log(6**i,2))) / ( 6**i )) ) )/10) for i in xrange(1,100)], key=lambda x: x[1]) P.S.2 Thanks to @Erick Wong for helping me get the question right with his great comments. Related question:
|
There are 'n' candies and 't' boxes. Find the number of ways to place the candies in the boxes for each of the conditions (given in the problem). There are 'n' candies and 't' boxes. Find the number of ways to place the candies in the boxes for each of the conditions (all the candies must be spread out) : (a) candies and boxes are different; (b) candies of the same different boxes should not be empty cartons: (c) candies equally new, boxes are different; Edit :(d) candies are different, the boxes are the same, there should be no empty boxes; (e) candies of different boxes alike.(Edit : candies are different, boxes are equal) Specify the display type that matches the placement, if possible. My Answers : (a) Each layout is encoded with a word of $n$ letters from the alphabet of 't' letters $\implies$ possible $n^t$ variants. (b) Write the $n$ candies in the form of balls in a line; we need to put $(t-1)$ partitions in $(n-1)$ places, but we can't put two partitions in one place, so we get : $^{t-1}C_{n-1}$. (c) First we need to choose a candy in a box (in the first box $n$ ways, in the second box $(n-1)$, $\cdots$ in the $t$-th : $(n-t+1)$ ways $\implies$ total $n!/(n-t)!$ methods, and then we distribute the remaining $(n-t)$ candies into $t$ boxes; this is encoded with words from the alphabet (for each candy $t$ variants) $\implies$ $t^{n-t}$. The Answer is : $\frac{n!}{(n-t)!}\cdot t^{n-t}$ Are my answers (a), (b) & (c) correct? For (d) & (e) I don't know how to proceed. Please help me.
|
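For small cases the closed forms here are easy to sanity-check by brute-force enumeration; a hedged sketch in Python (helper names are made up): distinct candies into distinct boxes gives $t^n$ (note the base/exponent order), and identical candies into distinct non-empty boxes gives $\binom{n-1}{t-1}$ by stars and bars.

```python
from itertools import product
from math import comb

def distinct_into_distinct(n, t):
    """Brute force: each of n distinct candies picks one of t distinct
    boxes (empty boxes allowed) -- should equal t**n."""
    return sum(1 for _ in product(range(t), repeat=n))

def identical_into_distinct_nonempty(n, t):
    """Brute force: ordered tuples of t positive part sizes summing to n
    (identical candies, distinct boxes, none empty) -- should equal
    comb(n - 1, t - 1) by stars and bars."""
    return sum(1 for parts in product(range(1, n + 1), repeat=t)
               if sum(parts) == n)

assert distinct_into_distinct(5, 3) == 3 ** 5           # t**n, not n**t
assert identical_into_distinct_nonempty(7, 3) == comb(6, 2)
```

This kind of check would also flag answer (a): encoding a layout as a word assigns one of the $t$ boxes to each of the $n$ candies, giving $t^n$ words rather than $n^t$.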
eng_Latn
| 34,395 |
(Efficiently) Generate Unique Random Numbers Sorry my terminology is somewhat limited so I'll have to use more words to explain what I mean. In games that require critical hits, I've heard it's not good to generate a random number but instead one should randomly pick a number from a list of possibilities, then exclude that number from the next picking. This way, the dev is given more control over the player's experience. To do this, I've been thinking somewhere along the lines of generating a random number and then excluding that number from the next generation. In C#, how can I exclude a number from a random number generation? I don't think that using a while loop to continually check for another number if the currently generated one is excluded is a very efficient method, especially when most of the numbers have been excluded. Is there a built-in method to do this or some clever way to implement it? Note: I need upwards of 100 numbers to possibly generate, so hard-coding it would be tedious.
|
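The usual constant-time answer is to shuffle the pool once and draw from the end, instead of rejecting repeats in a loop. A sketch in Python (the same bag idea ports directly to C# with a Fisher–Yates shuffle over a `List<T>`):

```python
import random

class NoRepeatPicker:
    """Yields each value of `pool` exactly once per pass, in random order.

    Shuffling once and popping from the end makes every draw O(1); there
    is no rejection loop that slows down as the pool empties.
    """

    def __init__(self, pool):
        self._pool = list(pool)
        self._bag = []

    def next(self):
        if not self._bag:              # previous pass exhausted
            self._bag = self._pool[:]
            random.shuffle(self._bag)  # Fisher-Yates shuffle
        return self._bag.pop()
```

For example, `picker = NoRepeatPicker(range(100))` followed by one hundred `picker.next()` calls yields a permutation of 0..99 with no repeats, then reshuffles for the next pass.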
Make a fake random distribution? Sometimes a "real" random event seems unfair and makes players frustrated. For instance, an enemy has a probability of 20% to cause double damage ("critical hit"). Thus he could make 4 critical hits in a row with 1/625 probability. It's not as small as it sounds. I hope the probability could be adjusted after each hit. If the player just got a critical hit, the probability decreases for the next hit. Otherwise it increases. Is there a mathematical model for this behaviour?
|
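One common model is the "pseudo-random distribution" used in some action games: the chance starts below the nominal rate and grows by a fixed increment after every miss, resetting on a hit. A sketch in Python — the increment value here is the one commonly quoted for a ~20% long-run rate, but treat it as an assumption rather than a derivation:

```python
import random

class PseudoRandomCrit:
    """Crit chance that grows by a fixed increment `c` after each miss
    and resets to `c` on a hit.  Long streaks of crits (or of misses)
    become much rarer than under a flat 20% coin flip."""

    def __init__(self, c=0.05570):  # value commonly quoted for ~20% long-run rate
        self.c = c
        self.p = c

    def roll(self):
        if random.random() < self.p:
            self.p = self.c                     # hit: back to the low starting chance
            return True
        self.p = min(1.0, self.p + self.c)      # miss: next hit more likely
        return False
```

Four crits in a row now require hitting at the reset chance three consecutive times, which is far less likely than $0.2^3$, while a long dry streak is impossible once the accumulated chance reaches 1.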
Birthday problem: why is this solution wrong? This question is about the birthday problem: the probability that in a group of n people, at least two of them have the same birthday (). An easy way to calculate the probability is to calculate first the probability that no two people have the same birthday. Let's say that I want to calculate the probability that in a group of 20 people, NO two people have the same birthday. So, for 20 people with different birthdays, I can choose the first birthday in 365 ways, the second in 364 ways and so on, while for 20 people who can have the same birthday, I can choose the first birthday in 365 ways, the second also in 365 ways, and so on. At the end: $p={365 \cdot 364 \cdot 363 \cdot ... \cdot 346 \over 365 \cdot 365 \cdot 365 \cdot ... \cdot 365}\approx 0.59$ This is the right method and I understand it. I don't understand why the following method is wrong: The probability that in a group of 20 people NO two people have the same birthday is the ratio between the combination without repetition of 20 birthdays and the combination with repetition of 20 birthdays: $$p={C_{365,20} \over C'_{365,20}}={\binom{365}{20} \over \left(\binom{365}{20}\right)}={\binom{365}{20} \over \binom{365+20-1}{20}}=\frac{{{365!}\over {20!\cdot(365-20)!}}}{{(365+20-1)!}\over{20!\cdot{(365-1)!}}}\approx 0.35$$ I understand the mistake is in the denominator (as the numerator is the same of the other method, after simplifying that 20!), but why? Isn't it right to calculate the k-combination with repetition of 20 birthdays? Thanks for helping!
|
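Both numbers are quick to reproduce; the flaw in the second method is that multisets of birthdays are not equally likely (e.g. "all 20 share one day" corresponds to exactly one ordered sequence, while 20 distinct days correspond to $20!$ of them), so a ratio of multiset counts is not a probability. A short check in Python:

```python
from math import comb, perm

n, k = 365, 20

# correct: ordered birthday sequences, all n**k equally likely
p_ordered = perm(n, k) / n ** k

# flawed: ratio of multiset counts -- multisets are not equally likely
p_multiset = comb(n, k) / comb(n + k - 1, k)

assert abs(p_ordered - 0.589) < 0.001   # the ~0.59 from the question
assert abs(p_multiset - 0.353) < 0.002  # the ~0.35 from the question
```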
eng_Latn
| 34,396 |
Proof of "friendship" in group of 101 people In a group of 101 people, for every 50 people, there is another person who is friends with all of these 50 people. Prove that there exists a person who is friends with the other 100 people. My first thoughts were pigeonhole, but this didn't allow me to prove the last part. I additionally tried to use some smaller examples, but could not generate any proof from it. Any help greatly appreciated.
|
There are $2n+1$ people. For any $n$ people there is somebody who is friends with each of them. Prove there is a "know-them-all" person. I have the following problem. There are $2n+1$ people in the room. For any group of $n$ people there is a person (different from them) who is friends with each person in this group. Prove that there is a person who knows all $2n$ other people. One can easily see that $\min_v \deg v \geq n$. From this we can get a lower bound for the number of edges $N_e$: $$ N_e = \frac{1}{2}\sum_v \deg v \geq \frac{(2n+1) \cdot n}{2} $$ I do not know where to move from this point (and do not think that this is the right direction) and am pretty stuck right now. Can you give a hint?
|
Coupon Collector Problem with Batched Selections I am trying to solve a variation on the . In this scenario, someone is selecting coupons at random with replacement from n different possible coupons. However, the person is not selecting coupons one at a time, but instead, in batches. Here's an example problem formulation: There are 100 distinct coupons. A person makes selections in 10-coupon batches at random (each coupon with replacement). What is the expected number of batches necessary to have selected 80 unique coupons? I have been able to determine the expected number of selections necessary to have selected k unique coupons when selecting one at a time (much like Henry's ), but I'm a bit stumped as to how to go about solving it with this particular wrinkle. Any tips/guidance would be greatly appreciated.
|
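The hitting time can be estimated quickly by Monte Carlo before attempting the exact occupancy-distribution analysis; a sketch in Python (function names invented), using the example numbers from the question:

```python
import random

def batches_to_collect(n_types=100, batch=10, target=80, rng=random):
    """Batches of `batch` draws (with replacement from `n_types` coupons)
    needed until at least `target` distinct coupons have been seen."""
    seen, batches = set(), 0
    while len(seen) < target:
        seen.update(rng.randrange(n_types) for _ in range(batch))
        batches += 1
    return batches

def estimate_mean_batches(trials=10000, seed=0):
    """Monte Carlo estimate of the expected number of batches."""
    rng = random.Random(seed)
    return sum(batches_to_collect(rng=rng) for _ in range(trials)) / trials
```

As a cross-check, the expected number of single draws to reach 80 of 100 distinct coupons is $100(H_{100}-H_{20}) \approx 159$, so the batched mean should land a little above $15.9$ batches, since a batch only "finishes" at a multiple of ten draws.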
eng_Latn
| 34,397 |
Assuming the rest of your application is solid, is a GRE score of 160Q and 155V good enough for a math PhD? I plan on applying to graduate school soon and I'm confident the rest of my application will be pretty solid. However, I'm worried that a quant score of 160-163 will put me below the other applicants. Most have quant scores of 165+. I could break this, but I have a huge problem of making silly mistakes when under pressure while trying to keep a good pace.
|
How does the admissions process work for Ph.D. programs in the US, particularly for weak or borderline students? When applying to a PhD program in the US, how does the admissions process work? If an applicant is weak in a particular area, is it possible to offset that by being strong in a different area? Note that this question originated from this . Please feel free to edit the question to improve it.
|
Maximizing expected value of coin reveal game I was asked this question today in an interview and am still not sure of the answer. The question is as follows. Part one: Say I have flipped 100 coins already and they are hidden to you. Now I reveal these coins one by one and you are to guess if it is heads or tails before I reveal the coin. If you get it correct, you get $\$1$. If you get it incorrect, you get $\$0$. I will allow you to ask me one yes or no question about the sequence for free. What will it be to maximize your profit? My approach for this part of the problem was to ask if there were more heads than tails. If they say yes, I will just guess all heads otherwise I just guess all tails. I know the expected value for this should be greater than 50 but is it possible to calculate the exact value for this? If so, how would you do it? Part two: Same scenario as before but now I will charge for a question. I will allow you to ask me any amount of yes or no questions as I go through this process for $\$1$. What is your strategy to maximize your profit? I was not sure about the answer to this part of the question. Would the best option be to guess randomly? I think the expected value of this should be 50. I am not sure about the expected value of part one but if it is greater than 51, I think I could also use that approach. Anyone have a good idea for this part?
|
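For part one, the fixed strategy "ask which side is the majority, then guess that side on every coin" wins exactly $\max(H, 100-H)$ dollars, so its expected payoff is $E[\max(H, 100-H)]$ with $H \sim \mathrm{Binomial}(100, 1/2)$. A short computation in Python (this values only that one fixed strategy, not the best adaptive play):

```python
from math import comb

def expected_majority_payoff(n=100):
    """E[max(H, n - H)] for H ~ Binomial(n, 1/2): expected number of
    correct guesses when one free question reveals the majority side
    and you guess that side on every coin."""
    return sum(comb(n, h) * max(h, n - h) for h in range(n + 1)) / 2 ** n
```

For $n = 100$ this comes to about $53.98 \approx 50 + \sqrt{n/(2\pi)}$, so the one free question is worth roughly four extra dollars over blind guessing.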
eng_Latn
| 34,398 |
Generate 4-digit number without the Random class I am trying to generate a random 4-digit number using the Java Math.random method. The number must always be 4 digits, i.e., allow for results such as 0001, or 0023, or 0123 to be possible. However, the traditional Math.random formula below renders numbers less than 1000 as 1, 23, or 123. int i = (int)(Math.random()*10000); Is there a way to do this using Math.random and/or loops? Thank you, in advance, for any suggestions. I tried researching how to ensure the 4-digit number always has 4 digits, but all results seem to suggest using the Random class, or min/max. I DO NOT want to use the Random class or min/max code as I have not studied it yet.
|
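One approach that sidesteps padding entirely (shown in Python for brevity; the per-digit idea carries over to `Math.random` by drawing each digit as `(int)(Math.random()*10)`) is to generate the four digits independently and concatenate them:

```python
import random

def random_4_digit_string(rng=random):
    """A numeric string '0000'..'9999', each equally likely.  Drawing the
    four digits independently keeps leading zeros by construction, so no
    padding step is needed afterwards."""
    return "".join(str(rng.randrange(10)) for _ in range(4))
```

If the value is also needed as an int, `int(s)` recovers it — the leading zeros only ever exist in the string form.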
How can I pad an integer with zeros on the left? How do you left-pad an int with zeros when converting to a String in Java? I'm basically looking to pad out integers up to 9999 with leading zeros (e.g. 1 = 0001).
|
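In Python the same left-pad idea is a format specification; Java's `String.format("%04d", n)` behaves the same way. A few self-checking examples:

```python
# zero-pad at format time; the underlying int is unchanged
assert "%04d" % 7 == "0007"
assert "{:04d}".format(123) == "0123"
assert f"{42:04d}" == "0042"
assert str(1).zfill(4) == "0001"
```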
How often do you have to roll a 6-sided die to obtain every number at least once? I've just played a game with my kids that basically boils down to: whoever rolls every number at least once on a 6-sided die wins. I won, eventually, and the others finished 1-2 turns later. Now I'm wondering: what is the expectation of the length of the game? I know that the expectation of the number of rolls till you hit a specific number is $\sum_{n=1}^\infty n\frac{1}{6}(\frac{5}{6})^{n-1}=6$. However, I have two questions: How many times to you have to roll a six-sided die until you get every number at least once? Among four independent trials (i.e. with four players), what is the expectation of the maximum number of rolls needed? [note: it's maximum, not minimum, because at their age, it's more about finishing than about getting there first for my kids] I can simulate the result, but I wonder how I would go about calculating it analytically. Here's a Monte Carlo simulation in Matlab mx=zeros(1000000,1); for i=1:1000000, %# assume it's never going to take us >100 rolls r=randi(6,100,1); %# since R2013a, unique returns the first occurrence %# for earlier versions, take the minimum of x %# and subtract it from the total array length [~,x]=unique(r); mx(i,1)=max(x); end %# make sure we haven't violated an assumption assert(numel(x)==6) %# find the expected value for the coupon collector problem expectationForOneRun = mean(mx) %# find the expected number of rolls as a maximum of four independent players maxExpectationForFourRuns = mean( max( reshape( mx, 4, []), [], 1) ) expectationForOneRun = 14.7014 (SEM 0.006) maxExpectationForFourRuns = 21.4815 (SEM 0.01)
|
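The 14.7 from the simulation has a closed form: the waits for the 1st, 2nd, …, 6th new face are geometric with success probabilities $6/6, 5/6, \ldots, 1/6$, so the mean is $6(1 + 1/2 + \cdots + 1/6) = 147/10$. A sketch in Python (rather than Matlab) that also Monte-Carlos the four-player maximum:

```python
from fractions import Fraction
import random

# exact single-player expectation: sum of geometric waiting times
exact = sum(Fraction(6, k) for k in range(1, 7))
assert exact == Fraction(147, 10)  # 14.7 rolls on average

def rolls_to_see_all_faces(rng):
    """Rolls of a fair d6 until every face 1..6 has appeared."""
    seen, rolls = set(), 0
    while len(seen) < 6:
        seen.add(rng.randint(1, 6))
        rolls += 1
    return rolls

def mean_max_of_four(trials=20000, seed=0):
    """Monte Carlo estimate of E[max of 4 independent finishing times]."""
    rng = random.Random(seed)
    return sum(max(rolls_to_see_all_faces(rng) for _ in range(4))
               for _ in range(trials)) / trials
```

The four-player maximum has no similarly tidy closed form, but it can be written via the survival function as $E[\max] = \sum_{m \ge 0}\bigl(1 - F(m)^4\bigr)$, where $F$ is the single-player CDF; the simulation's ≈21.5 agrees with that sum.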
eng_Latn
| 34,399 |