
Showing posts from January, 2014

Three Random Numbers



Q: You play a game with a friend where he chooses two random numbers between 0 and 1. Next you choose a random number between 0 and 1. If your number falls between the prior two numbers you win. What is the probability that you would win?

A: Consider the number line between 0 and 1.


Let \(x\) and \(y\) be the two numbers your friend chooses. Given those two values, the probability of a win, i.e. of choosing a number that lies between them, is \(\frac{|y-x|}{1}\): the length of the winning interval divided by the total possible range, which is 1. The absolute value is needed because \(x\) could be greater than \(y\) or vice versa. To find the overall probability that a third chosen number lies between the first two, we average over all possible pairs \((x, y)\) with a double integral over \([0,1]^2\). This is
$$
P(\text{win}) = \int_{0}^{1}\int_{0}^{1}|y-x|\,dx\,dy = \frac{1}{3}
$$
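The calculation above can be sanity-checked with a quick Monte Carlo simulation: draw two numbers, draw a third, and count how often the third lands between the first two. The function name and trial count below are illustrative choices, not part of the original derivation.

```python
import random

def estimate_win_probability(trials=1_000_000, seed=0):
    """Monte Carlo estimate of P(third number falls between the first two)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        x, y = rng.random(), rng.random()   # friend's two numbers
        z = rng.random()                    # your number
        lo, hi = min(x, y), max(x, y)
        if lo < z < hi:
            wins += 1
    return wins / trials

print(estimate_win_probability())  # close to 1/3
```

With a million trials the estimate typically agrees with \(1/3\) to about three decimal places.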

The Three Magical Boxes



Q: You are playing a game in which you are presented with 3 magical boxes. Each box has a fixed probability of delivering a gold coin when you open it. On a single attempt, you open a box, take the gold coin if one appears, and close the box. On the next attempt you are free to either open the same box again or pick another box. You have 100 attempts in total. You do not know the win probability of any of the boxes. What would be a strategy to maximize your returns?


A: Problems of this type fall into a category of algorithms called "multi-armed bandits". The name has its origin in casino slot machines (each known as a "one-armed bandit"): a gambler tries to maximize his returns by choosing which of several machines' arms to pull. The dilemma he faces is the same as in the game described above. Notice that the problem is a bit different from a typical estimation exercise. You co…
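The answer is cut off above, so as one concrete illustration of a bandit strategy (an assumption on my part, not necessarily the strategy the original post went on to describe), here is a minimal epsilon-greedy sketch: with small probability explore a random box, otherwise open the box with the best observed coin rate. The function name, the example probabilities, and `eps=0.1` are all hypothetical choices.

```python
import random

def epsilon_greedy(true_probs, attempts=100, eps=0.1, seed=0):
    """Epsilon-greedy play over boxes with unknown win probabilities.

    true_probs is hidden from the strategy; it only drives the simulated
    coin draws. Returns the total number of coins collected.
    """
    rng = random.Random(seed)
    n = len(true_probs)
    pulls = [0] * n   # times each box was opened
    coins = [0] * n   # coins obtained from each box
    total = 0
    for _ in range(attempts):
        if rng.random() < eps or sum(pulls) == 0:
            box = rng.randrange(n)  # explore: try a random box
        else:
            # exploit: open the box with the best observed coin rate
            box = max(range(n),
                      key=lambda i: coins[i] / pulls[i] if pulls[i] else 0.0)
        reward = 1 if rng.random() < true_probs[box] else 0
        pulls[box] += 1
        coins[box] += reward
        total += reward
    return total

print(epsilon_greedy([0.2, 0.5, 0.8]))
```

Over 100 attempts this tends to concentrate pulls on the best box while still spending a few attempts learning the others, which is exactly the explore/exploit trade-off the bandit framing captures.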