Q: You are playing a game in which you are presented with 3 magical boxes. Each box has a fixed probability of yielding a gold coin when you open it. On each attempt you open one box, collect the coin if there is one, and close the box; on the next attempt you are free to open the same box again or pick another. You have 100 attempts in total, and you do not know the win probability of any of the boxes. What would be a strategy to maximize your returns?
A: Problems of this type fall into a category called "multi-armed bandits". The name has its origin in casino slot machines: a gambler (the "bandit") tries to maximize his returns by choosing which of several arms to pull, and the dilemma he faces is the same as in the game described above. Notice that the problem is a bit different from a typical estimation exercise. You could simply split your 100 attempts into blocks of 33, 33 & 34, one per box, but this would not be optimal. Suppose one of the boxes had just a \(1\%\) probability of yielding a gold coin. As you keep probing and exploring that box, you know intuitively that you are spending a fair number of attempts simply to reconfirm something you already knew. You need a strategy that adjusts according to the new information you gain from each attempt, something that gradually shifts away from a box that yields less toward a box that yields more.
Assume that at the beginning of the game you know nothing about the yield probabilities. Place a uniform \(\text{Beta}(1,1)\) prior on each box, which corresponds to a prior mean estimate of \(\big[\frac{1}{2}, \frac{1}{2},\frac{1}{2}\big]\) and will be used to decide which box to sample next. First open the boxes in succession, using \(n\) attempts per box. If you denote the number of successes for each box as \(\{s_1,s_2,s_3\}\), then you can update the posterior mean of your belief in each box's yield as follows
$$
p_1 = \frac{1 + s_1}{2 + n} \\
p_2 = \frac{1 + s_2}{2 + n} \\
p_3 = \frac{1 + s_3}{2 + n}
$$
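The posterior-mean update above is a one-line computation. The sketch below implements it, with the success counts and attempt count chosen purely for illustration:

```python
def posterior_mean(successes, attempts):
    """Beta(1, 1) posterior mean of a box's yield: (1 + s) / (2 + n)."""
    return (1 + successes) / (2 + attempts)

# Before any attempts, the estimate is the prior mean 1/2.
# After n = 5 initial attempts per box with, say, 4, 2 and 0 coins:
estimates = [posterior_mean(s, 5) for s in (4, 2, 0)]
# estimates -> [5/7, 3/7, 1/7]
```

Note that even the box that never paid out keeps a small positive estimate (\(\frac{1}{7}\) here), which is what keeps it from being ruled out entirely after a short unlucky streak.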
Think of this as your initializing phase. Once you have initialized your estimates, subsequent choices of boxes should be based on a re-normalized probability vector derived from \(p_1,p_2,p_3\). What this means is that the probability of picking each box is computed as follows
$$
P(\text{pick box 1}) = \frac{p_1}{p_1 + p_2 + p_3} \\
P(\text{pick box 2}) = \frac{p_2}{p_1 + p_2 + p_3} \\
P(\text{pick box 3}) = \frac{p_3}{p_1 + p_2 + p_3}
$$
What ends up happening here is that you tend to pick the box with the highest estimated probability of winning, based on the information gleaned up to that point. Another benefit of this approach is that you are learning in real time. If a certain box isn't yielding as much as another, you don't stop opening it altogether; instead you progressively sample it less often.
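The whole strategy can be sketched as a short simulation. This is a minimal illustration, not the only way to do it; the true yield probabilities, the number of initializing attempts per box, and the seed are all assumptions chosen for the example:

```python
import random

def play(true_probs, attempts=100, init_per_box=5, seed=0):
    """Initialize each box, then sample boxes in proportion to
    their current posterior-mean yield estimates."""
    rng = random.Random(seed)
    k = len(true_probs)
    successes = [0] * k   # gold coins collected per box
    pulls = [0] * k       # attempts spent per box
    coins = 0

    def pull(i):
        nonlocal coins
        win = rng.random() < true_probs[i]
        successes[i] += win
        pulls[i] += 1
        coins += win

    # Initializing phase: a few attempts on every box.
    for i in range(k):
        for _ in range(init_per_box):
            pull(i)

    # Adaptive phase: pick a box with probability proportional to p_i,
    # where p_i = (1 + s_i) / (2 + n_i) is the posterior mean so far.
    for _ in range(attempts - k * init_per_box):
        p = [(1 + successes[i]) / (2 + pulls[i]) for i in range(k)]
        i = rng.choices(range(k), weights=p)[0]
        pull(i)

    return coins, pulls

coins, pulls = play([0.6, 0.3, 0.01])
```

Running this, the low-yield box still gets opened occasionally, but the bulk of the attempts drift toward the better boxes, which is exactly the gradual transition the strategy is after.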
If you are looking to buy some books on probability, here are some of the best to own
Fifty Challenging Problems in Probability with Solutions (Dover Books on Mathematics)
This book is a great compilation that covers quite a few puzzles. What I like about these puzzles is that they are all tractable and don't require too much advanced mathematics to solve.
Introduction to Algorithms
This is a book on algorithms, some of them probabilistic, but it is a must-have for students, job candidates, and even full-time engineers and data scientists.
Introduction to Probability Theory
Overall an excellent book for learning probability, well recommended for undergraduates and graduate students.
An Introduction to Probability Theory and Its Applications, Vol. 1, 3rd Edition
This is a two-volume set, and the first volume is what will likely interest a beginner because it covers discrete probability. The book tends to treat probability as a theory in its own right.
The Probability Tutoring Book: An Intuitive Course for Engineers and Scientists (and Everyone Else!)
A good book for graduate-level classes: it has some practice problems, which is a good thing. But that doesn't make it any less of a buy for the beginner.
Introduction to Probability, 2nd Edition
A good book to own. It does not require prior knowledge of other areas, but it is a bit light on worked-out examples.
Bundle of Algorithms in Java, Third Edition, Parts 1-5: Fundamentals, Data Structures, Sorting, Searching, and Graph Algorithms (3rd Edition) (Pts. 1-5)
An excellent resource (for students, engineers, and even entrepreneurs) if you are looking for code you can take and implement directly on the job.
Understanding Probability: Chance Rules in Everyday Life
This is a great book to own. The second half of the book may require some knowledge of calculus. It strikes the right balance for someone who wants to learn but doesn't want to be scared off by the "lemmas".
Data Mining: Practical Machine Learning Tools and Techniques, Third Edition (The Morgan Kaufmann Series in Data Management Systems)
This one is a must-have if you want to learn machine learning. The book is beautifully written and ideal for the engineer or student who doesn't want to get too deep into the details of a machine-learning approach but wants a working knowledge of it. There are some great examples and test data in the textbook too.
Discovering Statistics Using R
This is a good book if you are new to statistics and probability and are simultaneously getting started with a programming language. The book uses R and is written in a casual, humorous way, making it an easy read. Great for beginners. Some of the data on the companion website may be missing.
A Course in Probability Theory, Third Edition
Covered in this book are the central limit theorem and other graduate topics in probability. You will need to brush up on some mathematics before you dive in, but most of that can be done online.
Probability and Statistics (4th Edition)
This book has been yellow-flagged with some issues, including sequencing of content, but otherwise it's good.