
The Naive Bayesian Approach to Machine Learning

In this write-up, I'll explain the Naive Bayesian (NB) approach used in machine learning. First off, the "naive" in NB is part of the name, not a judgment on the method: it refers to the simplifying independence assumption described below. The method is simple, robust, and fairly effective in a lot of cases. Every engineer who deals with data should know this technique. It is cheap to compute and, unlike many other techniques, easy to explain to people without a machine learning background.


To start with, let us look at some data. Assume you run a coffee shop. You log the gender of every customer along with their age, which you estimate by sight. Granted, this estimate may not be exact, but it should be reasonably close. You also track whether the customer bought a coffee cup that you had put up prominently on display. The data is shown in the table below.


Gender   Age      Bought Cup (y/n)
M        teen     n
M        teen     n
M        middle   y
M        middle   y
M        middle   n
M        middle   n
M        elder    y
M        elder    y
M        elder    y
F        teen     n
F        teen     y
F        middle   y
F        middle   y
F        middle   y
F        middle   n
F        elder    y
F        elder    y
F        elder    n
F        elder    n

You want to be able to predict, when a customer walks in and you have estimated their gender and age, the probability that the customer will buy the cup.

The first assumption in the NB approach is that the features are independent: the value one feature (gender or age) takes has no bearing on the value the other feature takes.
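In symbols, and conditioned on the outcome (which is how the assumption gets used in the derivation below), this reads

    P(M, \text{middle} \mid Y) = P(M \mid Y)\, P(\text{middle} \mid Y)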

Next, assume a customer walks in whose gender you estimate to be 'M' and age to be 'middle'. Bayes' theorem gives the estimate

    P(Y \mid M, \text{middle}) = \frac{P(M, \text{middle} \mid Y)\, P(Y)}{P(M, \text{middle})}
Next we exploit the fact that the features are independent, so the above equation simplifies to

    P(Y \mid M, \text{middle}) = \frac{P(M \mid Y)\, P(\text{middle} \mid Y)\, P(Y)}{P(M, \text{middle})}
One more simplification: we do not really need P(M, middle), because we are going to estimate P(N | M, middle) in the same way and combine P(Y | M, middle) and P(N | M, middle) to get the final estimate. In this approach P(M, middle) is just a normalizing factor (or cancels out in the final estimate, whichever way you want to think about it). P(N | M, middle) works out as follows

    P(N \mid M, \text{middle}) = \frac{P(M \mid N)\, P(\text{middle} \mid N)\, P(N)}{P(M, \text{middle})}
Let us estimate each of the components:

P(Y)          = 11/19 = 57.9%
P(M | Y)      = 5/11  = 45.5%
P(middle | Y) = 5/11  = 45.5%
P(N)          = 8/19  = 42.1%
P(M | N)      = 4/8   = 50.0%
P(middle | N) = 3/8   = 37.5%

The sought probability is simply

    P(Y \mid M, \text{middle}) = \frac{P(M \mid Y)\, P(\text{middle} \mid Y)\, P(Y)}{P(M \mid Y)\, P(\text{middle} \mid Y)\, P(Y) + P(M \mid N)\, P(\text{middle} \mid N)\, P(N)}
Plug in all these numbers and you get ~ 60%
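As a sanity check, here is a minimal Python sketch (the function and variable names are my own) that recomputes the component estimates from the table by counting and combines them as above:

    # The coffee-shop log: (gender, age, bought_cup)
    data = [
        ("M", "teen", "n"), ("M", "teen", "n"),
        ("M", "middle", "y"), ("M", "middle", "y"),
        ("M", "middle", "n"), ("M", "middle", "n"),
        ("M", "elder", "y"), ("M", "elder", "y"), ("M", "elder", "y"),
        ("F", "teen", "n"), ("F", "teen", "y"),
        ("F", "middle", "y"), ("F", "middle", "y"), ("F", "middle", "y"),
        ("F", "middle", "n"),
        ("F", "elder", "y"), ("F", "elder", "y"),
        ("F", "elder", "n"), ("F", "elder", "n"),
    ]

    def posterior_buy(gender, age):
        """Naive Bayes estimate of P(buy | gender, age)."""
        yes = [row for row in data if row[2] == "y"]
        no = [row for row in data if row[2] == "n"]
        p_y, p_n = len(yes) / len(data), len(no) / len(data)
        # Per-class likelihoods, using the conditional independence assumption.
        like_y = (sum(g == gender for g, a, b in yes) / len(yes)) * \
                 (sum(a == age for g, a, b in yes) / len(yes))
        like_n = (sum(g == gender for g, a, b in no) / len(no)) * \
                 (sum(a == age for g, a, b in no) / len(no))
        # P(gender, age) is just the normalizer, as noted above.
        return like_y * p_y / (like_y * p_y + like_n * p_n)

    print(posterior_buy("M", "middle"))  # ~0.602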

What happens if one of the features is numeric (and not a factor as in the example above)? The method stays the same: choose a distribution that describes the data well, usually a Gaussian, fit it per class, and then, for the numeric value of the test point, use the probability density at that point for P(feature | Y) and P(feature | N).
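For example, if age were recorded in years instead of brackets, the per-class likelihood could be sketched like this (a minimal illustration; the sample ages and the Gaussian choice are assumptions, not part of the data above):

    import math

    def gaussian_pdf(x, mu, sigma):
        """Density of a Gaussian with mean mu and standard deviation sigma, at x."""
        return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

    # Hypothetical ages (in years) of past buyers; fit a Gaussian per class.
    ages_given_y = [34, 41, 52, 47, 65, 58]
    mu = sum(ages_given_y) / len(ages_given_y)
    sigma = math.sqrt(sum((a - mu) ** 2 for a in ages_given_y) / len(ages_given_y))

    # The density at the test customer's age plays the role of P(age | Y).
    p_age_given_y = gaussian_pdf(45, mu, sigma)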

The NB method is quite robust and scales easily. In some scenarios we do not want any of the estimated probabilities to be exactly zero or 100%, so that the method predicts a non-zero probability for cases it has never seen and never claims complete certainty for any case. The common fix is additive (Laplace) smoothing: add a small pseudo-count to every count before taking the ratios.
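A quick sketch of that idea (the pseudo-count alpha = 1 is a conventional choice, not something prescribed by the example above):

    def smoothed(count, class_total, n_values, alpha=1.0):
        """P(feature value | class) with additive (Laplace) smoothing.

        count       -- times this feature value co-occurred with the class
        class_total -- total rows in the class
        n_values    -- number of distinct values the feature can take
        """
        return (count + alpha) / (class_total + alpha * n_values)

    # A value never seen with class Y would be 0/11 unsmoothed, which would
    # wipe out the whole product; smoothing keeps it small but non-zero.
    print(smoothed(0, 11, 3))  # 1/14 ~ 0.071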

Some must-buy books on probability
Fifty Challenging Problems in Probability with Solutions (Dover Books on Mathematics)
This book is a great compilation that covers quite a few puzzles. What I like about these puzzles is that they are all tractable and don't require too much advanced mathematics to solve.

Introduction to Algorithms
This is a book on algorithms, some of which are probabilistic. The book is a must-have for students, job candidates, and even full-time engineers & data scientists.

Introduction to Probability Theory

An Introduction to Probability Theory and Its Applications, Vol. 1, 3rd Edition

The Probability Tutoring Book: An Intuitive Course for Engineers and Scientists (and Everyone Else!)

Introduction to Probability, 2nd Edition

The Mathematics of Poker
Good read. Overall, Poker/Blackjack-type card games are a good way to get introduced to probability theory.

Let There Be Range!: Crushing SSNL/MSNL No-Limit Hold'em Games
Easily the most expensive book out there. So if the item above piques your interest and you want to go pro, go for it.

Quantum Poker
Well written and easy to read mathematics. For the Poker beginner.


Bundle of Algorithms in Java, Third Edition, Parts 1-5: Fundamentals, Data Structures, Sorting, Searching, and Graph Algorithms (3rd Edition) (Pts. 1-5)
An excellent resource for students, engineers, and entrepreneurs looking for code they can take and implement directly on the job.

Understanding Probability: Chance Rules in Everyday Life
A bit pricey compared to the first one, but I like the look and feel of the text used. It is simple to read and understand, which is vital if you are trying to get into the subject.

Data Mining: Practical Machine Learning Tools and Techniques, Third Edition (The Morgan Kaufmann Series in Data Management Systems)
This one is a must-have if you want to learn machine learning. The book is beautifully written and ideal for the engineer or student who doesn't want to get too deep into the details of a machine learning approach but wants a working knowledge of it. There are some great examples and test data in the textbook too.

Discovering Statistics Using R
This is a good book if you are new to statistics & probability while simultaneously getting started with a programming language. The book uses R and is written in a casual, humorous way, making it an easy read. Great for beginners. Some of the data on the companion website could be missing.

