What Every Manager Should Know About Machine Learning

Here is an excerpt from an article written by Mike Yeomans for Harvard Business Review and the HBR Blog Network. To read the complete article, check out the wealth of free resources, obtain subscription information, and receive HBR email alerts, please click here.

* * *

Perhaps you heard recently about a new algorithm that can drive a car? Or invent a recipe? Or scan a picture and find your face in a crowd? It seems as though every week companies are finding new uses for algorithms that adapt as they encounter new data. Last year Wired quoted an ex-Google employee as saying that “Everything in the company is really driven by machine learning.”

Machine learning has tremendous potential to transform companies, but in practice it’s mostly far more mundane than robot drivers and chefs. Think of it simply as a branch of statistics, designed for a world of big data. Executives who want to get the most out of their companies’ data should understand what it is, what it can do, and what to watch out for when using it.

Not just big data, but wide data

The enormous scale of data available to firms can pose several challenges. Of course, big data may require advanced software and hardware to handle and store it. But machine learning is about how the analysis of the data also has to adapt to the size of the dataset. This is because big data is not just long, but wide as well. For example, consider an online retailer’s database of customers in a spreadsheet. Each customer gets a row, and if there are lots of customers then the dataset will be long. However, every variable in the data gets its own column, too, and we can now collect so much data on every customer – purchase history, browser history, mouseclicks, text from reviews – that the data are usually wide as well, to the point where there are even more columns than rows. Most of the tools in machine learning are designed to make better use of wide data.
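
To make the "long versus wide" distinction concrete, here is a minimal sketch in Python (the article itself contains no code; the customer and variable counts below are hypothetical) of a dataset with far more columns than rows:

```python
import numpy as np
import pandas as pd

# Hypothetical wide dataset: 100 customers (rows) but 10,000 variables (columns),
# e.g., one column per product viewed, page clicked, or word used in a review.
n_customers, n_variables = 100, 10_000
rng = np.random.default_rng(0)

customers = pd.DataFrame(
    rng.integers(0, 2, size=(n_customers, n_variables)),
    columns=[f"var_{i}" for i in range(n_variables)],
)

print(customers.shape)  # (100, 10000) -- many more columns than rows
```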

Predictions, not causality

The most common application of machine learning tools is to make predictions. Here are a few examples of prediction problems in a business:

* Making personalized recommendations for customers
* Forecasting long-term customer loyalty
* Anticipating the future performance of employees
* Rating the credit risk of loan applicants

These settings share some common features. For one, they are all complex environments, where the right decision might depend on a lot of variables (which means they require “wide” data). They also have some outcome to validate the results of a prediction – like whether someone clicks on a recommended item, or whether a customer buys again. Finally, there is an important business decision to be made that requires an accurate prediction.

One important difference from traditional statistics is that you’re not focused on causality in machine learning. That is, you might not need to know what happens when you change the environment. Instead you are focusing on prediction, which means you might only need a model of the environment to make the right decision. This is just like deciding whether to leave the house with an umbrella: we have to predict the weather before we decide whether to bring one. The weather forecast is very helpful but it is limited; the forecast might not tell you how clouds work, or how the umbrella works, and it won’t tell you how to change the weather. The same goes for machine learning: personalized recommendations are forecasts of people’s preferences, and they are helpful, even if they won’t tell you why people like the things they do, or how to change what they like. If you keep these limitations in mind, the value of machine learning will be a lot more obvious.

Separating the signal from the noise

So far we’ve talked about when machine learning can be useful. But how is it used, in practice? It would be impossible to cover it all in one article, but roughly speaking there are three broad concepts that capture most of what goes on under the hood of a machine learning algorithm: feature extraction, which determines what data to use in the model; regularization, which determines how the data are weighted within the model; and cross-validation, which tests the accuracy of the model. Each of these factors helps us identify and separate “signal” (valuable, consistent relationships that we want to learn) from “noise” (random correlations that won’t occur again in the future, that we want to avoid). Every dataset has a mix of signal and noise, and these concepts will help you sort through that mix to make better predictions.
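
The article stops at the concepts, but a short sketch can show how they fit together in practice. The example below uses Python and scikit-learn (my choice of tools, not the author's) on simulated data: the lasso penalty is one form of regularization that also performs a kind of feature extraction by shrinking uninformative columns to zero, and cross-validation scores the fitted model on data it never saw, estimating how much of what it learned is signal rather than noise.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score

# Simulated "wide-ish" data: 200 rows, 500 columns, only 10 of which carry signal.
X, y = make_regression(n_samples=200, n_features=500, n_informative=10,
                       noise=10.0, random_state=0)

# Regularization: the lasso penalty shrinks most coefficients to exactly zero,
# which acts as a form of feature extraction -- uninformative columns drop out.
model = LassoCV(cv=5, random_state=0).fit(X, y)
print("columns kept by the model:", (model.coef_ != 0).sum())

# Cross-validation: score the model on held-out folds it was not fit to,
# so that random noise in the training data cannot inflate the result.
scores = cross_val_score(LassoCV(cv=5, random_state=0), X, y, cv=5)
print("held-out R^2:", round(scores.mean(), 2))
```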

* * *

Here is a direct link to the complete article.

Mike Yeomans is a post-doctoral fellow in the Department of Economics at Harvard University.
