Machine learning @ booking.com

Machine learning can make a service much more convenient for its users. Getting started with recommendations is not that hard: you can get the first results even without a well-established infrastructure. The main thing is to start, and only then build a large-scale system. That is how it all began at Booking.com. What came out of it, which approaches are used now, how models are brought into production, and how they are monitored is what Viktor Bilyk talked about at HighLoad++ Siberia. Mistakes and problems were not left out of the talk either: it will help some readers steer around the shoals, and push others toward new ideas.
About the speaker: Viktor Bilyk brings machine learning products into production at Booking.com.

First, let's look at where Booking.com uses machine learning, and in which products.

First of all, there is a large number of recommendation systems: for hotels, destinations, and dates, at different points of the sales funnel and in different contexts. For example, we try to guess where you will go even when you have not typed anything into the search box yet.
This is a screenshot from my own account, and I will definitely visit two of these destinations this year.
We process almost any text messages from customers, starting with banal spam filters and ending with complex products such as Assistant and ChatToBook, where models are used to detect intents and recognize entities. In addition, there are models that are less visible, for example, fraud detection.

We analyze reviews. Models tell us why people go to, say, Berlin.

Machine learning models also analyze what a hotel is praised for, so that you do not have to read thousands of reviews yourself.
In some places of our interface, almost every element is tied to the predictions of some model. For example, here we try to predict when a hotel will sell out.

We are often right: 19 hours later, the last room is already booked.

Or take the "Favorable Offer" badge. Here we try to formalize something subjective: what is a good offer in general? How do you know that the prices a hotel offers for these dates are good? Besides the price itself, this depends on many factors, such as additional services, and often on entirely external causes, for example, a football world championship or a large technical conference taking place in that city.
Beginning the implementation

Let's rewind a few years, to 2015. Some of the products I have mentioned already existed, while the system I will talk about today did not exist yet. How did deployment happen back then? Things were, frankly, not great. We had a huge problem, part of which was technical and part organizational.
We embedded data scientists into existing cross-functional teams working on specific user problems and expected that they would somehow improve the product.

Most often, these pieces of the product were built on a Perl stack. Perl has a very obvious problem: it is not designed for intensive computation, and our backend was already loaded with other things. At the same time, developing a serious system that would solve this problem could not be prioritized within a team, because the team's focus is on solving a user problem, not on solving a user problem using machine learning. The Product Owner (PO) would be quite against it.
Let's see how it worked at the time.

There were only two options. I know for sure, because back then I was working on such a team and helped the data scientists bring their first models into production.

The first option was materialization of predictions. Suppose there is a very simple model with only two features:
- the country the visitor is in;
- the city in which they are looking for a hotel.

We need to predict the likelihood of some event. We simply explode all the input vectors: say, 10,000 cities and 200 countries, which gives about 2 million rows in MySQL. It sounds like a perfectly workable option for shipping small ranking systems or other unpretentious models to production.
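As an illustration, here is a minimal sketch of such materialization in Python, using SQLite as a stand-in for MySQL; the model stub, the table layout, and the sizes are all made up for the example.

```python
import itertools
import sqlite3  # stand-in for MySQL in this sketch

# Hypothetical model: in reality this is whatever the data scientist
# trained offline; here it is just a deterministic stub.
def predict(country_id: int, city_id: int) -> float:
    return ((country_id * 31 + city_id * 17) % 100) / 100.0

countries = range(3)  # in production: ~200 countries
cities = range(4)     # in production: thousands of cities

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE predictions ("
    "country_id INT, city_id INT, p REAL, "
    "PRIMARY KEY (country_id, city_id))"
)

# Explode the full cross product of feature values and store one
# precomputed prediction per row; the backend then does a plain lookup.
rows = [(co, ci, predict(co, ci))
        for co, ci in itertools.product(countries, cities)]
conn.executemany("INSERT INTO predictions VALUES (?, ?, ?)", rows)

# The backend's "model call" is just a SELECT:
(p,) = conn.execute(
    "SELECT p FROM predictions WHERE country_id=? AND city_id=?",
    (1, 2),
).fetchone()
```

The obvious limit of the approach is visible in the cross product: every extra categorical feature multiplies the row count.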
The other option was embedding predictions directly into the backend code. There were serious limitations: hundreds, maybe thousands of coefficients was all we could afford.

Obviously, neither approach allows you to bring any complex model into production. This limited the data scientists and the success they could achieve by improving products. Clearly, this problem had to be solved somehow.
Prediction Service

The first thing we did was a prediction service. It is probably the simplest architecture ever shown on Habr or at HighLoad++.

We wrote a small application in Scala + Akka + Spray that simply received incoming vectors and returned predictions. Actually, I am being a little sly: the system was a bit more complicated, because we also had to monitor it and roll it out somehow. In reality, it all looked like this:
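The real service was written in Scala + Akka + Spray; as a rough sketch of its core idea only, here is what a model registry with a single scoring entry point might look like, in Python, with every name and weight invented for the example.

```python
# Minimal sketch of the core of such a service: a registry of models
# and one "score" entry point that an HTTP handler would call.
import math
from typing import Callable, Dict, List

ModelFn = Callable[[List[float]], float]
MODELS: Dict[str, ModelFn] = {}

def register(name: str, weights: List[float], bias: float) -> None:
    """Load a trained logistic-regression model into the registry."""
    def score(x: List[float]) -> float:
        z = bias + sum(w * v for w, v in zip(weights, x))
        return 1.0 / (1.0 + math.exp(-z))
    MODELS[name] = score

def handle_request(model_name: str, vector: List[float]) -> float:
    """What a handler for POST /predict/<model_name> would boil down to."""
    return MODELS[model_name](vector)

register("will_change_dates", weights=[0.8, -1.2], bias=0.1)
p = handle_request("will_change_dates", [1.0, 0.5])
```

The client libraries mentioned below hid exactly this kind of remote call behind what looked like a local function call.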
Booking.com has an event system, something like a shared log for all systems. It is very easy to write to it, and that stream is very easy to redirect. To start with, we needed to send client telemetry with perceived latencies, along with detailed information from the server side, to Graphite and Grafana.
We made simple client libraries for Perl that hid the whole RPC behind a local call, put several models into the service, and it started to take off. Such a product was easy enough to sell internally, because it gave us the ability to introduce more complex models while spending much less time doing so.

The data scientists could now work with far fewer restrictions, and the backend engineers' work was in some cases reduced to a one-liner.
Predictions in the product

But let's briefly go back to how we used these predictions in the product.

There is a model that makes a prediction based on known facts. Based on this prediction, we somehow change the user interface. This is, of course, not the only scenario for using machine learning in our company, but it is a common one.

What is the problem with launching such features? The thing is that they are two things in one bottle: the model and the user interface change. It is very difficult to separate the effect of one from the effect of the other.

Imagine launching the "Favorable Offer" badge as part of an A/B experiment. If it does not take off, that is, there is no statistically significant change in the target metrics, you do not know what the problem is: an unclear, small, unnoticeable badge, or a bad model.

In addition, models can degrade, and there can be many reasons for that. What worked yesterday does not necessarily work today. On top of that, we are constantly in cold-start mode: we keep connecting new cities and hotels, and people from new cities keep coming to us. We need to understand somehow whether the model still generalizes well in those parts of the input space.
Probably the most recent case of model degradation was the Alexa story. Most likely, as a result of retraining, it began to interpret random noise as a request to laugh, and started laughing at night, frightening its owners.
Monitoring predictions

To monitor the predictions, we slightly modified our system (diagram below). From the same event system, we redirected the stream to Hadoop and started saving, in addition to everything we saved before, all the input vectors and all the predictions our system made. Then, using Oozie, we aggregated them into MySQL, and from there a small web application showed them to anyone interested in the qualitative characteristics of the models.
However, it is important to figure out what to show there. The thing is that the usual metrics used when training models are very difficult to compute in our case, because we often have a huge delay in labels.

Consider an example. We want to predict whether the user is going on vacation alone or with their family. We need this prediction while the person is choosing a hotel, but we can learn the truth only a year later: only after the vacation will the user receive an invitation to leave a review, which, among other things, asks whether they were there alone or with their family.

That means we would need to store all predictions somewhere for a year, in a way that lets us quickly match them with the incoming labels. It sounded like a very serious, maybe even very heavy investment. So, until we tackled that problem, we decided to do something simpler.
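The postponed "heavy" option can be sketched like this: store every prediction keyed by user and model, and when the delayed label finally arrives, join it back and update a confusion matrix. Everything here (the names, the threshold, keeping the store in a dict) is a hypothetical simplification; in reality this would be a persistent store holding a year of predictions.

```python
from typing import Dict, Tuple

# (user_id, model) -> prediction, a stand-in for a persistent store
stored: Dict[Tuple[int, str], float] = {}

def record_prediction(user_id: int, model: str, p: float) -> None:
    stored[(user_id, model)] = p

tp = fp = fn = tn = 0

def record_label(user_id: int, model: str, truth: bool,
                 threshold: float = 0.5) -> None:
    """Called when the delayed label arrives; updates confusion counts."""
    global tp, fp, fn, tn
    p = stored.pop((user_id, model), None)
    if p is None:
        return  # prediction expired or was never made
    predicted = p >= threshold
    if predicted and truth: tp += 1
    elif predicted and not truth: fp += 1
    elif not predicted and truth: fn += 1
    else: tn += 1

record_prediction(1, "family_trip", 0.9)
record_prediction(2, "family_trip", 0.2)
record_label(1, "family_trip", truth=True)  # arrives months later
record_label(2, "family_trip", truth=True)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
```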
That "something simpler" was just a histogram of the predictions made by a model.

The chart above shows a logistic regression that predicts whether the user will change the dates of their trip. You can see that it separates users into two classes reasonably well: the hill on the left is those who will not do it, the hill on the right is those who will.

In fact, we even show two graphs: one for the current period and one for the previous one. It is clearly visible that this week (it is a weekly chart) the model predicts a date change a little more often. It is hard to say whether this is seasonality or actual degradation over time.
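This kind of monitoring is simple to sketch: bucket the current and the previous period's predictions into histograms and compare them. The bucketing scheme and the drift measure below are made up for illustration; the real pipeline aggregated via Oozie into MySQL.

```python
from collections import Counter
from typing import List

def histogram(predictions: List[float], buckets: int = 10) -> Counter:
    """Count predictions per equal-width bucket on [0, 1]."""
    return Counter(min(int(p * buckets), buckets - 1) for p in predictions)

this_week = [0.1, 0.15, 0.12, 0.85, 0.9, 0.88, 0.91]
last_week = [0.1, 0.14, 0.13, 0.11, 0.86, 0.9, 0.89]

h_now, h_prev = histogram(this_week), histogram(last_week)

# A crude drift signal: total absolute difference between the two
# normalized histograms; alert when it exceeds some threshold.
n_now, n_prev = len(this_week), len(last_week)
drift = sum(abs(h_now[b] / n_now - h_prev[b] / n_prev) for b in range(10))
```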
This changed the workflow of the data scientists, who stopped involving other people and began iterating on their models much faster. Together with the backend engineers, they shipped models to production in dry-run mode: the vectors were collected and the model made predictions, but those predictions were not used in any way.

In the case of the badge, we simply showed nothing, as before, but collected statistics. This allowed us not to waste time on projects doomed to fail, and freed up the time of frontend developers and designers for other experiments. Until a data scientist is sure the model works the way they want, they simply do not involve others in the process.
It is interesting to see how the graphs change across different slices.

On the left is the probability of changing dates on desktop, on the right, on tablets. It is clearly visible that on tablets the model predicts a date change as more likely. This is most likely because a tablet is often used for planning a trip and less often for booking.

It is also interesting to see how these graphs change as users move along the sales funnel.

On the left is the probability of changing dates on the search page, on the right, on the first booking page. You can see that far more people who have already settled on their dates make it to the booking page.
But those were the good graphs. What do the bad ones look like? Very different. Sometimes it is just noise; sometimes it is one huge hill, which means the model cannot effectively separate the two classes of predictions.

Sometimes there are huge peaks.

This is also a logistic regression, and up to a certain moment it showed a beautiful picture with two hills, but one morning it started looking like this.
To understand what happened inside, you need to understand how a logistic regression is computed.

Quick reference

The prediction is the logistic function of a scalar product:

p = 1 / (1 + exp(-(w1*x1 + ... + wn*xn)))

where the xi are features. One of these features was the price of a night at the hotel, in euros.

A call to this model looked like an ordinary function call with the feature values passed in, one of them being the hotel price. That price had to be converted into euros, but the developer forgot to do it.

Currencies like rupees or rubles inflated the scalar product many times over and thus forced the model to output values close to one much more often, which is exactly what we see on the graph.
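The effect of the forgotten conversion is easy to reproduce. In this sketch the weight, the bias, and the exchange rate are invented; the point is only that an unconverted price inflates the scalar product and pins the output near one.

```python
import math

def logistic(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

W_PRICE = 0.02   # hypothetical weight for "price per night, EUR"
BIAS = -2.0      # hypothetical intercept

def predict(price_eur: float) -> float:
    return logistic(BIAS + W_PRICE * price_eur)

price_rub = 7000.0                  # ~80 EUR at a made-up 87.5 RUB/EUR
good = predict(price_rub / 87.5)    # price converted to euros first
bad = predict(price_rub)            # the forgotten conversion

# "good" stays a moderate probability, while the raw ruble price blows
# the scalar product up and pushes the output to ~1.0.
```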
Thresholds

Another useful property of these histograms turned out to be the ability to choose threshold values consciously and optimally.

If you place a ball on the higher hill of the histogram, give it a push, and imagine where it stops, that point is the optimal one for separating the classes. Everything to the right is one class, everything to the left is the other.

However, if you start moving this point, you can achieve very interesting effects. Suppose we want to run an experiment that changes the user interface whenever the model says "yes". If you move the point to the right, the audience of the experiment shrinks, since the number of people who received this prediction is the area under the curve, but in return the precision of the predictions becomes much higher. Similarly, if you do not have enough statistical power, you can enlarge the audience of your experiment at the cost of lowering the precision of the predictions.
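The trade-off can be sketched numerically: sweep the threshold over scored examples with known labels and watch the audience shrink while precision grows. The scores and labels below are invented.

```python
from typing import List, Tuple

# (model score, true label) pairs, invented for the example
scored: List[Tuple[float, bool]] = [
    (0.2, False), (0.3, False), (0.45, True), (0.55, False),
    (0.6, True), (0.7, True), (0.8, True), (0.95, True),
]

def audience_and_precision(threshold: float) -> Tuple[int, float]:
    """How many users the experiment touches, and how precise it is."""
    flagged = [label for p, label in scored if p >= threshold]
    precision = sum(flagged) / len(flagged) if flagged else 0.0
    return len(flagged), precision

aud_low, prec_low = audience_and_precision(0.4)     # bigger audience
aud_high, prec_high = audience_and_precision(0.65)  # smaller, more precise
```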
Besides the predictions themselves, we also began to monitor the incoming values in the vectors.

One Hot Encoding

Most of the features in our simplest models are categorical. This means they are not numbers but categories: the city the user is from, or the city in which they are looking for a hotel. We use One Hot Encoding and turn each of the possible values into a one in a binary vector. Since at first we only used our own computational core, it was easy to detect situations where there is no slot in the input vector for an incoming category, that is, the model never saw this value during training.

This is what it usually looks like.

destination_id is the city where the user is looking for a hotel. It is quite natural that the model has not seen about 5% of the values, since we are constantly connecting new cities. For visitor_cty_id the share is around 32%, because data scientists sometimes deliberately drop rare cities.

In a bad case, it might look like this: three features at once, 100% of whose values the model has never seen. Most often this happens because of formats different from those used in training, or simply because of trivial typos.

Now, with the help of dashboards, we detect and fix such situations very quickly.
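A sketch of that check: encode a categorical feature against a vocabulary fixed at training time and count how often an incoming value has no slot in the vector. The vocabulary and the traffic below are invented; note how both a genuinely new city and a formatting mismatch show up as "unseen".

```python
from typing import Dict, List, Optional

def one_hot(value: str, vocab: Dict[str, int]) -> Optional[List[int]]:
    """Return the one-hot vector, or None if the value has no slot."""
    idx = vocab.get(value)
    if idx is None:
        return None
    vec = [0] * len(vocab)
    vec[idx] = 1
    return vec

# Vocabulary fixed at training time
vocab = {"amsterdam": 0, "berlin": 1, "paris": 2}

# "novosibirsk" is a new city; "BERLIN" is a format mismatch
incoming = ["berlin", "paris", "novosibirsk", "berlin", "BERLIN"]
unseen = sum(1 for city in incoming if one_hot(city, vocab) is None)
unseen_share = unseen / len(incoming)  # what the dashboard would plot
```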
Machine learning showcase

Let's talk about the other issues we solved. After we made the client libraries and the monitoring, the service began to gain momentum very quickly. We were literally flooded with requests from different parts of the company: "Let's hook up this model too! Let's update the old one!" We were simply swamped; in fact, all new development stopped.

We got out of this situation by building a self-service kiosk for data scientists. Now you can just go to our portal, the one we at first used only for monitoring, and load a model into production literally at the press of a button. In a few minutes it will be running and serving predictions.

There was one more problem.

Booking.com is about 200 IT teams. How does a team in a completely different part of the company learn that there is a model that could help them? You may simply not know that such a team even exists. How do you find out which models there are and how to use them? Traditionally, external communication in our teams is handled by the PO (Product Owner). This does not mean we have no other horizontal connections; it is just that POs do it more than others. But obviously, at such a scale, one-on-one communication does not scale. Something had to be done about it.
How can communication be facilitated?

We suddenly realized that the portal we had built solely for monitoring was gradually turning into a showcase of machine learning in our company.

We gave data scientists the ability to describe their models in detail. When there were many models, we added tags for topics and areas of applicability, for convenient grouping.

We linked our tool with ExperimentTool, a product inside our company that runs A/B experiments and stores the entire history of experimentation.

Now, along with the description of a model, you can also see what other teams have done with this model before, and how successfully. It changed everything.

Seriously, this changed the way our IT works, because machine learning can now be used even in teams that have no data scientist of their own.

For example, many teams use it during brainstorming. When they come up with new product ideas, they simply pick the models that suit them and use them. Nothing complicated is needed for that.
What did this mean for us? Right now, at peak, we serve about 200 thousand predictions per second, with latency under 20-30 ms including the HTTP round trip, and with more than 200 models deployed.
It may seem that it was an easy walk in the park: we did everything perfectly, everything works, everyone is happy!

This, of course, does not happen. There were mistakes. At the very beginning, for example, we planted a small time bomb. For some reason, we assumed that most of our models would be recommender systems with heavy input vectors, and the Scala + Akka stack was chosen precisely because it makes it very easy to organize parallel computation. But in reality, the overhead of all this parallelization and of gathering the results back together turned out to be higher than the possible gain. At some point, our 100 machines were handling only 10,000 RPS, and failures occurred with quite characteristic symptoms: CPU utilization is low, but we get timeouts.
Then we went back to our computational core, revised it, wrote benchmarks, and as a result of capacity testing we learned that we only needed 4 machines for the same traffic. Of course, we do not actually run it that way, because we have several data centers and need redundancy of computation and everything else; nevertheless, theoretically we could serve more than 10,000 RPS with just 4 machines.
We are constantly looking for new kinds of monitoring that can help us find and fix errors, but we do not always take steps in the right direction. At some point, we had a small number of models that were used literally across the entire funnel, starting with the index page and ending with the booking confirmation.

We decided to look at how the models change their predictions for the same user. We computed the variance, grouping everything by user ID, and found no serious problems: the predictions of the models were stable, the variance was around 0.
Another mistake, again both technical and organizational, was that we began to run up against memory limits.

The thing is that we store all models on all machines. We started running out of memory and decided it was time to shard. But at the same time, batching was in development: the ability to request many predictions from one model in a single call. Imagine, for example, a search page where something has to be predicted for every hotel on it.

When we started the sharding, we looked at the live data and were going to shard very simply, by model ID. The load and volume of the models were distributed approximately evenly, 49-51%. But by the time we finished the sharding, batching was already used in production. We had hot models that were used much more than others, and the imbalance was big. We will finally solve this problem when we move to containers.
Plans for the future

Label based metrics

First of all, we still want to give data scientists the ability to observe, in dynamics, the same metrics they use in training. We want label-based metrics, observing precision and recall in real time.

More tools & integrations

The company still has internal tools and products with which we are poorly integrated. Mostly these are high-load projects; for everything else we have made a couple of client libraries for Perl and Java, and everyone who needs them can use them. Analysts have easy integration with Spark and can use our models for their own purposes.

Reusable training pipelines

We want to be able to deploy custom code together with the models.

For example, imagine a spam classifier. All the procedures that happen before the input vector is obtained, such as splitting the text into sentences and words, and stemming, have to be repeated in the production environment, preferably in exactly the same way, to avoid mistakes.

We want to get rid of this problem: to deploy the piece of the pipeline developed for training a model along with the model itself. Then you could just send us the letters, and we would say spam or not spam.

Async models

We want to make asynchronous predictions. The complexity of our models is growing, but we consider anything slower than 50 ms to be very slow. Imagine a model that makes predictions based solely on the history of pages visited on our site. We can run such predictions while the page is being rendered, and then pick them up and use them whenever we need to.
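The reusable-pipeline idea can be sketched as bundling the exact preprocessing used in training with the model, so that serving cannot drift from it. The tokenizer, the crude "stemming", and the toy classifier here are all made up.

```python
from typing import Callable, List, Set

def preprocess(text: str) -> List[str]:
    # Split into words and crudely "stem" by lowercasing and stripping a
    # plural 's'; the same code must run in training and in serving.
    return [w.lower().rstrip("s") for w in text.split()]

class DeployedModel:
    """A model bundled together with its preprocessing step."""

    def __init__(self, pipeline: Callable[[str], List[str]],
                 spam_words: Set[str]):
        self.pipeline = pipeline
        self.spam_words = spam_words

    def predict(self, raw_text: str) -> bool:
        tokens = self.pipeline(raw_text)
        return any(t in self.spam_words for t in tokens)

model = DeployedModel(preprocess, spam_words={"viagra", "lottery"})
is_spam = model.predict("You won the LOTTERY")
```

Because the `preprocess` function travels with the model, a format change in training cannot silently diverge from what runs in production.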
Start small

The most important thing I learned while bringing models into production at Booking.com, and what I want you to remember, take home, and use, is: start small!

We achieved our first machine learning successes, funny to say, by exploding predictions into MySQL. Perhaps you, too, have first steps you can take right now; you do not need any sophisticated tooling for them. I have the same advice for data scientists. If you are not working with video, voice, or images, if your task is somehow tied to transactional data, do not reach for overly complex models right away.

Why do you need a neural network before you have tried logistic regression?
Monitor

You already monitor everything in the world: your software, your web servers, your hardware. A model is software too. The only difference is that it is software you did not write. It was written by another program, which in turn was written by a data scientist. Otherwise, everything is the same: input arguments, return values. Know what is happening in reality, how well the model copes with its job, whether everything is running normally. Monitor!
Organization footprint

Think about how your organization works. Virtually any step you take in this direction will change how the people around you work. Think about how you can help them solve their problems, and how together you can achieve great success.
(Don't) follow our steps

I have shared some of the successes, failures, and problems we ran into. I hope this helps someone steer around the shoals we ran aground on. Do as we did, but do not repeat our mistakes. Or do repeat them: who says your situation is exactly the same as ours? Who says that what did not work for us will not work for you?

Try, make mistakes, share your mistakes!
At HighLoad++ 2018, which will take place on November 8 and 9 in SKOLKOVO, 135 speakers are ready to share the results of their experiments. The schedule also includes 9 tracks of master classes and meetups. There are topics for everyone, and tickets can still be booked.