In this post, we present the new simulator we developed at Criteo Marketplace
Decisions in rich environments are hard to model. At Criteo, we receive billions of display opportunities on a daily basis. Every time we see a display opportunity, we have only a few milliseconds to answer many questions. Which campaigns would we like to display? Which products are we going to show? How should the banner look? And, last but not least: how much are we going to bid on the RTB market?
Our bidding algorithm is crucial, for it determines the traffic we get for our clients. The price, the volume, the quality, and the spread over time of that traffic all follow from the bid. We must take care of it! The algorithm combines machine learning and control tools to optimize our partners' value. New versions are usually AB tested online, which is the industry's standard practice.
Still, we may be reluctant to run an AB test when we fear a modification might break something. Besides, we cannot test hundreds of strategies at the same time with standard online tests.
Thus, no surprise: an offline testing tool providing business metrics is a very valuable instrument.
In a recent project, we developed a simulator for our bidding controller.
What the controller is
The controller is the key component of our bidding algorithm handling the bid levels. It is in charge of reaching the campaign objectives over time.
The controller should be precise (the campaign constraints should be met). It should be reactive (it should adapt to a change of market behavior, a marketing event, etc.). And, of course, stable. Unfortunately, there is a trade-off between the last two criteria: one cannot be both very reactive and very stable.
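The post does not detail the controller's internals, but the reactivity/stability trade-off can be illustrated with a classic proportional-integral (PI) controller. This is a minimal sketch, not Criteo's actual controller: the spend target, bid multiplier, and gains `kp`/`ki` are all illustrative. A high proportional gain reacts quickly to errors but can oscillate; a low one is stable but slow to correct.

```python
class PIBidController:
    """Toy PI controller tracking a per-step spend target (illustrative only)."""

    def __init__(self, target_spend, kp=0.5, ki=0.1):
        self.target = target_spend  # desired spend per control step
        self.kp = kp                # proportional gain: sets reactivity
        self.ki = ki                # integral gain: removes steady-state error
        self.integral = 0.0
        self.multiplier = 1.0       # multiplier applied to the base bids

    def update(self, observed_spend):
        # Relative error between target and what we actually spent.
        error = (self.target - observed_spend) / self.target
        self.integral += error
        # Underspending raises the multiplier, overspending lowers it.
        self.multiplier = max(0.1, 1.0 + self.kp * error + self.ki * self.integral)
        return self.multiplier
```

Raising `kp` makes the controller chase the target faster but amplifies noise in the observed spend, which is exactly the reactivity-versus-stability tension described above.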
In general, how does a simulator work?
You have to create a virtual environment that mimics reality. It can be deterministic (an input always yields the same output) or stochastic (the environment reacts at random).
For the data you have two main options: sample events from your log or learn parameters to calibrate generative models. Depending on the application, one approach might be more straightforward than the other.
We have to deal with a specificity of advertising auctions. Indeed, we do not know if a display opportunity we lost would have led to a click (or a conversion). Likewise, we do not know the minimal bid that would have led to a won auction. Thus, working only with won opportunities would bias our simulations.
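A toy simulation (with made-up numbers, not Criteo data) makes this censoring bias concrete: if competing bids track the quality of the opportunity, then the auctions we win are systematically the lower-quality ones, and averaging over won displays only underestimates the true average quality.

```python
import random

random.seed(0)
our_bid = 1.0
all_quality, won_quality = [], []
for _ in range(100_000):
    quality = random.random()                     # latent value of the opportunity
    competitor = quality + random.uniform(0, 1)   # competition tracks quality
    all_quality.append(quality)
    if our_bid > competitor:                      # we only observe won auctions
        won_quality.append(quality)

print(sum(all_quality) / len(all_quality))   # close to 0.5: true average quality
print(sum(won_quality) / len(won_quality))   # noticeably lower: the censored view
```

The gap between the two averages is the bias a simulator would inherit if it were calibrated on won opportunities alone.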
What our new simulator does
Our simulator is of the stochastic kind, and uses the second approach. We first learn from the data:
– the distribution of the display opportunity properties
– the competition intensity on the RTB market
– the spread of the opportunities along the day
– the delay of the sales
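The calibration of two of these ingredients could be sketched as follows. The log fields and the distribution families (an empirical hourly intensity, a lognormal law for prices) are assumptions of ours for illustration, not Criteo's actual models.

```python
import math
from collections import Counter

# Hypothetical auction log: (hour_of_day, winning_price) pairs.
log = [(9, 0.8), (9, 1.2), (10, 0.5), (10, 0.9), (10, 1.1), (14, 0.7)]

# 1) Spread of opportunities along the day: empirical hourly intensity.
hourly_counts = Counter(hour for hour, _ in log)
total = sum(hourly_counts.values())
hourly_intensity = {h: c / total for h, c in hourly_counts.items()}

# 2) Competition intensity: fit a lognormal law to winning prices.
#    (The lognormal MLE is just the mean/std of the log-prices.)
log_prices = [math.log(p) for _, p in log]
mu = sum(log_prices) / len(log_prices)
sigma = (sum((x - mu) ** 2 for x in log_prices) / len(log_prices)) ** 0.5

print(hourly_intensity)   # fraction of daily opportunities per hour
print(mu, sigma)          # parameters used to sample competing prices later
```

Once fitted, these parameters let the simulator draw realistic opportunity streams instead of replaying the (censored) log directly.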
The specificity of this simulator is our modelling of the sources of instability. We took particular care with the information delay and the scarcity of conversion events.
In particular, we cannot know at display time whether a banner will trigger a conversion: the conversion (if any) will happen in the future. Since the sales delay impacts the reactivity and the stability of the controller, we need to model it in the simulator. To do so, we fit the historical delays of each campaign with Weibull or exponential laws.
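The delay fit might look like the sketch below. The delay values are made up, and the grid-search Weibull estimator is a simple stand-in for whatever estimator is used in practice; it exploits the fact that, for a fixed shape, the Weibull scale has a closed-form MLE.

```python
import math

delays = [0.5, 1.2, 2.0, 3.5, 0.8, 4.1, 1.5, 2.7, 0.3, 6.0]  # hours, toy data

def exp_loglik(xs):
    """Exponential MLE: rate = n / sum(x); returns (log-likelihood, rate)."""
    lam = len(xs) / sum(xs)
    return sum(math.log(lam) - lam * x for x in xs), lam

def weibull_loglik(xs, k):
    """Profile log-likelihood for Weibull shape k, with the MLE scale plugged in."""
    scale = (sum(x ** k for x in xs) / len(xs)) ** (1 / k)
    return sum(math.log(k / scale) + (k - 1) * math.log(x / scale)
               - (x / scale) ** k for x in xs), scale

ll_exp, rate = exp_loglik(delays)
# Grid search over the shape; the grid contains k = 1.0, which is exactly
# the exponential law, so the Weibull fit can never be worse.
ll_wei, shape = max((weibull_loglik(delays, k / 10)[0], k / 10)
                    for k in range(2, 50))

law = "weibull" if ll_wei > ll_exp else "exponential"
print(law, round(shape, 2), round(rate, 3))
```

Selecting between the two laws by likelihood per campaign mirrors the "Weibull or exponential" choice mentioned above.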
We then use the results of this calibration phase to generate realistic scenarios. We can now test variations of our controller on the generated scenarios and benchmark them using business metrics!
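A minimal sketch of this last phase, with all parameters invented: replay two controller variants against identical random scenarios (same seed, so both see the same auctions) and compare a business metric.

```python
import random

def simulate_day(bid_multiplier, seed):
    """Replay one generated scenario for a given (toy) controller setting."""
    rng = random.Random(seed)          # same seed => same scenario for everyone
    spend = wins = 0.0
    for _ in range(1000):              # display opportunities in the scenario
        competitor = rng.lognormvariate(0.0, 0.5)  # calibrated competition law
        bid = bid_multiplier * 1.0                 # base bid times the multiplier
        if bid > competitor:           # second-price auction: pay the competitor
            spend += competitor
            wins += 1
    return wins, spend

for name, mult in [("baseline", 1.0), ("candidate", 1.2)]:
    wins, spend = simulate_day(mult, seed=42)
    print(f"{name}: {int(wins)} wins, cost per display {spend / wins:.3f}")
```

Pinning the random seed makes the comparison paired, so the difference between variants reflects the controller change rather than scenario noise.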
Post written by:
Software Engineer, Engine R&D