

Learning to Rank with TensorFlow

Luis Campos

10/04/2019


Alphabet, one of the largest Internet-based companies, has built its success on sophisticated information-retrieval algorithms since its origins. Now, more than 20 years later, one of its divisions is open-sourcing part of its secret sauce, drawing attention from developers all over the world.

Since Google was founded back in 1998, it has grown from a simple Ph.D. research project into one of the largest companies in history, with a market cap of over $800B and 100,000 employees all over the world. The complexity of such a huge firm is overwhelming, yet the principle behind its main product, the web search engine, is simple: provide users with the best information possible. Behind that simple principle, which every search engine is built on, lies a vast field of research in information retrieval, where ranking is one of the fundamental problems. But ranking, defined as the ordering of a list with the aim of maximizing its utility, is not useful just for search engines. Beyond information retrieval, it is widely applicable in domains such as natural language processing (NLP), machine translation, computational biology, and sentiment analysis.

How ‘Learning to Rank’ works

Increasingly, researchers approach ranking problems from a supervised Machine Learning perspective, using so-called Learning to Rank (LTR) techniques. LTR differs from standard supervised learning in that, instead of predicting a precise score or class for each sample, it aims to discover the best relative order for a group of items. From an ML point of view, there are three main approaches to this problem:

  • Pointwise: transforms the ranking problem into a regression or classification, where the relevance of each sample is predicted as a continuous value or a discrete class.
  • Pairwise: uses regression or classification to discover the best order between two items at a time, building the ranking for the whole group by looping through the list (a minimal sketch of this idea follows the list). Examples of pairwise methods include RankNet, LambdaRank and LambdaMART.
  • Listwise: tackles the problem as an optimization over the whole list. Instead of defining the loss function over each individual example (pointwise) or over the scores of a pair of examples (pairwise), the listwise loss is defined over the entire list of items. Examples include ListNet and ListMLE.
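As a concrete illustration of the pairwise idea, here is a minimal sketch of a RankNet-style pairwise logistic loss in plain TensorFlow; the function name and tensor shapes are illustrative, not part of any library API:

```python
import tensorflow as tf

def pairwise_logistic_loss(scores, labels):
    """RankNet-style pairwise loss: for every pair (i, j) where item i is more
    relevant than item j, penalize the model unless it scores i above j.

    scores: [batch, list_size] predicted relevance scores.
    labels: [batch, list_size] graded relevance labels.
    """
    # Entry (b, i, j) holds scores[b, i] - scores[b, j].
    score_diffs = tf.expand_dims(scores, 2) - tf.expand_dims(scores, 1)
    label_diffs = tf.expand_dims(labels, 2) - tf.expand_dims(labels, 1)
    # Only pairs where label_i > label_j contribute to the loss.
    valid_pairs = tf.cast(label_diffs > 0, tf.float32)
    # Logistic pair loss: log(1 + exp(-(score_i - score_j))).
    pair_losses = tf.nn.softplus(-score_diffs) * valid_pairs
    return tf.reduce_sum(pair_losses) / tf.maximum(tf.reduce_sum(valid_pairs), 1.0)
```

Driving this loss down pushes every correctly ordered pair apart, so minimizing it by gradient descent tends to recover the right relative order even though no item is ever assigned an absolute target score.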

Referring to this set-up, Google recently published a paper in which they state the following:

“While in a classification or a regression setting a label or a value is assigned to each individual document, in a ranking setting we determine the relevance ordering of the entire input document list. […] The majority of the existing learning-to-rank algorithms model such relativity at the loss level using pairwise or listwise loss functions. However, they are restricted to pointwise scoring functions, i.e., the relevance score of a document is computed based on the document itself, regardless of the other documents in the list. […] This setting could be less optimal for ranking problems for multiple reasons.”

To evaluate the quality of a ranking, the research paper proposes direct optimization over the ranking metric. In that document, as in many others in the literature, the ranking metric of choice is the Discounted Cumulative Gain (DCG). With DCG, the usefulness of a ranking is measured by the graded relevance of the items at each position, accumulated from the top of the list to the bottom with a logarithmic discounting factor:
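$$\mathrm{DCG}@k = \sum_{i=1}^{k} \frac{2^{rel_i} - 1}{\log_2(i + 1)}$$

where $rel_i$ is the graded relevance of the item ranked at position $i$ (this is the standard exponential-gain formulation; normalizing by the DCG of the ideal ordering yields NDCG).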

An important research challenge in learning-to-rank is the direct optimization of ranking metrics such as this one. These metrics, while measuring the performance of ranking systems better than indirect pointwise or pairwise surrogates, have the unfortunate property of being either discontinuous or flat as functions of the model's scores. Since the standard ML approach is to minimize a loss function by stochastic gradient descent, optimizing these metrics directly is problematic: the gradient is zero almost everywhere and undefined at the points where the ranking changes.
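A minimal NumPy sketch makes the problem concrete: DCG depends on the scores only through the ordering they induce, so small perturbations of the scores leave the metric unchanged until two items swap ranks, at which point it jumps:

```python
import numpy as np

def dcg(scores, labels, k=None):
    """DCG of the ranking induced by `scores` over items with graded `labels`."""
    order = np.argsort(-scores)              # sort items by descending score
    ranked_labels = labels[order][:k]
    positions = np.arange(1, len(ranked_labels) + 1)
    return np.sum((2.0 ** ranked_labels - 1) / np.log2(positions + 1))

labels = np.array([3.0, 1.0, 0.0])
print(dcg(np.array([2.0, 1.0, 0.5]), labels))  # ~7.63
print(dcg(np.array([2.1, 0.9, 0.4]), labels))  # ~7.63: same ordering, flat region
print(dcg(np.array([0.9, 2.1, 0.4]), labels))  # ~5.42: top two swapped, a jump
```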

Hands-on with TF-Ranking

Fortunately, Google recently open-sourced TF-Ranking, its TensorFlow-based library for learning-to-rank. As stated in the related paper, the library promises to be highly scalable and useful for learning ranking models over massive amounts of data. It provides, for example, a framework that addresses the ranking-metric optimization problem stated before through the so-called LambdaLoss method. Indeed, Google reports that TF-Ranking is already running in production inside Gmail and Google Drive.
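In the library, the LambdaLoss idea surfaces as a lambda weight attached to a standard loss. A minimal sketch, assuming the Estimator-era TF-Ranking API (exact signatures may differ across versions):

```python
import tensorflow_ranking as tfr

# A pairwise logistic loss reweighted by an NDCG-based lambda weight,
# which steers the pairwise updates toward directly improving NDCG.
loss_fn = tfr.losses.make_loss_fn(
    tfr.losses.RankingLossKey.PAIRWISE_LOGISTIC_LOSS,
    lambda_weight=tfr.losses.create_ndcg_lambda_weight())
```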

Applications are endless, and fortunately one of them happens to be quantitative finance. Ranking is a recurrent problem in portfolio management, where we aim to maximize future performance over a set of assets while meeting user-specific constraints. This problem shares an important feature with document retrieval: it is preferable to tolerate errors at lower-ranked positions than at higher ones. Just as a user is more likely to open a document at the top of the list without even checking further items, an ML investment strategy will likely allocate more weight to assets at the top of the ranking, which is why a good loss function should penalize errors there to a greater extent than errors in lower positions.

In particular, given a set of n assets alongside market- and sentiment-related features and a score reflecting future performance, we might want to build a predictive model able to rank out-of-sample data so that future performance is maximized. The implementation of this problem in TF-Ranking would have the following structure (a condensed sketch follows the list):

  1. Dependencies and global variables definition
  2. Input Pipeline definition
  3. Scoring Function definition
  4. Evaluation metrics
  5. Estimator initialization
  6. Model training, testing and visualizing
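Putting the six steps together, here is a condensed sketch written against the TF 1.x Estimator API that TF-Ranking targeted at the time; the file paths, feature count, list size and network widths are illustrative placeholders, not values from any real dataset:

```python
import tensorflow as tf
import tensorflow_ranking as tfr

# 1. Dependencies and global variables (all values are illustrative).
_TRAIN_PATH, _TEST_PATH = "train.libsvm", "test.libsvm"  # hypothetical files
_NUM_FEATURES = 50   # market and sentiment features per asset
_LIST_SIZE = 100     # number of assets ranked per list
_BATCH_SIZE = 32
_LOSS = tfr.losses.RankingLossKey.PAIRWISE_LOGISTIC_LOSS

def example_feature_columns():
    return {str(k): tf.feature_column.numeric_column(str(k), shape=(1,))
            for k in range(1, _NUM_FEATURES + 1)}

# 2. Input pipeline: TF-Ranking ships a generator for LibSVM-formatted data.
def input_fn(path):
    dataset = tf.data.Dataset.from_generator(
        tfr.data.libsvm_generator(path, _NUM_FEATURES, _LIST_SIZE),
        output_types=({str(k): tf.float32
                       for k in range(1, _NUM_FEATURES + 1)}, tf.float32),
        output_shapes=({str(k): tf.TensorShape([_LIST_SIZE, 1])
                        for k in range(1, _NUM_FEATURES + 1)},
                       tf.TensorShape([_LIST_SIZE])))
    dataset = dataset.shuffle(1000).repeat().batch(_BATCH_SIZE)
    return dataset.make_one_shot_iterator().get_next()

# 3. Scoring function: a small feed-forward network scoring one asset at a time.
def make_score_fn():
    def _score_fn(context_features, group_features, mode, params, config):
        example_input = [tf.layers.flatten(group_features[name])
                         for name in sorted(example_feature_columns())]
        cur_layer = tf.concat(example_input, 1)
        for width in [64, 32]:
            cur_layer = tf.layers.dense(cur_layer, units=width,
                                        activation=tf.nn.relu)
        return tf.layers.dense(cur_layer, units=1)  # one relevance score
    return _score_fn

# 4. Evaluation metrics: NDCG at a few cutoffs.
def eval_metric_fns():
    return {"metric/ndcg@%d" % topn: tfr.metrics.make_ranking_metric_fn(
                tfr.metrics.RankingMetricKey.NDCG, topn=topn)
            for topn in [1, 5, 10]}

# 5. Estimator initialization.
def _train_op_fn(loss):
    return tf.train.AdagradOptimizer(learning_rate=0.1).minimize(
        loss, global_step=tf.train.get_global_step())

ranking_head = tfr.head.create_ranking_head(
    loss_fn=tfr.losses.make_loss_fn(_LOSS),
    eval_metric_fns=eval_metric_fns(),
    train_op_fn=_train_op_fn)

estimator = tf.estimator.Estimator(
    model_fn=tfr.model.make_groupwise_ranking_fn(
        group_score_fn=make_score_fn(),
        group_size=1,
        transform_fn=None,
        ranking_head=ranking_head),
    model_dir="/tmp/ranking_model")

# 6. Model training and testing; metrics land in model_dir for TensorBoard.
estimator.train(input_fn=lambda: input_fn(_TRAIN_PATH), steps=1000)
estimator.evaluate(input_fn=lambda: input_fn(_TEST_PATH), steps=100)
```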

In addition to this programming simplicity, TF-Ranking is integrated with the rest of the TensorFlow ecosystem. For example, the train and evaluation steps above store checkpoints, metrics, and other useful information about the network that can be visualized using TensorBoard.
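Pointing TensorBoard at the estimator's model directory (the illustrative /tmp/ranking_model path from the sketch above) surfaces the loss curve and the NDCG metrics as training progresses; from a shell:

```
tensorboard --logdir=/tmp/ranking_model
```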

Indeed, TF-Ranking is a great add-on to the TensorFlow stack. It is optimized for large datasets and provides a very simple developer experience based on TensorFlow Estimators. What is more, as the open-source community adopts it, expect more functionality along the way, such as a user-friendly Keras API.

I believe that Machine Learning techniques such as LTR, far from being limited to specific narrow-scope problems, can make an impact across every industry. What do you think?
