An FPGA-based accelerator for LambdaRank in Web search engines

ACM Transactions on Reconfigurable Technology and Systems


In modern Web search engines, Neural Network (NN)-based learning-to-rank algorithms are used intensively to improve the quality of search results. LambdaRank is one such algorithm. However, it is difficult to accelerate efficiently on computer clusters or GPUs, because (i) the cost function for the ranking problem is far more complex than that of a traditional Back-Propagation (BP) NN, and (ii) the algorithm exposes no coarse-grained parallelism. This article presents an FPGA-based accelerator that delivers high computing performance with low power consumption. A compact deep pipeline is proposed to handle the complex computation in batch updating, and its area scales linearly with the number of hidden nodes in the network. We also carefully design a data format that enables streaming consumption of the training data from the host computer. The accelerator achieves up to 15.3X (with PCIe x4) and 23.9X (with PCIe x8) speedup over a pure software implementation on datasets from a commercial search engine.
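
To illustrate why the ranking cost function is heavier than that of a plain BP network, the following is a minimal Python sketch of the standard LambdaRank lambda-gradient computation (the RankNet-style pairwise gradient scaled by the change in NDCG from swapping two documents). It is not the paper's FPGA pipeline; the function name, parameters, and the small example query at the end are illustrative only.

    import numpy as np

    def lambda_gradients(scores, relevance, sigma=1.0):
        """Illustrative LambdaRank lambda computation for one query.

        scores:    model scores s_i for each document (1-D array)
        relevance: graded relevance labels, used for |delta NDCG|
        Returns one lambda per document, accumulated over all document
        pairs, which is what makes the cost pairwise rather than the
        per-sample error of a traditional BP network.
        """
        n = len(scores)
        # Ideal DCG for normalisation (gains 2^rel - 1, log2 discounts).
        gains = 2.0 ** relevance - 1.0
        discounts = 1.0 / np.log2(np.arange(2, n + 2))
        ideal_dcg = np.sum(np.sort(gains)[::-1] * discounts)

        # Discount each document receives at its current ranked position.
        order = np.argsort(-scores)
        pos_discount = np.empty(n)
        pos_discount[order] = discounts

        lambdas = np.zeros(n)
        for i in range(n):
            for j in range(n):
                if relevance[i] <= relevance[j]:
                    continue  # only pairs where i should outrank j
                # |change in NDCG| if documents i and j were swapped.
                delta_ndcg = abs((gains[i] - gains[j]) *
                                 (pos_discount[i] - pos_discount[j])) / ideal_dcg
                # RankNet-style pairwise gradient, scaled by the NDCG change.
                lam = -sigma / (1.0 + np.exp(sigma * (scores[i] - scores[j]))) * delta_ndcg
                lambdas[i] += lam
                lambdas[j] -= lam
        return lambdas

    # Hypothetical example: three documents for one query.
    print(lambda_gradients(np.array([0.2, 1.5, 0.3]),
                           np.array([0, 2, 1])))

The nested pair loop and the NDCG re-evaluation inside it are the source of the complexity the abstract refers to: every weight update depends on all document pairs of a query, which leaves no coarse-grained parallelism to exploit and motivates the deep pipeline used in the accelerator.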