We develop new AI methods to build effective and accurate recommender systems.
Abstract: This paper focuses on developing effective and efficient algorithms for top-N recommender systems. A novel Sparse Linear Method (SLIM) is proposed, which generates top-N recommendations by aggregating from user purchase/rating profiles. A sparse aggregation coefficient matrix W is learned from SLIM by solving an ℓ1-norm and ℓ2-norm regularized optimization problem. W is demonstrated to produce high quality recommendations and its sparsity allows SLIM to generate recommendations very fast. A comprehensive set of experiments is conducted by comparing the SLIM method and other state-of-the-art top-N recommendation methods. The experiments show that SLIM achieves significant improvements both in run time performance and recommendation quality over the best existing methods.
Xia Ning and George Karypis. SLIM: Sparse linear methods for top-n recommender systems. In Proceedings of the 2011 IEEE 11th International Conference on Data Mining, ICDM’11, pages 497–506, Dec 2011. https://ieeexplore.ieee.org/document/6137254
The SLIM code is available here and here. SLIM has been used by Mendeley, Alibaba, eBay, etc., has been re-implemented in R, Python, C# and Java, and has been included in various libraries (LibRec, MyMediaLite, mrec) and textbooks. The SLIM paper won the 10-Year Highest-Impact Paper Award at the IEEE International Conference on Data Mining (ICDM) 2020.
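The SLIM scheme described above can be illustrated with a minimal sketch: learn a sparse, nonnegative item-item coefficient matrix W (zero diagonal) column by column via an ℓ1/ℓ2-regularized least-squares fit, then score each user by aggregating over their profile. This is a numpy-only toy with a simple coordinate-descent solver and assumed hyperparameters, not the authors' optimized implementation.

```python
# Minimal SLIM-style sketch (assumed hyperparameters, simple coordinate
# descent); not the authors' optimized implementation.
import numpy as np

def learn_slim(A, l1=0.1, l2=0.5, n_iters=50):
    """Learn a sparse, nonnegative item-item matrix W with zero diagonal by
    solving an l1/l2-regularized least-squares problem per item column."""
    n_items = A.shape[1]
    W = np.zeros((n_items, n_items))
    col_sq = (A ** 2).sum(axis=0)
    for j in range(n_items):
        w = np.zeros(n_items)
        for _ in range(n_iters):
            for k in range(n_items):
                if k == j or col_sq[k] == 0:
                    continue  # enforce zero diagonal; skip empty columns
                w[k] = 0.0
                residual = A[:, j] - A @ w
                rho = A[:, k] @ residual
                # soft-thresholding with a nonnegativity constraint
                w[k] = max(rho - l1, 0.0) / (col_sq[k] + l2)
        W[:, j] = w
    return W

def recommend_topn(A, W, user, n=2):
    scores = A[user] @ W              # aggregate over the user's profile
    scores[A[user] > 0] = -np.inf     # exclude already purchased/rated items
    return np.argsort(-scores)[:n]

# Toy binary purchase matrix (4 users x 5 items)
A = np.array([[1, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [1, 1, 1, 0, 0]], dtype=float)
W = learn_slim(A)
rec = recommend_topn(A, W, user=1, n=2)
```

Because W is sparse, scoring at recommendation time reduces to a few sparse dot products per user, which is what makes SLIM fast in practice.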
Abstract: Sequential recommendation aims to identify and recommend the next few items that a user is most likely to purchase/review, given the user's purchase/rating trajectories. It has become an effective tool to help users select favorite items from a variety of options. In this manuscript, we developed hybrid associations models (HAM) to generate sequential recommendations using three factors: 1) users' long-term preferences, 2) sequential, high-order and low-order association patterns in the users' most recent purchases/ratings, and 3) synergies among those items. HAM uses simple pooling to represent a set of items in the associations, and element-wise product to represent item synergies of arbitrary orders. We compared HAM models with the most recent, state-of-the-art methods on six public benchmark datasets in three different experimental settings. Our experimental results demonstrate that HAM models significantly outperform the state of the art in all the experimental settings, with an improvement of as much as 46.6%. In addition, our run-time performance comparison during testing demonstrates that HAM models are much more efficient than the state-of-the-art methods, achieving speedups of as much as 139.7×.
Bo Peng, Zhiyun Ren, Srinivasan Parthasarathy, and Xia Ning. HAM: Hybrid associations model with pooling for sequential recommendation. IEEE Transactions on Knowledge and Data Engineering, vol. 34, no. 10, pp. 4838-4853, 1 Oct. 2022, doi: 10.1109/TKDE.2021.3049692. Code is available here.
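The pooling and synergy operations in HAM's abstract can be sketched as follows. Mean pooling, element-wise product, and the additive fusion of the three factors are assumptions made for illustration, not the authors' exact formulation.

```python
# Hypothetical sketch of HAM-style pooling and synergy (mean pooling,
# element-wise product, additive fusion are assumptions).
import numpy as np

def pool(item_embeddings):
    """Represent a set of items by mean-pooling their embeddings."""
    return item_embeddings.mean(axis=0)

def synergy(item_embeddings):
    """Represent item synergies of arbitrary order via element-wise product."""
    return np.prod(item_embeddings, axis=0)

def score(user_embedding, recent_items, candidate_embedding):
    """Combine long-term preference, pooled recent-item associations, and
    item synergies into one recommendation score (additive fusion assumed)."""
    s_long = user_embedding @ candidate_embedding   # long-term preference
    s_assoc = pool(recent_items) @ candidate_embedding
    s_syn = synergy(recent_items) @ candidate_embedding
    return s_long + s_assoc + s_syn

rng = np.random.default_rng(0)
d = 8
user = rng.normal(size=d)
recent = rng.normal(size=(3, d))   # the user's three most recent items
cand = rng.normal(size=d)
s = score(user, recent, cand)
```

Both operations are parameter-free and order-agnostic over the pooled set, which is why HAM is so much cheaper at test time than recurrent models.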
Abstract: Next-basket recommendation considers the problem of recommending the set of items that users will purchase as a whole in their next basket. In this paper, we develop a novel mixed model with preferences, popularities and transitions (M2) for next-basket recommendation. This method models three important factors in the next-basket generation process: 1) users' general preferences, 2) items' global popularities and 3) transition patterns among items. Unlike existing recurrent neural network-based approaches, M2 does not use complicated networks to model the transitions among items, or generate embeddings for users. Instead, it uses a simple encoder-decoder based approach (ed-Trans) to better model the transition patterns among items. We compared M2, with different combinations of the factors, against 5 state-of-the-art next-basket recommendation methods on 4 public benchmark datasets in recommending the first, second and third next basket. Our experimental results demonstrate that M2 significantly outperforms the state-of-the-art methods on all the datasets in all the tasks, with an improvement of up to 22.1%. In addition, our ablation study demonstrates that ed-Trans is more effective than recurrent neural networks in terms of recommendation performance. We also provide a thorough discussion of various experimental protocols and evaluation metrics for next-basket recommendation evaluation.
Bo Peng, Zhiyun Ren, Srinivasan Parthasarathy, and Xia Ning. M2: Mixed models with preferences, popularities and transitions for next-basket recommendation. IEEE Transactions on Knowledge and Data Engineering, 35(4):4033–4046, 2022. https://www.computer.org/csdl/journal/tk/2023/04/09681238/1A8c5TwZjtS
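The three-factor score composition described in the M2 abstract can be sketched as below: a preference term, a popularity term, and an encoder-decoder transition term over the previous basket. The linear encoder/decoder and additive combination are illustrative assumptions, not the authors' exact model or trained parameters.

```python
# Hypothetical sketch of M2's three-factor scoring (linear encoder/decoder
# and additive combination are assumptions, with random untrained weights).
import numpy as np

rng = np.random.default_rng(1)
n_items, d = 6, 4

# Factor 1: users' general preferences (user-item affinity via embeddings)
user_emb = rng.normal(size=d)
item_emb = rng.normal(size=(n_items, d))

# Factor 2: items' global popularities (e.g., log purchase counts)
popularity = np.log1p(rng.integers(1, 100, size=n_items))

# Factor 3: ed-Trans-style transition: encode the previous basket into a
# latent vector, then decode it into item-space transition scores.
W_enc = rng.normal(size=(n_items, d)) * 0.1
W_dec = rng.normal(size=(d, n_items)) * 0.1

def next_basket_scores(prev_basket):
    basket_vec = np.zeros(n_items)
    basket_vec[prev_basket] = 1.0
    transition = (basket_vec @ W_enc) @ W_dec   # encoder-decoder transition
    preference = item_emb @ user_emb
    return preference + popularity + transition

scores = next_basket_scores(prev_basket=[0, 2])
topk = np.argsort(-scores)[:3]
```

In contrast to RNN-based next-basket models, this transition component is a single encode-decode pass over a basket indicator vector, with no recurrence over time steps.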