Title | : | Probabilistic Factorization Machine |
Speaker | : | Avijit Saha (IITM) |
Details | : | Tue, 7 Apr, 2015 2:00 PM @ BSB 361 |
Abstract | : | Support Vector Machines (SVMs) are among the most popular predictors in machine learning and work on generic features extracted from the data. However, SVMs fail on very sparse data, as encountered in applications like recommender systems. Factorization models such as matrix factorization and tensor factorization, on the other hand, consider interactions between all variables and provably work well on sparse data, but designing a factorization model specific to a given problem requires expert knowledge. The factorization machine (FM) is a generic framework that combines the advantages of both: it represents the data in a generic feature format, as SVMs do, while modeling the interactions between each pair of variables, as factorization models do. We develop a scalable variational Bayesian inference algorithm for the factorization machine that converges faster than the existing state-of-the-art Markov chain Monte Carlo based inference algorithm and does not suffer from convergence issues. Additionally, for large-scale learning, we develop a stochastic variational Bayesian algorithm for the factorization machine that uses stochastic gradient descent to optimize the lower bound in the variational approximation. This stochastic method outperforms existing online algorithms for the factorization machine, as validated by extensive experiments on numerous large-scale real-world datasets. In another direction, the FM places a Gaussian assumption on the data, which is unsuitable for many real-world datasets represented as count data, for which a discrete distribution is a better fit than a continuous one. Hence, we develop a Parametric Poisson Factorization Machine (PPFM) algorithm that models the data with a Poisson distribution.
Additionally, the FM requires manual tuning to find the latent dimension (model selection), which is an expensive process and becomes difficult for large datasets. Hence, we also develop a Non-parametric Poisson Factorization Machine (NPFM) algorithm in which the number of latent factors is theoretically unbounded and is estimated while computing the posterior. Both PPFM and NPFM have linear time and space complexity in the number of observations. Extensive experiments on four different movie review datasets show that our methods outperform two strong baselines by substantial margins. |
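To make the FM framework discussed above concrete, the following minimal sketch computes the standard second-order FM prediction, y = w0 + ⟨w, x⟩ + Σ_{i<j} ⟨v_i, v_j⟩ x_i x_j, using Rendle's O(kn) reformulation of the pairwise term. This is an illustrative example only, not the speaker's Bayesian inference code; the function name and parameter shapes are assumptions for the sketch.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine prediction (illustrative sketch).

    x  : feature vector, shape (n,)
    w0 : global bias (scalar)
    w  : linear weights, shape (n,)
    V  : latent factor matrix, shape (n, k), row i = v_i
    """
    linear = w0 + w @ x
    # Pairwise interactions via the identity:
    # sum_{i<j} <v_i, v_j> x_i x_j
    #   = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ]
    s = V.T @ x                 # shape (k,)
    s_sq = (V ** 2).T @ (x ** 2)  # shape (k,)
    pairwise = 0.5 * np.sum(s ** 2 - s_sq)
    return linear + pairwise
```

The reformulated pairwise term is what makes FM linear-time in the number of nonzero features, which is also why the PPFM/NPFM models described in the abstract can retain linear time and space complexity in the number of observations.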