Ranking with comparisons

In [1]:
import numpy as np
import matplotlib.pyplot as pl
%matplotlib inline

As scientists, from time to time we have to apply for new positions when our contract is close to its end. These positions are competitive: you submit a curriculum vitae and a sketch of the project you want to carry out during the new contract, and typically a commission meets for a few hours and tries to produce an ordered list of all the candidates. Being part of such a commission is not easy at all, and ranking the candidates, especially in the usual case that all of them are very good, is quite tough.

A few days ago, in discussions with some colleagues after one of these lists was published, I suggested the possibility of ranking a list of candidates using a crowdranking process: a large set of people receives the documentation of two candidates and has to return which of the two is better. My feeling was that, if you have a large enough set of people doing these comparisons, it is possible to rank the full list of candidates. A short search on Google returned the paper by Yi et al. (2013) (http://www.aaai.org/ocs/index.php/HCOMP/HCOMP13/paper/view/7536/7421), which deals exactly with this problem and solves it using matrix completion ideas. Interestingly, Yi et al. argue that crowdranking with pairwise comparisons is preferable to traditional collaborative filtering with numerical ratings, especially because of the subjectivity inherent to any numerical rating.

The idea of Yi et al. is very simple and based on a sparsity constraint. Let $\mathbf{F} \in \mathbb{R}^{n\times m}$ be the matrix whose element $F_{i,j}$ contains the rating that user $i$ gives to item $j$. The basic idea is to assume that the rank of this matrix is low. In other words, the rows of this matrix (the list of ratings of each user) can be seen as linear combinations of a small set of rating profiles. In the ideal case that all users have exactly the same ideas about ranking, the rank of $\mathbf{F}$ is 1: all the rows are proportional to each other. This is never the case in practice, but a low rank is what every commission hopes for; otherwise, they would never agree on a final list.
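
As a quick sanity check of this intuition (my own toy snippet, not from the paper), a matrix whose rows are all proportional to a single score vector indeed has rank 1:

In [ ]:
# Hypothetical illustration: three users who agree up to a scale factor.
scores = np.array([5.0, 3.0, 1.0, 4.0])   # a single shared scoring of 4 items
users = np.array([1.0, 0.8, 1.2])         # per-user scale factors
F = np.outer(users, scores)               # every row is proportional to `scores`
print(np.linalg.matrix_rank(F))           # prints 1: perfect agreement = rank 1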

Let us encode in the triplet $(i,j,k)$ the comparison by user $i$ stating that item $j$ is better (has a higher rating) than item $k$. Let $\Omega=\{(i,j,k)\}$ be the set of all pairwise comparisons collected during the process. Yi et al. propose to solve the following problem:

$$ \min_{\mathbf{F}} \lambda \, \mathrm{rank}(\mathbf{F}) + \sum_{(i,j,k) \in \Omega} \ell(F_{i,j} - F_{i,k}) $$

This problem minimizes the rank of $\mathbf{F}$ while also minimizing the inconsistency between the matrix and all the pairwise comparisons. The function $\ell(z)$ is chosen to be the smooth hinge loss:

In [3]:
def hinge(z):
    """Smooth hinge loss: linear for z <= 0, quadratic for 0 < z < 1, zero for z >= 1."""
    if z <= 0:
        return 0.5 - z
    elif z >= 1:
        return 0.0
    else:
        return 0.5*(1 - z)**2

z = np.linspace(-1, 2, 100)
vhinge = np.vectorize(hinge)   # lift the scalar function to arrays
f = vhinge(z)
pl.plot(z, f)
Out[3]:
[Figure: the smooth hinge loss $\ell(z)$: linear for $z \leq 0$, quadratic for $0 < z < 1$ and zero for $z \geq 1$]

As you can see, when $F_{i,j} \geq 1 + F_{i,k}$ for the triplet $(i,j,k)$, the loss is 0 because the matrix $\mathbf{F}$ ranks items $j$ and $k$ consistently with the pairwise comparison $(i,j,k)$, with a margin of at least one. Otherwise, the loss is positive and increases monotonically as the margin $F_{i,j}-F_{i,k}$ decreases. Therefore, minimizing the second term of the merit function forces the matrix $\mathbf{F}$ to fulfill all the pairwise comparisons.

Concerning the first term, $\mathrm{rank}(\mathbf{F})$ is a non-convex function and minimizing it directly is intractable. Candès and Recht (2009) realized that it can be replaced by its convex envelope, the trace norm $|\mathbf{F}|_\mathrm{tr}$, and that under fairly general conditions minimizing the trace norm yields the same solution as minimizing the rank. Remember that the trace norm of a matrix is just the sum of the singular values obtained from its singular value decomposition. Finally, we have to optimize

$$ \min_{\mathbf{F}} \mathcal{L}(\mathbf{F}) = \lambda \, |\mathbf{F}|_\mathrm{tr} + \sum_{(i,j,k) \in \Omega} \ell(F_{i,j} - F_{i,k}) $$
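
To make the merit function concrete, here is a small helper that evaluates $\mathcal{L}(\mathbf{F})$ for a given matrix and set of triplets, reusing the hinge loss defined above. This is my own sketch; the names merit, triplets and lam are illustrative and do not come from the paper:

In [ ]:
def merit(F, triplets, lam=1.0):
    """Evaluate the objective: lam * trace norm + summed hinge losses.

    `triplets` is a list of (i, j, k) tuples meaning user i prefers item j
    over item k. `lam` plays the role of lambda in the formula above.
    """
    trace_norm = np.linalg.svd(F, compute_uv=False).sum()  # sum of singular values
    loss = sum(hinge(F[i, j] - F[i, k]) for (i, j, k) in triplets)
    return lam * trace_norm + loss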

Yi et al. use a (sub)gradient descent method:

$$ \mathbf{F}_{t+1} = \mathbf{F}_{t} - \eta_t \, \partial \mathcal{L}(\mathbf{F}_t) $$

where $\eta_t$ is the step size and $\partial \mathcal{L}(\mathbf{F})$ is the (sub)gradient of the merit function. The gradient splits into two contributions:

$$ \partial \mathcal{L}(\mathbf{F}) = \partial \mathcal{L}(\mathbf{F})_1 + \partial \mathcal{L}(\mathbf{F})_2 $$

The first part follows from the fact that a subgradient of the trace norm at $\mathbf{F}_t$ is $\mathbf{U}_t \mathbf{V}_t^T$, where $\mathbf{F}_t = \mathbf{U}_t \mathbf{\Sigma}_t \mathbf{V}_t^T$ is the singular value decomposition of the current iterate. In other words, it is computed by setting all singular values to one and reconstructing the matrix from its decomposition.
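
In NumPy this is a couple of lines (a minimal sketch with arbitrary matrix sizes): take the reduced singular value decomposition and multiply $\mathbf{U}_t$ and $\mathbf{V}_t^T$ back together, which is the same as rebuilding the matrix with all singular values replaced by one:

In [ ]:
F_t = np.random.rand(5, 8)                          # any current iterate
U, s, Vt = np.linalg.svd(F_t, full_matrices=False)  # F_t = U @ diag(s) @ Vt
subgrad_tr = np.dot(U, Vt)                          # subgradient of the trace norm
# Same thing, rebuilding the matrix with every singular value set to one:
print(np.allclose(subgrad_tr, np.dot(U, np.dot(np.diag(np.ones(s.size)), Vt))))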

The second part is obtained by adding $\ell'(F_{i,j}-F_{i,k})$ to the $(i,j)$ position of the gradient and subtracting it from the $(i,k)$ position, for every triplet $(i,j,k) \in \Omega$.

Iterating the process leads to the low rank ranking matrix.

Example

Let us see an example. We assume that we have 50 users ranking 60 items. Each user compares fifteen pairs of random items (nComparisons = 15 in the code below) and returns a list of triplets $(i,j,k)$ saying that item $j$ is better than item $k$. The ground-truth scores are 5 points for items [0:20], 3 points for items [20:40] and 1 point for items [40:60].

In [25]:
class ranking(object):
    def __init__(self):
        self.nItems = 60
        self.nUsers = 50
        self.nComparisons = 15   # pairs compared by each user
        self.weight = 0.5        # lambda, weight of the trace-norm term

        # Ground-truth scores: three groups of items worth 5, 3 and 1 points
        self.order = np.zeros(self.nItems)
        self.order[0:20] = 5.0
        self.order[20:40] = 3.0
        self.order[40:60] = 1.0

        # Reference ranking matrix, one row per user. A random +/-1
        # perturbation can be switched on by uncommenting "+ v" below
        self.F = np.zeros((self.nUsers, self.nItems))
        for i in range(self.nUsers):
            v = 2*np.round(np.random.rand(self.nItems)) - 1
            self.F[i, :] = self.order  # + v

        # Generate the pairwise comparisons: for each user, draw random
        # pairs of distinct items with different true scores and store
        # the triplet with the better item first
        self.Z = []
        for i in range(self.nUsers):
            arr1 = np.arange(self.nItems)
            np.random.shuffle(arr1)
            arr2 = np.arange(self.nItems)
            np.random.shuffle(arr2)
            loop = 0
            j = 0
            comparisons = []
            while j < self.nComparisons:
                a, b = arr1[loop], arr2[loop]
                if a != b and self.order[a] != self.order[b]:
                    if self.order[a] > self.order[b]:
                        comparisons.append((a, b))
                    else:
                        comparisons.append((b, a))
                    j += 1
                loop += 1
            self.Z.append(comparisons)

    def gradientLoss(self, z):
        """Derivative of the smooth hinge loss defined above."""
        if z <= 0:
            return -1.0
        elif z >= 1:
            return 0.0
        else:
            return z - 1.0

    def gradientDescent(self):
        eta = 0.2
        F = np.ones((self.nUsers, self.nItems))

        iterations = 500
        for loop in range(iterations):
            # Subgradient of the trace norm: U V^T from the reduced SVD,
            # i.e. the matrix rebuilt with all singular values set to one
            u, s, v = np.linalg.svd(F, full_matrices=False)
            subgradient1 = np.dot(u, v)

            # Subgradient of the loss term: accumulate l'(F_ij - F_ik) on
            # position (i,j) and its negative on position (i,k)
            subgradient2 = np.zeros((self.nUsers, self.nItems))
            for i in range(self.nUsers):
                for (j, k) in self.Z[i]:
                    gradLoss = self.gradientLoss(F[i, j] - F[i, k])
                    subgradient2[i, j] += gradLoss
                    subgradient2[i, k] -= gradLoss

            # Gradient step; self.weight plays the role of lambda
            F -= eta * (self.weight * subgradient1 + subgradient2)

        return F

rank = ranking()
res = rank.gradientDescent()
fig, ax = pl.subplots(nrows=2, ncols=1, figsize=(12,8))
im1 = ax[0].imshow(res)
ax[0].set_title('Reconstructed')
im2 = ax[1].imshow(rank.F)
ax[1].set_title('Original')
Out[25]:
[Figure: two panels comparing the reconstructed ranking matrix (top) with the original one (bottom)]
In [26]:
mnRec = np.mean(res, axis=0)      # mean reconstructed score of each item
mnOrig = np.mean(rank.F, axis=0)  # mean original score of each item
pl.plot(mnRec)
pl.plot(mnOrig)
Out[26]:
[Figure: mean score per item for the reconstructed and original matrices]
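
From these mean scores we can finally read off the ordered list we were after. A simple post-processing step (my own, not part of the paper) is to sort the items by their mean reconstructed score:

In [ ]:
order = np.argsort(mnRec)[::-1]   # item indices sorted from best to worst
print(order[:20])                 # mostly items 0-19, the 5-point group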