Mathematical reasoning behind the soft cosine measure and reducing its complexity from O(N²) to O(N)

math linear-algebra soft-cosine time-complexity

Mathematical proofs of the validity of the soft cosine measure and of reducing its complexity from O(N²) to O(N) in deployment.

Wilson Yip
2023-08-14

\[ \newcommand\norm[1]{\lVert#1\rVert} \newcommand\fnorm[1]{\left\lVert#1\right\rVert} \newcommand\inner[2]{\left\langle#1,#2\right\rangle} \newcommand\rank{\text{rank}} \newcommand\diag{\text{diag}} \]

Introduction

The soft cosine measure is widely used in text mining to retrieve the most relevant documents. It is a modification of the well-known cosine similarity score, where the similarity between two vectors \(\mathbf{x}\) and \(\mathbf{y}\) is given by

\[ \text{cosine-score} = \frac{\mathbf{x}^T \mathbf{y}}{\sqrt{\mathbf{x}^T \mathbf{x}} \cdot \sqrt{\mathbf{y}^T \mathbf{y}}}. \]

This calculation assumes that every dimension is completely independent of the others. In reality, this may not be the case: in text mining, words like money and wealth are very closely related, but this relationship cannot be captured by the generic cosine similarity score.

In 2014, the soft cosine measure was introduced in this paper. It adjusts the cosine similarity score by inserting a similarity matrix into the dot product.

\[ \text{soft-cosine} = \frac{\mathbf{a}^T S\, \mathbf{b}}{\sqrt{\mathbf{a}^T S \, \mathbf{a}} \cdot \sqrt{\mathbf{b}^T S \, \mathbf{b}}}. \]

This article dives into the mathematical foundations of this calculation.
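Before diving into the theory, here is a minimal numerical sketch of the difference between the two scores. The toy vocabulary and the similarity matrix \(S\) below are assumed values for illustration, not taken from the paper.

```python
# A minimal sketch (toy data): compare the plain cosine score with the
# soft cosine measure for two bag-of-words vectors.
import numpy as np

def cosine(x, y):
    """Generic cosine similarity x^T y / (||x|| * ||y||)."""
    return (x @ y) / (np.sqrt(x @ x) * np.sqrt(y @ y))

def soft_cosine(a, b, S):
    """Soft cosine measure a^T S b / (sqrt(a^T S a) * sqrt(b^T S b))."""
    return (a @ S @ b) / (np.sqrt(a @ S @ a) * np.sqrt(b @ S @ b))

# Toy vocabulary: ["money", "wealth", "dog"]. The two documents share no
# term, so the plain cosine is 0, but S links "money" and "wealth".
a = np.array([1.0, 0.0, 0.0])   # document mentioning "money"
b = np.array([0.0, 1.0, 0.0])   # document mentioning "wealth"
S = np.array([[1.0, 0.8, 0.0],
              [0.8, 1.0, 0.0],
              [0.0, 0.0, 1.0]])  # assumed word-similarity matrix

print(cosine(a, b))          # 0.0
print(soft_cosine(a, b, S))  # 0.8
```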

Properties of Positive Definite [Semidefinite] Matrices

Definition 1 (Inner Product) Let \(\mathsf{V}\) be a vector space over \(\mathbb{R}\). An inner product on \(\mathsf{V}\) is a function \(\langle \cdot \; , \; \cdot \rangle : \mathsf{V} \times \mathsf{V} \rightarrow \mathbb{R}\) such that for all \(\mathbf{x}, \mathbf{y}, \mathbf{z} \in \mathsf{V}\) and all \(c \in \mathbb{R}\), the following conditions hold:

  1. \(\langle \mathbf{x} + \mathbf{z}, \mathbf{y} \rangle = \langle \mathbf{x}, \mathbf{y} \rangle + \langle \mathbf{z}, \mathbf{y} \rangle\).
  2. \(\langle c \mathbf{x}, \mathbf{y} \rangle = c \langle \mathbf{x}, \mathbf{y} \rangle\).
  3. \(\langle \mathbf{x}, \mathbf{y} \rangle = \langle \mathbf{y}, \mathbf{x} \rangle\).
  4. \(\langle \mathbf{x}, \mathbf{x} \rangle > 0\) if \(\mathbf{x} \neq \mathbf{0}\).

Definition 2 (Norm) Let \(\mathsf{V}\) be an inner product space. For \(\mathbf{x} \in \mathsf{V}\), define the norm or length of \(\mathbf{x}\) by \(\norm{\mathbf{x}} = \sqrt{\inner{\mathbf{x}}{\mathbf{x}}}\).

Definition 3 (Transpose) Let \(A \in \mathsf{M}_{m \times n}(\mathbb{R})\). Define the transpose of \(A\) to be \(A^T \in \mathsf{M}_{n \times m}(\mathbb{R})\) such that \((A^T)_{ij} = A_{ji}\) for all \(i, j\).

Definition 4 (Symmetric) Let \(A \in \mathsf{M}_n(\mathbb{R})\). \(A\) is symmetric if \(A = A^T\).

Definition 5 (Orthogonal) Let \(\mathsf{V}\) be an inner product space. Vectors \(\mathbf{x}, \mathbf{y} \in \mathsf{V}\) are orthogonal (perpendicular) if \(\langle \mathbf{x}, \mathbf{y} \rangle = 0\). A subset \(S\) of \(\mathsf{V}\) is orthogonal if any two distinct vectors in \(S\) are orthogonal. A vector \(\mathbf{x} \in \mathsf{V}\) is a unit vector if \(\lVert \mathbf{x} \rVert = 1\). Finally, a subset \(S\) of \(\mathsf{V}\) is orthonormal if \(S\) is orthogonal and consists entirely of unit vectors.

Definition 6 (Orthogonal Matrix) Let \(A \in \mathsf{M}_n(\mathbb{R})\). \(A\) is called an orthogonal matrix if \(A^TA = AA^T = I\).

Theorem 1 Let \(A \in \mathsf{M}_n(\mathbb{R})\). Then \(A\) is symmetric if and only if \(A = Q^TDQ\), where \(D\) is a diagonal matrix whose entries are the eigenvalues of \(A\) and \(Q\) is an orthogonal matrix whose rows are the corresponding orthonormal eigenvectors of \(A\).

Theorem 2 Let \(A \in \mathsf{M}_n(\mathbb{R})\) be symmetric. Then every \(\mathbf{x} \in \mathbb{R}^n\) can be written as \[\mathbf{x} = \sum_{i = 1}^n a_i \mathbf{v}_i,\] where the \(\mathbf{v}_i\)’s are the orthonormal eigenvectors of \(A\) and \(a_i \in \mathbb{R}\) for \(i = 1, 2, \dots, n\).
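As a quick illustration, the sketch below (with an assumed toy matrix) checks the decomposition of Theorem 1 and the eigenvector expansion of Theorem 2 using numpy.linalg.eigh.

```python
# Verify A = Q^T D Q and x = sum_i a_i v_i for a small symmetric matrix.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # symmetric toy matrix

eigvals, V = np.linalg.eigh(A)      # columns of V are orthonormal eigenvectors
D = np.diag(eigvals)
Q = V.T                             # rows of Q are the eigenvectors

print(np.allclose(A, Q.T @ D @ Q))  # True: A = Q^T D Q

x = np.array([3.0, -1.0])
a = Q @ x                           # coefficients a_i = v_i^T x
print(np.allclose(x, V @ a))        # True: x = sum_i a_i v_i
```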

Definition 7 (Positive Definite) Let \(A \in \mathsf{M}_n(\mathbb{R})\). \(A\) is called positive definite [positive semidefinite] if \(A\) is symmetric and \(\mathbf{x}^T A \mathbf{x} > 0\) \(\left[ \mathbf{x}^T A \mathbf{x} \geq 0 \right]\) for all \(\mathbf{0} \neq \mathbf{x} \in \mathbb{R}^n\).

Theorem 3 Let \(A \in \mathsf{M}_n(\mathbb{R})\). Then \(A\) is positive definite [semidefinite] if and only if all its eigenvalues are positive [nonnegative].

Proof. Since \(A\) is symmetric, by Theorem 2, we know that for all \(\mathbf{x} \in \mathbb{R}^n\),

\[\mathbf{x} = \sum_{i = 1}^n a_i \mathbf{v}_i,\]

where \(\mathbf{v}_i\)’s are the orthonormal eigenvectors of \(A\) and \(a_i \in \mathbb{R}\) for \(i = 1, 2, \dots, n\). Then

\[A\mathbf{x} = \sum_{i = 1}^n a_i A\mathbf{v}_i = \sum_{i = 1}^n a_i \lambda_i \mathbf{v}_i.\]

Hence

\[ \begin{align*} \mathbf{x}^T A \mathbf{x} &= \left( \sum_{i = 1}^n a_i \mathbf{v}_i \right)^T \left( \sum_{j = 1}^n a_j \lambda_j \mathbf{v}_j \right) \\[5pt] &= \sum_{i = 1}^n \sum_{j = 1}^n a_i a_j \lambda_j \mathbf{v}_i^T \mathbf{v}_j \\[5pt] &= \sum_{i = 1}^n a_i^2 \lambda_i, \end{align*} \]

since the \(\mathbf{v}_i\)’s are orthonormal, i.e. \(\mathbf{v}_i^T \mathbf{v}_j = \delta_{ij}\). If every \(\lambda_i\) is positive [nonnegative], then \(\mathbf{x}^T A \mathbf{x} > 0\) \(\left[ \mathbf{x}^T A \mathbf{x} \geq 0 \right]\) for every \(\mathbf{x} \neq \mathbf{0}\), since not all \(a_i\) are zero. Conversely, taking \(\mathbf{x} = \mathbf{v}_i\) gives \(\mathbf{x}^T A \mathbf{x} = \lambda_i\), which must be positive [nonnegative]. Hence, the result holds.
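The identity \(\mathbf{x}^T A \mathbf{x} = \sum_i a_i^2 \lambda_i\) is easy to verify numerically; the snippet below reuses the toy matrix from the previous sketch.

```python
# Numerical check of x^T A x = sum_i a_i^2 lambda_i (toy matrix assumed).
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, V = np.linalg.eigh(A)

x = np.array([3.0, -1.0])
a = V.T @ x                        # coordinates of x in the eigenbasis
print(np.isclose(x @ A @ x, np.sum(a**2 * eigvals)))  # True
```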

Theorem 4 Let \(A \in \mathsf{M}_n(\mathbb{R})\). If \(A\) is positive definite [semidefinite], then there exists a unique positive definite [semidefinite] matrix \(B \in \mathsf{M}_n(\mathbb{R})\) such that \(A = B^T B\) (equivalently \(A = B^2\), since \(B\) is symmetric).

Proof. Suppose \(A\) is positive definite [semidefinite]. By Theorem 1, we know that

\[A = Q^TDQ,\]

where the rows of \(Q\) are orthonormal eigenvectors of \(A\) and \(D = \diag(\lambda_1, \lambda_2, \dots, \lambda_n)\) contains the eigenvalues of \(A\), which are all positive [nonnegative] by Theorem 3.

Define the matrix

\[B = Q^T D^{1/2} Q,\]

where \(D^{1/2} = \diag(\sqrt{\lambda_1}, \sqrt{\lambda_2}, \dots, \sqrt{\lambda_n})\). Since \(\sqrt{\lambda_i} > 0\) \(\left[\sqrt{\lambda_i} \geq 0\right]\) for \(i = 1, 2, \dots, n\), we know that \(B\) is also positive definite [semidefinite]. Also,

\[B^2 = Q^T D^{1/2} Q Q^T D^{1/2} Q = Q^T D^{1/2} D^{1/2} Q = Q^T D Q = A.\]

For the uniqueness part, suppose \(C\) is another positive definite [semidefinite] root of \(A\). Let

\[C = P^T H P,\]

where \(H = \diag(\mu_1, \mu_2, \dots, \mu_n)\) with \(\mu_i \geq 0\) and \(P\) is orthogonal. Since \(C^2 = P^T H^2 P = A\), the \(\mu_i^2\) are the eigenvalues of \(A\); without loss of generality, we may assume \(\mu_i^2 = \lambda_i\), i.e. \(\mu_i = \sqrt{\lambda_i}\), for \(i = 1, 2, \dots, n\). Thus, we have

\[C = P^T D^{1/2} P.\]

And since \(B^2 = A = C^2\), we have

\[ \begin{align*} (Q^T D^{1/2} Q)^2 &= (P^T D^{1/2} P)^2 \\[5pt] Q^T D Q &= P^T D P \\[5pt] (P Q^T) D &= D (P Q^T). \end{align*} \]

Let \(W = PQ^T\), and write the distinct eigenvalues of \(A\) as \(\lambda_1, \dots, \lambda_r\) with multiplicities \(k_1, \dots, k_r\), so that

\[ D = \begin{pmatrix} \lambda_{1} I_{k_1} & 0 & \cdots & 0 \\ 0 & \lambda_{2} I_{k_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_{r} I_{k_r} \end{pmatrix} \]

and

\[ W = \begin{pmatrix} W_{11} & W_{12} & \cdots & W_{1r} \\ W_{21} & W_{22} & \cdots & W_{2r} \\ \vdots & \vdots & \ddots & \vdots \\ W_{r1} & W_{r2} & \cdots & W_{rr} \end{pmatrix}, \]

where \(W\) is partitioned conformally with \(D\), i.e. \(W_{ij}\) has size \(k_i \times k_j\).

Since \(WD = DW\), comparing the \((i, j)\) blocks gives \(\lambda_j W_{ij} = \lambda_i W_{ij}\); because \(\lambda_i \neq \lambda_j\) for \(i \neq j\), we know that \(W_{ij} = O\) for \(i \neq j\). Now consider

\[ \begin{align*} WD^{1/2} &= \diag(W_{11}, \dots, W_{rr}) \cdot \diag(\sqrt{\lambda_{1}}I_{k_1}, \dots, \sqrt{\lambda_{r}}I_{k_r}) \\[5pt] &= \diag(\sqrt{\lambda_{1}}W_{11}, \dots, \sqrt{\lambda_{r}}W_{rr}) \\[5pt] &= \diag(\sqrt{\lambda_{1}}I_{k_1}, \dots, \sqrt{\lambda_{r}}I_{k_r}) \cdot \diag(W_{11}, \dots, W_{rr}) \\[5pt] &= D^{1/2}W. \end{align*} \]

Hence,

\[ \begin{align*} PQ^TD^{1/2} &= D^{1/2} PQ^T \\[5pt] Q^T D^{1/2} Q &= P^T D^{1/2} P \\[5pt] B &= C, \end{align*} \]

which shows the uniqueness of \(B\).
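The construction in the proof translates directly into code. The sketch below (toy matrix assumed; the helper name `sqrtm_psd` is illustrative, not a library function) builds \(B = Q^T D^{1/2} Q\) from an eigendecomposition and checks that \(B^2 = A\).

```python
# Symmetric square root of a positive semidefinite matrix via eigh.
import numpy as np

def sqrtm_psd(A):
    """Return B with B @ B = A for positive semidefinite A."""
    eigvals, V = np.linalg.eigh(A)
    eigvals = np.clip(eigvals, 0.0, None)   # guard against tiny negative round-off
    return V @ np.diag(np.sqrt(eigvals)) @ V.T

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])                  # positive definite toy matrix
B = sqrtm_psd(A)

print(np.allclose(B @ B, A))                # True
print(np.all(np.linalg.eigvalsh(B) > 0))    # True: B is positive definite
```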

Theorem 5 If \(A \in \mathsf{M}_n(\mathbb{R})\) is positive definite, then \(\inner{\mathbf{x}}{\mathbf{y}} = \mathbf{x}^T A \mathbf{y}\) defines an inner product on \(\mathbb{R}^n\).

Proof. We just need to verify the four conditions listed in Definition 1. For all \(\mathbf{x}, \mathbf{y} \in \mathbb{R}^n\), let \(\inner{\mathbf{x}}{\mathbf{y}} = \mathbf{x}^T A \mathbf{y}\). For condition 3,

\[ \begin{align*} \inner{\mathbf{x}}{\mathbf{y}} &= (\inner{\mathbf{x}}{\mathbf{y}})^T \qquad \text{(a scalar equals its own transpose)} \\[5pt] &= (\mathbf{x}^T A \mathbf{y})^T \\[5pt] &= \mathbf{y}^T A \mathbf{x} \qquad \text{($A$ is symmetric)} \\[5pt] &= \inner{\mathbf{y}}{\mathbf{x}}. \end{align*} \]

Conditions 1 and 2 follow from the linearity of matrix multiplication, and condition 4 is exactly the definition of positive definiteness.
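A quick numerical sanity check of Theorem 5, with a positive definite matrix constructed for illustration as \(A = M^T M + I\) (random data, assumed only for this check):

```python
# Check that <x, y> = x^T A y behaves like an inner product for PD A.
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
A = M.T @ M + np.eye(4)                  # positive definite by construction

x, y = rng.normal(size=4), rng.normal(size=4)

print(np.isclose(x @ A @ y, y @ A @ x))  # symmetry (condition 3)
print(x @ A @ x > 0)                     # positivity for x != 0 (condition 4)
```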

Soft Cosine Similarity

Recall that the cosine similarity between two vectors \(\mathbf{x}\) and \(\mathbf{y}\) is given by

\[ \begin{align*} \text{cosine}(\mathbf{x}, \mathbf{y}) &= \frac{\mathbf{x}^T \mathbf{y}}{\sqrt{\mathbf{x}^T \mathbf{x}} \cdot \sqrt{\mathbf{y}^T \mathbf{y}}} \\[5pt] &= \left( \frac{\mathbf{x}}{\sqrt{\mathbf{x}^T \mathbf{x}}} \right)^T \left( \frac{\mathbf{y}}{\sqrt{\mathbf{y}^T \mathbf{y}}} \right) \\[5pt] &= \hat{\mathbf{x}}^T \hat{\mathbf{y}} \\[5pt] &= \sum_{i = 1}^n \hat{x}_i \hat{y}_i \end{align*} \]

where \(n\) is the dimension of the vectors and \(\hat{\mathbf{x}}\) is the unit vector in the direction of \(\mathbf{x}\). The time and space complexity of this calculation is \(O(n)\).
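In practice the \(O(n)\) cost per comparison comes from normalising each vector once up front; afterwards every score is a single dot product. A tiny sketch with assumed toy vectors:

```python
# Normalise once, then a cosine score is just a dot product.
import numpy as np

x = np.array([1.0, 2.0, 0.0])
y = np.array([2.0, 1.0, 1.0])

x_hat = x / np.linalg.norm(x)    # precomputed unit vectors
y_hat = y / np.linalg.norm(y)

print(np.isclose(x_hat @ y_hat,
                 (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))))  # True
```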

The problem with the generic cosine similarity score is that it assumes all the dimensions are completely independent of each other; in mathematical terms, they are mutually orthogonal. The soft cosine measure introduced in this paper solves this problem by introducing a correlation matrix into the middle of the dot product when calculating the cosine score. The matrix form of the calculation is shown below.

\[ \text{soft-cosine} = \frac{\mathbf{a}^T S\, \mathbf{b}}{\sqrt{\mathbf{a}^T S \, \mathbf{a}} \cdot \sqrt{\mathbf{b}^T S \, \mathbf{b}}} \]

If we replace \(S\) by \(I\) (the identity matrix), it immediately reduces to the generic cosine score. Theorem 5 also shows that, for a positive definite \(S\), this calculation is indeed another inner product, so it is a valid way to measure the similarity, and hence the distance, between two vectors. For instance, by Definition 2, the distance between \(\mathbf{a}\) and \(\mathbf{b}\) with respect to the correlation matrix \(S\) is given by

\[ \begin{align*} \norm{\mathbf{a} - \mathbf{b}}_S &= \sqrt{(\mathbf{a} - \mathbf{b})^T S (\mathbf{a} - \mathbf{b})}. \end{align*} \]
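For example, with the toy similarity matrix from the introduction (assumed values), the induced distance can be evaluated directly:

```python
# The distance induced by S: ||a - b||_S = sqrt((a - b)^T S (a - b)).
import numpy as np

S = np.array([[1.0, 0.8, 0.0],
              [0.8, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

d = a - b
print(np.sqrt(d @ S @ d))   # sqrt(0.4) ≈ 0.632
```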

Moreover, by Theorem 4, there exists a unique positive definite matrix \(S^{1/2}\) such that \(S = S^{1/2} S^{1/2}\). With some manipulation, the soft cosine measure between \(\mathbf{a}\) and \(\mathbf{b}\) with respect to the correlation matrix \(S\) reduces to a simple dot product between the unit vectors \(\hat{\alpha}\) and \(\hat{\beta}\), where \(\alpha = S^{1/2} \mathbf{a}\) and \(\beta = S^{1/2} \mathbf{b}\).

\[ \begin{align*} \text{soft-cosine} &= \frac{\mathbf{a}^T S\, \mathbf{b}}{\sqrt{\mathbf{a}^T S \, \mathbf{a}} \cdot \sqrt{\mathbf{b}^T S \, \mathbf{b}}}\\[5pt] &= \frac{(S^{1/2}\mathbf{a})^T \cdot (S^{1/2}\mathbf{b})}{\sqrt{(S^{1/2}\mathbf{a})^T(S^{1/2}\mathbf{a})} \cdot \sqrt{(S^{1/2}\mathbf{b})^T(S^{1/2}\mathbf{b})}} \\[5pt] &= \frac{\alpha^T \cdot \beta}{\sqrt{\alpha^T \cdot \alpha} \cdot \sqrt{\beta^T \cdot \beta}}\\[5pt] &= \hat{\alpha} \cdot \hat{\beta}. \end{align*} \]

This significantly reduces the time and space complexity of each comparison from \(O(n^2)\) to \(O(n)\): transform and normalise each vector once, store the transformed vectors in a database, and compute the soft cosine measure as a plain dot product whenever it is needed.
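A deployment sketch under these assumptions is given below. The toy corpus, the similarity matrix and the helper `sqrtm_psd` are illustrative only; the point is that the \(O(n^2)\) work happens once per document offline, while each query-time comparison is an \(O(n)\) dot product.

```python
# Precompute alpha = S^{1/2} a (normalised) for every document; query-time
# soft cosine is then a single dot product between stored vectors.
import numpy as np

def sqrtm_psd(S):
    """Symmetric square root of a positive semidefinite matrix."""
    eigvals, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(np.clip(eigvals, 0.0, None))) @ V.T

S = np.array([[1.0, 0.8, 0.0],
              [0.8, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
S_half = sqrtm_psd(S)

docs = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [1.0, 1.0, 2.0]])

# Offline: O(n^2) per document, done once, then stored (e.g. in a database).
transformed = docs @ S_half.T
transformed /= np.linalg.norm(transformed, axis=1, keepdims=True)

# Online: soft cosine between documents 0 and 1 is just a dot product, O(n).
fast = transformed[0] @ transformed[1]
direct = (docs[0] @ S @ docs[1]) / (np.sqrt(docs[0] @ S @ docs[0]) *
                                    np.sqrt(docs[1] @ S @ docs[1]))
print(np.isclose(fast, direct))   # True
```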