$$ \mathbf{A} = \begin{bmatrix} 0 & a_{1, 2} & \ldots & a_{1, n} \\ a_{2, 1} & 0 & \ldots & a_{2, n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n, 1} & a_{n, 2} & \ldots & 0 \end{bmatrix} $$
Rows and columns represent vertices in the graph
So $a_{i,j} = 1$ if there is an edge between vertex $i$ and vertex $j$
$$ \mathbf{A}_{i, j} = \begin{cases} 1, \ &\text{if } (i, j) \in E \\ 0, \ &\text{if } (i, j) \notin E \end{cases} $$
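As a quick illustration, here is a minimal NumPy sketch that builds $\mathbf{A}$ from an edge list; the 6-vertex graph is made up for the sketch and is not the example graph used later in these notes.

```python
import numpy as np

# A made-up 6-vertex undirected graph, given as an edge list.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
n = 6

A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1  # a_{i,j} = 1 if (i, j) in E
    A[j, i] = 1  # undirected graph, so A is symmetric

print(A)
```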
$$ \mathbf{D} = \begin{bmatrix} \deg(v_{1}) & 0 & \ldots & 0 \\ 0 & \deg(v_{2}) & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & \deg(v_{n}) \end{bmatrix} $$
The degree $\deg(v_{i})$ is the number of edges incident to vertex $i$
$$ \mathbf{D}_{i, j} = \begin{cases} \deg(v_{i}), \ &\text{if } i = j \\ 0, \ &\text{if } i \neq j \end{cases} $$
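Continuing the sketch above, the row sums of $\mathbf{A}$ give the degrees, so $\mathbf{D}$ is just their diagonal:

```python
# Each row sum of A counts the edges incident to that vertex.
D = np.diag(A.sum(axis=1))
```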
$$ \mathbf{L} = \mathbf{D} - \mathbf{A} = \begin{bmatrix} \deg(v_{1}) & -a_{1, 2} & \ldots & -a_{1, n}\\ -a_{2, 1} & \deg(v_{2}) & \ldots & -a_{2, n} \\ \vdots & \vdots & \ddots & \vdots \\ -a_{n, 1} & -a_{n, 2} & \ldots & \deg(v_{n}) \end{bmatrix} $$
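And the Laplacian itself, with a sanity check that each row sums to 0:

```python
L = D - A  # degrees on the diagonal, -1 for every edge

# Every row of L sums to 0 by construction.
assert np.all(L.sum(axis=1) == 0)
```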
The fact that the rows sum to 0 makes it straightforward to find $\lambda_{0} = 0$
The eigenvalues of $L$ are greater than or equal to zero
$$ 0 = {\lambda_{0} \leq \lambda_{1} \leq \dots \leq \lambda_{n-1}} $$
The symmetry guarantees that $L$ has an orthogonal basis of eigenvectors
The positive semidefiniteness means the quadratic form $x^{T}Lx$ is convex and nonnegative, which underpins the Rayleigh quotient argument below
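Both properties are easy to confirm numerically on the sketch's Laplacian; `numpy.linalg.eigvalsh` assumes a symmetric matrix and returns its eigenvalues in ascending order:

```python
eigenvalues = np.linalg.eigvalsh(L)
print(eigenvalues)                    # the smallest is (numerically) 0

# Positive semidefinite: nonnegative up to floating-point round-off.
assert np.all(eigenvalues >= -1e-10)
```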
The graph Laplacian spectrum is the set of eigenvalues associated with a Laplacian matrix
$$ \begin{align*} \Lambda &= \{\lambda_{0}, \lambda_{1}, \dots, \lambda_{n-1}\} \\ X &= \{\mathbf{x}_{0}, \mathbf{x}_{1}, \dots, \mathbf{x}_{n-1}\} \end{align*} $$
Typically the eigenvalues are ordered from smallest to largest
$\lambda_{0} \leq \lambda_{1} \leq \dots \leq \lambda_{n-1} $
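`numpy.linalg.eigh` returns the spectrum already in this order, with the eigenvector for `eigenvalues[k]` in column `k`; the later snippets reuse these two arrays:

```python
# eigh assumes a symmetric (Hermitian) matrix; eigenvalues come back
# ascending, and eigenvectors[:, k] pairs with eigenvalues[k].
eigenvalues, eigenvectors = np.linalg.eigh(L)
```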
The trivial eigenvalue $\lambda_{0}$
$$ L \mathbf{x} = \lambda_{0} \mathbf{x} $$
Since the rows of $L$ sum to $0$, if we set $\mathbf{x} = \mathbf{1}$ we get
$$ \begin{align*} \mathbf{0} &= \lambda_{0} \mathbf{1} \\ \lambda_{0} &= 0 \end{align*} $$
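A one-line numerical check on the sketch's Laplacian:

```python
# Rows of L sum to 0, so the all-ones vector is an eigenvector
# with eigenvalue 0.
print(L @ np.ones(n))   # -> a vector of (numerically) zeros
```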
The Rayleigh quotient of $L$ links the spectrum to the graph's edges
$$ \begin{align*} R(L, x) &= \frac{x^{T}Lx}{x^{T}x} \\ &= x^{T}Lx \quad \text{(assuming } \|x\| = 1\text{)} \\ &= \sum_{(i, j) \in E} (x_{i} - x_{j})^{2} \end{align*} $$
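The identity can be checked numerically on the sketch; the `rayleigh` helper below is introduced just for these snippets:

```python
def rayleigh(L, x):
    """Rayleigh quotient of L at x."""
    return (x @ L @ x) / (x @ x)

x = np.random.randn(n)
x /= np.linalg.norm(x)   # unit norm, so the denominator is 1

# For an unweighted graph, x^T L x equals the sum of squared
# differences across the edges.
edge_sum = sum((x[i] - x[j]) ** 2 for i, j in edges)
assert np.isclose(rayleigh(L, x), edge_sum)
```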
Finding $\lambda_{1}$
$$ \begin{align*} \lambda_{1} = \min_{x \perp \mathbf{1}, ||x||=1} \sum_{(i, j) \in E} (x_{i} - x_{j})^{2} \end{align*} $$
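On the sketch's (connected) toy graph, the second-smallest eigenpair realises this constrained minimum:

```python
lam1, x1 = eigenvalues[1], eigenvectors[:, 1]

assert np.isclose(x1 @ np.ones(n), 0)      # x1 is orthogonal to 1
assert np.isclose(rayleigh(L, x1), lam1)   # the minimum value is lambda_1
```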
Two terms from this sum, computed for the example graph whose Fiedler vector is listed further below, show what the minimiser does. An edge crossing between the two clusters contributes a large term
$$ \begin{align*} (x_{5} - x_{6})^{2} &= (-0.733 - 0.502)^{2} \\ &= (-1.235)^{2} \\ &= 1.525 \end{align*} $$
while an edge inside a cluster contributes almost nothing
$$ \begin{align*} (x_{3} - x_{4})^{2} &= (-0.972 - (-1.005))^{2} \\ &= (0.033)^{2} \\ &\approx 0.001 \end{align*} $$
Since the Laplacian is a real-valued symmetric matrix, it is a special case of a Hermitian matrix; that is, it is equal to its conjugate transpose.
$$L = \overline{L^{T}}$$

The normalised cut problem aims to find groups with high intra-group connectivity and low inter-group connectivity
The eigenvector associated with $\lambda_{1}$ is called the Fiedler vector
It can be used to partition a graph into 2 clusters by thresholding at zero: vertices with negative entries form one cluster and the rest form the other (see the sketch after the example vector below)
$$ \mathbf{x}_{1} = [\,-1.014,\ -1.019,\ -1.09,\ -0.972,\ -1.005,\ -0.733,\ 0.502,\ 1.348,\ 0.744,\ 0.828,\ 1.348,\ 1.063\,] $$
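A sketch of the sign-based split on the toy graph (its Fiedler vector will differ from the example values above):

```python
fiedler = eigenvectors[:, 1]

# Threshold at zero: the sign pattern of the Fiedler vector
# gives the 2-way partition.
cluster_a = np.where(fiedler < 0)[0]
cluster_b = np.where(fiedler >= 0)[0]
print(cluster_a, cluster_b)   # e.g. vertices {0, 1, 2} vs {3, 4, 5}
```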
Running k-means on the eigenvector embedding acts as a way to round the continuous relaxation of the (NP-hard) normalised cut problem back to discrete clusters
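A sketch of that pipeline, assuming scikit-learn is available; embedding with the nontrivial eigenvectors is one common variant among several:

```python
from sklearn.cluster import KMeans

k = 2
# Embed each vertex using the nontrivial eigenvectors (skipping the
# trivial one), then let k-means recover discrete clusters.
embedding = eigenvectors[:, 1:k]
labels = KMeans(n_clusters=k, n_init=10).fit_predict(embedding)
print(labels)
```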
Details can be found in these lecture notes: https://cims.nyu.edu/~sling/MATH-SHU-236-2020-SPRING/MATH-SHU-236-Lecture-15-mincut.pdf