# Question 2 example 1

⬅️ [Question 2 example 2](<./Question 2 example 2.md>) | ⬆️ [Generated Test Questions](<./README.md>) | [Question 1 example 2](<./Question 1 example 2.md>) ➡️

## Question 2 (25 points total)

### Part A: Quiz-Style Recall & Comprehension (15 points; 5 points each)
Answer the following questions concisely.

1. In the context of stereo vision and 3D reconstruction, what is the fundamental mathematical relationship between disparity and depth?
2. How many degrees of freedom does a 2D scaled rotation (Similarity) transformation have, and what are they?
3. When performing graph-based image segmentation using the Normalized Cuts (Ncut) algorithm, why is the cut penalty normalized by the volume (association) of the regions rather than just using a standard minimum cut?

### Part B: Mathematical Proof (10 points)
4. During the dimensionality reduction lecture on Principal Component Analysis (PCA) and Eigenfaces, it was noted that the true covariance matrix $XX^T$ is often too massive to compute directly, because the number of pixels is much larger than the number of training images $N$. To get around this, we use a computational trick: we compute the eigenvectors of the much smaller $N \times N$ matrix $X^TX$ instead. Provide a short mathematical proof showing that if $v$ is an eigenvector of $X^TX$ with eigenvalue $\lambda$, then $Xv$ is an eigenvector of the massive covariance matrix $XX^T$ with the same eigenvalue $\lambda$.


## Answer Key / Solutions

### Part A (15 points)
1. Solution: Disparity is inversely proportional to depth: for a rectified stereo pair with focal length $f$ and baseline $B$, disparity $d = \frac{fB}{Z}$, so depth $Z = \frac{fB}{d}$. (Objects close to the camera shift farther across the visual field than distant objects; see the sketch after this list.)
2. Solution: A 2D Similarity transformation has 4 degrees of freedom. (Specifically: 1 for scale, 1 for rotation angle, and 2 for translation).
3. Solution: The cut is normalized by each region's total association, $\text{Ncut}(A,B) = \frac{\text{cut}(A,B)}{\text{assoc}(A,V)} + \frac{\text{cut}(A,B)}{\text{assoc}(B,V)}$, to encourage balanced segment sizes and to penalize degenerate cuts such as isolating a single outlying node. A standard min-cut is biased toward cutting the fewest edges, which often just splits a single pixel off from the rest of the image.
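
To make the three answers above concrete, here is a minimal NumPy sketch of each: a disparity-to-depth conversion, a 4-DOF similarity matrix, and the Ncut cost for a two-way partition. The helper names (`depth_from_disparity`, `similarity_matrix`, `ncut_cost`), the rig parameters, and the toy affinity matrix are all illustrative assumptions, not values or code from the lecture.

```python
import numpy as np

# Assumed stereo parameters for illustration only.
f, B = 700.0, 0.12                 # focal length (pixels), baseline (meters)

def depth_from_disparity(d):
    """Solution 1: invert d = f*B / Z to recover depth Z."""
    return f * B / d

def similarity_matrix(s, theta, tx, ty):
    """Solution 2: a 2D similarity transform exposes exactly 4 DOF
    (scale s, rotation angle theta, translations tx and ty)."""
    c, sn = s * np.cos(theta), s * np.sin(theta)
    return np.array([[c, -sn, tx],
                     [sn,  c, ty]])

def ncut_cost(W, in_A):
    """Solution 3: Ncut(A,B) = cut(A,B)/assoc(A,V) + cut(A,B)/assoc(B,V)
    for a symmetric affinity matrix W and boolean membership mask in_A."""
    in_B = ~in_A
    cut = W[np.ix_(in_A, in_B)].sum()      # weight crossing the partition
    return cut / W[in_A].sum() + cut / W[in_B].sum()

# Larger disparity -> smaller depth (the inverse relationship in Solution 1).
print(depth_from_disparity(np.array([40.0, 4.0])))   # [ 2.1 21. ]

# Chain graph 0-1-2: splitting off node 0 cuts only one edge (min-cut favors
# it), but the small assoc(A,V) term inflates the normalized cost.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
print(ncut_cost(W, np.array([True, False, False])))  # 1/1 + 1/3 = 1.33...
```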

### Part B (10 points)
4. Solution: This is a classic linear algebra proof validating the computational trick used in Eigenfaces.
   - **Step 1:** Assume $v$ is an eigenvector of $X^TX$ with eigenvalue $\lambda$. By the definition of an eigenvector, $(X^TX)v = \lambda v$.
   - **Step 2:** Multiply both sides of the equation on the left by the matrix $X$: $X(X^TX)v = X(\lambda v)$.
   - **Step 3:** Use the associativity of matrix multiplication and pull the scalar $\lambda$ to the front on the right side: $(XX^T)(Xv) = \lambda (Xv)$.
   - **Conclusion:** Let $u = Xv$; the equation becomes $(XX^T)u = \lambda u$, which is exactly the definition of an eigenvector. Hence $Xv$ is an eigenvector of the true covariance matrix $XX^T$ with the same eigenvalue $\lambda$. (A numerical sanity check follows below.)
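
As a sanity check on this proof, the following NumPy snippet (a sketch using random data as a stand-in for centered face images) eigendecomposes the small $N \times N$ matrix and verifies that every lifted vector $u = Xv$ satisfies $(XX^T)u = \lambda u$, without ever materializing the $D \times D$ covariance matrix. The sizes $D$ and $N$ are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 10_000, 20                   # pixels >> training images, as in Eigenfaces
X = rng.standard_normal((D, N))     # random stand-in for centered face data

# Steps 1-2 in practice: eigendecompose the small N x N matrix X^T X.
lam, V = np.linalg.eigh(X.T @ X)    # eigh applies because X^T X is symmetric

# Step 3 / Conclusion: lift each v to u = Xv and check (X X^T) u = lam * u.
U = X @ V                           # columns are the candidate eigenvectors u
residual = X @ (X.T @ U) - U * lam  # (XX^T)U evaluated without forming XX^T
print(np.abs(residual).max())       # ~0 up to floating-point error
```

Note that the lifted vectors $u = Xv$ come out mutually orthogonal but with norm $\sqrt{\lambda}$ rather than 1, so an Eigenfaces pipeline would typically normalize each $u$ before using it as a basis face.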


⬅️ [Question 2 example 2](<./Question 2 example 2.md>) | ⬆️ [Generated Test Questions](<./README.md>) | [Question 1 example 2](<./Question 1 example 2.md>) ➡️