
25/03/2026 4:37 PM - PACES

⬅️ [25/03/2026 4:33 PM - Lecture](<./25_03_2026 4_33 PM - Lecture.md>) | ⬆️ [EECS 504](<./README.md>) | [25/03/2026 5:18 PM - Presentation](<./25_03_2026 5_18 PM - Presentation.md>) ➡️

Aidan Dempster - adempst - 6125 9596

Problem

Visual character recognition with MLPs (and other tasks, though part 2 focuses on character recognition) is sensitive to the position and scale of the input, and hand-crafted feature extractors are suboptimal.

Approach

The paper introduces LeNet, a convolutional neural network designed explicitly for image data. Convolutions give each unit a local receptive field, while downsampling lets deeper layers access global context and reduces the parameter count.
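The local-receptive-field and downsampling ideas can be sketched with one conv + subsample stage. This is an illustrative toy, not the paper's implementation; the specific kernel values and input pattern are made up for the demo:

```python
# Minimal sketch of a LeNet-style stage (roughly C1/S2): a 5x5 "valid"
# convolution followed by 2x2 average-pool subsampling. Kernel values and
# the input pattern are arbitrary placeholders.

def conv2d(image, kernel):
    """Valid 2D convolution: each output unit sees only a local 5x5 patch."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]

def avg_pool2x2(fmap):
    """2x2 subsampling: halves resolution, so later layers cover wider context."""
    return [[(fmap[i][j] + fmap[i][j + 1] + fmap[i + 1][j] + fmap[i + 1][j + 1]) / 4
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# A 32x32 input (the paper pads MNIST digits to 32x32) through one stage:
# resolution goes 32 -> 28 -> 14, yet each feature map needs only
# 5*5 weights + 1 bias = 26 parameters, regardless of image size.
image = [[(i + j) % 7 for j in range(32)] for i in range(32)]
kernel = [[0.04] * 5 for _ in range(5)]
fmap = conv2d(image, kernel)
pooled = avg_pool2x2(fmap)
print(len(fmap), len(fmap[0]))      # 28 28
print(len(pooled), len(pooled[0]))  # 14 14
```

The parameter count (26 per feature map) is what weight sharing buys: a fully connected layer from a 32x32 input to a 28x28 map would need over 800,000 weights.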

Contribution

Convolutional neural networks are shift invariant by construction (via weight sharing and subsampling) and learn a degree of scale and distortion invariance during training. Fully learned convolutional filters achieve much higher accuracy than hand-crafted features.

Evaluation

The paper evaluates the method on the MNIST handwritten digits dataset, comparing test error against various other ML models such as MLPs, SVMs, and k-nearest neighbors.

Substantiation

The paper's comparison across many methods on a single dataset is robust, and it convincingly shows that LeNet is the best method for this specific task. In hindsight we know that learned convolutions are an excellent general-purpose method, but from this paper alone it is not clear that they generalize to more diverse tasks.

