Abstract
Diffractive neural networks (DNNs), which use free-space optical diffraction to mimic hundreds of billions of connections between neurons, offer advantages over electronic devices in parallelism, computation speed, and energy consumption for many machine learning tasks. A common approach to improving DNN performance is to increase spatial complexity, but as the required numbers of neurons and layers grow, implementation difficulties arise.
In this talk, I will show that the performance of a DNN strongly depends on the Fresnel number of the system. By controlling the Fresnel number, the expressive power of a DNN can be optimized without increasing its spatial complexity. Experimentally, with appropriate Fresnel numbers, a single-layer DNN based on a spatial light modulator (SLM) operating in visible light achieves impressive performance on the binary MNIST and Fashion-MNIST datasets. Likewise, a double-layer SLM-based DNN with appropriate Fresnel numbers is powerful enough for grayscale image processing.
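As a rough illustration of the quantity the talk centers on: the Fresnel number of a diffractive system is conventionally defined as N_F = a^2 / (lambda * L), with a the characteristic aperture half-width, lambda the wavelength, and L the propagation distance. The sketch below uses this textbook definition with illustrative parameter values; it is not taken from the talk itself.

```python
def fresnel_number(a: float, wavelength: float, distance: float) -> float:
    """Textbook Fresnel number N_F = a^2 / (lambda * L).

    a          -- characteristic aperture half-width (m)
    wavelength -- optical wavelength (m)
    distance   -- propagation distance between planes (m)
    """
    return a**2 / (wavelength * distance)

# Illustrative (not from the talk): 5 mm half-aperture,
# 532 nm green light, 10 cm layer spacing.
n_f = fresnel_number(a=5e-3, wavelength=532e-9, distance=0.10)
print(f"N_F = {n_f:.1f}")
```

In this regime the system sits well inside near-field (Fresnel) diffraction, which is the setting in which a layer-to-layer propagation distance and pixel size jointly determine the network's effective connectivity.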
References
1. Minjia Zheng, Lei Shi, and Jian Zi, "Optimize performance of a diffractive neural network by controlling the Fresnel number," Photonics Research 10, 2667 (2022).
2. Minjia Zheng, Lei Shi, and Jian Zi, "Diffractive Neural Networks with Maximum Expressive Power for Grayscale Image Classification," unpublished.
Please contact phweb@ust.hk should you have questions about the talk.