CIM data processing shares **practical parallels with deep learning**.

The core data processing engine within current CIM prototypes performs successive matrix-vector multiplication (GEMV) operations, computing the incremental perturbation to apply to each state variable from the measured values of all the other state variables. These operations are closely analogous to those at the heart of current deep learning neural network implementations, and in the CIM context they must be executed at high speed with low latency to keep pace with the light-speed circulation of optical data packets around the fiber-based memory storage ring. In current CIM prototypes, the multiplication operations are executed by specially designed FPGA arrays. While this approach affords great flexibility in testing arbitrary optimization problem instances, and could support a wide range of future generalizations of the underlying CIM algorithm, the FPGA arrays are the most costly and energy-consuming subsystems of current CIM prototypes. Can future CIM implementations achieve substantial improvements in speed and energy efficiency through the use of novel opto-electronic GEMV engines? Here we find a crucial CIM research direction that aligns with much broader investigations of special-purpose machine learning hardware.
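To make the GEMV feedback step concrete, here is a minimal NumPy sketch of one round-trip update in a measurement-feedback machine of this kind. All names, the coupling matrix, and the feedback gain `eps` are illustrative assumptions, not parameters of any actual CIM prototype: the point is simply that each pulse's perturbation is a weighted sum of all the other measured amplitudes, i.e. a matrix-vector product.

```python
import numpy as np

# Hypothetical illustration (sizes, couplings, and gain are assumptions):
# one feedback step computes a matrix-vector product between an Ising
# coupling matrix J and the vector x of measured pulse amplitudes, giving
# the perturbation injected into each pulse on its next circulation.

rng = np.random.default_rng(0)
n = 8                                  # number of optical pulses ("spins")
J = rng.standard_normal((n, n))
J = (J + J.T) / 2                      # symmetric couplings
np.fill_diagonal(J, 0.0)               # no self-coupling
x = rng.standard_normal(n)             # measured in-phase amplitudes

def feedback_step(J, x, eps=0.05):
    """GEMV feedback: each pulse's perturbation is a weighted sum of
    the measured amplitudes of all the other pulses."""
    return eps * (J @ x)

dx = feedback_step(J, x)
x_next = x + dx                        # amplitudes after one round trip
```

This single `J @ x` product per round trip is exactly the operation that the FPGA arrays discussed above must complete within one circulation time, and it is the same computational kernel that dominates neural network inference.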

**See also:**

R. Hamerly *et al.*, “Large-Scale Optical Neural Networks Based on Photoelectric Multiplication,” Phys. Rev. X **9**, 021032 (2019).
