Emerging Directions in Optical Computing and Information Processing

June 26 and 27, 2025 – Virtual Conference


Abstracts


Model-Free End-to-End Training in Experiments of a Multimode Semiconductor Laser Network Comprising 10,000 Neurons

Daniel Brunner

We recently implemented in hardware the input and readout weights together with a recurrent nonlinear neural network of 10,000 neurons by leveraging the high-dimensional state space of a multimode semiconductor laser. For maximal efficiency, the largest fraction of a NN's hardware should be dedicated to the core computational task, while auxiliary infrastructure is pushed into the background. I will demonstrate in situ training of our autonomous photonic NN with minimal support from a classical digital computer. We achieve this by employing exclusively black-box, evolutionary optimization algorithms, and I will show that for real-world analog neural networks these hold substantial promise while removing one of the most critical obstacles to truly hardware-based real-time learning.
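
As a complement to the description above, here is a minimal sketch of model-free, in situ training with a black-box evolutionary strategy. The hardware interaction is a hypothetical stand-in function; in the actual experiment the loss would be obtained from measured laser output rather than computed digitally.

import numpy as np

rng = np.random.default_rng(0)

def measure_loss(weights):
    # Hypothetical placeholder for querying the photonic hardware:
    # apply the candidate readout weights, record the output, compare to a target.
    target = np.linspace(-1.0, 1.0, weights.size)
    return float(np.mean((weights - target) ** 2))

weights = rng.normal(scale=0.1, size=64)   # candidate readout weights
sigma = 0.05                               # mutation strength
best_loss = measure_loss(weights)

for step in range(2000):
    candidate = weights + sigma * rng.normal(size=weights.size)  # mutate
    loss = measure_loss(candidate)                               # one hardware query
    if loss < best_loss:                                         # greedy (1+1)-ES selection
        weights, best_loss = candidate, loss

print(f"final loss: {best_loss:.4f}")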


Analog Information Processing with Programmable Photonic Circuits

Ali Miri


Engineered Nonlocalities for Analog Optical Computing

Andrea Alu

Engineered nonlocal responses in metasurfaces have recently unveiled a new paradigm for image processing and analog computing. Several demonstrations of edge detection and image processing using tailored spatial nonlocality have shown a path towards ultrafast, efficient, massively parallel analog image processing based on passive devices, which holds promise for general analog computing platforms. Space-time nonlocal metasurfaces performing operations on the incoming signal in both space and time have also been envisioned by tailoring frequency and momentum dispersion. In this talk, I discuss opportunities for this research field, showcasing compact meta-structures with reconfigurable properties that can perform mathematical operations, solve mathematical problems, and address technological needs.
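
To make the mechanism concrete, the sketch below illustrates edge detection with an engineered nonlocal response: a transfer function that grows as |k|^2 acts as a Laplacian on the incident field, so uniform regions are suppressed and edges are highlighted. The synthetic image and idealized transfer function are illustrative assumptions, not a model of any specific metasurface.

import numpy as np

# Synthetic "image": a bright square on a dark background.
field = np.zeros((128, 128))
field[40:90, 40:90] = 1.0

kx = np.fft.fftfreq(128)
ky = np.fft.fftfreq(128)
KX, KY = np.meshgrid(kx, ky, indexing="ij")
H = -(KX**2 + KY**2)          # idealized nonlocal transfer function ~ -|k|^2

edges = np.fft.ifft2(H * np.fft.fft2(field)).real
print("|response| at center of square:", abs(edges[64, 64]))
print("max |response| (at the edges):", np.abs(edges).max())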


Energy Scaling of Optical Transformers

Peter McMahon

Transformers are the dominant neural-network architecture for language modeling. I will present results from a simulation and experimental study we conducted to assess the prospects for using optical matrix-vector multipliers to reduce the energy consumption of Transformer inference. We found that Transformers can operate with no loss of accuracy relative to a digital-electronic implementation with 8-bit arithmetic, using a number of photons per multiply-accumulate (MAC) that decreases with the size of the vector (the embedding dimension, in language models). I will explain how we conclude that large (>100x) improvements in overall system energy efficiency for Transformers may be possible with scaled and carefully engineered optical hardware. Reference: M. Anderson et al., Optical Transformers, TMLR (2024); preprint: arXiv:2302.10360.
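
A back-of-the-envelope sketch of the scaling argument is given below. The photon budget and wavelength are illustrative assumptions, not numbers from the paper: if the total optical energy per dot product is roughly fixed by the required output signal-to-noise ratio, the photons per MAC fall as 1/d with embedding dimension d.

h, c = 6.626e-34, 3.0e8
wavelength = 1.55e-6                      # telecom-band photon (assumed)
photon_energy = h * c / wavelength        # roughly 0.8 eV, in joules

photons_per_dot_product = 1e4             # assumed fixed photon budget per output
for d in (256, 4096, 65536):              # embedding dimension
    photons_per_mac = photons_per_dot_product / d
    energy_per_mac = photons_per_mac * photon_energy
    print(f"d={d:6d}: {photons_per_mac:8.2f} photons/MAC, "
          f"{energy_per_mac:.2e} J/MAC (optical only)")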


Nonlinear Computation in Optical Neural Networks

Demetri Psaltis


Photonic Computing with Metamaterials and Metasurfaces

Nader Engheta

In recent years, there has been growing interest and extensive research activity in exploring specially designed metamaterials and metasurfaces for optical analog computing. In my group, we have been exploring various scenarios for such light-based computing and have introduced and developed methodologies for vector-matrix multiplication, matrix inversion, equation solving, and constrained optimization using such metastructures. In this talk, I will present an overview of some of the most recent results from our ongoing research programs. Salient features and physical insights into our findings will be presented, and possible future research directions will be outlined.
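
As one illustration of equation solving with recirculating waves (a sketch under stated assumptions, not the authors' design), a structure that applies an operator A to a circulating field while re-injecting the input b on each round trip converges to x = (I - A)^{-1} b via the Neumann series, provided A is a contraction.

import numpy as np

rng = np.random.default_rng(1)
A = 0.4 * rng.random((4, 4)) / 4          # contraction, so the series converges
b = rng.random(4)

x = np.zeros(4)
for _ in range(200):                      # each pass = one "round trip" of the wave
    x = b + A @ x

# Steady state matches the direct solution of (I - A) x = b.
print(np.allclose(x, np.linalg.solve(np.eye(4) - A, b)))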


Reimagining Data Centers with Hyperspectral Compute-in-Memory: Integrating Optical Communication and Computing

Myoung-Gyun Suh

Modern data centers face growing challenges in data communication bandwidth, chip heating, and the energy efficiency of large-scale computing networks. While optics provide efficient long-range data transmission, using optics for computation remains difficult due to limited scalability, programmability, and computational flexibility. We propose a hyperspectral compute-in-memory (CIM) architecture that integrates optical communication and computation to address these challenges. By leveraging wavelength- and space-division multiplexing for massively parallel data flow and replacing intra-chip electronic communication with optics, our architecture tackles the memory wall, thermal constraints, and scalability limitations. This hybrid optoelectronic system enables programmable, high-throughput computing with high on-chip compute density and offers a promising new direction for data center infrastructure.
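
The sketch below illustrates the wavelength-multiplexed compute-in-memory idea in idealized, incoherent form: each wavelength channel carries one element of the input vector, a stored transmission per (wavelength, output port) pair acts as the weight, and a photodetector at each output port sums power across wavelengths, realizing a matrix-vector product. The dimensions and values are arbitrary placeholders.

import numpy as np

rng = np.random.default_rng(2)
n_wavelengths, n_outputs = 8, 4
x = rng.random(n_wavelengths)               # input encoded as optical power per wavelength
W = rng.random((n_outputs, n_wavelengths))  # in-memory transmission weights

detected = np.array([np.sum(W[j] * x) for j in range(n_outputs)])  # per-detector sums
print(np.allclose(detected, W @ x))         # True: WDM summation realizes W @ x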


Circuits That Solve Optimization Problems by Exploiting Physics Inequalities

Eli Yablonovitch

Optimization is vital to Engineering, Artificial Intelligence, and many areas of Science. Mathematically, we usually employ steepest descent or other digital algorithms. But every inequality in Physics performs optimization in the normal course of dynamical evolution, for free. In effect, Physics can provide machines that solve digital optimization problems much faster than any conventional computer.
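
As a toy illustration of the claim that dynamical evolution performs optimization for free, the gradient flow dx/dt = -grad f(x) below relaxes to a minimum of f, much as a dissipative physical system relaxes to an extremum. This is only an analogy, not the specific circuits discussed in the talk.

import numpy as np

def grad_f(x):
    # Toy cost: f(x) = sum((x - 3)^2), minimized at x = 3.
    return 2.0 * (x - 3.0)

x = np.array([10.0, -5.0, 0.0])
dt = 0.01
for _ in range(5000):                     # "dynamical evolution" of the system
    x -= dt * grad_f(x)

print(x)                                  # approaches [3, 3, 3]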


Computing Using Exciton-Polaritons

Vinod Menon

In this talk I will discuss the use of exciton-polaritons, hybrid light-matter quasiparticles, for computing. Specifically, I will discuss the use of polariton condensates formed in organic molecular systems to realize optically and lithographically imprinted lattices, and their implementation for reservoir computing at room temperature. I will conclude with a brief discussion of the use of these condensates for realizing coherent polaritonic circuits.


Ultrafast Neuromorphic Computing with Optical Parametric Oscillators

Midya Parto

Over the past few decades, photonics has played a crucial role in applications ranging from long-haul ultrafast telecommunications to optical sensing and spectroscopy. The sheer volume of data generated by the widespread deployment of such technologies is rapidly growing to levels where conventional computing hardware architectures can no longer keep up. The emerging field of optical computing aims to harness the high bandwidths offered by light for faster and more efficient information processing. In this talk, I will discuss our recent work on leveraging ultrafast and efficient nonlinear photonic processes to perform high-speed optical processing on a variety of platforms.


HiLAB: A New Paradigm in Inverse Design of Large-scale Nanophotonic Devices

Ali Adibi

This talk focuses on HiLAB (Hybrid inverse design with Latent-space learning, Adjoint-based partial optimizations, and Bayesian optimization), a new paradigm for the inverse design of large-scale freeform nanophotonic devices such as metamaterials. HiLAB integrates three components: early-terminated topology optimization combined with image augmentation to generate a reliable training set for machine learning; a Vision Transformer-based variational autoencoder that reduces the design dimensionality of a freeform structure by three to four orders of magnitude; and a Bayesian search that builds a surrogate model of the input-output relation, greatly reducing computational cost while avoiding weak local optima.
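
The skeleton below sketches how such a pipeline might fit together. Every function is a hypothetical placeholder (not the authors' code), the adjoint-based partial optimization is stood in for by a few gradient steps on a toy figure of merit, and the Bayesian search over the latent space is replaced by random sampling for brevity.

import numpy as np

rng = np.random.default_rng(3)
latent_dim = 8

def decode(z):
    # Placeholder for the ViT-VAE decoder: latent vector -> freeform design parameters.
    return np.tanh(np.outer(z, z)).ravel()

def figure_of_merit(design):
    # Placeholder for an electromagnetic simulation of the decoded device.
    return -np.sum((design - 0.25) ** 2)

def partial_adjoint_refine(design, steps=5, lr=0.1):
    # Stand-in for a short adjoint-based local optimization of the decoded design
    # (gradient ascent steps on the toy figure of merit).
    for _ in range(steps):
        design = design + lr * (0.25 - design)
    return design

best_z, best_fom = None, -np.inf
for _ in range(50):                              # latent-space search (random here)
    z = rng.normal(size=latent_dim)
    fom = figure_of_merit(partial_adjoint_refine(decode(z)))
    if fom > best_fom:
        best_z, best_fom = z, fom

print(f"best figure of merit found: {best_fom:.4f}")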


Open Quantum Systems as a Platform for Machine Learning

Christopher Gies

For Quantum Reservoir Computing (QRC), we discuss the relationship between a reservoir's performance for quantum machine learning (QML) and its physical properties in terms of optical absorption. We also address expressivity measures in QRC: by establishing a link between gate-based QML and QRC, we show that the reservoir itself has little impact on the capability to produce nonlinear output functions of a given input. For currently suggested input-encoding schemes, no exponential advantage of the quantum system can be exploited, raising the general question of quantum advantage in QML.


Interpretable AI and Robotics for Physics

Sachin Vaidya

AI-driven scientific discovery currently faces two key challenges: enhancing interpretability to extract meaningful insights from data and advance theory, and automating experiments to accelerate progress in the lab. In this talk, I will discuss our efforts to tackle both. First, I will introduce Kolmogorov-Arnold Networks (KANs), a fully transparent AI architecture that provides insights into complex physical systems, unlike traditional black-box approaches. In the second part, I will focus on the automation of optical experiments using our AI-driven robotic platform, which integrates an LLM-based agent, a robotic arm, and computer vision for the assembly and alignment of optical setups.


Noise-robust Information Extraction from Optical Image Sensors using Eigentasks

Hakan Tureci

Extracting information from optical sensors is especially challenging under dim-light conditions. I will discuss how computing a set of transformations on raw sensor data, called Eigentasks, enables robust information extraction from noisy sensor data. I will introduce the Eigentask framework for physical neural network-based sensors operating in the shot-noise limit, and present its application in experiments using two different optical image sensors. In these experiments, Eigentasks yield a low-dimensional latent space that consistently outperforms standard methods such as PCA and low-pass filtering. I will conclude with a perspective on how Eigentasks may offer a robust preprocessing solution to boost the efficiency and accuracy of machine-vision pipelines.
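
One plausible reading of the construction, sketched below under stated assumptions: given repeated shot-noisy readouts of the sensor features over many inputs, linear combinations ordered by signal-to-noise ratio are obtained from a generalized eigenproblem between the across-input (signal) covariance and the across-repetition (noise) covariance. This is illustrative, not the authors' exact recipe.

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
n_inputs, n_shots, n_features = 200, 50, 16

# Synthetic correlated "signal" features plus per-shot readout noise.
signal = rng.normal(size=(n_inputs, n_features)) @ rng.normal(size=(n_features, n_features))
readout = signal[:, None, :] + rng.normal(scale=2.0, size=(n_inputs, n_shots, n_features))

means = readout.mean(axis=1)                              # per-input averaged features
S = np.cov(means, rowvar=False)                           # signal covariance (across inputs)
N = np.mean([np.cov(readout[i], rowvar=False) for i in range(n_inputs)], axis=0) / n_shots

snr, combos = eigh(S, N)                                  # generalized eigenproblem
eigentasks = means @ combos[:, ::-1]                      # combinations ordered by SNR
print("top eigentask SNRs:", snr[::-1][:3])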


Spatial Ising Machines and High-Dimensional Computing

Claudio Conti

I will review our work on Ising machines based on spatial light modulators and on new algorithms based on high-dimensional spins.


Entropy Computing: A Paradigm for Optimization in Open Photonic Systems

Yong Meng Sua

We introduce a novel computing paradigm, entropy computing, which works by conditioning a quantum reservoir, thereby enabling the stabilization of a ground state. In this work, we experimentally demonstrate the feasibility of entropy computing by building a hybrid photonic-electronic computer that uses measurement-based feedback to solve non-convex optimization problems. The system uses temporal photonic modes to create qudits, encoding probability amplitudes in the time-frequency degree of freedom of a photon. This scheme, when coupled with electronic interconnects, allows us to encode an arbitrary Hamiltonian into the system and to solve non-convex continuous-variable and combinatorial optimization problems.


Programming Light Diffraction for Information Processing and Computational Imaging

Aydogan Ozcan

I will discuss the integration of programmable diffraction with digital neural networks. Diffractive networks are designed by deep learning to all-optically implement various complex functions as the input light diffracts through spatially engineered surfaces. These diffractive processors, integrated with digital networks, have various applications, e.g., image analysis, feature detection, object classification, and seeing through diffusers. Such diffractive systems can broadly impact (1) optical statistical inference engines, (2) computational camera/microscope designs, and (3) the inverse design of task-specific optical systems. I will give examples from each of these areas, which enable transformative capabilities for applications in, e.g., autonomous systems, defense/security, telecommunications, and biomedical imaging.
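
For concreteness, the sketch below implements one layer of a generic diffractive network: the field acquires a phase at an engineered surface and then propagates a distance d in free space via the angular-spectrum method. A trained network stacks several such layers; here the phase values and all parameters are arbitrary stand-ins, not a trained design.

import numpy as np

n, dx, wavelength, d = 128, 4e-6, 750e-9, 3e-3   # grid, pixel pitch, example parameters (assumed)
rng = np.random.default_rng(5)

field = np.zeros((n, n), dtype=complex)
field[48:80, 48:80] = 1.0                        # simple input aperture

phase_mask = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))   # stand-in for a learned surface

fx = np.fft.fftfreq(n, dx)
FX, FY = np.meshgrid(fx, fx, indexing="ij")
kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1 / wavelength**2 - FX**2 - FY**2))
H = np.exp(1j * kz * d)                          # angular-spectrum transfer function

out = np.fft.ifft2(H * np.fft.fft2(field * phase_mask))
print("energy conserved:", np.isclose((np.abs(out) ** 2).sum(), (np.abs(field) ** 2).sum()))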


Integrated Photonic-Electronic Deep Neural Networks

Firooz Aflatouni

Photonic deep neural networks can perform fast and energy-efficient classification through computation by propagation and low-loss optical interconnects. I will first review our work on the first integrated end-to-end photonic-electronic deep neural network for direct image classification, which achieved a classification time of 570 ps. I will then introduce the first monolithically integrated trainable optical nonlinear activation function, designed in a 45 nm CMOS SOI process for scalable silicon photonic deep neural networks; it effectively mitigates process, voltage, and temperature variations, ensuring robustness in photonic deep networks.


Solving Computational Problems with Coupled Lasers

Nir Davidson

Computational problems may be solved by realizing physical systems that simulate them. Here we present a new system of coupled lasers in a modified degenerate cavity that is used to solve difficult computational tasks. The degenerate cavity possesses a huge number of degrees of freedom (300,000 modes in our system) that can be coupled and controlled, with direct access to both the x-space and k-space components of the lasing mode. Constraints placed on these components map onto different computational minimization problems. Due to mode competition, the lasers select the mode with minimal loss and thereby find the solution. We demonstrate this ability for simulating XY spin systems and finding their ground state, for phase retrieval, for imaging through scattering media, and more.
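
To illustrate the XY-spin mapping mentioned above, the sketch below relaxes coupled phases theta_i toward a minimum of the XY energy H = -(1/2) * sum_ij J_ij cos(theta_i - theta_j). In the experiment, mode competition in the laser network plays the role of this relaxation; here it is emulated with plain gradient descent on a random coupling matrix.

import numpy as np

rng = np.random.default_rng(6)
n = 20
J = rng.normal(size=(n, n))
J = (J + J.T) / 2                 # symmetric couplings
np.fill_diagonal(J, 0.0)

theta = rng.uniform(0, 2 * np.pi, n)

def energy(th):
    return -0.5 * np.sum(J * np.cos(th[:, None] - th[None, :]))

for _ in range(3000):
    grad = np.sum(J * np.sin(theta[:, None] - theta[None, :]), axis=1)  # dH/dtheta
    theta -= 0.02 * grad

print("relaxed XY energy:", energy(theta))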