Microsoft Unveils Light-Powered AI Computer 100x Faster Than GPUs
Serge Bulaev
Microsoft has built a new type of computer that uses light instead of electricity to do AI work. This machine can handle some tasks up to 100 times faster and with much less energy than today's best chips. It uses tiny LEDs and sensors to do math with light, making deep learning much quicker. Early tests show it sped up MRI scans and quickly solved large financial problems. If Microsoft can make this technology work at scale, it could change how data centers use power and make AI much more efficient.

Microsoft's new light-powered AI computer promises to reshape hardware economics by using photons instead of electrons. This Analog Optical Computer (AOC) prototype can perform specific AI tasks up to 100 times faster and with 100 times less energy than top-tier GPUs, according to benchmarks from Microsoft Research (news.microsoft.com). The experimental platform leverages analog photonics for complex calculations, signaling a potential turning point for data center efficiency and operational costs.
How the optical core works
Microsoft's Analog Optical Computer is an experimental device that performs AI computations with light rather than electricity. Unlike digital chips that rely on billions of transistors, the AOC modulates the intensity of light beams with arrays of micro-LEDs and photodetectors, executing the core operations of deep learning, such as matrix multiplication, at nearly the speed of light.
These calculations occur as light beams interact within optical waveguides, effectively collapsing millions of digital clock cycles into the travel time of photons.
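The principle can be sketched numerically. In this toy model (a hypothetical illustration, not Microsoft's actual design), LED brightnesses encode the weight matrix, the input vector modulates beam intensities, and each photodetector sums the light it receives, so one "flash" yields the whole matrix-vector product, plus a little analog readout noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def optical_matvec(weights, x, noise_std=0.01):
    """Toy analog optical multiply: ideal product plus readout noise."""
    ideal = weights @ x                           # what the optics compute in one pass
    noise = rng.normal(0.0, noise_std, ideal.shape)
    return ideal + noise                          # photodetector reading

W = rng.uniform(0.0, 1.0, size=(4, 4))            # 16 tunable "on-chip weights"
x = rng.uniform(0.0, 1.0, size=4)                 # input light intensities

digital = W @ x                                   # exact digital reference
analog = optical_matvec(W, x)                     # noisy analog result
print(np.max(np.abs(analog - digital)))           # small analog error
```

The gap between `analog` and `digital` is the precision cost of analog computation that the noise-mitigation research mentioned below aims to control.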
Key prototype numbers:
- Current Scale: 256 on-chip weights, with a clear path to 4,096 using current fabrication methods.
- Operating Conditions: Runs at room temperature without cryogenic cooling.
- Projected Performance: A peak of approximately 500 teraflops once fully scaled.
Early benchmarks and use cases
Two peer-reviewed studies published in 2025 have already demonstrated the AOC's real-world value:
* MRI Reconstruction: Reduced medical imaging time from 30 minutes to just 5 minutes while maintaining full diagnostic quality.
* Financial Optimization: Solved complex transaction settlement problems with over 99 percent accuracy compared to its digital counterpart.
Both studies highlight the AOC's analog precision, although noise mitigation continues to be an active area of research.
Competitive and commercial landscape
The push for photonic computing is industry-wide. Startups like Lightelligence and Celestial AI are developing optical interconnects, while giants like Intel and Broadcom advance silicon photonics. However, full optical processors face significant challenges, including hybrid integration, the lack of optical memory, and manufacturing scale, as outlined in a 2025 Photonics21 report (Photonics21 PDF). While co-packaged optics are entering production now, most experts project broader photonic co-processors will reach hyperscale data centers by 2027.
Outlook for Microsoft's program
To mitigate risk, Microsoft is developing the AOC alongside a high-fidelity software simulator for testing large-scale models. Researchers believe miniaturizing the micro-LED arrays could scale the system to hundreds of millions of weights, making it competitive for mainstream AI inference. Although still a prototype, the AOC's proven efficiency directly addresses the urgent need to reduce the massive energy consumption of cloud-based AI. If Microsoft can maintain its 100x performance advantage in a production-ready design, optical computing could transition from a research project to a fundamental component of future AI infrastructure.
What exactly is Microsoft's Analog Optical Computer and how does it work?
Microsoft's Analog Optical Computer (AOC) is a light-powered prototype that replaces electrons with photons to perform AI calculations. Instead of switching transistors on and off, the machine shapes laser beams with arrays of tiny micro-LEDs; the brightness of each LED acts as a tunable "weight" in an analog neural network. Because light naturally multiplies and adds intensities, a single flash can complete a matrix-vector product that would require thousands of GPU clock cycles.
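The scale of that claim follows from simple counting: an n-by-n matrix-vector product requires one multiply-accumulate (MAC) per weight, work a digital chip spreads over many clock cycles but a single optical pass performs at once. The weight counts below come from the article's prototype figures; the mapping to square matrices is an illustrative assumption:

```python
def macs(n):
    """Multiply-accumulate operations folded into one optical flash
    for an n x n matrix-vector product: one MAC per weight."""
    return n * n

# 16x16 = 256 weights (current prototype), 64x64 = 4,096 (near-term path)
for n in (16, 64):
    print(f"{n}x{n} matrix: {macs(n):,} MACs per flash")
```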
How much faster and more efficient is the AOC compared with today's GPUs?
In head-to-head tests on optimization and inference tasks, the AOC delivered roughly 100× the speed and 100× the energy efficiency of leading GPUs running the same workloads, all while sitting on a desktop at room temperature. The team projects ~500 teraflops of analog compute within the current footprint once fully scaled.
Where has the prototype been tested in the real world?
Two peer-reviewed studies in Nature (2025) validated the hardware on industry problems:
- Healthcare: Microsoft reconstructed noisy MRI data; simulations show scan time could drop from 30 min to 5 min without loss of image quality.
- Banking: The system found optimal settlement paths for complex transaction batches, matching its digital twin's predictions with >99% accuracy.
What are the biggest hurdles before commercial deployment?
- Scale: The demo chip hosts only 256 weights; practical models need hundreds of millions.
- Memory bottleneck: Light cannot yet store data, so the AOC must convert results back to electronic DRAM between layers, adding latency.
- Noise control: Analog light circuits are sensitive to temperature drift and require active calibration to keep AI precision.
- Manufacturing cost: Photonic integrated circuits still trail silicon in yield and volume.
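The noise-control hurdle can be made concrete with a small sketch. Here a fixed multiplicative gain drift (standing in for temperature effects; the 5% figure is a made-up illustrative value, not a measured AOC number) biases every analog readout, and a one-point calibration against a known reference removes it:

```python
import numpy as np

rng = np.random.default_rng(1)
true_vals = rng.uniform(0.0, 1.0, 1000)       # what the optics should output
drift = 1.05                                  # hypothetical 5% thermal gain drift
readings = true_vals * drift                  # every readout is biased

# Calibrate: push a known reference value through the same optical path
# and estimate the gain as measured / expected.
reference = 0.5
gain_est = (reference * drift) / reference
calibrated = readings / gain_est              # drift divided back out

print(np.max(np.abs(readings - true_vals)))   # uncalibrated error, up to ~5%
print(np.max(np.abs(calibrated - true_vals))) # near zero after calibration
```

Real analog circuits also face random shot and readout noise that a single gain correction cannot remove, which is why calibration must be active and ongoing rather than a one-time step.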
Who else is racing toward light-based AI accelerators?
The field is crowded. Broadcom, Intel and Cisco are shipping co-packaged optics for data-center switches, while start-ups push deeper into analog compute:
- Lightelligence (MIT spin-out) already sells a 64-core AI inference ASIC with on-chip optical routing.
- Celestial AI raised more than $330 million for a "Photonic Fabric" that moves data between processors at light speed.
- Neurophos uses meta-surface modulators to fit 128×128 weight arrays on a single die.
Microsoft's bet is that by merging free-space optics with low-noise analog electronics, it can leapfrog these interconnect-first approaches and deliver an end-to-end optical accelerator.