Blog

July 18, 2025

When Light Meets Silicon: Why M12 is Betting on Photonics to Power AI’s Next Leap

Michael Stewart

The rules of data center design are changing. The rapid expansion and scale of AI workloads are driving an evolution of compute- and storage-focused data centers into "AI Factories"—specialized facilities with power and networking requirements so demanding that traditional data center architectures can't handle them. Companies investing in these facilities aren't just buying more servers; they're investing in their organization's future by securing access to the breakthrough technologies that will define AI's next chapter.

Overview

AI Factories represent a fundamental departure from traditional data center models. To run training and inference workloads for tomorrow's large language models, compute jobs are no longer contained within single racks or even traditional server clusters. Instead, they require connecting tens or hundreds of thousands of processing elements via kilometer-length links so that they function as a single, unified machine. This massive scale creates entirely new networking demands that push components originally designed for CPU-centric architectures beyond their limits.

The shift is driving unprecedented interest in “scale-up” network architectures—new approaches to wiring massive GPU clusters together with higher sustained bandwidth while operating within the power constraints of modern data centers. 

At the heart of this transformation is optical networking, or photonics, which uses light rather than electrical signals to move data. This approach promises significantly better energy efficiency and higher bandwidth capacity, making it particularly well-suited for the interconnection-heavy demands of AI workloads.

Challenges & Solutions

The primary challenge facing AI infrastructure today is that traditional networking solutions create a performance bottleneck due to physical limitations: expensive and bulky copper cables that constrain configuration options, the higher power needed to run SerDes and retimers at near-future speeds, and the associated heat buildup in crowded server racks. AI-focused data centers use significantly more power for networking than traditional ones because connecting GPUs requires much higher-bandwidth components than those used to build CPU clusters. Copper-based networking simply cannot scale to meet the bandwidth and power efficiency demands of tomorrow's AI workloads.

The most likely solution lies in optical networking technologies designed specifically for the newest AI servers. Optical fiber is already mainstream for longer-distance interconnects in data centers, and it will soon move much closer to the processors themselves. Co-packaged optics (CPO), which integrates optical components directly with processors at the packaging level, promises to dramatically boost data transfer rates while reducing power consumption. These schemes for embedding optical components into advanced packaging for networked GPUs promise energy efficiency improving from roughly 2-5 pJ/bit today to less than 1 pJ/bit, dropping networking power by an order of magnitude from today's state of the art. Leading startups in this space are now attracting massive investment.
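To make those pJ/bit figures concrete, here is a rough back-of-the-envelope sketch of how per-bit energy translates into sustained networking power. The GPU count and per-GPU bandwidth below are illustrative assumptions, not figures from this post:

```python
def interconnect_power_watts(bandwidth_tbps: float, energy_pj_per_bit: float) -> float:
    """Convert per-bit energy into sustained power at a given bandwidth.

    The units work out neatly: 1 pJ/bit at 1 Tb/s is
    1e-12 J/bit * 1e12 bit/s = 1 W.
    """
    return bandwidth_tbps * energy_pj_per_bit

# Hypothetical cluster: 1,000 GPUs, each with 10 Tb/s of scale-up bandwidth.
gpus = 1_000
bw_per_gpu_tbps = 10.0
total_tbps = gpus * bw_per_gpu_tbps

electrical = interconnect_power_watts(total_tbps, 5.0)  # ~5 pJ/bit: today's electrical SerDes
cpo = interconnect_power_watts(total_tbps, 1.0)         # <1 pJ/bit: co-packaged optics target

print(f"Electrical @5 pJ/bit: {electrical / 1e3:.0f} kW")
print(f"CPO @1 pJ/bit:        {cpo / 1e3:.0f} kW")
# -> 50 kW vs 10 kW for interconnect alone in this toy scenario
```

Scaled to the hundreds of thousands of processors in an AI Factory, that multiple-of-kilowatts gap becomes the difference between a buildable facility and an impossible one.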

The long-awaited debut of CPO technology requires new optical materials, manufacturing techniques, and supply chains to be introduced. Beyond that, it must meet challenging energy efficiency targets while integrating properly with a host of other system constraints, such as thermal control and tolerance, reliability, and ease of installation and maintenance. While CPO combines new light emitters, detectors, modulators, and fiber-optic interfaces into a single product, Linear Pluggable Optics (LPO) offers some of the performance advantages of optics in the familiar pluggable module format, incorporating the latest optical components but omitting the retiming digital signal processor (DSP).

For customers wishing to adopt new optical technology in their scale-up and scale-out networks, or even in the compute element itself, the choices available start to multiply quickly. 

At M12, we expect the leaders in CPO, companies integrating into LPO, and even a renewed cohort of optical compute startups to become established suppliers to AI Factories over the next five years. The challenge is to understand which of these options are ready for production and offer room to expand as AI Factories grow another order of magnitude in scale. Since neither CPO nor LPO is a new concept, our nearer-term priority is to seek companies whose solutions scale in a leading, disruptive way: compatible with existing manufacturing, and adaptable to future changes in how light is sent over these networks, e.g., wavelength and modulation type. It is also useful to remain neutral to whatever optical packaging choices may be made upstream or downstream in the coming years.

An example of where this thesis holds up is nEye Systems, which is developing an optical circuit switch (OCS) called SuperSwitch. It operates at zero to ultra-low power, scales to high port count (port radix) without adding optical loss, and fits in a compact form factor fully compatible with conventional silicon foundry technology available worldwide today, enabling unbeatable economics. Compared to existing or competing approaches to OCS, nEye's technology is 100x smaller, consumes 1,000x less power, switches 10,000x faster, and costs 10x less. OCS will become a key component alongside other technologies that generate and control light closer to the servers or GPUs themselves.

Final Thoughts

The transformation of data centers into AI Factories isn’t just an incremental upgrade—it’s a fundamental architectural shift that demands breakthrough technologies in networking, chip design, and data center operations. Optical networking represents the most promising path forward for connecting the massive scale of compute required for next-generation AI workloads while maintaining energy efficiency.

The companies that succeed in this space will be those that can deliver not just better performance, but solutions that become the preferred—and often exclusive—technology for AI Factory deployments. As hyperscalers and neo-clouds continue to invest heavily in AI infrastructure, the startups building the foundational technologies for these systems will capture significant value in what is becoming one of the most important infrastructure buildouts of our time.