
Piris Labs: Reinventing AI Infrastructure

In the rapidly evolving world of artificial intelligence, infrastructure has become the new frontier of innovation. While headlines often focus on breakthrough models and applications, the true bottleneck lies deeper—in the physical and architectural limits of computation itself. Piris Labs, a young startup founded in 2025 and part of Y Combinator’s Winter 2026 batch, is positioning itself precisely at that critical layer. Based in San Francisco and led by a team of physicists and AI infrastructure veterans, the company is pursuing an ambitious goal: delivering AI inference at what it calls “light speed.”

Piris Labs operates in a domain that most end users never see but that determines whether AI can scale economically. The company offers a full-stack inference service designed to eliminate one of the most persistent constraints in modern computing—the AI data movement bottleneck. By combining proprietary photonic hardware with a vertically optimized software stack, Piris Labs aims to dramatically reduce latency, energy consumption, and cost per inference operation.

This approach matters because the future of AI depends not only on smarter algorithms but on whether those algorithms can run efficiently at scale. As models grow to trillions of parameters, the cost of deploying them threatens to outpace the value they generate. Piris Labs believes it has a solution that could rebalance those economics and unlock the next phase of AI adoption.

What Problem Is Piris Labs Trying to Solve?

Artificial intelligence has made extraordinary progress, but its infrastructure has not kept pace with its ambitions. The central issue is often described as the “memory wall”—a term referring to the growing gap between processor speed and memory bandwidth. In practical terms, this means that modern AI systems spend more time moving data than actually computing results.

Traditional data centers rely heavily on copper-based interconnects to move information between processors and memory. While this technology has served computing well for decades, it struggles under the demands of modern AI workloads. As models grow larger and more complex, data must travel longer distances more frequently, creating latency and energy inefficiencies that compound at scale.
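A back-of-envelope sketch makes the bottleneck concrete. The numbers below are purely illustrative (a hypothetical accelerator, not figures from Piris Labs), but they show why low-batch LLM inference tends to be bound by data movement rather than by compute:

```python
# "Memory wall" illustration with hypothetical hardware numbers.
# A batch-1 matrix-vector multiply performs ~2 FLOPs per weight but
# must stream every weight from memory, so elapsed time is set by
# bandwidth, not by arithmetic throughput.

PEAK_FLOPS = 1.0e15      # hypothetical accelerator: 1 PFLOP/s peak
MEM_BW = 3.0e12          # hypothetical memory bandwidth: 3 TB/s

weights = 7e9            # a 7B-parameter model
bytes_per_weight = 2     # 16-bit weights
flops = 2 * weights      # one multiply-add per weight at batch size 1

compute_time = flops / PEAK_FLOPS                       # seconds of math
transfer_time = (weights * bytes_per_weight) / MEM_BW   # seconds of data movement

print(f"compute:  {compute_time * 1e3:.3f} ms")
print(f"transfer: {transfer_time * 1e3:.3f} ms")
# Transfer time dwarfs compute time: the processor mostly waits on data.
```

Under these assumed numbers, moving the weights takes hundreds of times longer than the arithmetic itself, which is exactly the imbalance the article describes.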

The consequences are significant. High latency slows down real-time applications such as conversational AI and autonomous systems. Inefficient hardware utilization drives up operational costs. Energy consumption becomes unsustainable, both financially and environmentally. Ultimately, these factors impose a ceiling on AI profitability, limiting which use cases can be deployed commercially.

Piris Labs identifies this bottleneck as the primary obstacle preventing AI from reaching its full potential. Rather than attempting incremental improvements, the company is rethinking the interconnect layer itself—the pathways through which data flows inside AI systems.

How Does Photonic Technology Change the Game?

At the core of Piris Labs’ approach is photonics—the use of light instead of electricity to transmit information. Optical communication is already widely used in long-distance networking, but integrating it deeply into computing infrastructure presents both engineering challenges and transformative possibilities.

Optical signals lose far less energy over distance than electrical ones, can carry far more data per channel, and generate far less heat, enabling higher bandwidth at lower energy cost. By developing proprietary optical interconnects, Piris Labs seeks to move data between memory and processors at rates unattainable with copper wiring.

The company’s technology centers on full-stack optical interconnects paired with a purpose-built software layer. This vertical integration is crucial. Hardware alone cannot deliver optimal performance unless the software is designed to exploit its capabilities. Piris Labs therefore optimizes both layers simultaneously, ensuring that data flows efficiently across the entire system.

According to the company, this architecture delivers five times lower latency, ten times lower power consumption per bit, and twice the cost efficiency per token compared to conventional solutions. If validated at scale, these improvements could fundamentally alter the economics of AI deployment.

Why Is Vertical Optimization the Key Strategy?

Modern computing history shows that performance breakthroughs often come from vertical integration—designing hardware and software together rather than treating them as separate domains. Piris Labs is applying this philosophy to the interconnect layer, an area traditionally dominated by standardized components.

The startup draws inspiration from earlier successes in compute optimization, where companies redesigned processors specifically for AI workloads. Piris Labs extends this logic to the communication pathways within data centers. By controlling both the physical interconnects and the software orchestration, the company can eliminate inefficiencies that arise when disparate systems attempt to work together.

This “Groq-style optimization,” as described by the company, focuses on maximizing effective FLOP utilization—the proportion of computational capacity actually used for meaningful work. In conventional systems, a significant percentage of processing power is wasted waiting for data to arrive. Piris Labs’ architecture aims to keep processors continuously fed with information, thereby improving overall efficiency without requiring more hardware.
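Effective FLOP utilization is a simple ratio: useful work divided by what the hardware could have done in the elapsed time. A short sketch with purely illustrative numbers shows how stalls depress it and how keeping processors fed restores it:

```python
# Effective FLOP utilization: the fraction of peak compute capacity
# spent on useful work. All numbers below are illustrative, not
# measurements from Piris Labs or any real system.

def effective_utilization(useful_flops: float, elapsed_s: float,
                          peak_flops: float) -> float:
    """Useful FLOPs divided by the FLOPs the chip could have done."""
    return useful_flops / (elapsed_s * peak_flops)

# Hypothetical: a step needing 1e14 useful FLOPs on a 1 PFLOP/s chip.
# Ideal time is 0.1 s; stalls waiting on data stretch it to 0.4 s.
util_stalled = effective_utilization(1e14, 0.40, 1e15)  # 25%
util_fed = effective_utilization(1e14, 0.11, 1e15)      # ~91%

print(f"stalled: {util_stalled:.0%}, well-fed: {util_fed:.0%}")
```

The same hardware does the same useful work in both cases; only the waiting changes, which is why faster interconnects improve efficiency without adding chips.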

Such optimization could prove especially valuable for inference workloads, where speed and cost determine whether AI services can operate profitably. As businesses increasingly rely on real-time AI responses, the ability to deliver low-latency inference at scale becomes a competitive necessity.

Who Are the Founders Behind Piris Labs?

Piris Labs was founded by two individuals whose backgrounds span both advanced physics and large-scale AI deployment. This combination reflects the interdisciplinary nature of the problem they are tackling.

Ali Khalatpour, the company’s Co-Founder and CEO, is an MIT-trained engineer and optical scientist with experience across academia and government research. His work includes developing groundbreaking semiconductor laser technologies and contributing to NASA projects. Such expertise in photonics provides the technical foundation for Piris Labs’ hardware innovations.

Keyvan Rezaei Moghadam, Co-Founder and President, brings a decade of experience scaling AI and machine learning products at major technology companies, including Meta and X (formerly Twitter). Known for leading high-stakes “zero-to-one” initiatives, he specializes in translating experimental technologies into deployable systems.

Together, the founders represent a rare blend of deep scientific knowledge and practical engineering leadership. Their shared vision is to bridge the gap between theoretical breakthroughs and real-world infrastructure capable of supporting the next generation of AI applications.

Why Is Inference the Strategic Focus?

While much public attention centers on training massive AI models, inference—the process of running trained models to generate outputs—is where most real-world usage occurs. Every chatbot response, recommendation engine update, or autonomous decision relies on inference operations.

Inference workloads differ from training in critical ways. They demand low latency, predictable performance, and cost efficiency. For consumer-facing applications, delays of even a few hundred milliseconds can degrade user experience. For enterprise deployments, operating costs determine whether AI adoption is financially viable.

Piris Labs targets inference because it represents the most immediate constraint on AI scalability. As organizations deploy models across millions of users, the cumulative cost of inference becomes enormous. By reducing latency and energy consumption, the company aims to make advanced AI economically sustainable for widespread use.
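The cumulative cost argument is easy to see with a rough sketch. The traffic figures below are invented for illustration, but the arithmetic shows how small per-token costs compound at scale:

```python
# Why per-token cost dominates at scale. All traffic and pricing
# figures here are hypothetical, chosen only to illustrate the math.

users = 5_000_000                  # hypothetical daily active users
requests_per_user_per_day = 10
tokens_per_request = 500
usd_per_million_tokens = 2.00      # hypothetical serving cost

daily_tokens = users * requests_per_user_per_day * tokens_per_request
daily_cost = daily_tokens / 1e6 * usd_per_million_tokens

print(f"daily tokens: {daily_tokens:,}")
print(f"daily cost:   ${daily_cost:,.0f}")
# Halving cost per token (one of the claimed gains) halves this bill.
```

At these assumed volumes the bill runs to tens of thousands of dollars per day, so even a 2x efficiency gain translates directly into deployment viability.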

The focus on inference also aligns with broader industry trends. As model development matures, attention is shifting toward deployment infrastructure—the systems that bring AI capabilities to everyday products and services.

How Could Piris Labs Reshape AI Economics?

If Piris Labs succeeds, the implications extend far beyond technical performance metrics. Lower inference costs could enable new business models and applications that are currently impractical.

For example, real-time AI assistants could operate continuously without prohibitive expenses. Autonomous systems could process complex data streams instantly. Scientific research could leverage large models without requiring massive budgets. Even smaller organizations could deploy sophisticated AI capabilities previously reserved for tech giants.

Reducing energy consumption also addresses environmental concerns associated with data centers. As global AI usage grows, sustainability becomes an increasingly urgent issue. Photonic interconnects offer a pathway toward greener computing infrastructure.

Ultimately, Piris Labs’ vision is to remove the economic ceiling that currently limits AI innovation. By making trillion-parameter models financially viable, the company hopes to accelerate progress across industries ranging from healthcare to finance.

What Lies Ahead for Piris Labs?

As an early-stage startup with a team of two founders, Piris Labs faces significant challenges. Building new hardware architectures requires substantial capital, manufacturing expertise, and industry partnerships. Demonstrating reliability and scalability will be essential to winning the trust of major data-center operators.

The company is currently focused on delivering its electro-optical engine prototype and securing seed funding to advance development. Success at this stage could attract strategic collaborators and investors eager to gain a foothold in next-generation AI infrastructure.

Piris Labs operates in a competitive landscape where established players and emerging startups alike are racing to solve similar problems. However, its emphasis on photonic interconnects and full-stack optimization differentiates it from companies focusing solely on GPUs or software solutions.

Could Piris Labs Define the Future of AI Infrastructure?

The history of technology suggests that transformative shifts often occur when underlying infrastructure evolves. Just as cloud computing enabled the mobile revolution, breakthroughs in AI infrastructure could unlock capabilities not yet imagined.

Piris Labs represents a bold attempt to redefine how data moves within intelligent systems. By harnessing the speed of light and the precision of vertically integrated design, the company seeks to overcome limitations that have long constrained computational progress.

Whether Piris Labs ultimately becomes a dominant player or a catalyst inspiring broader innovation remains to be seen. What is clear is that the challenge it addresses—the memory wall and the economics of AI compute—is central to the future of artificial intelligence.

As AI continues to permeate every aspect of society, the success of companies like Piris Labs may determine not just how powerful these systems become, but how accessible and sustainable they are for the world at large.