Elevate networking for the age of Gen AI

The Industry's First 3.2 Tbps Multi-GPU SuperNIC Chip

  • Delivers 8x elastic bandwidth for GPUs, congestion-free
  • Multi-port 800GbE, PCIe Gen5 and CXL 2.0+ interfaces
  • Enabled by Enfabrica's patented Accelerated Compute Fabric (ACF) architecture

Before

AI server networking component sprawl

  • NICs, PCIe switches, and rail switches connected as stovepipes
  • Limited bandwidth and fault-tolerance

Congestion-prone data movement across GPUs

  • Multiple device hops
  • Unpredictable load distribution, incast-prone

Ballooning AI Cluster TCO

  • Stranded resources with lower effective bandwidth
  • GPU link failure stalls entire job
Diagram of Before

A New Approach

One collective 8X NIC for GPU collective traffic

  • 8X scale-out RDMA bandwidth
  • Efficient traffic distribution across GPUs

Cut down GPU data movement latencies

  • Up to 66% fewer device hops in large GPU clusters
  • Congestion-free network-to-GPU traffic

Boost AI cluster operational efficiency

  • Eliminates job failures due to link flaps
  • 50% lower network power, 80% fewer components
Diagram of A New Approach

Deploy Easily

Connect seamlessly within AI clusters

  • Using 100/200/400/800GbE optical or copper links
  • To spine switches with high-radix multipathing

Scale quickly without changing AI software

  • Plug and play with standard ibverbs and xCCL (see the sketch below)
  • Transparent scale-out networking acceleration
Diagram of Deploy Easily
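
The "no changes to AI software" claim rests on keeping the standard ibverbs and xCCL interfaces. As a minimal sketch (not Enfabrica-specific, and assuming an ordinary torchrun launch with NCCL and CUDA GPUs available), this is the kind of collective code that is expected to run unchanged over the SuperNIC fabric:

    # Minimal sketch, not Enfabrica-specific: a standard PyTorch + NCCL
    # all-reduce of the kind that "plug and play with ibverbs and xCCL"
    # implies should run unchanged over the scale-out fabric.
    import torch
    import torch.distributed as dist

    def main():
        # Standard NCCL process-group init; the RDMA transport underneath
        # is chosen by the networking stack, not by application code.
        dist.init_process_group(backend="nccl")
        rank = dist.get_rank()
        torch.cuda.set_device(rank % torch.cuda.device_count())

        # An ordinary collective: every rank contributes a tensor and
        # receives the element-wise sum, as in any multi-GPU training loop.
        x = torch.ones(1024, device="cuda") * rank
        dist.all_reduce(x, op=dist.ReduceOp.SUM)

        if rank == 0:
            print(f"all_reduce ok, first element = {x[0].item()}")

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Launched with, for example, torchrun --nproc_per_node=8 allreduce_sketch.py (a hypothetical filename); any SuperNIC-specific transport selection would happen below the xCCL/ibverbs layer, not in this script.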

Experience it

  • Flexible mix of CPU, GPU and CXL.Mem/SSD nodes
  • Familiar interface for AI and HPC frameworks
  • SuperNIC and Memory expansion SDKs
  • Ideal for fast piloting and deployment

The Enfabrica SuperNIC System