
Conversation

@mresvanis (Contributor) commented Jan 15, 2026

Description

This PR enables Fabric Manager configuration for vm-passthrough workloads using Shared NVSwitch virtualization mode.

It lets users configure the Fabric Manager mode (FABRIC_MODE: 0 = full passthrough, 1 = shared NVSwitch, 2 = vGPU) through the ClusterPolicy CRD, improving support for NVIDIA multi-GPU systems in virtualized environments.

In the FM shared NVSwitch virtualization model, the NVIDIA driver manages the NVSwitch devices while the GPU devices are bound to the vfio-pci driver. The goal is for the GPU devices to be passed through to KubeVirt VMs while the fabric is managed on the host.
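A minimal sketch of what the new API type might look like, using controller-gen style markers; the constant, field, and package names below are assumptions based on this description, not the code in this PR:

```go
package v1

// FabricManagerMode selects the Fabric Manager virtualization mode
// (the FABRIC_MODE setting in Fabric Manager's own configuration).
// +kubebuilder:validation:Enum=full-passthrough;shared-nvswitch
type FabricManagerMode string

const (
	// FabricManagerModeFullPassthrough maps to FABRIC_MODE=0 (the default).
	FabricManagerModeFullPassthrough FabricManagerMode = "full-passthrough"
	// FabricManagerModeSharedNVSwitch maps to FABRIC_MODE=1.
	FabricManagerModeSharedNVSwitch FabricManagerMode = "shared-nvswitch"
)

// FabricManagerSpec configures NVIDIA Fabric Manager through ClusterPolicy.
type FabricManagerSpec struct {
	// Mode selects the Fabric Manager virtualization mode.
	// +kubebuilder:default=full-passthrough
	// +optional
	Mode FabricManagerMode `json:"mode,omitempty"`
}
```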

Changes

  • API Extensions:
    • add FabricManagerSpec to the ClusterPolicy CRD with support for two modes:
      • full-passthrough (FABRIC_MODE=0) - default mode.
      • shared-nvswitch (FABRIC_MODE=1) - shared NVSwitch virtualization mode.
  • Controller Logic:
    • add validation to ensure the driver is enabled when using vm-passthrough with FM shared NVSwitch mode (see the sketch after this list).
    • integrate FM configuration checks into the state manager workflow.
  • Driver State Management:
    • add logic to detect and handle Fabric Manager shared NVSwitch mode.
    • adjust the driver startup probe to accommodate Fabric Manager requirements in vm-passthrough with shared NVSwitch mode.
  • VFIO manager:
    • wait for the driver to be ready when FM shared NVSwitch mode is enabled.
    • replace the driver uninstall init container with vfio-manage unbind --all when FM shared NVSwitch mode is enabled.
  • CRD Updates:
    • update all CRD manifests across bundle, config, and deployment directories to include the new Fabric Manager configuration fields.
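A hedged sketch of the validation check from the Controller Logic item above; the function name, import path, and field layout are assumptions for illustration, not the PR's actual identifiers:

```go
package controllers

import (
	"fmt"

	gpuv1 "github.com/NVIDIA/gpu-operator/api/nvidia/v1" // assumed import path
)

// validateFabricManagerConfig is a hypothetical helper enforcing that the
// NVIDIA driver stays enabled when shared NVSwitch mode is combined with
// vm-passthrough workloads: the host driver must keep managing the NVSwitch
// fabric even though the GPUs themselves are handed to vfio-pci.
func validateFabricManagerConfig(spec *gpuv1.ClusterPolicySpec, workload string) error {
	if spec.FabricManager == nil ||
		spec.FabricManager.Mode != gpuv1.FabricManagerModeSharedNVSwitch {
		return nil
	}
	if workload != "vm-passthrough" {
		return nil
	}
	if spec.Driver.Enabled == nil || !*spec.Driver.Enabled {
		return fmt.Errorf("fabricManager mode %q with vm-passthrough workloads requires the NVIDIA driver to be enabled",
			gpuv1.FabricManagerModeSharedNVSwitch)
	}
	return nil
}
```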

Checklist

  • No secrets, sensitive information, or unrelated changes
  • Lint checks passing (make lint)
  • Generated assets in-sync (make validate-generated-assets)
  • Go mod artifacts in-sync (make validate-modules)
  • Test cases are added for new code paths

Testing

TBD

@copy-pr-bot (bot) commented Jan 15, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.


@LandonTClipp commented

How coincidental that I resolved to implement something like this and 2 hours ago you submitted this draft!

I want to ask what the plan is for the CDI side. The ideal scenario is that Fabric Manager can be spawned as a Kata container, which means we need to inject the NVSwitch VFIO cdevs just like we do for passthrough GPUs. When I tried to use the GPU Operator a few months ago, this was simply not possible at the time, so I used libvirt instead. Does the GPU Operator CDI already expose the NVSwitches to k8s now? I apologize if my knowledge is a little out of date.

@mresvanis force-pushed the fabric-manager-configuration branch from 335729f to 5f8e006 on January 20, 2026 at 17:16. Commit message:
When clusterPolicy.fabricManager.mode=shared-nvswitch and
workload=vm-passthrough, the vfio-manager now preserves the
NVIDIA driver for fabric management while enabling GPU device
passthrough to VMs.

Changes:
- Modify TransformVFIOManager to detect shared-nvswitch mode.
- Replace driver uninstall init container with device unbind init
  container.
- Use vfio-manage unbind --all to detach devices from nvidia driver.
- Keep nvidia driver loaded for fabric management functionality.
- Add comprehensive unit tests for both normal and shared-nvswitch
  modes.

The new flow for shared-nvswitch mode for the vfio-manager:
1. InitContainer: vfio-manage unbind --all (unbind from nvidia driver)
2. Container: vfio-manage bind --all (bind to vfio-pci)

This enables simultaneous fabric management and VM passthrough capabilities.
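
A rough Go sketch of the init-container swap this commit message describes; the function signature and the init container name are assumptions for illustration:

```go
package controllers

import (
	appsv1 "k8s.io/api/apps/v1"

	gpuv1 "github.com/NVIDIA/gpu-operator/api/nvidia/v1" // assumed import path
)

// transformVFIOManager is a hypothetical reading of TransformVFIOManager:
// in shared-nvswitch mode the NVIDIA driver must stay loaded on the host to
// manage the NVSwitch fabric, so instead of uninstalling the driver the init
// container only unbinds the GPU devices from it. The main container then
// binds those devices to vfio-pci as before.
func transformVFIOManager(ds *appsv1.DaemonSet, spec *gpuv1.ClusterPolicySpec) {
	if spec.FabricManager == nil ||
		spec.FabricManager.Mode != gpuv1.FabricManagerModeSharedNVSwitch {
		return // default flow: the driver uninstall init container is kept
	}
	for i := range ds.Spec.Template.Spec.InitContainers {
		ic := &ds.Spec.Template.Spec.InitContainers[i]
		if ic.Name != "driver-uninstall" { // hypothetical init container name
			continue
		}
		// Step 1 of the shared-nvswitch flow: detach the GPU devices from
		// the nvidia driver without unloading the driver itself.
		ic.Command = []string{"vfio-manage", "unbind", "--all"}
	}
	// The main container is unchanged: vfio-manage bind --all (step 2).
}
```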
@mresvanis force-pushed the fabric-manager-configuration branch from 5f8e006 to e27e938 on January 21, 2026 at 15:11.