From a0d28b1be6cae38b00ee443dabfe0e7d2dd538cb Mon Sep 17 00:00:00 2001 From: Casey Davenport Date: Mon, 23 Feb 2026 14:05:43 -0800 Subject: [PATCH 1/5] Add documentation for v3 CRD mode --- calico/getting-started/kubernetes/helm.mdx | 12 ++ .../policy-tiers/rbac-tiered-policies.mdx | 8 +- calico/operations/calicoctl/install.mdx | 2 +- calico/operations/install-apiserver.mdx | 16 +- calico/operations/native-v3-crds.mdx | 190 ++++++++++++++++++ calico/reference/architecture/overview.mdx | 2 + .../installation/helm_customization.mdx | 5 +- calico/reference/resources/ippool.mdx | 8 +- sidebars-calico.js | 1 + 9 files changed, 236 insertions(+), 8 deletions(-) create mode 100644 calico/operations/native-v3-crds.mdx diff --git a/calico/getting-started/kubernetes/helm.mdx b/calico/getting-started/kubernetes/helm.mdx index 5785e8383c..aa906fbe4d 100644 --- a/calico/getting-started/kubernetes/helm.mdx +++ b/calico/getting-started/kubernetes/helm.mdx @@ -85,6 +85,18 @@ For more information about configurable options via `values.yaml` please see [He helm install calico-crds projectcalico/crd.projectcalico.org.v1 --version $[releaseTitle] --namespace tigera-operator ``` + :::tip + + To install with [native v3 CRDs](../../operations/native-v3-crds.mdx) (tech preview) instead, use the v3 CRD chart: + + ```bash + helm install calico-crds projectcalico/projectcalico.org.v3 --version $[releaseTitle] --namespace tigera-operator + ``` + + Native v3 CRDs eliminate the need for the aggregation API server and allow `kubectl` to manage `projectcalico.org/v3` resources directly. + + ::: + 1. 
Install the Tigera Operator using the Helm chart: ```bash diff --git a/calico/network-policy/policy-tiers/rbac-tiered-policies.mdx b/calico/network-policy/policy-tiers/rbac-tiered-policies.mdx index 3403cf158d..fc8998080c 100644 --- a/calico/network-policy/policy-tiers/rbac-tiered-policies.mdx +++ b/calico/network-policy/policy-tiers/rbac-tiered-policies.mdx @@ -19,7 +19,13 @@ Self-service is an important part of CI/CD processes for containerization and mi ### Standard Kubernetes RBAC -$[prodname] implements the standard **Kubernetes RBAC Authorization APIs** with `Role` and `ClusterRole` types. The $[prodname] API server integrates with Kubernetes RBAC Authorization APIs as an extension API server. +$[prodname] implements the standard **Kubernetes RBAC Authorization APIs** with `Role` and `ClusterRole` types. In the default installation, the $[prodname] API server integrates with Kubernetes RBAC Authorization APIs as an extension API server. When using [native v3 CRDs](../../operations/native-v3-crds.mdx), tier RBAC is enforced via an admission webhook instead. + +:::note + +When using native `projectcalico.org/v3` CRDs, tier RBAC is enforced for **create, update, and delete** operations via the admission webhook. However, **GET, LIST, and WATCH** operations on tiered policies are not enforced because admission webhooks cannot intercept read requests. This is a known limitation. + +::: ### RBAC for policies and tiers diff --git a/calico/operations/calicoctl/install.mdx b/calico/operations/calicoctl/install.mdx index f951177303..4f778996e4 100644 --- a/calico/operations/calicoctl/install.mdx +++ b/calico/operations/calicoctl/install.mdx @@ -48,7 +48,7 @@ should still be used to manage other Kubernetes resources. :::note If you would like to use `kubectl` to manage `projectcalico.org/v3` API resources, you can use the -[Calico API server](../install-apiserver.mdx). +[Calico API server](../install-apiserver.mdx). 
Alternatively, when using [native v3 CRDs](../native-v3-crds.mdx), `kubectl` can manage `projectcalico.org/v3` resources directly as native CRDs without needing the API server or calicoctl for resource management. ::: diff --git a/calico/operations/install-apiserver.mdx b/calico/operations/install-apiserver.mdx index 08a1c4a359..e86e1606c2 100644 --- a/calico/operations/install-apiserver.mdx +++ b/calico/operations/install-apiserver.mdx @@ -13,6 +13,12 @@ import TabItem from '@theme/TabItem'; Install the Calico API server on an existing cluster to enable management of Calico APIs using kubectl. +:::tip + +Starting in Calico v3.32.0, you can use [native v3 CRDs](native-v3-crds.mdx) to manage `projectcalico.org/v3` resources directly with `kubectl` without installing the aggregation API server. If you are setting up a new cluster and want a simpler architecture, consider native v3 CRDs instead. + +::: + ## Value The API server provides a REST API for Calico, and allows management of `projectcalico.org/v3` APIs using kubectl without the need for calicoctl. @@ -39,6 +45,8 @@ in this document are not required. In previous releases, calicoctl has been required to manage Calico API resources in the `projectcalico.org/v3` API group. The calicoctl CLI tool provides important validation and defaulting on these APIs. The Calico API server performs that defaulting and validation server-side, exposing the same API semantics without a dependency on calicoctl. +Alternatively, when using [native v3 CRDs](native-v3-crds.mdx), `projectcalico.org/v3` resources are native CRDs, so `kubectl` works directly without needing either the API server or calicoctl for resource management. + calicoctl is still required for the following subcommands: - [calicoctl node](../reference/calicoctl/node/index.mdx) @@ -55,14 +63,16 @@ Select the method below based on your installation method. -1. Create an instance of an `operator.tigera.io/APIServer` with the following contents. +1. 
Create an instance of an `operator.tigera.io/APIServer` with the following command. - ```yaml + ```bash + kubectl create -f - < -Once removed, you will need to use calicoctl to manage projectcalico.org/v3 APIs. +Once removed, you will need to use calicoctl to manage projectcalico.org/v3 APIs, unless you are using [native v3 CRDs](native-v3-crds.mdx) where `kubectl` works directly. ## Next steps diff --git a/calico/operations/native-v3-crds.mdx b/calico/operations/native-v3-crds.mdx new file mode 100644 index 0000000000..fb1776a5ab --- /dev/null +++ b/calico/operations/native-v3-crds.mdx @@ -0,0 +1,190 @@ +--- +description: Enable native projectcalico.org/v3 CRDs to use Calico resources directly as CRDs without the aggregation API server. +--- + +# Enable native v3 CRDs + +:::note + +This feature is tech preview. Tech preview features may be subject to significant changes before they become GA. + +::: + +## Big picture + +Enable native `projectcalico.org/v3` CRDs so that Calico resources are backed directly by CRDs, eliminating the need for the Calico aggregation API server. + +## Value + +By default, $[prodname] uses an aggregation API server to serve `projectcalico.org/v3` APIs, storing resources internally as `crd.projectcalico.org/v1` CRDs. 
When using native `projectcalico.org/v3` CRDs, Calico resources are CRDs themselves, which provides several benefits: + +- **Simpler architecture** — no aggregation API server to deploy and manage +- **GitOps-friendly** — no ordering dependencies between CRDs and the API server, so tools like ArgoCD and Flux can apply resources in any order +- **Less platform friction** — removes the need for host-network pods and other requirements of the aggregation API server +- **kubectl works directly** — manage `projectcalico.org/v3` resources with `kubectl` without installing the API server separately +- **Native Kubernetes validation and defaulting** — uses CEL validation rules embedded in the CRD schemas and MutatingAdmissionPolicies for defaulting, leveraging built-in Kubernetes mechanisms instead of a custom API server + +## Concepts + +### How native `projectcalico.org/v3` CRDs work + +When using native `projectcalico.org/v3` CRDs: + +- $[prodname] resources use the `projectcalico.org/v3` API group and are registered as native Kubernetes CRDs. +- The `APIServer` custom resource is still created, but instead of running the aggregation API server, it deploys a webhooks pod that handles validation and defaulting via admission policies. +- $[prodname] auto-detects the mode at startup based on which CRDs are installed on the cluster. If the `projectcalico.org/v3` CRDs are present, it uses them natively; if the `crd.projectcalico.org/v1` CRDs are present, it runs in API server mode. + +### Validation and defaulting + +When using native `projectcalico.org/v3` CRDs, resource validation and defaulting are handled by native CRD validation and defaulting, as well as ValidatingAdmissionPolicies and MutatingAdmissionPolicies. $[prodname] uses [MutatingAdmissionPolicies](https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/) for defaulting, which are currently a **beta** Kubernetes feature. 
You must ensure that the `MutatingAdmissionPolicy` feature gate is enabled on your Kubernetes API server before using native `projectcalico.org/v3` CRDs. + +## Before you begin + +- A Kubernetes cluster **without** $[prodname] installed, or a cluster where you are performing a fresh install. There is no automated migration tooling from an existing API server mode cluster to native `projectcalico.org/v3` CRDs at this time. +- The `MutatingAdmissionPolicy` [feature gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/) must be enabled on the Kubernetes API server. This feature is beta in Kubernetes and is not enabled by default. + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +## How to + +### Install $[prodname] with native `projectcalico.org/v3` CRDs + +Select the method below based on your preferred installation method. + + + + +1. Add the $[prodname] Helm repo: + + ```bash + helm repo add projectcalico https://docs.tigera.io/calico/charts + ``` + +1. Create the `tigera-operator` namespace: + + ```bash + kubectl create namespace tigera-operator + ``` + +1. Install the v3 CRD chart instead of the default v1 CRD chart: + + ```bash + helm install calico-crds projectcalico/projectcalico.org.v3 --version $[releaseTitle] --namespace tigera-operator + ``` + + :::note + + This replaces the `crd.projectcalico.org.v1` chart used in the default installation. Do not install both CRD charts. + + ::: + +1. Install the Tigera Operator: + + ```bash + helm install $[prodnamedash] projectcalico/tigera-operator --version $[releaseTitle] --namespace tigera-operator + ``` + + If you have a `values.yaml` with custom configuration: + + ```bash + helm install $[prodnamedash] projectcalico/tigera-operator --version $[releaseTitle] -f values.yaml --namespace tigera-operator + ``` + + + + +1. 
Install the v3 CRDs: + + ```bash + kubectl create -f $[manifestsUrl]/manifests/v3_projectcalico_org.yaml + ``` + + :::note + + This replaces the `v1_crd_projectcalico_org.yaml` manifest used in the default installation. Do not install both CRD manifests. + + ::: + +1. Install the Tigera Operator and custom resources: + + ```bash + kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml + ``` + + + + +After installing, complete the following steps: + +1. Create the `APIServer` CR to deploy the webhooks pod. This does **not** run the aggregation API server — instead it deploys admission webhooks that handle validation and defaulting. + + ```bash + kubectl create -f - < -o yaml +``` + +### Tier RBAC enforcement + +In both modes, tier-based RBAC uses the same `ClusterRole` and `RoleBinding` definitions with pseudo-resources like `tier.networkpolicies` and `tier.globalnetworkpolicies`. + +In API server mode, tier RBAC is enforced for all operations (create, update, delete, get, list, watch) by the aggregation API server. + +When using native `projectcalico.org/v3` CRDs, tier RBAC is enforced via the admission webhook for **create, update, and delete** operations. However, **GET, LIST, and WATCH** operations on tiered policies are **not enforced** because admission webhooks cannot intercept read operations. This is a known limitation. + +### calicoctl + +`calicoctl` continues to work when using native `projectcalico.org/v3` CRDs but is less necessary since `kubectl` handles Calico resources natively. 
`calicoctl` is still useful for: + +- [calicoctl node](../reference/calicoctl/node/index.mdx) subcommands +- [calicoctl ipam](../reference/calicoctl/ipam/index.mdx) subcommands +- [calicoctl convert](../reference/calicoctl/convert.mdx) +- [calicoctl version](../reference/calicoctl/version.mdx) + +## Known limitations + +- **No automated migration** — There is no automated migration tooling for converting an existing cluster from API server mode to native `projectcalico.org/v3` CRDs. This is planned for a follow-on release. +- **GET/LIST/WATCH tier RBAC not enforced** — Admission webhooks cannot intercept read operations, so tier-based RBAC for GET, LIST, and WATCH is not enforced when using native `projectcalico.org/v3` CRDs. diff --git a/calico/reference/architecture/overview.mdx b/calico/reference/architecture/overview.mdx index 071be956dc..48fbcb64c6 100644 --- a/calico/reference/architecture/overview.mdx +++ b/calico/reference/architecture/overview.mdx @@ -32,6 +32,8 @@ The following diagram shows the required and optional $[prodname] components for **Main task**: Lets you manage $[prodname] resources directly with `kubectl`. +In the default installation, this is an aggregation API server that translates between `projectcalico.org/v3` and internal CRD representations. When using [native v3 CRDs](../../operations/native-v3-crds.mdx), this component is not used — `kubectl` works directly with `projectcalico.org/v3` CRDs, and validation and defaulting are handled by admission policies instead. + ## Felix **Main task**: Programs routes and ACLs, and anything else required on the host to provide desired connectivity for the endpoints on that host. Runs on each machine that hosts endpoints. Runs as an agent daemon. [Felix resource](../resources/felixconfig.mdx). 
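To make the kubectl-direct workflow concrete, here is a sketch of a `projectcalico.org/v3` resource expressed as an ordinary manifest. The pool name is hypothetical and the field values are illustrative, drawn from the IPPool reference defaults elsewhere in this patch:

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  # Illustrative name; any valid resource name works.
  name: example-v4-pool
spec:
  cidr: 192.168.0.0/16
  blockSize: 26          # default for IPv4 pools
  vxlanMode: CrossSubnet
  ipipMode: Never        # cannot be enabled at the same time as vxlanMode
  natOutgoing: true
```

Under native v3 CRDs this can be applied directly with `kubectl apply -f`, with validation and defaulting handled by the CRD schema and admission policies; in the default installation the same manifest is served through the aggregation API server instead.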
diff --git a/calico/reference/installation/helm_customization.mdx b/calico/reference/installation/helm_customization.mdx index ee5dc8352c..42f0c64ec3 100644 --- a/calico/reference/installation/helm_customization.mdx +++ b/calico/reference/installation/helm_customization.mdx @@ -10,8 +10,9 @@ You can customize the following resources and settings during $[prodname] Helm-b - [Default felix configuration](../resources/felixconfig.mdx#spec) :::note -If you customize felix configuration when you install $[prodname], the `v1 apiVersion` is used. However, when you apply -felix configuration customization after installation (when the tigera-apiserver is running), use the `v3 apiVersion`. +If you customize felix configuration when you install $[prodname], the `crd.projectcalico.org/v1` API group is used. However, when you apply +felix configuration customization after installation (when the tigera-apiserver is running), use the `projectcalico.org/v3` API group. +When using [native v3 CRDs](../../operations/native-v3-crds.mdx), only the `projectcalico.org/v3` API group is installed and should always be used. ::: ### Sample values.yaml diff --git a/calico/reference/resources/ippool.mdx b/calico/reference/resources/ippool.mdx index 215188859a..734d614bcc 100644 --- a/calico/reference/resources/ippool.mdx +++ b/calico/reference/resources/ippool.mdx @@ -40,7 +40,7 @@ spec: | Field | Description | Accepted Values | Schema | Default | | ---------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------- | --------------------------------------------- | -| cidr | IP range to use for this pool. 
| A valid IPv4 or IPv6 CIDR. Subnet length must be at least big enough to fit a single block (by default `/26` for IPv4 or `/122` for IPv6). Must not overlap with the Link Local range `169.254.0.0/16` or `fe80::/10`. | string | | +| cidr | IP range to use for this pool. See [CIDR overlap validation](#cidr-overlap-validation) for details on overlap behavior. | A valid IPv4 or IPv6 CIDR. Subnet length must be at least big enough to fit a single block (by default `/26` for IPv4 or `/122` for IPv6). Must not overlap with the Link Local range `169.254.0.0/16` or `fe80::/10`. | string | | | blockSize | The CIDR size of allocation blocks used by this pool. Blocks are allocated on demand to hosts and are used to aggregate routes. The value can only be set when the pool is created. | 20 to 32 (inclusive) for IPv4 and 116 to 128 (inclusive) for IPv6 | int | `26` for IPv4 pools and `122` for IPv6 pools. | | ipipMode | The mode defining when IPIP will be used. Cannot be set at the same time as `vxlanMode`. | Always, CrossSubnet, Never | string | `Never` | | vxlanMode | The mode defining when VXLAN will be used. Cannot be set at the same time as `ipipMode`. | Always, CrossSubnet, Never | string | `Never` | @@ -78,6 +78,12 @@ addresses. $[prodname] supports Kubernetes [annotations that force the use of specific IP addresses](../configure-cni-plugins.mdx#requesting-a-specific-ip-address). These annotations take precedence over the `allowedUses` field. +### CIDR overlap validation + +By default (API server mode), creating an IPPool with a CIDR that overlaps an existing pool is rejected synchronously at creation time. + +When using [native v3 CRDs](../../operations/native-v3-crds.mdx), CIDR overlap validation is **asynchronous**. Pools with overlapping CIDRs are created successfully but receive a `Disabled` status condition. IPAM does not allocate addresses from disabled pools. Check the IPPool status to identify any pools that have been disabled due to CIDR overlap. 
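The overlap condition itself is ordinary CIDR arithmetic. As a rough illustration only (this is not Calico's implementation), the check can be sketched with Python's standard `ipaddress` module:

```python
import ipaddress

def overlapping_pools(new_cidr, existing_cidrs):
    """Return the existing CIDRs that overlap the candidate pool CIDR."""
    new = ipaddress.ip_network(new_cidr)
    result = []
    for cidr in existing_cidrs:
        other = ipaddress.ip_network(cidr)
        # overlaps() raises TypeError across IP versions, so match versions first.
        if other.version == new.version and other.overlaps(new):
            result.append(cidr)
    return result

print(overlapping_pools("10.0.0.0/16", ["10.0.128.0/17", "192.168.0.0/16", "fd00::/64"]))
# → ['10.0.128.0/17']
```

In API server mode a non-empty result would cause the create to be rejected up front; with native v3 CRDs the equivalent condition is reported asynchronously via the pool's status.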
+ ### IPIP Routing of packets using IP-in-IP will be used when the destination IP address diff --git a/sidebars-calico.js b/sidebars-calico.js index f4845c6b89..5b6dabd920 100644 --- a/sidebars-calico.js +++ b/sidebars-calico.js @@ -536,6 +536,7 @@ module.exports = { 'operations/datastore-migration', 'operations/operator-migration', 'operations/install-apiserver', + 'operations/native-v3-crds', { type: 'category', label: 'Monitor', From 05fa16eb65cd549da1a1225a85d8e81a34acb07d Mon Sep 17 00:00:00 2001 From: Casey Davenport Date: Wed, 1 Apr 2026 11:24:03 -0700 Subject: [PATCH 2/5] Address PR review feedback Remove version-specific callouts (implied by being in 3.32 docs), capitalize Felix, remove stale GA feature status tag. --- calico/operations/install-apiserver.mdx | 6 ++---- calico/reference/installation/helm_customization.mdx | 4 ++-- 2 files changed, 4 insertions(+), 6 deletions(-) diff --git a/calico/operations/install-apiserver.mdx b/calico/operations/install-apiserver.mdx index e86e1606c2..0793727dfb 100644 --- a/calico/operations/install-apiserver.mdx +++ b/calico/operations/install-apiserver.mdx @@ -9,13 +9,11 @@ import TabItem from '@theme/TabItem'; ## Big picture -[ **Feature status**: GA in Calico v3.20+ ] - Install the Calico API server on an existing cluster to enable management of Calico APIs using kubectl. :::tip -Starting in Calico v3.32.0, you can use [native v3 CRDs](native-v3-crds.mdx) to manage `projectcalico.org/v3` resources directly with `kubectl` without installing the aggregation API server. If you are setting up a new cluster and want a simpler architecture, consider native v3 CRDs instead. +You can use [native v3 CRDs](native-v3-crds.mdx) to manage `projectcalico.org/v3` resources directly with `kubectl` without installing the aggregation API server. If you are setting up a new cluster and want a simpler architecture, consider native v3 CRDs instead. 
::: @@ -25,7 +23,7 @@ The API server provides a REST API for Calico, and allows management of `project :::note -Starting in Calico v3.20.0, new operator-based installations of Calico include the API server component by default, so the instructions +New operator-based installations of Calico include the API server component by default, so the instructions in this document are not required. ::: diff --git a/calico/reference/installation/helm_customization.mdx b/calico/reference/installation/helm_customization.mdx index 42f0c64ec3..d0f7d7e449 100644 --- a/calico/reference/installation/helm_customization.mdx +++ b/calico/reference/installation/helm_customization.mdx @@ -10,8 +10,8 @@ You can customize the following resources and settings during $[prodname] Helm-b - [Default felix configuration](../resources/felixconfig.mdx#spec) :::note -If you customize felix configuration when you install $[prodname], the `crd.projectcalico.org/v1` API group is used. However, when you apply -felix configuration customization after installation (when the tigera-apiserver is running), use the `projectcalico.org/v3` API group. +If you customize Felix configuration when you install $[prodname], the `crd.projectcalico.org/v1` API group is used. However, when you apply +Felix configuration customization after installation (when the tigera-apiserver is running), use the `projectcalico.org/v3` API group. When using [native v3 CRDs](../../operations/native-v3-crds.mdx), only the `projectcalico.org/v3` API group is installed and should always be used. ::: From d4d87e47e1b6d8e2d51b9dc338d5c017ac0dfcc3 Mon Sep 17 00:00:00 2001 From: Casey Davenport Date: Wed, 1 Apr 2026 11:32:54 -0700 Subject: [PATCH 3/5] Add v3 CRD mode documentation to Calico Enterprise Copy native-v3-crds guide and apply the same reference updates (RBAC tier enforcement, architecture overview, helm customization, IPPool CIDR overlap validation) to CE docs. 
--- .../policy-tiers/rbac-tiered-policies.mdx | 8 +- .../operations/native-v3-crds.mdx | 190 ++++++++++++++++++ .../reference/architecture/overview.mdx | 2 + .../installation/helm_customization.mdx | 5 +- .../reference/resources/ippool.mdx | 8 +- sidebars-calico-enterprise.js | 1 + 6 files changed, 210 insertions(+), 4 deletions(-) create mode 100644 calico-enterprise/operations/native-v3-crds.mdx diff --git a/calico-enterprise/network-policy/policy-tiers/rbac-tiered-policies.mdx b/calico-enterprise/network-policy/policy-tiers/rbac-tiered-policies.mdx index 73acbc3c49..2dbbe80382 100644 --- a/calico-enterprise/network-policy/policy-tiers/rbac-tiered-policies.mdx +++ b/calico-enterprise/network-policy/policy-tiers/rbac-tiered-policies.mdx @@ -19,7 +19,13 @@ Self-service is an important part of CI/CD processes for containerization and mi ### Standard Kubernetes RBAC -$[prodname] implements the standard **Kubernetes RBAC Authorization APIs** with `Role` and `ClusterRole` types. The $[prodname] API server integrates with Kubernetes RBAC Authorization APIs as an extension API server. +$[prodname] implements the standard **Kubernetes RBAC Authorization APIs** with `Role` and `ClusterRole` types. In the default installation, the $[prodname] API server integrates with Kubernetes RBAC Authorization APIs as an extension API server. When using [native v3 CRDs](../../operations/native-v3-crds.mdx), tier RBAC is enforced via an admission webhook instead. + +:::note + +When using native `projectcalico.org/v3` CRDs, tier RBAC is enforced for **create, update, and delete** operations via the admission webhook. However, **GET, LIST, and WATCH** operations on tiered policies are not enforced because admission webhooks cannot intercept read requests. This is a known limitation. 
+ +::: ### RBAC for policies and tiers diff --git a/calico-enterprise/operations/native-v3-crds.mdx b/calico-enterprise/operations/native-v3-crds.mdx new file mode 100644 index 0000000000..fb1776a5ab --- /dev/null +++ b/calico-enterprise/operations/native-v3-crds.mdx @@ -0,0 +1,190 @@ +--- +description: Enable native projectcalico.org/v3 CRDs to use Calico resources directly as CRDs without the aggregation API server. +--- + +# Enable native v3 CRDs + +:::note + +This feature is tech preview. Tech preview features may be subject to significant changes before they become GA. + +::: + +## Big picture + +Enable native `projectcalico.org/v3` CRDs so that Calico resources are backed directly by CRDs, eliminating the need for the Calico aggregation API server. + +## Value + +By default, $[prodname] uses an aggregation API server to serve `projectcalico.org/v3` APIs, storing resources internally as `crd.projectcalico.org/v1` CRDs. When using native `projectcalico.org/v3` CRDs, Calico resources are CRDs themselves, which provides several benefits: + +- **Simpler architecture** — no aggregation API server to deploy and manage +- **GitOps-friendly** — no ordering dependencies between CRDs and the API server, so tools like ArgoCD and Flux can apply resources in any order +- **Less platform friction** — removes the need for host-network pods and other requirements of the aggregation API server +- **kubectl works directly** — manage `projectcalico.org/v3` resources with `kubectl` without installing the API server separately +- **Native Kubernetes validation and defaulting** — uses CEL validation rules embedded in the CRD schemas and MutatingAdmissionPolicies for defaulting, leveraging built-in Kubernetes mechanisms instead of a custom API server + +## Concepts + +### How native `projectcalico.org/v3` CRDs work + +When using native `projectcalico.org/v3` CRDs: + +- $[prodname] resources use the `projectcalico.org/v3` API group and are registered as native Kubernetes CRDs. 
+- The `APIServer` custom resource is still created, but instead of running the aggregation API server, it deploys a webhooks pod that handles validation and defaulting via admission policies. +- $[prodname] auto-detects the mode at startup based on which CRDs are installed on the cluster. If the `projectcalico.org/v3` CRDs are present, it uses them natively; if the `crd.projectcalico.org/v1` CRDs are present, it runs in API server mode. + +### Validation and defaulting + +When using native `projectcalico.org/v3` CRDs, resource validation and defaulting are handled by native CRD validation and defaulting, as well as ValidatingAdmissionPolicies and MutatingAdmissionPolicies. $[prodname] uses [MutatingAdmissionPolicies](https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/) for defaulting, which are currently a **beta** Kubernetes feature. You must ensure that the `MutatingAdmissionPolicy` feature gate is enabled on your Kubernetes API server before using native `projectcalico.org/v3` CRDs. + +## Before you begin + +- A Kubernetes cluster **without** $[prodname] installed, or a cluster where you are performing a fresh install. There is no automated migration tooling from an existing API server mode cluster to native `projectcalico.org/v3` CRDs at this time. +- The `MutatingAdmissionPolicy` [feature gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/) must be enabled on the Kubernetes API server. This feature is beta in Kubernetes and is not enabled by default. + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +## How to + +### Install $[prodname] with native `projectcalico.org/v3` CRDs + +Select the method below based on your preferred installation method. + + + + +1. Add the $[prodname] Helm repo: + + ```bash + helm repo add projectcalico https://docs.tigera.io/calico/charts + ``` + +1. 
Create the `tigera-operator` namespace: + + ```bash + kubectl create namespace tigera-operator + ``` + +1. Install the v3 CRD chart instead of the default v1 CRD chart: + + ```bash + helm install calico-crds projectcalico/projectcalico.org.v3 --version $[releaseTitle] --namespace tigera-operator + ``` + + :::note + + This replaces the `crd.projectcalico.org.v1` chart used in the default installation. Do not install both CRD charts. + + ::: + +1. Install the Tigera Operator: + + ```bash + helm install $[prodnamedash] projectcalico/tigera-operator --version $[releaseTitle] --namespace tigera-operator + ``` + + If you have a `values.yaml` with custom configuration: + + ```bash + helm install $[prodnamedash] projectcalico/tigera-operator --version $[releaseTitle] -f values.yaml --namespace tigera-operator + ``` + + + + +1. Install the v3 CRDs: + + ```bash + kubectl create -f $[manifestsUrl]/manifests/v3_projectcalico_org.yaml + ``` + + :::note + + This replaces the `v1_crd_projectcalico_org.yaml` manifest used in the default installation. Do not install both CRD manifests. + + ::: + +1. Install the Tigera Operator and custom resources: + + ```bash + kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml + ``` + + + + +After installing, complete the following steps: + +1. Create the `APIServer` CR to deploy the webhooks pod. This does **not** run the aggregation API server — instead it deploys admission webhooks that handle validation and defaulting. + + ```bash + kubectl create -f - < -o yaml +``` + +### Tier RBAC enforcement + +In both modes, tier-based RBAC uses the same `ClusterRole` and `RoleBinding` definitions with pseudo-resources like `tier.networkpolicies` and `tier.globalnetworkpolicies`. + +In API server mode, tier RBAC is enforced for all operations (create, update, delete, get, list, watch) by the aggregation API server. 
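Regardless of mode, a tier-scoped `ClusterRole` built on these pseudo-resources has the same shape. The following is a sketch only; the role name, tier name (`security`), and `resourceNames` pattern are hypothetical:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # Hypothetical role name for illustration.
  name: security-tier-policy-editor
rules:
  # Read access to the tier object itself.
  - apiGroups: ["projectcalico.org"]
    resources: ["tiers"]
    resourceNames: ["security"]
    verbs: ["get"]
  # Manage policies in that tier via the tier.networkpolicies pseudo-resource.
  - apiGroups: ["projectcalico.org"]
    resources: ["tier.networkpolicies"]
    resourceNames: ["security.*"]
    verbs: ["create", "update", "delete", "get", "list", "watch"]
```

A `RoleBinding` or `ClusterRoleBinding` then grants this role to the relevant users or groups as usual.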
+ +When using native `projectcalico.org/v3` CRDs, tier RBAC is enforced via the admission webhook for **create, update, and delete** operations. However, **GET, LIST, and WATCH** operations on tiered policies are **not enforced** because admission webhooks cannot intercept read operations. This is a known limitation. + +### calicoctl + +`calicoctl` continues to work when using native `projectcalico.org/v3` CRDs but is less necessary since `kubectl` handles Calico resources natively. `calicoctl` is still useful for: + +- [calicoctl node](../reference/calicoctl/node/index.mdx) subcommands +- [calicoctl ipam](../reference/calicoctl/ipam/index.mdx) subcommands +- [calicoctl convert](../reference/calicoctl/convert.mdx) +- [calicoctl version](../reference/calicoctl/version.mdx) + +## Known limitations + +- **No automated migration** — There is no automated migration tooling for converting an existing cluster from API server mode to native `projectcalico.org/v3` CRDs. This is planned for a follow-on release. +- **GET/LIST/WATCH tier RBAC not enforced** — Admission webhooks cannot intercept read operations, so tier-based RBAC for GET, LIST, and WATCH is not enforced when using native `projectcalico.org/v3` CRDs. diff --git a/calico-enterprise/reference/architecture/overview.mdx b/calico-enterprise/reference/architecture/overview.mdx index 2c41f03888..29a092639a 100644 --- a/calico-enterprise/reference/architecture/overview.mdx +++ b/calico-enterprise/reference/architecture/overview.mdx @@ -152,6 +152,8 @@ The Linseed API uses mTLS to connect to clients, and provides an API to access E **Main task**: Allows users to manage $[prodname] resources such as policies and tiers through `kubectl` or the Kubernetes API. `kubectl` has significant advantages over `calicoctl` including: audit logging, RBAC using Kubernetes Roles and RoleBindings, and not needing to provide privileged Kubernetes CRD access to anyone who needs to manage resources. 
[API server](../installation/api.mdx#apiserver). +In the default installation, this is an aggregation API server that translates between `projectcalico.org/v3` and internal CRD representations. When using [native v3 CRDs](../../operations/native-v3-crds.mdx), this component is not used — `kubectl` works directly with `projectcalico.org/v3` CRDs, and validation and defaulting are handled by admission policies instead. + ### BIRD **Main task**: Gets routes from Felix and distributes to BGP peers on the network for inter-host routing. Runs on each node that hosts a Felix agent. Open source, internet routing daemon. [BIRD](../component-resources/node/configuration.mdx#content-main). diff --git a/calico-enterprise/reference/installation/helm_customization.mdx b/calico-enterprise/reference/installation/helm_customization.mdx index 08ea877af9..e13274322b 100644 --- a/calico-enterprise/reference/installation/helm_customization.mdx +++ b/calico-enterprise/reference/installation/helm_customization.mdx @@ -21,8 +21,9 @@ You can customize the following resources and settings during $[prodname] Helm-b - [Default felix configuration](../resources/felixconfig.mdx#spec) :::note -If you customize felix configuration when you install $[prodname], the `v1 apiVersion` is used. However, when you apply -felix configuration customization after installation (when the calico-apiserver is running), use the `v3 apiVersion`. +If you customize Felix configuration when you install $[prodname], the `crd.projectcalico.org/v1` API group is used. However, when you apply +Felix configuration customization after installation (when the calico-apiserver is running), use the `projectcalico.org/v3` API group. +When using [native v3 CRDs](../../operations/native-v3-crds.mdx), only the `projectcalico.org/v3` API group is installed and should always be used. 
::: ### Sample values.yaml diff --git a/calico-enterprise/reference/resources/ippool.mdx b/calico-enterprise/reference/resources/ippool.mdx index cb63f0ba02..0f990b2a6e 100644 --- a/calico-enterprise/reference/resources/ippool.mdx +++ b/calico-enterprise/reference/resources/ippool.mdx @@ -41,7 +41,7 @@ spec: | Field | Description | Accepted Values | Schema | Default | | ---------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------- | --------------------------------------------- | -| cidr | IP range to use for this pool. | A valid IPv4 or IPv6 CIDR. Subnet length must be at least big enough to fit a single block (by default `/26` for IPv4 or `/122` for IPv6). Must not overlap with the Link Local range `169.254.0.0/16` or `fe80::/10`. | string | | +| cidr | IP range to use for this pool. See [CIDR overlap validation](#cidr-overlap-validation) for details on overlap behavior. | A valid IPv4 or IPv6 CIDR. Subnet length must be at least big enough to fit a single block (by default `/26` for IPv4 or `/122` for IPv6). Must not overlap with the Link Local range `169.254.0.0/16` or `fe80::/10`. | string | | | blockSize | The CIDR size of allocation blocks used by this pool. Blocks are allocated on demand to hosts and are used to aggregate routes. The value can only be set when the pool is created. | 20 to 32 (inclusive) for IPv4 and 116 to 128 (inclusive) for IPv6 | int | `26` for IPv4 pools and `122` for IPv6 pools. | | ipipMode | The mode defining when IPIP will be used. Cannot be set at the same time as `vxlanMode`. 
| Always, CrossSubnet, Never | string | `Never` | | vxlanMode | The mode defining when VXLAN will be used. Cannot be set at the same time as `ipipMode`. | Always, CrossSubnet, Never | string | `Never` | @@ -82,6 +82,12 @@ addresses. $[prodname] supports Kubernetes [annotations that force the use of specific IP addresses](../component-resources/configuration.mdx#requesting-a-specific-ip-address). These annotations take precedence over the `allowedUses` field. +### CIDR overlap validation + +By default (API server mode), creating an IPPool with a CIDR that overlaps an existing pool is rejected synchronously at creation time. + +When using [native v3 CRDs](../../operations/native-v3-crds.mdx), CIDR overlap validation is **asynchronous**. Pools with overlapping CIDRs are created successfully but receive a `Disabled` status condition. IPAM does not allocate addresses from disabled pools. Check the IPPool status to identify any pools that have been disabled due to CIDR overlap. + ### AWS-backed pools $[prodname] supports IP pools that are backed by the AWS fabric. This feature was added in order diff --git a/sidebars-calico-enterprise.js b/sidebars-calico-enterprise.js index d9ad115517..35b79b4bff 100644 --- a/sidebars-calico-enterprise.js +++ b/sidebars-calico-enterprise.js @@ -608,6 +608,7 @@ module.exports = { 'operations/ebpf/troubleshoot-ebpf', ], }, + 'operations/native-v3-crds', 'operations/nftables', 'operations/decommissioning-a-node', { From 89627376a6853b915e4405f4a9b89e360ff17fef Mon Sep 17 00:00:00 2001 From: Casey Davenport Date: Wed, 1 Apr 2026 14:18:17 -0700 Subject: [PATCH 4/5] Remove calicoctl section from CE native v3 CRDs doc calicoctl docs do not exist in the calico-enterprise directory, so the cross-links break the build. 
--- calico-enterprise/operations/native-v3-crds.mdx | 9 --------- 1 file changed, 9 deletions(-) diff --git a/calico-enterprise/operations/native-v3-crds.mdx b/calico-enterprise/operations/native-v3-crds.mdx index fb1776a5ab..b7802a5ee7 100644 --- a/calico-enterprise/operations/native-v3-crds.mdx +++ b/calico-enterprise/operations/native-v3-crds.mdx @@ -175,15 +175,6 @@ In API server mode, tier RBAC is enforced for all operations (create, update, de When using native `projectcalico.org/v3` CRDs, tier RBAC is enforced via the admission webhook for **create, update, and delete** operations. However, **GET, LIST, and WATCH** operations on tiered policies are **not enforced** because admission webhooks cannot intercept read operations. This is a known limitation. -### calicoctl - -`calicoctl` continues to work when using native `projectcalico.org/v3` CRDs but is less necessary since `kubectl` handles Calico resources natively. `calicoctl` is still useful for: - -- [calicoctl node](../reference/calicoctl/node/index.mdx) subcommands -- [calicoctl ipam](../reference/calicoctl/ipam/index.mdx) subcommands -- [calicoctl convert](../reference/calicoctl/convert.mdx) -- [calicoctl version](../reference/calicoctl/version.mdx) - ## Known limitations - **No automated migration** — There is no automated migration tooling for converting an existing cluster from API server mode to native `projectcalico.org/v3` CRDs. This is planned for a follow-on release. From 27e10466a117490e391d7a2cac155ba9e601c829 Mon Sep 17 00:00:00 2001 From: Casey Davenport Date: Wed, 1 Apr 2026 14:50:56 -0700 Subject: [PATCH 5/5] Update native v3 CRDs docs now that automated migration exists The "no automated migration" limitation and prereq language are stale now that the migration controller and docs (PR #2595) exist. Replace with a link to the migration guide. 
--- calico-enterprise/operations/native-v3-crds.mdx | 3 +-- calico/operations/native-v3-crds.mdx | 3 +-- 2 files changed, 2 insertions(+), 4 deletions(-) diff --git a/calico-enterprise/operations/native-v3-crds.mdx b/calico-enterprise/operations/native-v3-crds.mdx index b7802a5ee7..539cfec81b 100644 --- a/calico-enterprise/operations/native-v3-crds.mdx +++ b/calico-enterprise/operations/native-v3-crds.mdx @@ -40,7 +40,7 @@ When using native `projectcalico.org/v3` CRDs, resource validation and defaultin ## Before you begin -- A Kubernetes cluster **without** $[prodname] installed, or a cluster where you are performing a fresh install. There is no automated migration tooling from an existing API server mode cluster to native `projectcalico.org/v3` CRDs at this time. +- A Kubernetes cluster **without** $[prodname] installed, or a cluster where you are performing a fresh install. To migrate an existing cluster from API server mode, see [Migrate from API server to native CRDs](crd-migration.mdx). - The `MutatingAdmissionPolicy` [feature gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/) must be enabled on the Kubernetes API server. This feature is beta in Kubernetes and is not enabled by default. import Tabs from '@theme/Tabs'; @@ -177,5 +177,4 @@ When using native `projectcalico.org/v3` CRDs, tier RBAC is enforced via the adm ## Known limitations -- **No automated migration** — There is no automated migration tooling for converting an existing cluster from API server mode to native `projectcalico.org/v3` CRDs. This is planned for a follow-on release. - **GET/LIST/WATCH tier RBAC not enforced** — Admission webhooks cannot intercept read operations, so tier-based RBAC for GET, LIST, and WATCH is not enforced when using native `projectcalico.org/v3` CRDs. 
diff --git a/calico/operations/native-v3-crds.mdx b/calico/operations/native-v3-crds.mdx index fb1776a5ab..57be41735b 100644 --- a/calico/operations/native-v3-crds.mdx +++ b/calico/operations/native-v3-crds.mdx @@ -40,7 +40,7 @@ When using native `projectcalico.org/v3` CRDs, resource validation and defaultin ## Before you begin -- A Kubernetes cluster **without** $[prodname] installed, or a cluster where you are performing a fresh install. There is no automated migration tooling from an existing API server mode cluster to native `projectcalico.org/v3` CRDs at this time. +- A Kubernetes cluster **without** $[prodname] installed, or a cluster where you are performing a fresh install. To migrate an existing cluster from API server mode, see [Migrate from API server to native CRDs](crd-migration.mdx). - The `MutatingAdmissionPolicy` [feature gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/) must be enabled on the Kubernetes API server. This feature is beta in Kubernetes and is not enabled by default. import Tabs from '@theme/Tabs'; @@ -186,5 +186,4 @@ When using native `projectcalico.org/v3` CRDs, tier RBAC is enforced via the adm ## Known limitations -- **No automated migration** — There is no automated migration tooling for converting an existing cluster from API server mode to native `projectcalico.org/v3` CRDs. This is planned for a follow-on release. - **GET/LIST/WATCH tier RBAC not enforced** — Admission webhooks cannot intercept read operations, so tier-based RBAC for GET, LIST, and WATCH is not enforced when using native `projectcalico.org/v3` CRDs.