#Configure Egress Gateway

#About Egress Gateway

Egress Gateway routes external network access for Pods through a group of static egress addresses and provides the following features:

  • Achieves Active-Active high availability through ECMP, enabling horizontal throughput scaling
  • Implements fast failover (<1s) via BFD
  • Supports IPv6 and dual-stack
  • Enables granular routing control through NamespaceSelector and PodSelector
  • Allows flexible scheduling of Egress Gateway through NodeSelector

At the same time, Egress Gateway has the following limitations:

  • Uses macvlan for underlying network connectivity, requiring Underlay support from the underlying network
  • In multi-instance Gateway mode, multiple Egress IPs are required
  • Currently, only SNAT is supported; EIP and DNAT are not supported
  • Recording source address translation relationships is not currently supported

#Implementation Details

Each Egress Gateway consists of one or more Pods, each with two network interfaces: one joins the virtual network for communication within the VPC, and the other connects to the underlying physical network via Macvlan for external network communication. Virtual network traffic ultimately reaches the external network through NAT performed inside the Egress Gateway instances.

Each Egress Gateway instance registers its address in the OVN routing table. When a Pod within the VPC needs to access the external network, OVN uses source address hashing to forward traffic to multiple Egress Gateway instance addresses, achieving load balancing. As the number of Egress Gateway instances increases, throughput can also scale horizontally.

OVN uses the BFD protocol to probe multiple Egress Gateway instances. When an Egress Gateway instance fails, OVN marks the corresponding route as unavailable, enabling rapid failure detection and recovery.
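
The effect of this design can be inspected directly in the OVN northbound database. A quick look, using the kubectl ko plugin shipped with Kube-OVN (the same commands are used in the walkthrough below):

$ kubectl ko nbctl lr-policy-list ovn-cluster   # ECMP reroute policies installed for the gateway instances
$ kubectl ko nbctl list bfd                     # BFD sessions used for fast failure detection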

#Notes

  • Only Kube-OVN CNI supports Egress Gateway.
  • Egress Gateway requires Multus-CNI.
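
Before creating an Egress Gateway, you can verify these prerequisites. A minimal check, assuming Multus-CNI and Kube-OVN are deployed with their default labels in the kube-system namespace:

$ kubectl get crd network-attachment-definitions.k8s.cni.cncf.io   # CRD installed by Multus-CNI
$ kubectl -n kube-system get pods -l app=kube-ovn-cni              # Kube-OVN CNI agents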

#Usage

#Creating a Network Attachment Definition

Egress Gateway uses multiple NICs to access both the internal network and the external network, so you need to create a Network Attachment Definition to connect to the external network. An example of using the macvlan plugin with IPAM provided by Kube-OVN is shown below:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: eth1
  namespace: default
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "macvlan",
    "master": "eth1",
    "mode": "bridge",
    "ipam": {
      "type": "kube-ovn",
      "server_socket": "/run/openvswitch/kube-ovn-daemon.sock",
      "provider": "eth1.default"
    }
    }'
---
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: macvlan1
spec:
  protocol: IPv4
  provider: eth1.default
  cidrBlock: 172.17.0.0/16
  gateway: 172.17.0.1
  excludeIps:
    - 172.17.0.2..172.17.0.10
  1. master: the host interface that connects to the external network.
  2. provider (in the NetworkAttachmentDefinition IPAM config): provider name with a format of <network attachment definition name>.<namespace>.
  3. provider (in the Subnet): provider name used to identify the external network, which MUST be consistent with the one in the NetworkAttachmentDefinition.
TIP

You can create a Network Attachment Definition with any CNI plugin to access the corresponding network.
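
After applying the manifests above, you can confirm that both resources were created. A quick check using the names from this example:

$ kubectl get network-attachment-definitions.k8s.cni.cncf.io eth1 -n default
$ kubectl get subnet macvlan1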

#Creating a VPC Egress Gateway

Create a VPC Egress Gateway resource as shown in the example below:

apiVersion: kubeovn.io/v1
kind: VpcEgressGateway
metadata:
  name: gateway1
  namespace: default
spec:
  replicas: 1
  externalSubnet: macvlan1
  nodeSelector:
    - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
            - kube-ovn-worker
            - kube-ovn-worker2
  selectors:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: default
  policies:
    - snat: true
      subnets:
        - subnet1
    - snat: false
      ipBlocks:
        - 10.18.0.0/16
  1. metadata.namespace: namespace in which the VPC Egress Gateway instances are created.
  2. replicas: number of VPC Egress Gateway instances.
  3. externalSubnet: external subnet that connects to the external network.
  4. nodeSelector: node selectors that determine which nodes the VPC Egress Gateway instances run on.
  5. selectors: namespace and Pod selectors to which the VPC Egress Gateway applies.
  6. policies: policies for the VPC Egress Gateway, each combining an SNAT switch with the subnets/ipBlocks it applies to.
  7. snat: whether to enable SNAT for the policy.
  8. subnets: subnets to which the policy applies.
  9. ipBlocks: IP blocks to which the policy applies.

The above resource creates a VPC Egress Gateway named gateway1 under the default namespace, and the following Pods will access the external network via the macvlan1 subnet:

  • Pods in the default namespace
  • Pods under the subnet1 subnet
  • Pods with IPs in the CIDR 10.18.0.0/16
NOTICE

Pods matching .spec.selectors will access the external network with SNAT enabled.
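
Selectors can also match Pods directly. For example, to limit the gateway to Pods carrying a specific label in the default namespace, a selector like the following could be used (a sketch; the app: web label is only an illustration):

selectors:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: default
    podSelector:
      matchLabels:
        app: web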

After the creation is complete, check out the VPC Egress Gateway resource:

$ kubectl get veg gateway1
NAME       VPC           REPLICAS   BFD ENABLED   EXTERNAL SUBNET   PHASE       READY   AGE
gateway1   ovn-cluster   1          false         macvlan1          Completed   true    13s

To view more information:

$ kubectl get veg gateway1 -o wide
NAME       VPC           REPLICAS   BFD ENABLED   EXTERNAL SUBNET   PHASE       READY   INTERNAL IPS     EXTERNAL IPS      WORKING NODES         AGE
gateway1   ovn-cluster   1          false         macvlan1          Completed   true    ["10.16.0.12"]   ["172.17.0.11"]   ["kube-ovn-worker"]   82s

To view the workload:

$ kubectl get deployment -l ovn.kubernetes.io/vpc-egress-gateway=gateway1
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
gateway1   1/1     1            1           4m40s

$ kubectl get pod -l ovn.kubernetes.io/vpc-egress-gateway=gateway1 -o wide
NAME                       READY   STATUS    RESTARTS   AGE     IP           NODE              NOMINATED NODE   READINESS GATES
gateway1-b9f8b4448-76lhm   1/1     Running   0          4m48s   10.16.0.12   kube-ovn-worker   <none>           <none>

To view IP addresses, routes, and iptables rules in the Pod:

$ kubectl exec gateway1-b9f8b4448-76lhm -c gateway -- ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: net1@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 62:d8:71:90:7b:86 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.11/16 brd 172.17.255.255 scope global net1
       valid_lft forever preferred_lft forever
    inet6 fe80::60d8:71ff:fe90:7b86/64 scope link
       valid_lft forever preferred_lft forever
17: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default
    link/ether 36:7c:6b:c7:82:6b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.16.0.12/16 brd 10.16.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::347c:6bff:fec7:826b/64 scope link
       valid_lft forever preferred_lft forever

$ kubectl exec gateway1-b9f8b4448-76lhm -c gateway -- ip rule show
0:      from all lookup local
1001:   from all iif eth0 lookup default
1002:   from all iif net1 lookup 1000
1003:   from 10.16.0.12 iif lo lookup 1000
1004:   from 172.17.0.11 iif lo lookup default
32766:  from all lookup main
32767:  from all lookup default

$ kubectl exec gateway1-b9f8b4448-76lhm -c gateway -- ip route show
default via 172.17.0.1 dev net1
10.16.0.0/16 dev eth0 proto kernel scope link src 10.16.0.12
10.17.0.0/16 via 10.16.0.1 dev eth0
10.18.0.0/16 via 10.16.0.1 dev eth0
172.17.0.0/16 dev net1 proto kernel scope link src 172.17.0.11

$ kubectl exec gateway1-b9f8b4448-76lhm -c gateway -- ip route show table 1000
default via 10.16.0.1 dev eth0

$ kubectl exec gateway1-b9f8b4448-76lhm -c gateway -- iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N VEG-MASQUERADE
-A PREROUTING -i eth0 -j MARK --set-xmark 0x4000/0x4000
-A POSTROUTING -d 10.18.0.0/16 -j RETURN
-A POSTROUTING -s 10.18.0.0/16 -j RETURN
-A POSTROUTING -j VEG-MASQUERADE
-A VEG-MASQUERADE -j MARK --set-xmark 0x0/0xffffffff
-A VEG-MASQUERADE -j MASQUERADE --random-fully
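
The ICMP traffic shown in the capture below can be generated from any Pod matched by the gateway, for example by pinging the external gateway address from a test Pod (a hypothetical Pod in the default namespace, which gateway1 selects):

$ kubectl run test-egress --image=busybox:1.36 --restart=Never -- sleep 3600
$ kubectl exec test-egress -- ping -c 1 172.17.0.1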

Capture packets in the Gateway Pod to verify network traffic:

$ kubectl exec -ti gateway1-b9f8b4448-76lhm -c gateway -- bash
nobody@gateway1-b9f8b4448-76lhm:/kube-ovn$ tcpdump -i any -nnve icmp and host 172.17.0.1
tcpdump: data link type LINUX_SLL2
tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
06:50:58.936528 eth0  In  ifindex 17 92:26:b8:9e:f2:1c ethertype IPv4 (0x0800), length 104: (tos 0x0, ttl 63, id 30481, offset 0, flags [DF], proto ICMP (1), length 84)
    10.17.0.9 > 172.17.0.1: ICMP echo request, id 37989, seq 0, length 64
06:50:58.936574 net1  Out ifindex 2 62:d8:71:90:7b:86 ethertype IPv4 (0x0800), length 104: (tos 0x0, ttl 62, id 30481, offset 0, flags [DF], proto ICMP (1), length 84)
    172.17.0.11 > 172.17.0.1: ICMP echo request, id 39449, seq 0, length 64
06:50:58.936613 net1  In  ifindex 2 02:42:39:79:7f:08 ethertype IPv4 (0x0800), length 104: (tos 0x0, ttl 64, id 26701, offset 0, flags [none], proto ICMP (1), length 84)
    172.17.0.1 > 172.17.0.11: ICMP echo reply, id 39449, seq 0, length 64
06:50:58.936621 eth0  Out ifindex 17 36:7c:6b:c7:82:6b ethertype IPv4 (0x0800), length 104: (tos 0x0, ttl 63, id 26701, offset 0, flags [none], proto ICMP (1), length 84)
    172.17.0.1 > 10.17.0.9: ICMP echo reply, id 37989, seq 0, length 64

Routing policies are automatically created on the OVN Logical Router:

$ kubectl ko nbctl lr-policy-list ovn-cluster
Routing Policies
     31000                            ip4.dst == 10.16.0.0/16   allow
     31000                            ip4.dst == 10.17.0.0/16   allow
     31000                           ip4.dst == 100.64.0.0/16   allow
     30000                              ip4.dst == 172.18.0.2  reroute  100.64.0.4
     30000                              ip4.dst == 172.18.0.3  reroute  100.64.0.3
     30000                              ip4.dst == 172.18.0.4  reroute  100.64.0.2
     29100                  ip4.src == $VEG.8ca38ae7da18.ipv4  reroute  10.16.0.12
     29100                   ip4.src == $VEG.8ca38ae7da18_ip4  reroute  10.16.0.12
     29000 ip4.src == $ovn.default.kube.ovn.control.plane_ip4  reroute  100.64.0.3
     29000       ip4.src == $ovn.default.kube.ovn.worker2_ip4  reroute  100.64.0.2
     29000        ip4.src == $ovn.default.kube.ovn.worker_ip4  reroute  100.64.0.4
     29000     ip4.src == $subnet1.kube.ovn.control.plane_ip4  reroute  100.64.0.3
     29000           ip4.src == $subnet1.kube.ovn.worker2_ip4  reroute  100.64.0.2
     29000            ip4.src == $subnet1.kube.ovn.worker_ip4  reroute  100.64.0.4
  1. The priority-29100 policy matching $VEG.8ca38ae7da18.ipv4 reroutes traffic from the addresses specified by .spec.policies to the gateway instance.
  2. The priority-29100 policy matching $VEG.8ca38ae7da18_ip4 reroutes traffic from the Pods matched by .spec.selectors to the gateway instance.

To enable load balancing, increase .spec.replicas as shown in the following example:

$ kubectl scale veg gateway1 --replicas=2
vpcegressgateway.kubeovn.io/gateway1 scaled

$ kubectl get veg gateway1
NAME       VPC           REPLICAS   BFD ENABLED   EXTERNAL SUBNET   PHASE       READY   AGE
gateway1   ovn-cluster   2          false         macvlan           Completed   true    39m

$ kubectl get pod -l ovn.kubernetes.io/vpc-egress-gateway=gateway1 -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE               NOMINATED NODE   READINESS GATES
gateway1-b9f8b4448-76lhm   1/1     Running   0          40m   10.16.0.12   kube-ovn-worker    <none>           <none>
gateway1-b9f8b4448-zd4dl   1/1     Running   0          64s   10.16.0.13   kube-ovn-worker2   <none>           <none>

$ kubectl ko nbctl lr-policy-list ovn-cluster
Routing Policies
     31000                            ip4.dst == 10.16.0.0/16    allow
     31000                            ip4.dst == 10.17.0.0/16    allow
     31000                           ip4.dst == 100.64.0.0/16    allow
     30000                              ip4.dst == 172.18.0.2  reroute  100.64.0.4
     30000                              ip4.dst == 172.18.0.3  reroute  100.64.0.3
     30000                              ip4.dst == 172.18.0.4  reroute  100.64.0.2
     29100                  ip4.src == $VEG.8ca38ae7da18.ipv4  reroute  10.16.0.12, 10.16.0.13
     29100                   ip4.src == $VEG.8ca38ae7da18_ip4  reroute  10.16.0.12, 10.16.0.13
     29000 ip4.src == $ovn.default.kube.ovn.control.plane_ip4  reroute  100.64.0.3
     29000       ip4.src == $ovn.default.kube.ovn.worker2_ip4  reroute  100.64.0.2
     29000        ip4.src == $ovn.default.kube.ovn.worker_ip4  reroute  100.64.0.4
     29000     ip4.src == $subnet1.kube.ovn.control.plane_ip4  reroute  100.64.0.3
     29000           ip4.src == $subnet1.kube.ovn.worker2_ip4  reroute  100.64.0.2
     29000            ip4.src == $subnet1.kube.ovn.worker_ip4  reroute  100.64.0.4

#Enabling BFD-based High Availability

BFD-based high availability relies on the VPC's BFD LRP (Logical Router Port), so you need to modify the VPC resource to enable the BFD Port. Here is an example that enables the BFD Port for the default VPC:

apiVersion: kubeovn.io/v1
kind: Vpc
metadata:
  name: ovn-cluster
spec:
  bfdPort:
    enabled: true
    ip: 10.255.255.255
    nodeSelector:
      matchLabels:
        kubernetes.io/os: linux
  1. bfdPort.enabled: whether to enable the BFD Port.
  2. bfdPort.ip: IP address of the BFD Port, which MUST be a valid IP address that does not conflict with ANY other IPs/Subnets.
  3. bfdPort.nodeSelector: selects the nodes on which the BFD Port runs in Active-Backup mode.
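
Because the ovn-cluster VPC already exists, the same change can also be applied with a merge patch instead of editing the full manifest (a sketch using the fields shown above):

$ kubectl patch vpc ovn-cluster --type=merge \
    -p '{"spec":{"bfdPort":{"enabled":true,"ip":"10.255.255.255","nodeSelector":{"matchLabels":{"kubernetes.io/os":"linux"}}}}}'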

After the BFD Port is enabled, an LRP dedicated to BFD is automatically created on the corresponding OVN Logical Router:

$ kubectl ko nbctl show ovn-cluster
router 0c1d1e8f-4c86-4d96-88b2-c4171c7ff824 (ovn-cluster)
    port bfd@ovn-cluster
        mac: "8e:51:4b:16:3c:90"
        networks: ["10.255.255.255"]
    port ovn-cluster-join
        mac: "d2:21:17:71:77:70"
        networks: ["100.64.0.1/16"]
    port ovn-cluster-ovn-default
        mac: "d6:a3:f5:31:cd:89"
        networks: ["10.16.0.1/16"]
    port ovn-cluster-subnet1
        mac: "4a:09:aa:96:bb:f5"
        networks: ["10.17.0.1/16"]
  1. The port bfd@ovn-cluster is the BFD Port created on the OVN Logical Router.

After that, set .spec.bfd.enabled to true in VPC Egress Gateway. An example is shown below:

apiVersion: kubeovn.io/v1
kind: VpcEgressGateway
metadata:
  name: gateway2
  namespace: default
spec:
  vpc: ovn-cluster
  replicas: 2
  internalSubnet: ovn-default
  externalSubnet: macvlan1
  bfd:
    enabled: true
    minRX: 100
    minTX: 100
    multiplier: 5
  policies:
    - snat: true
      ipBlocks:
        - 10.18.0.0/16
  1. vpc: VPC to which the Egress Gateway belongs.
  2. internalSubnet: internal subnet to which the Egress Gateway instances are connected.
  3. externalSubnet: external subnet to which the Egress Gateway instances are connected.
  4. bfd.enabled: whether to enable BFD for the Egress Gateway.
  5. bfd.minRX: minimum BFD receive interval, in milliseconds.
  6. bfd.minTX: minimum BFD transmit interval, in milliseconds.
  7. bfd.multiplier: BFD detection multiplier, i.e. the number of missed packets before a failure is declared.

To view VPC Egress Gateway information:

$ kubectl get veg gateway2 -o wide
NAME       VPC    REPLICAS   BFD ENABLED   EXTERNAL SUBNET   PHASE       READY   INTERNAL IPS                    EXTERNAL IPS                    WORKING NODES                            AGE
gateway2   vpc1   2          true          macvlan           Completed   true    ["10.16.0.102","10.16.0.103"]   ["172.17.0.13","172.17.0.14"]   ["kube-ovn-worker","kube-ovn-worker2"]   58s

$ kubectl get pod -l ovn.kubernetes.io/vpc-egress-gateway=gateway2 -o wide
NAME                       READY   STATUS    RESTARTS   AGE     IP            NODE               NOMINATED NODE   READINESS GATES
gateway2-fcc6b8b87-8lgvx   1/1     Running   0          2m18s   10.16.0.103   kube-ovn-worker2   <none>           <none>
gateway2-fcc6b8b87-wmww6   1/1     Running   0          2m18s   10.16.0.102   kube-ovn-worker    <none>           <none>

$ kubectl ko nbctl lr-policy-list ovn-cluster
Routing Policies
     31000                            ip4.dst == 10.16.0.0/16    allow
     31000                            ip4.dst == 10.17.0.0/16    allow
     31000                           ip4.dst == 100.64.0.0/16    allow
     30000                              ip4.dst == 172.18.0.2  reroute  100.64.0.4
     30000                              ip4.dst == 172.18.0.3  reroute  100.64.0.3
     30000                              ip4.dst == 172.18.0.4  reroute  100.64.0.2
     29100                  ip4.src == $VEG.8ca38ae7da18.ipv4  reroute  10.16.0.102, 10.16.0.103  bfd
     29100                   ip4.src == $VEG.8ca38ae7da18_ip4  reroute  10.16.0.102, 10.16.0.103  bfd
     29090                  ip4.src == $VEG.8ca38ae7da18.ipv4     drop
     29090                   ip4.src == $VEG.8ca38ae7da18_ip4     drop
     29000 ip4.src == $ovn.default.kube.ovn.control.plane_ip4  reroute  100.64.0.3
     29000       ip4.src == $ovn.default.kube.ovn.worker2_ip4  reroute  100.64.0.2
     29000        ip4.src == $ovn.default.kube.ovn.worker_ip4  reroute  100.64.0.4
     29000     ip4.src == $subnet1.kube.ovn.control.plane_ip4  reroute  100.64.0.3
     29000           ip4.src == $subnet1.kube.ovn.worker2_ip4  reroute  100.64.0.2
     29000            ip4.src == $subnet1.kube.ovn.worker_ip4  reroute  100.64.0.4

$ kubectl ko nbctl list bfd
_uuid               : 223ede10-9169-4c7d-9524-a546e24bfab5
detect_mult         : 5
dst_ip              : "10.16.0.102"
external_ids        : {af="4", vendor=kube-ovn, vpc-egress-gateway="default/gateway2"}
logical_port        : "bfd@ovn-cluster"
min_rx              : 100
min_tx              : 100
options             : {}
status              : up

_uuid               : b050c75e-2462-470b-b89c-7bd38889b758
detect_mult         : 5
dst_ip              : "10.16.0.103"
external_ids        : {af="4", vendor=kube-ovn, vpc-egress-gateway="default/gateway2"}
logical_port        : "bfd@ovn-cluster"
min_rx              : 100
min_tx              : 100
options             : {}
status              : up

To view BFD connections:

$ kubectl exec gateway2-fcc6b8b87-8lgvx -c bfdd -- bfdd-control status
There are 1 sessions:
Session 1
 id=1 local=10.16.0.103 (p) remote=10.255.255.255 state=Up

$ kubectl exec gateway2-fcc6b8b87-wmww6 -c bfdd -- bfdd-control status
There are 1 sessions:
Session 1
 id=1 local=10.16.0.102 (p) remote=10.255.255.255 state=Up
NOTICE

If all gateway instances are down, egress traffic from the Pods and subnets to which the VPC Egress Gateway applies will be dropped.

#Configuration Parameters

#VPC BFD Port

| Fields | Type | Optional | Default Value | Description | Examples |
| --- | --- | --- | --- | --- | --- |
| enabled | boolean | Yes | false | Whether to enable the BFD Port. | true |
| ip | string | No | - | The IP address used by the BFD Port. Must NOT conflict with other addresses. IPv4, IPv6, and dual-stack are supported. | 169.255.255.255 / fdff::1 / 169.255.255.255,fdff::1 |
| nodeSelector.matchLabels | object | Yes | - | Label selector used to select the nodes that carry the BFD Port. The BFD Port binds to an OVN HA Chassis Group built from the selected nodes and works in Active/Backup mode. If this field is not specified, Kube-OVN automatically selects up to three nodes. You can view all OVN HA Chassis Group resources by executing kubectl ko nbctl list ha_chassis_group. A map of {key,value} pairs. | - |
| nodeSelector.matchExpressions | object array | Yes | - | A list of label selector requirements. The requirements are ANDed. | - |

#VPC Egress Gateway

| Fields | Type | Optional | Default Value | Description | Examples |
| --- | --- | --- | --- | --- | --- |
| vpc | string | Yes | Name of the default VPC (ovn-cluster) | VPC name. | vpc1 |
| replicas | integer/int32 | Yes | 1 | Number of replicas. | 2 |
| prefix | string | Yes | - | Immutable prefix of the workload deployment name. | veg- |
| image | string | Yes | - | The image used by the workload deployment. | docker.io/kubeovn/kube-ovn:v1.14.0-debug |
| internalSubnet | string | Yes | Name of the default subnet within the VPC | Name of the subnet used to access the internal network. | subnet1 |
| externalSubnet | string | No | - | Name of the subnet used to access the external network. | ext1 |
| internalIPs | string array | Yes | - | IP addresses used for accessing the internal network. IPv4, IPv6, and dual-stack are supported. The number of IPs specified must NOT be less than replicas. It is recommended to set the number to <replicas> + 1 to avoid extreme cases where the Pod is not created properly. | 10.16.0.101 / fdff::1 / 169.255.255.255,fdff::1 |
| externalIPs | string array | Yes | - | IP addresses used for accessing the external network. Same constraints and formats as internalIPs. | - |
| bfd.enabled | boolean | Yes | false | Whether to enable BFD for the Egress Gateway. | - |
| bfd.minRX / bfd.minTX | integer/int32 | Yes | 1000 | BFD minRX/minTX, in milliseconds. | 500 |
| bfd.multiplier | integer/int32 | Yes | 3 | BFD multiplier. | 1 |
| policies[].snat | boolean | Yes | false | Whether to enable SNAT/MASQUERADE for the policy. | true |
| policies[].ipBlocks | string array | Yes | - | IP ranges to which the gateway is applied. Both IPv4 and IPv6 are supported. | 192.168.0.1 / 192.168.0.0/24 / fd00::1 / fd00::/120 |
| policies[].subnets | string array | Yes | - | VPC subnet names to which the gateway is applied. IPv4, IPv6, and dual-stack subnets are supported. | subnet1 |
| selectors[].namespaceSelector.matchLabels | object | Yes | - | Namespace selector: an empty label selector matches all namespaces. SNAT/MASQUERADE is applied to Pods matched by the namespace and Pod selectors. A map of {key,value} pairs. | - |
| selectors[].namespaceSelector.matchExpressions | object array | Yes | - | A list of label selector requirements. The requirements are ANDed. | - |
| selectors[].podSelector.matchLabels | object | Yes | - | Pod selector: an empty label selector matches all Pods. A map of {key,value} pairs. | - |
| selectors[].podSelector.matchExpressions | object array | Yes | - | A list of label selector requirements. The requirements are ANDed. | - |
| nodeSelector.matchLabels | object | Yes | - | Node selector used to select the nodes that carry the workload deployment; the workload (Deployment/Pod) runs on the selected nodes. A map of {key,value} pairs. | - |
| nodeSelector.matchExpressions | object array | Yes | - | A list of label selector requirements. The requirements are ANDed. | - |
| nodeSelector.matchFields | object array | Yes | - | A list of field selector requirements. The requirements are ANDed. | - |
| trafficPolicy | string | Yes | Cluster | Effective only when BFD is enabled. Available values: Cluster/Local. When set to Local, egress traffic is redirected to the VPC Egress Gateway instance running on the same node if available; if that instance is down, traffic is redirected to other instances. | Local |
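
Putting several of these fields together, a fuller manifest might look like the following (a sketch only; the subnet names, IP addresses, and node names are placeholders and must match your environment):

apiVersion: kubeovn.io/v1
kind: VpcEgressGateway
metadata:
  name: gateway3
  namespace: default
spec:
  vpc: ovn-cluster
  replicas: 2
  prefix: veg-
  internalSubnet: ovn-default
  externalSubnet: macvlan1
  internalIPs:          # replicas + 1 addresses, as recommended above
    - 10.16.0.120
    - 10.16.0.121
    - 10.16.0.122
  externalIPs:
    - 172.17.0.20
    - 172.17.0.21
    - 172.17.0.22
  bfd:
    enabled: true
    minRX: 100
    minTX: 100
    multiplier: 5
  trafficPolicy: Local  # effective only because BFD is enabled
  policies:
    - snat: true
      subnets:
        - subnet1
  selectors:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: default
  nodeSelector:
    - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
            - kube-ovn-worker
            - kube-ovn-worker2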

#Additional resources

  • Egress Gateway - Kube-OVN Documentation
  • RFC 5880 - Bidirectional Forwarding Detection (BFD)