In Kubernetes, a Service is a method for exposing a network application that is running as one or more Pods in your cluster.
Why a Service is Needed
Pods have their own IP addresses, but:
- Pods are ephemeral: when a Pod is restarted or rescheduled, it gets a new IP address.
- Clients would otherwise need to track an ever-changing set of Pod IPs to reach an application.
A Service solves this by providing:
- A stable virtual IP address and DNS name for a group of Pods.
- Load balancing of traffic across the Pods that match its selector.
Example ClusterIP type Service:
# simple-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
- The available type values and their behaviors are ClusterIP, NodePort, LoadBalancer, and ExternalName.
- The set of Pods targeted by a Service is usually determined by a selector that you define.
- port is the port the Service exposes within the cluster; targetPort binds the Service to the Pod's containerPort. Instead of a number, targetPort can also reference a port name (port.name) defined under the Pod's container.
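As an illustration, here is a minimal sketch of binding targetPort to a named container port; the Pod, its http-web port name, and the image are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app.kubernetes.io/name: MyApp
spec:
  containers:
    - name: app
      image: nginx:1.25
      ports:
        - name: http-web        # named container port
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: http-web      # reference the container port by name instead of number
```

Referencing the port by name lets you change the container port number later without updating the Service.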
Headless Services
Sometimes you don't need load balancing and a single Service IP. In this case, you can create what are termed headless Services by setting .spec.clusterIP to None.
Headless Services are useful when:
- You want to discover individual Pod IPs, not just a single Service IP.
- You need direct connections to each Pod (e.g., for databases like Cassandra or StatefulSets).
- You're using StatefulSets where each Pod must have a stable DNS name.
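A minimal sketch of a headless Service (the name and selector label are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service
spec:
  clusterIP: None     # makes the Service headless: no virtual IP is allocated
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
```

A DNS lookup of a headless Service returns the IPs of the individual Pods rather than a single cluster IP.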
Creating a service by using the web console
- Go to Container Platform.
- In the left navigation bar, click Network > Services.
- Click Create Service.
- Refer to the following instructions to configure the relevant parameters.
| Parameter | Description |
|---|---|
| Virtual IP Address | If enabled, a ClusterIP will be allocated for this Service, which can be used for service discovery within the cluster. If disabled, a headless Service will be created, which is usually used by a StatefulSet. |
| Type | - ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster.<br>- NodePort: Exposes the Service on each node's IP at a static port (the NodePort).<br>- ExternalName: Maps the Service to the contents of the externalName field (for example, to the hostname api.foo.bar.example).<br>- LoadBalancer: Exposes the Service externally using an external load balancer. Kubernetes does not directly offer a load balancing component; you must provide one, or you can integrate your Kubernetes cluster with a cloud provider. |
| Target Component | - Workload: The Service forwards requests to a specific workload, matched by labels such as project.cpaas.io/name: projectname and service.cpaas.io/name: deployment-name.<br>- Virtualization: The Service forwards requests to a specific virtual machine or virtual machine group.<br>- Label Selector: The Service forwards requests to workloads with the specified labels, for example, environment: release. |
| Port | Used to configure the port mapping for this Service. In the following example, other Pods within the cluster can call this Service via the virtual IP (if enabled) and TCP port 80; requests are forwarded to the target component's Pods on the externally exposed TCP port 6379 or the named port redis.<br>- Protocol: The protocol used by the Service; supported protocols include TCP, UDP, HTTP, HTTP2, HTTPS, gRPC.<br>- Service Port: The port number the Service exposes within the cluster, that is, port, e.g., 80.<br>- Container Port: The target port number (or name) that the service port maps to, that is, targetPort, e.g., 6379 or redis.<br>- Service Port Name: Generated automatically in the format `<protocol>-<service port>-<container port>`, for example tcp-80-6379 or tcp-80-redis. |
| Session Affinity | Session affinity based on the source client IP address (ClientIP). If enabled, all requests from the same client IP are forwarded to the same backend Pod during load balancing. |
- Click Create.
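For reference, the Session Affinity toggle corresponds to the sessionAffinity field in the Service spec. A minimal sketch (the names and label are hypothetical; the timeout shown is the Kubernetes default of 3 hours):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-sticky-service
spec:
  selector:
    app: my-app
  sessionAffinity: ClientIP        # pin each client IP to one backend Pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800        # sticky window; default is 10800 (3 hours)
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```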
Creating a service by using the CLI
Create the Service from a manifest:
kubectl apply -f simple-service.yaml
Create a Service based on an existing Deployment resource my-app:
kubectl expose deployment my-app \
--port=80 \
--target-port=8080 \
--name=test-service \
--type=NodePort \
-n p1-1
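The kubectl expose command above reuses the Deployment's selector, so the generated Service is roughly equivalent to the following manifest (a sketch: the app: my-app label is an assumption about the my-app Deployment's Pod template, and the actual nodePort is allocated by the cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: p1-1
spec:
  type: NodePort
  selector:
    app: my-app          # assumed label from the my-app Deployment's Pod template
  ports:
    - protocol: TCP
      port: 80           # Service port inside the cluster
      targetPort: 8080   # container port the traffic is forwarded to
```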
Example: Accessing an Application Within the Cluster
# access-internal-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
- Apply this YAML:
kubectl apply -f access-internal-demo.yaml
- Start another Pod:
kubectl run test-pod --rm -it --image=busybox -- /bin/sh
- Access the nginx-clusterip Service from within test-pod:
wget -qO- http://nginx-clusterip
# or use the DNS record created automatically by Kubernetes: <service-name>.<namespace>.svc.cluster.local
wget -qO- http://nginx-clusterip.default.svc.cluster.local
You should see an HTML response containing text like "Welcome to nginx!".
Example: Accessing an Application Outside the Cluster
# access-external-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
- Apply this YAML:
kubectl apply -f access-external-demo.yaml
- Check the Pods:
kubectl get pods -l app=nginx -o wide
- curl the Service via any node's IP and the nodePort:
curl http://{NodeIP}:{nodePort}
You should see an HTML response containing text like "Welcome to nginx!".
It is also possible to access the application from outside the cluster by creating a Service of type LoadBalancer.
Note: A load-balancer implementation (for example, a cloud provider integration) must be available in the cluster beforehand.
# access-external-demo-with-loadbalancer.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
- Apply this YAML:
kubectl apply -f access-external-demo-with-loadbalancer.yaml
- Get the external IP address:
kubectl get svc nginx-lb-service
NAME               TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE
nginx-lb-service   LoadBalancer   10.0.2.57    34.122.45.100   80:30005/TCP   30s
EXTERNAL-IP is the address you can access from your browser:
curl http://34.122.45.100
You should see an HTML response containing text like "Welcome to nginx!".
If EXTERNAL-IP remains in the pending state, no load-balancer implementation is currently deployed in the cluster.
Example: ExternalName type of Service
# external-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-external-service
  namespace: default
spec:
  type: ExternalName
  externalName: example.com
- Apply this YAML:
kubectl apply -f external-service.yaml
- Try to resolve the name inside a Pod in the cluster:
kubectl run test-pod --rm -it --image=busybox -- sh
then:
nslookup my-external-service.default.svc.cluster.local
You'll see that it resolves to example.com.
LoadBalancer Type Service Annotations
AWS EKS Cluster
For detailed explanations of the EKS LoadBalancer Service annotations, please refer to the Annotation Usage Documentation.
| Key | Value | Description |
|---|---|---|
| service.beta.kubernetes.io/aws-load-balancer-type | external: Use the official AWS Load Balancer Controller. | Specifies the controller for the LoadBalancer type.<br>Note: Please contact the platform administrator in advance to deploy the AWS Load Balancer Controller. |
| service.beta.kubernetes.io/aws-load-balancer-nlb-target-type | - instance: Traffic is sent to the Pods via NodePort.<br>- ip: Traffic routes directly to the Pods (the cluster must use the Amazon VPC CNI). | Specifies how traffic reaches the Pods. |
| service.beta.kubernetes.io/aws-load-balancer-scheme | - internal: Private network.<br>- internet-facing: Public network. | Specifies whether to use a private network or a public network. |
| service.beta.kubernetes.io/aws-load-balancer-ip-address-type | | Specifies the supported IP address stack. |
Huawei Cloud CCE Cluster
For detailed explanations of the CCE LoadBalancer Service annotations, please refer to the Annotation Usage Documentation.
| Key | Value | Description |
|---|---|---|
| kubernetes.io/elb.id | | The ID of the cloud load balancer; an existing cloud load balancer must be used. |
| kubernetes.io/elb.autocreate | Example: {"type":"public","bandwidth_name":"cce-bandwidth-1551163379627","bandwidth_chargemode":"bandwidth","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","available_zone":["cn-north-4b"],"l4_flavor_name":"L4_flavor.elb.s1.small"}<br>Note: Please read the Filling Instructions first and adjust the example parameters as needed. | A new cloud load balancer to be created. |
| kubernetes.io/elb.subnet-id | | The ID of the subnet where the cluster is located. For Kubernetes versions 1.11.7-r0 or lower, this field must be filled in when creating a new cloud load balancer. |
| kubernetes.io/elb.class | - union: Shared load balancing.<br>- performance: Exclusive load balancing, only supported in Kubernetes 1.17 and above. | Specifies the type of the new cloud load balancer to be created; please refer to Differences Between Exclusive and Shared Elastic Load Balancing. |
| kubernetes.io/elb.enterpriseID | | Specifies the enterprise project to which the newly created cloud load balancer belongs. |
Azure AKS Cluster
For detailed explanations of the AKS LoadBalancer Service annotations, please refer to the Annotation Usage Documentation.
| Key | Value | Description |
|---|---|---|
| service.beta.kubernetes.io/azure-load-balancer-internal | - true: Private network.<br>- false: Public network. | Specifies whether to use a private network or a public network. |
Google GKE Cluster
For detailed explanations of the GKE LoadBalancer Service annotations, please refer to the Annotation Usage Documentation.
| Key | Value | Description |
|---|---|---|
| networking.gke.io/load-balancer-type | Internal | Specifies the use of a private network. |
| cloud.google.com/l4-rbs | enabled | Defaults to public. If this parameter is configured, traffic routes directly to the Pods. |
Example: MetalLB BGP Mode with externalTrafficPolicy: Local
This example demonstrates how to configure a LoadBalancer Service using MetalLB BGP mode with externalTrafficPolicy: Local to achieve active-active load balancing without extra network hops.
Benefits
- Active-active load balancing: Traffic is distributed across multiple nodes simultaneously.
- No extra network hops: Traffic routes directly to Pods without intermediate node forwarding.
- Better performance: externalTrafficPolicy: Local preserves the source IP and reduces latency.
- High availability: BGP route announcements ensure traffic reaches healthy nodes.
Prerequisites
Before configuring the LoadBalancer Service, ensure you have:
- MetalLB deployed: See Creating External IP Address Pool for installation
- BGP Peer configured: See Creating BGP Peers for BGP setup
- External IP address pool: Configure an IPAddressPool with BGPAdvertisement
Steps
Deploy your application with a LoadBalancer Service using externalTrafficPolicy: Local:
# nginx-loadbalancer-local-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer-local
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
Key Configuration Points
externalTrafficPolicy: Local
The externalTrafficPolicy: Local setting provides several benefits:
- Source IP preservation: Client source IP is maintained, enabling proper logging and security policies
- Direct pod routing: Traffic goes directly to pods without node-level forwarding
LoadBalancer with BGP
When using MetalLB with BGP mode:
- Routes are advertised from nodes specified in the BGPAdvertisement nodeSelectors
- The BGP peer receives these announcements and can route traffic accordingly
- Node selector alignment between BGPPeer and BGPAdvertisement ensures consistent routing
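Under these assumptions, the address pool and its BGP advertisement might be declared as follows (a sketch: the pool name, address range, and node label are hypothetical):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool            # hypothetical pool name
  namespace: metallb-system
spec:
  addresses:
    - 4.4.4.0/24                # hypothetical external address range
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: example-bgp-adv         # hypothetical advertisement name
  namespace: metallb-system
spec:
  ipAddressPools:
    - example-pool
  nodeSelectors:                # only matching nodes announce routes for this pool
    - matchLabels:
        kubernetes.io/hostname: node-1
```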
Deployment Steps
- Deploy the application:
kubectl apply -f nginx-loadbalancer-local-demo.yaml
- Verify the LoadBalancer Service:
kubectl get svc nginx-loadbalancer-local
Expected output:
NAME                       TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
nginx-loadbalancer-local   LoadBalancer   10.0.2.57    4.4.4.3       80:30005/TCP   30s
- Test the service:
curl http://{EXTERNAL-IP}
Verification
- Monitor service endpoints:
kubectl get endpoints nginx-loadbalancer-local
- Check service status:
kubectl describe svc nginx-loadbalancer-local