As of 14 June 2023, PROXY protocol is supported for Ingress Controllers in Red Hat OpenShift on IBM Cloud clusters hosted on VPC infrastructure.
Modern software architectures often include multiple layers of proxies and load balancers. Preserving the IP address of the original client through these layers is difficult, but may be required for your use cases. A possible solution to the problem is to use the PROXY protocol.
Starting with Red Hat OpenShift on IBM Cloud version 4.13, PROXY protocol is now supported for Ingress Controllers in clusters hosted on VPC infrastructure.
Setting up PROXY protocol for OpenShift Ingress Controllers
When using PROXY protocol for source address preservation, all proxies in the chain that terminate TCP connections must be configured to send and receive PROXY protocol headers after establishing L4 connections. In the case of Red Hat OpenShift on IBM Cloud clusters running on VPC infrastructure, we have two proxies: the VPC Application Load Balancer (ALB) and the Ingress Controller.
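For reference, with PROXY protocol version 1 the connecting proxy prepends a single human-readable line to the TCP stream before any application data. The line carries the transport, the original source and destination addresses, and the source and destination ports (the values below are illustrative):

```
PROXY TCP4 192.0.2.42 203.0.113.10 56324 443
```

The receiving proxy parses this line to learn the original client address, then treats the rest of the stream as the ordinary payload.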
On OpenShift clusters, the Ingress Operator is responsible for managing the Ingress Controller instances and the load balancers used to expose the Ingress Controllers. The operator watches IngressController resources on the cluster and makes changes to match the desired state.
Thanks to the Ingress Operator, we can enable PROXY protocol for both of our proxies at once. All we need to do is change the endpointPublishingStrategy configuration on our IngressController resource:
endpointPublishingStrategy:
  type: LoadBalancerService
  loadBalancer:
    scope: External
    providerParameters:
      type: IBM
      ibm:
        protocol: PROXY
When you apply the previous configuration, the operator switches the Ingress Controller into PROXY protocol mode and adds the service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: "proxy-protocol" annotation to the corresponding LoadBalancer typed Service resource, enabling PROXY protocol for the VPC ALB.
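The result is visible on the Service itself. An abbreviated sketch of what the annotated router Service might look like (field values other than the annotation are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: router-default
  namespace: openshift-ingress
  annotations:
    # Added by the Ingress Operator; tells the IBM Cloud provider
    # to enable PROXY protocol on the VPC ALB.
    service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: "proxy-protocol"
spec:
  type: LoadBalancer
```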
In this example, we deployed a test application in a single-zone Red Hat OpenShift on IBM Cloud 4.13 cluster that uses VPC generation 2 compute. The application accepts HTTP connections and returns information about the received requests, such as the client address. The application is exposed through the default router created by the OpenShift Ingress Operator.
Client information without using PROXY protocol
By default, the PROXY protocol isn't enabled. Let's test accessing the application:
$ curl https://echo.example.com
Hostname: test-application-cd7cd98f7-9xbvm

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=172.24.84.165
	method=GET
	real path=/
	query=
	request_version=1.1
	request_scheme=http
	request_uri=http://echo.example.com:8080/

Request Headers:
	accept=*/*
	forwarded=for=10.240.128.45;host=echo.example.com;proto=https
	host=echo.example.com
	user-agent=curl/7.87.0
	x-forwarded-for=10.240.128.45
	x-forwarded-host=echo.example.com
	x-forwarded-port=443
	x-forwarded-proto=https

Request Body:
	-no body in request-
As you can see, the address in the x-forwarded-for header (10.240.128.45) doesn't match your address. That is the address of the worker node that received the request from the VPC load balancer. This means we cannot recover the original address of the client:
$ kubectl get nodes
NAME            STATUS   ROLES           AGE     VERSION
10.240.128.45   Ready    master,worker   5h33m   v1.26.3+b404935
10.240.128.46   Ready    master,worker   5h32m   v1.26.3+b404935
Enabling PROXY protocol on the default Ingress Controller
First, edit the Ingress Controller resource:
oc -n openshift-ingress-operator edit ingresscontroller/default
In the Ingress Controller resource, find the spec.endpointPublishingStrategy.loadBalancer section and define the following:

endpointPublishingStrategy:
  loadBalancer:
    providerParameters:
      type: IBM
      ibm:
        protocol: PROXY
    scope: External
  type: LoadBalancerService
Then, save and apply the resource.
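If you prefer a non-interactive change (for example, in a script or automation pipeline), the same edit can be expressed as a merge patch file; this is a sketch of the equivalent configuration, with a hypothetical file name:

```yaml
# proxy-patch.yaml -- apply with:
#   oc -n openshift-ingress-operator patch ingresscontroller/default \
#     --type=merge --patch-file proxy-patch.yaml
spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External
      providerParameters:
        type: IBM
        ibm:
          protocol: PROXY
```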
Client information using PROXY protocol
Wait until the default router pods are recycled, then test access to the application again:
$ curl https://echo.example.com
Hostname: test-application-cd7cd98f7-9xbvm

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=172.24.84.184
	method=GET
	real path=/
	query=
	request_version=1.1
	request_scheme=http
	request_uri=http://echo.example.com:8080/

Request Headers:
	accept=*/*
	forwarded=for=192.0.2.42;host=echo.example.com;proto=https
	host=echo.example.com
	user-agent=curl/7.87.0
	x-forwarded-for=192.0.2.42
	x-forwarded-host=echo.example.com
	x-forwarded-port=443
	x-forwarded-proto=https

Request Body:
	-no body in request-
This time, you can find the actual client address (192.0.2.42) in the request headers: the real public IP address of the original client.
The PROXY protocol feature on Red Hat OpenShift on IBM Cloud is supported only for VPC generation 2 clusters that run OpenShift version 4.13 or later.