Hi all, for a long time I could not find the time to write an article for swtestacademy because of my busy schedule at work. However, during the Eid holiday, I wanted to play with Kubernetes, scale Selenium Grid with it, and share my experience with you all. Previously, we have written articles on Docker with Selenium, Selenoid with Selenium, Selenium Grid, and Parallel Testing with TestNG. In this article, we will spin up and scale Selenium Grid with Kubernetes and Docker and run parallel tests with this solution. We will use Kubernetes for automating the deployment, scaling, and management of the containerized Selenium Grid, and Docker for operating-system-level virtualization, also known as containerization.
I will not go into too much theory in this post. However, if you want to learn more Kubernetes theory, I suggest my friend Karthik K.K.'s Kubernetes series. It will help you learn the Kubernetes fundamentals.
In this article, our aim is to create a Selenium Grid with a hub, three Chrome nodes, and three Firefox nodes. These grid modules will communicate with each other over a Kubernetes Service. This is the main architecture of our grid setup. Also, we will use the Kubernetes Rolling Deployment, Service, and Replication Controller concepts. Finally, we will point the URL in our Selenium TestNG parallel test execution project at our grid's URL and run our tests in parallel. First, we need to install Docker on our machine (PC or Mac). I am using a Mac, but the Docker installation is pretty straightforward on both operating systems. After this step, start Docker on your machine.
Then, we need to install Kubernetes and Minikube on our machine. For this, we need to open a terminal and run the below commands consecutively.
brew install minikube
brew install kubernetes-cli
brew upgrade minikube
brew upgrade kubernetes-cli
brew link --overwrite kubernetes-cli
minikube config set vm-driver hyperkit
minikube start
After these commands, we should see the below result, which shows that minikube has started successfully.
Once minikube has started, we can use the minikube and kubectl commands to create our Selenium Grid architecture with Kubernetes. We will communicate over the Kubernetes Service to reach the hub and the nodes. The Kubernetes Service provides bi-directional communication between the hub and the nodes.
Before starting to create our YAML files, I highly suggest using Microsoft VS Code and installing the YAML, Kubernetes, and Docker plugins. Let's start with the Selenium Hub by using the Kubernetes Rolling Deployment scheme.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-hub
spec:
  selector:
    matchLabels:
      app: selenium-hub
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: selenium-hub
    spec:
      containers:
        - name: selenium-hub
          image: selenium/hub:3.141.59-20200515
          resources:
            limits:
              memory: "1000Mi"
              cpu: "500m"
          ports:
            - containerPort: 4444
          livenessProbe:
            httpGet:
              path: /wd/hub/status
              port: 4444
            initialDelaySeconds: 30
            timeoutSeconds: 5
In the above YAML script, we used the Kubernetes Deployment concept to create a Selenium Grid Hub, with "selenium/hub:3.141.59-20200515" as the hub image. You can find the other versions here: https://hub.docker.com/r/selenium/hub/tags. You can also change the CPU and memory limits. We also monitor the hub's health via a liveness probe on the "/wd/hub/status" endpoint. By running the below command, we create the deployment:
kubectl create -f deploy.yml
After creation, you can see the details with kubectl’s describe command.
kubectl describe deploy
>> Extra Information Part Started
By setting the strategy to RollingUpdate, we can change the hub's version and apply the change without re-creating the deployment.
After running the below command, new changes will be applied.
kubectl apply -f deploy.yml --record
Then, you will see the below result.
In this way, you can easily update your Selenium Grid version without hassle. You can also check the rollout history via the below command:
kubectl rollout history deployment selenium-hub
You can also easily revert to any revision via the below command:
kubectl rollout undo deployment selenium-hub --to-revision=2
This makes it really easy for us to change Selenium Grid versions. Also, if required, you can overwrite the default properties of the hub as shown below.
You can verify your values when you start your Selenium Grid, as shown below. I used the default values for this example.
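As an illustration, such overrides could look like the following snippet, placed under the hub container in deploy.yml. The env variable names follow the docker-selenium 3.x convention, and the values here are purely illustrative; check the documentation for your exact image tag before relying on them.

```yaml
# Hypothetical hub overrides for deploy.yml (docker-selenium 3.x env names).
env:
  - name: GRID_TIMEOUT
    value: "300"      # seconds a session may sit idle before cleanup
  - name: GRID_BROWSER_TIMEOUT
    value: "120"      # seconds a browser may hang before being killed
  - name: GRID_NEW_SESSION_WAIT_TIMEOUT
    value: "-1"       # -1 = wait indefinitely for a free node
```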
>> Extra Information Part Ended
Now, we can continue with Kubernetes Service creation. The service’s YAML file is as follows:
apiVersion: v1
kind: Service
metadata:
  name: selenium-srv
spec:
  selector:
    app: selenium-hub
  ports:
    - port: 4444
      nodePort: 30001
  type: NodePort
  sessionAffinity: None
We named the service "selenium-srv" and connected it to our hub, and our nodePort is 30001, which means we can reach the hub on this port. The sessionAffinity field controls client IP-based session affinity; it must be "ClientIP" or "None" and defaults to None, so you could omit the last line. We need to create our service with the below command:
kubectl create -f service.yml
After creation, we can check the service with the below command:
kubectl describe service
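As a side note on the sessionAffinity field from the service above: with our single hub it makes no difference, but if you ever ran more than one hub replica behind the service, a hypothetical variant could pin each client to the same backing pod:

```yaml
# Hypothetical variant: pin each client IP to the same pod behind the service.
# Only meaningful with multiple hub replicas; this article's setup uses None.
spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # affinity window; 3 hours is the Kubernetes default
```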
Now, we can reach our hub via http://192.168.64.3:30001/
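To verify the hub is up from the terminal, a quick sketch like the one below can help. It assumes minikube is on the PATH (falling back to the IP from this article otherwise) and uses the NodePort 30001 we exposed in the service.

```shell
# Build the grid URL from minikube's IP and our NodePort, then query the hub.
HUB_HOST=$(minikube ip 2>/dev/null) || HUB_HOST="192.168.64.3"
GRID_URL="http://${HUB_HOST}:30001"
echo "Grid console: ${GRID_URL}/grid/console"
# "ready": true in the JSON response means the hub can accept new sessions.
curl -s --max-time 5 "${GRID_URL}/wd/hub/status" || echo "Hub not reachable yet."
```

The /grid/console page is also handy for confirming that all nodes registered.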
It is time to create our Selenium Grid nodes. For this, we will use the Replication Controller concept. You can also use the ReplicaSet concept. As you can see on the Docker Selenium page, we will use the below images:
The Node Chrome replication controller is as follows:
apiVersion: v1
kind: ReplicationController
metadata:
  name: selenium-node-chrome-rep
spec:
  replicas: 3
  selector:
    app: selenium-node-chrome
  template:
    metadata:
      name: selenium-node-chrome
      labels:
        app: selenium-node-chrome
    spec:
      containers:
        - name: selenium-node-chrome
          image: selenium/node-chrome
          ports:
            - containerPort: 5900
          env:
            - name: HUB_HOST
              value: "selenium-srv"
            - name: HUB_PORT
              value: "4444"
The node's container port is 5900, it connects to the hub over the Kubernetes Service using the settings specified under the env tag, and our node pod count is 3. Let's do the same for the Firefox node. This time, we will use port 5901, and it will also connect to our hub over the Kubernetes Service.
apiVersion: v1
kind: ReplicationController
metadata:
  name: selenium-node-firefox-rep
spec:
  replicas: 3
  selector:
    app: selenium-node-firefox
  template:
    metadata:
      name: selenium-node-firefox
      labels:
        app: selenium-node-firefox
    spec:
      containers:
        - name: selenium-node-firefox
          image: selenium/node-firefox
          ports:
            - containerPort: 5901
          env:
            - name: HUB_HOST
              value: "selenium-srv"
            - name: HUB_PORT
              value: "4444"
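If you prefer the ReplicaSet route mentioned earlier, the same nodes can be expressed as an apps/v1 Deployment, which manages a ReplicaSet for you. A sketch for the Chrome nodes is below (the main difference is that the selector moves under matchLabels); in this article we stick with Replication Controllers.

```yaml
# Sketch: the Chrome nodes as an apps/v1 Deployment instead of a
# ReplicationController. The name here is hypothetical; the rest maps 1:1.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-node-chrome-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: selenium-node-chrome
  template:
    metadata:
      labels:
        app: selenium-node-chrome
    spec:
      containers:
        - name: selenium-node-chrome
          image: selenium/node-chrome
          ports:
            - containerPort: 5900
          env:
            - name: HUB_HOST
              value: "selenium-srv"
            - name: HUB_PORT
              value: "4444"
```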
Let’s create the Replication Controllers with the below commands:
kubectl create -f repff.yml
kubectl create -f repchrome.yml
and our Kubernetes cluster setup is complete. Now, we can list all created pods via the below command:
kubectl get pods
and you can also see the Kubernetes Dashboard by running the below command:
minikube dashboard
Then, the Kubernetes Dashboard automatically opens.
If you want to delete, get, and describe Deployments, Pods, Replication Controllers, Services, etc., you can use the below commands:
#Delete all replication controllers
kubectl delete rc --all
#Delete chrome replication controller
kubectl delete rc selenium-node-chrome-rep
#Get Replication Controllers
kubectl get rc
#Get Pods
kubectl get pods
#Delete all pods
kubectl delete pods --all
#Delete a pod
kubectl delete pods/firstpod
#Describe a pod
kubectl describe pods/firstpod
#Get deployments
kubectl get deploy
#Describe Deployment
kubectl describe deploy selenium-hub
#Delete all deployments
kubectl delete deploy --all
#For services and replica sets, you can use the same commands with the "service" and "rs" keywords.
>> Extra Information Part Started
If you want, you can also add auto-scaling functionality to your pods based on CPU usage with the below commands. I tried, but I could not see the scaling happen with the Horizontal Pod Autoscaler (HPA); if it works for you, please let me know in the comments.
kubectl autoscale rc selenium-node-chrome-rep --min=3 --max=5 --cpu-percent=80
kubectl autoscale rc selenium-node-firefox-rep --min=3 --max=5 --cpu-percent=80
In the above commands, our minimum pod number is 3 and the maximum is 5; above 80% CPU usage, the auto-scaler kicks in and spins up a new pod. You can get the details of the horizontal pod autoscalers with the below command:
kubectl get hpa
You can get more information about horizontal auto scalability here.
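One likely reason the autoscaler stays inactive is that the HPA computes utilization against the pods' CPU requests, and the node Replication Controllers above do not declare any. A sketch of the missing piece is below (the values are illustrative); on minikube you would also need a metrics source running, e.g. via `minikube addons enable metrics-server`.

```yaml
# Add under each node container in the ReplicationController specs.
# Without a CPU request, the HPA has no baseline and reports usage as <unknown>.
resources:
  requests:
    cpu: "250m"   # illustrative; utilization is measured against this value
  limits:
    cpu: "500m"
```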
>> Extra Information Part Ended
Now, let's test this grid architecture by changing the remote WebDriver URL of the project. The only thing I changed in the project is the URL, which is now http://192.168.64.3:30001/wd/hub, as shown below.
Here is the project repo: https://github.com/swtestacademy/TestNGParallel. When I run the project, I see the below results. The tests run in parallel as described in TestNG.xml, and our Kubernetes Selenium Grid cluster works as expected.
This is my first attempt at creating a Kubernetes cluster for Selenium Grid, and it can be enhanced with smarter auto-scaling features, reverse-proxy settings, a multi-hub cluster setup, and ReplicaSets instead of Replication Controllers. If you need to use Docker and Selenium Grid for your application, you can try these settings, start using Kubernetes, and then enhance this solution based on your needs.
See you in another article!
Onur Baskirt is a senior IT professional with 15+ years of experience. Now, he is working as a Senior Technical Consultant at Emirates Airlines in Dubai.