I just wanted to take a moment to thank you for your amazing tutorial on creating an actions-runner-controller. Your explanation was clear, concise, and incredibly helpful. The step-by-step instructions and the attention to detail made it easy to follow along and understand the entire process.
Your video has made a significant impact on my project, and I appreciate the effort you put into making such high-quality content. Please keep up the fantastic work; your expertise and teaching style are much appreciated!
Looking forward to more great tutorials from you.
Although the actions-runner and controller image version changed from 0.4.0 to 0.9.2, everything else was mostly the same and very well controlled by the values.yaml file. I also liked how you distinguished and demonstrated the two different container modes (Docker in Docker and Kubernetes) for the runner scale set.
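For readers following along, the two container modes mentioned above are selected in the scale set's values.yaml. A minimal sketch, assuming the gha-runner-scale-set chart layout (the org URL and secret name are illustrative):

```yaml
# gha-runner-scale-set values.yaml (sketch)
githubConfigUrl: "https://github.com/my-org"   # illustrative org/repo/enterprise URL
githubConfigSecret: pre-defined-secret         # K8s secret with GitHub App or PAT creds

# Pick ONE container mode:
containerMode:
  type: "dind"          # Docker-in-Docker: runner pod gets a privileged dind sidecar
# containerMode:
#   type: "kubernetes"  # Kubernetes mode: jobs run in separate pods via container hooks
#   kubernetesModeWorkVolumeClaim:
#     accessModes: ["ReadWriteOnce"]
#     storageClassName: "my-storage-class"     # illustrative storage class
#     resources:
#       requests:
#         storage: 1Gi
```

Kubernetes mode needs the work-volume claim because the runner and job pods share the workspace through a volume rather than a shared Docker daemon.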
Thanks! It was a really good in-depth explanation of the history and evolution of ARC, along with guidance on setting it up in-house.
Your detailed exploration of the actions-runner repositories was incredibly helpful. Thank you for providing such a valuable resource.
Thank you so much for creating this! Extremely didactic and rich in content! You've got a new follower here :)
Thanks for the in-depth explanation of what's new in ARC. The design seems better, though communication around the upgrade is made harder by deprecating labels.
Labels are a feature that should not have been a feature 😄 so we’re correcting it now
Gotta say, you explain things very clearly and have a vast knowledge of many topics. Thanks for this, watched from start to finish. Greetings from Israel.
Thanks a lot Bassem, I loved the details and the depth of the information presented.
I’m glad it was helpful 🙏
Extremely helpful video, thank you. One thing I've noticed is that when we have multiple jobs in a single workflow, ARC terminates and recreates the runner pod while moving to the next job. Is there a workaround to keep the pod intact so that we can preserve workflow-specific caches until the workflow completes fully?
Woah, I was planning to simply update our ARC, but it looks like I've got more work to do 🥳
Yeah it’s more of a migration than an upgrade
If you got the scaling right, then it'll be all worth it.
@glich.stream
My compliments to you and your team.
We did the migration and the scaling works perfectly 🥳
@danielgenis3253 🙌🙌🙌🙌
Any tips on migration? We have a similar situation.
Really really good and clear explanation. Content creators should learn from you how it's done
This means a lot to me, thank you 🙏
Could you please demonstrate a CD flow where you connect these ARC runners to a Managed Identity and use them against another AKS cluster to perform CD?
Thank you so much for the great content! The way you explain things and break them down is extremely helpful; you are a very good teacher! A small input, and I'm not sure if I am doing something wrong, but when creating the GitHub App Kubernetes secret we use the arc-runners namespace, which has not yet been created at that point. The "arc-runners" namespace is only created later, when creating the custom self-signed certs. I was able to create the secret after creating the namespace in the self-signed-certs walkthrough later in the lesson. Alternatively, I guess we could create the namespace when creating the secret rather than when creating the self-signed certs?
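One way to address the ordering issue described above is to create the namespace up front, then the secret inside it. A sketch, assuming the GitHub App secret keys used by the ARC docs (the App IDs, secret name, and key path are illustrative):

```shell
# Create the runner namespace first, then the GitHub App secret inside it
kubectl create namespace arc-runners

kubectl create secret generic pre-defined-secret \
  --namespace=arc-runners \
  --from-literal=github_app_id=123456 \
  --from-literal=github_app_installation_id=654321 \
  --from-literal=github_app_private_key="$(cat private-key.pem)"
```

`kubectl create namespace` is idempotent to re-run only if you ignore the AlreadyExists error, so running it early does not conflict with a later cert walkthrough that merely uses the same namespace.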
Hi Bassem, thanks for the video! One question: as a long-time user of the community solution, I wonder why you say "you cannot build docker images with dind", since we have been building Docker images just fine with DinD on ARC. Could you help me understand?
It’s a mistake on my end! Building Docker containers is definitely feasible with DinD; it’s just not possible off the shelf when running in Kubernetes mode. Thanks for spotting this and highlighting it!
Thanks for the video ❤. I am not seeing 2/2 containers ready after enabling DinD mode. How should I debug it? Any suggestions?
I appreciate the effort you've shared with us.
My pleasure 🙏
Which is more appropriate to deploy in ARC: a RunnerDeployment with a horizontal runner autoscaler, or a runner scale set?
How are you managing runner upgrades (since Helm will not auto-update CRDs)?
Why does actions-runner-controller delete runner pods instead of waiting for the cluster autoscaler? I did not find any values to manage the timing that would allow me to have a zero-scaled node pool for my GitHub runners, so we still use GitLab; it waits for the node to be ready by default.
Thank you for the explanation! Would it be possible to use GKE as the Kubernetes cluster?
You can use GKE
Great video, thank you! The documentation says to install cert-manager before installing ARC, but I do not remember seeing any instructions on it. Please advise: do we need cert-manager?
You don’t need cert-manager. The docs are here: gh.io/arc-docs
Following the document's suggestion, I deployed the runner scale set and the controller in different namespaces, but I am unable to make them talk to each other. Any idea why?
How do I pass the image name dynamically during helm install (listener scale set)? I don't want to hardcode the image details in my values.yaml file.
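One way to avoid hardcoding the image, assuming the scale set chart's `template.spec` passthrough: supply it with `--set` at install time (the release name, org URL, and `RUNNER_IMAGE` variable are illustrative):

```shell
# Sketch: override the runner container image on the command line
helm install arc-runner-set \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set \
  --namespace arc-runners \
  --set githubConfigUrl="https://github.com/my-org" \
  --set githubConfigSecret=pre-defined-secret \
  --set "template.spec.containers[0].name=runner" \
  --set "template.spec.containers[0].image=${RUNNER_IMAGE}"
```

Helm's `--set` accepts list indices like `containers[0]`, so this replaces the first container entry without touching values.yaml.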
Hi, I'm getting this error: "Container feature is not supported when runner is already running inside container." Any workarounds for this?
Hi there, I am able to get ARC deployed to EKS with a Fargate backend. However, I am unable to get Docker builds working on these runners.
Hey! Thanks for the clear explanations!
I got a question: do you know where I can find an image that uses Ubuntu 20.04 instead of Ubuntu 22.04? Or won't it be supported for runner scale sets?
What are the deployment strategies followed for GitHub self-hosted runners here? If we update and apply changes, will the existing runner be removed and a new runner created, or will the newer runner come first and the older runner be removed?
You need to uninstall everything and reinstall to upgrade.
This is an amazing working demo 🎉 Thank you ☺️ for motivating me to set up my own ARC. Quick question: where can I change the runner operating system? I want to set up RHEL runners.
Yeah, you should be able to use RHEL runners. You'll need to build your own runner image, of course.
Hi, does anyone know which of his videos covers GitHub Apps configuration and installation?
I am not able to comment on my old message, but what I was asking is: can we connect on a separate bridge so that I can explain the issues more fully? We have GitHub Enterprise support as well...
Create a support ticket; the support team can escalate to our team. Make sure to describe the issue in as much detail as possible. Not everything is supported: if the issue is with your setup, our teams cannot help.
Is there any tracing feature available in GitHub Actions currently?
Does it support AKS virtual nodes?
Great vid and solution thanks!
Are there any plans to add Windows images and document how to use them?
Insightful, thank you lots! I've got a question regarding Kubernetes mode: are we able to set the resource requests and limits for the child pods?
Yes
@glich.stream Just to be sure I got my question across correctly: I'm not referring to the runner itself, I'm referring to the child pod that is created after the runner receives a job. I've noticed it's not inheriting the resource requests and limits set on the runner pod. If it is possible to set resource requests and limits independently of the runner pod, that would be a perfect solution to my problem. Can you please point me to an example? I'm not sure where to configure the child pods.
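For anyone with the same question: in Kubernetes mode the job ("child") pods are created by the container hooks, which can be customized through a pod template referenced by the `ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE` environment variable on the runner container. A sketch, assuming that hook-template mechanism (the ConfigMap name and resource numbers are illustrative):

```yaml
# ConfigMap holding a container-hook pod template (sketch)
apiVersion: v1
kind: ConfigMap
metadata:
  name: hook-extension        # illustrative name
  namespace: arc-runners
data:
  content: |
    spec:
      containers:
        - name: "$job"        # special name: applies to the workflow job container
          resources:
            requests:
              cpu: "500m"
              memory: "512Mi"
            limits:
              cpu: "1"
              memory: "1Gi"
```

The ConfigMap is then mounted into the runner pod via the scale set's `template.spec`, and `ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE` is pointed at the mounted file, so the requests and limits apply to job pods independently of the runner pod.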
We are planning to set up ARC on an on-premises cluster that is not open to the public internet. Is there any documentation on how to set up networking for ARC on an on-premises cluster?
It doesn’t really require much. You can configure your Helm charts to pull the images from a private container registry.
Beyond that everything should run the same, assuming the cluster running ARC also has access to your GitHub.
Of course, without internet and on-prem, I’m assuming you’re using GHES, which means if you want to use public actions you have to sync them first, but that’s outside of the scope of ARC.
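For the air-gapped setup above, pointing the charts at an internal registry can look roughly like this (the internal registry host, tag, and pull-secret name are illustrative):

```yaml
# gha-runner-scale-set-controller values.yaml (sketch)
image:
  repository: "registry.internal.example/gha-runner-scale-set-controller"
  tag: "0.9.2"
imagePullSecrets:
  - name: internal-registry-creds

# In the scale set's values.yaml, point the runner container at a mirrored image:
# template:
#   spec:
#     containers:
#       - name: runner
#         image: registry.internal.example/actions/actions-runner:latest
#         command: ["/home/runner/run.sh"]
```

Mirror the controller, listener, and runner images into the internal registry first; beyond image pulls, the cluster only needs a route to your GitHub instance.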
We are on GitHub Enterprise Cloud. Our enterprise has a proxy server, and we require a certificate to facilitate traffic.
For implementing ARC, I attempted to create a ConfigMap with our proxy certificate and defined it in configMapKeyRef in githubServerTLS. However, when I installed the scale set Helm chart, it encountered a TLS handshake error.
I am now trying to customize the Docker images used by ARC and add the certificate directly to those images by rebuilding them.
When rebuilding, I have a question: does the controller Docker image communicate with GitHub or does communication only occur with the listener pod?
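For reference, the githubServerTLS stanza described above looks roughly like this (the ConfigMap name and key are illustrative); note the certificate generally needs to be the root CA in PEM format, and the ConfigMap must exist in the scale set's namespace:

```yaml
# scale set values.yaml (sketch)
githubServerTLS:
  certificateFrom:
    configMapKeyRef:
      name: proxy-ca-configmap   # ConfigMap containing the CA bundle (illustrative)
      key: ca.crt                # key holding the PEM-encoded root certificate
  runnerMountPath: /usr/local/share/ca-certificates/
```

A TLS handshake error at install time is often the chart validating connectivity to GitHub through the proxy, so checking that the bundle contains the full chain (not just the leaf certificate) is a good first step.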
This is a really good technical explanation, thank you!
I have a question about building Docker images in ARC runners; you mentioned that this is not supported. Do you have any workaround or other solution to recommend?
I would like to use ARC in my organization; however, the majority of our pipelines build and push Docker images.
It's a mistake in the video; building Docker images works fine if you're using Docker in Docker. It will not work with Kubernetes mode unless you use another build engine like Kaniko.
@glich.stream Great, thank you.
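As a sketch of the Kaniko route mentioned above for Kubernetes mode (the scale set name, registry, and branch are all illustrative; private repos would also need a Git credential passed to Kaniko):

```yaml
# .github/workflows/build.yml (sketch) — build an image without a Docker daemon
jobs:
  build:
    runs-on: arc-runner-set                        # illustrative scale set name
    container:
      image: gcr.io/kaniko-project/executor:debug  # debug tag includes a shell
    steps:
      - name: Build and push with Kaniko
        run: |
          /kaniko/executor \
            --context "git://github.com/${{ github.repository }}#refs/heads/main" \
            --dockerfile Dockerfile \
            --destination "registry.example.com/my-app:${{ github.sha }}"
```

Using Kaniko's `git://` context avoids needing a checkout step inside the minimal executor image, and nothing here requires privileged mode, which is what makes it work in Kubernetes mode.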
How do we configure a Windows image for the Actions runner? There are a lot of workflows in our repos that run on Windows-based runners.
Windows runners are not supported
Thank you, it was a great presentation! One query here: how can we monitor the actions-runner-controller? Can we get traces and service metrics for it?
Metrics will be released in the gha-runner-scale-set-0.5.0 release
Hi, I'm currently using Enterprise version 3.8.8 and having an issue with the listener pods. Which versions does ARC support? Could you please comment?
3.9+
Thanks for the video, Bassem. I was just wondering: we are using the old legacy mode for now; is there a way to have long-living containers that share warmed-up caches, so that during business hours we can run similar workflows faster? We achieved this in the old mode by removing the ephemeral flag and scaling down more slowly.
You can set a minimum number of runners to scale down to. So you’d configure a scale set with let’s say 5 runners with a container image having all your tools/configuration. There’s no way to have static non-ephemeral runners with the new mode
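The warm pool described above maps to the minRunners/maxRunners values in the scale set chart (the numbers are illustrative):

```yaml
# scale set values.yaml (sketch)
minRunners: 5    # keep 5 idle runners warm during quiet periods
maxRunners: 20   # upper bound the listener will scale to under load
```

The runners remain ephemeral (one job per pod), so per-workflow caches don't survive, but a pre-baked image plus a warm minimum removes most of the cold-start cost.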
Is there an actual working example of gha-runner-scale-set anywhere? I can get the most trivial echo action to work just fine, but anything with Docker or volumes fails with errors, or permission issues, or both.
Create your own runner image and install whatever you want on it. We won't provide an image with third-party tools on it.
Thanks for your very helpful video; it helped me understand things much better.
I was able to set up the self-hosted runner in Kubernetes mode.
Can you please give me an idea of how to implement Kaniko to build and push images on a self-hosted runner set up in Kubernetes mode? Is there documentation for this already? Thank you very much.
Start a discussion thread in the repo. I cannot provide support here
@glich.stream Alright, thanks. I have opened a discussion in the actions/runner repo.
@iposipos9342 On actions/actions-runner-controller please, not runner.
@glich.stream Okay, I've done that. Thanks.
I have implemented actions-runner-controller with DinD mode in our environment and rolled it out to developers. They are not satisfied with the performance we are getting; it is very slow compared to GitHub-hosted runners. Can you please suggest any optimizations that could improve job build speed?
Where’s the performance bottleneck at?
Thank you, great video!!! You saved me a lot of time.
I have a question regarding the Docker image I'm using for the runners. I'm using containerMode dind with the 2.311.0 image, and I noticed that it doesn't contain third-party tools like the AWS CLI, git, etc.
I couldn't find out how to use the ubuntu-22.04 image or something similar.
The runner image comes WITHOUT batteries. No third-party tools are provided by default. The recommendation is for you to use our runner image as a base image to build your own and include any tools you need.
@glich.stream
I created a new Dockerfile based on 2.311.0 as you suggested and pushed it to my ECR. When using my custom image the pod doesn't start; it fails, and after a few retries it gives up with no logs. Any suggestions?
Can I modify the dind template.spec context and add apt install to the command?
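For reference, a custom image along the lines suggested above can start from the official runner image. A sketch (the tag and tool choices are illustrative); note the base image runs as the runner user, so package installs need a switch to root and back:

```dockerfile
# Dockerfile (sketch) — extend the official runner image with extra tooling
FROM ghcr.io/actions/actions-runner:2.311.0

USER root
RUN apt-get update \
    && apt-get install -y --no-install-recommends git curl unzip \
    && rm -rf /var/lib/apt/lists/*

# Example third-party tool: AWS CLI v2 (installation method is illustrative)
RUN curl -fsSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip \
    && unzip -q awscliv2.zip && ./aws/install && rm -rf aws awscliv2.zip

USER runner
```

Forgetting to switch back to the runner user, or overriding the image's entrypoint, are common reasons a custom runner image crash-loops without useful logs.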
I am also using a new custom image for the runner pod but couldn't see 2/2 containers ready. What might be the issue?
Can we run jobs on VMs instead of pods using ARC? Do you support registering VMs spawned by KubeVirt?
The short answer is no. We only support vanilla Kubernetes. Even if it might technically work with KubeVirt, we do not support it.