
> * Always use your own runners, on-premise if possible

Why? I understand it in cases where security is critical or intellectual property is at stake. Are you talking about "snowflake runners" or just dumb executors of container images?



Caching is nicer on your own runners. No need to re-download 10+ GB of "development container images" just to build your 10 lines of changed code.

With self-hosted GitLab runners it was almost as fast as doing incremental builds locally. When your build process can take 15-20 minutes (medium-sized C++ code base), this brought the total time down to 30 seconds or so.


This. Your own runners can cache everything (Docker layers, apt caches, ccache output...) and can also share the compilation load (icecc for C++). All of that gives a 5x-10x speed boost.
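To make the compiler-cache part concrete, here is a toy Python sketch of the idea behind ccache (not its actual implementation, and the cache path is made up): key on the compiler, the flags, and the preprocessed source, and if an object file for that key already sits on the runner's disk from an earlier job, copy it out instead of invoking the compiler.

    #!/usr/bin/env python3
    # Toy compiler-cache wrapper -- illustrative only, NOT how ccache really works.
    # Usage: toy_ccache.py gcc -O2 -c foo.c -o foo.o
    import hashlib
    import shutil
    import subprocess
    import sys
    from pathlib import Path

    CACHE_DIR = Path.home() / ".toy-ccache"   # persists between jobs on a self-hosted runner

    def cache_key(argv):
        """Key = compiler + flags + preprocessed source, so header changes also miss."""
        compiler, *args = argv
        source = next(a for a in args if a.endswith((".c", ".cc", ".cpp")))
        preprocessed = subprocess.run(
            [compiler, "-E", source] + [a for a in args if a.startswith(("-D", "-I"))],
            capture_output=True, check=True).stdout
        digest = hashlib.sha256()
        digest.update(" ".join(argv).encode())
        digest.update(preprocessed)
        return digest.hexdigest()

    def main():
        argv = sys.argv[1:]                   # the real compiler invocation
        out = Path(argv[argv.index("-o") + 1])
        cached = CACHE_DIR / cache_key(argv)
        if cached.exists():                   # hit: reuse the object file from a previous run
            shutil.copy(cached, out)
            return 0
        rc = subprocess.call(argv)            # miss: compile for real, then remember the result
        if rc == 0:
            CACHE_DIR.mkdir(exist_ok=True)
            shutil.copy(out, cached)
        return rc

    if __name__ == "__main__":
        sys.exit(main())

icecc is the complementary piece: instead of reusing old results, it farms the cache misses out to other machines on the network.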


This is true because your CI steps will be running on a smaller number of physical machines, ensuring more cache hits?


Kind of - you can also pin runners ("this workflow always runs on this runner"). And caching just means not deleting the previous runs' artifacts from the file system.

Imagine building Android - even "cloning the sources" is 200 GB of data transfer, and build times run into hours. Not having to delete the previous sources, and doing an incremental build instead, saves a lot of everything.
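As a rough illustration of "not deleting the previous sources" on a pinned runner (plain git stands in for Android's repo tool here; the paths, URL, and branch are made up): clone only the first time the job lands on that runner, otherwise just fetch what changed and build incrementally on top of the old artifacts.

    # Sketch of a pinned-runner job that reuses its workspace between runs.
    # Plain git as a stand-in for Android's `repo` tool; paths/URL/branch are illustrative.
    import subprocess
    from pathlib import Path

    WORKSPACE = Path("/srv/ci/workspaces/myproject")   # survives across jobs on this runner
    REPO_URL = "https://example.com/myproject.git"     # placeholder

    def run(*cmd, cwd=WORKSPACE):
        subprocess.run(cmd, cwd=cwd, check=True)

    if not WORKSPACE.exists():
        # First job on this runner: pay the full clone cost exactly once.
        subprocess.run(["git", "clone", REPO_URL, str(WORKSPACE)], check=True)
    else:
        # Every later job: only transfer what changed since last time.
        run("git", "fetch", "origin")
        run("git", "checkout", "--force", "origin/main")

    # Object files etc. from the previous run are still on disk, so this build is incremental.
    run("make", "-j8")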


GitLab also has some tips on using shared caches here: https://docs.gitlab.com/ci/caching/ - these can help in some scenarios, especially with runners in Kubernetes that are ephemeral, i.e. created just before a job starts and destroyed afterward.

tldr; "A cache is one or more files a job downloads and saves. Subsequent jobs that use the same cache don’t have to download the files again, so they execute more quickly."

It will probably still be slower than a dedicated runner, but possibly require less maintenance ("pet" runner vs "cattle" runner).
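The restore/save cycle those docs describe boils down to roughly the sketch below (a minimal illustration, not GitLab's implementation; the shared path, the cached directory, and the manifest name are all assumptions): derive a key from a dependency manifest, pull a tarball from shared storage before the job if one exists, and push the updated cache back afterwards.

    # Minimal sketch of a restore/save cache cycle for ephemeral CI runners.
    # Not GitLab's implementation -- just the general pattern. Paths are illustrative;
    # a real setup would typically use object storage instead of a shared mount.
    import hashlib
    import subprocess
    import tarfile
    from pathlib import Path

    SHARED_STORE = Path("/mnt/ci-cache")      # storage reachable from every runner (assumed)
    CACHE_PATHS = [".pip-cache"]              # directories the job wants cached
    MANIFEST = Path("requirements.txt")       # cache key input (assumed project layout)

    def cache_key():
        digest = hashlib.sha256(MANIFEST.read_bytes()).hexdigest()[:16]
        return SHARED_STORE / f"cache-{digest}.tar.gz"

    def restore(archive):
        if archive.exists():                  # a job with the same key already uploaded this
            with tarfile.open(archive) as tar:
                tar.extractall(".")

    def save(archive):
        if archive.exists():                  # nothing to do, the key is already stored
            return
        SHARED_STORE.mkdir(parents=True, exist_ok=True)
        with tarfile.open(archive, "w:gz") as tar:
            for path in CACHE_PATHS:
                if Path(path).exists():
                    tar.add(path)

    if __name__ == "__main__":
        key = cache_key()
        restore(key)
        subprocess.run(                       # the actual job step, reusing cached wheels
            ["pip", "install", "--cache-dir", ".pip-cache", "-r", "requirements.txt"],
            check=True)
        save(key)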


It obviously depends on your load. Fast pipelines matter, so don't run them on some weak cloud runner with the speed of a C64. Fast cloud runners are expensive. Just invest some money, buy or at least rent some beefy servers with lots of cores, RAM, and storage, and never look back. Use caches for everything to speed things up.

Security is another thing where this can come in handy, but properly firewalling CI runners and having mirrors of all your dependencies is a lot of work and might very well be overkill for most people.


"Fast cloud runners are expensive."

Buy a cheap Ryzen and put it on your desk; that's a cheap runner.


30 bucks a month on Hetzner gets you a dedicated machine with 12-16 cores, 64 GB of RAM, and unlimited 1 Gbps bandwidth.


Debugging and monitoring. When the runner is somewhere else and is shared, nobody is going to give you full access to the machine.

So many times I was biting my nails, unable to figure out what problems the GitHub runners were having with my actions, with no way to investigate.



