Thanks for all the great replies!
I don’t know if this will help you, but I wrote a tutorial on how to set up a local container registry on a LAN using Fedora Server or a RHEL-compatible distribution: https://techne.hyperreal.coffee/tutorials/setup-a-lan-container-registry-with-podman-and-self-signed-certs/
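A minimal sketch of the client side, assuming the LAN registry runs at `registry.lan:5000` (a placeholder hostname): podman needs the registry listed in a drop-in under `/etc/containers/registries.conf.d/`, and for a self-signed cert the CA certificate goes under `/etc/containers/certs.d/`:

```shell
# Client-side config sketch (registry.lan:5000 is a placeholder host).
# This drop-in would live at /etc/containers/registries.conf.d/registry-lan.conf:
cat > registry-lan.conf <<'EOF'
[[registry]]
location = "registry.lan:5000"
EOF

# For a self-signed cert, the CA would additionally go at
# /etc/containers/certs.d/registry.lan:5000/ca.crt
cat registry-lan.conf
```

The tutorial linked above covers generating the self-signed certs themselves; this is just the shape of the client config.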
That said, it’s unlikely docker.io, quay.io, or ghcr.io will go completely offline. At worst they might suffer a DDoS, in which case I imagine their competent DevOps staff would have them back up within a matter of hours.
Very interesting, thanks!
The vast majority of self-hosters probably don’t, but if you want to, it’s called a private repository.
Just run a Sonatype Nexus 3 image and proxy Docker Hub, etc. Then you pull images through it.
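Roughly how that looks in practice (the hostname and port here are placeholders, and the Nexus proxy repo itself is set up through its UI): either pull with the proxy host prefixed, or register it as a mirror in Docker’s `daemon.json` so existing image references keep working:

```shell
# Sketch, assuming a Nexus 3 "docker (proxy)" repository of Docker Hub
# is exposed at nexus.lan:8082 (placeholder host).
#
# Option 1: pull through it explicitly:
#   docker pull nexus.lan:8082/library/nginx:latest
#
# Option 2: list it as a transparent mirror in /etc/docker/daemon.json
# so a plain `docker pull nginx` tries the proxy first:
cat > daemon.json <<'EOF'
{
  "registry-mirrors": ["https://nexus.lan:8082"]
}
EOF
cat daemon.json
```

One caveat: `registry-mirrors` only applies to Docker Hub pulls, so images from quay.io or ghcr.io still need the explicit proxy-host prefix.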
We run this at work to keep permanent copies of image tags and to reduce Docker Hub rate-limit issues. It works well even for a large dev team.
At my job, we run goharbor.io and use its Replications feature to do just that.
Sorry for the link dump - I just glanced over the content, and it seems like these might help you:
https://www.warpbuild.com/blog/docker-mirror-setup
https://medium.com/@shaikrish27/deploying-a-docker-registry-mirror-as-a-container-59565ff92c48
https://blog.alexellis.io/how-to-configure-multiple-docker-registry-mirrors/
For most of you suggesting hosting a repository: yes, but host Forgejo. Just host the git mirror. It comes with a package registry out of the box, so you get both the source code and the Docker images.
An alternative method is to run an Actions workflow that syncs directly from the upstream images, like what the Forgejo project itself does.
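A hedged sketch of what such a sync workflow might look like (the image names, hostnames, and schedule are all placeholders, not taken from Forgejo’s actual setup), using `skopeo copy` to mirror an upstream tag into the local registry:

```yaml
# Hypothetical Forgejo Actions workflow: nightly mirror of an upstream image.
on:
  schedule:
    - cron: "0 3 * * *"   # every night at 03:00
jobs:
  mirror:
    runs-on: docker
    steps:
      - name: Sync nginx from Docker Hub into the local registry
        run: |
          skopeo copy --all \
            docker://docker.io/library/nginx:stable \
            docker://forgejo.lan/mirrors/nginx:stable
```

The `--all` flag copies every architecture in a multi-arch manifest, so the mirrored tag stays usable from mixed amd64/arm64 hosts.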
oh freaking awesome, this looks amazing! Thank you so much for this!
Host forgejo.
Or Gitea, if you want to run the upstream project.
I mean, you already have the current image cached on the local server once you’ve pulled it.
Isn’t a Docker registry just HTTP? Would a caching proxy be too hard to use for this?
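Pretty much, yes - the Registry API is plain HTTP(S), and the reference `registry:2` image can itself run as a pull-through cache. A minimal sketch of the relevant config (registry-1.docker.io is Docker Hub’s real upstream endpoint; the run command and paths are illustrative):

```shell
# Sketch: the fragment of the registry's config.yml that turns it into
# a pull-through cache of Docker Hub.
cat > config-fragment.yml <<'EOF'
proxy:
  remoteurl: https://registry-1.docker.io
EOF
cat config-fragment.yml

# Merged into the image's default config, it would run as something like:
#   docker run -d -p 5000:5000 \
#     -v "$PWD/config.yml":/etc/docker/registry/config.yml registry:2
```

One caveat: in proxy mode the registry is read-only for pushes, so it caches upstream images but can’t also host your own.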