What's the difference to a well established tool like kops (https://github.com/kubernetes/kops), which also supports Hetzner?
There is this project to deploy k3s to Hetzner via Terraform: https://github.com/kube-hetzner/terraform-hcloud-kube-hetzne...
It's not the smoothest thing I've ever used, but it's all self hosted and everything can be fixed with some Terraform or SSH.
Great to see some managed Kubernetes on Hetzner!
I can't seem to figure out where this company is located or whether it is a scam. The website has no imprint and no contact address. There is one email address in the privacy statement, but it is "redacted by Cloudflare". The privacy statement also names "Edka Digital S.L.", but there's no indication of which country it is registered in.
For me it does not pass the smell test. No physical address, no idea who is running it, no idea whether the company is actually registered. The pricing FAQ at least talks about VAT, and I assume it is EU VAT, but it could be anything.
This is a great idea. I really like it!
We considered reaching out in May, but held back because we want to run on bare metal.
Any chance to get this provisioned on bare metal at Hetzner?
We have K8S running on bare metal there. It's a slog to get it all working, but for our use case, having a dedicated 10G LAN between nodes (and a bare metal Cassandra cluster in the same rack) makes a big difference in performance.
Also, from a cost perspective: we run AX41-NVMe dedicated servers that cost us about EUR 64 per server, with a 10G LAN, all in the same rack. Getting the same horsepower from Cloud instances would, I guess, mean a CCX43, which costs almost double.
A bit off topic, but you might want to rethink the name. It is very close to EDEKA, the largest German supermarket chain. They have a very large IT division (https://it.edeka) and judging from the name of your project I was expecting it to be one of their projects.
What are the connectivity options between Hetzner dedicated servers? I see they allow you to pay to have servers placed in a single rack with a dedicated switch. But doesn't that introduce a single point of failure in the rack's power or switch?
The site doesn't explain how storage is "solved". Does this solution use local-path provisioning when running PostgreSQL, for example?
This is incredibly timely. I've been an AWS customer for 10+ years and have been having a tough time with them lately. Looking at potentially moving off and considering options.
My theory is that with Terraform and a container-based infra, it should be pretty easy with Claude Code to migrate wherever.
I tried to deploy a small cluster in the US VA region, but the cluster status kept flipping between Failed and Creating with no clear way of troubleshooting it: 7ad975fb-3c8e-47a9-b03d-9e6bec81f0db
Could you explain:
1) What are the limitations of the scaling you do? Can I do this programmatically, i.e. send some requests to get additional pods of a specific type online?
2) What have you done in terms of security hardening? You mention hardened pods/clusters, but specifically: did you do a pentest? Just follow best practices? Periodic scans? Stress tests?
I have yet to see a guide to automating k8s on Hetzner's beefy bare-metal instances. True, you want cattle, but being able to include some bare-metal instances with amazing CPUs and memory would be great, and I do just that: my clusters include both cloud and bare-metal instances. In the past I used the Hetzner virtual switch to create a shared L2 network between cloud and bare-metal nodes. Now I just use Tailscale.
But the TF and other tools use the API to add and kill nodes; if you could pass those tools a class of nodes that they know they can't create but are allowed to wipe and rebuild, that would be ideal.
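For what it's worth, a minimal sketch of the Tailscale approach described above, assuming k3s and the standard Tailscale/k3s install scripts; the auth key, cluster token, and server address are placeholders from my own setup, not anything Edka-specific:

```shell
# Sketch: joining a Hetzner bare-metal node to an existing k3s cluster
# over Tailscale instead of a vSwitch. TS_AUTHKEY, K3S_TOKEN and
# SERVER_TAILSCALE_IP are placeholders you'd fill in yourself.
curl -fsSL https://tailscale.com/install.sh | sh
tailscale up --authkey "$TS_AUTHKEY"

# Register this machine as a k3s agent, advertising its Tailscale IP so
# cloud and bare-metal nodes talk over the mesh rather than public IPs.
curl -sfL https://get.k3s.io | \
  K3S_URL="https://SERVER_TAILSCALE_IP:6443" \
  K3S_TOKEN="$K3S_TOKEN" \
  INSTALL_K3S_EXEC="agent --node-ip $(tailscale ip -4)" sh -
```

This is a provisioning fragment, not a full recipe; you'd still want to lock down the node's public interface with a firewall.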
When I was looking into this, I instead set up Proxmox on Hetzner (which you can do natively from ISO).
From there it was much easier just using it for whatever I wanted, including K3s.
I wonder how long before Hetzner adds something like managed Kubernetes to their native product line. They already have S3-compatible object storage, load balancers and more.
Am I the only one who is confused about "Hetzner" in the title and "AWS KMS" in the body?
This looks great! Haven't tried it yet, but should I presume this also does k8s and OS updates for me? Or how managed is it?
Has anybody found a good way to use encrypted disks with Hetzner yet?
Congrats on shipping! I see that you have WordPress as a pro app. As someone who pays for WP hosting, what I'd like to see there is the ability to "fork" a WP instance, media, DB, everything, with a new hostname, that I can try things, updates, etc.
Is this deploying K3s or full Kubernetes, with the control plane and workers on separate instances?
Love how focussed this is.
I would never have guessed that the overlap between people wanting to run a prod workload on a K8s cluster and folks who need a GUI to set up and manage a K8s cluster would be that big, but it looks like I might be wrong.
Congratulations on the launch!
Are there plans to support GitLab and the GitLab registry (or any registry)?
Typo: One Cluser always free
Why would I use Edka vs using Linode's free Kubernetes offering?
Great job!
Is there a self-hosted version of this?
Great work. Just tried to email support@ and it bounced.
typo on the website: one cluser always free
Great job. Love the project
Exactly what I was looking for. I will give it a shot!
I can't find this Spanish (?) company in the company register, and the legally required information is missing from the website. Not very trustworthy for a SaaS that stores your data and access keys. I'm confident this is only a startup "day one" issue, but in times of increasing scams and extortion, can I be sure? Nope.
We (https://controlplane.com) have had full Hetzner support for over a year now. You can create a K8s cluster on Linode, Hetzner, AWS or other clouds and on-prem environments. We call it MK8s (Managed K8s Service). It is a CNCF certified hosted K8s service. You run the nodes in your own environment -- on Hetzner or anywhere else.
If you email me, I will give you free credits (doron at controlplane.com).
This certainly looks like a pleasingly straightforward way to spin up k8s.
I do notice that this deploys onto their cloud offering, which we've (https://lithus.eu) found to be a little shaky in a few places. We deploy clients onto their bare metal line-up which we find to be pretty rock solid. The worst that typically happens is the scheduled restart of an upstream router, which we mitigate via multi-AZ deployments.
That being said, there is a base cluster size under which a custom bare-metal deployment isn't really viable in terms of economics/effort. So I'll definitely keep an eye on this.