… and best with a web GUI?
For example, solutions like Proxmox are really nice and sophisticated, but only if all of your nodes are actually Proxmox instances. Of course, you can cluster those together and manage them from a single instance. It's no wonder that dedicated server companies offer pre-built Proxmox ISOs (alongside ESXi, the other of the top two solutions for this) to split a dedicated server into multiple VMs.
But what if you manage VMs, containers, or dedicated servers across different public clouds or hosting providers? For example, KVM-based VMs on DigitalOcean / Linode, LXC containers on Proxmox hosts, bare dedicated servers on OVH / Hetzner, and so on? Ideally without agents, though I do not mind installing an agent if that makes life easier overall.
The goal is something that is easy to set up and maintain: a single server acting as a "master" node that connects to any other slave server (VM, container, or dedicated) and manages it with SSH commands or an agent. A simple web GUI would report state and resource usage and (why not) execute shell commands remotely.
There is obviously no need for the compute / storage / network separation.
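To make the agentless idea concrete, here is a minimal sketch of what the "master" node would do over plain SSH. The hostnames are placeholders, not real servers, and the script defaults to a dry run (it just prints the commands) so it can be tried anywhere:

```shell
#!/bin/sh
# Minimal sketch of agentless state collection from a "master" node.
# HOSTS is a hypothetical inventory -- replace with your own machines.
HOSTS="vm1.example.com ct1.example.com dedi1.example.com"

for h in $HOSTS; do
  # Collect load, memory, and disk usage over SSH; no agent required.
  # BatchMode fails fast instead of prompting for a password.
  cmd="ssh -o BatchMode=yes root@$h 'uptime; free -m; df -h /'"
  if [ "${DRY_RUN:-1}" = "1" ]; then
    # Dry run: show what would be executed on each slave.
    echo "$cmd"
  else
    eval "$cmd"
  fi
done
```

A web GUI on the master would essentially be a frontend over exactly this kind of fan-out, parsing the output into state and resource graphs.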
So far I have looked at:
a) Orchestration tools such as Ansible, Chef, Puppet, etc., or Kubernetes / Swarm (though those are primarily designed for containers) are either limited to the CLI or require an overkill setup.
b) OpenStack = Shoot me now (sorry, OpenStack users)
c) OpenNebula is probably closest to this need, but seems a bit over the top (though not too complex if you've tried Kubernetes).
d) oVirt looks really good, but requires CentOS as the base operating system for the slave nodes. That is a problem when a remote VM is already set up with Ubuntu.
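For what it's worth, the Ansible route from (a) does at least handle the mixed fleet in a single inventory, even if it stays CLI-only. A hypothetical sketch (hostnames are placeholders; the ad-hoc command is commented out since it needs Ansible installed):

```shell
#!/bin/sh
# Group the heterogeneous fleet in one Ansible INI inventory.
# All hostnames below are hypothetical placeholders.
cat > inventory.ini <<'EOF'
[digitalocean_vms]
vm1.example.com

[proxmox_lxc]
ct1.example.com

[dedicated]
dedi1.example.com
EOF

# A single ad-hoc command then fans out over SSH to every host, agentless:
# ansible -i inventory.ini all -m ping
# ansible -i inventory.ini dedicated -m shell -a "df -h /"
```

What this lacks, and what the question is really about, is the persistent master node with a web GUI on top.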
Thank you in advance for any suggestions / hints.