I’d expected this, but it still sucks.

  • @DeltaTangoLima@reddrefuge.com
    10 months ago

    I’m intrigued, as your recent comment history keeps taking aim at Proxmox. What did you find questionable about them? My servers boot just fine, and I haven’t had any failures.

    I’m not uninterested in genuinely better alternatives, but I don’t have a compelling reason to go to the level of effort required to replace Proxmox.

    • @TCB13@lemmy.world
      10 months ago

      comment history keeps taking aim at Proxmox. What did you find questionable about them?

      Here’s the thing: I ran Proxmox professionally in datacenters from 2009 until the end of last year, multiple clusters of around 10-15 nodes each. I’ve been around for all of Proxmox’s wins and failures; I’ve seen the rise and fall of OpenVZ, all the SLES/RHEL compatibility issues, and then the move to LXC containers.

      While it worked most of the time and their paid support was decent, I would never recommend it to anyone now that LXD/Incus is a thing. The Proxmox PVE kernel has a lot of quirks and hacks. Besides the fact that it is built on Ubuntu’s kernel, which is already a dumpster fire of hacks (waiting for someone upstream to implement things properly so they can backport them and ditch their own implementations), they pile even more garbage on top of it. I’ve been burned countless times by their kernel when it comes to drivers: waiting months for fixes already available upstream, or for them to fix their own shit after they introduced bugs.

      At some point not even simple things such as OpenVPN worked properly under Proxmox’s kernel. Realtek networking was broken more often than it worked, ZFS support was introduced with guaranteed kernel panics, and upgrading between versions was always a shot in the dark: half of the time you would get a half-broken system that could boot and pass a few tests but would randomly fail a few days later. Their startup is slow, slower than any other solution - it even includes daemons that exist just to ensure that other things are running (because most of them don’t even start properly with the system on the first try).

      Proxmox is considerably cheaper than ESXi, so some businesses use it like we did, but it’s far from perfect. Eventually Canonical invested in LXC, and a very good container solution, much better than OpenVZ and co., was born. LXC became stable and widely used, and LXD added the higher-level hypervisor management, networking, clustering, etc. on top. And now we have all that code truly open-source, with its creators working on the project without Canonical’s influence.

      There’s no reason to keep using Proxmox, as LXC/LXD has gotten really good in the last few years. Once you’re already running LXC containers, why keep dragging along all the Proxmox bloat and potential issues when you can use LXD/Incus, made by the same people who made LXC, which is WAY faster, more stable, more integrated and free?

      I’m not uninterested in genuinely better alternatives, but I don’t have a compelling reason to go to the level of effort required to replace Proxmox.

      Well, if you have some time to spare for testing things, try LXD/Incus and you’ll see. Maybe you won’t replace all your Proxmox instances, but you might end up running a mixed environment like I did for a long time.
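
      If you want a feel for it, a first test looks roughly like this (just a sketch, assuming Incus is already packaged for whatever distro you test on; the container name is only an example):

      ```
      # initialise the Incus daemon (interactive prompts; defaults are fine for a test box)
      sudo incus admin init

      # launch a Debian 12 container from the community image server
      incus launch images:debian/12 test-ct

      # check it is running and get a shell inside it
      incus list
      incus exec test-ct -- bash
      ```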

      • @DeltaTangoLima@reddrefuge.com
        10 months ago

        OK, I can definitely see how your professional experiences as described would lead to this amount of distrust. I work in data centres myself, so I have plenty of war stories of my own about some of the crap we’ve been forced to work with.

        But, for my self-hosted needs, Proxmox has been an absolute boon for me (I moved to it from a pure RasPi/Docker setup about a year ago).

        I’m interested in having a play with LXD/Incus, but that’ll mean either finding a spare server to try it on, or unpicking a Proxmox node to do it. The former requires investment, and the latter is pretty much a one-way decision (at least, not an easy one to roll back from).

        Something I need to ponder…

        • @TCB13@lemmy.world
          10 months ago

          OK, I can definitely see how your professional experiences as described would lead to this amount of distrust. I work in data centres myself, so I have plenty of war stories of my own about some of the crap we’ve been forced to work with.

          It’s not just the level of distrust, it’s the fact that we eventually moved all those nodes to LXD/Incus and the amount of random issues in day-to-day operations dropped to almost zero. LXD/Incus covers the same ground feature-wise (with a very few exceptions that frankly didn’t work properly under Proxmox either), is free, more auditable and performs better under the continuous high loads you expect in a datacenter.

          When it performs that well in the extreme case, why not use it for self-hosting as well? :)

          I’m interested in having a play with LXD/Incus, but that’ll mean either finding a spare server to try it on, or unpicking a Proxmox node to do it.

          Well, you can always virtualize it under a Proxmox node so you get familiar with it ahaha
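
          For instance, in a throwaway Debian 12 VM on one of your Proxmox nodes, something like this should be enough to get Incus going (a rough sketch, assuming an Incus package is available for the VM’s distro, e.g. the Debian backport discussed below):

          ```
          # inside the test VM: enable backports and install Incus
          echo 'deb http://deb.debian.org/debian bookworm-backports main' | sudo tee /etc/apt/sources.list.d/backports.list
          sudo apt update
          sudo apt install -t bookworm-backports incus

          # let your regular user manage Incus (full admin access)
          sudo adduser $USER incus-admin
          ```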

          • @msage@programming.dev
            10 months ago

            How is the development of LXD?

            I am a huge fan of LXC, but I hate random daemons running (so no Docker for me). I have been looking at the Linux Containers website, and they mentioned Canonical taking LXD development under its wing, and something about no one else participating apart from Canonical devs.

            So I’m kind of scared about the future of LXC and Incus. Do you have any more information about that?

            • @TCB13@lemmy.world
              10 months ago

              So I’m kind of scared about the future of LXC and Incus. Do you have any more information about that?

              Canonical decided to take LXD away from the Linux Containers initiative and “close it” by changing the license. Meanwhile, most of the original team at Canonical that made both LXC and LXD into a real thing quit Canonical and are now working on Incus, or somehow indirectly “on” the Linux Containers initiative.

              no one else participating apart from Canonical devs.

              Yes, because everyone is pushing code into Incus and the team at Canonical is now very, very small and missing the key people.

              The future is bright, and there’s money from multiple sources to make things happen. When it comes to the move from LXD to Incus, I specifically asked stgraber what’s going to happen to the current Debian LXD users, and this was his answer:

              We’ve been working pretty closely to Debian on this. I expect we’ll keep allowing Debian users of LXD 5.0.2 to interact with the image server either until trixie is released with Incus available OR a backport of Incus is made available in bookworm-backports, whichever happens first.

              As you can see, even the LTS LXD version present on Debian 12 will work for a long time. Eventually everyone will move to Incus in Debian 13 and LXD will be history.
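
              For anyone already on the Debian 12 LXD package, the upstream migration path is the lxd-to-incus tool that ships with Incus. Roughly (a sketch, assuming Incus is installed alongside the existing LXD, e.g. from bookworm-backports; on Debian the tool may live in a separate incus-tools package):

              ```
              # install Incus next to the existing LXD install
              sudo apt install -t bookworm-backports incus

              # migrate containers, images and config from the LXD daemon over to Incus
              sudo lxd-to-incus

              # verify everything came across before removing LXD
              incus list
              ```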


              Update: here’s an important part of the Incus release announcement:

              The goal of Incus is to provide a fully community led alternative to Canonical’s LXD as well as providing an opportunity to correct some mistakes that were made during LXD’s development which couldn’t be corrected without breaking backward compatibility.

              In addition to Aleksa, the initial set of maintainers for Incus will include Christian Brauner, Serge Hallyn, Stéphane Graber and Tycho Andersen, effectively including the entire team that once created LXD.