Hi all. I was curious about some of the pros and cons of using Proxmox in a home lab setup. It seems like in most home lab setups it’s overkill, but I feel like there may be something I’m missing. Let’s say I run my home lab on two or three different SBCs. The main server is an x86 i5 machine with 16 GB of memory and the others are ARM devices with 8 GB. Ample storage on all. Wouldn’t Proxmox be overkill here and eat up more system resources than just running base Ubuntu, Debian or another server distro on them all and running the services I need from binaries or Docker? It seems like the extra memory needed to run the Proxmox software and then the containers on top would just eat into the available memory and CPU. Am I wrong in thinking that Proxmox is better suited to a machine with 32 GB or more of memory and a reasonably powerful baseline CPU?

  • @ikidd@lemmy.world · 5 months ago

    VMs under KVM run at pretty much bare-metal speed, and Proxmox itself doesn’t use much in the way of resources; it’s basically a headless Debian install with a web interface on top for managing all the KVM stuff.

    Proxmox, especially if you use ZFS for the VM datastore, makes it so much easier to revert, back up and deploy/clone VMs and LXCs in a home lab. I highly recommend it if you’re just starting out. Once you wrap your head around it, it gets out of the way and lets you just tinker with your projects, instead of doing everything by hand in virt-manager or at the command line.
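
    For anyone who wants to script that workflow, here’s a minimal sketch using the third-party proxmoxer Python library against the Proxmox API. The host, credentials, node name and guest IDs below are placeholders, not anything specific to this setup:

    ```python
    from proxmoxer import ProxmoxAPI

    # Placeholder host and credentials -- point these at your own node.
    pve = ProxmoxAPI("192.168.1.10", user="root@pam",
                     password="secret", verify_ssl=False)

    NODE, VMID, CTID = "pve", 100, 200  # assumed node name and guest IDs

    # Snapshot a VM before tinkering; roll back if the experiment goes sideways.
    pve.nodes(NODE).qemu(VMID).snapshot.post(snapname="pre-tinker")
    # pve.nodes(NODE).qemu(VMID).snapshot("pre-tinker").rollback.post()

    # LXC containers get the same treatment.
    pve.nodes(NODE).lxc(CTID).snapshot.post(snapname="pre-tinker")

    # Clone a known-good VM into a fresh guest for the next project.
    pve.nodes(NODE).qemu(VMID).clone.post(newid=101, name="experiment-01", full=1)
    ```

    The equivalent qm and pct subcommands (snapshot, rollback, clone) do the same thing from a shell on the node.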

    Combined with Proxmox Backup Server, it’s a production-ready hypervisor for anything you decide to keep. The HA features also work well enough that my main routing OPNsense VM jumped between nodes when the primary node lost a drive, and I didn’t notice for a week; it was that seamless.
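
    As a rough sketch of what the HA side can look like when scripted (the group name, node names and VMID below are made up), registering a guest as an HA resource is one API call, or ha-manager add on the CLI:

    ```python
    from proxmoxer import ProxmoxAPI

    pve = ProxmoxAPI("192.168.1.10", user="root@pam",
                     password="secret", verify_ssl=False)

    # Define an HA group covering the nodes allowed to run the router VM
    # (group name, node names and priorities are illustrative).
    pve.cluster.ha.groups.post(group="routers", nodes="pve1:2,pve2:1")

    # Register the firewall VM (assumed VMID 100) as an HA resource so the
    # cluster restarts it on another group member if its current host dies.
    pve.cluster.ha.resources.post(sid="vm:100", group="routers", state="started")
    ```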

    • @SaintWacko@midwest.social · 5 months ago

      Seconding this. Especially if you’re still learning and making mistakes, it’s so nice to be able to just destroy a VM/CT and start over, rather than potentially breaking other things or the OS itself.

      • ddh · 5 months ago

        Also needs mentioning: clustering. I have a years-old cluster with none of the hardware I originally started with, but my Pi-hole is still there. Being able to migrate guests between hosts is a game changer when you frequently replace or rebuild said hosts. With the right setup, migration means as little as a few seconds of downtime, or even none at all. You can’t do that with bare-metal installs.
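
        For reference, moving a guest between cluster nodes is a single call against the API (or qm migrate on a node); the node names and VMID here are placeholders:

        ```python
        from proxmoxer import ProxmoxAPI

        pve = ProxmoxAPI("192.168.1.10", user="root@pam",
                         password="secret", verify_ssl=False)

        # Live-migrate VM 100 from pve1 to pve2. online=1 asks for a live
        # migration; with shared or replicated storage the guest keeps running
        # and the cutover is only a brief pause.
        pve.nodes("pve1").qemu(100).migrate.post(target="pve2", online=1)
        ```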

    • @blackstratA · 5 months ago

      How’d you set that up with OPNsense failover? I have an OPNsense VM with the WAN fed straight from the ISP’s FTTP box into a NIC on my server, so I can’t fail over to my second Proxmox box without swapping the cable over.

      • @ikidd@lemmy.world · 5 months ago

        It probably depends on the ISP, but I just have two NICs in each server, and eth1 on both is on a switch to the cable modem. If one goes down, the other comes up fine. I can’t recall if I spoofed the same MAC on the OPNsense VMs.
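
        In case it helps anyone copying the idea: the Proxmox-side piece is just a second bridge on every node bound to the NIC that faces the modem, so the OPNsense VM’s WAN interface works on whichever host the VM lands on. A hedged sketch via the API (the interface and node names are assumptions; the node’s Network tab in the GUI does the same thing):

        ```python
        from proxmoxer import ProxmoxAPI

        pve = ProxmoxAPI("192.168.1.10", user="root@pam",
                         password="secret", verify_ssl=False)

        # On each node, create a WAN bridge enslaving the NIC that is cabled
        # (through the switch) to the modem. "eth1" and the node names are guesses.
        for node in ("pve1", "pve2"):
            pve.nodes(node).network.post(iface="vmbr1", type="bridge",
                                         bridge_ports="eth1", autostart=1)
            pve.nodes(node).network.put()  # apply pending network changes (ifupdown2)

        # The OPNsense VM's WAN virtual NIC then attaches to vmbr1 on whichever
        # node it runs, so failover needs no cables to move.
        ```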