At the moment I have my NAS set up as a Proxmox VM, with a hardware RAID card handling six 2TB disks. My VMs run on NVMe drives, and the NAS VM handles data storage, with the RAIDed volume passed directly through to it in Proxmox. It’s one large ext4 partition, holding mostly photos, personal docs and a few films. Only I really use it. My desktop and laptop mount it over NFS, and I have restic backups running weekly to two external HDDs. It all works pretty well and has for years.

I am now getting ZFS-curious. I know I’ll need to flash the HBA to IT mode, or get another one. I’m guessing it’s best to create the zpool in Proxmox and pass that through to the NAS VM? Or would it be better to pass the individual disks through to the VM and manage the zpool from there?

  • @NeoNachtwaechter@lemmy.world

    “…better to pass the individual disks through to the VM and manage the zpool from there?”

    That’s what I do.

    I like it better this way, because there are fewer dependencies.

    Proxmox boots from its own SSD, and the VM that provides the NAS lives there too.

    The zpool (consisting of 5 good old hard disks) can easily be plugged in somewhere else if needed, and it carries the NAS data but nothing else. I can rebuild the Proxmox base or reinstall that VM, and neither affects the other.
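
    If the hardware ever changes, moving the pool is just an export and an import; a rough sketch of what I mean (the pool name “tank” is only an example):

        # on the old machine or VM, cleanly detach the pool
        zpool export tank

        # after plugging the disks into the new machine, scan for and import it
        zpool import          # lists any importable pools found on the attached disks
        zpool import tank     # imports the pool and mounts its datasets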

    • @blackstratOPA

      Good point. Having a small VM that just needs the HBA passed through sounds like the best idea so far. More portable and fewer dependencies.

  • walden

    I use ZFS with Proxmox. I have it as a bind mount to Turnkey Fileserver (a default LXC template).

    I access everything through NFS (via Turnkey Fileserver). Even other VMs just get the NFS share added to their fstab. File transfers happen extremely fast VM to VM, even though it’s “network” storage.
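
    Roughly what that looks like, with made-up container ID, pool name and paths:

        # on the Proxmox host: bind-mount a ZFS dataset into the Turnkey Fileserver container
        pct set 101 -mp0 /tank/share,mp=/srv/share

        # on the other VMs: the NFS share just goes in /etc/fstab, e.g.
        # 192.168.1.10:/srv/share  /mnt/share  nfs  defaults,_netdev  0  0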

    This gives me the benefits of ZFS, and NFS handles the “what ifs”, like what happens if two VMs access the same file at the same time. I don’t know exactly what NFS does in that case, but I haven’t run into any problems in the past 5+ years.

    Another thing that comes to mind: you should make Turnkey Fileserver a privileged container, so that file ownership goes through the default user (UID 1000 if I remember correctly). Unprivileged containers use wonky UIDs, which requires some magic config that you can find in the docs. It works either way, but I chose the privileged route. Others will have different opinions.
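
    For anyone who goes unprivileged instead, the “magic config” is the idmap section in the container’s conf file; a sketch of the usual pattern (container ID is an example, mapping UID/GID 1000 straight through and shifting everything else as normal):

        # /etc/pve/lxc/101.conf
        lxc.idmap: u 0 100000 1000
        lxc.idmap: g 0 100000 1000
        lxc.idmap: u 1000 1000 1
        lxc.idmap: g 1000 1000 1
        lxc.idmap: u 1001 101001 64535
        lxc.idmap: g 1001 101001 64535

        # and on the host, /etc/subuid and /etc/subgid each need a line like:
        # root:1000:1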

  • @paperd@lemmy.zip

    If you want multiple VMs to use the storage on the ZFS pool, it’s better to create it in Proxmox rather than passing raw disks through to the VM.
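
    Creating it on the host is quick anyway; a rough sketch with placeholder disk IDs (raidz2 here is just an example layout):

        # example: six-disk raidz2 pool, disks referenced by stable IDs
        zpool create -o ashift=12 tank raidz2 \
          /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3 \
          /dev/disk/by-id/ata-DISK4 /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6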

    ZFS is awesome, I wouldn’t use anything else now.

    • @blackstratOPA

      What I have now is one VM that has the array volume passed through, and that VM exports certain folders for various purposes to other VMs. For example, my application server VM has read access to the music folder so I can run Emby, and there’s a similar thing for photos and for shares out to my other PCs etc. This way I can centrally manage permissions, users etc. from that one file server VM. I don’t fancy managing all that in Proxmox itself, so maybe I just create the zpool in Proxmox, pass that through to the file server VM and keep the management centralised there.
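
      The exports side of that file server VM is only a few lines anyway; a sketch with illustrative paths and subnet:

          # /etc/exports on the file server VM
          /srv/media/music   192.168.1.0/24(ro,no_subtree_check)
          /srv/photos        192.168.1.0/24(rw,no_subtree_check)

          exportfs -ra    # reload the export table after editing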

    • @SzethFriendOfNimi@lemmy.world

      If I recall correctly, it’s important to be running ECC memory, right?

      Otherwise corrupted bits/data can cause file system issues or data loss.

      • @snowfalldreamland@lemmy.ml

        I think ECC isn’t any more required for ZFS than for any other file system. But the idea many people have is that if somebody goes to the trouble of using RAID and ZFS, then the data must be important, and so ECC makes sense.

        • @RaccoonBall@lemm.ee

          And if you don’t have ECC, ZFS just might save your bacon where a more basic filesystem would allow corruption.

  • Scrubbles

    I did it on Proxmox. One thing I didn’t know about ZFS: it does a lot of random writes, I believe from logs and journaling. I killed 6 SSDs in 6 months. It’s a great system, but consumer SSDs can’t handle it.

    • @blackstratOPA

      Did you have atime on?
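
      (For reference, that can be checked and switched off per dataset; the pool name below is just an example.)

          zfs get atime tank       # see whether access-time updates are enabled
          zfs set atime=off tank   # or relatime=on as a middle ground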

    • @ShortN0te@lemmy.ml

      I have been using a consumer SSD for caching on ZFS for over 2 years now and do not have any issues with it. I have a 54 TB pool with tons of reads and writes and no problems.

      SMART reports 14% used.
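
      (That figure is just what the SMART wear attribute reports; the device name below is an example.)

          smartctl -a /dev/sda | grep -i -E 'wear|percent'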

  • BlueÆther

    I run Proxmox and a TrueNAS VM.

    • TrueNAS is on a virtual disk on an NVMe drive with all the other VMs/LXCs
    • I pass the HBA through to TrueNAS with PCI passthrough: 6-disk RAIDZ2. This is ‘vault’ and has all my backups of home dirs, photos etc.
    • I pass through two HDDs as raw disks for bulk storage (of Linux ISOs): 2-disk mirrored ZFS (rough qm commands for both sketched below)
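
    Roughly how both passthrough styles are wired up in Proxmox (VM ID, PCI address and disk IDs below are examples, not my real ones):

        qm set 100 -hostpci0 0000:03:00.0              # hand the whole HBA to the TrueNAS VM
        qm set 100 -scsi1 /dev/disk/by-id/ata-DISK1    # or pass individual drives as raw disks
        qm set 100 -scsi2 /dev/disk/by-id/ata-DISK2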

    Seems to work well

    • @blackstratOPA

      I’m starting to think this is the way to do it because it loses the dependency on Proxmox to a large degree.

  • minnix

    ZFS is great, but to take advantage of its positives you need the right drives. Consumer drives get eaten alive, as @scrubbles@poptalk.scrubbles.tech mentioned, and your IO delay will be unbearable. I use Intel enterprise SSDs and have no issues.

    • @RaccoonBall@lemm.ee

      Complete nonsense. Enterprise drives are better for reliability if you plan on a ton of writes, but ZFS absolutely does not require them in any way.

      Next you’ll say it needs ECC RAM

    • @blackstratOPA

      Could this be because it’s RAIDZ-2/3? Those will be writing parity as well as data, plus the usual ZFS checksums. I am running RAID5 at the moment on my HBA card, and my limit is definitely the 1Gbit network for file transfers, not the disks. And it’s only me that uses this thing; it sits totally idle 90+% of the time.

      • minnix

        For ZFS, what you want is PLP (power-loss protection) and high DWPD/TBW endurance. This is what enterprise SSDs provide. Everything you’ve mentioned so far points to you not needing ZFS, so there’s nothing to worry about.

          • minnix

            Looking back at your original post, why are you using Proxmox to begin with for NAS storage??

    • Scrubbles

      No idea why you’re getting downvoted; it’s absolutely correct, and it’s called out in the official Proxmox docs and forums. Proxmox logs and journals directly to the ZFS array regularly, to the point of drive-destroying amounts of writes.

      • @ShortN0te@lemmy.ml

        What exactly are you referring to? The ZIL? ARC? L2ARC? And which docs? I have not found that called out in the official docs.

      • @blackstratOPA

        I’m not intending to run Proxmox on it. I have that running on an SSD, or maybe it’s an NVMe, I forget. This will just be for data storage, mainly photos, which one VM will manage and share out over NFS to other machines.

        • Scrubbles

          Ah, I’ll clarify that I set mine up next to the system drive in Proxmox, through the Proxmox ZFS helper tool. There was probably something in there that configured settings in a weird way.

        • minnix

          Yes, I’m specifically referring to your ZFS pool containing your VMs/LXCs. Enterprise SSDs for that; get them on eBay. Just do a search on the Proxmox forums for enterprise vs consumer SSDs to see the problem with consumer hardware for ZFS. For Proxmox itself you want something like an NVMe with DRAM, deliberately under-provisioned to leave an unused space buffer for the drive controller to use for wear levelling.
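
          One way to leave that buffer is simply not to allocate the whole drive when partitioning; a sketch, assuming a fresh NVMe where roughly 20% is left unpartitioned (device and sizes are examples):

              # wipe the partition table, then create one partition and leave the rest unallocated
              sgdisk --zap-all /dev/nvme0n1
              sgdisk -n 1:0:+410G -t 1:8300 /dev/nvme0n1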