So, I am thinking about getting myself a NAS to host mainly Immich and Plex. Got a couple of questions for the experienced folk:

  • Is Synology the best/easiest way to start? If not, what are the closest alternatives?
  • What OS should I go for? OMV, Synology’s OS, or UNRAID?
  • Mainly gonna host Plex/Jellyfin and Synology Photos/Immich - not quite decided which solutions to go for.

Appreciate any tips :sparkles:

  • InformalTrifle@lemmy.world · 1 year ago

    I’d love to find out more about this setup. Do you know of any blogs/wikis explaining that? Are you separating the storage from the compute with the HBA card?

    • Yote.zip@pawb.social · 1 year ago

      This is a fairly common setup and it’s not too complex - learning more about Proxmox and TrueNAS/ZFS individually will probably be easiest.

      Usually:

      • Proxmox on bare metal

      • TrueNAS Core/Scale in a VM

      • Pass the HBA PCI card through to TrueNAS and set up your ZFS pool there (see the sketch after this list for finding the card’s PCI address)

      • If you run your app stack through Docker, set up a minimal Debian/Alpine host VM (you can technically use Docker under an LXC but experienced people keep saying it causes problems eventually and I’ll take their word for it)

      • If you run your app stack through LXCs, just set them up through Proxmox normally

      • Set up an NFS share through TrueNAS, and connect your app stack to that NFS share

      • (Optional): Just run your ZFS pool on Proxmox itself and skip TrueNAS
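
      For the HBA passthrough step it helps to know the card’s PCI address first. Below is a rough Python sketch (an illustration, not anyone’s exact setup) that scans sysfs on the Proxmox host for SATA/SAS controllers and prints a suggested passthrough command - the VM ID 100 is a placeholder for your TrueNAS VM:

      ```python
      #!/usr/bin/env python3
      """Sketch: find SATA/SAS controllers in sysfs and suggest a Proxmox passthrough command.

      Run on the Proxmox host. It only prints suggestions and changes nothing.
      """
      from pathlib import Path

      VMID = 100  # placeholder: substitute your TrueNAS VM's ID

      # PCI class prefixes: 0x0106 = SATA (AHCI) controller, 0x0107 = SAS/HBA controller
      STORAGE_CLASSES = {"0x0106", "0x0107"}

      for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
          pci_class = (dev / "class").read_text().strip()
          if pci_class[:6] not in STORAGE_CLASSES:
              continue
          addr = dev.name  # e.g. 0000:01:00.0
          print(f"storage controller at {addr} (class {pci_class})")
          # qm takes the address without the leading PCI domain ("0000:")
          print(f"  suggested: qm set {VMID} -hostpci0 {addr.removeprefix('0000:')}")
      ```

      Once the card shows up inside the TrueNAS VM, you create the ZFS pool from the TrueNAS UI as usual.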

      • InformalTrifle@lemmy.world · 1 year ago

        I already run Proxmox but not TrueNAS. I’m really just confused about the HBA card. Probably a stupid question, but why can’t TrueNAS access regular drives connected to SATA?

        • Yote.zip@pawb.social · 1 year ago

          The main problem is just getting TrueNAS access to the physical disks via IOMMU groups and passthrough. HBA cards are a super easy way to get a dedicated IOMMU group that has all your drives attached, so it’s common for people to use them in these sorts of setups. If you can pull your normal SATA controller down into the TrueNAS VM without messing anything else up on the host layer, it will work the same way as an HBA card for all TrueNAS cares.

          (TMK, SATA controller hubs are usually an all-at-once passthrough, so if you have your host system running off some part of this controller it probably won’t work to unhook it from the host and give it to the guest.)
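
          If you want to check what would come along for the ride before trying that, the kernel exposes the groups under /sys/kernel/iommu_groups. Here’s a small sketch (nothing Proxmox-specific, it just reads sysfs) that lists each group and its member devices - if the group holding your onboard SATA controller also contains something the host still needs, unhooking it is likely to be trouble:

          ```python
          #!/usr/bin/env python3
          """List IOMMU groups and the PCI devices inside each one."""
          from pathlib import Path

          groups = Path("/sys/kernel/iommu_groups")
          if not groups.exists():
              # Usually means the IOMMU is disabled in firmware or the kernel (e.g. missing intel_iommu=on)
              raise SystemExit("no IOMMU groups found - is the IOMMU enabled?")

          for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
              devices = sorted(d.name for d in (group / "devices").iterdir())
              print(f"group {group.name}: {', '.join(devices)}")
          ```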

      • rentar42@kbin.social · 1 year ago

        So theoretically, if someone has already set up their NAS (custom Debian with ZFS root instead of TrueNAS, but that shouldn’t matter), it should be relatively straightforward to migrate all of that into a Proxmox VM by installing Proxmox “under it”, right? The only thing I’d need right now is an SSD for Proxmox itself.

        • Yote.zip@pawb.social · 1 year ago

          Proxmox would be the host on bare metal, with your current install as a VM under that. I’m not sure how to migrate an existing real install into a VM so it might require backing up configs and reinstalling.

          You shouldn’t need any extra hardware in theory, as Proxmox will let you split up the space on a drive to give to guest VMs.

          (I’m probably misunderstanding what you’re trying to do?)

          • rentar42@kbin.social · 1 year ago

            I just thought that if all storage can easily be “passed through” to a VM then it should in theory be very simple to boot the existing installation in a VM directly.

            Regarding the extra storage: sharing disk space between Proxmox and my current installation would mean passing through “half of a drive”, which I don’t think works like that. Also, I’m using ZFS for my OS disk and I don’t feel comfortable trying to figure out whether I can resize those partitions without breaking anything ;-)

            • Yote.zip@pawb.social · 1 year ago

              That should work, but I don’t have experience with it. In that case yeah you’d need another separate drive to store Proxmox on.

      • talentedkiwi@sh.itjust.works · 1 year ago

        This is 100% my experience and setup. (Though I run Debian for my Docker VM.)

        I did run Docker in an LXC but ran into some weird permission issues that shouldn’t have existed. Ran it again in a VM and had no issues with the same setup. Decided to keep it that way.

        I do run my Plex and Jellyfin in an LXC though. No issues with that so far.