I currently have a hodgepodge of solutions for my hosting needs. I play TTRPGs online, so I have two FoundryVTT servers hosted on a Pi. Then I have a second Pi that hosts Home Assistant. I also have a Synology device that serves as my NAS and hosts my Plex server.

I’m looking to build a home server from some leftover parts from a recent system upgrade, one unified machine doing all of the above: a NAS, a couple of Foundry instances, Home Assistant, and Plex/Jellyfin.

My initial research has me considering Unraid. I understand that it’s a paid option and am okay with paying for convenience/good product. I’m open to other suggestions from this community.

The real advice I’m hoping to get here is a kind of order of operations. Assume I have decided on the OS I want to use and my system is built. What would you say is the best way to go about migrating all these services to the new server and making sure they are all reachable over the web?

  • BearOfaTime@lemm.ee · 5 months ago

    Can Proxmox with some containers/VMs address your needs?

    It’s what I’m running for a media server (a VM) and some containers for things like Pi-hole and Syncthing.
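
    For what it’s worth, spinning up an LXC container on Proxmox is nearly a one-liner with `pct`. A rough sketch, assuming a Debian template already downloaded to `local` storage and `local-lvm` for the root disk (the container ID, hostname, and storage names here are examples, not the poster’s actual setup):

    ```shell
    # Create an unprivileged LXC container (ID 101) from a Debian template.
    pct create 101 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
      --hostname foundry \
      --cores 2 --memory 2048 \
      --rootfs local-lvm:8 \
      --net0 name=eth0,bridge=vmbr0,ip=dhcp \
      --unprivileged 1

    # Start it and open a shell inside.
    pct start 101
    pct enter 101
    ```

    VMs work the same way via `qm` or the web UI; containers are just lighter for single services.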

    • iAmTheTot@sh.itjust.works (OP) · 5 months ago

      I don’t know, that’s why I’m here for advice lol. I’ve never had to tackle “which OS?” before.

      • lemming741@lemmy.world · 5 months ago

        Proxmox was the answer for me: OpenMediaVault in a VM for the NAS, LXC containers for things that need GPU access (Plex and Frigate). Hell, I even virtualized my router. One thing I probably should have done was set up a single Docker host, or learn Podman or something similar. I ended up with 8 or 9 VMs that run 8 or 9 dockers. It works great, but it’s more to manage.

        You’ll want two network cards/interfaces: one for the VMs and another for the host. Power usage is not great with old gaming parts; discrete graphics seem to add 40 watts no matter what. A 5600G, or an Intel chip with Quick Sync, will get the job done and save you a few bucks a month. I recently moved to a 7700X and transcode performance is great. Expect 100–150 watts 24/7, which costs me $10–15 a month. But I can compile ESPHome binaries in a few seconds 🤣
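
        For reference, giving an LXC container GPU access for transcoding usually comes down to bind-mounting the render devices into the container. A sketch of the relevant lines in the container’s Proxmox config (the container ID is an example; 226 is the device major number for `/dev/dri` render nodes):

        ```
        # /etc/pve/lxc/101.conf (example container ID)
        lxc.cgroup2.devices.allow: c 226:* rwm
        lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
        ```

        This is for Intel/AMD iGPU transcoding; NVIDIA cards need the driver installed on both host and container as well.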

        • AbidanYre@lemmy.world · 5 months ago

          > I ended up with 8 or 9 VMs that run 8 or 9 dockers. It works great, but it’s more to manage.

          It’s more overhead on the cpu, but it’s so easy.

  • Illecors@lemmy.cafe · 5 months ago

    If you can dedicate some time to constant upkeep, pick a rolling distro. Doing major version upgrades has never not caused problems for me, and every major point-release distro has them.

    My choice is Gentoo, but I’m weird like that. Having said that, my email server has been running happily on Arch for just over five years now.

    The Lemmy instance I host is on Debian testing (Gentoo was not available on DO); no issues so far.

    Even when it’s mostly containers, why waste time every n years doing the big upgrade? Small changes are always safer.

  • Ebby@lemmy.ssba.com · 5 months ago

    Why not dockerize Foundry and run all on the Synology?

    Though I did convert my Home Assistant Docker install to HAOS on a Pi for extra features way back in the day. Not sure you’d have to now.
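
    If you do go the container route for Foundry, the community `felddy/foundryvtt` image makes it fairly painless: it downloads the release using your Foundry account credentials. A minimal sketch (the data path and credentials are placeholders):

    ```shell
    # Run one FoundryVTT instance; the image fetches the release
    # using your Foundry account, then serves on port 30000.
    docker run -d --name foundry \
      -p 30000:30000 \
      -v /path/to/foundry-data:/data \
      -e FOUNDRY_USERNAME='your-foundry-username' \
      -e FOUNDRY_PASSWORD='your-foundry-password' \
      --restart unless-stopped \
      felddy/foundryvtt:release
    ```

    For a second instance, run another container with a different name, host port, and data volume.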

    • iAmTheTot@sh.itjust.works (OP) · 5 months ago

      I don’t want to use the Synology anymore; I’m interested in building my own system from leftover parts, with better performance. My current Synology isn’t very powerful, and I also want more drive space.

  • AA5B@lemmy.world · 5 months ago

    Step 0: Decide if there’s anything you don’t want on a common server.

    I realized long ago that my projects sometimes stall out partway through, but some things need to just work regardless of where I am in a project. HA is a great example of something that manages itself (so a VM offers less advantage) and that I want always available. So even if I decide to go down a route like yours, HA stays independent and stays available.

    • null@slrpnk.net · 5 months ago

      Yup, same experience. I started out hosting everything on a single box, but have slowly moved things like HA and Pi-hole to their own machines, so they don’t all go down when that one box goes down.

  • fruitycoder@sh.itjust.works · 5 months ago

    K3s! You could even reuse your Pis in the cluster.

    I would deploy it to your new server, set up your CSI (e.g. Longhorn; it’s pretty simple), find a Helm chart for one of the apps, and try deploying it.
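
    Roughly, that bootstrap looks like this. The k3s and Longhorn commands follow the upstream install docs; the final app chart is a placeholder, since it depends on which chart you pick:

    ```shell
    # Install k3s (single-node: the server also schedules workloads).
    curl -sfL https://get.k3s.io | sh -

    # Install Longhorn as the storage (CSI) layer via Helm.
    helm repo add longhorn https://charts.longhorn.io
    helm repo update
    helm install longhorn longhorn/longhorn \
      --namespace longhorn-system --create-namespace

    # Then find a chart for one of your apps and try it, e.g.
    # (hypothetical repo/chart name):
    helm install jellyfin some-chart-repo/jellyfin
    ```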

    • iAmTheTot@sh.itjust.works (OP) · 5 months ago

      I understood like four of the words in your comment so I’m going to go ahead and assume that solution is too advanced for me.

      • MigratingtoLemmy@lemmy.world · 5 months ago

        K3s is an embedded Kubernetes distribution by a Californian company called Rancher, which is owned by the Enterprise Linux Giant SUSE.

        Kubernetes works on the idea of masters and workers, i.e. you usually cannot bring up (“schedule”) containers (pods) on the master nodes (“control-plane” nodes, in newer terminology). K3s does away with that limitation, meaning you can run just one VM with k3s and run containers on top of it.

        Although if Kubernetes is too hard I would push you towards Podman.

        CSI stands for Container Storage Interface; Longhorn is a storage backend for Kubernetes that provides persistent storage across nodes.
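
        Concretely, on a stock kubeadm-style cluster the control-plane node carries a taint that keeps ordinary pods off it; k3s ships without that taint, which is why a single node can run everything. For comparison, this is how you’d check for (and remove) it by hand (the node name is an example):

        ```shell
        # See whether the node refuses normal workloads.
        kubectl describe node node1 | grep Taints

        # On a kubeadm cluster you would remove the taint manually;
        # on k3s there is nothing to remove.
        kubectl taint nodes node1 \
          node-role.kubernetes.io/control-plane:NoSchedule-
        ```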