I wrote a blog post detailing my homelab setup throughout 2023. It covers the hardware I use and the applications I self-host, and details how I automate my home Kubernetes cluster and how I back up my data.

  • fl42v@lemmy.ml · 10 months ago

    [image: ResizedImage_2024-01-31_00-06-56_6577]

    Makes me wonder, what AI/prompt did you use to generate the pic? Looks neat!

    • mudkip@lemm.ee (OP) · 10 months ago

      I used DALL·E via ChatGPT Plus. The prompt was the first two paragraphs of this post plus “Draw a picture for this article titled My 2023 Homelab Setup”.

      I retried several times and picked my favorite one :)

  • 1984@lemmy.today · 10 months ago

    I really like self-hosting too, but Kubernetes is overkill in complexity. I use Nomad. :)

    • diminou@lemmy.zip · 10 months ago

      In a cluster? I’m actually thinking about using Nomad across the three SFF PCs I use as servers, but I have no clue how to sync storage between them (on the container side, I mean, e.g. Nextcloud data).

      • johntash@eviltoast.org · 10 months ago

        Storage is hard to do right :(

        If you can get away with it, use a separate NAS that exposes NFS to your other machines (sketch below). iSCSI with a CSI driver might be an option too.

        For databases, it’s usually better not to put their data on shared storage and instead use the database’s built-in replication (and take backups!).

        But if you want to go down the rabbit hole, check out Ceph, GlusterFS, MooseFS, SeaweedFS, JuiceFS, and Garage.

        Most shared file systems aren’t fully POSIX compliant, so things like file locking may not work. This affects databases and SQLite a lot. GlusterFS and MooseFS seem to behave the best with SQLite db files, IMO. SeaweedFS should as well, but I’m still working on testing it.
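
        To make the NFS option concrete: on Kubernetes (which the OP is running), a NAS export can be consumed through a static PersistentVolume along these lines. This is just a sketch; the server address, export path, and size are made up:

        ```yaml
        # Static PersistentVolume backed by an NFS export on a NAS
        # (hypothetical server address and export path).
        apiVersion: v1
        kind: PersistentVolume
        metadata:
          name: nas-data
        spec:
          capacity:
            storage: 500Gi
          accessModes:
            - ReadWriteMany              # NFS can be mounted by many nodes/pods at once
          persistentVolumeReclaimPolicy: Retain
          nfs:
            server: 192.168.1.10         # NAS address (made up)
            path: /export/data           # NFS export path (made up)
        ```

        A PersistentVolumeClaim with storageClassName set to "" and a matching size/access mode can then bind to it.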

        • Hexarei@programming.dev · 10 months ago

          Yep, as someone who just recently set up a hyperconverged mini Proxmox cluster running Ceph for a Kubernetes cluster on top of it, storage is hard to do right. It wasn’t until after I migrated my minor services to the new cluster that I realized Ceph’s RBD CSI volumes can’t be mounted by multiple pods at once, so having replicas of something like Nextcloud means I’ll have to use object storage instead of block storage. I mean, I can do that, I just don’t want to lol. It also heavily complicates installing apps into Nextcloud.
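
          (For anyone who hits the same wall: the limitation shows up in the PVC access mode. A rough sketch with a made-up StorageClass name: RBD volumes in the normal filesystem mode are ReadWriteOnce, so anything that needs ReadWriteMany has to come from CephFS or object storage instead.)

          ```yaml
          # PVC on an RBD StorageClass: only one pod can mount it read-write.
          # (The StorageClass name is whatever your Rook/Ceph setup created.)
          apiVersion: v1
          kind: PersistentVolumeClaim
          metadata:
            name: nextcloud-data
          spec:
            accessModes:
              - ReadWriteOnce        # RBD filesystem volumes can't be ReadWriteMany
            storageClassName: ceph-rbd
            resources:
              requests:
                storage: 50Gi
          ```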

      • 1984@lemmy.today · 10 months ago

        Yeah, in a cluster with Consul. Consul gives automatic service discovery and works with Traefik, so I don’t even have to care which node my service is running on since Traefik knows how to find it through Consul.
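
        The Traefik side of that is basically just enabling the Consul catalog provider in its static configuration. A sketch, assuming a local Consul agent on the default port:

        ```yaml
        # traefik.yml (static config): discover services registered in Consul.
        providers:
          consulCatalog:
            prefix: traefik              # tag prefix Traefik looks for on services
            exposedByDefault: false      # only route services tagged traefik.enable=true
            endpoint:
              address: 127.0.0.1:8500    # local Consul agent (assumption)
        ```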

        For storage I went with a simple solution: I installed an NFS server on one of the machines in the Nomad cluster, then configured the Nomad clients to mount that share. All of this with Ansible, so I don’t have to do it more than once.
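
        If anyone wants a starting point, the client side looks roughly like this (the host group, server address, and paths are made up, and it assumes Debian/Ubuntu nodes):

        ```yaml
        # Playbook for the Nomad client nodes: install the NFS client
        # and mount the shared export.
        - hosts: nomad_clients
          become: true
          tasks:
            - name: Install NFS client tools
              ansible.builtin.apt:
                name: nfs-common
                state: present

            - name: Mount the shared NFS export
              ansible.posix.mount:
                src: 192.168.1.20:/srv/nomad-data   # NFS server export (assumption)
                path: /mnt/nomad-data               # mount point on each client
                fstype: nfs
                opts: defaults
                state: mounted
        ```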

  • NowheremanA · 10 months ago

    Nice setup. Going to steal some ideas for my own setup!

  • walden@sub.wetshaving.social · 10 months ago

    Nice, gave me a couple of ideas about some other software I might be interested in hosting for myself.

    I couldn’t get the top-level navigation shortcuts to work in your blog, just FYI.

  • alienscience@programming.dev · 10 months ago

    > The manifest of my Kubernetes cluster is managed in a Git repository and is automatically deployed via a GitOps tool named Flux CD. When I push changes to the repository, such as adding a new application or upgrading Docker images, the deployment occurs within a few minutes.

    This is the way.

    Although I use Flux ImageUpdateAutomation instead of Renovate Bot. Did you consider using Flux to do auto updates? Are there any downsides that made you choose Renovate Bot instead?
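
    For anyone who hasn’t used Flux, the wiring behind that workflow is roughly a GitRepository source plus a Kustomization that applies a path from it. A sketch; the repo URL, names, and intervals are made up:

    ```yaml
    # Tell Flux which Git repository to watch (URL is hypothetical).
    apiVersion: source.toolkit.fluxcd.io/v1
    kind: GitRepository
    metadata:
      name: homelab
      namespace: flux-system
    spec:
      interval: 1m
      url: https://github.com/example/homelab
      ref:
        branch: main
    ```

    ```yaml
    # Apply the manifests under ./apps from that repository and keep them in sync.
    apiVersion: kustomize.toolkit.fluxcd.io/v1
    kind: Kustomization
    metadata:
      name: apps
      namespace: flux-system
    spec:
      interval: 10m
      sourceRef:
        kind: GitRepository
        name: homelab
      path: ./apps
      prune: true        # remove resources that were deleted from Git
    ```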

    • mudkip@lemm.ee (OP) · 10 months ago

      Thanks! I looked at the documentation for Flux image update automation first, but I couldn’t figure out how to handle Helm chart versions, and it seems to require manual “marker” comments to handle image tags in Helm values (which Renovate can manage automatically), so Renovate fits my needs better.
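
      For context, the marker Flux’s image automation expects looks roughly like this inside a HelmRelease’s values (a sketch; the release, chart, and policy names are made up), whereas Renovate finds and bumps the tag without any annotation:

      ```yaml
      # Flux only rewrites lines that carry an image-policy marker comment.
      apiVersion: helm.toolkit.fluxcd.io/v2beta1
      kind: HelmRelease
      metadata:
        name: some-app              # hypothetical release
        namespace: default
      spec:
        interval: 10m
        chart:
          spec:
            chart: some-app
            sourceRef:
              kind: HelmRepository
              name: example-charts  # hypothetical chart repo
        values:
          image:
            repository: ghcr.io/example/some-app
            tag: "1.2.3" # {"$imagepolicy": "flux-system:some-app:tag"}
      ```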