Just some Internet guy

He/him/them 🏳️‍🌈

  • 1 Post
  • 484 Comments
Joined 1 year ago
Cake day: June 25th, 2023

  • To be fair, that’s more the general DevOps/server-admin learning curve than anything specific to Vaultwarden.

    It looks a bit complicated at first because Docker isn’t a trivial abstraction, but it’s well worth it once it’s all set up and going. Each container is always the same, and always independent. Vaultwarden per se isn’t too bad to run without a container, but the same Docker setup can be reused for, say, Jitsi, which is an absolute mess of components to install and make work, some Java stuff and all. But with Docker? Just docker compose up -d, wait a minute or two, and it’s good to go; you just need to point your reverse proxy at it.
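
    For reference, a minimal compose file for Vaultwarden looks something like this (the image tag, paths, and port here are illustrative choices, adjust to taste):

      services:
        vaultwarden:
          image: vaultwarden/server:latest
          restart: unless-stopped
          volumes:
            - ./vw-data:/data              # all persistent data lives here
          ports:
            - "127.0.0.1:8080:80"          # only the local reverse proxy can reach it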

    Why do you need a reverse proxy? Because it’s a centralized place where everything comes in: instead of having 10 different apps each with their own certificates and ports, you have one proxy, one port, and a handful of certificates all managed together, so you don’t have to figure out how to make all those apps play together nicely. Caddy is fine; you don’t need NGINX if you use Caddy. There’s also Traefik, which lands in between Caddy and NGINX in ease of use, and HAProxy. They all do the same fundamental thing: traffic comes in as HTTPS, the proxy reads the Host header from the request and forwards it to the right container as plain HTTP. It doesn’t have to work exactly that way, but that’s the most common setup in self-hosting.
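
    To give an idea, a Caddyfile for a couple of those containers can be as short as this (hostnames and container names are made up; Caddy fetches and renews the certificates on its own):

      # assumes Caddy runs on the same Docker network as the containers
      vault.example.com {
          reverse_proxy vaultwarden:80
      }

      jitsi.example.com {
          reverse_proxy jitsi-web:80
      }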

    As for your backups: if you used a Docker compose file, the volume data should be in the same directory. But the app is probably using some sort of database, so you might want to look into periodic data exports instead. Databases don’t like being backed up live; the file is constantly being updated, so a plain copy won’t give you a consistent snapshot.
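
    Vaultwarden defaults to SQLite, so if that’s what you’re running, a sketch of such an export would be a cron job along these lines (paths assumed from the compose example above):

      # uses SQLite's online backup API to get a consistent copy of a live database
      mkdir -p ./backups
      sqlite3 ./vw-data/db.sqlite3 ".backup './backups/vaultwarden-$(date +%F).sqlite3'"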

    But yeah, try to think of it as an infrastructure investment that makes deploying more apps in the future a breeze. Want to add a Nextcloud? Add another docker compose file and start it; Caddy picks it up automagically, and boom, it’s live and good to go!

    Moving services to a new server is pretty easy as well. Copy over your configs and composes, and volumes if applicable. Start them all, and they should come back up in exactly the same state as they were on the other box. No services to install and configure, no repos to add, no distro to maintain. It’s all built into the container by someone else so you don’t have to worry about any of it. Each update of the app brings with it the whole matching updated OS with the right packages in the right versions.
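
    In practice the whole migration can look something like this (hypothetical paths, and assuming bind mounts sitting next to the compose files):

      # on the old box: stop each app so its volumes are quiescent, then copy everything
      cd ~/apps/vaultwarden && docker compose down
      rsync -a ~/apps/ newbox:~/apps/

      # on the new box: bring each app back up
      cd ~/apps/vaultwarden && docker compose up -d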

    As a DevOps engineer, I love the whole thing because I can have a Kubernetes cluster running on a whole rack, tell it “here are the apps I want you to run”, and it just figures itself out: it automatically balances the load, and if a server goes down, the containers respawn on another one and keep going as if nothing happened. We never have to manually log into any of those servers to install services to run an app. More upfront work for minimal work afterwards.
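
    That “here are the apps I want you to run” part is literally just a manifest you hand to the cluster. A minimal, entirely made-up example:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: some-app
      spec:
        replicas: 3                  # run 3 copies; if a node dies, they respawn elsewhere
        selector:
          matchLabels:
            app: some-app
        template:
          metadata:
            labels:
              app: some-app
          spec:
            containers:
              - name: some-app
                image: example/some-app:1.2.3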



  • IMO the biggest attack vector there would be a Minecraft exploit like the log4j one (Log4Shell), so the most important part to me would be making sure the game server is properly sandboxed, just in case. Start from the point of view that the attacker has breached Minecraft and has shell access as that user: what can they do from there? Ideally, nothing useful other than maybe running a crypto miner. Don’t reuse passwords, obviously.

    With systemd, I’d use the various Protect* directives like ProtectHome or ProtectSystem=full, or failing that, a container (Docker, Podman, LXC, set up manually; there are options). Just a bare Alpine container with Java would be pretty ideal, as you can’t exploit sudo or some other SUID binary if they don’t exist in the first place.
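
    A sketch of what the relevant unit section could look like (the user name and paths are assumed, and the exact directives depend on the server; tighten or loosen to taste):

      [Service]
      User=minecraft
      NoNewPrivileges=true            # blocks privilege escalation via sudo/SUID binaries
      ProtectHome=true                # /home, /root and /run/user are inaccessible
      ProtectSystem=full              # /usr, /boot and /etc are mounted read-only
      PrivateTmp=true                 # the service gets its own private /tmp
      ReadWritePaths=/srv/minecraft   # the only writable path (assumed location)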

    That said, the WireGuard solution is ideal because it limits the pool of potential attackers to people you handed a key, so at least you’d know who breached you.
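
    The WireGuard route boils down to one peer entry per player in the server’s config (keys and addresses here are placeholders):

      [Interface]
      Address = 10.8.0.1/24
      ListenPort = 51820
      PrivateKey = <server-private-key>

      [Peer]
      # one section like this per player you handed a key
      PublicKey = <player-public-key>
      AllowedIPs = 10.8.0.2/32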

    I’ve left forgotten Minecraft servers exposed online and really, nothing happened whatsoever.





  • Titus is fairly trustworthy (he’s made a few videos on the dangers of custom Windows ISOs like AtlasOS), but the thing is written in good part with AI-assisted development, and it also doubles as the dude’s Rust learning experience, so the code is not great. Parts of it are meant to run under ArchISO to install Arch (another sin: an automatic Arch installer), so it makes sense to want a one-liner that downloads and runs the prebuilt binary.

    I wouldn’t use it personally, but it fits his audience. It targets quick and easy, not proper and secure. It’s mostly meant to easily install and clone his setup, and it’s too early in development to really be that useful for everyone else.

    On the winutil side he also does the | iex PowerShell sin, but the toolbox do be really useful to debloat a Windows install.
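
    For the unfamiliar, the sin is piping a script straight off the web into the interpreter, sight unseen. If that makes you squirm, the workaround is just to split it up and read it first (same URL, just not blind):

      # the one-liner everyone copies: downloads the script and executes it blind
      irm "https://christitus.com/win" | iex

      # the slightly less sinful version: download it, read it, then run it
      irm "https://christitus.com/win" -OutFile winutil.ps1
      notepad winutil.ps1    # actually look at what you're about to run
      .\winutil.ps1          # may need an execution-policy bypass depending on your settings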


  • “I’ve read some posts about editing fstab to mount them at startup, but they don’t cover whether the drives will be available to other users or not. Can I just add them to fstab and mount them somewhere that’s available to all users, then sort out the permissions? If so, where’s the best place to put them?”

    Yes, pretty much. An fstab entry just explicitly tells the system where to mount the drive, and for some filesystems you can even force the UID/GID and permission modes.

    Usually /mnt/whatever for static mounts and /media/whatever for removable mounts (the latter appear as drives in file managers, whereas /mnt doesn’t). You can set the users option in fstab to let users mount and unmount the drive without sudo, or auto to always mount it at boot.
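
    A couple of example entries (the UUIDs are placeholders; blkid tells you the real ones):

      # always mounted at boot, available to everyone (permissions sorted out below):
      UUID=1234-abcd  /mnt/storage     ext4  defaults,nofail                 0  2

      # user-mountable removable drive; FAT has no ownership, so force UID/GID here:
      UUID=5678-efgh  /media/usbdrive  vfat  users,noauto,uid=1000,gid=1000  0  0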

    From there, you can usually make a shared group, chown the mount to root:thatgroup, then chmod g+s so new files inherit the group. And you should mostly be good to go.
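
    Something along these lines, with made-up group and user names:

      sudo groupadd shared
      sudo usermod -aG shared alice   # repeat for each user
      sudo chown root:shared /mnt/storage
      sudo chmod 2775 /mnt/storage    # the leading 2 is the setgid bit (g+s): new files inherit the group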


  • You can’t, because normies don’t care about tech except insofar as it benefits them directly in some way. They care about the experience they get, and about doing the same thing everyone else does, because normies are like sheep.

    Normies barely even get how email works, and it’s been around for over 40 years. They know that if they sign up for Gmail it’s free, they get a ton of space, and they get an @gmail.com address. That’s it.

    And even then, people looked at me weird back in 2007 when I made my Gmail account, because “everyone uses Hotmail, why wouldn’t you use Hotmail, everyone uses it so it must be the best”. Heck, just yesterday the teller at the mechanic shop looked at me weird because I used [email protected] to place the online order; they were utterly confused and thought I had made a Gmail or Outlook account for each of those aliases. People don’t think about using email, they think about using Gmail or Hotmail/Outlook.

    Same with Reddit: it didn’t become popular until normies felt like they were missing out by not being on it, and arguably that was Reddit’s downfall, flooding the site with the same repeated arguments and opinions over and over. There too, I’ve been told my Reddit “looks weird” because I use a third-party app. People want to use Reddit, so they download Reddit.

    Normies don’t use Twitter because they want to microblog, they use Twitter because their idols are on Twitter and they want to mimic them. If Taylor Swift opened a Mastodon account and posted exclusively there, we’d get a massive spike of users. And they all would want to register on the same instance as her and it would be the only viable instance to them.

    They just want to fit in and do the same as the others, using the same services and same apps and everything. “Influencers” are everything these days.

    The best way to get normies on the Fediverse, IMO, is endorsing Threads and Bluesky: because those platforms integrate with it, their users effectively end up on the Fediverse without ever having to choose it.




  • The developer benefits from reaching more people, some of whom are likely to purchase the proprietary license. Or sometimes you dual-license just so that licenses are compatible. Each license has pros and cons for both the developers and the users.

    Take Qt for example: the LGPL means you need to dynamically link to it, and if you ship your own build of the Qt libraries you must provide their source code. But if you’re a company that writes proprietary software and can’t dynamically link, you can purchase the proprietary license, which allows you to do a lot more, but then you’re compensating the devs for it. For the Qt devs that’s good either way: either you pay them, or you use it for free but must share your changes with everyone.

    For Elasticsearch, the license makes it so Amazon can’t just patch it up and sell the modified version without sharing what they changed. Elastic wanted to add back a FOSS license to stop the bleed to OpenSearch, which many in the FOSS community switched to purely for the license, because even separate pieces of software should be license-compatible if you want a sustainable FOSS project. And since the AGPL requires offering the source merely for letting people talk to it over the network, Elastic gets either the free dev work or the juicy license payments. The other free licenses achieve similar goals, with technical differences that might matter to the user. And as a developer using Elasticsearch, maybe you do want to ship your own software under the SSPL, in which case you can pick the SSPL version.

    With MIT/GPL dual-licensing, for example, you can build proprietary software on top of it, or GPL software where you vendor it in as GPL-only, and thus guarantee your users their GPL rights.


  • Apart from the technical reasons already mentioned: before things like Twitch, TikTok and Instagram were a thing, people mostly downloaded content and very rarely uploaded much. So it made sense for the ISPs to allocate more downstream channels and advertise much higher download speeds, which is what everyone cared about. Especially with DSL and aging copper lines, it kind of tops out at 40-50 Mbps for most people when they’re lucky (even though VDSL2 Vplus technically goes up to 300/100). And if you’re shoving IPTV onto that line too, 25/25 is much less desirable for the average consumer than, say, 45/5.

    And as others have said, it’s much easier for the ISP to throw more power on your lines to sustain faster speeds, so it just kind of happened that it was convenient for everyone to do it that way.

    It also has the side effect of heavily discouraging hosting servers at home, and it reduces the amount of bandwidth used by torrenting and the like.





  • I haven’t looked into it particularly deeply, but it’s not like there’s a ton of stuff a WM can possibly do, unless the code base is littered with raw X11 calls everywhere.

    Most of the window placement and tiling logic shouldn’t be tied directly to X11; only a small part of the code should really be interacting with X11 to place and size the windows. So a port should target that intermediate layer where all the X11 calls are made.

    And if the code is too shit to port, it probably deserves to die.


  • The identifier is unavoidable for push notifications to work; it needs to know which phone to send them to, after all. Even without Google’s services, it would still need a way to know which device has new messages when it checks in. If it’s not a phone number, it’s gonna be some other kind of ID. Messages need a recipient.

    Also, Signal’s goal is protecting conversations for the normies, not being bulletproof enough to run the next Silk Road at the cost of usability. Signal wants to upgrade people’s SMS messaging and make encryption the norm, and you have to make some sacrifices for that. Phone numbers were a deliberate decision so that people can just install Signal and immediately start texting with E2E encryption.

    If you want something really private, you should be using Tor- or I2P-based solutions, because those are the only systems that can reasonably hide both source and destination completely. Signal has your phone number and IP address, after all; they could track your every movement.

    Most people don’t need to hide who they talk to; they want privacy for their conversations and their content. Solutions with perfect anonymity between users are hard to understand and use for the average person, who is Signal’s target audience.



  • It’s possible to do, but probably not worth the effort of reimplementing all of those protocols only for super old WMs that don’t have a Wayland equivalent. None of them are particularly complex, so it’s probably easier to just port them to wlroots than to implement the compatibility layer, and it’s an opportunity to build an API or library that makes it easy to write WMs.