Every community I care about is dead
You can circumvent this by connecting to a router that has no internet access. It will connect to the router, fail to reach the internet, and then you can tell it to skip the initial setup and enable sideload mode.
JXL is the best image codec we have so far and it’s not even close. I did a breakdown on some of its benefits here. JXL can losslessly convert PNG, JPG, and GIF into itself, and can losslessly convert them back the other way too. The main downside is that Google has been blocking its adoption by keeping support out of Chromium in favor of pushing AVIF, which created a chicken-and-egg problem where no one wants to use it until everyone else does. If you want to be an early adopter, feel free to use JXL - just know that third-party software support is still maturing.
Something you might find interesting is that the original JPEG is such a badass format that they were able to take a lot of their findings from JXL and build a badass JPEG encoder with them, named jpegli. Oddly, jpegli-based JPEGs can’t yet be losslessly compressed into JXL files, per this issue - hopefully that will be fixed at some point.
Yes it’s lossless. JPG->JXL lossless compression is generally 20% savings for free.
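If you want to try the round trip yourself, here’s a minimal sketch using the libjxl reference tools (cjxl/djxl) - filenames are placeholders:

```sh
# Losslessly transcode an existing JPEG into JXL (typically ~20% smaller).
cjxl photo.jpg photo.jxl --lossless_jpeg=1

# Reconstruct the original JPEG, bit for bit, from the JXL file.
djxl photo.jxl photo_restored.jpg
cmp photo.jpg photo_restored.jpg && echo "bit-identical"
```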
Arch should be fine for university stuff. The main problem with Arch is not Arch itself, but all the software it tracks being very fresh. You’ll be pulling updates as they come down the line, and that may result in temporary bugs or day-to-day workflow changes - caused by the software developers themselves. I don’t think an Arch system is unusually unstable or prone to breaking, but last year they did brick everyone’s GRUB loaders by pushing an update too early (post-mortem here). It’s up to you, but if you want to err on the side of system/software stability I would go for Mint/OpenSUSE Tumbleweed/Debian.
I don’t have any practical experience with EndeavourOS but TMK it’s just preconfigured Arch and it uses the default repos, so that sounds good to me. Vanilla Arch is not inherently better or worse, it’s just a more minimal starting point.
I’ve seen a trend where people move the goalposts on the reasons they’re not able to switch. “If only this program worked I could switch”, but when that program gets ported, there’ll be a new excuse waiting. Sooner or later you’ll have to draw a line and say “99% of my stuff works, the 1% that doesn’t can get bent”.
Yeah, I wouldn’t bother. It intends for you to have a duplicate copy on every device, which is probably not what you want. Syncthing is really good for things like synchronizing notes, calendars, password databases, music, etc. to your devices - things that you want to access in both places even though the devices are disconnected from each other from time to time.
Conduit is also licensed under Apache 2.0, so it too could be taken closed source at any point in time. The reason this wouldn’t impact Conduit as much is that it has other contributors, whilst Synapse and Dendrite are almost exclusively developed by Element.
Right. The current perspective is based on the idea that if Synapse/Dendrite went closed-source right now, an open source version would be as good as dead. Element is responsible for 95% of Synapse/Dendrite and I’m sure a community fork would have to play a lot of catch-up to figure out how to keep it going. If the community were more involved in Synapse/Dendrite development (and if Element let them) there would be less cause for alarm, as closing the source would just mean an immediate community fork and putting Element on ignore. Also, to reiterate, the Matrix Foundation is not going along with Element on this move, and even if Element pulled something shady the Matrix Core Spec etc. would still remain open and under the Foundation’s control, so the most we have to lose is Synapse/Dendrite and all of Element’s developers.
As for the rest, I agree, and I do actually trust that Element is simply playing their only card here. These maneuvers are all required for Element to survive as a company at all, but they also unfortunately leave this backdoor open as a consequence. Matthew has pinky-promised over and over that they are only acting in good faith and would never use the backdoor, but it’s understandable that its mere presence is making everyone uneasy. Best case scenario, we take this as a warning sign that if Element drops dead tomorrow then Matrix is dead too. If people don’t want Matrix to be practically owned by Element then we should diversify and prepare escape plans.
It depends on your current workflow/usecase for putting documents on the drive. Syncthing is usually meant to run on two separate devices, with a folder on each device kept in sync - meaning you have a copy of your documents on each device. Is there any reason not to just mount the network drive’s folder and drag the documents in that way?
This is actually quite a controversial change, mainly because of their switch to a CLA. This indirectly gives them the opportunity to take the license closed source whenever they feel like it in the future. Semi-controversially, they are also primarily making this AGPL change in order to begin selling dual licensing to companies. The Matrix Foundation itself does not support this change from Element, though Element is within its rights to make it.
You can read some more thoughts on this from the pessimistic folks on Hacker News. My main takeaway is that I don’t trust Element because I don’t trust anyone. I’m sure they’re doing this in good faith, but I don’t like the power they have at the moment. I hope this is what’s needed to begin focusing efforts on alternative homeserver implementations like Conduit.
I haven’t used Kali Linux before, but `hcxtools` is available in the Debian repos, so presumably your `/etc/apt/sources.list` is invalid (the LiveUSB has probably disabled non-ISO sources). Can you post what is in that file?
Edit: Actually it looks like Kali uses a single line for its repo. Can you add

```
deb http://http.kali.org/kali kali-rolling main contrib non-free non-free-firmware
```

to your `/etc/apt/sources.list`, run an `apt update`, and try again?
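In other words, something like this (assuming you have root on the live system):

```sh
# Append Kali’s standard rolling repo line, refresh, and install.
echo 'deb http://http.kali.org/kali kali-rolling main contrib non-free non-free-firmware' \
  | sudo tee -a /etc/apt/sources.list
sudo apt update
sudo apt install hcxtools
```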
Syncthing - No introduction needed. Couldn’t live without it.
Healthchecks.io (you can self host this) - Dead man’s switch monitoring for all my automation. Most of my automated scripts hit up a Healthchecks endpoint when they run, and if they fail to hit the endpoint on a regular schedule I get notified. Mandatory for my anxiety.
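The pattern is just a ping at the end of each job. A minimal sketch, assuming a Healthchecks ping URL (the UUID and script name are placeholders):

```sh
#!/bin/sh
# Only ping Healthchecks if the job actually succeeded;
# a missed ping on the schedule triggers a notification.
/usr/local/bin/nightly-backup.sh \
  && curl -fsS --retry 3 https://hc-ping.com/your-uuid-here > /dev/null
```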
Flatpak is an alternative packaging system that exists outside of your distro’s normal packaging model (apt/dnf/pacman, etc.). The killer features are that Flatpaks work on any distro with a single universal package, and that the software versions are cutting-edge without needing cutting-edge system dependencies. Flatpaks run against their own dependency runtimes and generally don’t rely on anything from the host system - this means you can have arbitrary software on your machine that your distro/repo maintainers don’t need to compile/quality-control/stability-test/etc. It also comes with an easy sandboxing framework out of the box as a bonus.
In my case I usually use Flatpaks to get more current versions of software without totally messing up Debian’s “Debian does not break” stability model - Debian is meticulously maintained so that its “Stable” branch only has ultra-stable versions of software, at the expense of those packages being older and frozen. If you use a distro with smaller package repos (e.g. OpenSUSE/Fedora/etc) you’ll probably appreciate finding Flatpak versions of software that you’d normally need to manually compile.
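As a rough sketch of the day-to-day workflow (using Flathub as the remote; the app ID is just an example):

```sh
# One-time setup: add the Flathub remote.
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

# Install and run an app; it pulls its own runtime, not system libs.
flatpak install flathub org.mozilla.firefox
flatpak run org.mozilla.firefox

# Update every installed Flatpak in one go.
flatpak update
```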
Flatpaks are cool, and they have a specific use. They’re not the end-all be-all of packaging and they’re (hopefully) not going to replace apt/dnf/pacman. As for why they hate `apt`, I have no idea. `apt` is good, and you can even make it a little nicer by installing nala and using that instead of `apt`.
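For reference, nala is just a friendlier front-end over the same apt machinery; a quick sketch (the package being installed is an arbitrary example):

```sh
sudo apt install nala    # nala is in the Debian/Ubuntu repos
sudo nala update         # nicer output, parallel downloads
sudo nala install htop   # same package names as apt
```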
If the basis of this thread is that you’re digging for distro recommendations I’d personally steer you towards Linux Mint and OpenSUSE Tumbleweed for their ease of use. Debian is a little more difficult to set up than Linux Mint but not tremendously so. Arch is more of an “intermediate” difficulty distro where the main challenge is that your system packages are fast-moving and can break/change in small ways from day-to-day. If you aren’t comfortable with Linux you might get frustrated with minor bugs that you don’t know how to troubleshoot. Conversely, if you want to learn Linux then dealing with Arch’s shenanigans will help expose you to various parts of the system naturally.
The video is clickbait and a few of the distros are in categories just for dramatic effect. I personally share Chris’s criteria for “pointless” distros however, and I hope that his main “clickbait motive” was trying to stop people from hopping around from gimmick distro to gimmick distro when the real magic has always been with the Debian/Arch base underneath the hood. I don’t care to give Chris the attention he wants so I’d rather answer your questions instead of talk about the video directly:
I agree that Debian and Arch are “S-tier” distros. Not that they’re better than everything else for every usecase but they are very high quality community-run distros with large package bases, and they accomplish their mission statements with ease. If you’re a Linux power user for long enough you may eventually settle into one of these two distros because they give you a lot of room to mold your configuration without being opinionated by downstream distro maintainers.
Linux Mint is very good, and it’s probably the only “fork distro” that I recommend people use because it makes Debian/Ubuntu very simple and usable for new users, and it’s done so for many years with a great track record. I currently run Debian Stable but if you put a gun to my head and said “you can only run Linux Mint from now on” I’d be fine with it. Specifically, I prefer the LMDE edition but the normal version is good too.
You can run cutting-edge gaming stuff on Debian Stable and Linux Mint by using Flatpak Lutris/Steam, which uses its own cutting-edge Mesa package instead of the system’s, and you can also install a cutting-edge kernel on these stable distros by using Debian backports or e.g. XanMod. I prefer using stable distros like Debian Stable and pulling cutting-edge versions of your important packages through Flatpak or other means, which gives you a “stable base and rolling top”.
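For the backports route specifically, a hedged sketch on Debian Stable (assuming the bookworm release; swap in the current codename):

```sh
# Enable backports, then pull a newer kernel from it.
echo 'deb http://deb.debian.org/debian bookworm-backports main' \
  | sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update
sudo apt install -t bookworm-backports linux-image-amd64
```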
I think the general usecase for Arch has diminished from half a decade ago due to Flatpak’s popularity, and IMO a stable base setup makes more sense if you can get everything important that you need from Flatpaks. With Arch, not only are the programs you care about bleeding-edge, everything is bleeding-edge, and you may end up with annoying bugs from packages you didn’t even know existed.
If you want a more modern version of the Linux desktop without the bleeding-edge of Arch I think OpenSUSE Tumbleweed is a great cutting-edge distro. They have extensive automatic testing that ensures high system stability even while living near the edge of package freshness. The main downside is OpenSUSE’s smaller package base compared to Debian/Arch-based distros.
The field of programming is so large that I’d hesitate to call any sort of company culture “the norm”, but I would definitely recommend you stay away from the bottom end companies like these. They’re clearly taking advantage of your inexperience and grinding you up for a quick buck, and they’re in no way indicative of a typical or fair programming position.
None of this is your fault; this is your company being garbage. Putting you in charge of all these responsibilities with only about a year of experience is a big red flag. Your project manager asking you to estimate your percentage of work done is also ridiculous - people can barely get decent estimates with fully-dissected agile pointing, so I’m not sure how a “project manager” thinks your guesses are useful data. If you’re not getting paid at least six figures right now you should hop to another job immediately. It won’t look bad on your resume to leave your first job after a year and a half.
I personally wouldn’t stay with a company like this even if they paid fairly, because the culture is borderline toxic/manipulative. I wouldn’t recommend staying; the only argument I see in favor of it is that you’ll learn a lot really quickly - at the expense of your mental health, while being underpaid and risking being fired.
I think mods (including me) wouldn’t want to put effort into a new community if it doesn’t get any interaction, so it would be nice to at least start with it appearing in the “All” tab.
This is a very good usecase for a problem I hadn’t even thought of.
r/linux4noobs -> [email protected] or any of the Linux communities seem to be responsive to questions. I think in these early stages the more niche communities need to exist within the larger communities. If the niche community gets too disruptive to the large community it can break out into its own community.
A couple nits to pick: BTRFS doesn’t use/need journaling because of its CoW nature - data on the disk is always correct because it never writes data back over the same block it came from. Only once data has been written successfully is the pointer moved to the newly-written block. Also, instantaneous copies on BTRFS are actually due to reflinking rather than CoW itself (XFS can also do reflinking despite not being CoW, and ZFS didn’t have this feature until OpenZFS 2.2, which was only just released).
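You can see reflinking in action on any BTRFS (or reflink-enabled XFS) mount - the copy is instant because only metadata is written; filenames here are placeholders:

```sh
# Blocks are shared between the two files until one of them is modified.
cp --reflink=always big-file.img big-file-clone.img
```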
I agree with the ZFS bit, and I’m firmly in the BTRFS/ZFS > Ext4/XFS/etc camp unless you have a specific performance usecase. The ability to scrub data checksums is invaluable in my opinion, not to mention all the other killer features. People have been running Ext4 systems for decades pretending that if Ext4 does not see the bitrot, the bitrot does not exist. (Then BTRFS picks up a bad checksum and people scold it for being a bad filesystem.)
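For reference, a scrub is a one-liner (the mount point is a placeholder):

```sh
# Walk every checksum on the filesystem; repairs from a good copy if redundancy exists.
sudo btrfs scrub start /mnt/data
sudo btrfs scrub status /mnt/data   # progress and error counts
```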
Everyone is fully missing the point here. This is the banner image for [email protected] (not where we are right now, for the record), and it has a normal JPEG size of 7.7MB. When it’s served as WebP it’s 3.8MB. OP is correct that this is very stupid and wasteful for a web content image. It’s a triple-monitor 1440p wallpaper used verbatim, and it should be compressed down to something bandwidth-friendly. I was able to get it to 1.4MB at JPEG quality 80, and when I swap it in via dev tools and A/B test, I can’t tell the difference. This should be brought to the attention of a mod on that community so it can stop sucking people’s data for no reason.
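For anyone wanting to reproduce this, a quality-80 re-encode is a one-liner with e.g. ImageMagick (I’m assuming ImageMagick here; the original poster didn’t specify a tool, and filenames are placeholders):

```sh
# Re-encode at JPEG quality 80 and strip metadata.
magick banner-original.jpg -strip -quality 80 banner-q80.jpg
ls -lh banner-original.jpg banner-q80.jpg   # compare sizes
```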