• 14 Posts
  • 582 Comments
Joined 1 year ago
Cake day: June 10th, 2023

  • The problem is that games don’t run at all or require major effort to run without issues.

    A major cause for that is the distro - when it comes to gaming, the distro makes a huge difference, as I outlined previously. The second major cause is the flavor of Wine you chose (Proton-GE is the best, not sure what you used). The third major cause is not checking whether the games are even compatible in the first place (via ProtonDB, Reddit etc) - you should do this BEFORE you recommend Linux to a gamer.

    In saying all that, I've no idea about pirated stuff though - you're on your own on that one. Valve and the Wine developers obviously don't test against pirated copies, and you won't get much support from the community either.


  • Unfortunately you chose the wrong distro for your friend - Linux Mint isn't good for gaming, as it uses an outdated kernel, drivers and other packages, which means you'll be missing out on all the performance improvements (and fixes) found in more up-to-date distros. Gaming on Linux is a very fast-moving target - the landscape is changing at a rapid pace thanks to the development efforts of Valve and the community. So for gaming, you'd generally want to be on the latest kernel+mesa+wine stack.

    Also, as you’ve experienced, on Mint you’d have to manually install things like Waydroid and other gaming software, which can be a PITA for newbies.

    So instead, I’d highly recommend a gaming-oriented distro such as Nobara or Bazzite. Personally, I’m a big fan of Bazzite - it has everything you’d need for gaming out-of-the-box, and you can even get a console/Steam Deck-like experience if you install the -deck variant. Also, because it’s an immutable distro with atomic updates, it has a very low chance of breaking, and on the rare occasion that an update has some issues, you can just select the previous image from the boot menu. So this would be pretty ideal for someone who’s new to Linux, likes to game, and just wants stuff to work.

    In saying that, getting games to run on Linux can be tricky sometimes, depending on the game. The general rule of thumb is: try running the game using Proton-GE, and if that fails, check ProtonDB for any fixes/tweaks needed for that game - with this, you'll rarely have to spend hours on troubleshooting, unless you're playing some niche game that no one has tested before.





  • Others here have already given you some good overviews, so instead I’ll expand a bit more on the compilation part of your question.

    As you know, computers are digital devices - that means they work on a binary system, using 1s and 0s. But what does this actually mean?

    Logically, a 0 represents “off” and 1 means “on”. At the electronics level, 0s may be represented by a low voltage signal (typically between 0-0.5V) and 1s by a high voltage signal (typically between 2.7-5V). Note that the actual voltage levels, or what is used to represent a bit, may vary depending on the system. For instance, traditional hard drives use magnetic regions on the surface of a platter to represent these 1s and 0s - if a region is magnetised with the north pole facing up, it represents a 1; if the south pole is facing up, it represents a 0. SSDs, which employ flash memory, use cells which can trap electrons, where a charged state represents a 0 and a discharged state represents a 1.

    Why is all this relevant you ask?

    Because at the heart of a computer, or any “digital” device - and what sets a digital device apart from any random electrical equipment - are transistors. They are tiny semiconductor components that can amplify a signal or act as a switch.

    A voltage or current applied to one pair of the transistor’s terminals controls the current through another pair of terminals. This resultant output represents a binary bit: it’s a “1” if current passes through, or a “0” if current doesn’t pass through. By connecting a few transistors together, you can form logic gates, and by combining logic gates you can perform simple math like addition and multiplication. Connect a bunch of those and you can perform more complex math. Connect thousands or more of those and you get a CPU. The first Intel CPU, the Intel 4004, consisted of 2,300 transistors. A modern CPU that you may find in your PC consists of billions (or even tens of billions) of transistors. Specialised chips used for machine learning may even contain trillions of transistors!

    Now to pass information and commands to these digital systems, we need to convert our human numbers and language into binary (1s and 0s), because deep down that’s the language they understand. For instance, take the word “Hi”: using the ASCII system, the letter “H” is converted to 01001000 in binary, and the letter “i” becomes 01101001. Working directly in binary would be quite tedious for programmers, so we came up with a shorthand - the hexadecimal system - to represent these binary bytes. So in hex, “Hi” would be represented as 48 69, and “Hi World” would be 48 69 20 57 6F 72 6C 64. This makes things a lot easier to work with when we’re debugging programs using a hex editor.

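    To see this on a Linux box, you can pipe some text through a hex viewer such as xxd (assuming it's installed) - the middle columns are the hex bytes, and the right-hand column shows the same bytes as ASCII text:

    $ echo -n "Hi World" | xxd
    00000000: 4869 2057 6f72 6c64                      Hi World
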
    Now suppose we have a program that prints “Hi World” to the screen. In its compiled, machine language form, it may look like this (in a hex editor):

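    As a rough illustration (the exact offsets, addresses and surrounding bytes will differ depending on your system and toolchain), a fragment of such a file viewed in a hex editor could look something like this:

    00000060  ba 09 00 00 00 b9 a4 90  04 08 bb 01 00 00 00 b8  |................|
    00000070  04 00 00 00 cd 80 bb 00  00 00 00 b8 01 00 00 00  |................|
    00000080  cd 80 48 69 20 57 6f 72  6c 64 0a                 |..Hi World.|
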
    As you can see, the middle column contains a bunch of hex numbers, which is basically a mix of instructions (“hey CPU, print this message”) and data (“Hi World”).

    Now although hex code is easier for us humans to work with compared to binary, it’s still quite tedious - which is why we have programming languages, which allow us to write programs in a form we humans can easily understand.

    If we were to use Assembly language as an example - a language which is close to machine language - it would look like this:

         SECTION .data
    msg: db "Hi World",10        ; the string to print, plus a newline (ASCII 10)
    len: equ $-msg               ; length of the string, worked out by the assembler

         SECTION .text

         global main
    main:
         mov  edx,len            ; number of bytes to write
         mov  ecx,msg            ; address of the string
         mov  ebx,1              ; file descriptor 1 = stdout
         mov  eax,4              ; syscall 4 = sys_write
         int  0x80               ; call the kernel
         mov  ebx,0              ; exit code 0
         mov  eax,1              ; syscall 1 = sys_exit
         int  0x80               ; call the kernel
    

    As you can see, the above code is still pretty hard to understand and tedious to work with. Which is why we’ve invented high-level programming languages, such as C, C++ etc.

    So if we rewrite this code in the C language, it would look like this:

    #include <stdio.h>
    int main() {
      printf ("Hi World\n");
      return 0;
    } 
    

    As you can see, that’s much easier to understand than assembly, and takes less work to type! But now we have a problem - our CPU cannot understand this code. So we’ll need to convert it into machine language - and this is what we call compiling.

    Using the previous assembly language example, we can compile our assembly code (in the file hello.asm), using the following (simplified) commands:

    $ nasm -f elf hello.asm
    $ gcc -o hello hello.o
    

    Compilation is actually a multi-step process, and may involve multiple tools, depending on the language/compilers we use. In our example, we’re using the nasm assembler, which first parses and converts the assembly instructions (in hello.asm) into machine code, handling symbolic names and generating an object file (hello.o) with binary code, memory addresses and other instructions. The linker (gcc) then merges the object files (if there are multiple), resolves symbol references, and arranges the data and instructions according to the Linux ELF format. This results in a single binary executable (hello) that contains all the necessary binary code and metadata for execution on Linux.

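    The C example from earlier goes through the same broad stages. As a rough sketch, you can watch them one at a time by telling gcc to stop after each step (normally a single "gcc -o hello hello.c" does all of this for you):

    $ gcc -E hello.c -o hello.i     # preprocess (expand #include <stdio.h> etc.)
    $ gcc -S hello.i -o hello.s     # compile the C code into assembly (hello.s)
    $ gcc -c hello.s -o hello.o     # assemble into an object file (hello.o)
    $ gcc hello.o -o hello          # link into the final executable (hello)
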
    If you understand assembly language, you can see how our instructions get converted, using a hex viewer:

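    For instance, a disassembler like objdump can show the raw bytes side-by-side with the assembly instructions they encode. A trimmed, illustrative example (output shortened, and the addresses will differ on your system):

    $ objdump -d hello

    08049000 <main>:
     8049000:  ba 09 00 00 00        mov    $0x9,%edx
     8049005:  b9 a4 90 04 08        mov    $0x80490a4,%ecx
     804900a:  bb 01 00 00 00        mov    $0x1,%ebx
     804900f:  b8 04 00 00 00        mov    $0x4,%eax
     8049014:  cd 80                 int    $0x80
     8049016:  bb 00 00 00 00        mov    $0x0,%ebx
     804901b:  b8 01 00 00 00        mov    $0x1,%eax
     8049020:  cd 80                 int    $0x80
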
    So when you run this executable using ./hello, the instructions and data, in the form of machine code, are loaded by the operating system and handed to the CPU, which executes them and eventually prints Hi World to the screen.

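    Running it looks like this:

    $ ./hello
    Hi World
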
    Now naturally, users don’t want to go through this tedious compilation process themselves; also, some programmers/companies may not want to reveal their code. So most users never look at the code, and just use the binary programs directly.

    In the Linux/open-source world, we have the concept of FOSS (free and open-source software), which encourages sharing source code so that programmers around the world can benefit from, build upon, and improve each other’s work - which is how Linux grew to where it is today, thanks to the collaboration of thousands of developers across the world. This is why most programs for Linux are available to download in both binary and source code form (with the source code typically hosted in a git repository like GitHub, or distributed as a single compressed archive (.tar.gz)).

    But when a particular program isn’t available in binary format, you’ll need to compile it from the source code. Doing this is pretty common for projects that are still in development - say you want to run the latest Mesa graphics driver, which may contain bug fixes or performance improvements that you’re interested in - you would then download the source code and compile it yourself.

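    The exact build steps vary from project to project (each project documents its own), but as a rough sketch, building Mesa from source looks roughly like this - Mesa uses the meson/ninja build system:

    $ git clone https://gitlab.freedesktop.org/mesa/mesa.git
    $ cd mesa
    $ meson setup build/
    $ ninja -C build/
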
    Another scenario is where you might want a program to be optimised specifically for your CPU for the best performance - in which case, you would compile the code yourself instead of using a generic binary provided by the programmer. Some Linux distributions, such as CachyOS, provide multiple versions of such pre-optimised binaries, so that you don’t need to compile them yourself. If you’re interested in performance, look into the topic of CPU microarchitectures and CFLAGS.

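    As a rough illustration (the right flags depend on your CPU and the program), recompiling the earlier example with optimisations enabled and targeting a specific microarchitecture would look like this:

    $ gcc -O2 -march=native hello.c -o hello        # optimise for the CPU you're building on
    $ gcc -O2 -march=x86-64-v3 hello.c -o hello     # or target a generic microarchitecture level (GCC 11+)
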
    Sources for examples above: http://timelessname.com/elfbin/


  • This shouldn’t even be a question lol. Even if you aren’t worried about theft, encryption has a nice bonus: you don’t have to worry about securely erasing your drives when you want to get rid of them. I mean, sure, it’s not that big of a deal to wipe a drive, but sometimes you’re unable to do so - for instance, the drive could fail and you may not be able to do the wipe. So you end up getting rid of the drive as-is, but an opportunist could get hold of that drive, attempt to repair it and recover your data. Or maybe the drive fails but it’s still under warranty and you want to RMA it - with encryption on, you don’t have to worry about some random accessing your data.



  • On Windows, I would never need to know that the “File browser window” is called “explorer”

    I do though. That knowledge is pretty handy for launching apps via the Run dialog, which I find faster than using the Start Menu with its horrible search. And it’s become even more important to me with recent versions of Windows getting rid of the classic Control Panel UI - you can still access the old applets by name, without needing to put up with the horrid Metro UI. For instance, I find the network settings applet far more convenient and easy to use, so I just launch it via ncpa.cpl. Or if I want to get to the old System applet to change the hostname/page file size etc, I can get to it by running sysdm.cpl. Or getting to Add/Remove Programs via appwiz.cpl, and so on.

    Also, knowing the actual commands opens up many scripting and automation possibilities, or say you just want to create a custom shortcut to a program/applet somewhere. There are several useful applets you can launch via rundll32 for instance.

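    A couple of examples you can paste straight into the Run dialog or a shortcut:

    rundll32.exe sysdm.cpl,EditEnvironmentVariables   (opens the Environment Variables dialog directly)
    rundll32.exe user32.dll,LockWorkStation           (locks the workstation)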

  • It’s interesting, but it always seemed a bit too hacky for my liking and possibly prone to breakage. E.g. the compatibility table here doesn’t inspire much confidence: https://bedrocklinux.org/0.7/feature-compatibility.html

    I also don’t like that it hijacks your host distro - it would’ve been better if it were a bit more self-contained, like how Nix works on other distros. The mashup Bedrock does feels like it would be a PITA to troubleshoot (for instance, mixing binaries from different distros via $PATH is just asking for trouble). I also dislike that it uses FUSE to share resources between strata, given how inefficient FUSE is.

    I think for most purposes, if you really want to mix-and-match distro features, a far cleaner approach would be to just use Distrobox.
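
    For example, pulling an Arch userland onto whatever host distro you're running is just (a minimal sketch - "arch-box" is a placeholder name, and any OCI image your registry provides will do):

    $ distrobox create --name arch-box --image docker.io/library/archlinux:latest
    $ distrobox enter arch-box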



  • d3Xt3r@lemmy.nz to Linux@lemmy.ml · Trying to ditch windows

    .NET is now fully cross platform. you can absolutely run and debug applications on linux as you would in windows.

    Correct me if I’m wrong, but isn’t this limited to just console apps - as in you can’t yet run GUI apps, unless you’re using a cross-platform toolkit like Avalonia, or it’s a WinForms app running under Mono?


  • d3Xt3r@lemmy.nz to Linux@lemmy.ml · Stay on Fedora or Switch to Void?

    This is not an answer or recommendation btw, just chiming in with my 2c as an Arch and Fedora user who’s tried Void for a while.

    From what I’ve experienced, there was no visible difference in startup/shutdown speed compared to Arch. This was on a Zen 4 mini PC with a Samsung 980 Pro PCIe 4.0 NVMe, and I suspect it’ll be the same for anyone on a modern system with an NVMe drive; if you’re on an older PC with a spinning disk or limited RAM, you might notice a difference. Both Void and Arch were visibly faster at startup/shutdown compared to Fedora, but we’re only talking about a couple of seconds here. Again, on an NVMe, startup/shutdown speeds shouldn’t really be relevant these days, unless there’s some bug or misconfiguration slowing down your init.

    I definitely do like the idea of using musl over the bloated glibc, but there are still far too many programs out there that depend on glibc, so you won’t be able to get rid of it completely on a full-fledged desktop.

    The package manager (xbps) wasn’t visibly faster compared to pacman either (especially with pacman’s parallel downloads). Also, I missed the unique features found in certain AUR helpers, like pikaur, which showed the latest Arch news and package comments.

    However xbps is definitely a lot faster than the current dnf on Fedora, although that gap may close with dnf5 - which you can install if you want to. I haven’t tested dnf5 yet though so can’t comment on it. The xtools features in Void were pretty nifty, but in saying that, the lack of them on other distros wasn’t that big of a dealbreaker.

    Finally, what I’m ultimately after is performance, and Arch with x86-64-v4 packages and the BORE scheduler performed much better overall compared to vanilla Void (or Fedora, for that matter). If Void had x86-64-v4 packages as well, I might consider using it as one of my primary distros, but at present I’d relegate it to niche scenarios where system resources are limited.

    If you want to try Void without fully switching, just install it in a VM and give it a good go. With the state of KVM these days there’s very little performance overhead - you can definitely daily-drive Void inside a VM, and then form your own conclusions as to whether it’s worth switching.


  • I wish they’d done this a decade ago, back when they tried to crowdfund the Ubuntu phone - and subsequently scrapped all plans just because they didn’t meet the target. There was already a big dev scene in the community, with people porting Ubuntu to Android phones - they could’ve easily partnered with them, like how OnePlus partnered with CyanogenMod a year later. I mean, Canonical did raise $12 million through the campaign, which showed there was not only plenty of interest, but also plenty of people willing to actually fund it.

    The problem now is that Google and Apple have gained such a deep foothold in the market that it may be a bit too late. After the disappointment of the scrapped Ubuntu Phone and the subsequent loss of trust in Canonical over the years, I can’t help but be sceptical about this whole thing. I’ll celebrate if and when we have an actual, usable, flagship device in our hands, and not something gimped like the Librem 5 or the PinePhone.


  • Nice review, thanks for sharing! I was curious about how the V3 was with Linux. I’ve got a Minisforum UM780 mini PC with a 7840HS, which I use as a homelab box, and it’s been excellent on Arch. I was tempted to get the V3 as well, but 14" is a bit too big for my use case (primarily as a tablet).

    But it’s nice knowing that even the fingerprint reader worked out of the box - I know that’s been a sore point for many Linux users. The battery life seems a bit on the lower end though - have you tried TuneD yet? Apparently some folks have experienced better battery life with it compared to PPD. I’m also curious what the battery life would be like on a distro that uses x86-64-v4 packages, such as CachyOS - in theory you should get better battery life since you’d be using more optimised instructions.






  • I think you’d be fine with either, but in the end it comes down to how “hands-off” you want to be, or how much customisability, flexibility and performance you’re after. Unlike Manjaro, Cachy is closer to Arch, which means things may on rare occasions break or require manual intervention (you’ll need to keep up with the Arch news). Bazzite on the other hand is the polar opposite, being an immutable distro - updates are atomic (they either work or they don’t, and if an update is no good, you can easily roll back to a previous version from GRUB); but this also means you lose some customisability and flexibility - you can’t run a custom kernel or mess with the display manager (logon screen) etc, and you’ll need to mostly stick to installing apps via Flatpak or Distrobox.

    Overall, if you’re after a console-like experience that just works™, then choose Bazzite. On the other hand, if you’re a hands-on type of person who likes to fine-tune things and is after the best possible performance, choose CachyOS.


  • NetworkManager doesn’t support DoH, DoT or other recent protocols like DoQ and DoH3. You’ll need to set up a local DNS resolver / proxy which can handle those protocols. You could use dnsproxy for this. Once you set it up, you can just use “127.0.0.1” as your DNS server in NetworkManager.

    Btw, if possible I’d recommend sticking to DoH3 (DNS-over-HTTP/3) or DoQ (DNS-over-QUIC) - they perform better than DoT and vanilla DoH, and are more reliable as well.
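
    As a rough sketch of what that could look like with dnsproxy (the connection name and upstream server here are just placeholders - check dnsproxy's README for the flags supported by the version you install):

    $ dnsproxy -l 127.0.0.1 -p 53 -u quic://dns.adguard-dns.com
    $ nmcli connection modify "Home WiFi" ipv4.dns 127.0.0.1 ipv4.ignore-auto-dns yes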