• 0 Posts
  • 332 Comments
Joined 2 years ago
Cake day: June 10th, 2023

  • what does the community think of it?

    Everyone has their own opinion; personally I think they’re a great idea and have lots of great applications. But just like rolling vs non-rolling release, it’s a personal and application-dependent choice.

    Do the downsides outweigh the benefits or vice versa?

    Again, it depends. For my personal computer I wouldn’t use it, because I think it could get complicated to get specific things to work, but for closed hardware like the Deck, or even a fairly stable desktop used as a gaming system, it’s perfect.

    Could this help Linux reach more mainstream audiences?

    It could, but it can also hamper it, because people might try solutions that only work until the next boot without understanding why, or have more trouble getting some special hardware to work than they would on a mutable distro. But there is a great counter to this: once it’s running, it will be very difficult to break through user error.

    At the end of the day I think it’s a cool technology, but people should know what they’re getting into. Just like choosing between a rolling and a non-rolling distro, it’s not about what’s better, but about what suits your needs best.


  • It’s perfectly normal, especially when you’re still so green. I distro hopped a lot during my first 4 years: started with Ubuntu and tried a bunch of stuff until settling on Arch back in 2008. Since then I’ve tried one distro or another for some amount of time or for a specific purpose, e.g. servers running Debian, work machines running Ubuntu, and there was a 2-year gap where I used Gentoo as my main system (but despite the things I loved there, I just didn’t have the patience). Just the other day I was talking about Bazzite with someone here on Lemmy, and they made such a good defense of it that I might install it in a VM for testing, and I’ve also been meaning to give NixOS a serious try. All of which is to say: yes man, trying different stuff is normal. Even if you’re perfectly happy with what you have, you won’t know if there’s anything better for you unless you try it. I used to think I was happy on Windows.



  • I think you got docker mixed up with something else, since docker does the exact opposite, i.e. it allows you to run services without all of the arcane shit involved. Just put the compose file in a folder, run docker compose up -d, and you’re done, whereas the alternative would be to install a database, configure it, install the immich service, connect it to the database, write a service file for both the database and the service so they auto-start, and face multiple issues due to missing dependencies or permissions.




  • It’s not, though; people have used it to promote pyramid schemes so it gets a bad name. It’s almost like considering e-mail to be spam: it’s not, and even though a large chunk of emails are spam, the rest are very useful.

    Ethereum can be used to represent ownership in a way that’s non-transferable by anyone other than the owner; this can have real-life applications such as deeds for houses or car registrations. Someone noticed that another excellent application for this is art ownership: for example, a token that represents a painting could be used by art collectors as proof of authenticity, since only one person can own the token and that person can prove they own it, so if the painting were stolen there would be a way to prove you’re the rightful owner. Someone heard that and convinced artists to sell art using those tokens without fully understanding what they were selling, and a bunch of people bought them without understanding what they were buying. Then they noticed that this would only work if others were on board, so they tried to push it onto other people, and eventually you had people who didn’t understand a thing paying thousands of dollars for a drawing of a monkey…

    Ethereum could be used for so many awesome things, but obviously we live in the awful reality where its biggest and best-known application was used to scam people.



  • Not aware of any, but I’ll do my best on my own.

    Let’s abstract money to its bare minimum. In its most basic form, money is an abstract fungible token (i.e. one of it is the same as any other, they’re interchangeable) that can be sent or received, and the most basic way to keep track of that is with a ledger. A ledger, at its most basic, is a list of entries saying things like “Alice earned 5 coins” and “Alice paid Bob 3 coins”; by looking at these 2 (and assuming they’re the only ones in our ledger) we know that Alice now has 2 coins and Bob has 3. Therefore, if Alice now tries to send 3 coins to someone else, we know that’s invalid because she doesn’t have that many coins.
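
    Here’s a minimal sketch of that idea in Python (names and structure are purely illustrative, no real coin stores its ledger like this):

    ```python
    # Toy ledger: an ordered list of entries; balances are derived by replaying it.
    ledger = [
        ("mint", "Alice", 5),   # "Alice earned 5 coins"
        ("Alice", "Bob", 3),    # "Alice paid Bob 3 coins"
    ]

    def balances(entries):
        totals = {}
        for sender, receiver, amount in entries:
            if sender != "mint":
                totals[sender] = totals.get(sender, 0) - amount
            totals[receiver] = totals.get(receiver, 0) + amount
        return totals

    def can_spend(entries, sender, amount):
        # A new payment is only valid if the sender's derived balance covers it.
        return balances(entries).get(sender, 0) >= amount

    print(balances(ledger))               # {'Alice': 2, 'Bob': 3}
    print(can_spend(ledger, "Alice", 3))  # False: Alice only has 2 coins
    ```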

    Ok, so that’s the basics of what money is, but as a general rule the ledgers for most currencies are centralized, e.g. your bank has the ledger for your account. For years people tried to find a way to create a decentralized ledger, so anyone could have a copy of the ledger and validate it on their own; that way the currency on that ledger could not be controlled by anyone. There are two big problems with that: first, you need a way to ensure that only the owner of an account can give away their coins, and second, you need a way to ensure no one cheats the system. For instance, in the example above, if Alice could remove her previous transaction from the ledger and input a new one, she could convince Bob she paid him but actually send the money to someone else.

    Problem 1, ownership. This is a slightly difficult one, so I won’t explain it fully; if you’re interested, read about public and private keys. Essentially, cryptography gives us a way to sign a message such that anyone can verify who signed it without being able to reproduce the signature. In practice this means each account/wallet has 2 numbers: a private one used to sign messages (anyone who knows this number can spend the coins) and a public one used to identify who coins are sent to and to verify that the spending of coins was properly signed. This has been a solved problem for decades and it’s a very secure and accepted solution; we use it for things like SSH, SSL and the like.
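
    For illustration, here’s a minimal sketch of signing and verifying with the Python cryptography package (Ed25519 keys used just as an example, the exact scheme varies by coin):

    ```python
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()   # only the wallet owner knows this
    public_key = private_key.public_key()        # shared with everyone

    message = b"Alice pays Bob 3 coins"
    signature = private_key.sign(message)        # only the private key can produce this

    try:
        public_key.verify(signature, message)    # anyone can check it with the public key
        print("signature is valid")
    except InvalidSignature:
        print("signature is invalid")
    ```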

    Problem 2, consensus. This is the hardest problem to solve, and this is the brilliance of Bitcoin. The way Bitcoin solved it is:

    - A block is several entries in the ledger;
    - The entries can be arranged in multiple ways, each way yielding a different hash for that block;
    - Each block has a reference to the hash of the block that came before it;
    - Only certain hashes are acceptable (e.g. hashes that end with 0, or with 00), and this hash cannot be predicted, so it needs to be brute-forced;
    - Whoever creates a block can insert a transaction giving themselves some amount of coins.

    Phew, that’s a lot, but what does it all mean? It means that everyone sees every transaction in the network and tries to build a block that will be accepted; the first person who does shows their block to the world, and everyone then tries to find the next block after that one. For Bitcoin the longest chain is the valid one, so if someone found a block it’s in your best interest to start trying to find the next one. It’s also in your best interest to show the world your block as soon as possible so others will build on top of it: the more blocks on top of yours, the more unlikely it is that someone will be able to overwrite it (they would need to find more blocks than what has been built on top of yours, and even finding one block is hard because of the specific hash that needs to be generated). The difficulty (i.e. the rule for which hashes are acceptable) is adjusted to ensure that, on average, one block is found every 10 minutes across everyone trying to find blocks.
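
    Here’s a toy version of that brute-force search in Python (Bitcoin actually hashes a binary block header with double SHA-256 against a much harder target, but the idea is the same):

    ```python
    import hashlib
    import json

    def mine_block(transactions, previous_hash, difficulty=4):
        """Try nonces until the block's hash starts with `difficulty` zeros (toy rule)."""
        nonce = 0
        while True:
            block = {"transactions": transactions, "previous_hash": previous_hash, "nonce": nonce}
            block_hash = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
            if block_hash.startswith("0" * difficulty):
                return block, block_hash
            nonce += 1

    # The miner includes a reward to themselves plus the pending transactions.
    txs = ["block reward to Miner", "Alice pays Bob 3 coins"]
    block, block_hash = mine_block(txs, previous_hash="0" * 64)
    print(block["nonce"], block_hash)   # typically tens of thousands of attempts
    ```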

    All together now: currently Alice has 10 coins, and she uses her private key to sign a transaction giving Bob 6 coins. This transaction gets picked up by several miners. One of them finds the next block and includes this transaction in it. Now all miners are trying to find the next block after that one, and when they do, this transaction will have been validated by 2 blocks, so it’s far more likely to keep being validated. After 6 blocks it would take the entire mining network 1 hour to undo that block, and unless 51% of the random strangers mining Bitcoin decide to cooperate, the rest of the miners will keep adding blocks on top, making this transaction practically impossible to revert. If after that Alice tries to spend 5 coins, no miner will include that transaction, because it would create an invalid block that the other miners would just ignore.
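
    And a rough sketch of what “other miners would just ignore it” means: every node independently replays the ledger and rejects any block containing an overspend (again a toy model, not Bitcoin’s actual data structures):

    ```python
    def replay(chain):
        """Replay every transaction in every block; reject any block that overspends."""
        balances = {}
        for block in chain:
            for sender, receiver, amount in block["transactions"]:
                if sender != "reward" and balances.get(sender, 0) < amount:
                    raise ValueError(f"invalid block: {sender} overspends")
                if sender != "reward":
                    balances[sender] -= amount
                balances[receiver] = balances.get(receiver, 0) + amount
        return balances

    chain = [
        {"transactions": [("reward", "Alice", 10)]},   # Alice starts with 10 coins
        {"transactions": [("Alice", "Bob", 6)]},       # the signed transaction above
    ]
    print(replay(chain))                               # {'Alice': 4, 'Bob': 6}

    chain.append({"transactions": [("Alice", "Carol", 5)]})
    try:
        replay(chain)
    except ValueError as error:
        print(error)                                   # invalid block: Alice overspends
    ```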

    There’s a bit more to blockchains; for example, each transaction also pays some amount to the miners as an incentive, so you can have one transaction take priority over another by paying more to the miners.

    What about other coins? There are lots of them out there; I’ll only mention one, Ethereum. Ethereum takes this concept to the next level: instead of a ledger storing only transactions, it also stores programs, so one can have a program that, if you pay it X coins, gives you Y other tokens, or any number of other complicated things. Also, Ethereum recently changed from proof of work (i.e. finding the hash) to proof of stake, in which people lock up some amount of coins to be allowed to validate transactions, but lose those coins if they validate an invalid transaction.
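
    To give a rough idea of what “a program stored on the ledger” looks like, here’s a plain Python toy (real Ethereum contracts are usually written in Solidity and compiled to EVM bytecode; this only shows the shape of the idea):

    ```python
    class TokenSwapContract:
        """Toy 'smart contract': pay it X coins and it credits you with Y tokens."""

        def __init__(self, tokens_per_coin=100):   # rate chosen arbitrarily for the example
            self.tokens_per_coin = tokens_per_coin
            self.token_balances = {}               # contract state, kept "on the ledger"

        def buy_tokens(self, buyer, coins_paid):
            # Every node runs this same code and must arrive at the same resulting state.
            earned = coins_paid * self.tokens_per_coin
            self.token_balances[buyer] = self.token_balances.get(buyer, 0) + earned
            return self.token_balances[buyer]

    contract = TokenSwapContract()
    print(contract.buy_tokens("Alice", 2))   # Alice paid 2 coins and now holds 200 tokens
    ```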

    I strongly recommend you read the Bitcoin white paper; it’s not as difficult as you might think, and it goes into a lot more detail on how things work.


  • In that sense it is a bit of scripting. It’s a templating language similar to Jinja, so you put the things you want to display between {{ }}; for example {{name}} will get rendered as the content of the name variable. [[ ]] is the way Silverbullet handles links, so [[Something]] is a link to the file Something.md, and [[ {{ name }} ]] is a link to the file whose name comes from the variable.

    Also, that’s because I wanted a custom view; a very similar thing could be done with:

    ```query
    recipe
    ```
    

    BTW, you can have a table of contents on Silverbullet by just putting a block named toc, i.e. ```toc and closing it on the next line.



  • Let me give you an example; I have a page with this:

     ```template
     | Name | Keywords |
     |-----------|-----------------|
     {{#each {recipe}}}
     | [[{{name}}]] | {{keywords}} |
     {{/each}}
     ```
    

    Then each recipe page has a header, so for example if I have a file named Recipes/Steak.md with the content:

    ---
    tags: recipe
    keywords: beef easy
    ---
    
    # Ingredients 
    
    Yadda yadda yadda...
    
    

    So that table gets populated with all of the recipes wherever they are and I can add other columns or info there. It’s very neat and customizable.


  • Silverbullet is open source and has a very simple architecture with slightly extended markdown files, which are easy to sync using whatever you use for syncing files. Plus, it syncs files locally and allows you to edit offline and sync later (with basic sync conflict resolution to avoid losing changes), and a very cool feature is that it allows you to write your own scripts to get whatever feature you want.


  • Another vote for Silverbullet; I’ve been using it for a while and it’s great. There is a tree view plugin that’s very easy to install; however, I disabled it after a short while because I realized that, with the way I take notes, it’s a lot less useful than other features.

    For example, I have a folder with all my cooking recipes. At first I thought having a tree view would be good there, but if I use the querying mechanism I can have tables that give me more information than just the name, e.g. tags, difficulty, etc. Also, this works regardless of where the recipes are, so if I want to create a subfolder structure or scrape recipes from elsewhere in the space, it would still work (granted, not very useful for recipes, but I also have work tools, some of which are embedded on another page and some of which are a page of their own, and a table that lists all of the tools to give me an overview).


  • Different things. Bash/Zsh/Fish/Nu are shells, i.e. a low-level CLI interface to the computer. On systems with graphics you need a graphical program (a terminal emulator) to display the shell, e.g. Konsole/GNOME Terminal/Alacritty. There’s also a third (optional) program to render the line where you type commands differently; this is called a prompt, and there are several different ones, e.g. Powerlevel10k/oh-my-posh/Starship.


  • There are a lot of moving parts, so let’s start from the ground up. Processors are glorified input-output machines: you put electricity into some pins, and they give you back electricity on other pins. Some of those pins define which operation you want and others carry the input, so for example sending 00000010 to the operation pins could mean addition, and the output pins will then show the result of adding your inputs. Each binary number can be interpreted as a decimal or hexadecimal number, but people are bad at remembering numbers, so instead we have a conversion table that says, for example, that ADD means 00000010; you then write a program saying ADD 2 3, and that’s called assembly language.

    Each processor family has its own table of which operations it can do, so writing assembly is tedious since you need to know and account for all of that. Instead, you can write in a higher-level language, where a program called a compiler translates your code into assembly, taking care of the differences between processors.
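
    As a loose illustration (Python compiles to bytecode for a virtual machine rather than to real CPU assembly, but the idea of one high-level line turning into several low-level operations is the same):

    ```python
    import dis

    def add(a, b):
        return a + b

    # Prints the low-level operations that the single line `return a + b` turns into,
    # e.g. LOAD_FAST a, LOAD_FAST b, a binary-add instruction, RETURN_VALUE.
    dis.dis(add)
    ```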

    So far, so good, but there is some stuff that is recurrent and requires special care. For example, a processor knows nothing about the disks or memory in the system, so you need a program running there to manage that stuff. We call that program an operating system.

    Different operating systems do things differently: one might store things in any order on the disk to save on write speed, while another might align data where suitable to save on read speed. They also provide different high-level APIs for it, e.g. one OS might have open_file(char* full_path) while another could have open(char* folder, char* file). So if a program tries to call open on an OS that only provides open_file, that call simply doesn’t exist and the program breaks.

    Then, just like with the OS, programs sometimes use libraries that they expect to be installed on your system, such as DirectX on Windows. These libraries also have their own functions that the program tries to call.

    So now we get to a game that’s trying to call a function from DirectX, which in turn is trying to call something native to Windows. There’s no way Linux knows what to do with that.

    So a few people realized that if they reimplemented the functions from Windows on top of the equivalent functions on Linux, you could get those programs to run. They also realized that you can reimplement DirectX using OpenGL calls, or more recently Vulkan. Putting all of that together, almost every call a game is likely to make hits one of these reimplementations, which in turn calls the Linux kernel, which in turn runs the corresponding instructions on the CPU to do things the Linux way. The end result is that most things work; however, sometimes a game developer tries to be smart and makes assumptions about how the OS will do something, and then faces errors because Linux did something slightly different.

    But the VAST majority of the time, when a game doesn’t work it’s because the game developer is actively trying to ensure you’re not doing anything weird, such as running the game on a different OS.



  • Wow, that’s very unfortunate. If you installed docker through the package manager and added yourself to the group, I believe this to be self-imposed. I don’t know exactly which mechanism Docker uses to give users in the group access to its service, but it seems related to that, since it looks like the service is running but your user just can’t access it. To confirm it’s just that, run the compose command as root, i.e. sudo docker compose up; this is not ideal, but if it works you know it’s a permission problem with your user.
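
    If it is, a few quick things worth checking (assuming a systemd-based distro and the usual docker group setup):

    ```sh
    groups                         # is "docker" actually listed for your user? (log out/in or `newgrp docker` after adding it)
    sudo systemctl status docker   # is the daemon running at all?
    ls -l /var/run/docker.sock     # the socket is normally owned by root:docker
    sudo docker compose up         # if this works, it's purely a permissions issue for your user
    ```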

    You seem to know your way around Linux, so it’s probably not something obvious. I’m almost sure it’s something stupid and self-imposed; I’ve done my fair share of stupid shit, like leaving a config file malformed, deleting a library, or installing something by manually copying files only for something else to break because I overwrote something important.


  • I know you’ve probably heard this thousands of times, but really, if you’re into self-hosting, docker is a blessing. People make it harder than it needs to be when explaining all of the ins and outs. I assume you have a Linux box where you run your stuff; just install docker and docker compose there (you might need to enable the docker service, add your user to the docker group and reboot, unless you’re using a user-friendly distro like Ubuntu). Then just make a folder anywhere for Silverbullet, create a file named compose.yaml, and put the following text there:

    # services means that everything inside is a service to be deployed 
    services:
      # this is the name of the service, you can put whatever you want
      silverbullet:
        # this is the docker image to use
        image: zefhemel/silverbullet
        # this is the rule to restart in case of crashes
        restart: unless-stopped
        # these are environment variables you want defined
        environment:
          # this is a specific variable for Silverbullet, it's essentially username:password, change this accordingly
          - SB_USER=admin:admin
        # volumes are local folders you want to be available
        volumes:
          # in this case we want that the folder ./space be mounted as /space inside the container
          - ./space:/space
        # these are the ports we want to expose
        ports:
          # This means expose the container's port 3000 on host port 3000; if you want to access Silverbullet on port 8080 this would be 8080:3000 (because internally the service is still listening on 3000)
          - 3000:3000
    

    Then run docker compose up and you should be able to access it on port 3000.

    Long story short, docker compose looks for a file named compose.yaml in the current directory, and the file above has all of the information it needs to run the server. I’ve annotated each line there; feel free to remove the comments.