
  • Dark Souls’ implementation is something special. It censors your name based on the language settings you have in place at the time, but the voice-over dialogue remains in English. So change your system language to another language you know (or play it a few times so you know what things are), and then put the most offensive shit you like in as your character name.


  • When I was still dual-booting Windows and Linux, I found that “raw disk” mode virtual machines worked wonders. I used VirtualBox, so you’d want a guide somewhat like this: https://superuser.com/questions/495025/use-physical-harddisk-in-virtual-box - other VM solutions are available, which don’t require you to accept an agreement with Oracle.

    Essentially, rather than setting aside a file on disk as your VM’s disk, you can set aside a whole existing disk. That can be a disk that already has Windows installed on it; it doesn’t erase what you have. Then you can start Windows in a VM and let it do its updates - since it can’t see the bootloader from within the VM, it can’t fuck it up. You can run any software that doesn’t have particularly high graphics requirements, too (there’s a rough sketch of the one-off setup command at the end of this comment).

    I was also able to just “restart in Windows” if I wanted full performance for a game or something like that, but since Linux has gotten very good indeed at running games, that became less and less necessary until one day I just erased my Windows partition to recover the space.
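
    For reference, the VirtualBox side of that is a one-off command that writes a small .vmdk descriptor pointing at the physical disk, which you then attach to the VM like any other virtual disk. A rough sketch only - the disk path and filename are assumptions, so check lsblk for your actual Windows disk before running anything like this:

        import subprocess

        # Write a raw-disk VMDK descriptor that VirtualBox can use as the VM's hard disk.
        # "/dev/sdb" and the filename are placeholders for this example - substitute your
        # own Windows disk, and run with enough privileges to read it.
        subprocess.run(
            [
                "VBoxManage", "internalcommands", "createrawvmdk",
                "-filename", "windows-raw.vmdk",   # descriptor file to attach in VirtualBox
                "-rawdisk", "/dev/sdb",            # the whole physical disk, not a partition
            ],
            check=True,
        )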


  • It’s a simple alphabet for computing because most of the early developers of computing worked in it, and therefore it’s supported everywhere. If the Vikings had developed early computers then we could be using the 24 Futhark runes, wouldn’t have upper and lower case to worry about, and wouldn’t need to render curves in fonts because it’s all straight lines.

    But yeah, agreed. Very widely spoken. But don’t translate programming languages automatically; VBA does that for keywords and it’s an utter nightmare.


  • If you move past the ‘brute force’ method of solving into the ‘constraints’ level, it’s fairly easy to check whether there are multiple possible valid solutions. Using a programming language with a good sets implementation (Python!) makes this easy - for each cell, generate a set of all the values that could possibly go there. If there’s only one, fill it in and remove that value from all the sets in the same row/column/block. If there are no cells left that only take a unique value, choose the cell with the fewest possibilities and evaluate all of them, recursively (there’s a rough sketch below). Even a fairly dumb implementation will do the whole problem space in milliseconds. This is a very easy problem to parallelise, too, but it’s hardly worth it for 9x9 sudokus - maybe if you’re generating 16x16 or 25x25 ‘alphabet’ puzzles, but you’ll quickly generate problems beyond the ability of humans to solve.

    The method in the article for generating ‘difficult’ puzzles seems mighty inefficient to me - generate a valid solution, and then randomly remove numbers until the puzzle is no longer ‘unique’. That’s a very calculation-heavy way of doing it - you need to re-solve the whole puzzle at every step. A ‘unique’ sudoku must have at least 8 of the 9 digits present in the starting clues, because otherwise there would be at least two solutions, with the two missing digits swapped over. Preferring to remove numbers equal to values that you’ve already removed ought to get you to a hard puzzle faster?
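
    Something like this is what I mean - an untested sketch rather than the article’s code, with the grid as a dict mapping (row, col) to a digit and 0 for an empty cell:

        # Candidate digits for a cell: whatever isn't already used in its row, column or block.
        def candidates(grid, r, c):
            used = {grid[(r, j)] for j in range(9)} | {grid[(i, c)] for i in range(9)}
            br, bc = 3 * (r // 3), 3 * (c // 3)
            used |= {grid[(i, j)] for i in range(br, br + 3) for j in range(bc, bc + 3)}
            return set(range(1, 10)) - used

        # Fill the most constrained empty cell first, recursing over its candidates.
        def solve(grid):
            empties = [rc for rc in grid if grid[rc] == 0]
            if not empties:
                return grid
            r, c = min(empties, key=lambda rc: len(candidates(grid, *rc)))
            for value in candidates(grid, r, c):
                attempt = dict(grid)
                attempt[(r, c)] = value
                solved = solve(attempt)
                if solved is not None:
                    return solved
            return None  # dead end - some cell had no candidates left

        # Count solutions up to a limit - enough to test whether a puzzle is still 'unique'.
        def count_solutions(grid, limit=2):
            empties = [rc for rc in grid if grid[rc] == 0]
            if not empties:
                return 1
            r, c = min(empties, key=lambda rc: len(candidates(grid, *rc)))
            total = 0
            for value in candidates(grid, r, c):
                attempt = dict(grid)
                attempt[(r, c)] = value
                total += count_solutions(attempt, limit - total)
                if total >= limit:
                    break
            return total

    count_solutions(puzzle) == 1 is the “is it still unique?” test that the article’s generator has to re-run after every single removal, which is where all the work goes.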



  • The PS3 most certainly had a separate GPU - it was based on the GeForce 7800 GTX. Console GPUs tend to be a little faster than their desktop equivalents, as they share the same memory. Rather than the CPU having to send e.g. model updates across a bus to update what the GPU is going to draw in the next frame, it can change the values directly in the GPU memory. And of course, the CPU can read the GPU framebuffer and make tweaks to it - that’s incredibly slow on desktop PCs, but console games can do things like tone mapping whenever they like, and it’s been a big problem for the RPCS3 developers to make that kind of thing run quickly.

    The Cell cores are a bit more like the ‘tensor’ cores that you’d get on an AI accelerator than a full-blown CPU core. They can’t speak to the RAM directly, just exchange data between themselves - the main CPU has to copy data in and out of them, and schedule any jobs that must run on them; they can’t do it themselves. They’re also a lot more limited in what they can do than a main CPU core, but they are very, very fast at what they can do.

    If you are doing the kind of calculations where you’ve a small amount of data that needs a lot of repetitive maths done on it, they’re ideal. Bitcoin mining or crypto breaking, for instance - set them up, let them go, check in on them occasionally. The main CPU acts as an orchestrator, keeping all the Cell cores filled up with work to do and processing the end results (there’s a toy sketch of that pattern at the end of this comment). But if that’s not what you’re trying to do, then they’re borderline useless, and that’s a problem for the PS3, because most of its processing power is tied up in those cores.

    Some games have a somewhat predictable workload where offloading makes sense. Got some particle effects - some smoke where you need to do some complicated fluid-and-gravity simulations before copying the end result to the GPU? Maybe your main villain has a very dramatic cape that they like to twirl, and you need to run the simulation on that separately from everything else that you’re doing? Problem is, working out what you can and can’t offload is a massive pain in the ass; it requires a lot of developer time to optimise, when really you’d want the design team implementing that kind of thing; and slightly newer GPUs are a lot more programmable and can do the simpler versions of that kind of calculation both faster and much more in parallel.

    The Cell processor turned out to be an evolutionary dead end. The resources needed to work on it (expensive developer time) just didn’t really make sense for a gaming machine. The things that it was better at are things that it just wasn’t quite good enough at - modern GPUs are Bitcoin monsters, far exceeding what the Cell can do, and if you’re really serious about crypto breaking then you probably have your own ASICs. Lots of identical, fast CPU cores are what developers want to work on - they’re much easier to reason about.
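
    To be clear about what I mean by ‘orchestrator’, here’s a toy Python illustration of the pattern - nothing to do with the actual Cell SDK: the main process chops the work into small self-contained chunks, keeps a pool of workers fed, and gathers the results. On the PS3 the main core plays that role and the Cell cores are the workers, with the extra wrinkle that every chunk also has to fit in a core’s tiny local memory.

        from multiprocessing import Pool

        # Stand-in for the repetitive maths a Cell core would do on its chunk of data.
        def crunch(chunk):
            return sum(x * x for x in chunk)

        if __name__ == "__main__":
            data = list(range(1_000_000))
            # Small, self-contained chunks - on the real hardware each one would also
            # have to fit in a core's local store.
            chunks = [data[i:i + 10_000] for i in range(0, len(data), 10_000)]
            with Pool(processes=6) as pool:          # the PS3 exposed six such cores to games
                results = pool.map(crunch, chunks)   # orchestrator schedules work, collects results
            print(sum(results))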


  • Yes, because it doesn’t do as much to protect you from data corruption.

    If you have a use case where a barely-measurable increase in speed is essential, but not so essential that you wouldn’t just pay for more RAM to keep it in cache, and also it doesn’t matter if you get the wrong answer because you’ve not noticed the disk is failing, and you can afford to lose everything in the case of a power cut, then sure, use a legacy filesystem. Otherwise, use a modern one.



  • Yeah.

    There’s a couple of ways of looking at it; general-purpose computers generally implement ‘soft’ real-time functionality. It’s usually a requirement for music and video production: if you want to keep to a steady 60 fps, then you need to update the screen and the audio buffer absolutely every 16 ms. To achieve that, the AV thread runs at a higher priority than any other thread. The real-time scheduler doesn’t let a lower-priority thread run until every higher-priority thread is finished. Normally that means worse performance overall, and in some cases can softlock the system - if the AV thread gets stuck in a loop, your computer won’t even respond to keyboard input.

    Soft real-time is appropriate for when no-one will die if a timeslot is missed. A video stutter won’t kill you. Hard real-time is for things like industrial control. If the anti-lock brakes in your car are meant to evaluate your wheels one hundred times a second, then taking 11 ms to evaluate them is a complete system failure, even if the answer is correct. Note that it doesn’t matter if it gets the right answer in 1 ms or 9 ms, as long as it never ever takes more than 10 (there’s a toy loop illustrating this at the end of this comment). Hard real-time performance does not mean good performance, it means predictable performance.

    When we program up PLCs in industrial settings, for our ‘critical sections’ we’ll use processor interrupts, so that we know our code will absolutely run in time. We use specialised languages as well - no loops, no recursion - that don’t let you do things that can’t be checked for an upper time bound. Lots of finite state machines! But when we’re done, we know that we’ve got code that won’t miss a time slot in the next twenty years of operation.

    That does mean, ironically, that my old Amiga was a better music computer than my current desktop, despite being millions of times less powerful. OctaMED could take over the whole CPU whenever it liked. Whereas a modern desktop might always have to respond to a USB device or a hard drive, leading to a potential stutter at any time. Tiny probability, but not an acceptable one.
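
    A toy version of that hundred-times-a-second loop, just to illustrate ‘predictable, not fast’ - and note that running this in Python on a desktop OS is soft real-time at best, which is exactly the point above:

        import time

        PERIOD = 0.010  # the 10 ms slot from the ABS example

        def control_step():
            # Stand-in for reading the wheel sensors and updating the outputs.
            pass

        deadline = time.monotonic() + PERIOD
        while True:
            control_step()
            remaining = deadline - time.monotonic()
            if remaining < 0:
                # In a hard real-time system this is a complete failure, however rare.
                raise SystemExit("missed the 10 ms slot")
            time.sleep(remaining)  # finishing in 1 ms or 9 ms makes no difference...
            deadline += PERIOD     # ...as long as we never blow past 10 ms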


  • I used to work with a Greek guy called Argyros Argyros - cool guy, but I suspect he was an outlier. He was named after his dad, so certainly some people are named that way. Icelandic, for instance, would traditionally use “Given Name” “Patronym from father” - Magnus Magnusson was quite famous in the UK; Björk Guðmundsdóttir might be the most famous internationally, but she’s not a “double”. There are quite a few cultures - Hungarian, Chinese, Japanese, … - that write their names as “Family Name” “Given Name” as opposed to the other way around, if that’s what you mean?






  • Not all of the light would have been wasted on the wall. If your wall is painted green, then the ‘rest of the rainbow’ (red, orange, yellow, blue, violet wavelengths) would be absorbed and converted into heat. Paint is quite rough on a microscopic level, and the green light reflected would be scattered in every direction.

    Things that have a colour do so because they reflect those frequencies. Mirrors reflect pretty much all frequencies of visible light with very little scattering - that’s the definition of the word, really.

    If you had a black feature wall behind your lamp, such that very little was reflected off it into the rest of the room, then with a mirror there would be about twice as many photons illuminating the room; if your wall was pure brilliant white, much less of a difference (rough numbers at the end of this comment). Your eyes don’t perceive ‘twice the photons’ as ‘twice as bright’ - they scale from absorbing thousands a second when fully dark-adjusted at night, to trillions per second at midday - but you might find it a bit easier to e.g. read a book elsewhere in the room.

    Light output from the lamp doesn’t change, but depending on the colours of things in your room, the light output that is useful for seeing might do.
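
    To put rough numbers on it - the reflectances here are illustrative guesses, and I’m assuming half of the lamp’s output initially heads towards the wall:

        # Back-of-envelope only: fraction of the lamp's light that ends up lighting the
        # room, for different surfaces behind the lamp. Reflectance values are guesses.
        lamp_output = 1.0
        toward_wall = 0.5  # assumed fraction of the output that hits the wall first

        reflectance = {
            "matt black paint": 0.05,
            "green paint": 0.40,             # mostly just the green wavelengths come back
            "brilliant white paint": 0.85,
            "mirror": 0.95,
        }

        for surface, r in reflectance.items():
            useful = (1 - toward_wall) * lamp_output + toward_wall * lamp_output * r
            print(f"{surface:>22}: {useful:.2f} of the lamp's output lights the room")

    With those guesses the black wall gives you about half the lamp’s output and the mirror nearly all of it - that’s the ‘about twice’ - while white paint is already most of the way there.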


  • Really? If it’s a big enough treatment works to warrant a SCADA, then I doubt an automation engineer with the experience to set it all up would be asking this question, but here goes. You’ve a couple of obstacles:

    • every contract I’ve ever seen for industrial automation has either specified the control system they want directly, or has a list of approved suppliers which you must use. Someone after you will have to maintain this, and those maintainers will only accept the things that they have been trained on. Those things are Windows PCs running Windows software. They will reject anything else. The people running network security on those machines will have a very short list of acceptable operating systems for running SCADA systems. That list will be a couple of versions of Windows Server. They will also reject anything else.

    • that’s not nearly enough information to make a recommendation. Which PLCs? Allen Bradley, Siemens, Mitsubishi, …? I can’t think of a job I’ve ever been on where the local HMI hasn’t matched the PLCs. The SCADA software almost invariably matches the PLCs used in the main motor control centre, with perhaps a couple of oddball PLCs for proprietary panels and such like. Could maybe ask the supplier if they’ve a Linux alternative? Siemens will laugh at you and Mitsi won’t understand the question, but AB just might.

    Sorry - I’m a Linux evangelist, but I don’t think it’s a good fit here. SCADA performance generally isn’t bad because of Windows Server - it’s fine, it does what it’s intended to - but because e.g. STEP 7 is an appallingly slow and bloated piece of software which would bring a mainframe to its knees. Which is bizarre - the over-the-wire protocol connecting the machines is generally a short binary blob described in the PLC configuration - these bits are the drive statuses, these bits are an int or a float for an instrument readout (a made-up example is below) - and it shouldn’t be at all slow updating it all, but slow it is.
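
    For a sense of scale, a poll response might look something like the below - to be clear, the layout, field names and values are all made up for illustration, not any real PLC’s map:

        import struct

        # Hypothetical 8-byte poll response: 16 drive status bits, a float instrument
        # reading, and a 16-bit setpoint, big-endian. Invented for illustration only.
        raw = bytes.fromhex("0005" "41c80000" "0064")

        drive_bits, flow_reading, setpoint = struct.unpack(">Hfh", raw)

        running = bool(drive_bits & 0x0001)  # bit 0: drive running
        tripped = bool(drive_bits & 0x0004)  # bit 2: drive tripped
        print(running, tripped, flow_reading, setpoint)  # True True 25.0 100

    Eight bytes for three tags - there’s nothing in that to justify the sluggishness.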




  • There are, but it’s complicated. Doom (2016), for instance - it doesn’t handle the very large Vulkan swap chain that’s possible on some modern graphics cards, and crashes on start-up. Someone patched Proton around that time so that Doom would start; the patch was later reverted since it broke other games. Other games based off that engine - a couple of Wolfensteins, Doom Eternal - have the problem fixed in the binaries, and so run on up-to-date Proton, but depending on your hardware, only a few specific old versions of Proton will do for Doom.

    Regressions get fixed - that’s okay. Buggy behaviour which depended on regressions that got fixed - that’s a problem.