Have you used Facebook in the last 5 years?
The UX is godawful. More than half my feed is just random crap suggestions and ads.
Installing Linux after Windows should be fine without disconnecting drives.
The reverse is troublesome. Microsoft’s installer is all too happy to shit on your drives, even the ones you’re not using for installation. But Linux installers are much more friendly to dual-booting and all kinds of complex setups.
I explained why that data does not contradict what the previous commenter was trying to say.
Speed is less of a factor than endurance in a persistence-hunting scenario where we’re much slower than our prey anyway.
I don’t know the facts for this specific claim, but the logic is fair. One group can be better suited for endurance without being faster. One group could also be faster on average without having the individual fastest performers. Not only because of cultural factors, but also because the distribution curves might have different shapes for men vs women. There could be greater outliers (top performers) among men even if the average is higher among women in general. It’s not necessarily as straightforward as, say, height, where men’s distribution curve is almost the same shape as women’s, just shifted up a few inches.
I don’t have the data to draw any real conclusions, though.
One of the problems looking at athletic records is that it’s really just the elite among a self-selected group of enthusiasts, which doesn’t tell us a whole lot about what might have been the norm 100,000 years ago, or what might be the norm today if all else were equal between genders. These are not controlled trials.
I’ve read that the top women outperform the top men in long-distance open-water swimming, supposedly due in part to higher body fat making women more buoyant, helping to regulate body temperature, and providing fuel. This is the first time I’ve read that women might have an advantage in running, though.
I wish the article provided citations. The reality is probably too complex to fit into a headline or pop-sci writeup.
Same on macOS. Apple has “case-sensitive HFS+” as an option for UNIX compatibility (or at least they used to) but actually running a system on it is a bad idea in general.
Haven’t heard of Hiren’s BootCD in like 15 years. Good to see it’s still around!
Yeah, I had to disconnect all my SATA HDs to stop the Windows installer from shitting all over them.
I’d be worried about Windows updates doing the same thing now, after the recent glitch that broke bootloaders.
F-Droid link for the lazy: https://f-droid.org/packages/com.junkfood.seal/
Definitely going to check this out. I’ve been using yt-dlp via command line in Termux but that experience is less than ideal.
It was bought out and cleaned up a few years ago. It’s legit again now, though I don’t think it’ll ever really recover from that fiasco.
Totally agree. Their product line was an absolute mess back then. Their current lineup is getting a little bloated too. I don’t know why they bother having two laptop product lines anymore when they are so similar.
Apple tried to allow clones, but ran into the same problem because the clone makers could make cheaper machines by slapping together parts.
Yeah, this is exactly what happened, although some of the clone brands were perfectly high-quality (Power Computing in particular made great machines, usually the fastest on the market). In the Mac community at the time, a lot of people (myself included) wished Apple would just exit the hardware business and focus on what they were good at: software.
Then Steve Jobs came back and did exactly the opposite of that. First order of business was to kill cloning. Then came the iPod.
To be fair, the next generation of Power Macs after that were about half the price of the previous gen.
Most of Apple’s history, actually.
Macs have a reputation for being expensive because people compare the cheapest Mac to the cheapest PC, or to a custom-built PC. That’s reasonable if the cheapest PC meets your needs or if you’re into building your own PC, but if you compare a similarly-equipped name-brand PC, the numbers shift a LOT.
From the G3-G5 era ('97-2006) through most of the Intel era (2006-2020), if you went to Dell or HP and configured a machine to match Apple’s specs as closely as possible, you’d find the Macs were almost never much more expensive, and often cheaper. I say this as someone who routinely did such comparisons as part of my job. There were some notable exceptions, like most of the Intel MacBook Air models (they ranged from “okay” to “so bad it feels like a personal insult”), but that was never the rule. Even in the early-mid 90s, while Apple’s own hardware was grossly overpriced, you could buy Mac clones for much cheaper (clones were licensed third parties who made Macs, and they were far and away the best value in the pre-G3 PowerPC era).
Macs also historically have a lower total cost of ownership, factoring in lifespan (cheap PCs fail frequently), support costs, etc. One of the most recent and extensive analyses of this I know of comes from IBM. See https://www.computerworld.com/article/1666267/ibm-mac-users-are-happier-and-more-productive.html
Toward the tail end of the Intel era, let’s say around 2016-2020, Apple put out some real garbage. e.g. butterfly keyboards and the aforementioned craptastic Airs. But historically those are the exceptions, not the rule.
As for the “does more”, well, that’s debatable. Considering this is using Apple’s 90s logo, I think it’s pretty fair. Compare System 7 (released in '91) to Windows 3.1 (released in '92), and there is no contest. Windows was shit. This was generally true up until the 2000s, when the first few versions of OS X were half-baked and Apple was only just exiting its “beleaguered” period, and the mainstream press kept ringing the death knell. Windows lagged behind its competition by at least a few years up until Microsoft successfully killed or sufficiently hampered all that competition. I don’t think you can make an honest argument in favor of Windows compared to any of its contemporaries in the 90s (e.g. Macintosh, OS/2, BeOS) that doesn’t boil down to “we’re used to it” or “we’re locked in”.
Chromium itself will. Other Chromium-based browser vendors have confirmed that they will maintain v2 support for as long as they can. So perhaps try something like Vivaldi. I haven’t tried PWAs in Vivaldi myself, but it supports them according to the docs.
Debian still supports Pentium IIs. They axed support for the i586 architecture (original Pentium) a few years back, but Debian 12 (current stable, AKA Bookworm) still supports i686 chips like the P2.
Not sure how the rest of the hardware in that Compaq will work.
See: https://www.debian.org/releases/stable/i386/ch02s01.en.html
Probably ~15TB through file-level syncing tools (rsync or similar; I forget exactly what I used), just copying my internal RAID array to an external HDD. I’ve done this a few times, either for backup purposes or to prepare to reformat my array. I originally used ZFS on the array, but converted it to something with built-in kernel support a while back because it got troublesome when switching distros. Might switch it to bcachefs at some point.
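For anyone curious, the core idea of file-level syncing can be sketched in Python. This is a toy stand-in, not what I actually ran; a real tool like rsync also handles permissions, hard links, deletions, and partial transfers far better:

```python
import pathlib
import shutil

def sync_tree(src: pathlib.Path, dst: pathlib.Path) -> None:
    """Crude rsync-like one-way sync: copy files from src to dst,
    skipping files whose size and mtime already match."""
    for path in src.rglob("*"):
        target = dst / path.relative_to(src)
        if path.is_dir():
            target.mkdir(parents=True, exist_ok=True)
        elif path.is_file():
            if target.exists():
                s, t = path.stat(), target.stat()
                if s.st_size == t.st_size and int(s.st_mtime) == int(t.st_mtime):
                    continue  # unchanged, skip the copy
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, target)  # copy2 preserves mtime
```

The size-plus-mtime check is the same cheap heuristic rsync uses by default before falling back to checksums.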
With dd specifically, maybe 1TB? I’ve used it to temporarily back up my boot drive on occasion, on the assumption that restoring my entire system that way would be simpler in case whatever I was planning blew up in my face. Fortunately never needed to restore it that way.
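By contrast, dd is a block-level copy: it has no idea what a file is, it just shovels fixed-size chunks from one device to another. A toy Python equivalent (with regular file paths standing in for device nodes like /dev/sda):

```python
def raw_copy(src: str, dst: str, block_size: int = 4 * 1024 * 1024) -> int:
    """Copy src to dst in fixed-size blocks, roughly like `dd bs=4M`.
    Returns the number of bytes copied."""
    copied = 0
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while chunk := fin.read(block_size):
            fout.write(chunk)
            copied += len(chunk)
    return copied
```

This is why dd backups of a boot drive include the partition table and bootloader too, and why they take as long as the drive is big, regardless of how full it is.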
Hopefully they have better defenses against legal action from Nvidia than ZLUDA did.
In the past, re-implementing APIs has been deemed fair use in court (for example, Oracle v Google a few years back). I’m not entirely sure why ZLUDA was taken down; maybe just to avoid the trouble of a legal battle, even if they could win. I’m not a lawyer so I can only guess.
Validity aside, I expect Nvidia will try to throw their weight around.
It’s worth mentioning that with a large generational gap, the newer low-end CPU will often outperform the older high-end. An i3-1115G4 (11th gen) should outperform an i7-4790 (4th gen), at least in single-core performance. And it’ll do it while using a lot less power.
I don’t think there’s any way to count years without rooting it somewhere arbitrary. We cannot calculate the age of the planet, the sun, or the universe to the accuracy of a year (much less a second or nanosecond). We cannot define what “modern man” is to a meaningful level of accuracy, either, or pin down the age of historical artifacts.
Most computers use a system called “epoch time” or “UNIX time”, which counts the seconds from January 1, 1970. Converting this into a human-friendly date representation is surprisingly non-trivial, since the human timekeeping systems in common use are messy and not rooted in hard math or in the scientific definition of a second, which was only standardized in 1967.
Tom Scott has an amusing video about this: https://www.youtube.com/watch?v=-5wpm-gesOY
There is also International Atomic Time, which, like Unix Time, counts seconds from an arbitrary date that aligns with the Gregorian calendar. Atomic Time is rooted at the beginning of 1958.
ISO 8601 also aligns with the Gregorian calendar, but only as far back as 1582. The official standard does not allow expressing dates before that without explicit agreement of definitions by both parties. Go figure.
The core problem here is that a year, as defined by Earth’s revolution around the sun, is not consistent across broad time periods. The length of a day changes, as well. Humans all around the world have traditionally tracked time by looking at the sun and the moon, which simply do not give us the precision and consistency we need over long time periods. So it’s really difficult to make a system that is simple, logical, and also aligns with everyday usage going back centuries. And I don’t think it is possible to find any zero point that is truly meaningful and independent of wishy-washy human culture.
Interesting. I’m not sure that’s a Lemmy thing per se, maybe specific to your client, or some extension or something altering CSS?
I just checked in my browser’s inspector, and the italicized text’s <em> tag has the same calculated font setting as the main comment’s <div> tag.
FWIW, I’m using Firefox with my instance’s default Lemmy web UI.
There’s one called Redox that is entirely written in Rust. Still in fairly early stages, though. https://www.redox-os.org/