• 0 Posts
  • 44 Comments
Joined 1 year ago
Cake day: July 5th, 2023


  • You say that it is sorted in order of most significance, so for a date it is more significant whether it happened in 1024, 2024, or 9024?

    Most significant to least significant digit has a strict mathematical definition, one that you don’t seem to be following, and it applies to all numbers, not just numerical representations of dates.

    And most importantly, the YYYY-MM-DD format is extensible into hh:mm:ss too, within the same schema, out to the level of precision appropriate for the context. I can identify a specific year when the month doesn’t matter, a specific month when the day doesn’t matter, a specific day when the hour doesn’t matter, and on down to minutes, seconds, and decimal portions of seconds to whatever precision I’d like.
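
    As a quick sketch of that property (hypothetical timestamps, Python standard library only): because every digit sits to the left of everything less significant than it, ISO 8601 strings at mixed precision still sort chronologically as plain text.

        # ISO 8601 timestamps at mixed precision, most significant digit first.
        stamps = ["2024-03-15T09:30", "2024-03", "2023-12-31", "2024", "2024-03-15"]

        # A plain lexicographic sort is also a chronological sort, because each
        # position is strictly more significant than the one to its right.
        print(sorted(stamps))
        # ['2023-12-31', '2024', '2024-03', '2024-03-15', '2024-03-15T09:30']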


  • Sometimes the identity of the messenger is important.

    Twitter was super easy to set up with the API to periodically tweet the output of some automated script: a weather forecast, a public safety alert, an air quality alert, a traffic advisory, a sports score, a news headline, etc.

    These are the types of messages that you’d want to subscribe to the actual identity, and maybe even be able to forward to others (aka retweeting) without compromising the identity verification inherent in the system.

    Twitter was an important service, and that’s why there are so many contenders trying to replace at least part of the experience.


  • This isn’t exactly what you asked, but our URI/URL schema is basically a bunch of missed opportunities, and I wish it was better designed.

    Ok so it starts off with the scheme name, which makes sense. http: or ftp: or even tel:

    But then it goes into the domain name system, which suffers from the problem that the root, then the top-level domain, then the domain, then progressively smaller subdomains go right to left. www.example.com requires the system to look up the root domain to see who manages the .com TLD, then who owns example.com, then the www subdomain. Then, if a port number needs to be specified, it goes after the domain name, right next to the implied root domain. Meanwhile the rest of the URL, by default, goes left to right in decreasing order of significance. It’s just a weird mismatch, and it would make a ton more sense if it were all left to right, including the domain name.
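
    As a toy illustration (hypothetical notation, not a real scheme), here’s a sketch that rewrites the host to read left to right in decreasing significance, the way Java package names already do:

        from urllib.parse import urlsplit

        def left_to_right(url: str) -> str:
            # Reverse the domain labels so the whole URL reads from most
            # significant (TLD) down to least significant (subdomain, then path).
            parts = urlsplit(url)
            host = ".".join(reversed((parts.hostname or "").split(".")))
            port = f":{parts.port}" if parts.port else ""
            return f"{parts.scheme}://{host}{port}{parts.path}"

        print(left_to_right("http://www.example.com:8080/docs/page.html"))
        # http://com.example.www:8080/docs/page.html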

    Then don’t get me started on how the www subdomain itself no longer makes sense. I get that the system was designed long before HTTP and the WWW took over the internet as basically the default, but if we had known that in advance, it would’ve made sense not to push www in front of all website domains throughout the ’90s and early 2000s.


    Your day-to-day use isn’t everyone else’s. We use times for a lot more than “I wonder what day it is today.” When it comes to recording events, or planning future events, pretty much everyone needs to include the year. And YYYY-MM-DD presents every digit exactly in order of significance, so the impact of getting something wrong by a single digit tracks that digit’s position.

    And no matter what, the first digit of a two-digit day or two-digit month is still more significant in a mathematical sense, even if you think you’re more likely to need the day or the month. The 15th of May is only one digit off from the 5th of May, but that first digit in a DD/MM format is more significant in a mathematical sense and less likely to change on a day-to-day basis.
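
    A quick sketch of why that ordering matters in practice (hypothetical dates, Python standard library only): DD/MM strings don’t sort chronologically as plain text, while YYYY-MM-DD strings do.

        # 31 January 2025 and 1 February 2025, in day-first format:
        dates_ddmm = ["31/01/2025", "01/02/2025"]
        print(sorted(dates_ddmm))
        # ['01/02/2025', '31/01/2025'], i.e. 1 Feb sorts before 31 Jan: wrong

        # The same two dates in YYYY-MM-DD sort correctly as text:
        dates_iso = ["2025-01-31", "2025-02-01"]
        print(sorted(dates_iso))
        # ['2025-01-31', '2025-02-01'], chronological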


  • Functionally speaking, I don’t see this as a significant issue.

    JPEG quality settings can run a pretty wide gamut, and obviously wouldn’t be immediately apparent without viewing the file and analyzing the metadata. But if we’re looking at metadata, JPEG XL reports that stuff, too.

    Of course, the metadata might only report the most recent conversion, but that’s still a problem with all image formats, where conversion between GIF/PNG/JPG, or even edits to JPGs, would likely create lots of artifacts even if the last step happens to be lossless.

    You’re right that we should ensure that the metadata accurately describes whether an image has ever been encoded in a lossy manner, though. It’s especially important for things like medical scans, where every pixel matters and needs to be trusted as coming from the sensor rather than being an artifact of the encoding process, to rule out some types of error. That’s why I’m hopeful that a full JXL-based workflow for those images will preserve the details when necessary, and give fewer opportunities for that type of silent/unknown loss of data to occur.


    • Existing JPEG files (which are the vast, vast majority of images currently on the web and in people’s own libraries/catalogs) can be losslessly compressed even further with zero loss of quality, as sketched below. This alone means there are benefits to adoption, if nothing else for archival and serving old stuff.
    • JPEG XL encoding and decoding is much, much faster than pretty much any other format.
    • The format works for both lossy and lossless compression, depending on the use case and need. Photographs can be encoded in a lossy way much more efficiently than JPEG and things like screenshots can be losslessly encoded more efficiently than PNG.
    • The format anticipates being useful for both screens and print. WebP, HEIF, and AVIF are all optimized for screen resolutions, and fall short at the truly high resolutions appropriate for print. The JPEG XL format isn’t ready to replace camera RAW files, but there’s room in the spec to accommodate that use case, too.

    It’s great and should be adopted everywhere, to replace every raster format from JPEG photographs to animated GIFs (or the more modern live photos format with full color depth in moving pictures) to PNGs to scanned TIFFs with zero compression/loss.
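
    A minimal sketch of that lossless recompression step, assuming the reference encoder/decoder (cjxl/djxl) are installed; recent versions of cjxl transcode an existing JPEG losslessly by default, keeping the data needed to reconstruct the original bitstream:

        import subprocess

        # Hypothetical filenames. By default, cjxl stores JPEG reconstruction
        # data, so this transcode loses nothing from the original file.
        subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)

        # Round trip: djxl can emit the original JPEG, bit for bit.
        subprocess.run(["djxl", "photo.jxl", "photo_restored.jpg"], check=True)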


  • I don’t think the First Amendment would ever require the government to host private speech. The rule is basically that if you host private speech, you can’t discriminate by viewpoint (and you’re limited in your ability to discriminate by content). Even so, you can always regulate time, place, and manner in a content-neutral way.

    The easiest way to do it is simply to adopt one of the suggestions from the linked article, and only permit government users and government servers to federate inbound, so that the government-hosted servers never have to host anything private, while still fulfilling the general purpose of publishing public government communications, for everyone else to host and republish on their own servers if they so choose.
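
    As a toy sketch of that inbound-only rule (hypothetical domain names and function, not any real ActivityPub implementation):

        # Hypothetical allowlist of government-run instances.
        GOV_INSTANCES = {"alerts.example.gov", "city.example.gov"}

        def should_store(author_domain: str) -> bool:
            # Accept and store inbound activities only when they originate from
            # another government server, so the instance never hosts private speech.
            return author_domain in GOV_INSTANCES

        print(should_store("alerts.example.gov"))   # True
        print(should_store("social.example.com"))   # False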


  • None of what I’m saying is unique to the mechanics of open source. It’s just that the open source ecosystem as it currently exists today has different attack surfaces than a closed source ecosystem.

    Governance models for a project are a very reasonable thing to consider when deciding whether to use a dependency for your application or library.

    At a certain point, though, that’s outsourced to trusting whoever someone else trusts. When I trust a specific distro (because I’m certainly not rolling my own distro), I’m trusting how they maintain their repos, as well as which packages they include by default. Then, each of those packages has dependencies, which in turn have dependencies. The nature of this kind of trust is that we select people one or two levels deep, and assume that they have vetted the dependencies another one or two levels, all the way down. The xz backdoor piggybacked on systemd’s use of liblzma to open a vulnerability in sshd, as compiled for certain distros.
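
    A toy model of that transitive trust (hypothetical package names), counting everything you implicitly trust when you install one package:

        # Hypothetical dependency graph: package -> direct dependencies.
        deps = {
            "my-app": ["distro-openssh"],
            "distro-openssh": ["libsystemd"],
            "libsystemd": ["liblzma"],
            "liblzma": [],
        }

        def transitive_deps(pkg: str) -> set:
            # Walk the graph; every package reached is a maintainer you trust.
            seen, stack = set(), [pkg]
            while stack:
                for dep in deps.get(stack.pop(), []):
                    if dep not in seen:
                        seen.add(dep)
                        stack.append(dep)
            return seen

        print(transitive_deps("my-app"))
        # {'distro-openssh', 'libsystemd', 'liblzma'}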

    You’re assuming that 100% of the source code used in a closed source project was developed by that company and according to the company’s governance model, which you assume is a good one.

    Not at all. I’m very aware that some prior hacks by very sophisticated, probably state-sponsored attackers have abused the chain of trust in proprietary software dependencies. Stuxnet relied on stolen private keys trusted by Windows for signing hardware drivers. The SolarWinds hack relied on compromising plugins trusted by Microsoft 365.

    But my broader point is that there are simply more independent actors in the open source ecosystem. If a vulnerability takes the form of a weakest link, where compromising any one of many independent links is enough to gain access, that broadly distributed ecosystem is more vulnerable. If a vulnerability requires chaining different things together, so that multiple parts of the ecosystem must be compromised at once, then distributing decision-making makes the ecosystem more robust. That’s the tradeoff I’m describing, and spreading things too thin introduces the first type of vulnerability.
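
    A back-of-the-envelope sketch of that tradeoff; the 1% per-link compromise probability is purely an assumption for illustration:

        p = 0.01  # assumed chance that any single link/maintainer is compromised

        def p_weakest_link(n: int) -> float:
            # Attacker only needs ANY one of n independent links to fall.
            return 1 - (1 - p) ** n

        def p_chained(k: int) -> float:
            # Attacker needs ALL k independent links to fall at once.
            return p ** k

        print(round(p_weakest_link(100), 2))  # 0.63: with many links, a breach is likely
        print(p_chained(3))                   # ~1e-06: chained compromise is rare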


  • In the broader context of that thread, I’m inclined to agree with you: the circumstances by which this particular vulnerability was discovered show that it took a decent amount of luck to catch it, and one can easily imagine circumstances where it would’ve slipped past the formal review processes applied to updates in these types of packages. And while it would be nice if the billion-dollar companies that rely on certain packages would provide financial support for the open source projects they use, the question remains how we should handle it when those corporations don’t. Do we front the cost ourselves, or just live with the knowledge that our security posture isn’t optimized for safety, because nobody will pay for that improvement?


  • GamingChairModel@lemmy.world to linuxmemes@lemmy.world · Backdoors

    100%.

    In many ways, distributed open source software gives more social attack surfaces, because the system itself is designed to be distributed, with a lot of people each handling a different responsibility. Almost every open source license includes an explicit disclaimer of warranty, with some language like this:

    THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.

    Well, bring together enough dependencies, and you’ll see that certain widely distributed software packages depend on the trust of dozens, if not hundreds, of independent maintainers.

    This particular xz vulnerability seems to have affected systemd and sshd, using a socially engineered attack on a weak point in the entire dependency chain. And this particular type of social engineering (maintainer burnout, looking for a volunteer to take over) seems to fit more directly into open source culture than closed source/corporate development culture.

    In the closed source world, there might be fewer places to probe for a weak link (socially or technically), which makes certain types of attacks more difficult. In other words, it might truly be the case that closed source software is less vulnerable to certain types of attacks, even if detection/audit/mitigation of those types of attacks is harder for closed source.

    It’s a tradeoff, not a free lunch. I still generally trust open source stuff more, but let’s not pretend it’s literally better in every way.


  • Good writeup.

    The use of ephemeral third-party accounts to “vouch” for the maintainer seems like one of those things that isn’t easy to catch in the moment (when an account is new, it’s hard to distinguish between a new account that will be used going forward versus an alt account created for just one purpose), but it does leave a paper trail for any later audit.

    I would think that Western state-sponsored hackers would be a little more careful about leaving that trail of crumbs, which becomes obvious in an after-the-fact investigation. So that would seem to weigh against Western governments being behind this.

    Also, the last bit, about the three names appearing to use three different romanization systems for three different Chinese dialects, is curious. If it is a mistake (and I don’t know enough about Chinese to know whether mixing three dialects in one name is completely implausible), it would suggest that the sponsors behind the attack aren’t that familiar with Chinese names, which weighs against the Chinese government being behind it.

    Interesting stuff, lots of unanswered questions still.