• 0 Posts
  • 93 Comments
Joined 1 year ago
Cake day: August 4th, 2023




  • so do some folks use opp as “opponent”? Sure, that’s believable. But I feel fairly confident…

    Bro, it doesn’t even have the right number of P’s for your reasoning to make any sense.

    It comes from “opponent,” that’s why there are two P’s. It comes from video games/chess/card games/etc., where you refer to the person or persons you’re playing against as the “opponent”. It’s been happening for many years but has made its way into Gen Z slang.







  • That’s why Pixels and some others have a “smart charge” feature that will wait to charge your phone until just before your alarm time so that it will finish right before you take it off the charger.

    why am I going backwards to needing to babysit my phone when it’s charging, and why would anyone want to charge their phone when they want to be using it vs when they’re asleep?

    I honestly don’t understand why people have such trouble with this. I can throw my phone on a charger when I go to shower and it’s at 80 percent when I get out, and that’s enough for my day. I could leave it while I get dressed and eat or something and it’d be at 100 if I needed. I don’t need my phone 24 hours a day. And there are many points in my day where I’m not using my phone for an hour that I could spare to charge it. I don’t need to leave it burning away permanent battery capacity for hours and hours every night.
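    The “smart charge” feature mentioned above is simple to sketch: delay the start of charging so the battery hits 100% right at the alarm. This is a toy illustration, not any vendor’s actual implementation, and the charge-rate estimate is a made-up assumption:

    ```python
    from datetime import datetime, timedelta

    def smart_charge_start(alarm: datetime, current_pct: float,
                           pct_per_minute: float = 1.0) -> datetime:
        """Delay charging so the phone reaches 100% right at the alarm.

        pct_per_minute is a made-up estimate; real firmware would learn
        the device's actual charge curve.
        """
        minutes_needed = (100 - current_pct) / pct_per_minute
        return alarm - timedelta(minutes=minutes_needed)

    start = smart_charge_start(datetime(2024, 5, 1, 7, 0), current_pct=40)
    print(start)  # 2024-05-01 06:00:00 -> charging waits until an hour before the alarm
    ```

    The point is that the phone spends the night at a low state of charge instead of sitting at 100% for hours.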


  • Yes, the battery doesn’t charge to “dangerous - could explode” levels. But they very much do still charge to levels that are damaging to long term health/capacity of the battery.

    Yes, they tune the batteries so that 100% isn’t the absolute cap. But even with that accounted for, many batteries will be above values that would be considered good for the long term health of a lithium cell. 80 percent on most phones is still very much at levels that are considered damaging to lithium batteries.

    To put it another way, the higher you charge a lithium battery, the more stress you put on it. The more stress you put on it, the fewer charge cycles those components will hold. It’s not like there’s a “magic number” at 80 percent, it’s just that the higher you go the worse it is. Yes, some manufacturers have tweaked charge curves to be more reasonable. But they’ve also increased limits. Many batteries now charge substantially higher than most people would consider sustainable.

    And after such changes, 80% lands pretty close to the general recommendations for improved battery longevity. Every percent will help, but it’s not a hard and fast rule.

    Calibrations have gotten a little better in some ways, but all you have to do is look at basic recommendations from battery experts and look at your phone’s battery voltage to see that almost every manufacturer is pushing well past the typical recommendations at 90 or even 85 percent.
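    To make the “no magic number at 80 percent” point concrete, here’s a toy model of the idea: wear per cycle rises smoothly with the charge ceiling rather than jumping at some threshold. The numbers below are made-up illustrative figures, not lab measurements:

    ```python
    # Illustrative only: relative wear grows smoothly with the charge ceiling.
    # The multipliers are made-up ballpark figures, not measured data.
    def relative_wear(ceiling_pct: int) -> float:
        """Toy model: per-cycle wear grows faster the closer you sit to 100%."""
        return 1.0 + ((ceiling_pct - 60) / 40) ** 2 if ceiling_pct > 60 else 1.0

    for pct in (60, 80, 90, 100):
        print(pct, round(relative_wear(pct), 2))
    ```

    Any curve like this makes the same point: 80% is just a convenient spot on a slope, and every percent lower helps a little more.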



  • Can’t answer the rest of your question because I don’t use a OnePlus, but:

    aren’t you supposed to charge the phone overnight?

    No, you aren’t “supposed” to charge your phone overnight. Leaving your phone on the charger at 100% is actually pretty bad for long-term battery health. Hence why the notification exists in the first place. Modern phones also fully charge in like an hour, so overnight charging leaves your phone in that state for many hours.

    The longer story is it’s actually best to stop charging your phone at 80 percent unless you really need the extra juice, because any time your phone spends above that is potentially damaging, but that tends to be hard to deal with for most people.

    Most of the phones I’ve seen with this feature have a “battery warning” or “charge notification” or “protect battery” type setting somewhere you can turn off. But again, I’ve never used a OnePlus so Idk if they do or where it is.
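    Those “protect battery” style settings boil down to very simple logic: refuse further charge once a ceiling is hit. A hypothetical sketch (setting names and the exact limit vary by manufacturer, and real phones do this in the charging controller, not app code):

    ```python
    def should_keep_charging(level_pct: int, protect_battery: bool,
                             limit_pct: int = 80) -> bool:
        """Stop accepting charge at the ceiling when battery protection is on.

        limit_pct=80 mirrors the common longevity recommendation; it's an
        assumed default here, not any specific vendor's value.
        """
        if protect_battery and level_pct >= limit_pct:
            return False
        return level_pct < 100

    print(should_keep_charging(80, protect_battery=True))   # False: capped at 80%
    print(should_keep_charging(80, protect_battery=False))  # True: charges on to 100%
    ```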



  • You seem to think I’m just talking about linearly expanding the vocabulary of the model, I’m talking about giving it an entirely new paradigm through which to work.

    No, I don’t. I know exactly what you’re trying to say. But you’re basically talking about trying to make a car fly. That’s not how it was built, and its goals and foundations are entirely different. You’re better off starting over and building a plane. Your proposal just doesn’t fit within the paradigms of what was built and makes no sense.

    I’m talking building in entirely new ways for the AI to understand.

    Exactly. But the AI doesn’t “understand” anything. In order to achieve this, you need to build something that “understands” things. LLMs don’t understand anything.

    Anyway, this is why no one likes pedants. If you want to actually engage in conversation, sure.

    It’s easy to label me as a pedant, but I’m explaining how this stuff works. You clearly have no idea, admitted yourself that you don’t understand, and then keep going. You just keep spewing the same shit, but the shit you’re spewing makes no sense. But you refuse to budge or engage in conversation here.

    You’re just talking out of your ass. You’re admittedly uneducated but want to be treated like you’re educated and make any sense. You don’t. This is why people hate people pretending to be experts and talking about things they don’t understand. It’s a waste of time.

    If you want to keep living in some imaginary world where this can be done, be my guest, but it’s fake. That’s not how this shit works. Enjoy your imaginary quest though.


  • I didn’t say any researcher or anything had named it intelligence. Nor am I trying to be semantically correct.

    Read the guy’s comments. He’s trying to push the idea that we can “change” its “understanding” about the things it’s discussing. He is one of the people who has fallen for the tech bros convincing people it’s intelligent. I’m not fighting semantics, I’m trying to explain to him that it’s not intelligent. Because he himself clearly doesn’t understand that.


  • I don’t see any reason these kinds of relationships can’t be integrated into generative AI, they just HAVEN’T yet

    No, it’s just fucking pointless. You’re talking about adding sand to a beach. These things are way more complicated and trying to shovel these things in just makes a mess. See literally the OP.

    each time you increase how the relationships interact, you’re also drastically increasing the size and complexity of the algorithm and model.

    No, you’re not. Not even fucking close. You clearly don’t understand this at all.

    The ALGORITHM will always be the same. Except for new generations of these bots. Claiming adding things like racial bias is going to alter the algorithm is just nonsensical.

    The MODEL is the huge fucking corpus of internet data. Anything you tack onto it is a drop in an ocean. It’s not steering anything.

    Whats changing is they’re editing inputs because that’s all you can really do to shift where these things go. Other changes would turn this into a very different beast, and can’t be done at the fine grained level like “race”.

    Claiming this has any significant impact on the size or complexity of any of this is just total hogwash, and you must not understand how these work or how big they are.
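    “Editing inputs” in the sense above is literally just prepending text to the prompt before it reaches the frozen model. A sketch with a hypothetical model callable (the prefix wording is invented for illustration, not any vendor’s actual system prompt):

    ```python
    def generate(model, user_prompt: str) -> str:
        # The "tweak" happens entirely on the input side: the model's
        # weights and architecture are untouched.
        system_prefix = ("Depict people of diverse backgrounds "
                         "unless a specific person is requested.\n")
        return model(system_prefix + user_prompt)  # same frozen model, edited input

    echo_model = lambda text: text  # stand-in for a real LLM
    print(generate(echo_model, "draw a scientist"))
    ```

    That’s why this kind of steering is coarse: you can nudge what goes in, but you can’t reach inside the model and edit a concept like “race” directly.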


  • Ottomateeverything@lemmy.world to Lemmy Shitpost@lemmy.world · AI or DEI?

    You’re just rephrasing the same approach, over, and over, and over. It’s like you’re not even reading what I’m saying.

    The answer is no. This is not a feasible approach. LLMs are just parrots and they don’t understand anything. They were essentially a “shortcut” that gets something that acts intelligent without actually having to build something intelligent. You’re not going to convince it to be intelligent. You’re not going to solve all its shortcomings by shoehorning something in. It’s just more work than building actual intelligence.

    It’s like if a coastal town got overrun by flooding from a hurricane. And some guy shows up and is like “hey, I’ve got a bucket, I’ll just haul all the water back to the sea”. And I’m like “that’s infeasible, we need a different solution, your bucket even has fucking holes in it”. And you’re over here saying “well, what if we got some duct tape? And then we can patch the holes. And then we can call our friends, and we can all bucket the water”.

    It’s just not happening.

    Eh I really need to learn more about AI to understand the limits

    Yeah. This. You just keep repeating the same approach over and over without understanding or listening to the basic failings of these chat bots. It’s just not happening. You’re just perpetuating nonsense.

    These things are basically slightly more complicated versions of the autocomplete in your phone keyboard, except that they’re fed huge amounts of the internet. They get really good at parroting sentences, but they have no sense of “intelligence” or what they’re actually doing. You’re better off trying to convince your autocorrect to sound like Shakespeare than you are to remove failings like racial bias from things like Gemini and ChatGPT. You can chip at small corners here and there, but this is just not the path forward.
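    The autocomplete comparison can be made literal. Here’s the crudest possible next-word predictor, a bigram counter: it just tallies which word followed which. There is no understanding anywhere in it, yet it produces plausible continuations; LLMs are (very roughly) this idea scaled up enormously:

    ```python
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    # Count which word follows which -- a bigram model, the crudest autocomplete.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict(word: str) -> str:
        """Return the most common next word: pure statistics, zero understanding."""
        return following[word].most_common(1)[0][0]

    print(predict("the"))  # cat
    ```

    Nothing in those counts “knows” what a cat is, which is the whole point: scaling the statistics up doesn’t add a place where understanding lives.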


  • I don’t know, maybe that would work, for this one particular problem. My point is it’s more than that. Even if you go through the trouble of fixing this one particular issue with LLMs, there are literally thousands of other problems to solve before it’s all “fixed”. At some point, when you’ve built and maintained thousands of workarounds, they start conflicting with each other and making a giant spider web of issues to juggle.

    And so you’re right back at the problem that you were trying to solve by building the LLM in the first place. This approach is just futile and nonsensical.