0 Posts • 23 Comments
Joined 1 year ago · Cake day: September 27th, 2023




  • My bad, I wasn’t precise enough with what I wanted to say. Of course you can confirm (with astronomically high likelihood) that a screenshot of AI Overview is genuine if you get the same result with the same prompt.

    What you can’t really do is prove the negative. If someone gets an output, replicating their prompt won’t necessarily give you the same output, for a multitude of reasons: Google might take everything else it knows about you into account, Google might have tweaked something in the last few minutes, the stochasticity of the model might lead to a different output, etc.

    Also, funny you bring up image generation, where this actually works too in some cases. For example, researchers have run the same prompt with multiple different seeds, and if there’s a cluster of very similar output images, you can surmise that an image looking very close to that was in the training set.
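
    A rough sketch of that idea (my own illustration, not the original study’s code; the model name, prompt, and distance threshold are placeholders): generate the same prompt under many seeds and flag seeds whose outputs cluster tightly.

```python
import numpy as np
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a prompt suspected of reproducing a training image"  # placeholder
thumbs = []
for seed in range(16):
    gen = torch.Generator("cuda").manual_seed(seed)
    img = pipe(prompt, generator=gen).images[0]
    # downscale so the comparison is cheap and tolerant of small differences
    thumbs.append(np.asarray(img.resize((64, 64)), dtype=np.float32) / 255.0)

# pairwise L2 distances between thumbnails; a cluster of near-zero distances
# suggests the model emits roughly the same image regardless of seed
t = np.stack([x.ravel() for x in thumbs])
d = np.linalg.norm(t[:, None] - t[None, :], axis=-1)
close = (d < 1.0).sum(axis=1) - 1  # neighbors under an ad-hoc threshold
print("seeds with several near-duplicates:", np.nonzero(close >= 3)[0])
```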




  • Mirodir@discuss.tchncs.de to Lemmy Shitpost@lemmy.world · Automation · 3 months ago · +14/−1

    So are the example with the dogs/wolves and the example in the OP.

    As to how hard they are to resolve: the dogs/wolves one might be quite difficult, but for the example in the OP it wouldn’t be hard to feed in all training images composited onto randomly chosen backgrounds, removing the model’s ability to draw any conclusions from the background.
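
    A minimal sketch of that kind of augmentation, assuming each training image comes with a foreground mask (e.g. from segmentation); the names here are hypothetical:

```python
import numpy as np

def randomize_background(image: np.ndarray, fg_mask: np.ndarray,
                         backgrounds: list[np.ndarray],
                         rng: np.random.Generator) -> np.ndarray:
    """Composite the masked subject onto a randomly chosen background,
    so the model can't learn anything from the original backdrop."""
    bg = backgrounds[rng.integers(len(backgrounds))]  # same HxWx3 shape assumed
    alpha = fg_mask[..., None].astype(image.dtype)    # broadcast over channels
    return image * alpha + bg * (1 - alpha)
```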

    However, this would probably unearth the next issue: the human graders who were presumably used to label the original training dataset have their own biases based on race, gender, appearance, etc. This doesn’t necessarily mean they were racist/sexist/etc., just that they may struggle to detect certain emotions in certain groups of people. The model would then replicate those issues.


  • It’s even simpler: in a strictly increasing series, element n is always higher than the average taken from any earlier element up through element n, because every element before n in that range is smaller than element n.

    Or in other words: if the number of calls increases every day, the latest day will always be above average, no matter the window used. With slightly larger windows you can even have some local decreases and it will still hold, as long as the overall trend is increasing (you’ve demonstrated the extreme case of this).
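
    A quick sanity check of the claim (the call counts are made up, just strictly increasing):

```python
calls = [3, 5, 8, 13, 21, 34]  # hypothetical daily call counts, strictly increasing

for window in (2, 3, 5):
    for i in range(window, len(calls) + 1):
        avg = sum(calls[i - window:i]) / window
        # the last day of every window beats that window's average
        assert calls[i - 1] > avg
print("the latest value exceeds every trailing-window average")
```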






  • Not every meal in a “$x/plate” restaurant is gonna cost the same though. At many restaurants it’s not hard to find a disparity of around a factor of 2 between the cheapest and most expensive reasonable meal (of similar size).

    Why is the server getting twice the tip if I order the most expensive plate and dessert vs. the cheapest plate and dessert? E.g. at a 20% tip, a $20 plate tips $4 and a $40 plate tips $8, for the same amount of work.




  • That was a response I got from ChatGPT with the following prompt:

    Please write a one sentence answer someone would write on a forum in a response to the following two posts:
    post 1: “You sure? If it’s another bot at the other end, yeah, but a real person, you recognize ChatGPT in 2 sentences.”
    post 2: “I was going to disagree with you by using AI to generate my response, but the generated response was easily recognizable as non-human. You may be onto something lol”

    It does indeed have an AI vibe, but I’ve seen scammers fall for more obvious pranks than this one, so I think it’d be good enough. I hope it fooled at least a minority of people for a second or made them do a double take.



  • This exact image (without the caption-header of course) was on one of the slides for one of the machine-learning-related courses at my college, so it’s definitely out there somewhere and was likely part of the training sets used by OpenAI. Also, the image in those slides has a different watermark at the bottom left, so it’s fair to assume it’s made the rounds.

    Contrary to this post, it was used as an example of a problem that machine learning can solve far better than any algorithm humans would come up with.


  • “I’m not really sure how to describe it other than when I read a function to determine what it does then go to the next part of the code I’ve already forgotten how the function transforms the data”

    This sounds to me like you could benefit from mentally applying the information-hiding principle to your functions. In other words: outside of the function, the only things that matter are “what goes in?” and “what comes out?”. The implementation details should not be important once you’re working on code outside of that function.

    To achieve this, maybe you could write a short comment right at the start of every function: one to two sentences detailing only the inputs/outputs of that function, e.g. “Accepts an image and a color and returns a mask that shows where that color is present.” If you later forget what the function does, all you need to do is read that one sentence. If it’s too convoluted to express in one or two sentences, your function is likely trying to do too much at once and could (arguably “should”) be split up.
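
    As a minimal sketch of what that could look like (the function name and the NumPy implementation are my own, using the example sentence above as the docstring):

```python
import numpy as np

def color_mask(image: np.ndarray, color: tuple[int, int, int]) -> np.ndarray:
    """Accepts an image and a color and returns a mask that shows where
    that color is present."""
    # compare every pixel's RGB values against the target color
    return np.all(image == np.asarray(color), axis=-1)
```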

    Also, on a different note: don’t sell your ability to “cludge something together” short. If you ever plan to do this professionally or educationally, you will sadly but inevitably run into situations where you have no choice but to deliver a quick and dirty solution over a clean and well-thought-out one.

    Edit: typos