

  • Well yes, the LLMs are not the ones that actually generate the images. They basically act as a translator between the image generator and the human text input. Well, probably just the tokenizer. But that’s beside the point. Both LLMs and image generators are generative AI, and they have similar mechanisms. They both can create never-before-seen content by mixing things they have “seen”.

    I’m not claiming that they didn’t use CSAM to train their models. I’m just saying that this is not definitive proof of it.

    It’s like claiming that you’re a good mathematician because you can calculate 2+2. Good mathematicians can do that, but so can bad mathematicians.




  • The wine thing could prove me wrong if someone could answer my question.

    But I don’t think my theory is that wild. Image generators can interpolate, and that is a fact. You can ask one to make a bear with duck hands and it will do it. I’ve seen images on the internet of things similar to that, generated by these models.

    Who is to say interpolating nude children from regular children plus nude adults is too wild?

    Furthermore, you don’t need CSAM for photos of nude children.

    Children are nude at beaches all the time; there are probably many beach photos on the internet with nude children in the background. That would probably help the model.








  • One of the techniques I’ve seen is like a “password”. So for example, if you write the phrase “aunt bridge sold the orangutan potatoes” a lot, followed by a bunch of nonsense, then you’re likely the only source of that phrase. So the model learns that after that phrase, it has to write nonsense.

    I don’t see how this would be very useful, since then it wouldn’t say the phrase in the first place, so the poison wouldn’t be triggered.

    EDIT: maybe it could be like a building process. You also have to put “aunt bridge” together many times, then “bridge sold” and so on, so every time it writes “aunt”, it has a chance to fall into the next trap, until it reaches absolute nonsense.




  • I see you ignored my entire comment.

    I don’t know what is more explicit about expect. Unwrap is as explicit as it gets without directly calling panic!; it’s only one abstraction level away. It’s literally the same as expect, but without a string argument. It’s probably among the top 10 most commonly used functions in Rust; every Rust programmer knows what unwrap does.

    Any code reviewer should be able to see that unwrap and flag it as a potential issue. It’s not a weird function with an obscure panic side effect. It can only do two things: panic or not panic. It can be implemented in a single line, or three lines if the panic! is on a different line from the match arm.
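    To make the point concrete, here is a minimal sketch of an unwrap-equivalent function (not the real standard library source, just equivalent behavior): it either returns the value or panics, nothing else.

    ```rust
    // A hand-rolled unwrap: returns the contained value, or panics on None.
    // This mirrors what Option::unwrap does, minus the stdlib's error plumbing.
    fn my_unwrap<T>(opt: Option<T>) -> T {
        match opt {
            Some(v) => v,
            None => panic!("called `unwrap()` on a `None` value"),
        }
    }

    fn main() {
        let x = my_unwrap(Some(42));
        println!("{x}"); // prints 42
    }
    ```

    expect is the same match, except the None arm panics with the caller-supplied message instead of a fixed one.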


  • An unhandled error will always result in a panic (or a halt, I guess). You cannot continue the execution of the program without handling an error (remember, just ignoring it is a form of handling). You either handle the error and continue execution, or you don’t and stop execution.

    A panic is very far from a segfault. The apparent result is the same, but a panic is a controlled stop of the program’s execution, while a segfault is a forced stop imposed by the OS.

    But the OS can only know that it has to segfault if a program accesses memory outside its control.

    If the program accesses memory that is under its control, but out of bounds, then the program will not stop execution, and that is far worse.

    EDIT: As you said, it’s also an important difference that a panic will just stop the thread, not the entire process.
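    That thread-isolation point is easy to demonstrate: a panic in a spawned thread unwinds only that thread, and the parent observes it as an Err from join while continuing to run.

    ```rust
    use std::thread;

    fn main() {
        // Spawn a thread that panics; the panic unwinds that thread only.
        let handle = thread::spawn(|| {
            panic!("worker failed");
        });

        // join() returns Err when the spawned thread panicked;
        // the main thread keeps running normally.
        let result = handle.join();
        println!("worker panicked: {}", result.is_err());
    }
    ```

    (The panic message still goes to stderr, but the process itself survives; contrast that with a segfault, which the OS uses to kill the whole process.)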