

They are the ones in charge of making sure bitbucket stops working every other Thursday.


I’m kinda glad they won’t. If I tell you how easy it is to install an ad blocker and you don’t install one, you’re just funding my leeching with the data and time that you claim have no value.
Yes of course! It’s one of the 3 changes of this version
Removing ICEs is always good work. They’re awful to run into, though I haven’t hit one myself yet.


Well yes, the LLMs are not the ones that actually generate the images. They basically act as a translator between the image generator and the human text input. Well, probably just the tokenizer. But that’s beside the point. Both LLMs and image generators are generative AI, and they have similar mechanisms. Both can create never-before-seen content by mixing things they have “seen”.
I’m not claiming that they didn’t use CSAM to train their models. I’m just saying this is not definitive proof of it.
It’s like claiming that you’re a good mathematician because you can calculate 2+2. Good mathematicians can do that, but so can bad mathematicians.


We have all been children, we all know the anatomical differences.
It’s not like children are alien; most differences are just “this is smaller and a slightly different shape in children”. Many of those differences can be seen on fully clothed children. And for the rest, there are non-CSAM images that happen to have nude children. As I said earlier, it is not uncommon for children to be fully nude at beaches.


What, you don’t think so?
Why does being a parent give any authority in this conversation?


The wine thing could prove me wrong if someone could answer my question.
But I don’t think my theory is that wild. LLMs can interpolate, and that is a fact. You can ask it to make a bear with duck hands and it will do it. I’ve seen images on the internet of things similar to that generated by LLMs.
Who is to say interpolating nude children from regular children+nude adults is too wild?
Furthermore, you don’t need CSAM for photos of nude children.
Children are nude at beaches all the time, there probably are many photos on the internet where there are nude children in the background of beach photos. That would probably help the LLM.


Did it have any full glasses of water? According to my theory, it has to have data for both “full” and “wine”.


Tbf it’s not needed. If it can draw children and it can draw nude adults, it can draw nude children.
Just like it doesn’t need to have trained on purple geese to draw one. It just needs to know how to draw purple things and how to draw geese.
Deref works on top of encapsulation though. Inheritance syntactically hides the encapsulation, with Deref it’s in the clear.
It’s true that it feels like inheritance, but I’m grateful for it. Otherwise it would be a pain to use the windows API.
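A minimal sketch of the point, with made-up types (`Window` and `BorderedWindow` are just for illustration): the delegation is written out in the `Deref` impl, so nothing is syntactically hidden the way it would be with inheritance:

```rust
use std::ops::Deref;

// Hypothetical wrapper types, named only for illustration.
struct Window {
    title: String,
}

impl Window {
    fn title(&self) -> &str {
        &self.title
    }
}

struct BorderedWindow {
    inner: Window,
    border_px: u32,
}

impl Deref for BorderedWindow {
    type Target = Window;

    // The delegation is spelled out right here, in the open.
    fn deref(&self) -> &Window {
        &self.inner
    }
}

fn main() {
    let w = BorderedWindow {
        inner: Window { title: "settings".to_string() },
        border_px: 2,
    };
    // Looks like inheritance, but it's just auto-deref to `self.inner`.
    println!("{} ({}px border)", w.title(), w.border_px);
}
```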


4K is noticeable on a standard PC.
I recently bought a 1440p screen (for productivity, not gaming) and I can fit so much more UI with the same visual fidelity compared to 1080p. Of course, the screen needs to be physically bigger in order for the text to be the same size.
So if 1080p->1440p is noticeable, 1080p->4K must be too.
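The raw pixel math backs this up (nothing assumed here beyond the standard resolutions):

```rust
fn main() {
    // Pixel counts for the resolutions mentioned above, relative to 1080p.
    let resolutions = [
        ("1080p", 1920u64, 1080u64),
        ("1440p", 2560, 1440),
        ("4K", 3840, 2160),
    ];
    let base = 1920u64 * 1080;
    for (name, w, h) in resolutions {
        let px = w * h;
        println!("{name}: {px} px ({:.2}x 1080p)", px as f64 / base as f64);
    }
}
```

1440p is already about 1.78x the pixels of 1080p; 4K is a full 4x.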


This is important to me. More than “time until login”, I’d prefer “time until queue”. I want to log in before walking away because I want to open certain programs. So if an OS lets me tell it “after you boot up, open these 3 programs” while it’s still booting, I would prefer it over one that only lets you open programs once it has fully booted.
And no, configuring so it opens the same programs at startup doesn’t count. I wanna choose every time I turn on the computer.


Someone at Microsoft probably needed an excuse for their pay increase.
“I rebuilt/had the idea to rebuild the taskbar” sounds a lot better to managers than “I maintained the taskbar”.


One of the techniques I’ve seen is like a “password”. So for example, if you write the phrase “aunt bridge sold the orangutan potatoes” a lot, followed by a bunch of nonsense, then you’re likely the only source of that phrase. So the model learns that after that phrase, it has to write nonsense.
I don’t see how this would be very useful, since then it wouldn’t say the phrase in the first place, so the poison wouldn’t be triggered.
EDIT: maybe it could be like a building process. You have to also put “aunt bridge” together many times, then “bridge sold” and so on, so every time it writes “aunt”, it has a chance to fall into the next trap, until it reaches absolute nonsense.
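A toy way to see the idea, using a bigram word model instead of a real LLM (the real attack targets neural training, so this is only an analogy; the phrase and nonsense words are made up):

```rust
use std::collections::HashMap;

// Toy bigram "model": count which word follows each word in the training
// text, then greedily emit the most frequent successor at every step.
fn most_likely_chain(text: &str, start: &str, steps: usize) -> String {
    let words: Vec<&str> = text.split_whitespace().collect();
    let mut bigrams: HashMap<&str, HashMap<&str, u32>> = HashMap::new();
    for pair in words.windows(2) {
        *bigrams.entry(pair[0]).or_default().entry(pair[1]).or_default() += 1;
    }

    let mut cur = start;
    let mut out = vec![cur.to_string()];
    for _ in 0..steps {
        // Pick the most common successor of the current word, if any.
        match bigrams.get(cur).and_then(|m| m.iter().max_by_key(|&(_, c)| *c)) {
            Some((next, _)) => {
                out.push(next.to_string());
                cur = *next;
            }
            None => break,
        }
    }
    out.join(" ")
}

fn main() {
    // Poisoned "training data": the trigger phrase always leads into garbage,
    // so step by step the model walks itself into the nonsense tail.
    let poisoned = "aunt bridge sold the orangutan potatoes xq zz xq zz";
    println!("{}", most_likely_chain(poisoned, "aunt", 6));
    // -> "aunt bridge sold the orangutan potatoes xq"
}
```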


The shape of the gap is almost the same as the peak in “other”. So that peak is probably “Windows but we messed up the data collection” or “some browser on Windows changed its user agent”.


How dare they collect data and display it in an accurate manner! They should just start by putting Linux at 50% and then move the lines a little bit.


I see you ignored my entire comment.
I don’t know what is more explicit about expect. Unwrap is as explicit as it gets without directly calling panic!; it’s only one abstraction level away. It’s literally the same as expect, but without a string argument. It’s probably among the 10 most commonly used functions in Rust; every Rust programmer knows what unwrap does.
Any code reviewer should be able to see that unwrap and flag it as a potential issue. It’s not a weird function with an obscure panic side effect. It can only do 2 things: panic or not panic, and it can be implemented in a single line (3 lines if the panic! is on a different line to the if statement).
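Roughly what that tiny implementation looks like, sketched on a toy `MaybeVal` type so it stands alone (the real std version on `Option` differs in details like the exact panic message and `#[track_caller]`):

```rust
// Toy re-implementation to show how small `unwrap` is.
enum MaybeVal<T> {
    Some(T),
    None,
}

impl<T> MaybeVal<T> {
    // `unwrap` is just `expect` without a caller-supplied message.
    fn unwrap(self) -> T {
        self.expect("called `unwrap()` on a `None` value")
    }

    // The whole thing is one branch: return the value or panic.
    fn expect(self, msg: &str) -> T {
        match self {
            MaybeVal::Some(v) => v,
            MaybeVal::None => panic!("{msg}"),
        }
    }
}

fn main() {
    let x = MaybeVal::Some(5).unwrap();
    println!("{x}"); // prints 5; `MaybeVal::<i32>::None.unwrap()` would panic
}
```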


An unhandled error will always result in a panic (or a halt, I guess). You cannot continue the execution of the program without handling an error (remember, just ignoring it is a form of handling). You either handle the error and continue execution, or you don’t and stop execution.
A panic is very far from a segfault. The apparent result is the same. However, a panic is a controlled stop of the program’s execution; a segfault is the OS forcibly stopping the process.
But the OS can only know that it has to segfault if a program accesses memory outside its control.
If the program accesses memory that is under its control but out of bounds, then the program will not stop execution, and that is way worse.
EDIT: As you said, it’s also an important difference that a panic will just stop the thread, not the entire process.
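A quick demonstration of both points, assuming nothing beyond std threading: the out-of-bounds access is caught by a bounds check and panics instead of touching bad memory, and the panic kills only the spawned thread:

```rust
use std::thread;

fn main() {
    // Indexing past the end of a Vec hits a bounds check and panics;
    // it never reads memory outside the allocation.
    let handle = thread::spawn(|| {
        let v: Vec<i32> = vec![];
        v[0]
    });

    // The panic unwinds only the spawned thread; join() reports it as
    // an Err and the rest of the process keeps running.
    match handle.join() {
        Ok(val) => println!("thread returned {val}"),
        Err(_) => println!("thread panicked, but the process is still running"),
    }
}
```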


A LOT of people out there just read whatever the Google AI tells them in their search, tho. They’ve been trained that the answer is always the 1st non-ad link on Google. And now the thing at the top of a Google search is their LLM answer.