

Microsoft is killing itself with shit vibe code.


I really do not want an AI doing all of those things for me. That would be like giving a four-year-old full access to my computer.


Since they renamed Microsoft Office to Copilot, they have millions of Copilot users. It’s almost as popular as Google+ was when Google made all Gmail users Google+ users.
You want magnets that powerful next to your face?
That is the scariest guitar I’ve ever seen.


Remember to back up everything before resizing your partitions. It’s so easy to lose all your data when you do that.


Feel free to try. Here’s the library I use: https://nymph.io/
It’s open source, and all the docs and code are available at that link and on GitHub. I always ask it to make a note entity, which is just incredibly simple. Basically the same thing as the ToDo example.
The reason I use this library (other than that I wrote it, so I know it really well) is that it isn’t widely known and there aren’t many example projects of it on GitHub, so the LLM has to actually read and understand the docs and code in order to use it properly. For something like React, there are a million examples online, so for basic things the LLM isn’t really understanding anything; it’s just producing something similar to its training data. That’s not how actual high-level programming works, so making it follow an API it isn’t already trained on is a good way to test whether it’s anywhere near the abilities of an actual entry-level SWE.
I just tested it again and it made 9 mistakes. I had to explain each mistake and what it should be before it finally gave me code that would work. It’s not good code, but it would at least work. It would make a mistake, I would tell it how to fix it, then it would make a new mistake. And keep in mind, this was for a very simple entity definition.
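For reference, here’s roughly the shape of the note entity I ask for, in the same spirit as the ToDo example. This is a simplified sketch, so treat the exact imports, statics, and the $data accessor as things to verify against the docs and examples at https://nymph.io/ rather than as the canonical API:

```typescript
// A minimal "Note" entity sketch, modeled loosely on the ToDo example.
// The base class, statics, and $data accessor shown here are approximate;
// check the Nymph docs and examples for the real API.
import { Entity } from '@nymphjs/nymph';

export type NoteData = {
  title: string;
  body: string;
};

export class Note extends Entity<NoteData> {
  // Storage type and class name used for queries.
  public static ETYPE = 'note';
  public static class = 'Note';

  constructor() {
    super();
    // Default values for a fresh note.
    this.$data.title = '';
    this.$data.body = '';
  }
}
```

The point isn’t that this is hard. The point is that the LLM has to get these details right from the docs instead of pattern-matching on training data.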


I played around with it a lot yesterday, giving it the API documentation and asking it to write some code based on it. Just like every single other LLM I’ve ever tried, it bungled the entire thing. It made up a bunch of functions and syntax that just don’t exist. After I told it the code was wrong and gave it the right way to do it, it told me that I got it wrong and converted it back to the incorrect syntax. LLMs are interesting toys, but they shouldn’t be used for real work.


Why would they ask consumers what they want when they can tell consumers what they want? What are you gonna do, move to Linux?


It’s integrated graphics, so it can use up to half of the system RAM. I have 96GB of system RAM, so 48GB of VRAM. I bought it last year before the insane price hikes, when it was within reach for normal people like me.
I’ve tried it and it works. I can load huge models, bigger than 48GB even. The ones bigger than 48GB run really slowly, though, like one token per second. But the ones that can fit in the 48GB are pretty decent, like 6 tokens per second for the big models, if I’m remembering correctly. Obviously, something like an 8B parameter model would be way faster, but I don’t really have a use for those kinds of models.
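If you want to estimate whether a model will fit before downloading it, a rough back-of-the-envelope check is parameters × bits per weight ÷ 8, plus some overhead for the KV cache and runtime. The parameter counts, quantization bits, and overhead number below are just illustrative assumptions:

```typescript
// Rough estimate of whether a quantized model fits in a given VRAM budget.
// All numbers here are ballpark assumptions, not measurements.

function modelSizeGiB(params: number, bitsPerWeight: number): number {
  // Weights only: params * bits / 8 bytes, converted to GiB.
  return (params * bitsPerWeight) / 8 / 1024 ** 3;
}

function fitsInVram(
  params: number,        // e.g. 70e9 for a 70B model
  bitsPerWeight: number, // e.g. 4 for a 4-bit quantization
  vramGiB: number,       // e.g. 48, half of 96GB shared RAM
  overheadGiB = 4,       // assumed KV cache + runtime overhead
): boolean {
  return modelSizeGiB(params, bitsPerWeight) + overheadGiB <= vramGiB;
}

// A 70B model at 4-bit is roughly 33 GiB of weights, so it fits in 48 GiB:
console.log(fitsInVram(70e9, 4, 48)); // true
// The same model at 8-bit (~65 GiB) doesn't fit, which is where the
// one-token-per-second territory starts:
console.log(fitsInVram(70e9, 8, 48)); // false
```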


I mean, I get that, but why is Proton offering one? What value do I get from Proton’s LLM that I wouldn’t get from any other company’s LLM? It’s not privacy, because it’s not end-to-end encrypted. It’s not features, because it’s just a fine-tuned version of the free Mistral model (from what I can tell). It’s not integration (thank goodness), because they don’t have access to your data to integrate it with (according to their privacy policy).
I kind of just hate the idea that every tech company is offering an LLM service now. Proton is an email and VPN company. Those things make sense. The calendar and drive stuff does too. They have actual selling points that differentiate them from other offerings. But investing engineering time and talent into yet another LLM, especially one that’s worse than the competition, just seems like a waste to me, all the more so since it doesn’t fit in with their other product offerings.
It really seems like they just wanted to have something AI-related so they wouldn’t be “left behind” in case the hype isn’t a bubble. I don’t like it when companies do that. It makes me think they don’t really have a clear direction.
Edit: it looks like they use several models, not just one:
Lumo is powered by open-source large language models (LLMs) which have been optimized by Proton to give you the best answer based on the model most capable of dealing with your request. The models we’re using currently are Nemo, OpenHands 32B, OLMO 2 32B, GPT-OSS 120B, Qwen, Ernie 4.5 VL 28B, Apertus, and Kimi K2.
- https://proton.me/support/lumo-privacy
I have a laptop with 48GB of VRAM (a Framework with integrated Radeon graphics) that can run all of those models locally, so Proton offers even less value for someone in my position.


Oh, I completely agree. Using Gmail is the problem here, and no amount of settings fiddling will solve that.


Yeah, I really should sit down and create a Matrix server for it.


I don’t think they meant that Gmail used to be private, but that email did. Yes, Gmail has never been private. But that’s why it’s free.


I meant I run it.
But to answer your question, it makes really good use of subaddressing. When you give your email address to a company, you add a label to the address just for that company, and then all of their emails go under that label. You can easily toggle things like notifications, mark as read, and show in aggbox (our version of the inbox, since there isn’t really an inbox when everything is sorted already). Then if that company leaks your address, you can just block that label.
You can also set up screening labels that are meant for real people; any new senders get screened to make sure they’re human before you get their mail.
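Just to illustrate the idea of routing and blocking by label (this is not Port87’s actual address format, which may differ; I’m using the common plus-style subaddressing convention purely as an assumption):

```typescript
// Illustration of label-based routing with plus-style subaddressing.
// Port87's real address format may differ; the convention here is an assumption.

type ParsedAddress = {
  user: string;
  label: string | null;
  domain: string;
};

function parseSubaddress(address: string): ParsedAddress {
  const atIndex = address.lastIndexOf('@');
  const local = address.slice(0, atIndex);
  const domain = address.slice(atIndex + 1);
  const plusIndex = local.indexOf('+');
  const user = plusIndex === -1 ? local : local.slice(0, plusIndex);
  const label = plusIndex === -1 ? null : local.slice(plusIndex + 1);
  return { user, label, domain };
}

// Each company gets its own label, and mail is routed (or blocked) by label.
const parsed = parseSubaddress('alice+powercompany@example.com');
console.log(parsed); // { user: 'alice', label: 'powercompany', domain: 'example.com' }

// If that company leaks the address, block the label instead of changing emails:
const blockedLabels = new Set(['powercompany']);
console.log(parsed.label !== null && blockedLabels.has(parsed.label)); // true
```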


I’d like to invite you to try https://port87.com/
It’s proudly AI free.


Proton has their own AI bullshit: Lumo.
At least it’s not rummaging around in your email, though.
And just so you know, it is not end-to-end encrypted the way their email is when you email another Proton user: https://lumo.proton.me/legal/privacy
The only way to have actually private AI is to run it on your own hardware.
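If you want a sense of what that looks like, here’s a minimal sketch assuming you’re running something locally that exposes an OpenAI-compatible chat endpoint (llama.cpp’s llama-server and Ollama both do). The URL, port, and model name are placeholders for whatever you actually run:

```typescript
// Query a locally hosted model over an OpenAI-compatible endpoint.
// The endpoint, port, and model name are placeholders; nothing here
// leaves your own machine.

const LOCAL_ENDPOINT = 'http://localhost:8080/v1/chat/completions';

async function askLocalModel(prompt: string): Promise<string> {
  const response = await fetch(LOCAL_ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'local-model', // whatever model you have loaded
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  if (!response.ok) {
    throw new Error(`Local model server returned ${response.status}`);
  }
  const data = await response.json();
  // OpenAI-compatible servers put the reply in choices[0].message.content.
  return data.choices[0].message.content;
}

askLocalModel('Summarize this article for me.').then(console.log);
```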


Keep it down this time.


Just watch, we’re two weeks away from some tech bro trying to start a Clankercamp website. The best part is that no one except other tech bros will care.
Oh no! They’ll have to actually think!