Alt account of @Badabinski

Just a sweaty nerd interested in software, home automation, emotional issues, and polite discourse about all of the above.

  • 0 Posts
  • 25 Comments
Joined 5 months ago
Cake day: June 9th, 2024

  • Right, but I can’t require a second factor on a different device that operates outside of my primary device’s trust store. I’m sure there is some way to make my desktop hit my phone up directly and ask for fingerprint auth before unlocking the local keystore, but that still depends on the security of my device and my trust store. I don’t want the second factor to be totally locked to the device I’m running on. I want the server to say, “oh, cool, here’s this passkey. It looks good, but we also need a TOTP from you before you can log in,” or “loving the passkey, but I also need you to respond to the push notification we just sent to a different device and prove your identity biometrically over there.” I don’t want my second factor to be on the same device as my primary factor. I don’t know why a passkey (potentially protected by local biometric auth) + a separate server-required second factor (TOTP or push notification to a different device or something) isn’t an option.

    EDIT: I could make it so a fingerprint would decrypt my SSH key rather than what I have now (i.e. a password). That would effectively be the same number of factors as you’re describing for a passkey, and it would not be good enough for my organization’s security model, nor would it be good enough for me.
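    For what it's worth, OpenSSH can get close to this today with FIDO2 security keys: the private key lives on a separate hardware token that must be physically present, and can be made to demand user verification on every use. A minimal sketch (the key paths are arbitrary examples):

    ```bash
    # Generate an SSH key backed by a FIDO2 security key (OpenSSH 8.2+).
    # Authentication fails unless the token is plugged in and touched.
    ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk

    # With -O verify-required (OpenSSH 8.4+), every use additionally
    # demands user verification on the token itself (PIN or biometric).
    ssh-keygen -t ed25519-sk -O verify-required -f ~/.ssh/id_ed25519_sk_uv
    ```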


  • I just don’t get why I can’t use something like TOTP from my phone or a key fob when logging in with a passkey from my desktop. Why does my second factor have to be an on-device biometrically protected keystore? The sites I’m thinking of currently support TOTP when using passwords, so why can’t they support the same thing when using passkeys? I don’t want to place all my trust in the security of my keystore. I like that I have to unlock my phone to get a TOTP. Someone would have to compromise my local keystore and my phone, which makes it a better second factor in my opinion.

    EDIT: like, at work, I ssh to servers all over the damn place using an ssh key. I have to get to those servers through a jump box that requires me to unlock my phone and provide a biometric second factor before it will allow me through. That’s asymmetric cryptography + a second factor of authentication that’s still effective even if someone has compromised my machine and has direct access to my private key. That’s what I want from passkeys.
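    And the server-side TOTP check is dead simple to reason about: it's just an HMAC over the current time with a shared secret, so the server can verify it knowing nothing about the device that holds the passkey or SSH key. A toy sketch using oathtool (real servers accept a small window of codes and rate-limit; the secret is a made-up example value):

    ```bash
    # Shared secret enrolled on the phone at setup time (example value).
    SECRET="JBSWY3DPEHPK3PXP"

    # What the phone's authenticator app computes:
    oathtool --totp -b "$SECRET"

    # What the server computes independently for comparison:
    EXPECTED="$(oathtool --totp -b "$SECRET")"
    read -r -p "Code from your phone: " CODE
    [[ "$CODE" == "$EXPECTED" ]] && echo "second factor OK" || echo "rejected"
    ```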


  • This is a bad take. Several cities in my state banded together to create a municipal fiber network called UTOPIA. The fiber is owned by the cities that bought in and is used by several different ISPs. The ISPs pay UTOPIA for access, and then they have to compete with each other for subscribers based on performance, features, and cost. Like, there’s genuine market competition for internet! If the state owns the infrastructure and then forces the playing field to be level, then everyone benefits. People in the cities with UTOPIA got fast fiber internet waaay faster than anyone else, they have a plethora of choices (want a static IP and a business plan in your residence? There’s an ISP that sells that!) at great prices, ISPs get access to subscribers without having to maintain fiber, and the cities who bought in get to make money from this and attract residents and businesses who benefit from the service.

    My city didn’t buy in. Google Fiber eventually came to town so I was able to kick Comcast out, but I am uneasy about what’ll happen if Google decides to drop their ISP business. If I was in a city with UTOPIA, it would just be one ISP folding and I’d be able to pick a new one and switch over right away.

    EDIT: cool, Cory Doctorow wrote a blog post about it: https://doctorow.medium.com/https-pluralistic-net-2024-05-16-symmetrical-10gb-for-119-utopia-347e64869977
    UTOPIA users have access to 18 different ISPs. I feel like that speaks for itself right there. This is the future we all should have had.




  • I just wish that companies enabling passkeys would still allow password+MFA. There are several sites that, when you enable passkeys, lock you out of MFA for devices that lack a biometric second factor of authentication. I’d love to use passkeys + biometrics otherwise, since I’ve often felt that the auth problem would be best solved with asymmetric cryptography.

    EDIT: I meant to say “would still allow passkeys+MFA.” hooray for sleep deprivation lol.





  • I think they’re all top-level responses too. I took a random sampling of their comments, and they never respond to anyone else’s comment. That smells like someone being lazy and not bothering to iterate through comments when writing their dumb AI commenting script.

    Like, just, what the fuck is this shit? There’s one comment from 8 months ago that looks real. Everything else is from the past week and reads like LLM drivel. Why would you bother? Is it just someone who is bored and wanted to see how long they could keep fooling people?


  • Thank god for projects like Valetudo that let you break your stuff away from the cloud.

    Semi-related story time. I bought a Midea Cube dehumidifier for my laundry room. My dryer has been broken for years, and I’ve found that air drying clothes makes them last a lot longer. It’s hard to air dry inside, hence the dehumidifier. My plan was to control the dehu automagically with Home Assistant along with some fans, so people could just click a button to turn all the shit on to dry their clothes.

    After buying it, I realized that the dehumidifier could only be controlled via the cloud, and the cloud control was unreliable as fuck. With the exception of tech people, nobody is willing to deal with my flaky bullshit. If the button doesn’t work consistently, my partner, her other partner, and my FIL aren’t going to bother. Luckily, a very industrious person made this thing that let me rip out the hardware responsible for cloud connectivity and replace it with a cheap microcontroller. Now, my dehumidifier talks to my Home Assistant server directly via MQTT and it just fucking works.

    Give me local-only control or fuck off; I’ll take control myself. It’s not much to demand, and shit like what this article describes absolutely deepens my conviction around local-only control.
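    Local control really is this simple once the cloud is out of the picture; any box on the LAN can drive the device. A sketch with made-up topic names (the real topic layout depends on the firmware):

    ```bash
    BROKER="homeassistant.local"   # wherever the MQTT broker lives

    # Turn the dehumidifier on:
    mosquitto_pub -h "$BROKER" -t "laundry/dehumidifier/command" -m "ON"

    # Watch everything the device reports back:
    mosquitto_sub -h "$BROKER" -t "laundry/dehumidifier/#" -v
    ```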



  • I was looking for this comment so I could vent my extreme irritation to the world.

    God, can this concept please die already‽ If you want to put solar panels where the cars/trains are, just 👏 fucking 👏 put 👏 them 👏 on 👏 top 👏

    Do not put them on the ground where they will get smushed and covered in dust and snow and dirt. do not. Just make a little roof for the train tracks/road/bike path/sidewalk/game trail/snail raceway and then put the panels on top of the roof and then if you’re feeling fancy angle the panels to point towards the sun and if you’re feeling really quite fancy then you can use bifacial panels to capture the backscatter from the ground and shit and then we can all be happy. solar ground no, solar roof yes, ground no roof yes. do not play the trolley problem with solar panels on the railroad tracks. we have been doing solar energy for decades and have fucking minmaxed this shit so why are they still trying to do this just STOP.

    Fuck.

    Person I’m responding to, please know that none of this is directed at you. I’m just sour right now and should get off the internet.




  • Ugh, I hate ChatGPT. If this is Bash (which it is, because it’s literally looking for files in a directory called ~/.bashrc.d), then it should god damned well be using syntax and language features that we’ve had for at least twenty fucking years. Specifically, if you’re writing for Bash (and not POSIX shell), you better be using [[ ]] rather than [ ]. This wiki is my holy book I use to keep the demons away when writing Bash, and it does a simply fantastic job of explaining why you should use God damned double square brackets.

    ChatGPT writes shitty, horrible, buggy ass Bash. This is relatively decent for ChatGPT (it even makes sure the files are real files and not symlinks), but I’ve had to fix enough terrible fucking shitty AI Bash to have no tolerance for even the smallest misstep from it.

    Sincerely,
    A senior developer who is known as the Bash wizard at work.

    EDIT: Sorry, OP. ChatGPT did not, in fact, write this code, and I am going to leave my comment here as a testament to what a big smelly dick I was here.
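    For anyone wondering what the double-bracket fuss is actually about, a quick sketch (the ~/.bashrc.d loop is a generic reconstruction, not OP's code):

    ```bash
    # [[ ]] does no word splitting or glob expansion on unquoted variables;
    # [ ] is an ordinary command and does both.
    f="some file.txt"
    [ -f $f ] && echo found    # breaks: $f splits into two words -> error
    [[ -f $f ]] && echo found  # works: no splitting inside [[ ]]

    # The ~/.bashrc.d pattern with [[ ]], sourcing only real non-symlink files:
    for rc in ~/.bashrc.d/*; do
      [[ -f $rc && ! -L $rc ]] && source "$rc"
    done
    ```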


  • Yeah, I’ve been wondering how the fuck they pulled this off. If it turns out that the only pagers that exploded belonged to Hezbollah members, then that would signal to me that this was done entirely digitally.

    I’ve heard that batteries (can’t remember if it was laptop or phone batteries) contain the energy of a small grenade, but getting it to release that energy all at once without physical access is absolutely fucking wild and has serious fucking implications for device security.

    EDIT: To avoid spreading misinformation, I’m providing this edit to say that the batteries absolutely were not the cause of the explosion. This was a supply-chain attack. Explosives were inserted into the pagers. The batteries in these pagers cannot be made to explode like this. I was overly excited when I made this comment.


  • Badabinski@kbin.earth to Linux@lemmy.ml · Goldilocks distro?

    For me, it’s Arch for desktop usage. When I first started using Arch, it would not have been my answer, but now it’s Arch. The package manager has great ergonomics (not great discoverability, but great ergonomics), it’s always up to date, I can get a system from USB to sway in ~20 minutes (it would probably be faster if I used the installer), it’s fast because it doesn’t enable many things by default, and it’s honestly been the most reliable distro I’ve ever used. I used to use OpenSUSE ~10 years ago, and that broke more in one year than Arch has in ten.

    I personally feel like Arch’s reputation for unreliability is overstated. Arch will give you the rope to hang yourself if you ask for it, but if you just read the emails (or use a helper like paru that displays breaking changes when updating) and merge your pacnews, then you’ll likely have a rock-solid system.

    Again, this is all just my opinion. It’s easy for me to overlook or forget all of the pain and suffering I likely went through when learning how to Arch. I won’t recommend it to you, but I’ll happily say how much I’ve come to enjoy using it.
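    In practice, the maintenance routine described above boils down to roughly two commands:

    ```bash
    # Full system upgrade. paru wraps this and prints Arch news /
    # breaking-change notices before it touches anything.
    sudo pacman -Syu

    # Walk every *.pacnew / *.pacsave left behind by upgrades and
    # view/merge/discard each one (pacdiff ships in pacman-contrib).
    pacdiff
    ```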


  • I wrote a comment about this several months ago on my old kbin.social account. That site is gone and I can’t seem to get a link to it, so I’m just going to repost it here since I feel it’s relevant. My kbin client doesn’t let me copy text posts directly, so I’ve had to use the Select feature of the android app switcher. Unfortunately, the comment didn’t emerge unscathed, and I lack the mental energy to fix it due to covid brain fog (EDIT: it appears that many uses of I were not preserved). The context of the old post was about layoffs, and it can be found here: https://kbin.earth/m/asklemmy@lemmy.ml/t/12147

    I want to offer my perspective on the AI thing from the point of view of a senior individual contributor at a larger company. Management loves the idea, but there will be a lot of developers fixing auto-generated code full of bad practices and mysterious bugs at any company that tries to lean on it instead of good devs. A large language model has no concept of good or bad, and it has no logic. It will happily generate string-templated SQL queries that are ripe for SQL injection. I’ve had to fix this myself. Things get even worse when you have to deal with a shit language like Bash that is absolutely full of God awful footguns. Sometimes you have to use that wretched piece of trash language, and the scripts generated are horrific. Remember that time when Steam on Linux was effectively running rm -rf /* on people’s systems? I’ve had to fix that same type of issue multiple times at my workplace.
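    For the curious, the Steam incident was this class of bug, sketched here with made-up variable names and the rm echo’d so it’s safe to run:

    ```bash
    #!/usr/bin/env bash
    # If the script's path has no directory component, the cd fails and
    # APP_ROOT silently ends up empty.
    APP_ROOT="$(cd "${0%/*}" 2>/dev/null && pwd)"
    echo rm -rf "$APP_ROOT/"*   # empty APP_ROOT -> this is `rm -rf /*`

    # Defenses: refuse to run with unset or empty variables.
    set -u                                          # unset variable -> hard error
    echo rm -rf "${APP_ROOT:?APP_ROOT is empty}/"*  # empty -> abort with message
    ```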

    I think LLMs will genuinely transform parts of the software industry, but I absolutely do not think they’re going to stand in for competent developers in the near future. Maybe they can help junior developers who don’t have a good grasp on syntax and patterns and such. I’ve personally felt no need to use them, since I spend about 95% of my time on architecture, testing, and documentation.

    Now, do the higher-ups think the way that I do? Absolutely not. I’ve had senior management ask me about how I’m using AI tooling, and they always seem so disappointed when I explain why I personally don’t feel the need for it and what I feel its weaknesses are. Bossman sees it as a way to magically multiply IC efficiency for nothing, so I absolutely agree that it’s likely playing a part in at least some of these layoffs.

    Basically, I think LLMs can be helpful for some folks, but my experience is that the use of LLMs by junior developers absolutely increases the workload of senior developers. Senior developers using LLMs can experience a productivity bump, but only if they’re very critical of the output generated by the model. I am personally much faster just relying on traditional IDE autocomplete, since I don’t have to switch from “I’m writing code” mode to “I’m reviewing code” mode.


  • The tiered storage stuff is pretty cool. You can say “I want this data on this disk, so if I get a cache miss from a faster disk/RAM it’ll come from this other disk first.”

    I believe it also has some interesting ways of handling redundancy like erasure coding, and I thiiiink it does some kind of byte-level deduplication? I don’t know if that’s implemented or is even still planned, but I remember being quite excited for it. It was supposed to be dedupe without all of the hideous drawbacks that things like ZFS dedupe have.

    EDIT: deduplication is absolutely not a thing yet. I don’t know if it’s still on the roadmap.

    EDIT: Erasure coding is deffo implemented, however.
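    Assuming the filesystem in question here is bcachefs (the thread context suggests it, but that’s my inference), the tiering is expressed at format time with target options. A sketch per the bcachefs manual, with example device names, so double-check against `bcachefs format --help`:

    ```bash
    # ASSUMPTION: the filesystem is bcachefs; devices are examples.
    # One fast NVMe tier in front of one big HDD tier: writes land on the
    # ssd group, get flushed to hdd in the background, and hot reads are
    # promoted back to ssd.
    bcachefs format \
      --label=ssd.ssd1 /dev/nvme0n1 \
      --label=hdd.hdd1 /dev/sda \
      --foreground_target=ssd \
      --promote_target=ssd \
      --background_target=hdd
    ```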