


  • AI reviews don’t replace maintainer code review, nor do they relieve maintainers from their due diligence.

    I can’t help but be a bit skeptical when reading something like this. To me it’s akin to having to do calculations manually while there’s a calculator right beside you. For now, the technology might not be considered sufficiently trustworthy, but what if the clanker starts spitting out conclusions that match a maintainer’s, like, 99% of the time? Wouldn’t (partial) automation of the process become extremely tempting, especially when the stack of pull requests starts piling up (because of vibecoding)?

    Such a policy would be near-impossible to enforce anyway. In fact, we’d rather have them transparently disclose the use of AI than hide it and submit the code against our terms. According to our policy, any significant use of AI in a pull request must be disclosed and labelled.

    And how exactly do you enforce that? It seems like you’re just shifting the problem.

    Certain more esoteric concerns about AI code being somehow inherently inferior to “real code” are not based in reality.

    I mean, there are hallucination concerns, and there are licensing conflicts. Sure, people can also copy code from other projects with incompatible licenses, but someone without programming experience is less likely to do so than when vibecoding with a tool directly trained on such material.

    Malicious and deceptive LLMs are absolutely conceivable, but that would bring us back to the saboteur.

    If Microsoft itself were the saboteur, you’d be fucked. They know the maintainers, because GitHub is Microsoft property, and so is the proprietary AI model directly integrated into the toolchain. A malicious version of Copilot could, hypothetically, be supplied to maintainers, specifically targeting this exploit. Microsoft is NOT your friend; it works closely with government organizations, which are increasingly interested in compromising consumer privacy.

    For now, I do believe this to be a sane approach to AI usage, and I believe developers should have the freedom to choose their preferred environment. But the active use of such tools does warrant a (healthy) dose of critique, especially with regard to privacy-oriented software, a field where AI has generally been rather invasive.


  • Yes, because they constitute a significant portion of the eyes traditionally involved in verifying software. You can allow a potentially cherry-picked group of researchers to do the verification on behalf of the user base, but that hinges on a “trust me bro” basis. I appreciate that you’ve looked into the process in practice, but please understand that these pieces of software are anything but simple. Also, if a state actor were to deliberately implement an exploit, it wouldn’t necessarily be obvious at all, even if the source code were available; they’re state-backed, top-of-their-game security researchers themselves. Even higher-tier consumer-grade computer viruses won’t execute in a virtualized environment, precisely to avoid being detected. They won’t compromise a system when unnecessary, and an exploit might only be triggered when absolutely required; again, to avoid suspicion.

    I fully agree with the last paragraph though, and believe there to be an overreliance on digital systems overall. In terms of FOSS, you have to rely on many, many different contributors to facilitate maintenance, packaging and distribution in good faith, and sometimes all it takes is just one package for the whole system to become compromised. But even so, I’m more comfortable knowing that the majority of software running on my machines is open source than relying, in the dark, on a single entity like Microsoft with an abysmal track record in respect of privacy. Of course you could restrict access to Microsoft servers using network filtering, but it’s not just that aspect; it’s also not having to deal with Microsoft’s increasingly restricted experience, primarily serving their perverse dark patterns. I do believe people should handle sensitive files with care, for instance: put Tails on a live USB, leave it off the internet, put the files on an encrypted drive, physically disconnect the drives, and store them somewhere safe.


  • Ah sorry, it seems I read past that part. Unless programmers have the exceptional skills and time required to effectively reverse engineer these complex algorithms, nobody will bother to do so, especially when it’s required after each update. On the contrary, if source code were available, the barrier to entry would be significantly lower and require far less specialized skill. So safe to say, most programmers won’t even bother inspecting a binary unless there’s absolutely no other way around it, or they have time to burn. Whereas, if you opened up the source, there would be a lot more, let’s say, C programmers able to inspect the algorithm. Really, have a look at what it takes to write binary code, let alone reverse engineer complicated code that somebody else wrote.

    I agree with Linus’ statement though: I rarely inspect source code myself, but I find it more comforting knowing that package maintainers, for instance, could theoretically check the source before distribution. I stand by my opinion that it’s a bad look for a privacy- and security-oriented piece of software to restrict non-“experts” from inspecting the very thing that is supposed to ensure that.


  • India proposes requiring smartphone makers to share source code with the government and make several software changes as part of a raft of security measures.

    How does that sound promising at all? Especially when initiated by a government that previously attempted to mandate government spyware on all consumer smartphones. The following excerpts are from “India’s proposed phone security rules that are worrying tech firms”:

    Devices must store security audit logs, including app installations and login attempts, for 12 months.

    Phones must periodically scan for malware and identify potentially harmful applications.

    Defined as potentially harmful by whom? Right.

    Phone makers must notify a government organisation before releasing any major updates or security patches.

    We cannot approve of the security patch just yet, as we must first extensively exploit the vulnerability…

    Devices must detect if phones have been rooted or “jailbroken”, where users bypass built-in security restrictions, and display continuous warning banners to recommend corrective measures.

    Phones must permanently block installation of older software versions, even if officially signed by the manufacturer, to prevent security downgrades.


  • It becomes more apparent to me every day that we might be headed towards a society dynamically managed by digital systems; a “smart society”, or rather a Society-as-a-Service. This seems to be the logical conclusion if you continue the line of “smart buildings” being part of “smart cities”. Using IoT sensors and unified digital platforms, data is continuously gathered on the population to be analyzed, and its extracts are stored indefinitely (in pseudonymized form) in the many data centers currently being constructed. This data is then used to dynamically adapt the system, replacing the “inefficient” democratic process and public services as a whole. Of course the open-source (too optimistic?) model used is free of any bias; however, nobody has access to the resources required to verify that claim. But given that big tech has historically never shown any signs of tyranny, a utopian outcome can safely be assumed… Or I might simply be a nut with a brain making nonsensical connections that have no basis in reality.