

I agree with you, but the cash example is a bad one because there is a push to move entirely to electronic payments.
made you look


Some newer radiation-hardened stuff is 10x larger than that, older gear even more so. But that just reduces the risk, not sure it’s possible to negate it entirely.
An easier way is to just include more CPUs as part of the system, run them in lockstep, then compare the results by majority rule: if two of the three CPUs agree on an answer and the third says something else, you discard the third’s result and go with the majority.
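That majority-rule comparison (triple modular redundancy, basically) is simple enough to sketch; this is a minimal illustration, not how any real lockstep hardware is implemented, and the names are mine:

```python
from collections import Counter

def vote(results):
    """Majority vote across redundant CPU outputs.

    Returns the value at least two of the three agree on;
    raises if all three disagree (an unrecoverable fault).
    """
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError(f"no majority among {results!r}")
    return value

# Two healthy cores agree; the third took a bit flip.
print(vote([42, 42, 1065353258]))  # -> 42
```

Real systems do this in hardware every cycle, but the logic is the same: the single flipped result is simply outvoted.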


The Ladybird devs are currently in the process of switching languages again, from Swift to Rust, using LLMs.


This isn’t sending your packets anywhere but their closest datacenter. Not sure I’d trust MS (or rather, Cloudflare) with your porn over your ISP, who you’re actually paying.


The original use case for this stuff was unencrypted HTTP on a public WiFi connection, in which case your “ISP” is whoever owns the shop you’re in, and yeah, they could see everything.
If you’re at home or whatever it offers effectively no benefits. It doesn’t “block trackers” or whatever nonsense Nord claims, but I don’t think Microsoft ever claimed that it did.


IPFS has gateways though, so you can link to the latest version of a page, which can be updated by the owner, or alternatively link to a specific revision that is immutable and can’t be forged.
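Through a public gateway (ipfs.io here, just as an example hostname) the two kinds of links look roughly like this:

```
https://ipfs.io/ipfs/<CID>    immutable: the CID is derived from the content's hash
https://ipfs.io/ipns/<name>   mutable: a name the owner can repoint at a new CID
```

So `/ipfs/` links pin an exact revision, while `/ipns/` links follow whatever the owner currently publishes.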


Seems like we need to switch to URLs that contain the SHA256 of the page they’re linking to, so we can tell if anything has changed since the link was created.
IPFS says hi
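The SHA256-in-the-URL idea is easy to sketch without IPFS. A minimal illustration, with the hash stuffed into the fragment; the `#sha256=` convention and the function names are made up for this example:

```python
import hashlib

def make_link(url: str, content: bytes) -> str:
    # Embed the content hash in the URL fragment, similar in
    # spirit to how an IPFS CID commits to exact content.
    digest = hashlib.sha256(content).hexdigest()
    return f"{url}#sha256={digest}"

def verify(link: str, fetched: bytes) -> bool:
    # Recompute the hash of what we actually fetched and
    # compare it to the hash the link promised.
    expected = link.rsplit("#sha256=", 1)[1]
    return hashlib.sha256(fetched).hexdigest() == expected

page = b"<html>hello</html>"
link = make_link("https://example.com/page", page)
print(verify(link, page))         # unchanged page -> True
print(verify(link, page + b"!"))  # modified page -> False
```

The catch, as with any content addressing, is that a legitimate edit by the page owner also breaks the link, which is exactly the mutable-vs-immutable trade-off IPFS splits into two link types.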


It’s the same person running all of them, so yeah it is.


No, Nokia do own a bunch of patents on it, I’m pretty sure they also created (and have patents on) the HEIF format used in HEIC/AVIF as well.
Edit: search results were failing me, but here’s a couple.
https://blogs.windows.com/devices/2013/03/18/h-265hevc-high-quality-video-at-half-the-bandwidth/
https://mspoweruser.com/nokia-details-its-contribution-to-h-265hevc-hints-at-integration-in-devices/


Much in the same way that laws don’t prevent crime, a project banning AI contributions doesn’t stop people from trying to sneak in LLM slop, it instead lets the project ban them without argument.


SpaceX wrote in its July permit application — under the header Specific Testing Requirements — Table 2 for Outfall: 001 — that its mercury concentration at one outfall location was 113 micrograms per liter. Water quality criteria in the state call for levels no higher than 2.1 micrograms per liter for acute aquatic toxicity, and much lower levels for human health.
Cool, you can drink the mercury water, but I’ll pass thanks.


I’ve got some numbers; it took longer than I’d have liked because of ISP issues. Each period is about a day, give or take.
With the default TTL, my unbound server saw 54,087 total requests, 17,022 got a cache hit, 37,065 a cache miss. So a 31.5% cache hit rate.
With clamping it saw 56,258 requests, 30,761 were hits, 25,497 misses. A 54.7% cache hit rate.
And the important thing, and the most “unscientific”, I didn’t encounter any issues with stale DNS results. In that everything still seemed to work and I didn’t get random error pages while browsing or such.
I’m kinda surprised the total query counts were so close. I would have assumed a longer TTL would also cause clients to cache results for longer, making fewer requests (though e.g. Firefox actually caps TTLs at 600 seconds or so). My working theory is that for things like YouTube videos, instead of using static hostnames and rotating out IPs, they’re doing the opposite: keeping the addresses fixed but changing the domain names, effectively cache-busting DNS.
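The hit rates above check out against the raw counts; a quick sketch of the arithmetic:

```python
def hit_rate(hits: int, misses: int) -> tuple[float, int]:
    # Cache hit rate as a percentage, plus the total query count.
    total = hits + misses
    return hits / total * 100, total

default_rate, default_total = hit_rate(17022, 37065)
clamped_rate, clamped_total = hit_rate(30761, 25497)
print(f"default TTL: {default_total} queries, {default_rate:.1f}% hits")
print(f"clamped TTL: {clamped_total} queries, {clamped_rate:.1f}% hits")
# -> default TTL: 54087 queries, 31.5% hits
# -> clamped TTL: 56258 queries, 54.7% hits
```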


Set that minimum TTL to something between 40 minutes (2400 seconds) and 1 hour; this is a perfectly reasonable range.
Sounds good, let’s give that a try and see what breaks.
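For unbound specifically, the clamp is a one-line config change; a sketch assuming the usual unbound.conf layout:

```
server:
    # Floor every cached record's TTL at 40 minutes (2400 seconds).
    # Records with a longer upstream TTL are unaffected.
    cache-min-ttl: 2400
```

Worth noting this deliberately serves records past their upstream TTL, so anything that fast-rotates IPs (CDN failover, some load balancers) is where breakage would show up first.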


What they’re saying is that a web server can create a traditional jpeg file from a jpeg xl to send to a client as needed.
Other way around: you can losslessly convert a “web safe” JPEG file into a JXL one (and back again), but you can’t turn an arbitrary JXL file into a JPEG.
But yeah, something like Lemmy could recompress uploaded JPEG images as JXL on the server, serving them as JXL to updated clients and converting back to JPEG as needed, saving server storage and bandwidth with no quality loss.
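With the reference libjxl tools this round trip is two commands; a sketch, assuming `cjxl`/`djxl` are installed and the filenames are mine:

```shell
# cjxl recompresses an existing JPEG losslessly (reversible mode)
cjxl photo.jpg photo.jxl

# djxl can reconstruct the original JPEG from that JXL
djxl photo.jxl photo.roundtrip.jpg

# the reconstructed file should be byte-identical to the input
cmp photo.jpg photo.roundtrip.jpg
```

The ~20% size saving comes from re-entropy-coding the existing JPEG data, which is why it only works in this direction: a JXL made from scratch has no JPEG structure to reconstruct.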


I’ve seen zero RISC devices in the wild
Ever seen an Nvidia GPU? They’ve been shipping RISC-V cores in them for years; one estimate is they shipped a billion cores in 2024.
Not as end user programmable chips of course, but the “end user devices” market is only a small part of the total industry.


Rust has no stable inter-module ABI, so everything has to be statically linked together. And because of how “viral” the GPL/LGPL are under static linking, a single dependency with one of those licenses turns the entire binary into a GPL-licensed one.
So the community mostly picks permissive licenses that don’t do that, and that inertia ends up applying to the final binaries as well for no real good reason, especially when there are options like the MPL.


Ehh, I’d pass on Ladybird. I’ve been donating to Servo myself.
You can thank Intel for that: they pressured MS to lower the documented requirements so they could sell more low-end hardware.
Of course, MS executives also gladly went along with it, not like they’re innocent in any way.
Also Nvidia and their drivers caused issues, as usual.