That’s not been my experience. Lots of drives I’ve bought have been FAT32 out of the box.
In terms of local storage, I usually have everything in ~/projects/project-name, and I don’t have tiny file size limits because I don’t use FAT32 filesystems. FAT32 is the default filesystem you usually get on the USB sticks and external hard drives you buy. You have to reformat those drives to something like EXT4 (Linux) or NTFS (Windows), or you’re stuck with FAT32 and its 4GB maximum file size.
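For reference, reformatting is a one-liner. Note that /dev/sdX1 below is a stand-in for your actual partition, and either command will destroy everything on it:
# mkfs.ext4 /dev/sdX1
or, if you need to read the drive from Windows too:
# mkfs.ntfs -Q /dev/sdX1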
I had no idea! Thanks for the tip.
In one of the other comments, we worked out that it was definitely something to do with ACPI, but yes I do have an external monitor. This is a desktop system.
Disabling the interrupt did the job, but I don’t know why it’s happening. If this is related to the monitor, could this be an Nvidia thing?
There it is! Thank you! It’s a process owned by root called kworker/0:0+kacpid. Any idea what that is?
[Edit 1] Interestingly, I can’t even kill -9 it.
[Edit 2] With kworker and kacpid to work with, I did a quick search and found this SO page that has some interesting information that I only partially understand, but the following worked like a charm:
# grep -Ev "^[ ]*0" /sys/firmware/acpi/interrupts/gpe?? | sort --field-separator=: --key=2 --numeric --reverse | head -1
/sys/firmware/acpi/interrupts/gpe09:11131050 STS enabled unmasked
# echo disable > /sys/firmware/acpi/interrupts/gpe09
It’s not clear to me what an interrupt is, whether this gpe09 value is meant to be persistent across reboots, or why this only seems to have started happening in the last couple of months, but if I can make it go away by running the above from time to time, I guess it’s alright?
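If it turns out it does need re-applying on every boot, one option (a sketch only, assuming systemd and that the culprit stays gpe09) would be a oneshot unit that runs the same echo at startup:

[Unit]
Description=Mask the noisy ACPI GPE interrupt

[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo disable > /sys/firmware/acpi/interrupts/gpe09"

[Install]
WantedBy=multi-user.target

Drop that into /etc/systemd/system/disable-gpe09.service (the name is mine, call it whatever) and systemctl enable it.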
Oh boy, are you going to love-to-hate this then. It’s best viewed on a proper computer, but you’ll get the gist on mobile too.
But there’s nothing stopping you from loading realistic (or even real) data into a system like this. They’re entirely different concepts. Indeed, I’ve loaded gigabytes of production data into systems similar to what I’m proposing here (taking all necessary precautions of course). At one company, I even built a system that pulled production data into a developer-friendly snapshot while simultaneously pseudo-anonymising it so it could safely (for some value of ${safe}) be tinkered with in development.
In fact, adhering to a system like this makes such things easier, since you don’t have to make any concessions to “this is how we do it in development”. You just pull a snapshot from the environment you want to work with and load it into your Compose session.
It sounds like you’re confusing the application with the data. Nothing in this model requires the use of production data.
I feel like you must have read an entirely different post, which must be a failing in my writing.
I would never condone baking secrets into a compose file, which is why the values in compose.yaml aren’t secrets. The idea is that your compose file is used exclusively for testing and development, where the data isn’t real, and the priority is easing development. When you deploy, you don’t use that compose file because your environment is populated by whatever you use in production (typically Kubernetes these days).
You should not store your development database password in a .env file because it’s not a secret. The AWS keys listed in the compose are meant to be exactly as they are there: XXX, because LocalStack doesn’t care what these values are, only that they exist.
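To make that concrete, here’s a hypothetical compose.yaml fragment (the service names are mine, 4566 is LocalStack’s standard edge port, and AWS_ENDPOINT_URL is only honoured by recent AWS SDKs):

services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"
  app:
    build: .
    environment:
      AWS_ACCESS_KEY_ID: XXX          # not a secret: only needs to exist
      AWS_SECRET_ACCESS_KEY: XXX
      AWS_ENDPOINT_URL: http://localstack:4566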
As for the CLI thing, again I think you’ve missed the point. The idea is to start from a position of “I’m building images” and therefore never have a “local app (Django, sqlite)”, because SQLite should not be used unless that’s what’s used in production. There should be little to no difference between development and production, so scripting a bridge between the two doesn’t make a lot of sense to me.
I don’t mean to be snarky, but I feel like you didn’t actually read the post 'cause pretty much everything you’ve suggested is the opposite of what I was trying to say.
.json or .env files. The litmus test here is: “How many steps does it take to get this project running?” If it’s more than 1 (docker compose up), it’s too many.

High praise! Just keep in mind that my blog is a mixed bag of topics. A little code, lots of politics, and some random stuff to boot.
It’s a tough one, but there are a few options.
For AWS, my favourite is LocalStack, a Docker image that you can stand up like any other service and then tell to emulate common AWS services: S3, Lambda, etc. They claim to support 80 different services, which is… nuts. They’ve got a strange licensing model though: last time I used it, they supported some of the more common services for free, but if you want more, you gotta pay… and they aren’t cheap. I don’t know if anything like this exists for Azure.
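As a quick sanity check once the container is up, you can point the standard AWS CLI at it; something like this should work (4566 being the default edge port):

$ aws --endpoint-url=http://localhost:4566 s3 mb s3://test-bucket
$ aws --endpoint-url=http://localhost:4566 s3 ls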
The next-best choice is to use a stand-in. Many cloud services are just managed+branded Free software projects: RDS is either PostgreSQL or MySQL, ElastiCache is just Redis, etc. For these, you can just stand up a copy of the actual service, and since the APIs are identical, you should be fine. Where it gets tricky is when the cloud provider has messed with the API or added functionality that doesn’t exist elsewhere. SQS, for example, is kind of like RabbitMQ… but not.
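For the easy cases, the stand-in really is just the stock image; a local ElastiCache substitute can be as simple as this (pin whatever Redis version matches production):

$ docker run --rm -p 6379:6379 redis:7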
In those cases, it’s a question of how your application interacts with this service. If it’s by way of an external package (say Celery to SQS for example), then using RabbitMQ locally and SQS in production is probably fine because it’s Celery that’s managing the distinction and not you. They’ve done the work of testing compatibility, so theoretically you don’t have to.
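As a sketch of what that boundary looks like in practice (assuming your app reads its broker URL from an environment variable; amqp:// and sqs:// are both real Celery transports):

# development: the RabbitMQ container from your compose file
export CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//
# production: same app, but Celery speaks SQS and manages the differences
export CELERY_BROKER_URL=sqs://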
If however your application is the kind of thing that interacts with this service on a low level, opening a direct connection and speaking its protocol yourself, that’s probably not a good idea.
That leaves the third option, which isn’t great, but I’ve done it and it’s not so bad: use the cloud service in development. Normally this is done by having separate services spun up per user, or even with a role account. When your app writes to an S3 bucket locally, it’s actually writing to a real bucket called companyname-username-projectbucket. With tools like Terraform, the fiddly process of setting all this up can be drastically simplified, so it’s not so bad. Just make sure the developers are aware that their actions can incur costs, is all.
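A hypothetical Terraform sketch of the per-user pattern (the variable and naming scheme here are mine, not a prescription):

variable "username" {
  type = string
}

# one bucket per developer: pass -var username=whoever at plan/apply time
resource "aws_s3_bucket" "dev_bucket" {
  bucket = "companyname-${var.username}-projectbucket"
}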
If none of the above are suitable, then it’s probably time to stub out the service and rely more heavily on a QA or staging environment that better reflects production.
Yeah, that was the big strike against it for me too. I found that you can sort of perch it over a crossed leg, and it’s serviceable that way, but yeah… no coding on the train with a Surface.
The Surface Pro keyboard is actually quite good, with the added bonus that it’s also easily detachable.
This too is an excellent take. “Artificial pain points” for capitalism, or “learn some shit” for Linux. Love it.
You make an excellent point. I have a lot more patience for something I can understand, control, and most importantly, modify to my needs. Compared to an iThing (when it’s interacting with other iThings, anyway), Linux is typically embarrassingly user-hostile.
Of course, if you want your iThing to do something Apple has decided you shouldn’t want to do, it’s a Total Fucking Nightmare to get working, so you use the OS that supports your priorities.
Still, I really appreciate the Free software that goes out of its way to make things easy, and it’s something I prioritise in my own Free software offerings.
Oof, that video… I don’t have enough patience to put up with that sort of thing either. I wonder how plausible a complete Rust fork of the kernel would be.
In my experience, the larger the company, the more likely they are to force you to use Windows. The smaller companies will be more relaxed about the whole thing.
The largest company I’ve worked for that allows Linux had a staff count of hundreds of engineers and hundreds more non-nerds. In their case though, the laptops were crippled with CrowdStrike and Kolide, and while the tech team was working hard to support us, we were always aware that we made up around 1% of the machines they managed and represented a big chunk of their headaches.
The response to this you usually hear (from me, even) is that “I don’t need support, I know what I’m doing”. Which is probably true, but the vast majority of problems are in dealing with access to proprietary systems, failures from CrowdStrike, complaints about kernel versions, etc.
TL;DR: work at a small company (<100 staff) and they’ll probably leave you alone. Go bigger and you’ll be stuck fighting IT in one way or another.
Actually, someone did, changing the name to “Glimpse”. They announced it as an explicit fork that would continue development under the new name.
As far as I know, that’s as far as they got.
exFAT is good for portable devices, but if you’re working with an internal drive, there’s no reason not to use EXT4 or NTFS.