

Writing software always carries a non-zero risk of mistakes. If compiling were part of building the package, rather than the generated files being manually committed to the repository, things would work. And that would make the design have no essential binary blob.


It is a better approach; it just may be more complex. Only people developing or packaging the library need to compile the message definitions. It’s not a big burden to require that they have protoc installed. The end user will only need to depend on the created package.


Also, there is strictyaml, which validates against schemas. Don’t touch unsafe yaml.load from the plain PyYAML module.
Thanks. I’ll include that in an update.
protobuf needs to be compiled. This introduces the possibility of coder error: simply forgetting to compile and commit the protobuf files after a change. This affected the Electrum BTC and LTC (light) wallets.
Yes, that’s certainly a downside. It also demonstrates one should not commit such generated files. A better approach is to commit the source files (in this instance the message definitions) and have a compilation step included in the program’s build/install recipe.


Joblib has the same drawback as pickle. From the documentation:
joblib.dump() and joblib.load() are based on the Python pickle serialization model, which means that arbitrary Python code can be executed when loading a serialized object with joblib.load().
joblib.load() should therefore never be used to load objects from an untrusted source or otherwise you will introduce a security vulnerability in your program.
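The danger is easy to demonstrate. Here’s a minimal sketch (the Payload class is purely illustrative) showing that merely loading a pickle can execute attacker-chosen code:

```python
import pickle

class Payload:
    # __reduce__ tells pickle how to reconstruct the object;
    # an attacker can return any callable with any arguments.
    def __reduce__(self):
        return (eval, ("'attacker code ran: ' + str(6 * 7)",))

blob = pickle.dumps(Payload())

# Merely loading the bytes executes the attacker-chosen call;
# the result isn't even a Payload instance.
result = pickle.loads(blob)
print(result)
```

joblib.load() shares this behaviour since it builds on the same pickle protocol.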


If you’re serialising trusted data, you can define a schema for it and use Protocol Buffers, which will not only be safer but also faster. Pretending that you need to be able to serialise arbitrary data hurts everyone.
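For illustration, such a schema might look roughly like this; the message and field names are made up:

```proto
syntax = "proto3";

// Hypothetical schema for a record one might otherwise pickle.
// protoc generates the serialisation code; parsing a message
// cannot execute arbitrary code the way unpickling can.
message ModelSummary {
  string name = 1;
  repeated double weights = 2;
  uint32 epochs = 3;
}
```

After `protoc --python_out=. model.proto`, loading is a plain `ModelSummary.FromString(data)` call with no code execution involved.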
No bytecode compilation by default. pip compiles .py files to .pyc during installation. uv skips this step, shaving time off every install. You can opt in if you want it.
So it makes installation faster by making runtime slower.
Ignoring requires-python upper bounds. When a package says it requires python<4.0, uv ignores the upper bound and only checks the lower. This reduces resolver backtracking dramatically since upper bounds are almost always wrong. Packages declare python<4.0 because they haven’t tested on Python 4, not because they’ll actually break. The constraint is defensive, not predictive.
So it makes installation faster by installing untested code.
Sounds like a non-starter to me.
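For context, the step in question is ordinary bytecode compilation. A sketch of what pip does at install time, and what uv defers to first import unless you opt in (via a flag along the lines of --compile-bytecode, if I remember its name correctly):

```python
import compileall
import pathlib
import tempfile

# Simulate one installed module in a throw-away directory.
site = pathlib.Path(tempfile.mkdtemp())
(site / "pkg.py").write_text("X = 1\n")

# This is the extra install-time step: precompile to .pyc so the
# first import doesn't pay the compilation cost at runtime.
ok = compileall.compile_dir(str(site), quiet=1)

pycs = list((site / "__pycache__").glob("pkg.*.pyc"))
print(ok, len(pycs))
```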


Because those recommendations are written for new users. A new user will be better served by a distribution which puts user-friendliness at the forefront. If you’re not a newbie you probably don’t need recommendations because you already know what distributions are available out there.
You can just copy the file and set XAUTHORITY as necessary. Just make sure only the desired user can read it.
No, do not do that. This gives access to the display to anyone who can connect to it. The proper way is to give the user access to the file whose path is in $XAUTHORITY.
Capital letters in user names. 🤮
Debian has torbrowser-launcher; you might wanna take a look at that.
As for the issue, this could be because the user lacks credentials to connect to the display.
Firstly, and most importantly, executing grub-install requires super-user privileges. Rather than adding it to PATH you should instead run the command through sudo. A regular user typically does not need any of the sbin directories in their PATH.
As for the command itself, there are three things wrong with it:
1. PATH should only include directories, whereas you tried to add a path to an executable to it. So rather than /usr/sbin/grub-install/grub-install you should just add /usr/sbin.
2. Rather than appending to PATH, you’ve overwritten the variable. Instead you need PATH="$PATH:/usr/sbin/:/usr/local/sbin" (notice $PATH: at the beginning of the assignment).
3. export is unnecessary since PATH is already an environment variable. (That’s also a bashism but that’s likely an irrelevant issue.)
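Putting the fixes together, a corrected shell session might look like this (the GRUB target device is only an example):

```shell
# Append directories (not paths to executables) while keeping the
# existing PATH intact; no export needed, PATH is already exported.
PATH="$PATH:/usr/sbin:/usr/local/sbin"

# Better still: skip the PATH change and run the single privileged
# command through sudo with an explicit path, e.g.:
#   sudo /usr/sbin/grub-install /dev/sda

echo "$PATH"
```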
Honestly I don’t understand what this is showing. I guess it’s how long the lid was open?
Speaking of bash prompt: https://mina86.com/2015/bash-right-prompt/
What’s the point of linking LWN article rather than directly the official announcement?


Maybe these would help (although using them would require changing how you do email, and it’s not a solution for Android):


uutils developers aren’t earning any more than coreutils developers. This is an orthogonal discussion.


I’m essentially trying to find the most performant way to get a simple read/write buffer.
The stack is hot, so it’s probably better to put things there than to have a static array which is out of the memory cache and whose address is out of the TLB.
To answer your question: yes, this is undefined behaviour if the function is called from multiple threads. It’s also undefined behaviour if, by accident, you take a second reference to the array.
It’s unlikely that you really need to do anything fancy. I/O is usually orders of magnitude slower than dealing with memory buffers. Unless you profile your code and find the bottleneck, I’d advise against a static mutable buffer.
PS. On a related note, a shameless plug: Rust’s worst feature.


Yes, but I was talking about field name, not struct tag. And up to C99 my comment was correct.


You appear to be correct.


A tag is what goes after the struct keyword to allow referring to the struct type. Structs don’t have to have a tag. A name is what fields are called. Adapting Obin’s example:
struct foo { int baz; };
struct bar { struct foo qux; };
struct bar data;
data.qux.baz = 0;
foo and bar are tags for struct foo and struct bar types respectively; baz and qux are field names; and data is a variable name.
Whom do I message to replace the pie chart for the oldest Rust version used with a cumulative distribution graph? Even the legend is a mess on that one.