Namespacing by username or org is a good way to get people to download the wrong (and possibly compromised) crate though, since barely any documentation will mention that part of the name, and it will sometimes change over the lifetime of a project.
As with many posts like this one, please include some sort of paragraph on what your software actually does instead of just assuming everyone is familiar with it.
The connected network/platform is called the Atmosphere.
Not sure that is a good name from a search or brand recognition perspective.
You are confusing cargo and crates.io.
Cargo is the program doing all the downloading of dependencies; crates.io is the official registry (but there are ways to host your own for private crates, e.g. kellnr); Rust is just the compiler and does not download anything. For completeness’ sake, rustup is the program you can use to install cargo, the Rust compiler, and some other tooling and data files.
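If it helps, this is roughly what pointing cargo at a self-hosted registry looks like. The registry name, URL, and crate name below are made up for this sketch, and the exact index path depends on the registry software:

```toml
# .cargo/config.toml (registry name and index URL are placeholders)
[registries.my-kellnr]
index = "sparse+https://crates.example.com/api/v1/crates/"

# Cargo.toml of a consuming project: pull a private crate from that
# registry instead of crates.io
[dependencies]
my-private-crate = { version = "1.0", registry = "my-kellnr" }
```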
But why not use a proper builder pattern in that case?
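For reference, a builder along those lines keeps the fields private and avoids post-construction mutation entirely; the App type and its fields are invented for this sketch:

```rust
#[derive(Debug)]
pub struct App {
    name: String,
    verbose: bool,
}

#[derive(Default)]
pub struct AppBuilder {
    name: Option<String>,
    verbose: bool,
}

impl AppBuilder {
    pub fn name(mut self, name: impl Into<String>) -> Self {
        self.name = Some(name.into());
        self
    }

    pub fn verbose(mut self, verbose: bool) -> Self {
        self.verbose = verbose;
        self
    }

    pub fn build(self) -> App {
        App {
            name: self.name.unwrap_or_default(),
            verbose: self.verbose,
        }
    }
}

fn main() {
    // No `mut` needed at the use site; the value arrives complete.
    let app = AppBuilder::default().name("demo").verbose(true).build();
    println!("{app:?}");
}
```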
But a scope adds a nesting level which adds a lot more visual clutter.
The first one won’t work either for private fields.
Why not just a `let app = app;` line after the `let mut app = ...;` one?
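That rebinding trick (shadowing to drop mutability) looks like this; the App struct is a stand-in:

```rust
struct App {
    name: String,
    verbose: bool,
}

fn main() {
    // Mutable while the value is being set up…
    let mut app = App { name: String::new(), verbose: false };
    app.name = "demo".to_string();
    app.verbose = true;

    // …then shadow the binding to freeze it for the rest of the scope.
    let app = app;

    // app.verbose = false; // error: `app` is not declared as mutable
    println!("{} (verbose: {})", app.name, app.verbose);
}
```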
Maybe make it aggregate the data over all commits and then use that as an opportunity to learn about tools like cargo-bench, criterion, cargo-flamegraph, and other profiling and benchmarking tools and optimization techniques, to see if you can speed it up, reduce its memory usage, …?
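To give a taste of the criterion part: a minimal benchmark lives under benches/ and opts out of the default test harness; the aggregate function below is just a stand-in for the real per-commit work:

```rust
// benches/aggregate.rs
// Cargo.toml needs:
//   [dev-dependencies]
//   criterion = "0.5"
//   [[bench]]
//   name = "aggregate"
//   harness = false
use criterion::{criterion_group, criterion_main, Criterion};
use std::hint::black_box;

// Stand-in for the real aggregation over all commits.
fn aggregate(commits: &[u64]) -> u64 {
    commits.iter().sum()
}

fn bench_aggregate(c: &mut Criterion) {
    let commits: Vec<u64> = (0..10_000).collect();
    c.bench_function("aggregate 10k commits", |b| {
        // black_box stops the optimizer from deleting the measured work.
        b.iter(|| aggregate(black_box(&commits)))
    });
}

criterion_group!(benches, bench_aggregate);
criterion_main!(benches);
```

After that, `cargo bench` runs it, and cargo-flamegraph can be pointed at the same workload to show where the time actually goes.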
Maybe keep maintaining the HashMap you have now and use one of these less portable mechanisms in a test to alert you when you forget to register one?
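One way to set that up, sketched here with the inventory crate (it collects items via linker sections, hence the portability caveat) and an invented Handler type:

```rust
// Cargo.toml: inventory = "0.3"
use std::collections::HashMap;

pub struct Handler {
    pub name: &'static str,
}

// Every `inventory::submit!`ed Handler in the binary gets collected here.
inventory::collect!(Handler);

inventory::submit! {
    Handler { name: "commits" }
}

// The portable, manually maintained registry the real code keeps using.
fn manual_registry() -> HashMap<&'static str, Handler> {
    let mut m = HashMap::new();
    m.insert("commits", Handler { name: "commits" });
    m
}

#[test]
fn manual_registry_is_complete() {
    let manual = manual_registry();
    // Fails the test run if something was submitted somewhere in the
    // binary but never added to the manual HashMap.
    for handler in inventory::iter::<Handler> {
        assert!(
            manual.contains_key(handler.name),
            "{} is missing from the manual registry",
            handler.name
        );
    }
}
```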
Have you tried some tag soup parser? That should work as a last resort even if the ones building a tree structure don’t.
The thing I don’t get in these discussions is that there are people who have convinced themselves that a language we came up with in roughly the first 20 years of the industry’s existence is the pinnacle of programming language development, and that all those newer languages are really completely equivalent in terms of outcome once you add up their up- and downsides.
That seems based on the same misconception as the whole “fighting the compiler” view of Rust, namely that other languages are better because they let you get away with not thinking through the problems in your code up front. I am not surprised that this view is common in the C world, which sits pretty far toward the end of the spectrum that believes in the “sufficiently disciplined programmer” (as opposed to the end that advocates for static checks to avoid human mistakes).
The problem you mention is fundamentally no different from, e.g., changing some C internals in the subsystem you know well and thereby breaking code in some other C subsystem you don’t know at all. The only real difference is that in C, more likely than not, that code will break silently, without a compiler telling you about it. That the bit you know well or don’t know well is the language, rather than some domain knowledge about the code, is really not that special in practical terms.
Speaking of convenient things best not handled manually: do you have any plans to get support for it into crates like sqlx-postgresql, diesel, or humantime, where conversions need to happen, but in pretty much the same way for every user of the library?
If the iPhone had been hyped like AI is today people would have claimed you could replace your hammer, saw, garden hose and cooking utensils with an iPhone.
Meanwhile, current AI is pretty much useless for any purpose where you actually need a decent chance of getting quality results without human review.
Is it really that hard to distinguish genuine revolutions (iPhone, Rust, AI, reusable rockets, etc.) from hyped nonsense (Blockchain/web3, Metaverse, etc.)?
It is funny that you list AI under genuine revolutions while I would list it (or at least 90% of it) under hyped nonsense.
From the perspective of a library author, even evaluating whether a given bug could be considered a vulnerability is extra effort that is not strictly useful to the project itself, just to those users who don’t want to apply every single update.
I would say this very issue is at the core of the current CVE discussions that lead more and more projects to become their own CNAs.
Security people and corporate downstream consumers of security feeds want to invest the minimum of effort while pushing as much of the evaluation of what is and isn’t a vulnerability onto library authors as possible. However, this does not work. A vulnerability can only truly be evaluated in the abstract by investing a significant amount of effort, and that abstract evaluation is exactly what an upstream project would have to do. At the point of use, on the other hand, it is often trivial to rule out an exploit because the potentially vulnerable code is not even used by the project that depends on the library containing it.
Funnily enough, the same is true for languages that have huge standard libraries. They put anything that is convenient for solving their immediate problem in there. That is how languages like Python end up with multiple implementations of just about everything complicated (e.g. getopt, optparse, and argparse for command-line parsing).
That kind of “escape hatch” also makes reasoning about code a lot harder, merely because you have to consider that someone might have used it somewhere. You literally don’t want “escape hatches” from safety guarantees all over your language.