Newly manufactured vehicles should not already rust.
Seems to be a trend these days, unfortunately.
Rather than modifying your dependencies in the cache directory (which is really not a good idea), consider cloning the repo directly. You can then use a `[patch]` entry in your Cargo.toml to have all references to `iced_wgpu` point to your local modified copy.
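A minimal sketch of what that patch entry could look like; the path is an assumption about where you cloned the repo:

```toml
# Cargo.toml — redirect every reference to iced_wgpu to a local checkout.
# The path below is hypothetical; point it at your actual clone.
[patch.crates-io]
iced_wgpu = { path = "../iced/wgpu" }
```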
Correct - Rust's attribute grammar basically allows any parseable sequence of tokens enclosed in `#[attr ...]`. Serde specifically requires values to be in strings, but this is not a requirement of modern Rust or modern versions of `syn` (if you're comfortable writing your own parser for the meta).
The author is not a Rust expert though, so I'm not surprised to see this assumption, and it doesn't take away from the article.
Edit: for fun, `syn` has an example parsing an attribute in an attribute.
Adding a single unused function should have no effect on runtime performance. The compiler removes dead code during compilation, and there's no concept at runtime of "creating a function" anyway, since a function is just a compile-time construct for grouping reusable code (generally speaking - yes, the pedants will be right when they say functions appear in the compiled output, to some extent).
This can all be tested on Godbolt if you want to verify it yourself. Make a function with and without a nested unused function and compare the output.
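A minimal sketch of the kind of comparison you'd paste into Godbolt (the names are made up); both functions should compile to identical machine code:

```rust
// The unused nested function is dead code and gets stripped at compile time.
pub fn with_nested(x: u32) -> u32 {
    #[allow(dead_code)]
    fn never_called(y: u32) -> u32 {
        y * 2
    }
    x + 1
}

pub fn without_nested(x: u32) -> u32 {
    x + 1
}

fn main() {
    // Identical results, and the nested fn costs nothing at runtime.
    assert_eq!(with_nested(41), without_nested(41));
    println!("{}", with_nested(41)); // prints 42
}
```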
Sorry if I’m missing some sarcasm here, but if this is all you have to contribute, then as a professional software developer, I’d much rather work with the author of the article on a daily basis.
Two thoughts come to mind for me:
Or look at Python and their urllib, urllib2, new urllib, and the requests package on PyPi.
We already sort of saw this in Rust with crossbeam and standard channels, until of course they replaced the standard lib implementation with crossbeam’s implementation.
Still working on an assertions library that I started a few weeks ago. I finally managed to get async assertions working:
```rust
expect!(foo(), when_ready, all, not, to_equal(0)).await;
```
It also captures values passed down the assertion chain and reports them on failure (without requiring all types to implement `Debug`, since it uses autoref specialization).
Hopefully it’ll be ready for a release soon.
To be clear - I’m referring to devices with, say, 128MiB of device storage and memory when I refer to low memory machines (which I’ve developed for before actually). If you’ve got storage in the GB, then there’s no way optimizing for size matters lol.
My understanding is that it should almost only ever be set for WASM. Certain low-memory machines may also want it, but that's extremely rare. I'm not sure who's recommending it; I've only ever seen it recommended for WASM applications.
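Assuming the setting under discussion is Cargo's size-optimization profile option, it would look something like this:

```toml
# Cargo.toml — size-focused release profile, mainly worthwhile for wasm targets.
[profile.release]
opt-level = "z"   # optimize for binary size rather than speed
lto = true        # link-time optimization also tends to shrink binaries
```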
Anytime anyone mentions integrating an HTTP client into Rust’s std, all it takes is one good Python anecdote to shut that discussion right down.
Having the standard library be stable and not try to add a bunch of support for changing standards is a long-term benefit to the language. Having "de-facto standard libs" with crates like `url`, `http`, etc. ends up being better because they can evolve independently from the standard library, at the pace their respective domains evolve.
Although, I suppose an argument could be made that `url` is unlikely to really evolve anymore.
Ignoring the rest, just some thoughts about the list of proposed features:
A `capture` trait for automatic cheap clones
Automatic implicit cloning would be useful for high-level developers, but not ideal at all for low-level or performance-sensitive code. Not everyone using a shared pointer wants to clone it all the time, and the high-level use case doesn't justify the cost assumed by the low-level users.
Instead, being able to wrap those types with some kind of custom "clone automatically" type feels like a middle ground. It could be a trait like mentioned, or a special type in the standard library. Suppose we call it `Autoclone[T]` or something (using brackets because Lemmy nonsense). `Autoclone[Rc[T]]` could function like the article mentioned.
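A rough sketch of what such a wrapper might look like; `Autoclone` here is entirely hypothetical, not a real standard library type:

```rust
use std::ops::Deref;
use std::rc::Rc;

// Hypothetical opt-in wrapper marking a type as "cheap to clone freely".
// Under a capture-style proposal, closures could clone this implicitly.
#[derive(Clone)]
struct Autoclone<T: Clone>(T);

impl<T: Clone> Deref for Autoclone<T> {
    type Target = T;
    fn deref(&self) -> &T {
        &self.0
    }
}

fn main() {
    let shared = Autoclone(Rc::new(String::from("hello")));
    // Today the clone is still explicit; the proposal would make it
    // implicit only for types wrapped (or marked) like this.
    let handle = shared.clone();
    assert_eq!(*handle.0, *shared.0);
    println!("{}", *handle.0);
}
```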
Automatic partial borrows for private methods
Having "private" and non-"private" methods function differently feels like confusing behavior that should be avoided if possible. Also, "private" I assume refers to `pub(self)` methods (the default if unspecified), which are module-level methods (accessible within the module they're defined in). Anyway, there are years of discussion around this, so I'll just defer to that as to why it's not in yet.
I agree with the urge to make it happen though. Some method of doing partial borrows for methods would be nice.
Named and optional function parameters
This is what prompted me to even comment. What “every language” does for complex constructors is different per language. C#, for example, supports both named and optional parameters, but construction usually uses an object initializer:
```csharp
var jake = new Person("Jake")
{
    Age = 30,
    // ...
};
```
This is similar to Rust’s initializers:
```rust
let jake = Person {
    age: 30,
    ..Person::new("Jake")
};
```
Where it gets tricky is around required parameters. Optional ones don’t really matter since you can use the syntax above if you want, or chain methods like with the builder style.
As for the overhead of writing builders, there are already libraries that let you slap `#[derive(Builder)]` on types and get a builder type automatically.
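For illustration, here's roughly what such a derive expands to, written by hand (the `Person` fields are made up):

```rust
// Roughly what a #[derive(Builder)]-style macro generates, hand-written.
#[derive(Debug)]
struct Person {
    name: String,
    age: u32,
}

#[derive(Default)]
struct PersonBuilder {
    name: Option<String>,
    age: Option<u32>,
}

impl PersonBuilder {
    fn name(mut self, name: impl Into<String>) -> Self {
        self.name = Some(name.into());
        self
    }
    fn age(mut self, age: u32) -> Self {
        self.age = Some(age);
        self
    }
    fn build(self) -> Result<Person, String> {
        Ok(Person {
            name: self.name.ok_or_else(|| "name is required".to_string())?,
            age: self.age.unwrap_or(0), // optional field with a default
        })
    }
}

fn main() {
    let jake = PersonBuilder::default().name("Jake").age(30).build().unwrap();
    assert_eq!(jake.age, 30);
    println!("{:?}", jake);
}
```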
As for optional parameters, how those are implemented differs between languages. In C#, default values must be constant values. In Python, default values are evaluated once at function definition (so they behave like hidden globals), and this nonsense is possible:
```python
def count_calls(count=[]):
    # the default list is created once and shared between calls
    count.append(0)
    return len(count)
```
Anyway, all this is to say that the value of optional parameters isn’t obvious.
Named parameters are more of a personal-choice thing, but the idea falls apart when your parameter has no name and is actually a pattern:
```rust
async fn get_foo(_: u32) {}
```
Also, traits often use names prefixed with underscores in their default fn impls to indicate a parameter an implementer has access to, but the trait doesn’t use by default. Do you use that name, or the name the implementer defined? I assume the former since you don’t always know the concrete type.
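A small illustration of that naming mismatch (the trait and names are made up):

```rust
// Trait with a default method whose parameter is unused by the default body.
trait Greeter {
    fn greet(&self, _name: &str) -> String {
        // default impl ignores the parameter entirely
        String::from("hello")
    }
}

struct Friendly;

impl Greeter for Friendly {
    // the implementer picks its own parameter name
    fn greet(&self, person: &str) -> String {
        format!("hello, {person}")
    }
}

fn main() {
    // With named arguments: is the name `_name` (from the trait) or
    // `person` (from the impl)? Through a generic you only see the trait.
    assert_eq!(Friendly.greet("Jake"), "hello, Jake");
    println!("{}", Friendly.greet("Jake"));
}
```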
Faster unwrap syntax
We have that, it's called the `try` operator.
Okay, I know it's different, and I know everyone's use case is different, but I've been coding long enough to know that enabling easy unwraps means people will use them everywhere, despite proper error handling being pretty dang important in a production environment.
Thinking of my coworkers alone, if we were to start writing Rust, they’d use that operator everywhere because that’s what they’re familiar with coming from other languages. Then comes the inevitable “how do I add a try-catch block?” caused by later needing to handle an error.
Anyway, I prefer the extra syntax since it guides devs away from using that method over propagating the error upwards. For the most part, you can just use `anyhow::Result` and get most error types converted automatically.
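The propagation pattern in question, sketched with std types only; `anyhow::Result` works the same way but converts far more error types automatically:

```rust
use std::num::ParseIntError;

// Errors bubble up with `?` instead of being unwrapped at the call site.
fn parse_and_double(input: &str) -> Result<i32, ParseIntError> {
    let n: i32 = input.trim().parse()?; // propagates the error on failure
    Ok(n * 2)
}

fn main() {
    assert_eq!(parse_and_double(" 21 "), Ok(42));
    assert!(parse_and_double("oops").is_err());
    println!("{:?}", parse_and_double(" 21 "));
}
```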
Try trait
Yes please.
Specialization
Yes please.
Stabilizing async read/write traits to standardize on an executor API
I’d want input from runtime devs on this, but if possible, yes please.
Allowing compilation of builds that fail typechecking
???
How is the compiler going to know how to compile the code if it doesn’t know the types? This isn’t Python. The compiler needs to know things like how much memory to allocate, and there’s a ton of potential unsound behavior that can occur from treating one type as another, even if they’re the same size.
Anyway I’ll save the rest for later since I’m out of time.
I was mostly looking for something more composable, similar to how `jest` works. Some ideas that I've been working on are assertions like:
```rust
expect!([1, 2, 3])
    .all()
    .to_be_less_than(5);
```
I also have some ideas around futures that I’d like to play with.
Felt like making an assertions library since I can't seem to find anything that's quite what I'm looking for.
Inline consts also let you perform static assertions, like asserting a type parameter is not a zero-sized type, or a const generic is non-zero. This is actually pretty huge since some checks can be moved from runtime to compile time (not a lot of checks, but some that were difficult or impossible to do at compile time before).
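For example, a compile-time non-ZST check can be sketched with an inline const block (the function name is made up, and this needs a toolchain recent enough to allow generic parameters in inline consts):

```rust
// The assertion runs at compile time, once per monomorphization:
// instantiating with a zero-sized type is a build error, not a runtime panic.
fn require_not_zst<T>() {
    const {
        assert!(std::mem::size_of::<T>() != 0, "T must not be zero-sized");
    }
}

fn main() {
    require_not_zst::<u64>(); // fine: u64 is 8 bytes
    // require_not_zst::<()>(); // would fail to compile
    println!("ok");
}
```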
If by parallel you mean across multiple threads in some map-reduce algorithm, the compiler will not do that automatically since that would be both extremely surprising behavior and in most cases, would make performance worse (it’d be interesting to see just how many shapes you’d need to iterate over before you start seeing performance benefits from map-reduce). If you’re referring to vectorization, then the Rust compiler does automatically do that in some cases, and I imagine it depends on how the area is calculated and whether the implementation can be inlined.
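For contrast, opting into that parallelism explicitly might look like the scoped-thread sketch below (crates like rayon reduce this to a one-line change; the function name and chunking strategy are made up):

```rust
use std::thread;

// Explicit map-reduce: split the areas across threads, sum each chunk,
// then combine the partial sums. The compiler never does this for you.
fn total_area(areas: &[f64], workers: usize) -> f64 {
    let chunk = (areas.len() / workers.max(1)).max(1);
    thread::scope(|s| {
        let handles: Vec<_> = areas
            .chunks(chunk)
            .map(|c| s.spawn(move || c.iter().sum::<f64>()))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let areas = vec![1.0; 1000];
    assert_eq!(total_area(&areas, 4), 1000.0);
    println!("{}", total_area(&areas, 4));
}
```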
I agree with the conclusion, and the exploration is interesting enough that I think it was worth sharing. While the author seemingly knows this already based on their conclusion, it's still worth stressing: these kinds of microbenchmarks rarely reflect real-world performance.
This toy case doesn’t have many (if any) real world performance-sensitive applications. At best, using shapes in games comes to mind, but shapes there are often represented as meshes, and if you really need the area that much, you might find that precalculating the area once is more impactful on the performance than optimizing how fast the area is calculated.
Still, the author seems aware, and it seems to just be the author sharing their fun experiment.
You should look at the tools I linked. `cargo-make` would just change your flow to `cargo make run`, `cargo make check`, etc., and `just` has similar benefits. You'd handle the computer-specific logic in there, using a `.env` per computer if you want.
A couple options come to mind for me:
- Use `cfg` more. If your features are conditional based on OS, architecture, CPU features, etc., then you're probably better off using conditions other than `feature`. See the reference for more information.
- Use `cargo-make` or `just`. From there, you can do platform-specific build logic or even read variables from a `.env` like you mentioned you wanted.
That's my guess too. This would be way too many changes otherwise, with unsafe attributes, syntax changes, and feature additions (like `use` syntax).