• 0 Posts
  • 116 Comments
Joined 2 years ago
Cake day: September 2nd, 2023






  • And how are women pushed out of “man jobs”?

    And how are we fixing that?

    Is it bosses who prefer male coworkers turning down women? How is that different from bosses who want an artificial 50/50 split turning down men?

    Is it not being represented in advertising? How is that different from what happens now, where most advertising displays only women? Or, if there are both a man and a woman, the woman is usually centered in the picture or shown in a more important/powerful role.

    By “encouraging” women in the workplace, the very things you complain were done to women are now being done to men.






  • This may sound pedantic, but technically you are passing neither arrays nor tuples as generic type parameters.

    What you are doing is passing an array to a function.

    The type of the array is [i32;5]. Every value has a type.

    By passing the array to a function, you are allowing the compiler to infer which function you are calling, since that function is generic: it uses the type of the argument you passed to it.

    You can only pass values to function parameters. And you can only pass types as generic type parameters.

    Well, in this case it’s a little different, since it looks like you are passing a value (5) as a generic type parameter (LENGTH). But the const part of const LENGTH means it’s a generic over a value, not a generic over a type, which is the usual thing.

    EDIT: additionally, the : usize part tells you exactly what type the const parameter has to be.

    Note that you can’t use non-const values as generic parameters, since types are defined at compile time.

    EDIT 2: type inference just fills in some boilerplate for you. If we write that boilerplate manually, it’s easier to see which parameters go where.

    When you do Buffer::from([0,1,2,3,4]) what you are really doing is Buffer::<i32, 5>::from([0,1,2,3,4]). In fact, if you write that, the code will compile exactly the same. Now if you put a 6 instead of the 5, it won’t compile, since the type expected by the buffer and the type of the array you are passing are no longer the same.
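    To make this concrete, here is a sketch of a minimal const-generic Buffer. The original Buffer definition isn’t shown in the thread, so this exact shape is an assumption; it exists only to demonstrate how inference fills in the generic parameters.

```rust
// Hypothetical minimal Buffer (the thread's real definition is not shown).
#[derive(Debug)]
struct Buffer<T, const LENGTH: usize> {
    data: [T; LENGTH],
}

impl<T, const LENGTH: usize> Buffer<T, LENGTH> {
    fn from(data: [T; LENGTH]) -> Self {
        Buffer { data }
    }
}

fn main() {
    // Inference fills in the generic parameters from the argument's type:
    let a = Buffer::from([0, 1, 2, 3, 4]);
    // Spelling them out explicitly compiles to exactly the same thing:
    let b = Buffer::<i32, 5>::from([0, 1, 2, 3, 4]);
    // Buffer::<i32, 6>::from([0, 1, 2, 3, 4]) would NOT compile:
    // the array has type [i32; 5], not [i32; 6].
    assert_eq!(a.data, b.data);
}
```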


  • You don’t need to know at all what optimizations will happen; I mentioned that as an example of something that is known at compile time but not at run time.

    Whether a type will be inferred is determined by you. If you tell the compiler the type, it will never be inferred. If you don’t, the compiler will try to infer it. If it tries to infer the type and fails, it throws a compile error and won’t finish building the binary.

    The compiler will only successfully infer a type if it has enough information at compile time to know with certainty what type it is. Of course, the compiler is not perfect, so in complex situations it may fail even though it theoretically has enough information.

    Examples where inferring will succeed:

    
    fn copy<T>(x: T) -> T { // note: `in` is a reserved keyword, so the parameter is named x
        return x;
    }
    
    fn main() {
        let a = 47; //here a is of type i32, this was not inferred, it's just the default type of integer literals
        let b = copy(a); // here the compiler knows that a is i32, therefore it should call copy<i32>. Due to the type signature of copy<i32>, the type of b is inferred to be i32
    
        let c: u16 = 25; // here instead of the default, we manually specify that the type of c is u16
        let d = copy(c); // this is the same as b, but instead of calling copy<i32>, copy<u16> is called. Therefore d is inferred to be u16
    
        let e = 60; // at first, this looks like a, and it should be the default of i32
        let f: i64 = copy(e); // here, since f is specified to be i64, copy<i64> is called. Therefore e, instead of getting the default i32, is inferred to be i64, since inference takes precedence over the default.
    }
    

    Examples where inference will fail:

    
    trait Default {
       fn default() -> Self;
    }
    
    impl Default for i32 {
        fn default() -> i32 { return 0 }
    }
    
    impl Default for i8 {
        fn default() -> i8 { return 0 }
    }
    
    fn main() {
        let a: i32 = 8;
        let b = copy(a);
        let c: u8 = copy(b);
        // What type should be inferred to? If it calls copy<i32> because a is i32, then it can't call copy<u8> later to initialize c. And if it calls copy<u8> instead, it can't receive a as an argument since a is i32. Results in compiler error
    
        let d = Default::default();
        // What type is d? both i32 and i8 implement the Default trait, each with its own return type.
        // let d: i32 = Default::default(); would compile correctly.
    }
    

    These situations might be obvious, but inference works as a chain; sometimes hundreds of types are inferred in a single function call, so you should know the basics to diagnose these kinds of problems.
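    The “chain” can be seen in action with iterator adapters. This sketch (not from the original post) shows a single annotation driving inference backwards through several calls, none of which names a type explicitly:

```rust
fn main() {
    // The Vec<u64> annotation propagates backwards through collect()
    // and parse() — the intermediate steps never name a type.
    let parsed: Vec<u64> = "1 2 3"
        .split_whitespace()
        .map(|s| s.parse().unwrap()) // parse::<u64> inferred via `parsed`
        .collect();
    let total: u64 = parsed.iter().sum();
    assert_eq!(total, 6);

    // Removing the Vec<u64> annotation breaks the whole chain:
    // collect() could build many collection types and parse() could
    // produce many number types, so the compiler reports an error.
}
```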


  • Since deadcream already told you the reason, I’m gonna explain it in a more general way.

    There are 2 important times: compilation time and run time.

    At compilation time, everything that is constant, is known to the compiler, or can be calculated by it.

    At run time, everything* is known.

    Types have to be generated at compile time**. This means that generics also have to be known at compile time.

    In this case, both the “T” type of the buffer and its size “LENGTH” are generic, so they must be known at compile time. The compiler usually doesn’t know the values of variables at compile time, except when those variables are “const”; then it does. A value literal is the same as a const variable.

    So here, you provide a value literal ([0,1,2,3,4]), which is a fixed-size array; both its “T” type (i32 by default) and its length (5) are known at compile time. Buffer has all the information it needs to become a real type instead of a generic one. In this case, the type will be Buffer<i32, 5>

    * Things that are optimized out at compile time are not known at runtime, but yes at compile time. For example:

    const A: i32 = 5;
    const B: i32 = 5+1;
    
    fn main() {
        dbg!(B);
    }
    

    Since A is never used (except to calculate B, which is const), A is probably optimized out. However, since B is used, there is probably a 6 somewhere in memory. Notice how I say probably, since optimizations are optional. Further optimizations may even remove the 6 and convert it directly into an ASCII “6” to be printed.

    **While this is always true, trait objects (like Box<dyn ToString>) can act as a kind of runtime type, if you need that functionality.
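    A small sketch of that footnote: with a trait object, the concrete type behind the pointer is chosen at run time, even though every type involved is still fixed at compile time. (The pick function and its values are mine, not from the thread.)

```rust
// Which concrete type sits behind the Box is decided at run time.
fn pick(flag: bool) -> Box<dyn ToString> {
    if flag {
        Box::new(42_i32)  // concrete type: i32
    } else {
        Box::new("hello") // concrete type: &str
    }
}

fn main() {
    // The to_string() that runs is dispatched through the vtable.
    assert_eq!(pick(true).to_string(), "42");
    assert_eq!(pick(false).to_string(), "hello");
}
```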



  • IMAP is an incredibly simple protocol compared to the sum of all the protocols that are needed to implement a web browser.

    A web browser also has to be way more performant.

    Both an IMAP client and a web browser have to be reliable and secure. However, achieving that in a system as complex as a web browser is incredibly expensive.

    Web browsers are almost as complex as operating systems.

    Complexity, performance, reliability and security on that level are expensive. You would be delusional to think a web browser should be worth as much as an IMAP client.




  • RefCell is neither mutable nor immutable. It’s a type like any other. What is special about RefCell is that it has a method like:

    fn borrow_mut(&self) -> &mut T

    Which means you can get a mutable reference to its contents while holding only a shared reference to the RefCell. (The real signature returns a RefMut<'_, T> guard, which dereferences to &mut T and releases the borrow when dropped.)
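    A minimal demonstration (my own example, not from the thread): the RefCell isn’t declared mut and we only ever hold a shared reference to it, yet we can mutate the contents.

```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(5);

    // Only a shared reference to the RefCell itself:
    let r: &RefCell<i32> = &cell;

    // ...yet borrow_mut() hands out mutable access to the contents.
    *r.borrow_mut() += 1;

    assert_eq!(*cell.borrow(), 6);
}
```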

    By pointers I mean raw pointers. The pointers themselves are not unsafe. They are just normal pointers like you would have in C.

    Rc can be used to generate weak refs, which is what you want for your tree.
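    Here is a sketch of what that looks like for a tree node. The Node shape is hypothetical (your actual fields will differ): strong Rc edges point down to children, Weak edges point back up to the parent, so the parent↔child cycle doesn’t leak.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Hypothetical node: strong edges down, weak edges up.
struct Node {
    value: i32,
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn main() {
    let root = Rc::new(Node {
        value: 1,
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(Vec::new()),
    });
    let child = Rc::new(Node {
        value: 2,
        parent: RefCell::new(Rc::downgrade(&root)),
        children: RefCell::new(Vec::new()),
    });
    root.children.borrow_mut().push(Rc::clone(&child));

    // upgrade() returns Some while the parent is still alive.
    let parent = child.parent.borrow().upgrade().unwrap();
    assert_eq!(parent.value, 1);
}
```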

    I don’t know about servo. So I can’t tell you much about it.

    Don’t get your hopes up about the temporary unsafe thing. It’s not practical (maybe impossible) to make a safety checker that checks the entire program. The practical way is to build safe abstractions over unsafe code. For example, Rust itself is built on plenty of unsafe code; however, it provides you the appropriate abstractions so that what you do is safe.

    In this case, you can have a bit of unsafe code in your tree, so that the users of that tree get a safe API for something that internally needed unsafe code.

    For example, one case where you cannot automatically check safety is dereferencing a raw pointer returned by an FFI call into a dynamic library. Your automatic safety checker would need to read the library’s documentation, and at that point it’s not a real safety checker, because the documentation may lie or have bugs.


  • The safe, fast and easy way to do trees is by using Rc<RefCell<T>>. Rc/Arc allows data to be owned multiple times. You want this because a node can then be referenced by its parent and its child at the same time. However, Rc makes the inner type immutable, and you will probably want to mutate nodes in a tree; that’s what RefCell is for. With RefCell you do the borrow checking at run time instead of at compile time. This allows you to mutate T even though Rc only gives you an immutable reference. This is called interior mutability.

    RefCell doesn’t eliminate the borrow checker though, you must still follow its rules. If you try to get 2 mutable references to the inner type of RefCell, it will panic.
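    A quick demonstration of that run-time check (my own example): a second overlapping mutable borrow is rejected when the program runs, not when it compiles. try_borrow_mut is used here so the conflict shows up as an Err instead of a panic.

```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(vec![1, 2, 3]);

    // First mutable borrow is held alive by the guard `_first`.
    let _first = cell.borrow_mut();

    // An overlapping mutable borrow is a run-time error.
    let second = cell.try_borrow_mut();
    assert!(second.is_err());

    // Calling cell.borrow_mut() here instead would panic.
}
```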

    I know you don’t want to read about unsafe, but you should hear the alternative: just use pointers. Pointers aren’t restricted by the borrow checker, and self-referencing structures with interior mutability are not easy to borrow-check automatically. You can keep the raw pointers as private fields of the struct, so the code that is actually unsafe is confined to a few very small functions.

    Here’s why the other options are worse than pointers:

    Rc<RefCell<T>> will clutter your code with boilerplate and is a pain to deal with. Pointers are not too ergonomic in Rust (mainly because there is no -> operator), but they need far less boilerplate. Also, you already have to manually check the mutability rules, so why not all the rules?

    Another option that I’ve seen is “have a hashmap with all the nodes, and just store the id of the node instead of a reference”. This is the same as “have all the nodes in a Vec and store the index”. Think about this for a second. You have a pool of memory and a number that identifies which part of that pool is the memory you want. Seen it yet? That is exactly what a pointer is! If you do that, you’re just disabling the borrow checker anyway. You’ve created your own memory allocator and will have to manage your memory manually; at that point just use pointers, it will be the same except with less boilerplate and indirection.
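    For reference, the “nodes in a Vec, store indices” pattern looks like this (a sketch with made-up names; a real arena would also handle removal and index reuse). The index plays exactly the role of a pointer into a manually managed pool:

```rust
// Minimal index-based arena tree.
struct Arena {
    nodes: Vec<Node>,
}

struct Node {
    value: i32,
    parent: Option<usize>,  // "pointer" = index into the arena
    children: Vec<usize>,
}

fn main() {
    let mut arena = Arena { nodes: Vec::new() };

    arena.nodes.push(Node { value: 1, parent: None, children: vec![] });
    let root = 0;
    arena.nodes.push(Node { value: 2, parent: Some(root), children: vec![] });
    let child = 1;
    arena.nodes[root].children.push(child);

    // Following a "pointer" is just indexing — the borrow checker
    // never sees the node graph itself, as the comment points out.
    let p = arena.nodes[child].parent.unwrap();
    assert_eq!(arena.nodes[p].value, 1);
}
```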