• 0 Posts
  • 38 Comments
Joined 1 year ago
Cake day: June 11th, 2023



  • Python’s major pro is its simple, straightforward syntax, which excels at data handling. This has made it popular with novices of all shades […]

    For first-timer coders, Python is easier to learn, understand, and adapt than many low-level programming languages […]

    Is Python being easy to learn actually true? I can see it being easier than low-level programming. But there are alternatives like C# and Java that certainly seem much better and easier to me. Especially when you consider the ecosystem around them, not just the act of writing code.

    Plus, the Python language is a steadfast feature in the desktop Linux software landscape. It’s preinstalled on most Linux distributions, boasts extensive library support, and can be used to fashion very cool (as well as very basic) Qt, GTK, and other toolkit UIs.

    It’s certainly available, and more readily so on Linux. The whole v2/v3 mess was lackluster, though. But I guess being preinstalled is convenient, and more accessible than having to install Java or whatever.

    I’ve never seen JavaScript’s or Python’s popularity as evidence of, or correlating with, actual qualities, but more with self-propagating usage. Python was being used in science, then in AI, then AI became popular. To me, that seems like a natural consequence of propagation more than of simplicity or features over other frameworks and languages.



  • I found it hard to follow despite C# being my main driver.

    In the past, using ref was about modifiable variable references, i.e. passing variables by reference so the callee can modify them.

    All these introductions, even while following C# changes across recent versions, were never something I actively used, apart from occasionally adding ref to a struct so it can contain existing ref struct types. It has never seemed necessary.

    Even without ref, you work with reference and struct types, where referenced content can be modified elsewhere, and with IDisposable for object lifetimes that need cleanup.
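
    To illustrate the one case I do hit, a minimal sketch (the Tokenizer type and its members are made up for illustration): a struct has to be declared ref struct before it can hold an existing ref struct like ReadOnlySpan<char> as a field.

    ```csharp
    // Sketch: ReadOnlySpan<char> is a ref struct, so any struct holding it
    // as a field must itself be declared ref struct.
    ref struct Tokenizer
    {
        private ReadOnlySpan<char> _remaining; // ref struct field => containing type must be ref struct

        public Tokenizer(ReadOnlySpan<char> input) => _remaining = input;

        // Returns the next space-separated token, or an empty span when done.
        public ReadOnlySpan<char> Next()
        {
            var span = _remaining.TrimStart();
            int end = span.IndexOf(' ');
            if (end < 0)
            {
                _remaining = ReadOnlySpan<char>.Empty;
                return span;
            }
            _remaining = span[(end + 1)..];
            return span[..end];
        }
    }
    ```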





  • Because I stumbled over this paragraph (the page is linked from Google’s announcement) and was reminded of this comment, I’ll quote it here:

    First, developer education is insufficient to reduce defect rates in this context. Intuition tells us that to avoid introducing a defect, developers need to practice constant vigilance and awareness of subtle secure-coding guidelines. In many cases, this requires reasoning about complex assumptions and preconditions, often in relation to other, conceptually faraway code in a large, complex codebase. When a program contains hundreds or thousands of coding patterns that could harbor a potential defect, it is difficult to get this right every single time. Even experienced developers who thoroughly understand these classes of defects and their technical underpinnings sometimes make a mistake and accidentally introduce a vulnerability.

    I think it’s a fair and correct assessment.



  • They wrote that they’re using . as a placeholder commit message.

    I use f for such [f]ollowup/[f]ixup commits, and a for [a]dditional code/components/changesets. Both keys are trivial to enter. When cleaning it up afterwards (rough sketch below), f commits are typically squashed into previous ones, and a commits get a description and/or serve as a base for squashing.

    I can see . working too, but a more visible character (one with vertical height/substance) seems preferable to me.
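
    Roughly, that flow looks like this (the branch name is just an example):

    ```sh
    git commit -am f      # quick [f]ollowup/fixup commit, message is literally "f"
    git commit -am a      # [a]dditional code/component, to be described later

    # cleanup afterwards: squash the "f" commits into their predecessors and
    # give the "a" commits a proper message (mark fixup/reword in the editor)
    git rebase -i main
    ```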




  • How did it change how I think about version control? Not much. The goals are still the same; it just does many things better than the centralized tools that came before.

    When DVCS came up and became popular, I used Git and Bzr.

    At work, we used Subversion. In one project, we had one SVN repository in our office and the customer had one in their office. A colleague had created a sync util. We regularly synced all history onto an external hard drive, drove to the customer, and merged it there. It required a thorough, checklist-driven process, potentially conflict resolution, and generating a changelog for the big merge commit. Then we’d drive back to the office and merge back there.

    Of course, sometimes you’d use remote desktop to hotfix changes in their code base, meaning you’d now have the change in two places as different commits.

    Anyway, I’ve never found Git difficult. I used it, learned and understood it, and it’s consistent. I know enough “internals”/technical details to understand and use it well and without confusion.





  • I think the fundamental difference is that Git is a CLI tool, but that’s not how or where people use or want to use it. So obviously various interfaces get created. It’s not alternative CLIs that are created; it’s UIs and GUI frontends, for lack of a [more-than-barebones] official one.

    Shells remain CLI. Distros are also technically/technologically driven.

    Maybe the better analogy is that alongside vim and nano, we see many text editors and IDEs with GUIs.