• 0 Posts
  • 6 Comments
Joined 2 years ago
Cake day: July 3rd, 2023

  • Literally nothing. A corporation, especially a publicly traded one like that, can’t do much but maximize (ideally long-term, but usually short-term) shareholder returns.

    The Activision-Microsoft merger is a good recent example of this. During the antitrust trial, the CEO of Activision literally came out and said he believes it’s a bad idea that will hurt both the industry and the company in the long term, citing the impact of consolidation in Hollywood, but that he has to side with the board. He’s basically legally obligated to.

    I’m not saying it’s unjust or a bad system (and I’m definitely not trying to paint Bobby Kotick as a good guy); I just want to point out that corporations are very simple in their purpose, and nobody should expect anything more from them. If you’re disappointed that Google made this 180, that’s on you for falling in love with a corporation. They’re useful tools for producing goods and services, but terrible as a political tool for democracy.

    But for some reason it became popular to fetishize tech companies, and that spawned megalomaniacs like Elon, Zuckerberg, Horowitz, Thiel, etc., who feel like they should be the supreme rulers of our civilization.


  • Calculators made mental math obsolete. GPS apps made people forget how to navigate on their own.

    Maybe those were good innovations, maybe not. Arguments can be made both ways, I guess.

    But if AI causes critical thinking skills to atrophy, I think it’s hard to argue that that’s a good thing for humanity. Maybe the end game is that AI achieves sentience and takes over the world, but is benevolent, and takes care of us like beloved pets (humans are AI’s best friend). Is that good? Idk

    Or maybe this isn’t a real issue and the study is flawed? Or, more realistically, maybe my interpretation of the study is wrong, because I only read the headline of this article and not the study itself?

    Who knows?


    There’s so much misinfo spreading about this, and while I don’t blame you for buying it, I do blame you for spreading it. “It sounds legit” is not how you should decide whether to trust what you read. Many people think the earth is flat because the conspiracy theories sound legit to them.

    DeepSeek probably did lie about a lot of things, but their results are not disputed. R1 is competitive with leading models, it’s smaller, and it’s cheaper. The good results are definitely not from “sheer chip volume and energy used”, and American AI companies could have saved a lot of money if they had used those same techniques.


  • “The model weights and research paper are open source”

    I think you’re conflating “open source” with “free”.

    What does it even mean for a research paper to be open source? That they release a docx instead of a pdf, so people can modify the formatting? Lol

    The model weights were released for free, but you don’t have access to their source, so you can’t recreate them yourself. Like, Microsoft Paint isn’t open source just because the machine instructions are distributed for free. Model weights are the AI equivalent of an .exe file. To extend that analogy, quants, LoRAs, etc. are like community-made mods.

    To be open source, they would have to release the training data and the code used to train it. They won’t do that because they don’t want competition. They just want to do the Facebook Llama thing, where they hope someone uses it to build the next big thing, so that Facebook can copy that company and crush it with a much better model they never released, force it to sell, or kill it with the license terms.
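
    To make that concrete, here’s a rough sketch of the run-it-vs-rebuild-it gap, assuming the Hugging Face transformers API; the repo id below is just an example of a released checkpoint, not a claim about any specific model.

```python
# Rough sketch: what "weights released for free" actually gets you.
# Assumes the Hugging Face `transformers` library; the repo id below is
# only an example of a released checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # the "exe": weights + config

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# You can RUN the released artifact...
inputs = tokenizer("What does open source mean?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# ...but you cannot REBUILD it. Reproducing the weights would require the
# training corpus, the data-cleaning and training code, and the exact
# hyperparameters, none of which ship with the download. That is the gap
# between "free to use" and "open source".
```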