• 0 Posts
  • 14 Comments
Joined 2 years ago
Cake day: July 2nd, 2023

  • The problem is for organizations it’s harder to leave because that is where the people you want to reach are. That’s the only reason any org or company is on social media in the first place. If they leave too soon they risk too many people not seeing the things they send out to the community.

    It’s more an individual thing because so many people just have social inertia and haven’t left since everyone they know is already there. The first to leave have to decide if they want to juggle using another platform to keep connections or cut off connections by abandoning the established platform.



  • There’s something to be said for the idea that bitcoin and other crypto like it have no intrinsic value but can represent value we assign and be used as a decentralized form of currency not controlled by any one entity. That’s not how it’s actually used, but the argument exists.

    NFTs were a shitty cash grab because showing you have the token that you “own” a thing, regardless of what it is, only matters if there is some kind of enforcement. It had nothing to do with rights for property and anyone could copy your crappy generated image as many times as they wanted. You can’t do that with bitcoin.


  • Been playing around with local LLMs lately, and even with its issues, Deepseek certainly seems to just generally work better than other models I’ve tried. It’s similarly hit or miss when not given any context beyond the prompt, but with context it certainly seems to both outperform larger models and organize information better. And watching the r1 model work is impressive.

    Honestly, regardless of what someone might think of China and various issues there, I think this is showing how much the approach to AI in the west has been hamstrung by people looking for a quick buck.

    In the US, it’s a bunch of assholes basically only wanting to replace workers with AI they don’t have to pay, regardless of the work needed. They are shoehorning LLMs into everything even when it doesn’t make sense to. It’s all done strictly as a for-profit enterprise by exploiting user data, and they bootstrapped it by training on creative works they had no rights to.

    I can only imagine how much of a demoralizing effect that can have on the actual researchers and other people who are capable of developing this technology. It’s not being created to make anyone’s lives better, it’s being created specifically to line the pockets of obscenely wealthy people. Because of this, people passionate about the tech might decide not to go into the field and limit the ability to innovate.

    And then there’s the “want results now” where rather than take the time to find a better way to build and train these models they are just throwing processing power at it. “needs more CUDA” has been the mindset and in the western AI community you are basically laughed at if you can’t or don’t want to use Nvidia for anything neural net related.

    Then you have Deepseek, which seems to be developed by a group of passionate researchers who actually want to discover what is possible and find more efficient ways to do things. That’s compounded by sanctions preventing them from using CUDA; restrictions in resources have always been a major driver of technical innovation. There may be a bit of “own the west” there, sure, but that isn’t opposed to the research.

    LLMs are just another tool for people to use, and I don’t fault a hammer that is used incorrectly or to harm someone else. This tech isn’t going away, but there is certainly a bubble in the west as companies put blind trust in LLMs with no real oversight. There needs to be regulation on how these things are used for profit and what they are trained on from a privacy and ownership perspective.






  • Even using LLMs isn’t an issue, it’s just another tool. I’ve been messing around with local stuff, and while you certainly have to use it knowing its limitations, it can help for certain things, even if just helping parse data or rephrasing things.

    The issue with neural nets is that while it theoretically can do “anything”, it can’t actually do everything.

    And it’s the same with a lot of tools like this: people don’t understand the limitations or flaws, and corporations want to use it to replace workers.

    There’s also the tech bros who feel that creative works can be generated completely by AI because, like AI, they don’t understand art or storytelling.

    But we also have others who don’t understand what AI is and how broad it is, thinking it’s only LLMs and other neural nets that are just used to produce garbage.




  • Trump started the whole thing because he was unpopular on Tiktok. Republicans jumped on board because young people were politically organizing on the platform and they don’t like that.

    But up to that point, there was no real effort, even as much as they tried to claim “national security”.

    Then when the real information about Palestine was being spread there, the democrats jumped on board, because they are the same as republicans when it comes to foreign policy.

    That’s what started the actual push that gained momentum. They had no actual evidence about the stuff they claimed. Also, if the claim applied to Tiktok, it applies to all the other social media. But they don’t actually care about privacy. They only care about a platform that they couldn’t control and wasn’t catering to them.

    If they cared about privacy, they would have pushed general privacy legislation and/or regulation and oversight on all social media.

    The reversal was Biden realizing he does not have a good legacy, and with Trump, there was a lot more pro-Trump content this time around (and also tiktok gave him a million dollars). So now he gets to claim he “saved tiktok” when he was the start of the whole thing.



  • Subverting copy protection has always been a vague notion, because they sell you encrypted content, but they still have to sell you something with the decryption keys as well.

    Now, using the key to remove the encryption falls under “subverting”, but if you use the key to play the encrypted media directly, why does it matter what hardware it’s happening on?

    When it came to switch emulation, you didn’t really circumvent the copy protection; you exported the keys from a switch. The game images are basically dumped as is.

    Yes, you could find the keys elsewhere, but if you dumped your own, it wouldn’t really be considered subverting. Especially since with the jig you’re putting the switch into a state built into the switch hardware. It’s not even an exploit like jailbreaks usually are. The recovery boot mode is an intended service feature.

    The only illegal thing would be getting copies of games and keys from other people.