Laboratory planner by day, toddler parent by night, enthusiastic everything-hobbyist in the thirty minutes a day I get to myself.

  • 0 Posts
  • 26 Comments
Joined 1 year ago
Cake day: July 31st, 2023

  • In that case (as with most games), the near-worst-case scenario is that you are no worse off trusting Valve with the management of item data than you would be if it were in a public blockchain. Why? Because those items are valueless outside the context of the commercial game they're used in. If Valve shuts down CS:GO tomorrow, owning your skins as digital assets on a blockchain wouldn't give you any more protection than the current status quo, because those skins depend entirely on the game itself to be used and viewed – it'd be akin to holding stock certificates for a company that's already gone bankrupt and been liquidated: you have a token proving ownership of something that no longer exists.

    Sure, there’s the edge case that if your Steam account got nuked from orbit by Gaben himself, along with all its purchase and trading history, you could still cash out on your skin collection. Conversely, having Valve – which, early VAC-ban wonkiness notwithstanding, has proven itself to be a generally trustworthy operator of a digital games storefront for a couple of decades now – hold the master database means that if your account got hacked and your stuff was shifted off to others for profit, it’s much easier for Valve support to simply unwind those transactions and return your items to you. Infamously, reversing a fraudulent transaction on a blockchain ledger often requires forking the chain.


  • The idea has merit, in theory – but in practice, in the vast majority of cases, having a trusted regulator managing the system, who can proactively step in to block or unwind suspicious activity, turns out to be vastly preferable to the “code is law” status quo of most blockchain implementations. Not to mention most potential applications really need a mechanism for transactions to clear in seconds, rather than minutes to days, and it’d be preferable if they didn’t need to boil the oceans dry in the process of doing so.

    If I were really reaching, I could maybe imagine a valid use case for, say, a hypothetical federated open-source game that needed a trusted way for every node to validate the creation and trading of loot and items, as a layer of protection against cheating nodes duping items. But that’s insanely niche, and for nearly every other use case a database held by a trusted entity is faster, simpler, safer, more efficient, and easier to manage.



  • I agree, this is a good use of the live service model to improve the gameplay experience. Previous entries in the Flight Simulator series did have people purchase and download static map data for selected regions, and it was a real pain in the butt – and expensive, too. Even with FS2020 there is a burgeoning market for airport and scenery packs that have more detail and verisimilitude than Asobo’s (admittedly still pretty good) approach of augmenting aerial and satellite imagery with AI can provide.

    Bottom line, though, simulator hobbyists have a much different sense of what kind of costs are reasonable for their games. If you’re already several grand deep on your sim rig, a couple hundred for more RAM or a few bucks a month for scenery updates isn’t any big deal to you.


  • Right now Intel and AMD have less to fear from Apple than they do from Qualcomm. The people who can do what they need to do with a Mac, and want to, are already doing that; it’s businesses locked into the Windows ecosystem that drive the bulk of their laptop sales right now, and ARM laptops running Windows are the main threat in the short term.

    If going wider and integrating more coprocessors gets them closer to matching Apple Silicon in performance per watt, that’s great, but Apple snatching up their traditional PC market sector is a fairly distant threat in comparison.


  • The problem is that the private sector faces the same pressures about the appearance of failure. Imagine if Boeing adopted the SpaceX approach now and started blowing up Starliner prototypes on a monthly basis to see what they could learn. How badly would that play in the press? How quickly would their stock price tank? How long would the people responsible for that direction be able to hold on to their jobs before the board forced them out in favor of somebody who’d take them back to the conservative approach?

    Heck, even SpaceX got suddenly cagey about their first stage return attempts failing the moment they started offering stakes to outside investors, whereas previously they’d celebrated those attempts that didn’t quite work. Look as well at how the press has reacted to Starship’s failures, even though the program has been making progress from launch to launch at a much greater pace than Falcon did initially. The fact of the matter is that SpaceX’s initial success-though-informative-failure approach only worked because it was bankrolled entirely by one weird dude with cubic dollars to burn and a personal willingness to accept those failures. That’s not the case for many others.


  • NASA in-house projects were historically expensive because they were building single-digit numbers of everything (very nearly every vehicle was bespoke) and because failure was a death sentence politically, which meant they couldn’t blow things up and iterate quickly. Everything had to be studied and reviewed and re-reviewed and then non-destructively tested and retested and integration-tested and dry-rehearsed and wet-rehearsed and debriefed and revised and retested, ad infinitum. That’s arguably what you want in something like a billion-dollar space telescope that you only need one of and that has to work right the first time, but the lesson of SpaceX is that as long as you aren’t afraid of failure you can start cheap and cheerful, make mistakes, and learn more from those mistakes than you would from packing a dozen layers of bureaucracy into a QC program and having them all spitball hypothetical failure modes for months.

    Boeing, ULA and the rest of the old space crew are so used to doing things the old way that they struggle culturally to make the adaptations needed to compete with SpaceX on price, and then in Boeing’s case the MBAs also decided that if they stopped doing all that pesky engineering analysis and QA/QC work they could spend all that labor cost on stock buybacks instead.





  • The reverse. OceanGate saw how planes were being built and said, “let’s do that for submersibles!” even though in airplanes, composites are subjected to <1 atmosphere of tension loading and <2 g of aerodynamic loading, whereas their submersible was going to be subjected to >400 atmospheres of compression loading, and a much more corrosive environment (a rough back-of-the-envelope comparison is sketched after this comment).

    Composites in aircraft have a fairly long and uncontroversial history, and there’s nothing inherently wrong with them in that application. The biggest problem with composites is what happens with them at the end of their service life. Finding ways to recycle them without compromising safety is a good thing, and if it weren’t for Boeing having such a damaged reputation at the moment I think nobody would bat an eye.
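    A minimal back-of-the-envelope sketch of that loading comparison, assuming a seawater density of ~1025 kg/m³, the ~3,800 m depth of the Titanic wreck, and a typical airliner cabin pressure differential of ~0.6 atm (all figures are my own assumptions for illustration, not from the comment):

    ```python
    # Hedged illustration: hydrostatic pressure p = rho * g * h at wreck depth,
    # compared with the pressure differential an airliner fuselage carries.
    RHO_SEAWATER = 1025        # kg/m^3, approximate
    G = 9.81                   # m/s^2
    ATM = 101_325              # Pa per standard atmosphere

    depth_m = 3_800            # approximate depth of the Titanic wreck
    p_pa = RHO_SEAWATER * G * depth_m
    print(f"Submersible hull: ~{p_pa / ATM:.0f} atm of external compression")

    cabin_diff_atm = 0.6       # assumed airliner cabin pressure differential
    print(f"Airliner fuselage: ~{cabin_diff_atm} atm of internal pressure (tension)")
    ```

    That comes out to several hundred times the load, in compression rather than tension, and compression is generally the less favorable loading direction for fiber composites.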



  • Any time you see perovskite-based cells mentioned, you can assume for the time being that it’s just R&D. Perovskites are cool materials that open up a lot of neat possibilities, like cheaply inkjet-printing PV cells, but they have fundamental durability issues in the real world. When exposed to water, oxygen, and UV light, the perovskite crystals break down fairly rapidly.

    That’s not to say that the tech can’t be made to work – at least one lab team has developed cells with longevity similar to silicon PVs – but somebody’s going to have to come up with an approach that solves for performance, longevity, and manufacturability all at once, and that hasn’t happened yet. I imagine that when they do, that will be front-and-center in the press release, rather than just an efficiency metric.


  • This is actually becoming somewhat commonplace. For example, in many cutting-edge cancer therapies, blood is drawn from the patient, processed in on-site tissue-culture suites to extract the patient’s immune cells and sensitize them to some marker expressed by their specific cancer cells, and then the modified immune cells are returned to the patient’s room and transfused back into their body. It’s not cheap, per se, but it’s something most top-tier cancer centers can do, and the similar process of extracting stem cells, inducing them to differentiate into pancreatic islet cells, and transplanting those into the patient’s pancreas isn’t that big a jump – and it’d be cheaper than a lifetime of insulin in any case. It also points the way toward treating other kinds of organ failure without the risk of rejection.


  • Data center cooling towers can be closed- or open-loop, and can even operate in a hybrid mode depending on demand and air temperature/humidity. The problem is that the places where open-loop evaporative cooling works best are arid, low-humidity regions where water is a scarce resource to begin with (a rough sense of the water volumes involved is sketched after this comment).

    On the other hand, several of the FAANGs are building data centers right now in my area, where we’re in the watershed of the largest river in the country and it’s regularly humid and rainy. Any water used in a given process is either treated and released back into the river, or fairly quickly condenses back out of the atmosphere as rain a few hundred miles further east (where it eventually collects back into the same river). The only way water is “wasted” in this environment has to do with the resources used to treat and distribute it. However, because it’s often hot and humid around here, open-loop cooling isn’t as effective, and it’s more common to see closed-loop systems.

    Bottom line, though, I think the siting of water-intensive industries in water-poor parts of the country is a governmental failure, first and foremost. States like Arizona in particular have a long history of planning as though they aren’t in a dry desert that has to share its only renewable water resource with two other states, and offering utility incentives to potential employers that treat that resource as if it’s infinite. A government that was focused on the long-term viability of the state as a place to live rather than on short-term wins that politicians can campaign on wouldn’t be making those concessions.
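    A minimal sketch of the scale involved, assuming only the textbook latent heat of vaporization for water (~2.26 MJ/kg); the figures are my own, not from the comment:

    ```python
    # Idealized open-loop (evaporative) cooling: all rejected heat goes into
    # evaporating water. Real towers also lose water to drift and blowdown.
    LATENT_HEAT_VAPORIZATION = 2.26e6   # J/kg for water, approximate
    MWH_IN_JOULES = 3.6e9               # 1 MWh expressed in joules

    water_kg = MWH_IN_JOULES / LATENT_HEAT_VAPORIZATION
    print(f"~{water_kg:,.0f} kg (~{water_kg / 1000:.1f} m^3) of water evaporated "
          "per MWh of heat rejected")
    ```

    The idealized figure is already about 1.6 m³ per MWh of heat rejected; real towers lose somewhat more to drift and blowdown, while closed-loop systems trade most of that water use for extra electricity to run chillers and dry coolers.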



  • It’s not a coincidence that Texas is a hotbed of development for “microgrid” systems to cover for when ERCOT shits the bed – and of course all those systems are made up of diesel and natural gas generator farms, because Texans don’t want any of that communist solar power!

    I’ve got family in Texas who love it there for some reason, but there’s almost no amount of money you could pay me to move there. It’s bad enough when I have to work on projects in the state – contrary to the popular narrative, in my personal opinion it’s a worse place than California to try to build something, and that’s entirely down to the personalities that seem to gravitate to positions of power there. I’d much rather slog through the bureaucracy in Cali than tiptoe around a tinpot dictator in the planning department.


  • Thrashy@lemmy.world to Technology@lemmy.world · The decline of Intel..

    The only real link I’m aware of is that Intel operates an R&D center in Haifa (which, as it happens, is responsible for the Pentium M architecture that became the Core series of CPUs that saved Intel’s bacon after they bet the farm on NetBurst and lost to the Athlon 64). Linkerbaan’s apparent reinvention of the Protocols of the Elders of Zion notwithstanding, that office exists to tap into the pool of talented Israeli electronics and semiconductor engineers.


  • Thrashy@lemmy.world to Technology@lemmy.world · The decline of Intel..

    Historically AMD has only been able to take the performance crown from Intel when Intel has made serious blunders. In the early 2000s, it was Intel committing to NetBurst in the belief that processors could scale past 5 GHz on their fab processes if pipelined deeply enough. Instead they got caught out by unexpected quantum effects leading to excessive heat and power leakage, at the same time that AMD produced a very good follow-on to their Athlon XP line of CPUs in the form of the Athlon 64.

    At the time, Intel did resort to dirty tricks to lock AMD out of the prebuilt and server space, for which they ultimately faced antitrust action. But the net effect was that AMD wasn’t able to capitalize on their technological edge and ended up having to sell off their fabs for cash, while Intel bought enough time to revise their mobile CPU design into the Core series of desktop processors and reclaim the technological advantage. Simultaneously, AMD was betting the farm on Bulldozer, believing that the time had come to prioritize multithreading over single-core performance (it wasn’t time yet).

    This is where we enter the doldrums, with AMD repeatedly trying and failing to make the Bulldozer architecture work, while Intel coasted along on marginal updates to the Core 2 architecture for almost a decade. Intel was gonna have to blunder again to change the status quo – which they did, by betting against EUV for their 10nm fab process. Intel’s process leadership stalled and performance hit a wall, while AMD was finally producing a competent architecture in the form of Zen, and then moved ahead of Intel on process when they started manufacturing Zen2 at TSMC.

    Right now, with Intel finally getting up to speed with EUV and working on architectural improvements to catch up with AMD (and both needing to bridge the gap to Apple Silicon now), at the same time that AMD is going from strength to strength with Zen revisions, we’re in a very interesting time for CPU development. I fear a bit for AMD, as I think the fundamentals are stronger for Intel (a stronger data center AI value proposition, a graphics group seemingly on the upswing now that they’re finally taking it seriously, and still in control of their own destiny in terms of fab processes and manufacturing), while AMD is struggling with GPU and AI development and is dependent for process leadership on TSMC, which is perpetually under threat from mainland China. But there’s a lot of strong competition in the space, which hasn’t been the case since the days of the Northwood P4 and Athlon XP, and that’s exciting.


  • On the one hand, I agree with you that the expected lifespan of current OLED tech doesn’t align with my expectation of monitor life… But on the other hand, I tend to use my monitors until the backlight gives out or some layer or other in the panel stackup shits the bed, and I haven’t yet had an LCD make it past the decade mark.

    In my opinion OLED is just fine for phone displays and TVs, which aren’t expected to be lit 24/7 and don’t have lots of fixed UI elements. Between my WFH job and hobby use, though, my PC screens are on about 10 hours a day on average, with the screen displaying one of a handful of programs with fixed, high contrast user interfaces. That’s gonna put an OLED panel through the wringer in quite a bit less time than I have become used to using my LCDs, and that’s not acceptable to me.