Currently studying CS and some other stuff. Best known for previously being top 50 (OCE) in LoL, expert RoN modder, and creator of RoN:EE’s community patch (CBP). He/him.

(header photo by Brian Maffitt)

  • 17 Posts
  • 92 Comments
Joined 2 years ago
Cake day: June 17th, 2023

  • MHLoppy@fedia.io to Technology@lemmy.world: "NVIDIA is full of shit"
    3 days ago

    It covers the breadth of problems pretty well, but I feel compelled to point out that there are a few times where things are misrepresented in this post e.g.:

    Newegg selling the ASUS ROG Astral GeForce RTX 5090 for $3,359 (MSRP: $1,999)

    eBay Germany offering the same ASUS ROG Astral RTX 5090 for €3,349.95 (MSRP: €2,229)

    The MSRP for a 5090 is $2k, but the MSRP for the 5090 Astral – a top-end card being used for overclocking world records – is $2.8k. I couldn’t quickly find the European MSRP but my money’s on it being more than 2.2k euro.

    If you’re a creator, CUDA and NVENC are pretty much indispensable, or editing and exporting videos in Adobe Premiere or DaVinci Resolve will take you a lot longer[3]. Same for live streaming, as using NVENC in OBS offloads video rendering to the GPU for smooth frame rates while streaming high-quality video.

    NVENC isn’t much of a moat right now, as both Intel and AMD’s encoders are roughly comparable in quality these days (including in Intel’s iGPUs!). There are cases where NVENC might do something specific better (like 4:2:2 support for prosumer/professional use cases) or have better software support in a specific program, but for common use cases like streaming/recording gameplay the alternatives should be roughly equivalent for most users.

    as recently as May 2025 and I wasn’t surprised to find even RTX 40 series are still very much overpriced

    Production apparently stopped on these for several months leading up to the 50-series launch; it seems unreasonable to harshly judge the pricing of a product that hasn’t had new stock for an extended period of time (of course, you can then judge either the decision to stop production or the still-elevated pricing of the 50 series).


    DLSS is, and always was, snake oil

    I personally find this take crazy given that DLSS2+ / FSR4+, when quality-biased, produce average visual quality comparable to native for most users in most situations, and that was with DLSS2 in 2023, not even DLSS3, let alone DLSS4 (which is markedly better on average). I don’t really care how a frame is generated if it looks good enough (and doesn’t come with other notable downsides like latency). This almost feels like complaining about screen space reflections being “fake” reflections. Like yeah, it’s fake, but if the average player experience is consistently better with it than without it then what does it matter?

    Increasingly complex manufacturing nodes are getting expensive as all fuck. If it’s more cost-efficient to use some of that die area for specialized cores that do high-quality upscaling instead of spending all the die space on native rendering, then that’s fine by me. I don’t think branding DLSS (and its equivalents like FSR and XeSS) as “snake oil” is the right takeaway. If the options are (1) spend $X on a card that outputs 60 FPS natively or (2) spend $X on a card that outputs upscaled 80 FPS at quality good enough that I can’t tell it’s not native, then sign me the fuck up for option #2. People less fussy about static image quality and more invested in smoothness can be perfectly happy with 100 FPS and marginally worse image quality. Not everyone is as sweaty about static image quality as some of us in the enthusiast crowd are.

    There’s some fair points here about RT (though I find exclusively using path tracing for RT performance testing a little disingenuous given the performance gap), but if RT performance is the main complaint then why is the sub-heading “DLSS is, and always was, snake oil”?


    obligatory: disagreeing with some of the author’s points is not the same as saying “Nvidia is great”




  • I think you’ve tilted slightly too far towards cynicism here, though “it might not be as ‘fair’ as you think” is probably still largely true for people who don’t look into it too hard. Part of my perspective comes from this random video I watched not long ago, which is basically an extended review of the Fairphone 5 that also looks at the “fair” aspect of things.

    Misc points:

    • In targeting Scope 2 emissions they went with renewables to get down to 0 Scope 2 emissions. (p13)
    • In targeting Scope 3 emissions they rejigged their transportation a little (ocean freight instead of flying, it sounds like?) to reduce emissions there. (p14)
    • In targeting Scope 3 emissions they used an unspecified level of renewable energy in late manufacturing with modest claimed emissions reductions. (p14)
    • Retired some carbon credits, which, yes, are usually not as great as we would like, but still. (p14)
    • They may have some impact by choice of supplier even when they don’t necessarily directly spend extra cash on e.g., higher worker payments.
    • They may have some impact by engaging with suppliers. They provide small-scale examples of conducting worker satisfaction surveys via independent third party which seemed to provide some concrete improvements (p30) and “supporting” another supplier in “implementing best practices for a worker-management safety committee” (p30).
    • They’re reducing exposure to hazardous chemicals in final assembly, and according to them they are “the first company to start eliminating CEPN’s second round priority chemicals” (p31). I don’t know much about this.
    • With partners, they “organize school competitions in which children are educated about […] e-waste” (p40).
    • They’re “building local recycling capacity” in Ghana by “collaborating” with recycling companies (p40).
    • Extremely high repairability (with modest costs for replacement parts that make it financially sensible to repair instead of replace) keeps more phones in use, reducing all the bad parts of having to manufacture brand new phones.
    • The ICs make up a huge portion of the environmental costs of the phone (both with the FP4 (pp 40-41) and with the FP5 (p10)), and Fairphone isn’t big enough to get behemoth chip manufacturers to change their processes (though apparently they’re lobbying Qualcomm for socketable designs, as unlikely as that is to happen any time soon). If you accept the premise that for around half of the phone they have almost no impact on in terms of the manufacturing side, it makes their efforts on the rest a bit better, I guess?

    So yes, they are a long way from selling “100% fair” phones, but it seems like they’re inching the needle a bit more than your summary suggests, and that’s not nothing. It feels like you’ve skipped over lots of small-yet-positive things which are not simply “low economy of scale manufacturing” efforts.





  • Tbh I thought it was a bunch of non-lemmy platforms (e.g., mbin, which fedia.io runs - anecdotally it usually happens due to some types of edits not federating well), but if someone from infosec.pub (which runs lemmy) also had the problem then I’m actually not sure what the common factor is lol

    edit: the common factor might just be instances that have blocked lemmy.ml, which currently includes fedia.io (my instance) and infosec.pub (the other commenter’s instance), though I’m surprised links to lemmy.ml’s hosted images are included in the block


  • You’re making assumptions about how they work based on your intuition - luckily we don’t need to do much guesswork about how the sorts are actually implemented because we can just look at the code to check:

    CREATE FUNCTION r.scaled_rank (score numeric, published timestamp with time zone, interactions_month numeric)
        RETURNS double precision
        LANGUAGE sql
        IMMUTABLE PARALLEL SAFE
        -- Add 2 to avoid divide by zero errors
        -- Default for score = 1, active users = 1, and now, is (0.1728 / log(2 + 1)) = 0.3621
        -- There may need to be a scale factor multiplied to interactions_month, to make
        -- the log curve less pronounced. This can be tuned in the future.
        RETURN (
            r.hot_rank (score, published) / log(2 + interactions_month)
        );
    

    And since it relies on the hot_rank function:

    CREATE FUNCTION r.hot_rank (score numeric, published timestamp with time zone)
        RETURNS double precision
        LANGUAGE sql
        IMMUTABLE PARALLEL SAFE
        RETURN
            -- after a week, it will default to 0.
            CASE WHEN (now() - published) > '0 days'
                AND (now() - published) < '7 days' THEN
                -- Use greatest(2, score), so that the hot_rank will be positive and not ignored.
                log (greatest (2, score + 2)) / power (((EXTRACT(EPOCH FROM (now() - published)) / 3600) + 2), 1.8)
            ELSE
                -- if the post is from the future, set hot score to 0. otherwise you can game the post to
                -- always be on top even with only 1 vote by setting it to the future
                0.0
            END;
    

    So if there are no further changes made elsewhere in the code (which may not be true!), it appears that hot applies no extra negative weighting below a score of 0, because it takes the greater of 2 and score + 2 in its calculation. If correct, those posts you’re pointing out are essentially being ranked as if their net score were 0 (the clamp floor of 2 inside the log corresponds to score + 2 = 2), which I hope helps to explain things.
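
    To make the clamping concrete, here's a loose Python translation of the two SQL functions above (my own sketch, not Lemmy's actual code; it assumes Postgres log() is base 10 and that these are the current definitions):

    ```python
    import math

    def hot_rank(score: int, hours_since_post: float) -> float:
        # Rough translation of r.hot_rank: posts from the future
        # or older than 7 days get a rank of 0.
        if not (0 < hours_since_post < 7 * 24):
            return 0.0
        # greatest(2, score + 2): any score of 0 or below is clamped,
        # so a post sitting at -50 ranks the same as one at 0.
        return math.log10(max(2, score + 2)) / ((hours_since_post + 2) ** 1.8)

    def scaled_rank(score: int, hours_since_post: float, interactions_month: float) -> float:
        # Divides by the log of monthly community activity,
        # which boosts posts from smaller communities.
        return hot_rank(score, hours_since_post) / math.log10(2 + interactions_month)
    ```

    With this sketch, hot_rank(-50, 1.0) and hot_rank(0, 1.0) come out identical, which is the behaviour I'm describing.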


    edit: while I was looking for the function, someone else beat me to it, and the hot_rank function I posted may not be the current version, but hopefully you get the idea regardless!



  • After many years of selectively evaluating and purchasing bundles as my main source of new games, I’ve come to wonder whether it would’ve been better to just buy individual games when I wanted to play them, at whatever the price was at the time. The rate at which I get through games is far lower than the rate at which games show up in “good” bundles. In the end I’m not even sure I’ve saved money (given how many games I’ve bought but not yet played), and it takes more time to evaluate whether something’s a good deal or not.

    The upside is way more potential variety of games to pull from in my library, but if I only play at most like 1-2 dozen new games a year then I’m not sure that counts for much 🫠