
  • 23 Posts
  • 801 Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • AdrianTheFrog@lemmy.world to 196@lemmy.blahaj.zone · Rule · 13 hours ago

    I think the only actual performance downside, once everything is already loaded into VRAM, should be that the sampled values are less often contiguous in memory (which shouldn’t matter at all if mipmaps are being used)

    What other step could decrease performance?
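    To illustrate the mipmap point: a rough sketch of how mip selection keeps neighboring samples close together in memory (the function and numbers are illustrative, not any real GPU API):

```python
import math

def mip_level(texture_size, texels_per_pixel):
    # The GPU picks the mip whose texel footprint roughly matches one
    # screen pixel: level ~= log2(texels covered per pixel), clamped
    # to the available mip chain.
    max_level = int(math.log2(texture_size))
    level = math.log2(max(texels_per_pixel, 1.0))
    return min(max(round(level), 0), max_level)

# A 4096-wide texture viewed so each pixel spans ~8 texels samples
# mip 3, where adjacent pixels hit adjacent (cache-friendly) texels.
print(mip_level(4096, 8.0))  # 3
```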



  • AdrianTheFrog@lemmy.world to 196@lemmy.blahaj.zone · Rule · edited 20 hours ago

    If you store the textures in a format without hardware decoding support, though, then I guess you would only get the bus-speed advantage if you decode in a compute shader. Is that what people do? Or is it the storage bus that people care about, and not PCIe? I guess even if you decoded on the CPU you’d get faster retrieval from storage; the CPU would introduce a bit of latency, but it might be less than what the disk would have…
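    Back-of-envelope numbers for why the bus transfer matters (block-compressed formats like BC7 stay compressed in VRAM at 1 byte per texel; the figures below are just that arithmetic, not measured data):

```python
def texture_bytes(size, bytes_per_texel, mips=True):
    # A full mip chain adds roughly 1/3 on top of the base level.
    base = size * size * bytes_per_texel
    return int(base * 4 / 3) if mips else base

raw_rgba8 = texture_bytes(4096, 4)  # uncompressed RGBA8 upload
bc7       = texture_bytes(4096, 1)  # BC7: 16 bytes per 4x4 block = 1 B/texel
print(raw_rgba8 // 2**20, bc7 // 2**20)  # ~85 MiB vs ~21 MiB moved over the bus
```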


  • AdrianTheFrog@lemmy.world to 196@lemmy.blahaj.zone · Rule · edited 20 hours ago

    In a video game, you can walk close enough to a wall that only 5% of the texture fills your entire screen. That means the texture is very zoomed in, and you can clearly see the lack of detail even in a 4K texture, even on a 1080p monitor.

    You’re also not “pushing 4x the pixels through the game”. The only possible performance downsides are the VRAM usage (and probably slightly less efficient sampling)
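    The zoom math above works out roughly like this (pure arithmetic; the 5% figure is the hypothetical from the comment):

```python
def screen_pixels_per_texel(texture_size, fraction_visible, screen_width):
    # If only `fraction_visible` of the texture's width spans the
    # whole screen, that many texels get stretched across
    # `screen_width` pixels.
    texels_on_screen = texture_size * fraction_visible
    return screen_width / texels_on_screen

# 5% of a 4K (4096-wide) texture across a 1080p screen: each texel is
# blown up to ~9 screen pixels, so the missing detail is obvious.
print(round(screen_pixels_per_texel(4096, 0.05, 1920), 1))  # 9.4
```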



  • AdrianTheFrog@lemmy.world to 196@lemmy.blahaj.zone · Rule · edited 20 hours ago

    Once GPU hardware becomes good enough that even low-end computers can support real-time ray tracing at usable speeds, game developers will be able to remove the lightmaps, AO maps, etc. that usually comprise a very significant fraction of a game’s total file size. The problem with lightmaps is that even re-used textures still need different lightmaps, and you also need an additional 3D grid of baked light probes to light dynamic objects in the scene.

    Activision has a very interesting lighting technique that gets fairly good fidelity out of slightly lower-resolution lightmaps (allowing normal maps and some geometry detail to work over a single lightmap texel), in combination with surface probes and volume probes, but it’s still a fairly significant amount of space. It also requires nine channels afaik, instead of the three that a normal lightmap would have. (https://advances.realtimerendering.com/s2024/content/Roughton/SIGGRAPH Advances 2024 - Hemispheres Presentation Notes.pdf)
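    To put the channel-count point in perspective, here is the raw storage arithmetic for a hypothetical 4096×4096 lightmap atlas (sizes are before any texture compression; the atlas size is made up for illustration):

```python
def lightmap_mib(width, height, channels, bytes_per_channel=1):
    # Raw (uncompressed) size of a lightmap atlas in MiB.
    return width * height * channels * bytes_per_channel / 2**20

# A plain three-channel RGB bake vs a nine-channel encoding like the
# one described above is a straight 3x in storage.
rgb  = lightmap_mib(4096, 4096, 3)  # 48.0 MiB
nine = lightmap_mib(4096, 4096, 9)  # 144.0 MiB
print(rgb, nine)
```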



  • AdrianTheFrog@lemmy.world to 196@lemmy.blahaj.zone · Rule · 2 days ago

    The problem is that if you used normal compression formats, you would have to decompress them and then recompress them into the GPU-supported formats every time you loaded an asset. That would either increase load times by a lot, or make streaming in new assets in real time much harder.
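    The load-path difference can be sketched like this; every name and the size factors are stand-ins for illustration, not a real engine API:

```python
# Counter standing in for the real decode/re-encode cost per asset.
work = {"decode": 0, "encode": 0}

def load_gpu_native(blob):
    # BC/ASTC data uploads as-is: no CPU passes over the texels.
    return blob

def load_generic_compressed(blob):
    # PNG/JPEG path: full decode to raw pixels, then re-encode into a
    # GPU block format -- two extra passes over every asset, which is
    # what hurts load times and real-time streaming.
    work["decode"] += 1
    pixels = blob * 10                 # stand-in: decode expands the data
    work["encode"] += 1
    return pixels[: len(pixels) // 4]  # stand-in: block re-encode shrinks it

load_gpu_native(b"x" * 16)
load_generic_compressed(b"x" * 16)
print(work)  # the generic path pays a decode and an encode per asset
```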










  • I have a fairly quick ~$500 phone (Snapdragon 8 Gen 3) and tried a local AI app once (just something on F-Droid; you could probably find it), but the experience was pretty terrible. Something like a minute per image on the small local models from 2022. I’m sure you could do better, but my conclusion is that an $800 phone is about as useful as a $60 phone for generative AI, because you’re going to have to use some remote service anyways.



  • As an American: cutting-edge tech manufacturing isn’t something we do much of. In semiconductors, for example, Intel is still working on its new node (probably to be made in the US and Israel), but the new Intel CPUs you buy are going to be TSMC-made until then. And AMD, Nvidia, Apple, etc. are all making their chips at TSMC as well.

    A lot of tech companies are US-based, but very little of the actual production process happens in the US. I guess that doesn’t matter if you just care about the money going to the US, though, since buying an Nvidia-made chip will still give money to the (US-based) company.






















