

I used Rufus and it was pretty seamless
If you store the textures in a format without hardware decoding support, though, then I guess you would only get the bus speed advantage if you decode in a compute shader. Is that what people do? Or is it the storage bus that people care about and not PCIe? I guess even if you decoded on the CPU you’d still get faster retrieval from storage; the CPU would add a bit of latency, but it might be less than what the disk would have…
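For scale on the bus part, here’s some back-of-the-envelope math for a single 4k texture (the bandwidth figures are rough assumptions, not measurements):

```python
# Back-of-the-envelope only; the bandwidth figures are ballpark assumptions.
def transfer_ms(size_bytes: int, bandwidth_gb_s: float) -> float:
    """Milliseconds to move size_bytes over a link running at bandwidth_gb_s GB/s."""
    return size_bytes / (bandwidth_gb_s * 1e9) * 1000

raw_rgba8 = 4096 * 4096 * 4   # 64 MiB uncompressed RGBA8
bc7       = 4096 * 4096 * 1   # BC7 is 8 bits per texel -> 16 MiB, sampled directly by the GPU

for name, bw in [("NVMe Gen4 (~7 GB/s)", 7.0), ("PCIe 4.0 x16 (~25 GB/s usable)", 25.0)]:
    print(f"{name}: raw {transfer_ms(raw_rgba8, bw):.2f} ms vs BC7 {transfer_ms(bc7, bw):.2f} ms")
```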
In a video game you can walk close enough to a wall that only 5% of the texture fills your entire screen. At that point the texture is heavily magnified, and the lack of detail is clearly visible even in a 4k texture, even on a 1080p monitor (rough numbers below).
You’re also not “pushing 4x the pixels through the game”. The only possible performance downsides are the VRAM usage (and probably slightly less efficient sampling).
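To put rough numbers on the wall example above (toy math, assuming the visible 5% strip of the texture gets stretched across a 1920 px wide screen):

```python
# Toy texel-density check for the "walk up to a wall" case.
def texels_per_screen_pixel(texture_width: int, visible_fraction: float, screen_width: int = 1920) -> float:
    visible_texels = texture_width * visible_fraction
    return visible_texels / screen_width

for tex in (1024, 4096):
    d = texels_per_screen_pixel(tex, 0.05)
    print(f"{tex}px texture, 5% visible: {d:.3f} texels per screen pixel (~{1/d:.0f} screen px per texel)")
# 4k: ~0.11 texels/pixel -> each texel smeared over ~9 screen pixels
# 1k: ~0.027 texels/pixel -> each texel smeared over ~37 screen pixels
```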
Is there any way an additional decompression step can be done without increasing load times and latency?
Once GPU hardware becomes good enough that even low-end computers can do real-time ray tracing at usable speeds, game developers will be able to drop the lightmaps, AO maps, etc. that usually make up a very significant fraction of a game’s total file size. The problem with lightmaps is that even re-used textures still need their own unique lightmaps, and you also need an additional 3D grid of baked light probes to light dynamic objects in the scene.
Activision has a very interesting lighting technique that gets fairly good fidelity out of slightly lower-resolution lightmaps (letting normal maps and some geometry detail work across a single lightmap texel) in combination with surface probes and volume probes, but it’s still a fairly significant amount of space. It also requires nine channels afaik instead of the three that a normal lightmap would have. (https://advances.realtimerendering.com/s2024/content/Roughton/SIGGRAPH Advances 2024 - Hemispheres Presentation Notes.pdf)
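Just to show roughly what the channel count means for storage (the resolution and 16-bit depth here are made up for illustration, not what they actually ship):

```python
# Illustrative only: the 3 vs 9 channel counts come from the comment above, but the
# 2048x2048 resolution and 16 bits per channel are assumptions, not the real encoding.
def lightmap_mib(width: int, height: int, channels: int, bytes_per_channel: int = 2) -> float:
    return width * height * channels * bytes_per_channel / (1 << 20)

print(f"3 channels @ 2048x2048, 16bpc: {lightmap_mib(2048, 2048, 3):.0f} MiB")  # 24 MiB
print(f"9 channels @ 2048x2048, 16bpc: {lightmap_mib(2048, 2048, 9):.0f} MiB")  # 72 MiB
```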
4k textures do not become magically useless when you have a 1080p monitor. The thing about video games is that the player can generally move their head anywhere they want, including going very close to any texture.
The problem is that if you stored textures in normal compression formats, you would have to decompress them and then recompress them into the GPU-supported block formats every time you wanted to load an asset. That would either increase load times by a lot or make streaming in new assets in real time much harder.
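Something like this, as a sketch (zlib stands in for the disk codec and the BC7 “encoder” only models the output size; the function names are placeholders, not any real engine API):

```python
# Minimal sketch of the two load paths, just to show where the extra step goes.
import zlib

def fake_bc7_encode(rgba: bytes) -> bytes:
    # Real BC7 encoding is an expensive search; here we only model the size (8 bits/texel vs 32).
    return rgba[: len(rgba) // 4]

def load_generic_format(disk_blob: bytes) -> bytes:
    rgba = zlib.decompress(disk_blob)   # decode the PNG/JPEG-like disk format...
    return fake_bc7_encode(rgba)        # ...then re-encode to a GPU format on every single load

def load_precompressed(disk_blob: bytes) -> bytes:
    return zlib.decompress(disk_blob)   # texture was baked to BC7 offline: just unwrap and upload

rgba = bytes(1024 * 1024 * 4)                                          # 4 MiB of dummy RGBA8 texels
print(len(load_generic_format(zlib.compress(rgba))))                   # 1 MiB of "BC7"
print(len(load_precompressed(zlib.compress(fake_bc7_encode(rgba)))))   # same result, one step fewer
```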
A lot of people play online games. They aren’t exactly rare.
Something that is actually a lot less used (and probably a lot of effort to maintain as well) is WebXR. It’s a cool technology but not very useful right now (although I could imagine it becoming more important in the future).
Why is WebGL garbage? You don’t think 3d online games should be able to exist?
That’s anticheat, not DRM
I thought canvas was just for schools
Yeah, it’s probably not something I would have chosen if I had the option but I don’t really care about the curved screen.
Yeah, I got the OP 12 because it was just $50 more than the 12R on Amazon at the time.
It’s definitely powerful enough, but I’m slightly disappointed by the software: ARCore is just completely broken, and HDR is fairly spotty (it works in the YouTube app and the stock photos app but doesn’t work in Chrome or Google Photos)
Yeah, yesterday they announced they’re basically killing science funding (for everything except, like, AI and a few other buzzword topics)
We have a couple of good CS universities right now; I really hope that’s still true in four years
Edit: the actual way they do it is with things like sensor noise; it’s practically impossible to predict the random noise on a temperature sensor, for example
Edit2: oh wait, it’s literally just an LED and a CMOS sensor lol (well, I guess there’s a lot of processing etc. but still)
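The “lot of processing” part is usually conditioning, i.e. hashing a pile of raw noisy samples into uniform-looking bits (toy sketch, with Python’s random module standing in for the sensor; a real hardware RNG also does entropy estimation and health tests):

```python
# Toy version of the idea: gather raw, biased sensor readings, then hash ("condition")
# them so the output looks uniform. Not a vetted RNG design.
import hashlib
import random  # stand-in for the actual noisy sensor

def read_noisy_sensor() -> int:
    # Pretend these are the jittery low bits of a temperature / image-sensor reading.
    return random.getrandbits(8)

def random_block(samples: int = 512) -> bytes:
    raw = bytes(read_noisy_sensor() for _ in range(samples))
    return hashlib.sha256(raw).digest()   # 32 bytes of conditioned output

print(random_block().hex())
```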
I have a pretty quick ~$500 phone (Snapdragon 8 Gen 3) and tried a local AI app once (just something on F-Droid, you could probably find it), but the experience was pretty terrible. Like a minute per image on the small local models from 2022. I’m sure you could do better, but my conclusion is that an $800 phone is about as useful as a $60 phone for generative AI, because you’re going to end up using some remote service anyways.
I would get discord, youtube, lemmy, and reddit
I try to avoid new platforms tho bc I don’t trust myself not to get addicted and social media already takes up too much of my time
Speaking as an American: cutting-edge tech manufacturing isn’t something we do much of. In semiconductors, for example, Intel is still working on their new node (which will probably be made in the US and Israel), but the new Intel CPUs you buy are going to be TSMC-made until then. And AMD, Nvidia, Apple, etc. are all making their chips at TSMC as well.
A lot of tech companies are US-based, but very little of the actual production process happens in the US. I guess that doesn’t matter if you just care about the money going to the US, though, since buying an Nvidia-made chip still gives money to the (US-based) company.
I only see them like once a month or less when it’s spring or winter, and I’ve never seen one inside. I didn’t know they were that fragile, but the ones I see are probably too fast for me to catch lol
I think the only actual performance downside once everything is already loaded into VRAM should be the sampled values being less often close together in memory, i.e. worse texture cache locality (which shouldn’t matter much at all if mipmaps are being used; quick numbers below)
What other step could decrease performance?
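On the mipmap side, the chain itself is cheap in VRAM terms; quick check (assuming a square power-of-two texture):

```python
# A full mip chain only adds about a third on top of the base level
# (geometric series 1 + 1/4 + 1/16 + ... -> 4/3).
def mip_chain_texels(size: int) -> int:
    total = 0
    while size >= 1:
        total += size * size
        size //= 2
    return total

print(mip_chain_texels(4096) / (4096 * 4096))   # ~1.333 -> mips add ~33% VRAM
```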