DLSS5 should not be run where an artist does not want it to be run.
DLSS5 requiring two GPUs is horrible and can only push us further towards the dreaded “game in the cloud”.
But if a tool enhances a texture in a specific way, for instance sharpening the lines along a garment or adding shadows to an object under a lamp, how is that different from existing texture-mapping algorithms?
As artists learn to predict what these tools do, and where to take advantage of them (such as in backgrounds or on specific textures), I think they will become useful. At least I hope so. If Nvidia doesn’t provide tooling to do that, then I’m 100% on the same page as you.
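For context on what “sharpening as a post-process filter” means here, a generic sharpen is just a small convolution run over the finished frame. This is purely my own illustration of that class of filter, not what DLSS actually does internally:

```python
import numpy as np

def sharpen(frame):
    """Naive 3x3 sharpening convolution over a grayscale frame in [0, 1].

    Illustration only: a generic post-process sharpen kernel, not
    Nvidia's actual algorithm. The kernel sums to 1, so flat regions
    are left untouched while edges get exaggerated.
    """
    kernel = np.array([[ 0.0, -1.0,  0.0],
                       [-1.0,  5.0, -1.0],
                       [ 0.0, -1.0,  0.0]])
    h, w = frame.shape
    padded = np.pad(frame, 1, mode="edge")  # repeat border pixels
    out = np.empty_like(frame, dtype=float)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(padded[y:y + 3, x:x + 3] * kernel)
    return np.clip(out, 0.0, 1.0)
```

The point of the comparison above is that something like this runs on the final image with no knowledge of the scene, whereas a texture-mapping step runs inside the pipeline where the artist authored it.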
But, again, why? All of this is applied post-production, so the artist has no control over what the player actually sees on their end. I’d much rather have a static pipeline where I’m in control of the look and feel, while also providing the player with accessibility options like gamma adjustment.
We already have all of that. This ‘feature’ adds nothing of value to our pipeline, because it is all applied after the product has shipped, on the player’s computer.
Further, because it’s a filter, it obfuscates what’s actually happening underneath. Why learn to predict what the filter will do when you can simply not work with it and create scenes exactly how you want them?
This whole thing is a solution to a problem that doesn’t exist, offered simply to recoup their investments. It’s a complete waste of energy, materials, processing power, and so on. Absolutely unnecessary.