NVIDIA just dropped something I genuinely could not ignore. DLSS 5. And I want to talk about it, because this one hits different for anyone who creates 3D assets for games.
My reaction was split right down the middle. Part of me was genuinely impressed, legitimately excited as someone who loves real-time graphics. The other part of me, the part that's been doing this for years and teaches character creation, got very quiet.
Here's the honest thing I have to say first. When I looked at those comparisons, my gut reaction wasn't "wow, photoreal." It was "that looks AI." And I think a lot of you know exactly what I mean.
There's a look that AI imagery has developed at this point, and our eyes are trained to spot it. The skin is too smooth, too even, no pore variation, no asymmetry. The eyes catch light in this weirdly perfect way that reads as dead rather than alive. The hair has that fuzzy, almost cotton-candy quality at the edges. Everything has a certain softness to it, like someone put a glow filter over a 3D render and called it photorealism. It's the uncanny valley, but specifically for AI. I've shown AI-generated character renders to people with zero technical background and they immediately say "that looks fake," not because the pixels are wrong, but because the quality of perfection itself feels inhuman.
So what is DLSS 5 actually doing? It's a real-time neural rendering model: it takes the color buffer and motion vectors from a game frame and infuses that frame with photoreal lighting and materials, in real time, at up to 4K. This is not a filter or a post-process blur pass. The model understands scene semantics; it knows what skin is, what fabric is, what hair is. Look at the Resident Evil Requiem comparison. That gap between before and after is the gap character artists have spent entire careers trying to close. Jensen Huang called this the GPT moment for graphics, and honestly, he's right.
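To make those inputs concrete, here's a toy sketch (my own illustration, not NVIDIA's code) of the classic temporal step this family of techniques grew out of: reproject the previous frame along per-pixel motion vectors, then blend it with the current frame. DLSS replaces the hand-written blend below with a trained network, but color plus motion vectors are the same raw ingredients. The function name and parameters here are hypothetical.

```python
import numpy as np

def temporal_accumulate(curr, prev, motion, alpha=0.1):
    """Toy temporal accumulation: reproject the previous frame along
    per-pixel motion vectors, then exponentially blend with the current
    frame. A neural renderer swaps this hand-tuned blend for a learned
    one, but consumes the same inputs: color and motion vectors."""
    h, w, _ = curr.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # motion[..., 0] = dx, motion[..., 1] = dy: pixels moved since prev frame,
    # so we look *backwards* along the vector to find the pixel's old position
    src_x = np.clip(xs - motion[..., 0], 0, w - 1).astype(int)
    src_y = np.clip(ys - motion[..., 1], 0, h - 1).astype(int)
    reprojected = prev[src_y, src_x]
    # low alpha keeps more history (stable); high alpha trusts the new frame
    return alpha * curr + (1 - alpha) * reprojected

# tiny 4x4 example: a static scene, so motion vectors are all zero
curr = np.full((4, 4, 3), 0.8)
prev = np.full((4, 4, 3), 0.4)
motion = np.zeros((4, 4, 2))
out = temporal_accumulate(curr, prev, motion, alpha=0.5)
```

With `alpha=0.5` and a static scene, every output pixel is just the midpoint of the two frames, 0.6. The point is simply that the renderer hands the model per-pixel correspondence over time, which is why it can reason about a surface across frames instead of guessing from a single image.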
But there's something nobody is talking about in this conversation, and it needs to be said. This technology is built around photorealism. The model was trained to push imagery toward real-world lighting, real-world materials, real-world skin behavior. Think about how much of the industry lives outside of that. Fortnite is stylized. Valorant is stylized. Genshin Impact has one of the largest player bases on the planet, built entirely on a non-photorealistic look. If you apply DLSS 5 to a stylized character, the AI starts pulling it toward photorealism, inferring subsurface scattering on a face never meant to have it, erasing the exact artistic decisions that made that character work. That's not a small problem; that's half the market.
The part that worries me most is the studio response pattern. When a new tool lowers the cost of reaching a certain quality bar, studios rarely say "great, let's keep the same team and make better things." They say "great, let's make the same thing with fewer people." DLSS went from upscaling, to frame generation, to frames where 23 out of 24 pixels are AI-generated, to real-time photoreal lighting inference, all in about seven years. If AI can take a character model that's 80% of the way there and make it look cinematic at runtime, the question becomes whether studios still invest in the artist who gets it to 100%.
I don't have a clean answer. But as character artists, we understand something important here. Real skin has chaos in it: a sunburn patch on the nose, asymmetry in how the cheeks catch light, one pore cluster slightly larger than the rest. When I'm sculpting a face, some of the most important decisions I make are the imperfect ones. That's not a mistake; that's the craft. DLSS 5 is currently optimizing in the opposite direction, toward a kind of averaged perfection that ironically reads as less real to people who've been surrounded by AI imagery for years.
The ground is shifting. DLSS 5 is one of the clearest signals yet of where real-time rendering is heading. But the craft that sits upstream of the render, the design, the anatomy, the human judgment about what makes a face feel alive, that's not something a neural model can train away. Stay sharp, and keep building your skills deeper than what any AI can infer from a single frame.