SIGGRAPH Advances in Real-Time Rendering
Joined Sep 9, 2021
Welcome to the home for the well-established series of SIGGRAPH courses, covering late-breaking work and advances in real-time computer graphics!
This course was first presented at SIGGRAPH 2006 in Boston, MA.
Since then we have continued to bring you state-of-the-art techniques from the game development community, as well as innovative graphics methods and forward-looking research, with an emphasis on practical implementations.
Global Illumination Based on Surfels
This talk delves into the real-time global illumination approaches based on surfels designed by EA SEED and EA DICE. It is part of the SIGGRAPH 2021 Advances in Real-Time Rendering in Games course (advances.realtimerendering.com/).
Global Illumination Based on Surfels (GIBS) is a solution for calculating indirect diffuse illumination in real time. The solution combines hardware ray tracing with a discretization of scene geometry to cache and amortize lighting calculations across time and space. It requires no pre-computation, no special meshes, and no special UV sets, freeing artists from the tedious and time-consuming processes required by traditional solutions. GIBS enables new possibilities at runtime, allowing high-fidelity lighting in dynamic environments and for user-created content, while accommodating content of arbitrary scale. The algorithm is part of the suite of tools available to developers and teams throughout EA as part of the Frostbite engine.
This talk will detail the GIBS algorithm and how surfels are used to enable real-time ray-traced global illumination. We will describe how the scene is discretized into surfels on the fly, and why we think this discretization is a good fit for caching lighting operations. The talk will describe the acceleration structure used to enable efficient access to surfel data, and how this structure allows us to cover environments of arbitrary size while keeping a predictable performance and memory footprint. We will detail how the algorithm handles dynamic objects, skinned characters, and transparency. Several techniques have been developed to efficiently integrate irradiance on surfels. We will describe our use of ray guiding, ray binning, and spatial filters, and how we handle scenes with large numbers of lights.
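To make the surfel-caching idea concrete, here is a minimal CPU sketch of a surfel record and a hashed world-space grid for looking up nearby surfels. This is an illustration under assumptions, not Frostbite's actual data layout: the names Surfel and SurfelGrid, the fields, and the hash constants are all hypothetical.

// Illustrative sketch only -- not Frostbite's GIBS implementation.
// A surfel caches irradiance for a small disc of scene surface so that
// ray-traced lighting can be amortized across frames and across pixels.
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Vec3 { float x, y, z; };

struct Surfel {
    Vec3  position;    // disc center in world space
    Vec3  normal;      // disc orientation
    float radius;      // world-space footprint
    Vec3  irradiance;  // temporally accumulated lighting
};

// Hypothetical acceleration structure: surfel indices bucketed by hashed
// world-space cell. Hashing (instead of a dense grid) keeps the memory
// footprint bounded and predictable for environments of arbitrary size.
class SurfelGrid {
public:
    explicit SurfelGrid(float cellSize) : cellSize_(cellSize) {}

    void insert(uint32_t surfelIndex, const Vec3& p) {
        cells_[key(p)].push_back(surfelIndex);
    }

    // Surfels near a shading point: one hash lookup instead of a search.
    const std::vector<uint32_t>& query(const Vec3& p) const {
        static const std::vector<uint32_t> kEmpty;
        auto it = cells_.find(key(p));
        return it != cells_.end() ? it->second : kEmpty;
    }

private:
    uint64_t key(const Vec3& p) const {
        // Quantize to cell coordinates, then hash the triple.
        auto cell = [&](float v) { return (int64_t)std::floor(v / cellSize_); };
        return ((uint64_t)cell(p.x) * 73856093ull) ^
               ((uint64_t)cell(p.y) * 19349663ull) ^
               ((uint64_t)cell(p.z) * 83492791ull);
    }

    float cellSize_;
    std::unordered_map<uint64_t, std::vector<uint32_t>> cells_;
};

The hashed-cell design is one plausible way to get the property the abstract emphasizes: coverage of arbitrarily large worlds with a predictable performance and memory footprint.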
Speaker Bios:
Henrik Halen joined Electronic Arts' SEED research division as a Senior Rendering Engineer in 2017. His work at SEED is focused on real-time graphics algorithms, lighting and characters. Henrik's experience as a rendering engineer prior to joining SEED includes a decade of contributions to franchises such as Gears of War, Battlefield, Medal of Honor and Mirror's Edge.
Andreas Brinck has worked as a rendering engineer for more than two decades. He joined Electronic Arts in 2011 to help start Ghost Games and was later the rendering lead on NFS Rivals, NFS 2015, NFS Payback, and NFS Heat. In 2019 he joined DICE LA where he is currently working on the Battlefield franchise.
Kyle Hayward has worked as a rendering engineer since 2010. He has focused on multiple areas in graphics, from animation compression to global illumination, working on both offline and real-time solutions. He joined EA in 2012, and later became the NBA rendering lead from 2014 onwards. In 2019 he joined Frostbite, where he has been working on global illumination and raytracing.
Xiangshun Bei has been a rendering engineer within DICE LA at EA since 2019, focusing on real-time rendering and ray tracing. He currently works on the Battlefield franchise. Prior to DICE, he contributed to graphics drivers for Adreno GPU on Snapdragon SoC at Qualcomm. He received his master’s degree in computer science from University of Southern California in 2017.
Views: 42,426
Videos
SIGGRAPH 2021 Advances in Real-Time Rendering in Games course - Closing Remarks
2.2K views · 3 years ago
Closing Remarks for the SIGGRAPH 2021 Advances in Real-Time Rendering in Games course (advances.realtimerendering.com/), the leading course bringing state-of-the-art and production-proven rendering techniques for fast, interactive rendering of the complex and engaging virtual worlds of video games. This year the course includes speakers from the makers of several innovative games and game ...
SIGGRAPH 2021 Advances in Real-Time Rendering in Games course - Welcome and Introduction
6K views · 3 years ago
SIGGRAPH 2021 Advances in Real-Time Rendering in Games course (advances.realtimerendering.com/) is the leading course bringing state-of-the-art and production-proven rendering techniques for fast, interactive rendering of the complex and engaging virtual worlds of video games. This year the course includes speakers from the makers of several innovative games and game engines, such as Unity Technolo...
Improved Spatial Upscaling through FidelityFX Super Resolution for Real-Time Game Engines
5K views · 3 years ago
This talk provides in-depth intuition and details about AMD's FSR1 scaling algorithm, and shares the architecture behind integrating this technique into Unity's high-definition rendering pipeline. Part of the SIGGRAPH 2021 Advances in Real-Time Rendering in Games course (advances.realtimerendering.com/). All methods shared in the presentation are illustrated with diagrams, images, and pe...
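As a rough illustration of what edge-aware spatial upscaling means, here is a toy single-channel upscaler. It is emphatically not AMD's FSR1: the real algorithm (an EASU pass with a gradient-shaped 12-tap kernel, followed by RCAS sharpening) is far more sophisticated. This sketch only shows the core idea of down-weighting bilinear taps across strong edges so edges stay crisp instead of blurring.

// Toy, single-channel illustration in the spirit of edge-adaptive spatial
// upsampling -- NOT AMD's actual FSR1/EASU algorithm.
#include <algorithm>
#include <cmath>
#include <vector>

struct Image {
    int w, h;
    std::vector<float> px;  // row-major luminance
    float at(int x, int y) const {
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return px[y * w + x];
    }
};

Image upscaleEdgeAware(const Image& src, int dstW, int dstH) {
    Image dst{dstW, dstH, std::vector<float>(dstW * dstH)};
    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            // Map the destination pixel back into source space.
            float sx = (x + 0.5f) * src.w / dstW - 0.5f;
            float sy = (y + 0.5f) * src.h / dstH - 0.5f;
            int x0 = (int)std::floor(sx), y0 = (int)std::floor(sy);
            float fx = sx - x0, fy = sy - y0;

            // 2x2 neighborhood and its local mean.
            float c00 = src.at(x0, y0),     c10 = src.at(x0 + 1, y0);
            float c01 = src.at(x0, y0 + 1), c11 = src.at(x0 + 1, y0 + 1);
            float center = (c00 + c10 + c01 + c11) * 0.25f;

            // Down-weight taps that differ strongly from the local mean --
            // a crude stand-in for EASU's gradient-shaped kernel.
            auto w = [&](float c, float bilin) {
                return bilin / (1.0f + 8.0f * std::fabs(c - center));
            };
            float w00 = w(c00, (1 - fx) * (1 - fy));
            float w10 = w(c10, fx * (1 - fy));
            float w01 = w(c01, (1 - fx) * fy);
            float w11 = w(c11, fx * fy);
            float sum = w00 + w10 + w01 + w11;
            dst.px[y * dstW + x] =
                (c00 * w00 + c10 * w10 + c01 * w01 + c11 * w11) / sum;
        }
    }
    return dst;
}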
Experimenting with Concurrent Binary Trees for Large Scale Terrain Rendering
17K views · 3 years ago
In this talk, we share the results of our novel technique using concurrent binary trees for large-scale terrain rendering, as well as a deep dive into the latest efforts to integrate the original technique into the Unity game engine. Part of the SIGGRAPH 2021 Advances in Real-Time Rendering in Games course (advances.realtimerendering.com/). We will review the foundations of concurrent ...
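For readers unfamiliar with the data structure, here is a minimal CPU sketch of a concurrent binary tree. It is an illustration of the published idea, not the Unity integration the talk covers; the class layout and names are hypothetical, and the real technique packs the tree into a GPU bitfield.

// Minimal, illustrative CPU sketch of a concurrent binary tree (CBT).
// Active leaves of an implicit binary tree encode the current
// longest-edge-bisection triangulation; a sum-reduction over the tree lets
// many threads independently map a linear leaf index to its node, which is
// what makes split/merge decisions parallelizable on the GPU.
#include <cstdint>
#include <vector>

class CBT {
public:
    explicit CBT(int maxDepth)
        : maxDepth_(maxDepth),
          leaf_(std::size_t(2) << maxDepth, 0),
          sums_(std::size_t(2) << maxDepth, 0) {
        leaf_[1] = 1;  // start from the root triangle
        rebuildSums();
    }

    // Split a leaf into its two children (one bisection step).
    // Caller must not split nodes already at maxDepth_.
    void split(uint32_t node) {
        leaf_[node] = 0;
        leaf_[2 * node] = 1;
        leaf_[2 * node + 1] = 1;
    }

    // Sum-reduction: sums_[n] = number of active leaves under node n.
    // On the GPU this is one dispatch per tree level.
    void rebuildSums() {
        uint32_t first = 1u << maxDepth_;
        for (uint32_t n = first; n < 2 * first; ++n) sums_[n] = leaf_[n];
        for (uint32_t n = first - 1; n >= 1; --n)
            sums_[n] = leaf_[n] ? 1 : sums_[2 * n] + sums_[2 * n + 1];
    }

    uint32_t leafCount() const { return sums_[1]; }

    // Map the k-th leaf (k in [0, leafCount())) to its heap index. Each
    // GPU thread can do this independently -- no serial traversal needed.
    uint32_t decodeLeaf(uint32_t k) const {
        uint32_t n = 1;
        while (!leaf_[n]) {
            if (sums_[2 * n] > k) { n = 2 * n; }
            else { k -= sums_[2 * n]; n = 2 * n + 1; }
        }
        return n;  // the bits of n encode the triangle's bisection path
    }

private:
    int maxDepth_;
    std::vector<uint8_t>  leaf_;  // 1 where a node is a current leaf
    std::vector<uint32_t> sums_;  // per-node leaf counts (binary heap order)
};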
A Deep Dive into Nanite Virtualized Geometry
249K views · 3 years ago
Nanite, Unreal Engine 5's new virtualized geometry system, enables the rendering of trillion-triangle scenes at real-time framerates. This lecture will take a deep dive into how Nanite works, from mesh import all the way to final rendered pixels. Part of the SIGGRAPH 2021 Advances in Real-Time Rendering in Games course (advances.realtimerendering.com/). We will explain how the mesh-based data struc...
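To give a flavor of one idea the lecture covers, here is a hypothetical sketch of parallel per-cluster LOD selection over a cluster hierarchy of the kind Nanite uses. It is not Epic's code; the Cluster fields, the error metric, and selectClusters are illustrative assumptions.

// Illustrative sketch of per-cluster LOD selection -- not Epic's code.
// Meshes are split into small triangle clusters; groups of clusters are
// simplified into coarser parent clusters, forming a hierarchy. Every
// cluster can decide in parallel whether it belongs to the rendered cut:
// it is drawn iff its own projected error is small enough while its
// parent's is not. Because error is forced to be monotonic up the tree,
// these independent decisions produce a crack-free cut.
#include <cmath>
#include <cstdint>
#include <vector>

struct Cluster {
    float boundsCenter[3];
    float error;        // simplification error of this cluster (world units)
    float parentError;  // error of its parent group (FLT_MAX for roots)
};

// Hypothetical helper: projects a world-space error to pixels for a
// perspective camera (screenHeight in pixels, fovY in radians).
inline float projectedErrorPx(float worldError, float distance,
                              float screenHeight, float fovY) {
    return worldError * screenHeight / (2.0f * distance * std::tan(fovY * 0.5f));
}

// One thread per cluster on the GPU; a simple loop here.
std::vector<uint32_t> selectClusters(const std::vector<Cluster>& clusters,
                                     const float camPos[3],
                                     float screenHeight, float fovY,
                                     float thresholdPx = 1.0f) {
    std::vector<uint32_t> visible;
    for (std::size_t i = 0; i < clusters.size(); ++i) {
        const Cluster& c = clusters[i];
        float dx = c.boundsCenter[0] - camPos[0];
        float dy = c.boundsCenter[1] - camPos[1];
        float dz = c.boundsCenter[2] - camPos[2];
        float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
        float ownPx    = projectedErrorPx(c.error,       dist, screenHeight, fovY);
        float parentPx = projectedErrorPx(c.parentError, dist, screenHeight, fovY);
        // Draw this cluster iff it is fine enough but its parent is not:
        // exactly one cluster along each root-to-leaf path passes this test.
        if (ownPx <= thresholdPx && parentPx > thresholdPx)
            visible.push_back((uint32_t)i);
    }
    return visible;
}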
Large-Scale Global Illumination at Activision
17K views · 3 years ago
In this talk, we'll describe the key techniques behind the large-scale global illumination system at Activision. Part of the SIGGRAPH 2021 Advances in Real-Time Rendering in Games course (advances.realtimerendering.com/). We present a new precomputed lighting compression technique that enables high-performance and seamless reconstruction directly from the compressed lighting data. In addition, ...
Real-Time Samurai Cinema: Lighting, Atmosphere, and Tonemapping in Ghost of Tsushima
12K views · 3 years ago
In this talk, we describe some of the graphics techniques used in the production of Ghost of Tsushima. Part of the SIGGRAPH 2021 Advances in Real-Time Rendering in Games course (advances.realtimerendering.com/). Set in 13th century Japan, Ghost of Tsushima pays homage to classic samurai cinema with dramatic lighting, wind, clouds, haze, and fog, and features a beautiful open-world version of th...
Radiance Caching for Real-Time Global Illumination
52K views · 3 years ago
This talk will present an efficient and high-quality Final Gather for fully dynamic Global Illumination with ray tracing, targeted at next-generation consoles and shipping in Unreal Engine 5. Part of the SIGGRAPH 2021 Advances in Real-Time Rendering in Games course (advances.realtimerendering.com/). Hardware Ray Tracing provides a new and powerful tool for real-time graphics, but current hardw...
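To give a sense of what radiance caching buys, here is a minimal sketch of probe-based caching and interpolation. It is not the Lumen implementation presented in the talk (which uses adaptively placed screen-space probes, among much else); every name and parameter here is an illustrative assumption.

// Illustrative sketch of the radiance-caching idea -- not UE5's Lumen.
// Instead of tracing many rays per pixel, rays are traced from a sparse
// set of probes; each probe caches incoming radiance, pixels interpolate
// between nearby probes, and the cache is blended over frames so the
// expensive ray tracing is amortized in time and space.
#include <array>
#include <vector>

struct Vec3 { float x, y, z; };

// A probe caches radiance arriving from a small set of direction bins.
constexpr int kDirBins = 16;

struct Probe {
    Vec3 position;
    std::array<Vec3, kDirBins> radiance{};  // temporally accumulated cache
};

// Blend a freshly traced sample into the cache (exponential moving
// average). A few rays per probe per frame converge over time; `alpha`
// trades responsiveness against noise.
void accumulate(Probe& p, int dirBin, const Vec3& sample, float alpha = 0.1f) {
    Vec3& c = p.radiance[dirBin];
    c.x += alpha * (sample.x - c.x);
    c.y += alpha * (sample.y - c.y);
    c.z += alpha * (sample.z - c.z);
}

// Final gather at a shading point: interpolate the caches of the probes
// around it instead of tracing new rays. Weights here are plain inverse
// squared distance; a production system also weights by normal and
// visibility to avoid light leaking through walls.
Vec3 gatherIrradiance(const std::vector<const Probe*>& nearby,
                      const Vec3& shadePos) {
    Vec3 sum{0, 0, 0};
    float wSum = 0.0f;
    for (const Probe* p : nearby) {
        float dx = p->position.x - shadePos.x;
        float dy = p->position.y - shadePos.y;
        float dz = p->position.z - shadePos.z;
        float w = 1.0f / (1e-4f + dx * dx + dy * dy + dz * dz);
        for (int b = 0; b < kDirBins; ++b) {
            sum.x += w * p->radiance[b].x;
            sum.y += w * p->radiance[b].y;
            sum.z += w * p->radiance[b].z;
        }
        wSum += w;
    }
    if (wSum == 0.0f) return {0, 0, 0};
    float inv = 1.0f / (wSum * kDirBins);
    return {sum.x * inv, sum.y * inv, sum.z * inv};
}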
There are some truly clever ideas here that make you pause, think, and then see the sense. However, making it all work with all the 'devils in the details', that's on an entirely different level and I consider that an astonishing achievement. Hats off.
i know he obviously didn't write/invent all of this and there's a massive team at unreal that got this done, but the fact that one person can even *understand* the number of moving parts in this system is insane. watching this video could singlehandedly kill a software engineer from the 80s.
all of this is very cool but i think any method that is even slightly temporally incoherent is just not worth it. everything ive seen so far depends on the last frame or last couple of frames, and it inevitably causes smearing and/or takes too long to accumulate. its so distracting that id much rather stick to static lightmaps and other conventional methods.
With all this talent and expertise, they still can't add the state-of-the-art scripting language (c#) to their engine and force people to live in 1979 with c++
This guy sounds like a nerd... thanks for your work
The irony is that the huge RAM footprint is attributable to normal maps and color textures, not 3D assets. A polygonal object is like a 2D vector file: it's scalable to any pixel resolution and shouldn't take more RAM and hard drive space than a raster texture. Voxels can also be used to add detail without exploding CPU and GPU performance. Time to find a hybrid way or new tech ...
This kind of culling is already done in game engines like CSGO/CS2's Hammer engine, I think 🤔
Okay, hear me out, but tell me this guy doesn't look like Oscar Isaac from Ex Machina.
he was taking influence from the character lol
it's funny because this is pretty much what Euclideon was talking about for over a decade. it's also what John Carmack spoke about after Rage's release. John Carmack didn't have enough time at ID to see it through, and everyone said Euclideon was faking it
Pointless if it still needs a $2000 GPU to get 30 fps
Nuts. Absolutely nuts. I had no idea Nanite was so freaking complicated!
It'd be very nice if someone could make a video to explain the content of this video.
nanite is still the god of lod
nanite is the god of overdraw if anything
I’m here using Unity’s real-time light, light probes and reflection probes thinking that’s all there is. Watching this has blown my mind
It seems there are only a few videos from 2021. Will there be any other public videos?
This is exceptional work
So did they end up using this approach as a default for GI? Or do they use something else for new EA/Frostbite games?
any idea why nanite meshes do not update the nav mesh? even after setting the nav mesh to dynamic, the nav mesh simply ignores nanite meshes
i understood 2% of this but that 2% was crazy
"Instances are the new triangles". My favorite line.
So guys basically reinvented voxels but for polygons. Sounds crazy.
This is ROAM but implemented on the GPU.
That is awesome!
It's not clear to me how to propagate subdivision into neighboring triangles in parallel. Is there a way to avoid recursive subdivision on the half-edges of the main mesh triangles?
all this graphical wizardry, and yet unreal actors and objects are bloated to huge sizes in memory; huge disconnect between the graphics side and the rest of the engine
The guy is just reading the text word by word, which makes this very difficult to listen to. He doesn't explain, he doesn't communicate. It's just plain reading with some random pseudo-emotional intonation swings.
16:31 The boundary edges and vertices were kept while the vertices inside were deleted, resulting in a lower vertex count. There are more and more such algorithms, but it's not really intimidating or superior. This is all done in a tessellation shader. I know the concept can be simple, but implementing it is plain daunting. One more important thing is the UV map; I guess they have to virtualize it somehow, like storing the texture fragments after deleting those vertices, as well as their UV coords.
Awesome job!!
Good talk - the change of basis to apply simple constraints in the spatial domain is simple, but the details are clever and a bit mind-blowing. And finally, the 44:1 compression with little loss of detail was amazing. That's one where I'll need to read the papers. Could that technique also apply to image compression? I wonder how it would perform.
The way you describe things up to around 16:10 makes it sound like someone with decent coding expertise could implement the important bits of this idea themselves.
Yeah. Science!
It'd be much clearer if you could state the problem prior to an explanation.
He did
The problem is how to make more realistic global/ambient shading that is efficient. Most objects in a scene do not move, and typically the light source does not move, so the light reflecting off most of the surfaces is not changing and can be cached (stored) so it is calculated just once. Part of the problem is that this is a technical conference, and many of the terms are expected to be known by the attendees. It's also true that researchers aren't always the clearest communicators and may introduce new terms without defining them.
So what if there's no good previous frame to work from, like after a hard transition? Is there a big slow down, or do we get a single poor quality frame?
1.5 GB is a memory problem? I would gladly pay an 8 GB memory requirement for those tasty, tasty shadows and indirect lighting.
Interesting solution but I'll take path tracing with radiance caching over this anyway.
Pretty cool explanations. I didn't really understand most of the technical details and jargon, but the overall concept and parts of the execution were quite clear. 👍 1:04:02 I understood the slide about vertex clustering and duplicates completely 😁 which was cool. 1:08:14 One of the things I asked myself is how you handle translucent things, since you said you didn't use the materials when looking for the right triangles. So I guess you still have to implement this somehow.
About your second point, I believe Nanite doesn't support transparent materials.
Thanks much appreciated.
unusable
As a computer scientist, this stuff is almost pure magic. Congratulations to the team at Unreal for designing and implementing these algorithms and tools! This is amazing work.
Fewer than 100 people actually understand how this thing works.
Annnnd that's it for my standup.
Genius solution!
Based that he explained it fully. Ty
Remodeling manually is the same; this just does it in an automatic manner.
This is some crazy piece of engineering.
I miss the good old days when I thought shadow mapping was hard to understand
The team behind this are true geniuses. This amount of complexity being managed is pretty incredible.
The whole idea and initial work came from the guy in this video, Brian Karis. I believe he didn't get a team to lead until he had proved it worked. He came up with the idea, and they told him: go work alone on it for a while and see what happens.
Wouldn't this mean you have 1 draw call per cluster? Also... what if half the mesh uses LOD 0 and the other half has to use LOD 1? Wouldn't the triangles, due to simplification, either cross over or leave gaps?
I'm STILL struggling very hard to understand HOW this did NOT revolutionize every 3D suite out there. It's not as if they kept the tech completely hidden.
Implementation complexity. It's too new to be that widespread, and it doesn't even cover all cases yet. Eventually this may grow into a proper standard way of doing things. Give it a few more years.
"You can't displace a sphere into a shape like a torus." Poincaré 😒
Great job, very technical and well explained.
'Also known as an index buffer and a triangle mesh'.. Lol