Jonathan
Joined Oct 4, 2011
Adaptive Tessellation for Catmull-Clark Subdivision (SIGGRAPH course presentation)
For more information, see: advances.realtimerendering.com/s2022/index.html
Views: 1,063
Videos
Adaptive Tessellation for Catmull-Clark Subdivision (Render Test in Unity)
1.4K views · 2 years ago
Render test in Unity for an upcoming talk. More details soon! Credits: (3D asset) Kitchen PUP Asset by Christina Faraj (audio) The Smurfs' Nightmare - Smurf Village by Alberto Jose González
A Halfedge Refinement Rule for Parallel Catmull Clark Subdivision (paper presentation)
1.1K views · 3 years ago
More details: onrendering.com/
A Halfedge Refinement Rule for Parallel Catmull Clark Subdivision
878 views · 3 years ago
Voice by Gill Vanhoey. Details on the paper: onrendering.com/
Concurrent Binary Trees (with application to longest edge bisection)
1.6K views · 4 years ago
Voice by Maria Fernanda Rojas Moron. More details: onrendering.com/
Polygonal Integration of Bivariate Cauchy Distributions
807 views · 5 years ago
Supplemental video of our technical report entitled "Integration and Simulation of Bivariate Projective-Cauchy Distributions within Arbitrary Polygonal Domains". Music: Sayonara Senshi-tachi (composer: Shunsuke Kikuchi) Check out the report here: arxiv.org/abs/1909.07605 Check out the associated MATLAB code here: github.com/jdupuy/CauchyPolygons
Adaptive GPU Subdivision in Unity (Hackweek19 Project)
1.1K views · 5 years ago
Adaptive GPU Subdivision in Unity (Hackweek19 Project)
The SGGX microflake Distribution - LOD test video
914 views · 5 years ago
A simple test video that demonstrates the level of detail capabilities of the SGGX microflake distribution. I never released the video so here it is.
An Adaptive Parameterization for Material Acquisition and Rendering
481 views · 6 years ago
Download the database here: rgl.epfl.ch/materials To be presented at SIGGRAPH Asia 2018 in Tokyo. Voice by Kenneth Vanhoey.
A Spherical Cap Preserving Parameterization for Spherical Distributions
5K views · 7 years ago
See my webpage: onrendering.com/ We introduce a novel parameterization for spherical distributions that is based on a point located inside the sphere, which we call a pivot. The pivot serves as the center of a straight-line projection that maps solid angles onto the opposite side of the sphere. By transforming spherical distributions in this way, we derive novel parametric spherical distributio...
Real-Time Polygonal-Light Shading with Linearly Transformed Cosines
36K views · 8 years ago
Real-Time Polygonal-Light Shading with Linearly Transformed Cosines
Real-time Animation and Rendering of Ocean Whitecaps
13K views · 12 years ago
Real-time Animation and Rendering of Ocean Whitecaps
It's not clear to me how to propagate subdivision into neighboring triangles in parallel. Is there a way to avoid recursive subdivision on the half-edges of the main mesh triangles?
Hi Jonathan, Just wanted to say thank you for open-sourcing your solution. I have been digging into it and testing it against an OBJ file with 7 million points; it subdivides it in only a few milliseconds. I have been trying to go directly from OBJ to subD, without using the intermediate .ccm file. I was wondering if ConcurrentBitField could be applied directly to the OBJ data, and in that way accelerate the generation of the cc_Mesh?
Hi Simon! Yes, I think you could use ConcurrentBitField directly in conjunction with the OBJ mesh, but it will be slower than a .ccm file. The reason is that the complexity differs: ConcurrentBitField runs in O(log N), where N is the number of halfedges, whereas .ccm is O(1).
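To make the complexity difference concrete, here is a minimal sketch in C (the names and layout are my own assumptions, not the actual ConcurrentBitField / cc_Mesh API): a bitfield marks the first halfedge of every face and a sum-reduction tree over it answers "how many faces start at or before halfedge h" in O(log N), whereas a .ccm-style mesh stores the face index of every halfedge explicitly, so the same query is a single array read.

```c
#include <stdint.h>

/* Assumed layout: 'tree' is a full binary tree stored as an array,
   root at index 1, whose leafCount leaves (a power of two) are the
   "this halfedge starts a new face" bits, and whose internal nodes
   store the sum of their subtree. */
static int64_t PrefixSum(const int64_t *tree, int64_t leafCount, int64_t k)
{
    int64_t node = 1, lo = 0, hi = leafCount, sum = 0;

    while (hi - lo > 1) {
        int64_t mid = (lo + hi) / 2;

        if (k <= mid) {              /* prefix ends in the left subtree */
            node = 2 * node;
            hi = mid;
        } else {                     /* count the whole left subtree    */
            sum += tree[2 * node];
            node = 2 * node + 1;
            lo = mid;
        }
    }
    return sum + (k > lo ? tree[node] : 0);
}

/* O(log N): count the face-start bits in [0, h] to recover the face. */
int64_t FaceID_FromBitField(const int64_t *tree, int64_t leafCount, int64_t h)
{
    return PrefixSum(tree, leafCount, h + 1) - 1;
}

/* O(1): a .ccm-style mesh simply stores the mapping per halfedge. */
int64_t FaceID_FromCcm(const int32_t *halfedgeToFace, int64_t h)
{
    return halfedgeToFace[h];
}
```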
Just checking in to see how you are progressing! Anything new?
This is neat, but wouldn't it be better to just use Alembic?
This is amazing. Well done.
Thanks for the awesome video! As far as I understand, you store the CBT buffer in GPU memory and loop over it in parallel. If the LOD gets big, isn't it an issue to loop over a potentially huge number of nodes and recompute the sum reduction? I also don't clearly understand from the demo how and where you scale the geometry with the "PixelsPerEdge" parameter. Is it connected to the LOD level, and can it be cached somehow?
This really helped me understand CBT
The KSP2 devs just announced they will be replacing their old terrain generation algorithm with this one, as it was their main performance bottleneck at release, and it looks really promising!
came here just to say that man, looks very promising!!!
Edge hardness coming over from Blender would be awesome, but I don't think the default .blend importer brings that into Unity.
You could probably convert edge hardness into vertex color with a Python script and read the color on the Unity side to affect the Catmull-Clark tessellation.
I'd buy a 100 dollar asset store package with this working from blender 👍
Even with current performance, it might still be good enough for simple pixar-like characters in an otherwise conventional scene
Thanks for the presentation! Would there be an advantage in using mesh shaders to compute the tessellated geometry to reduce memory usage?
Hi, very nice presentation. I'm trying to figure out why you need the sum reduction tree to calculate next/prev for the halfedges. Is that because you allow subdivision of arbitrary n-gons? Otherwise, if we only allow up to, e.g., pentagonal faces as in the example, wouldn't it be sufficient to look at the face IDs of the halfedges in the local neighborhood? E.g., for halfedge 3 in the example around 12:00, we could look at 4 immediately to see that we need to wrap, and then look at most 5 steps (2 in this case) back to realize we will wrap around to 0. What am I missing? Isn't it unnecessarily slow to create the sum reduction tree for very large meshes? Say you allow n-gons; then wouldn't it be sufficient to binary search over the halfedges, just comparing the face ID until you hit the upper and lower bounds?
Hi Oliver, You're absolutely right: we only need the sum reduction because we allow for arbitrary n-gons in the mesh. If you're dealing with a semi-regular mesh (e.g., quad-only or triangle-only), the operators FaceID, NextID and PrevID become analytic. I actually discuss that in another presentation here: th-cam.com/video/FFWgQZsfwy8/w-d-xo.html And regarding your other question: "Say you allow n-gons, then wouldn't it be sufficient to binary search over the half edges just comparing the face-id until you hit the upper and lower bounds?" => We did not try that because I was expecting memory access to become a bottleneck (and binary search is memory-access heavy). But I don't see why it wouldn't work! Feel free to write anytime if you have more questions :)
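For reference, here is what "analytic" means in the quad-only case, sketched in C under the assumption that the four halfedges of face f occupy slots 4f..4f+3 (the indexing convention is my assumption; see the other presentation for the exact one used):

```c
/* Quad-only halfedge mesh: if face f owns halfedges 4f..4f+3 (layout
   assumed here), the connectivity operators need no lookup table and
   no sum reduction at all. */
int NextID(int h) { return 4 * (h / 4) + (h + 1) % 4; }
int PrevID(int h) { return 4 * (h / 4) + (h + 3) % 4; }
int FaceID(int h) { return h / 4; }
```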
Randomly found this while learning to write tessellation for my terrain shader. But what made me smile was hearing one of my favorite OSTs from one of the most underrated composers out there ♥ Can't get tired of the Smurfs OST.
Outstanding work.
Excellent job! I hope this will be adopted by the industry ASAP.
Hi! I'd love to see the code that generates the SGGX distribution from the mesh. What did you use?
Hi! The trees are polygon meshes originally. I wrote my own voxelizer to convert them into a volume with SGGX parameters.
Don't know if that was supposed to be funny but that was the most epic math video I've ever seen lol.
It was meant to be epic :) Thanks !
Very nice indeed. I'm wondering how one would use this with landscape tiles to implement a landscape system in a game. Can the landscape be split into uniform-sized tiles, and if so, would the resulting tiles be seamless with regard to tessellation/LODs? The goal is to be able to use different shaders per tile.
I still do not understand how the reflection tensor is related to recursive reflection; or is it only correct to compare it with a single reflection?
Just dropping by again to rewatch. Still great! Lol
Anything new Jonathan? I am itching to see your latest work!!!
I don't think I fully understand. Is this used in conjunction with tessellation shaders, or is it a complete solution by itself?
It's a complete solution by itself, which is meant to provide an alternative to tessellation shaders. The main benefit of this approach compared to tessellation shaders is that it allows you to cache your scene's geometry. This is useful whenever you have rendering algorithms that require multiple passes over your scene geometry (e.g., shadow mapping, Z prepass, path tracing, visibility buffering, etc.). With tessellation shaders, you would have to recompute the geometry at each pass. Here, you don't have to. Also, you have a very fine control over the adaptivity of the mesh. Hope this is clear :)
@@jdupuy01 That is very clear. Thank you. What a fantastic solution!
You are a research scientist at Unity! How exciting!
Outstanding.
I'd love to know more about this! Can you describe what is happening here?
Hi Ian! What you are seeing is an algorithm that adaptively triangulates subdivision surfaces in real time in the Unity game engine. It's essentially a merger of two of my latest projects: th-cam.com/video/FFWgQZsfwy8/w-d-xo.html and th-cam.com/video/Wr3yIJ927EE/w-d-xo.html The video shows the control cage at the beginning, then switches between wireframe and final render to show how the triangulation is refined on the fly (the small inset that pops up from time to time shows that the triangulation is only refined within the camera frustum). I will provide more details during a presentation at SIGGRAPH. Stay tuned :)
@@jdupuy01 Thank you Jonathan. I study topology (from a less mathematical perspective - I work in advertising where fidelity of structure is important in recreating physical products!) and really look forward to your presentation.
Awesome work!
Beautiful solution, great explanation. Many thanks : )
I want to learn more about this stuff
Is this less expensive than a ray-tracing simulation? The video only mentions a GPU implementation, but what would a software implementation be like?
Yes, it's less expensive than simulating it with ray tracing, because you only need one evaluation to get the entire shading due to an area light (rather than spawning many rays from the surface you're shading towards the area light). You can port our GPU implementation to the CPU straightforwardly.
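To illustrate the "one evaluation" point, here is a minimal C sketch of the classic edge integral that LTC builds on: once the polygon's vertices have been transformed by the inverse linear transform and projected onto the unit sphere, the clamped-cosine integral has a closed form, so no rays are needed. This is the textbook formula, not a verbatim copy of the paper's GLSL; names are mine.

```c
#include <math.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static vec3 cross3(vec3 a, vec3 b) {
    vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}

static vec3 normalize3(vec3 v) {
    float l = sqrtf(dot3(v, v));
    vec3 r = { v.x / l, v.y / l, v.z / l };
    return r;
}

/* Irradiance of a clamped cosine (normal = +z) over a spherical polygon.
   'v' holds the n polygon vertices, already transformed into cosine
   space, normalized onto the unit sphere, and wound counter-clockwise
   as seen from the shading point. One call replaces many shadow rays. */
float ClampedCosineIntegral(const vec3 *v, int n)
{
    float sum = 0.0f;

    for (int i = 0; i < n; ++i) {
        vec3  a = v[i], b = v[(i + 1) % n];
        float c = dot3(a, b);
        float theta = acosf(fmaxf(-1.0f, fminf(1.0f, c)));

        sum += theta * normalize3(cross3(a, b)).z;  /* edge contribution */
    }
    return fmaxf(0.0f, sum) * (0.5f / 3.14159265f);
}
```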
This is awesome! Thank you for the video and documentation.
Excellent demonstration! I'm wondering whether after the mesh is subdivided through your method it could then be further manipulated by some shader process to create surface detail. I'm sort of imagining animated wave patterns on a water surface, or fine wrinkles on a character's skin, or tree bark, for example. Ya, this looks really powerful in adding greater nuance and detail to modeling. Thanks for showcasing your efforts!
Excellent presentation, such an easy to understand explanation of the algorithm! I'm curious in practice in what way the result of the algorithm (in GPU memory) is used to generate geometry.
Thank you :) My paper explains how to generate the geometry from the CBT. You can also have a look at my code here: github.com/jdupuy/LongestEdgeBisection2D/blob/master/terrain/shaders/TerrainRenderCommon.glsl#L33
@@jdupuy01 Will this be implemented inside the Unity game engine? Because this would be really great for creating huge open-world games inside Unity.
@@kunkerkar It is currently being considered for integration. For more information see this talk "Experimenting With Concurrent Binary Trees for Large-scale Terrain Rendering" advances.realtimerendering.com/s2021/index.html
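For anyone else wondering how the geometry comes out of the CBT before reading the paper: here is a rough sketch of the idea in C. Each CBT leaf has a heap index whose bits below the leading one encode a path of longest-edge bisections, so the triangle can be rebuilt on the fly from the root triangle. The child ordering and struct names are my own convention; the linked GLSL does the same thing with small matrices.

```c
#include <stdint.h>

typedef struct { float x, y; } vec2;
typedef struct { vec2 p0, p1, p2; } tri; /* longest edge p0--p2, apex p1 */

static vec2 mid2(vec2 a, vec2 b) {
    vec2 m = { 0.5f * (a.x + b.x), 0.5f * (a.y + b.y) };
    return m;
}

/* Rebuild the triangle of a CBT leaf from its heap index (root = 1).
   Each bit below the leading one picks one half of a longest-edge
   bisection. Child ordering is a convention of this sketch only. */
tri DecodeTriangle(uint32_t heapID, tri root)
{
    int depth = 0;
    for (uint32_t i = heapID; i > 1u; i >>= 1) ++depth;

    tri t = root;
    for (int bit = depth - 1; bit >= 0; --bit) {
        vec2 m = mid2(t.p0, t.p2);   /* midpoint of the longest edge */
        tri  c;

        if ((heapID >> bit) & 1u) { c.p0 = t.p2; c.p1 = m; c.p2 = t.p1; }
        else                      { c.p0 = t.p0; c.p1 = m; c.p2 = t.p1; }
        t = c;
    }
    return t;
}
```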
Is this open source? And if so, where can I get the source code?
No, sorry (hackweek is a Unity internal event).
You had god-level editing even years ago.
What am I doing here -,-?
Awesome stuff! Just fyi this has been added to CRYENGINE and looks fantastic in practice!
Very good work!
Thank you for the stunning work! I see your implementation uses three additional iFFT channels to calculate the Jacobian factors. This is significant in terms of performance, so I was wondering whether you also tried to calculate the Jacobian through finite differences of the horizontal displacement (as a post-processing step), and if so, how the resulting quality compares between the two techniques.
Thanks :) Yes, finite differencing is a good idea! I believe it should work fine and run faster. Actually, there's a very cool GDC19 presentation that does just that: gpuopen.com/gdc-presentations/2019/gdc-2019-agtd6-interactive-water-simulation-in-atlas.pdf I should probably update this demo at some point and bench the performance difference... If I do, I'll ping you :)
Thank you!
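In case it helps anyone else reading this thread, here is the kind of finite-difference fallback being discussed, sketched in C. Dx/Dz stand for the horizontal (choppy) displacement components sampled from the displacement map and h is the sample spacing; these names are mine, not the demo's. Whitecaps are then drawn where the Jacobian falls below a threshold, i.e. where the surface starts to fold over itself.

```c
/* Central finite differences of the horizontal displacement (Dx, Dz)
   sampled with spacing h. A Jacobian below ~0.5 is a common whitecap cue. */
float JacobianFD(float (*Dx)(float x, float z),
                 float (*Dz)(float x, float z),
                 float x, float z, float h)
{
    float Jxx = 1.0f + (Dx(x + h, z) - Dx(x - h, z)) / (2.0f * h);
    float Jzz = 1.0f + (Dz(x, z + h) - Dz(x, z - h)) / (2.0f * h);
    float Jxz =        (Dx(x, z + h) - Dx(x, z - h)) / (2.0f * h);
    float Jzx =        (Dz(x + h, z) - Dz(x - h, z)) / (2.0f * h);

    return Jxx * Jzz - Jxz * Jzx;
}
```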
Hahahaha very nice choice of music :D All math related videos should have a soundtrack to it :P
For wave displacements, do you swap the Gaussian numbers? I've seen a few papers use this for the height spectrum's random Gaussian numbers: (n1 + i*n2). Then for the x/z displacement they used this spectrum of random Gaussian numbers: (n2 - i*n1). Tessendorf's paper never mentions this for the choppiness of waves, but other papers seem to do it without explaining why. Did you come across this?
Could you point me to references that rely on such a swapping procedure so I can have a look?
Is this using polygons, or something else?
Yes, this is using polygons.
Hi Jonathan, which rendering engine did you use to implement this?
Sorry for my layman question, is this a method of tessellation?
I welcome any question :) This is a level-of-detail method for voxels (while tessellation is a level-of-detail method for polygons).
@@jdupuy01 Oh I see! Quite easy to understand! Thank you! So I guess if you zoom in further, the voxel subdivision stops and individual voxels become apparent, or not?
You're welcome. If you zoom further you would indeed see the voxels. Note that you can attenuate this effect if you interpolate neighbouring voxels (just like we usually do for textures).
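A minimal sketch of why that interpolation works for SGGX, in C: the distribution is stored as a symmetric 3x3 matrix S (6 coefficients), and those coefficients can be blended linearly much like texel colors, which is what makes voxel filtering and LOD transitions smooth. The struct layout below is my own, not the paper's exact storage format.

```c
/* SGGX voxel: the six unique coefficients of the symmetric matrix S. */
typedef struct { float xx, yy, zz, xy, xz, yz; } SGGX;

/* Linear blend of two SGGX voxels (t in [0,1]); extend to trilinear
   interpolation the same way textures are filtered. */
SGGX SGGXLerp(SGGX a, SGGX b, float t)
{
    SGGX r = {
        a.xx + t * (b.xx - a.xx),
        a.yy + t * (b.yy - a.yy),
        a.zz + t * (b.zz - a.zz),
        a.xy + t * (b.xy - a.xy),
        a.xz + t * (b.xz - a.xz),
        a.yz + t * (b.yz - a.yz)
    };
    return r;
}
```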
Does it mean that for any spherical distribution with analytical sampling it is possible to construct a sampling method for the distribution clipped to an arbitrary spherical cap?
So, was the source code of the Unity Engine itself modified for this paper? Or what role did Unity Technologies play? It's not entirely clear.
This work was done within Unity Labs. One of the goals of Unity Labs is to research new rendering algorithms that will improve existing real-time graphics. Some of these algorithms will end up in the engine, and some won't; that's part of the research process ;) In the meantime, we are sharing our results with the world to demonstrate what kind of technology can be achieved today, and to explain how to achieve it. That's the goal of this video and the technical paper it supplements. Note that for this particular work, most of the renderings were done in Unity using custom shaders.
The GitHub link shows *.c and *.h files for a standalone demo. Does this mean we don't get to see the code you used in Unity?
Yes, the github link hosts a standalone demo. We will release a Unity package soon. If you're interested, I'll provide you with the link once it's online :)
Yes. Very interested. I have been following research in Lighting and Shadows from the SteamVR team, Asset store developers, amateurs, really everyone. This research is of great interest to me :)
Will this see the light? I mean in Unity.
Nicely put ^^ There's no precise date yet, but sphere lights will be supported in Unity eventually, yes.
What about now?
What does "this spherical integral is invariant to linear transformations" intuitively mean? I've only recently started reading this blog : mynameismjp.wordpress.com/2016/10/09/sg-series-part-1-a-brief-and-incomplete-history-of-baked-lighting-representations/ so I was able to understand that its difficult to integrate spherical gaussian area lights in real-time with gaussian on probe surface, so we use a cosine-lobe to approximate the area lights. Therefore problem transforms from integrating 2 SG's to simply multiplying an SG with a cosine-lobe. (Please feel free to correct me if I'm wrong). When you say 'this spherical integral is invariant to linear transform" can I understand it as 'you can do as many linear transformations on the area light :- Move it around, deform it(linearly), rotate it...whatever. The irradiance calculation would be affected accordingly(and correctly)" So this basically gives us the power to make indirect lighting of moving area-lights(even though its just linear motion), compared to current method of 'baking' a static-lights scene onto the probe and using those probes interpolate/approximate indirect lighting at runtime. If what I understand is true, then this is definitely interesting development!
We implemented this two years ago, much faster than the method shown (available now to a select few developers; please e-mail us) th-cam.com/video/hqp7kHPVr58/w-d-xo.html and th-cam.com/video/H5MK3m5vvTs/w-d-xo.html
Why didn't you release it on the asset store?
No, you didn't.
Very nice! Since this is from the Adam Unity showcase (!?), when will this be available to the community? I really like the volume light effect!
I knew that... You just made up all those words, right? Next party I go to, when someone says something I don't understand, I know all I gotta do is drop "That's almost as complex as Real-Time Polygonal-Light Shading with Linearly Transformed Cosines". yeah, that'll do it. ;D Seriously though, gave you the 63rd "thumbs up". Impressive.
I study at a university and I didn't understand all of the math here (math isn't my major), but I can assure you that all of the terms used here are real and used by the mathematical community.