Patch by @swerner, thanks!

Attributes were not resized after pushing new triangles to the mesh.

= crash
The wrong formula was used to calculate the number of verts and tris that
need to be reserved.

is used
The idea is to make it simpler to remove noise from scenes when some prop
uses the Sharp Glossy closure and causes noise in certain cases. Previously
Sharp Glossy was not affected by Filter Glossy at all, which was quite
confusing.
Here is a file which demonstrates the issue: {F417797}
After applying the patch all the noise from the scene is gone.
This change also solves the fireflies reported in T50700.
Reviewers: brecht, lukasstockner97
Differential Revision: https://developer.blender.org/D2416
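
A minimal sketch of the blurring idea described above, assuming the usual
Filter Glossy approach of bumping a closure's roughness after a rough
bounce; the function and parameter names are illustrative, not the actual
kernel API:

  /* Returns the roughness to use for a glossy closure once Filter Glossy
   * is taken into account. With this change a Sharp closure (roughness
   * exactly 0.0) gets blurred the same way as a low-roughness glossy one. */
  float filter_glossy_roughness(float closure_roughness,
                                float filter_glossy,
                                float path_min_ray_pdf)
  {
    if (filter_glossy > 0.0f) {
      /* Blur more as the path becomes more diffuse (lower min pdf). */
      float blur = filter_glossy / path_min_ray_pdf;
      if (blur > closure_roughness) {
        return blur;
      }
    }
    return closure_roughness;
  }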

handles overflow and underflow, but not NaN/inf.

New logic of split_faces was leaving the mesh in a proper state from
Blender's point of view, but Cycles wanted loop normals to be "flushed"
to vertex normals.
Now we do such a flush from the Cycles side again, so we don't leave bad
meshes behind.
Thanks Bastien for assistance here!
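
A sketch of what such a flush could look like, with made-up names; after
split_faces() every vertex should only be referenced by loops that agree on
the normal, so the flush is a straight copy from any incident loop:

  #include <vector>

  struct float3 { float x, y, z; };

  /* Copy each loop's normal onto the vertex that loop references. */
  void flush_loop_normals_to_vertices(const std::vector<int> &loop_to_vert,
                                      const std::vector<float3> &loop_normals,
                                      std::vector<float3> &vert_normals)
  {
    for (size_t l = 0; l < loop_to_vert.size(); l++) {
      vert_normals[loop_to_vert[l]] = loop_normals[l];
    }
  }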

This way we can control the exact spaces and such added to the cflags,
which is crucial for troubleshooting certain drivers.

Noise texture is now faster when the color socket is unused. Potential for
speedup spotted by @nutel.
Some performance results:

  Render time                       Before     After  Difference
  Gooseberry benchmark             47:51.34  45:55.57         -4%
  Koro                             12:24.92  12:18.46       -0.8%
  Simple cube (Color socket)          48.53     48.72       +0.3%
  Simple cube (Fac socket)            48.74     32.78      -32.7%
  Goethe displacement               1:21.18   1:08.47      -15.6%
  Cycles brick displacement         3:02.38   2:16.76      -25.0%
  Large displacement scene         23:54.12  20:09.62      -15.6%

Reviewed By: sergey
Differential Revision: https://developer.blender.org/D2513
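
A rough sketch of where the win comes from, with a stand-in noise function
rather than Cycles' actual Perlin code: the Color output needs three noise
evaluations while the Fac output needs only one, so skipping the color path
when the socket is unused saves roughly two thirds of the texture work:

  #include <cmath>

  struct float3 { float x, y, z; };

  /* Stand-in scalar noise, just so the sketch is self-contained. */
  static float noise_scalar(float x, float y, float z)
  {
    float n = std::sin(x * 12.9898f + y * 78.233f + z * 37.719f) * 43758.5453f;
    return n - std::floor(n); /* Pseudo-random value in [0, 1). */
  }

  /* Fac socket: a single evaluation is enough. */
  float noise_fac(float3 p)
  {
    return noise_scalar(p.x, p.y, p.z);
  }

  /* Color socket: three decorrelated evaluations, about 3x the work,
   * so only compute this when the socket is actually connected. */
  float3 noise_color(float3 p)
  {
    return {noise_scalar(p.x, p.y, p.z),
            noise_scalar(p.y, p.x + 100.0f, p.z),
            noise_scalar(p.y, p.z + 100.0f, p.x)};
  }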

The issue seems to be caused by the vertex normal being re-calculated to
something other than the loop normal, which also caused wrong loop normals
after re-calculation.
For now the issue is solved by preserving CD_NORMAL for loops after
split_faces() is finished, so the render engine can access the original,
proper value.

cards
Was only visible with the Persistent Images option ON.

progress when baking with high samples

This is supposed to be a temporary layer.
If someone needs loop normals after the split, they should explicitly ask
for them.

Solves the memory regression in the default configuration.

Doesn't currently change anything, but will be needed for some future work
here.
It uses existing padding in the kernel BVH structure, so nothing changes
memory-wise.

The issue here was mainly coming from the minimal pixel width feature,
which is quite commonly enabled in production shots.
This feature uses a probabilistic heuristic in the curve intersection
function to decide whether we need to return an intersection or not. This
probability is calculated for every intersection check.
Now, when we use multiple BVH nodes for a curve primitive, we increase the
probability of that primitive being considered a good intersection for us.
This is similar to increasing the minimal width of the curve.
What is worse here is that the change in intersection probability fully
depends on the exact layout of the BVH, meaning the probability might
change differently depending on the view angle, the way the builder binned
the primitives, and so on. This makes it impossible to do a simple check
like dividing the probability by the number of BVH steps.
Another solution might have been to split the BVH into fully independent
trees, but that would increase the memory usage of all the static objects
in the scene, which is also not desirable.
For now we use the most simple but robust approach: store the BVH
primitive's time and test it in the curve intersection functions. This
solves the regression, but has two downsides (see the sketch after this
list):
- Uses more memory. That isn't surprising, and ANY solution to this
  problem will use more memory. What we still have to do is avoid this
  memory increase for cases when we don't use BVH motion steps.
- Reduces the maximum number of available textures on pre-Kepler cards.
  There is not much we can do here; hardware gets old, but we need to move
  forward on more modern hardware.
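
A sketch of the stored-time test under made-up names (the real kernel code
differs): each curve primitive reference keeps the time range of the motion
step it was binned into, and the intersection function rejects references
whose step does not cover the ray time, so a primitive referenced from
several steps is only tested once per ray:

  /* One BVH reference to a curve primitive, tagged with the time range
   * of the motion step it belongs to. */
  struct CurvePrimRef {
    int prim_index;
    float time_min;
    float time_max;
  };

  /* Returns true when this reference should be intersected at all. */
  bool curve_prim_time_test(const CurvePrimRef &ref, float ray_time)
  {
    /* Skip references from other motion steps; without this, the same
     * primitive is tested once per step, inflating the minimal pixel
     * width acceptance probability. */
    return ray_time >= ref.time_min && ray_time <= ref.time_max;
  }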

Likely was harmless for Blender, but better to be safe here.

Was a bug in a recent optimization commit.

Quite a simple fix for now which only deals with this case. Maybe we want
to do some "clipping" at image load time so regular textures wouldn't give
NaN either.

This allows us to use faster math and still have reliable isnan/isfinite
tests.
Only do it on the host side; kernels stay unchanged.
Thanks Lukas Stockner for the tip!
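
One standard way to get such tests, shown here as a sketch rather than the
actual change: compare the raw IEEE-754 bits instead of using
floating-point comparisons, which -ffast-math is allowed to optimize away
under its no-NaN assumption (the helper names below are made up):

  #include <cstdint>
  #include <cstring>

  /* Reinterpret the bits of a float; memcpy keeps this well-defined. */
  static inline uint32_t float_as_uint(float f)
  {
    uint32_t u;
    std::memcpy(&u, &f, sizeof(u));
    return u;
  }

  /* NaN: exponent bits all ones and mantissa non-zero. */
  static inline bool safe_isnan(float f)
  {
    return (float_as_uint(f) & 0x7fffffffu) > 0x7f800000u;
  }

  /* Finite: exponent bits not all ones (excludes NaN and +/-inf). */
  static inline bool safe_isfinite(float f)
  {
    return (float_as_uint(f) & 0x7f800000u) != 0x7f800000u;
  }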

Optimize vertex de-duplication the same way as we do for Remove Doubles.

Those are now matching and it's faster to skip C++ RNA to calculate
pointiness.

Our Python API is not ready for such things at all. Better to be slower
but more correct until we improve our API.

It's a better place for such a utility structure. Still not fully ideal,
though.

Simplifies some logic.

Basically made the algorithm handle vertices with the same coordinates as
a single vertex.
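
A sketch of that idea with illustrative names, using exact-match keys for
simplicity (a real implementation would use a welding threshold and a
faster container): every coordinate maps to a canonical vertex index, and
duplicates are remapped to it:

  #include <map>
  #include <tuple>
  #include <vector>

  struct Vert { float x, y, z; };

  /* Build a remap table so that all vertices sharing a coordinate end up
   * pointing at one canonical vertex. */
  std::vector<int> deduplicate_vertices(const std::vector<Vert> &verts)
  {
    std::map<std::tuple<float, float, float>, int> canonical;
    std::vector<int> remap(verts.size());
    for (size_t i = 0; i < verts.size(); i++) {
      const auto key = std::make_tuple(verts[i].x, verts[i].y, verts[i].z);
      const auto it = canonical.find(key);
      if (it == canonical.end()) {
        canonical[key] = (int)i; /* First vertex at this coordinate. */
        remap[i] = (int)i;
      }
      else {
        remap[i] = it->second; /* Duplicate: reuse the canonical index. */
      }
    }
    return remap;
  }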

This way the calculation is not spread over multiple places.

This way memory is more "manageable" and easier to follow.

Seems CUDA failed to de-duplicate the array across multiple inlined
versions of shadow_blocked(). Helped it a bit with that now.
Gives about 100MB memory improvement on scenes after the previous commit
and brings the memory "regression" down to only 100MB compared to the
master branch now.
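
The workaround could look roughly like the following (names made up; the
actual kernel change may differ): allocate the intersection storage once in
the caller and pass it in, so every inlined copy of shadow_blocked() shares
one array instead of each call site getting its own:

  #define MAX_HITS 64

  struct Intersection { float t; int prim; };

  /* The traversal writes into caller-provided storage instead of a local
   * array, so inlining cannot duplicate the allocation per call site. */
  static bool shadow_blocked(Intersection *isect_array)
  {
    (void)isect_array; /* ... record hits into isect_array ... */
    return false;
  }

  static void integrate_path()
  {
    /* One array for the whole path, reused by every call. */
    Intersection isect_array[MAX_HITS];
    shadow_blocked(isect_array);
    shadow_blocked(isect_array);
  }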

This commit enables record-all behavior of transparent shadow rays.
Render time differences are as follows:

  Scene                 GTX 1080 render time
  BMW                                  -0.5%
  Fishy Cat                            -0.0%
  Pabellon Barcelona                  -11.6%
  Classroom                            +1.2%
  Koro                                -58.6%

The kernel will now use some extra VRAM to store the intersection array
(200MB on my configuration). This we can optimize out with some further
commits.

The idea is to record all possible transparent intersections when shooting
a transparent ray on the GPU (similar to what we were already doing on the
CPU).
This avoids the need to do whole ray-to-scene intersection queries for
each intersection, and speeds up a lot of cases, like transparent hair, at
the cost of extra memory.
This commit is a base ground for now, and the feature is kept disabled
until some further tweaks.
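
A sketch of the record-all idea under assumed names (the real code is
Cycles' shadow_blocked() kernel and differs in detail): one traversal
records every hit along the shadow ray into an array, and the hits are then
shaded in order instead of restarting a full scene query after every
transparent hit:

  #define MAX_HITS 64

  struct Intersection { float t; int prim; };

  /* Stub standing in for a BVH traversal that appends every hit along
   * the ray; in the real kernel this is filled from the scene BVH. */
  static int scene_intersect_all(Intersection *hits, int max_hits)
  {
    (void)hits;
    (void)max_hits;
    return 0;
  }

  /* Returns true when the shadow ray is (effectively) blocked. */
  static bool shadow_blocked_record_all()
  {
    Intersection hits[MAX_HITS]; /* The extra memory mentioned above. */
    const int num_hits = scene_intersect_all(hits, MAX_HITS);

    float throughput = 1.0f;
    for (int i = 0; i < num_hits; i++) {
      /* Placeholder for evaluating the hit surface's transparency. */
      const float transparency = 0.5f;
      throughput *= transparency;
      if (throughput < 1e-4f) {
        return true; /* Too dark to matter: treat as blocked. */
      }
    }
    return false;
  }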