D2989 by @dmarra w/ own edits
This is a small cleanup of something which I think is just a typo anyway.
With all the recent talk of harassment and groping, I think we better avoid
that within our source code! :)
Reviewers: sergey
Reviewed By: sergey
Tags: #motion_tracking
Differential Revision: https://developer.blender.org/D2979
disabled marker
Simple fix, which is totally safe for 2.79a!
loaded.
Stupid mistake in material reading code, thanks to Simon Wendsche (@BYOB) for the investigation and fix!
To be backported to 2.79a.
Differential Revision: https://developer.blender.org/D2981
Differential Revision: https://developer.blender.org/D2982
Differential Revision: https://developer.blender.org/D2987
We were duplicating rectf twice :/
Patch by Clément Foucault.
Add-ons were also ignored
Thanks Campbell Barton for the help and review.
This is for Blender 2.8, so we are not using this function yet.
Only show objects in the current scene when not pinned.
This commit adds a filter argument to id-template,
since we may want to filter by other criteria.
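A minimal sketch of how such a filter argument could behave (the function and parameter names here are illustrative, not Blender's actual id-template API):

```python
def id_template_items(ids, scene_objects, pinned, filter_fn=None):
    """Return the ids an id-template widget would list.

    When not pinned, only objects in the current scene are shown;
    an optional filter callback supports other criteria.
    """
    items = list(ids) if pinned else [i for i in ids if i in scene_objects]
    if filter_fn is not None:
        items = [i for i in items if filter_fn(i)]
    return items
```

The callback keeps the template generic: the scene check is just one possible filter among others.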
In that case it can now fall back to CPU memory, at the cost of reduced
performance. For scenes that fit in GPU memory, this commit should not
cause any noticeable slowdowns.
We don't use all physical system RAM, since that can cause OS instability.
We leave at least half of system RAM or 4GB to other software, whichever
is smaller.
For image textures in host memory, performance was maybe 20-30% slower
in our tests (although this is highly hardware and scene dependent). Once
other types of data don't fit on the GPU, performance can be e.g. 10x
slower, and at that point it's probably better to just render on the CPU.
Differential Revision: https://developer.blender.org/D2056
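The reservation rule described above can be sketched as follows (a hypothetical helper; the actual logic lives in Cycles' device code):

```python
GIB = 1024 ** 3  # one gibibyte in bytes


def host_memory_limit(total_ram):
    """Bytes of system RAM usable for host-mapped rendering memory.

    Leave at least half of system RAM or 4 GiB to other software,
    whichever is smaller.
    """
    reserved = min(total_ram // 2, 4 * GIB)
    return total_ram - reserved
```

So on a 16 GiB machine 4 GiB stays reserved, while on a 4 GiB machine half of the RAM does.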
Fixes a few corner cases found while stress testing host mapped memory.
Patch by Lucas Walter (@lucasw), thanks.
Reviewers: mont29
Reviewed By: mont29
Subscribers: mont29
Differential Revision: https://developer.blender.org/D2983
Technically this was introduced in 01b547f9931970050e when
exposing size and randomness for particles.
This "fix" makes sure particle size and size randomness are always shown in the
Render panel when they affect the particle system (i.e., always, unless using
advanced hair or hair that is not rendering groups/objects).
Differential Revision: https://developer.blender.org/D2977
Fix T52977: Parent bone name disappeared in the UI in pose mode.
Regression caused by own rBc57636f060018. So instead of changing widget
type, just flag it as disabled.
Note that the core of the issue is elsewhere though - there is absolutely no
reason to have a search widget for pointers we can neither change nor
search! But fixing this is not really top priority, it is one of the many
glitches of our UI code, so I think we can live with the current code.
To be backported to 2.79a.
SVM nodes need to read all data to get the right offset for the following node.
This is quite weak, a more generic solution would be good in the future.
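The constraint can be illustrated with a toy node stream (the layout below is invented for illustration, not Cycles' actual SVM encoding): because nodes are variable-length, the only way to find where the next node starts is to decode the current one.

```python
def next_node_offset(stream, offset):
    """Advance past the node at `offset`.

    Toy layout: each node is [type, payload_len, *payload], so the
    following node's offset is only known after reading this header.
    """
    payload_len = stream[offset + 1]
    return offset + 2 + payload_len


def walk_nodes(stream):
    """Yield the offset of every node - even nodes we don't care
    about must be read far enough to know where the next one begins."""
    offset = 0
    while offset < len(stream):
        yield offset
        offset = next_node_offset(stream, offset)
```

A more generic solution would let a node be skipped without decoding it, e.g. by storing an explicit size table up front.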
We tried to do as much as possible in a single threaded callback, which
led to using some nasty tricks like fake atomic-based spinlocks to
perform some operations (like float addition, which has no atomic
intrinsics).
While OK with a 'standard' low number of working threads (8-16), because
collisions were rather rare and the implied memory barriers did not add *that*
much overhead, this performed poorly on more powerful systems reaching a
hundred threads and beyond (like workstations or render farm hardware).
There, both the memory barrier overhead and the more frequent collisions
had a significant impact on performance.
This was addressed by splitting the process further: we now have three
loops, one each over polys, loops and vertices, with intermediate
storage for the weighted loop normals. This completely avoids any
atomic operation in the bodies of the threaded loops, which should fix the
scalability issues. It costs slightly more temporary memory (something like
50MB per million polygons on average), but that looks like an acceptable
tradeoff.
Furthermore, tests showed that we could gain an additional ~7% of speed
when computing normals of heavy meshes by also parallelizing the last two
loops (might be 1 or 2% on overall mesh update at best...).
Note that further tweaking of this code should be possible once Sergey
adds the 'minimum batch size' option to the threaded foreach API, since very
light loops like the one over loops (a mere float3 addition) require much
bigger batches than heavier code (like the loop over polys) to keep optimal
performance.
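The restructuring can be sketched as follows (plain Python standing in for the threaded C code; names are illustrative). The key point is that the pass over loops writes each slot of the intermediate per-loop array from exactly one thread, so no atomics or spinlocks are needed:

```python
def accumulate_vertex_normals(loop_verts, loop_normals, loop_weights,
                              num_verts):
    """Accumulate weighted loop normals into per-vertex sums."""
    # Pass over loops (parallel in the real code): compute weighted
    # per-loop normals into intermediate storage. Each slot is written
    # by exactly one thread, so no atomic float addition is required.
    weighted = [[w * c for c in n]
                for n, w in zip(loop_normals, loop_weights)]
    # Final gather into per-vertex sums - the only step that combines
    # results from different loops.
    vnors = [[0.0, 0.0, 0.0] for _ in range(num_verts)]
    for v, wn in zip(loop_verts, weighted):
        for i in range(3):
            vnors[v][i] += wn[i]
    return vnors
```

The intermediate `weighted` array is the extra temporary memory the commit message mentions; it trades a modest allocation for contention-free parallel loops.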
It is a bit annoying to have per-DM locking, but it's way better (as in, up to
4 times better) for playback speed when there are lots of subsurf objects.
The idea is to avoid any threading overhead when we start pushing tasks in a
loop, similar to how we do it from the new dependency graph. Gives a couple
percent of speedup here, but also improves scalability.
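A sketch of the idea (hypothetical names; the real implementation lives in Blender's BLI_task code): while the pool is "suspended", pushes only queue work and never touch locks or condition variables, so a tight push loop has essentially no threading overhead.

```python
class SuspendedTaskPool:
    """Toy task pool that defers all scheduling until resume."""

    def __init__(self):
        self._pending = []

    def push(self, func, *args):
        # While suspended, just queue: no worker threads are woken,
        # so pushing many tasks in a loop is essentially free.
        self._pending.append((func, args))

    def work_and_wait(self):
        # On resume, hand the whole batch over at once (executed
        # inline here; dispatched to worker threads in the real code).
        results = [func(*args) for func, args in self._pending]
        self._pending.clear()
        return results
```

Waking workers once per batch instead of once per task is what removes the per-push overhead.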
Makes log easier to read.
These statistics are only collected when debug_value is different from 0.
They are stored in the depsgraph node itself, so we always have access to
average data and other stats which require persistent storage. This way we also
don't waste time trying to find stats in a separately stored hash map.
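A sketch of storing the stats on the node itself (illustrative names only, not the actual depsgraph API): averages persist with the node, and the hot path needs no separate hash-map lookup.

```python
class OperationNode:
    """Toy depsgraph node carrying its own evaluation statistics."""

    def __init__(self, name):
        self.name = name
        # Stats live on the node: persistent across evaluations and
        # reachable without a separately stored hash map.
        self.runs = 0
        self.total_time = 0.0

    def record_evaluation(self, seconds, debug_value=1):
        # Only collected when debug_value is different from 0.
        if debug_value == 0:
            return
        self.runs += 1
        self.total_time += seconds

    def average_time(self):
        return self.total_time / self.runs if self.runs else 0.0
```

Keeping the counters inline also means average timings survive as long as the node does, instead of being rebuilt each evaluation.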
We might implement other things to dump into graphviz, so better to
start having explicit names.
Now negative color values are clamped to zero before the actual denoising.
it is only to be implemented for operation nodes.
No real reason to have that, better to free up space for something much more
awesome!