|
|
|
linked to multiple scenes.
The crash happened because the code could not find a valid base in the current
scene after adding the object; some checks were added for that.
The root of the issue was a wrong assumption in the `BKE_object_add` logic,
which would pick the first valid ancestor collection when the initially
selected collection was not editable. In some cases, this could pick a
collection not instanced in the current scene's view layer, leading to
no valid base for the newly added object.
Addressed this by adding a new variant of `BKE_collection_object_add`,
`BKE_collection_viewlayer_object_add`, which ensures the final collection is
in the given view layer.
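A minimal sketch of the idea, assuming hedged helper names and fallback (the real
function lives in Blender's BKE_collection code and its exact signature may differ):

```c
#include "BKE_collection.h"
#include "BKE_layer.h"
#include "DNA_collection_types.h"
#include "DNA_layer_types.h"

/* Sketch only: verify the target collection is actually part of the given
 * view layer before adding the object, and otherwise fall back to a
 * collection that is guaranteed to produce a base. */
static bool collection_viewlayer_object_add(Main *bmain,
                                            ViewLayer *view_layer,
                                            Collection *collection,
                                            Object *ob)
{
  if (collection == NULL || !BKE_view_layer_has_collection(view_layer, collection)) {
    /* The active collection of the view layer is instanced in that view layer
     * by definition, so the new object will get a valid base. */
    collection = view_layer->active_collection->collection;
  }
  return BKE_collection_object_add(bmain, collection, ob);
}
```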
|
|
Sculpt undo now detects if an attribute layer has
changed type/domain and converts it back. This
is a temporary workaround for a more fundamental
bug in the undo system.
Memfile undo assumes it can always rebuild the
application state from the prior undo step,
which isn't true with incremental undo systems.
The correct fix is to push an extra undo step prior
to running an operator if an incremental undo system
is active and the operator is using memfile undo.
|
|
Differential Revision: https://developer.blender.org/D15083
|
|
These lines are no longer used because the handling of copied data has changed.
Assigning the `eval` data can produce unexpected results, especially since
everywhere `ID_RECALC_TRANSFORM` is used we also do a copy-on-write, which
should take care of `ob->data` for the evaluated object.
|
|
The problem was that the check used the total number of weights of the first
element of the array; if this was NULL or 0, the weights were not duplicated.
As this bug was introduced while fixing T97150, due to a problem in the weight
data, instead of duplicating all stroke data to create the perimeter for the
PDF/SVG export, now only the points are duplicated, because the weights are
not needed. This fixes the original bug and also reduces the memory used by
the export process.
|
|
Subdivision did not properly update when evaluating first without and then with
orco coordinates. Now update the subdivision evaluator settings every time, and
reallocate the vertex data buffer when needed.
There is an additional issue in this file where orco coordinates are not
available immediately on the first frame when they should be, and only appear
on the second frame. However that is an old limitation related to the depsgraph
not getting re-evaluated on viewport display mode changes, here we just fix the
crash.
|
|
Previously, when there were multiple curve points at the same or
almost the same position, the computed tangent was unpredictable.
Now the handling of this case is more explicit, making it more
predictable. In general, when there are duplicate points, tangents from
neighboring points are used.
Also fixes T98209.
Differential Revision: https://developer.blender.org/D15016
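A rough sketch of the idea described above (not the actual implementation,
which lives in the C++ curves code; the epsilon and fallback handling are
assumptions):

```c
#include "BLI_math_vector.h"

/* When neighboring points coincide, the difference vector is (nearly) zero and
 * normalizing it gives unpredictable results; reuse a neighboring tangent instead. */
static void point_tangent_safe(const float prev[3],
                               const float next[3],
                               const float neighbor_tangent[3],
                               float r_tangent[3])
{
  sub_v3_v3v3(r_tangent, next, prev);
  if (len_squared_v3(r_tangent) < 1e-12f) {
    copy_v3_v3(r_tangent, neighbor_tangent);
    return;
  }
  normalize_v3(r_tangent);
}
```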
|
|
The original fix for T97366 was too restrictive and broke real-world
cases of single-file UDIM textures. See D13297 for an example.
This patch effectively reverts the original fix and instead fixes the
downstream code to accept single-file ranges as necessary.
Note: This means it is very important for users to make use of the
"UDIM detection" option during `image.open` or drag n' drop scenarios in
order to declare their intent when loading their files.
Differential Revision: https://developer.blender.org/D14853
|
|
`BKE_id_delete` should only check the consistency of the user count with
regard to the tags and flags of the ID, not 'protect' against or even warn
when a 'fake user' ID is deleted (such higher-level checks are to be
handled by higher-level code).
Also replace the assert + debug print with a CLOG error; this avoids an
'assert crash' while still failing tests, and always produces a useful
message.
Fixes T98374 and T98260.
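A small sketch of the reporting pattern described above (the log category name
and message text are assumptions, not the actual code):

```c
#include "CLG_log.h"
#include "DNA_ID.h"

static CLG_LogRef LOG = {"bke.lib_id_delete"};

/* Unlike an assert, a CLOG error still fails tests that treat logged errors as
 * failures, but does not abort the process, and always prints a useful message. */
static void report_user_count_inconsistency(const ID *id)
{
  CLOG_ERROR(&LOG, "Deleting ID '%s' with unexpected user count %d", id->name, id->us);
}
```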
|
|
This was not effectively doing anything anymore, time to remove it.
|
|
Weird file where the proxy object has a `proxy_group` pointer, which
does not instantiate any collection...
|
|
When appending already-linked data, `BKE_blendfile_append` would not
properly substitute the `item->new_id` pointer of link/append context
items with the local duplicate of the linked ID.
This would cause drag'n'drop of assets to work incorrectly in some
cases. Fixes part of T95706 and T97320.
|
|
When no image user is known, the last used frame of the image is used to
read a frame. When partially updating an image there is always an image user,
but it would be a zeroed-out one, meaning the frame number was set to 0
when using the clone tool.
This fix syncs the frame with the last used frame of the image to ensure
that the buffer exists. There is a bailout in overlay_edit_uv.c.
|
|
`BKE_nlastrip_find_active()` and `update_active_strip()` now do a more
thorough search for the active strip, by also recursing into
meta-strips.
There were some assumptions in the NLA code that the active strip is
contained in the active track. This assumption turned out to be false:
when there is a meta-strip, the active strip could be inside that, and
then it's not contained in `active_track.strips` directly.
Apart from the above, there are other situations in which the track
pointed to by `AnimData::act_track` does *not* contain the active strip
(even indirectly).
Both these cases can happen when the transform system is moving a strip
that's in tweak mode. Entering tweak mode doesn't just search for the
active track/strip, but also falls back to the last-selected ones. This
means that the track/strip pointers & flags can be out of sync with
what's being tweaked/transformed. Because of this, the assumption that
there is any relation between "active strip" and "active track" has been
removed from the `update_active_strip()` function.
All this searching could be prevented by passing the `AnimData` to the
code that duplicates tracks & strips. That way, when the active
track/strip is dup'ed, the `AnimData` pointers can be updated as well.
That would require more refactoring and thus could introduce other bugs,
so that's left for later.
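A rough sketch of the recursive search described above, assuming the usual DNA
fields (`NlaStrip.strips`, `NLASTRIP_TYPE_META`, `NLASTRIP_FLAG_ACTIVE`); the
real `BKE_nlastrip_find_active()` may differ in detail:

```c
#include "BLI_listbase.h"
#include "DNA_anim_types.h"

static NlaStrip *strip_find_active_recursive(ListBase *strips)
{
  LISTBASE_FOREACH (NlaStrip *, strip, strips) {
    if (strip->flag & NLASTRIP_FLAG_ACTIVE) {
      return strip;
    }
    /* Meta-strips store their children in strip->strips, so descend into them. */
    if (strip->type == NLASTRIP_TYPE_META) {
      NlaStrip *active = strip_find_active_recursive(&strip->strips);
      if (active != NULL) {
        return active;
      }
    }
  }
  return NULL;
}
```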
|
|
Also fixed an unreported issue of incorrect interpolation of thickness.
Reviewed By: Aleš Jelovčan (frogstomp), Antonio Vazquez (antoniov)
Differential Revision: https://developer.blender.org/D15005
|
|
Differential Revision: https://developer.blender.org/D14998
|
|
The final normalization step of sculpt normal calculation iterates over
all unique vertices in each node and marks them as done. However,
storing the done mask in a bitmap meant that multiple threads could
write to a single byte at the same time, because the bits for separate
threads could be stored in the same byte. This is not threadsafe.
Fixing this issue seems to improve performance as well. First I tested
just clearing the entire bitmap after the normal calculation. Then I
tested using an array of booleans instead, which turned out to be
slightly better, and simplifies code a bit.
I tested on a Ryzen 3800x, on an 8 million polygon subdivided
Suzanne by using the grab brush with a radius large enough to
affect most of the mesh.
| Original | Clear Entire Bitmap | Boolean Array |
| --------- | ------------------- | ------------- |
| 67.9 ms | 59.1 ms | 57.9 ms |
| 1.0x | 1.15x | 1.17x |
Differential Revision: https://developer.blender.org/D14985
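A schematic illustration of the race (not the actual PBVH code): with a bitmap,
two threads handling different vertices can read-modify-write the same byte, so
an update can be lost; a plain boolean array gives each vertex its own byte.

```c
#include <stdbool.h>

/* Not threadsafe: bits for 8 vertices share a single byte, so the |= is a
 * read-modify-write on memory another thread may be updating concurrently. */
static void mark_done_bitmap(unsigned char *bitmap, const int vert)
{
  bitmap[vert >> 3] |= (unsigned char)(1 << (vert & 7));
}

/* Safe for this use case: each vertex only ever writes its own byte. */
static void mark_done_bool_array(bool *done, const int vert)
{
  done[vert] = true;
}
```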
|
|
collections.
Existing code for the `Move` operator, and some `Collections` panel
operations (Object properties) was absolutely not override-safe, and
sometimes not even linked-data safe.
|
|
That max number of `10000` levels of recursion was a typo (it should have
been `1000`), but even that is way too high; a typical sane situation
should not lead to more than a few tens of levels, so reduce the max
level to 200.
Also improve the error message with more context info about the issue.
Found while investigating issues for the Blender Studio's Heist production.
|
|
loops."
This reverts commit e42e4e8568edeb4e0b962e2059586c2a3d3b457d.
Wrong commit, sorry for the noise.
|
|
That max number of `10000` levels of recursion was a typo (it should have
been `1000`), but even that is way too high; a typical sane situation
should not lead to more than a few tens of levels, so reduce the max
level to 200.
Also improve the error message with more context info about the issue.
Found while investigating issues for the Blender Studio's Heist production.
|
|
In some cases (when there is an evaluated curve), the conversion code
would try to free the evaluated data-block twice, because freeing the
object would free it from `data_eval` and then the data-block was freed
again explicitly. Now check if the data-block is stored in `data_eval`
before freeing `object.data` manually. This is another area that's made
more complex by the fact that we change the meaning of `object.data`
for evaluated objects. The solution is more complicated than it should
be, but it works whether or not an evaluated mesh or curve exists.
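A hedged sketch of the check, assuming the evaluated data is reachable as
`object->runtime.data_eval`; the real conversion code may differ:

```c
#include "BKE_lib_id.h"
#include "DNA_object_types.h"

static void object_data_free_if_needed(Main *bmain, Object *object)
{
  /* Only free object->data manually when it is not the evaluated data-block,
   * which is already freed together with the object itself. */
  if (object->data != NULL && object->data != (void *)object->runtime.data_eval) {
    BKE_id_free(bmain, object->data);
  }
}
```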
|
|
Sometimes integers are mixed using float weights. In those cases
the mixed result has to be converted from float back into an integer.
Previously, this was done using a simple cast, which was unexpected
because e.g. 14.999 would be cast to 14 instead of 15.
Now, the values are rounded properly.
Unfortunately, this can affect existing files without a good option
for versioning. Luckily, very few files seem to depend on the details
of the old behavior.
Differential Revision: https://developer.blender.org/D14892
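A minimal sketch of the change, assuming the interpolation produces a float
that must become an integer again:

```c
#include <math.h>

static int mix_result_to_int(const float mixed)
{
  /* Old behavior: a plain cast truncates toward zero, so 14.999f became 14. */
  /* return (int)mixed; */

  /* New behavior: round to the nearest integer, so 14.999f becomes 15. */
  return (int)roundf(mixed);
}
```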
|
|
This makes changes to the opensubdiv module to support additional vertex data
besides the vertex position, which is smoothly interpolated in the same way.
This is different from varying data, which is interpolated linearly.
Fixes T96596: wrong generated texture coordinates with GPU subdivision. In that
bug, lazy subdivision would not interpolate orcos.
Later on, this implementation can also be used to remove the modifier stack
mechanism where modifiers are evaluated a second time for orcos, which is messy
and inefficient. But that is a riskier change; this is just the part needed to
fix the bug in 3.2.
Differential Revision: https://developer.blender.org/D14973
|
|
Previously, the depsgraph assumed that every node tree might contain
a reference to a video. This resulted in noticeable overhead when there
was no video.
Checking whether a node tree contained a video was relatively expensive
to do in the depsgraph. It is cheaper now due to the structure of the new
node tree updater.
This also adds an additional run-time field to `bNodeTree` (there are
quite a few already). We should move those to a separate run-time
struct, but not as part of a bug fix.
Differential Revision: https://developer.blender.org/D14957
|
|
When making a liboverride of an empty collection, this collection (the root
of the override hierarchy) would get untagged by the code checking for
collections that do not need to be overridden, leading to this root collection
not being overridden, and later in the code to a NULL pointer being used.
|
|
The problem is that depsgraph evaluation happens before the OpenGL context
is initialized, and so modifier evaluation happens without GPU subdivision.
Later the `BKE_subsurf_modifier_can_do_gpu_subdiv` test in the draw code gives
a different result.
This just checks if the mesh has information for GPU subdivision in the draw
code, and if so uses it. This is only set if the test for supported GPU
subdivision passes in the modifier evaluation.
Additionally it may be good to perform OpenGL context initialization earlier
so background render can take advantage of GPU subdivision, but this is more
complicated.
Differential Revision: https://developer.blender.org/D14969
|
|
Moving `Text.as_string()` from Python to C [0] added an extra new-line,
causing a round-trip from_string/to_string to add a new-line.
This also broke the undo test `test_undo.text_editor_simple` in
`../lib/tests/ui_simulate/run.py`.
[0]: 231eac160ee394d41c84e0cc36845facb7594ba5
|
|
The 'OVERRIDE_HIDDEN' extra collection would often be mistakenly added
to a linked collection, which is totally forbidden and guaranteed to
crash on undo/redo.
Reworked the code instantiating that extra collection in a more generic
and hopefully robust way now.
|
|
|
|
We really need to fix how the unprojected radius (scene unit) works.
What happened is that the paint code updated the brush's normal radius
with the current unprojected pixel radius, which was then
used by the texture brush's tiled mode.
To fix this, the pixel radius at stroke start is now cached in
`UnifiedPaintSettings->start_pixel_radius`.
|
|
excluded get instanced into Current Scene.
This was due to using the `BKE_scene_has_object` function, which uses the
cache of bases of the view layers; these do not have entries for the
content of excluded collections. Now use
`BKE_collection_has_object_recursive` instead.
|
|
Differential Revision: https://developer.blender.org/D14917
|
|
GLSL has a different maximum number of SSBOs per shader stage.
This patch checks that the number of compute SSBO blocks matches
our requirements for GPU subdivision before enabling it.
Some platforms allow more SSBO bindings than blocks per stage.
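A sketch of this kind of capability check using standard OpenGL queries; the
number of blocks GPU subdivision actually needs is an assumption here:

```c
#include <GL/glew.h>
#include <stdbool.h>

static bool compute_ssbo_support_ok(const int required_blocks)
{
  GLint max_compute_blocks = 0;
  GLint max_bindings = 0;
  glGetIntegerv(GL_MAX_COMPUTE_SHADER_STORAGE_BLOCKS, &max_compute_blocks);
  glGetIntegerv(GL_MAX_SHADER_STORAGE_BUFFER_BINDINGS, &max_bindings);
  /* The global binding count can exceed the per-stage block limit, so the
   * per-stage compute block limit is the value that has to be checked. */
  return max_compute_blocks >= required_blocks && max_bindings >= required_blocks;
}
```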
|
|
|
|
Followup to rB6c679aca1770c37.
|
|
The code was not really designed to handle corrupted (e.g. local ID) root
hierarchies; now it should handle those invalid cases better and restore
a proper, sane situation as best as possible.
Fixes crashes with some corrupted files from Blender studio.
|
|
acceleration, lens length, and power
`TEMPERATURE` type was also missing, not only the new-ish
`TIME_ABSOLUTE` one...
Added a static assert on the size of the `bpyunits_ucategories_items`
array, and a comment on the anonymous enum of `B_UNIT_` values, in the
hope this won't happen again in the future.
|
|
|
|
Support merging UVs that share the same vertex and are very close when
applying modifiers.
This is needed to prevent UVs becoming "detached", which can happen when
applying the subdivision surface modifier.
This regression was caused by [0], which removed the selection threshold for
nearby coordinates. While restoring the UV selection threshold could be
done, some selection operations that walk around connected UV fans
wouldn't behave in a deterministic way (such as select shortest path).
There are also other cases where UVs may be compared without a
threshold, such as tangent calculation and exporters, which have their own
logic for handling UVs.
Also resolves T86896, T89903.
[0]: b88dd3b8e7b9c02ae08d4679bb427963c5d21250
Reviewed By: sergey
Ref D14841
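A minimal sketch of the comparison, assuming a squared-distance threshold; the
epsilon value and helper name are illustrative, not what the merge code uses:

```c
#include <stdbool.h>
#include "BLI_math_vector.h"

#define UV_MERGE_EPS_SQ 1e-8f

/* Both loops are assumed to reference the same mesh vertex; their UVs are
 * merged only when the coordinates are close enough to be treated as one. */
static bool uvs_should_merge(const float uv_a[2], const float uv_b[2])
{
  return len_squared_v2v2(uv_a, uv_b) <= UV_MERGE_EPS_SQ;
}
```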
|
|
This index is not meant for the point cache data array; it's already offset.
|
|
copy.
Similar issue/solution as in rB5188c14718c5 from this Monday; there
may be more of those still lurking around... Quite surprising that they
all get reported now, as this behavior has been in Blender for years.
|
|
Fix regression described in T97799.
Apply layer transform and layer parenting to all visible frames, i.e. active frame + onion skinning frames.
Reviewed By: #grease_pencil, antoniov
Maniphest Tasks: T97799
Differential Revision: https://developer.blender.org/D14829
|
|
When the profile only has one control point, its segment count was
determined incorrectly. There needs to be a special case for a single
control point, where there are no segments even if the curve is cyclic.
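A hypothetical sketch of the special case (the actual profile evaluation may
count segments differently):

```c
#include <stdbool.h>

static int profile_segments_num(const int points_num, const bool cyclic)
{
  if (points_num <= 1) {
    /* A single control point has no segments, even when the profile is cyclic. */
    return 0;
  }
  return cyclic ? points_num : points_num - 1;
}
```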
|
|
These kinds of depsgraph evaluations should not be marked as low priority as
this could negatively affect playback performance. Low priority should mainly
be used for background tasks.
|
|
Regression from rB1a7757b0bc69/rBa0acb9bd0cc0. The special handling
(averaging) of weights on merged center vertices also needs to be
'reversed' when the new, correct merge order is used, compared to the
previous behavior.
|
|
This commit implements copying of materials and material indices from
all of the boolean node's input meshes. The materials are added to the
final mesh in the order that they appear when looking through the
materials of the input meshes, following the order of the multi-socket
input node.
All material remapping is done with mesh-level materials. Object-level
materials are not considered, since the meshes don't come from objects.
Merging all materials rather than just the materials on the first mesh
requires a change to the boolean-mesh conversion. This subtly changes
the behavior for object linked materials, but in a good way I think;
now the material remap arrays are respected no matter the number
of materials on the first mesh input.
Differential Revision: https://developer.blender.org/D14788
|
|
8e4c3c6a2414 mistakenly used the `vec` argument
of `BKE_where_on_path` as if it were optional. Fix by
making that argument optional too.
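A generic sketch of the pattern (parameter names are illustrative, not the real
`BKE_where_on_path` signature): every write to an optional output is guarded by
a NULL check, so callers that don't need it can pass NULL.

```c
#include "BLI_math_vector.h"

static void path_eval_write_outputs(const float position[4],
                                    const float direction[3],
                                    float r_vec[4],
                                    float r_dir[3])
{
  /* Both outputs are optional; only write to the ones the caller asked for. */
  if (r_vec != NULL) {
    copy_v4_v4(r_vec, position);
  }
  if (r_dir != NULL) {
    copy_v3_v3(r_dir, direction);
  }
}
```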
|
|
to have zero users
Relinking code would, weirdly enough, allow clearing the extra/fake user
status on IDs used by the affected ID, which would be utterly wrong.
It is fairly unclear why this was working OK in the reported case before
rBa71a513def20; could not spot any obvious reason just from reading the code...
Also, in `libblock_remap_data_update_tags`, only transfer the fake user
status if `new_id` is not NULL (otherwise that would have removed the
flag from `old_id` without actually transferring it to anything).
|