author | Hans Goudey <h.goudey@me.com> | 2022-01-13 23:37:58 +0300 |
---|---|---|
committer | Hans Goudey <h.goudey@me.com> | 2022-01-13 23:38:25 +0300 |
commit | cfa53e0fbeed7178c7876413e2010fd3347d7f72 (patch) | |
tree | f5cdc39134a3477614ae98699f90da3722161ed0 /source/blender/blenkernel/intern/mesh.cc | |
parent | 800fc17367480b82e07570d5234726aa61c38563 (diff) |
Refactor: Move normals out of MVert, lazy calculation
As described in T91186, this commit moves mesh vertex normals into a
contiguous array of float vectors in a custom data layer, matching the
way face normals are currently stored.
The main interface is documented in `BKE_mesh.h`. Vertex and face
normals are now calculated on-demand and cached, retrieved with an
"ensure" function. Since the logical state of a mesh is now "has
normals when necessary", they can be retrieved from a `const` mesh.
The goal is to use on-demand calculation for all derived data, but
leave room for eager calculation for performance purposes (modifier
evaluation is threaded, but viewport data generation is not).
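As a minimal sketch of this ensure-style pattern (all names here, such as `SimpleMesh` and `ensure_vertex_normals`, are hypothetical illustrations, not Blender's actual API), derived data can be cached behind a dirty flag and computed on demand, even through a `const` mesh:

```cpp
#include <vector>

// Hypothetical sketch of lazy, cached derived data behind an "ensure"
// accessor. Illustrates the pattern only; not Blender's real API.
struct Vec3 {
  float x, y, z;
};

struct SimpleMesh {
  std::vector<Vec3> positions;

  // Cached derived data; only valid while the dirty flag is cleared.
  // `mutable` lets the cache be filled through a const mesh, matching the
  // idea that the logical state is "has normals when necessary".
  mutable std::vector<Vec3> vertex_normals;
  mutable bool normals_dirty = true;  // A new mesh starts with dirty normals.

  void tag_normals_dirty() { normals_dirty = true; }

  const std::vector<Vec3> &ensure_vertex_normals() const {
    if (normals_dirty) {
      // Placeholder "calculation": real code would accumulate face normals.
      vertex_normals.assign(positions.size(), Vec3{0.0f, 0.0f, 1.0f});
      normals_dirty = false;
    }
    return vertex_normals;
  }
};
```

Repeated `ensure_vertex_normals()` calls after the first are cheap; any geometry change only needs to call `tag_normals_dirty()` rather than recompute eagerly.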
**Benefits**
This moves us closer to a SoA approach rather than the current AoS
paradigm. Accessing a contiguous `float3` is much more efficient than
retrieving data from a larger struct. The memory requirements for
accessing only normals or only vertex locations are lower. At the
cost of more memory used just for normals, they no longer have to
be converted between float and short, which also simplifies code.
In the future, the remaining items can be removed from `MVert`,
leaving only `float3`, which has similar benefits (see T93602).
Removing the combination of derived and original data makes it
conceptually simpler to only calculate normals when necessary.
This is especially important now that we have more opportunities
for temporary meshes in geometry nodes.
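The AoS-to-SoA point can be shown with simplified stand-in types (these are not Blender's real structs, just an illustration): in the old layout a short normal is interleaved with other per-vertex data, while in the new layout normals form a dense array of float vectors:

```cpp
#include <cstddef>

// Simplified stand-ins for the two layouts; NOT Blender's real structs.

// Old AoS style: the short normal is interleaved with everything else, so a
// pass over normals strides across the whole struct and needs a
// short-to-float conversion on every access.
struct VertAoS {
  float co[3];
  short no[3];
  char flag, bweight;
};

// New SoA style: normals are their own contiguous array of float vectors,
// so a pass over normals touches nothing but normals.
struct Float3 {
  float x, y, z;
};
```

Iterating a `Float3` array advances `sizeof(Float3)` bytes per normal; the AoS struct advances `sizeof(VertAoS)` bytes and drags unrelated fields through the cache.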
**Performance**
In addition to the theoretical future performance improvements by
making `MVert == float3`, I've done some basic performance testing
on this patch directly. The data is fairly rough, but it gives an idea
about where things stand generally.
- Mesh line primitive 4m Verts: 1.16x faster (36 -> 31 ms),
showing that accessing just `MVert` is now more efficient.
- Spring Splash Screen: 1.03-1.06 -> 1.06-1.11 FPS, a very slight
improvement that at least shows there is no regression.
- Sprite Fright Snail Smoosh: 3.30-3.40 -> 3.42-3.50 FPS, a small
but observable speedup.
- Set Position Node with Scaled Normal: 1.36x faster (53 -> 39 ms),
shows that using normals in geometry nodes is faster.
- Normal Calculation 1.6m Vert Cube: 1.19x faster (25 -> 21 ms),
shows that calculating normals is slightly faster now.
- File Size of 1.6m Vert Cube: 1.03x smaller (214.7 -> 208.4 MB);
normals are not saved in files, which can help with large meshes.
As for memory usage, it may be slightly more in some cases, but
I didn't observe any difference in the production files I tested.
**Tests**
Some modifier and Cycles test results need to be updated with this
commit, for two reasons:
- The subdivision surface modifier is not responsible for calculating
normals anymore. In master, the modifier creates different normals
than the result of the `Mesh` normal calculation, so this is a bug
fix.
- There are small differences in the results of some modifiers that
use normals because they are not converted to and from `short`
anymore.
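The second point can be demonstrated with a sketch of the old short-based roundtrip (the 32767 scale mirrors the common convention for packing unit vectors into 16-bit integers; the function names are illustrative, not Blender's):

```cpp
#include <cmath>

// Sketch of the old short-based normal storage roundtrip. Each read of a
// stored normal converted short -> float, so results carried a small
// quantization error that float storage avoids entirely.
static short normal_float_to_short(float f)
{
  return static_cast<short>(f * 32767.0f);
}

static float normal_short_to_float(short s)
{
  return static_cast<float>(s) * (1.0f / 32767.0f);
}
```

A component like `0.3f` comes back close to, but not bit-identical with, its original value, which is exactly the kind of tiny difference that shifts modifier test results.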
**Future improvements**
- Remove `ModifierTypeInfo::dependsOnNormals`. Code in each modifier
already retrieves normals if they are needed anyway.
- Copy normals as part of a better CoW system for attributes.
- Make more areas use lazy instead of eager normal calculation.
- Remove `BKE_mesh_normals_tag_dirty` in more places, since dirty
normals are now the default state of a new mesh.
- Possibly apply a similar change to derived face corner normals.
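The copy-on-write idea in the second bullet could look roughly like this (a hypothetical sketch using `std::shared_ptr`, not Blender's implementation): copies of a layer share the underlying buffer, and only a write access duplicates it:

```cpp
#include <memory>
#include <vector>

struct Float3 {
  float x, y, z;
};

// Hypothetical copy-on-write attribute layer: copies share the buffer via a
// reference count, and only a write access duplicates it. A sketch of the
// idea only, not Blender's implementation.
class NormalLayer {
 public:
  explicit NormalLayer(std::vector<Float3> data)
      : data_(std::make_shared<std::vector<Float3>>(std::move(data)))
  {
  }

  // Reading never copies; many layers can point at the same buffer.
  const std::vector<Float3> &read() const { return *data_; }

  // Writing un-shares the buffer first, so other owners are unaffected.
  std::vector<Float3> &write()
  {
    if (data_.use_count() > 1) {
      data_ = std::make_shared<std::vector<Float3>>(*data_);
    }
    return *data_;
  }

 private:
  std::shared_ptr<std::vector<Float3>> data_;
};
```

With such reference counting, copying a mesh could keep even non-dirty normal layers, since the cost of a shared copy is just a pointer and a count bump.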
Differential Revision: https://developer.blender.org/D12770
Diffstat (limited to 'source/blender/blenkernel/intern/mesh.cc')
-rw-r--r-- | source/blender/blenkernel/intern/mesh.cc | 85 |
1 files changed, 42 insertions, 43 deletions
diff --git a/source/blender/blenkernel/intern/mesh.cc b/source/blender/blenkernel/intern/mesh.cc
index 8ceaced1972..1aa3477437a 100644
--- a/source/blender/blenkernel/intern/mesh.cc
+++ b/source/blender/blenkernel/intern/mesh.cc
@@ -94,6 +94,10 @@ static void mesh_init_data(ID *id)
   BKE_mesh_runtime_init_data(mesh);
 
+  /* A newly created mesh does not have normals, so tag them dirty. This will be cleared
+   * by #BKE_mesh_vertex_normals_clear_dirty or #BKE_mesh_poly_normals_ensure. */
+  BKE_mesh_normals_tag_dirty(mesh);
+
   mesh->face_sets_color_seed = BLI_hash_int(PIL_check_seconds_timer_i() & UINT_MAX);
 }
@@ -150,12 +154,21 @@ static void mesh_copy_data(Main *bmain, ID *id_dst, const ID *id_src, const int
   mesh_dst->mselect = (MSelect *)MEM_dupallocN(mesh_dst->mselect);
 
+  /* Set normal layers dirty, since they aren't included in CD_MASK_MESH and are therefore not
+   * copied to the destination mesh. Alternatively normal layers could be copied if they aren't
+   * dirty, avoiding recomputation in some cases. However, a copied mesh is often changed anyway,
+   * so that idea is not clearly better. With proper reference counting, all custom data layers
+   * could be copied as the cost would be much lower. */
+  BKE_mesh_normals_tag_dirty(mesh_dst);
+
   /* TODO: Do we want to add flag to prevent this? */
   if (mesh_src->key && (flag & LIB_ID_COPY_SHAPEKEY)) {
     BKE_id_copy_ex(bmain, &mesh_src->key->id, (ID **)&mesh_dst->key, flag);
     /* XXX This is not nice, we need to make BKE_id_copy_ex fully re-entrant... */
     mesh_dst->key->from = &mesh_dst->id;
   }
+
+  BKE_mesh_assert_normals_dirty_or_calculated(mesh_dst);
 }
 
 static void mesh_free_data(ID *id)
@@ -335,6 +348,10 @@ static void mesh_blend_read_data(BlendDataReader *reader, ID *id)
       BLI_endian_switch_uint32_array(tf->col, 4);
     }
   }
+
+  /* We don't expect to load normals from files, since they are derived data. */
+  BKE_mesh_normals_tag_dirty(mesh);
+  BKE_mesh_assert_normals_dirty_or_calculated(mesh);
 }
 
 static void mesh_blend_read_lib(BlendLibReader *reader, ID *id)
@@ -1036,6 +1053,8 @@ void BKE_mesh_copy_parameters_for_eval(Mesh *me_dst, const Mesh *me_src)
   BKE_mesh_copy_parameters(me_dst, me_src);
 
+  BKE_mesh_assert_normals_dirty_or_calculated(me_dst);
+
   /* Copy vertex group names. */
   BLI_assert(BLI_listbase_is_empty(&me_dst->vertex_group_names));
   BKE_defgroup_copy_list(&me_dst->vertex_group_names, &me_src->vertex_group_names);
@@ -1083,6 +1102,18 @@ Mesh *BKE_mesh_new_nomain_from_template_ex(const Mesh *me_src,
     mesh_tessface_clear_intern(me_dst, false);
   }
 
+  me_dst->runtime.cd_dirty_poly = me_src->runtime.cd_dirty_poly;
+  me_dst->runtime.cd_dirty_vert = me_src->runtime.cd_dirty_vert;
+
+  /* Ensure that when no normal layers exist, they are marked dirty, because
+   * normals might not have been included in the mask of copied layers. */
+  if (!CustomData_has_layer(&me_dst->vdata, CD_NORMAL)) {
+    me_dst->runtime.cd_dirty_vert |= CD_MASK_NORMAL;
+  }
+  if (!CustomData_has_layer(&me_dst->pdata, CD_NORMAL)) {
+    me_dst->runtime.cd_dirty_poly |= CD_MASK_NORMAL;
+  }
+
   /* The destination mesh should at least have valid primary CD layers,
    * even in cases where the source mesh does not. */
   mesh_ensure_cdlayers_primary(me_dst, do_tessface);
@@ -1882,24 +1913,10 @@ void BKE_mesh_vert_coords_apply_with_mat4(Mesh *mesh,
   BKE_mesh_normals_tag_dirty(mesh);
 }
 
-void BKE_mesh_vert_normals_apply(Mesh *mesh, const short (*vert_normals)[3])
-{
-  /* This will just return the pointer if it wasn't a referenced layer. */
-  MVert *mv = (MVert *)CustomData_duplicate_referenced_layer(
-      &mesh->vdata, CD_MVERT, mesh->totvert);
-  mesh->mvert = mv;
-  for (int i = 0; i < mesh->totvert; i++, mv++) {
-    copy_v3_v3_short(mv->no, vert_normals[i]);
-  }
-  mesh->runtime.cd_dirty_vert &= ~CD_MASK_NORMAL;
-}
-
 void BKE_mesh_calc_normals_split_ex(Mesh *mesh, MLoopNorSpaceArray *r_lnors_spacearr)
 {
   float(*r_loopnors)[3];
-  float(*polynors)[3];
   short(*clnors)[2] = nullptr;
-  bool free_polynors = false;
 
   /* Note that we enforce computing clnors when the clnor space array is requested by caller here.
    * However, we obviously only use the auto-smooth angle threshold
@@ -1921,26 +1938,8 @@ void BKE_mesh_calc_normals_split_ex(Mesh *mesh, MLoopNorSpaceArray *r_lnors_spac
   /* may be nullptr */
   clnors = (short(*)[2])CustomData_get_layer(&mesh->ldata, CD_CUSTOMLOOPNORMAL);
 
-  if (CustomData_has_layer(&mesh->pdata, CD_NORMAL)) {
-    /* This assume that layer is always up to date, not sure this is the case
-     * (esp. in Edit mode?)... */
-    polynors = (float(*)[3])CustomData_get_layer(&mesh->pdata, CD_NORMAL);
-    free_polynors = false;
-  }
-  else {
-    polynors = (float(*)[3])MEM_malloc_arrayN(mesh->totpoly, sizeof(float[3]), __func__);
-    BKE_mesh_calc_normals_poly_and_vertex(mesh->mvert,
-                                          mesh->totvert,
-                                          mesh->mloop,
-                                          mesh->totloop,
-                                          mesh->mpoly,
-                                          mesh->totpoly,
-                                          polynors,
-                                          nullptr);
-    free_polynors = true;
-  }
-
   BKE_mesh_normals_loop_split(mesh->mvert,
+                              BKE_mesh_vertex_normals_ensure(mesh),
                               mesh->totvert,
                               mesh->medge,
                               mesh->totedge,
@@ -1948,7 +1947,7 @@ void BKE_mesh_calc_normals_split_ex(Mesh *mesh, MLoopNorSpaceArray *r_lnors_spac
                               r_loopnors,
                               mesh->totloop,
                               mesh->mpoly,
-                              (const float(*)[3])polynors,
+                              BKE_mesh_poly_normals_ensure(mesh),
                               mesh->totpoly,
                               use_split_normals,
                               split_angle,
@@ -1956,12 +1955,8 @@ void BKE_mesh_calc_normals_split_ex(Mesh *mesh, MLoopNorSpaceArray *r_lnors_spac
                               clnors,
                               nullptr);
 
-  if (free_polynors) {
-    MEM_freeN(polynors);
-  }
+  BKE_mesh_assert_normals_dirty_or_calculated(mesh);
 
-  mesh->runtime.cd_dirty_vert &= ~CD_MASK_NORMAL;
-  mesh->runtime.cd_dirty_poly &= ~CD_MASK_NORMAL;
   mesh->runtime.cd_dirty_loop &= ~CD_MASK_NORMAL;
 }
@@ -1989,7 +1984,7 @@ struct SplitFaceNewEdge {
 /* Detect needed new vertices, and update accordingly loops' vertex indices.
  * WARNING! Leaves mesh in invalid state. */
-static int split_faces_prepare_new_verts(const Mesh *mesh,
+static int split_faces_prepare_new_verts(Mesh *mesh,
                                          MLoopNorSpaceArray *lnors_spacearr,
                                          SplitFaceNewVert **new_verts,
                                          MemArena *memarena)
@@ -2001,8 +1996,9 @@ static int split_faces_prepare_new_verts(const Mesh *mesh,
   const int loops_len = mesh->totloop;
   int verts_len = mesh->totvert;
-  MVert *mvert = mesh->mvert;
   MLoop *mloop = mesh->mloop;
+  BKE_mesh_vertex_normals_ensure(mesh);
+  float(*vert_normals)[3] = BKE_mesh_vertex_normals_for_write(mesh);
 
   BLI_bitmap *verts_used = BLI_BITMAP_NEW(verts_len, __func__);
   BLI_bitmap *done_loops = BLI_BITMAP_NEW(loops_len, __func__);
@@ -2046,7 +2042,7 @@ static int split_faces_prepare_new_verts(const Mesh *mesh,
           * vnor should always be defined to 'automatic normal' value computed from its polys,
           * not some custom normal.
           * Fortunately, that's the loop normal space's 'lnor' reference vector. ;) */
-          normal_float_to_short_v3(mvert[vert_idx].no, (*lnor_space)->vec_lnor);
+          copy_v3_v3(vert_normals[vert_idx], (*lnor_space)->vec_lnor);
         }
         else {
           /* Add new vert to list. */
@@ -2137,6 +2133,7 @@ static void split_faces_split_new_verts(Mesh *mesh,
   const int verts_len = mesh->totvert - num_new_verts;
   MVert *mvert = mesh->mvert;
+  float(*vert_normals)[3] = BKE_mesh_vertex_normals_for_write(mesh);
 
   /* Remember new_verts is a single linklist, so its items are in reversed order... */
   MVert *new_mv = &mvert[mesh->totvert - 1];
@@ -2145,9 +2142,10 @@ static void split_faces_split_new_verts(Mesh *mesh,
     BLI_assert(new_verts->new_index != new_verts->orig_index);
     CustomData_copy_data(&mesh->vdata, &mesh->vdata, new_verts->orig_index, i, 1);
     if (new_verts->vnor) {
-      normal_float_to_short_v3(new_mv->no, new_verts->vnor);
+      copy_v3_v3(vert_normals[i], new_verts->vnor);
    }
   }
+  BKE_mesh_vertex_normals_clear_dirty(mesh);
 }
 
 /* Perform actual split of edges. */
@@ -2235,6 +2233,7 @@ void BKE_mesh_split_faces(Mesh *mesh, bool free_loop_normals)
   /* Also frees new_verts/edges temp data, since we used its memarena to allocate them. */
   BKE_lnor_spacearr_free(&lnors_spacearr);
+  BKE_mesh_assert_normals_dirty_or_calculated(mesh);
 #ifdef VALIDATE_MESH
   BKE_mesh_validate(mesh, true, true);
 #endif