
git.blender.org/blender.git
author     Hans Goudey <h.goudey@me.com>  2022-09-05 19:56:34 +0300
committer  Hans Goudey <h.goudey@me.com>  2022-09-05 19:56:34 +0300
commit     05952aa94d33eeb504fa63618ba35c2bcc8bd19b (patch)
tree       c9ec37adf20c3c37ccaab44869220dcbe8e987a3 /source/blender/blenkernel/intern/mesh_iterators.c
parent     63cfc8f9f6d623f33b50c5c07976af2b22845713 (diff)
Mesh: Remove redundant custom data pointers
For copy-on-write, we want to share attribute arrays between meshes where possible. Mutable pointers like `Mesh.mvert` make that difficult by making ownership vague. They also make code more complex by adding redundancy. The simplest solution is just removing them and retrieving layers from `CustomData` as needed.

Similar changes have already been applied to curves and point clouds (e9f82d3dc7ee, 410a6efb747f). Removing use of the pointers generally makes code more obvious and more reusable.

Mesh data is now accessed with a C++ API (`Mesh::edges()` or `Mesh::edges_for_write()`) and a C API (`BKE_mesh_edges(mesh)`).

The CoW changes this commit makes possible are described in T95845 and T95842, and started in D14139 and D14140. The change also simplifies the ongoing mesh struct-of-arrays refactors from T95965.

**RNA/Python Access Performance**

Theoretically, accessing mesh elements with the RNA API may become slower, since the layer needs to be found on every random access. However, the overhead is already high enough that this doesn't make a noticeable difference, and performance is actually improved in some cases. Random access can be up to 10% faster, but other situations might be a bit slower. Generally, using `foreach_get`/`foreach_set` is the best way to improve performance. See the differential revision for more discussion about Python performance.

Cycles has been updated to use raw pointers and the internal Blender mesh types, mostly because there is no sense in having this overhead when it's already compiled with Blender. In my tests this roughly halves the Cycles mesh creation time (0.19 s to 0.10 s for a 1 million face grid).

Differential Revision: https://developer.blender.org/D15488
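As a minimal sketch of the new access pattern (an editor's illustration, not part of the commit; it assumes Blender's own headers and uses only the accessors named above and in the diff below):

  /* Sum the length of every edge, mixing the two APIs for illustration. */
  #include "BKE_mesh.h"          /* BKE_mesh_vertices(), BKE_mesh_edges() */
  #include "BLI_math_vector.h"   /* len_v3v3() */
  #include "DNA_mesh_types.h"
  #include "DNA_meshdata_types.h"

  static float total_edge_length(const Mesh *mesh)
  {
    /* C++ API: a read-only span fetched from CustomData on demand. */
    const blender::Span<MEdge> edges = mesh->edges();
    /* C API: the equivalent getter used throughout the diff below. */
    const MVert *verts = BKE_mesh_vertices(mesh);

    float total = 0.0f;
    for (const MEdge &edge : edges) {
      /* Each edge stores two indices into the vertex array. */
      total += len_v3v3(verts[edge.v1].co, verts[edge.v2].co);
    }
    return total;
  }

Write access goes through `Mesh::edges_for_write()`, which returns a mutable span; that explicit read/write split is what makes ownership clear where the old mutable `Mesh.medge` pointer left it vague.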
Diffstat (limited to 'source/blender/blenkernel/intern/mesh_iterators.c')
-rw-r--r--  source/blender/blenkernel/intern/mesh_iterators.c  33
1 file changed, 18 insertions(+), 15 deletions(-)
diff --git a/source/blender/blenkernel/intern/mesh_iterators.c b/source/blender/blenkernel/intern/mesh_iterators.c
index 77e62918441..352ad8e9042 100644
--- a/source/blender/blenkernel/intern/mesh_iterators.c
+++ b/source/blender/blenkernel/intern/mesh_iterators.c
@@ -65,7 +65,7 @@ void BKE_mesh_foreach_mapped_vert(
     }
   }
   else {
-    const MVert *mv = mesh->mvert;
+    const MVert *mv = BKE_mesh_vertices(mesh);
     const int *index = CustomData_get_layer(&mesh->vdata, CD_ORIGINDEX);
     const float(*vert_normals)[3] = (flag & MESH_FOREACH_USE_NORMAL) ?
                                         BKE_mesh_vertex_normals_ensure(mesh) :
@@ -120,8 +120,8 @@ void BKE_mesh_foreach_mapped_edge(
     }
   }
   else {
-    const MVert *mv = mesh->mvert;
-    const MEdge *med = mesh->medge;
+    const MVert *mv = BKE_mesh_vertices(mesh);
+    const MEdge *med = BKE_mesh_edges(mesh);
     const int *index = CustomData_get_layer(&mesh->edata, CD_ORIGINDEX);
 
     if (index) {
@@ -188,9 +188,9 @@ void BKE_mesh_foreach_mapped_loop(Mesh *mesh,
                                  CustomData_get_layer(&mesh->ldata, CD_NORMAL) :
                                  NULL;
 
-    const MVert *mv = mesh->mvert;
-    const MLoop *ml = mesh->mloop;
-    const MPoly *mp = mesh->mpoly;
+    const MVert *mv = BKE_mesh_vertices(mesh);
+    const MLoop *ml = BKE_mesh_loops(mesh);
+    const MPoly *mp = BKE_mesh_polygons(mesh);
     const int *v_index = CustomData_get_layer(&mesh->vdata, CD_ORIGINDEX);
     const int *f_index = CustomData_get_layer(&mesh->pdata, CD_ORIGINDEX);
     int p_idx, i;
@@ -261,8 +261,9 @@ void BKE_mesh_foreach_mapped_face_center(
     }
   }
   else {
-    const MVert *mvert = mesh->mvert;
-    const MPoly *mp = mesh->mpoly;
+    const MVert *mvert = BKE_mesh_vertices(mesh);
+    const MPoly *mp = BKE_mesh_polygons(mesh);
+    const MLoop *loops = BKE_mesh_loops(mesh);
     const MLoop *ml;
     float _no_buf[3];
     float *no = (flag & MESH_FOREACH_USE_NORMAL) ? _no_buf : NULL;
@@ -275,7 +276,7 @@ void BKE_mesh_foreach_mapped_face_center(
           continue;
         }
         float cent[3];
-        ml = &mesh->mloop[mp->loopstart];
+        ml = &loops[mp->loopstart];
         BKE_mesh_calc_poly_center(mp, ml, mvert, cent);
         if (flag & MESH_FOREACH_USE_NORMAL) {
           BKE_mesh_calc_poly_normal(mp, ml, mvert, no);
@@ -286,7 +287,7 @@ void BKE_mesh_foreach_mapped_face_center(
     else {
       for (int i = 0; i < mesh->totpoly; i++, mp++) {
         float cent[3];
-        ml = &mesh->mloop[mp->loopstart];
+        ml = &loops[mp->loopstart];
         BKE_mesh_calc_poly_center(mp, ml, mvert, cent);
         if (flag & MESH_FOREACH_USE_NORMAL) {
           BKE_mesh_calc_poly_normal(mp, ml, mvert, no);
@@ -303,7 +304,9 @@ void BKE_mesh_foreach_mapped_subdiv_face_center(
     void *userData,
     MeshForeachFlag flag)
 {
-  const MPoly *mp = mesh->mpoly;
+  const MVert *verts = BKE_mesh_vertices(mesh);
+  const MPoly *mp = BKE_mesh_polygons(mesh);
+  const MLoop *loops = BKE_mesh_loops(mesh);
   const MLoop *ml;
   const MVert *mv;
   const float(*vert_normals)[3] = (flag & MESH_FOREACH_USE_NORMAL) ?
@@ -319,9 +322,9 @@ void BKE_mesh_foreach_mapped_subdiv_face_center(
       if (orig == ORIGINDEX_NONE) {
         continue;
       }
-      ml = &mesh->mloop[mp->loopstart];
+      ml = &loops[mp->loopstart];
       for (int j = 0; j < mp->totloop; j++, ml++) {
-        mv = &mesh->mvert[ml->v];
+        mv = &verts[ml->v];
         if (BLI_BITMAP_TEST(facedot_tags, ml->v)) {
           func(userData,
                orig,
@@ -333,9 +336,9 @@ void BKE_mesh_foreach_mapped_subdiv_face_center(
     }
   }
   else {
     for (int i = 0; i < mesh->totpoly; i++, mp++) {
-      ml = &mesh->mloop[mp->loopstart];
+      ml = &loops[mp->loopstart];
       for (int j = 0; j < mp->totloop; j++, ml++) {
-        mv = &mesh->mvert[ml->v];
+        mv = &verts[ml->v];
         if (BLI_BITMAP_TEST(facedot_tags, ml->v)) {
           func(userData, i, mv->co, (flag & MESH_FOREACH_USE_NORMAL) ? vert_normals[ml->v] : NULL);
         }