git.blender.org/blender.git
author     Hans Goudey <h.goudey@me.com>  2022-09-05 19:56:34 +0300
committer  Hans Goudey <h.goudey@me.com>  2022-09-05 19:56:34 +0300
commit     05952aa94d33eeb504fa63618ba35c2bcc8bd19b (patch)
tree       c9ec37adf20c3c37ccaab44869220dcbe8e987a3 /source/blender/blenkernel/intern/mesh_calc_edges.cc
parent     63cfc8f9f6d623f33b50c5c07976af2b22845713 (diff)
Mesh: Remove redundant custom data pointers
For copy-on-write, we want to share attribute arrays between meshes where possible. Mutable pointers like `Mesh.mvert` make that difficult by making ownership vague. They also make code more complex by adding redundancy.

The simplest solution is just removing them and retrieving layers from `CustomData` as needed. Similar changes have already been applied to curves and point clouds (e9f82d3dc7ee, 410a6efb747f). Removing use of the pointers generally makes code more obvious and more reusable.

Mesh data is now accessed with a C++ API (`Mesh::edges()` or `Mesh::edges_for_write()`) and a C API (`BKE_mesh_edges(mesh)`).

The CoW changes this commit makes possible are described in T95845 and T95842, and started in D14139 and D14140. The change also simplifies the ongoing mesh struct-of-array refactors from T95965.

**RNA/Python Access Performance**

Theoretically, accessing mesh elements with the RNA API may become slower, since the layer needs to be found on every random access. However, the overhead is already high enough that this doesn't make a noticeable difference, and performance is actually improved in some cases. Random access can be up to 10% faster, but other situations might be a bit slower. Generally, using `foreach_get/set` is the best way to improve performance. See the differential revision for more discussion about Python performance.

Cycles has been updated to use raw pointers and the internal Blender mesh types, mostly because there is no sense in having this overhead when it's already compiled with Blender. In my tests this roughly halves the Cycles mesh creation time (0.19s to 0.10s for a 1 million face grid).

Differential Revision: https://developer.blender.org/D15488
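To make the new access pattern concrete, here is a small sketch. It is not part of the patch: the two helper functions and their names are invented for illustration, and only the accessors named above (`Mesh::edges()`, `Mesh::loops_for_write()`, with `BKE_mesh_edges(mesh)` as the C-side equivalent for read-only pointer access) come from this change.

```cpp
/* Illustrative sketch only. Assumes C++ code inside Blender, where the accessor
 * methods are declared on the DNA Mesh struct and Span/MutableSpan come from
 * BLI_span.hh. */
#include "BLI_span.hh"
#include "DNA_mesh_types.h"
#include "DNA_meshdata_types.h"

/* Read-only access: `Mesh::edges()` replaces spelling out
 * `Span(mesh->medge, mesh->totedge)` via the mutable `medge` pointer. */
static int count_edges_of_vertex(Mesh *mesh, const unsigned int vert_index)
{
  const blender::Span<MEdge> edges = mesh->edges();
  int count = 0;
  for (const MEdge &edge : edges) {
    if (edge.v1 == vert_index || edge.v2 == vert_index) {
      count++;
    }
  }
  return count;
}

/* Write access is requested explicitly, which keeps ownership of the layer clear
 * and gives the planned copy-on-write sharing a single place to hook into later. */
static void reset_loop_edge_indices(Mesh *mesh)
{
  blender::MutableSpan<MLoop> loops = mesh->loops_for_write();
  for (MLoop &loop : loops) {
    loop.e = 0;
  }
}
```

The diff to `mesh_calc_edges.cc` below applies the same substitution: a span is constructed once per function through the accessors instead of being rebuilt from the raw pointers at each use.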
Diffstat (limited to 'source/blender/blenkernel/intern/mesh_calc_edges.cc')
-rw-r--r--  source/blender/blenkernel/intern/mesh_calc_edges.cc | 14
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/source/blender/blenkernel/intern/mesh_calc_edges.cc b/source/blender/blenkernel/intern/mesh_calc_edges.cc
index 31e20750cf2..8a8960fb8a7 100644
--- a/source/blender/blenkernel/intern/mesh_calc_edges.cc
+++ b/source/blender/blenkernel/intern/mesh_calc_edges.cc
@@ -80,9 +80,10 @@ static void add_existing_edges_to_hash_maps(Mesh *mesh,
                                              uint32_t parallel_mask)
 {
   /* Assume existing edges are valid. */
+  const Span<MEdge> edges = mesh->edges();
   threading::parallel_for_each(edge_maps, [&](EdgeMap &edge_map) {
     const int task_index = &edge_map - edge_maps.data();
-    for (const MEdge &edge : Span(mesh->medge, mesh->totedge)) {
+    for (const MEdge &edge : edges) {
       OrderedEdge ordered_edge{edge.v1, edge.v2};
       /* Only add the edge when it belongs into this map. */
       if (task_index == (parallel_mask & ordered_edge.hash2())) {
@@ -96,10 +97,11 @@ static void add_polygon_edges_to_hash_maps(Mesh *mesh,
                                             MutableSpan<EdgeMap> edge_maps,
                                             uint32_t parallel_mask)
 {
-  const Span<MLoop> loops{mesh->mloop, mesh->totloop};
+  const Span<MPoly> polys = mesh->polygons();
+  const Span<MLoop> loops = mesh->loops();
   threading::parallel_for_each(edge_maps, [&](EdgeMap &edge_map) {
     const int task_index = &edge_map - edge_maps.data();
-    for (const MPoly &poly : Span(mesh->mpoly, mesh->totpoly)) {
+    for (const MPoly &poly : polys) {
       Span<MLoop> poly_loops = loops.slice(poly.loopstart, poly.totloop);
       const MLoop *prev_loop = &poly_loops.last();
       for (const MLoop &next_loop : poly_loops) {
@@ -157,10 +159,11 @@ static void update_edge_indices_in_poly_loops(Mesh *mesh,
                                                Span<EdgeMap> edge_maps,
                                                uint32_t parallel_mask)
 {
-  const MutableSpan<MLoop> loops{mesh->mloop, mesh->totloop};
+  const Span<MPoly> polys = mesh->polygons();
+  MutableSpan<MLoop> loops = mesh->loops_for_write();
   threading::parallel_for(IndexRange(mesh->totpoly), 100, [&](IndexRange range) {
     for (const int poly_index : range) {
-      MPoly &poly = mesh->mpoly[poly_index];
+      const MPoly &poly = polys[poly_index];
       MutableSpan<MLoop> poly_loops = loops.slice(poly.loopstart, poly.totloop);
       MLoop *prev_loop = &poly_loops.last();
@@ -242,7 +245,6 @@ void BKE_mesh_calc_edges(Mesh *mesh, bool keep_existing_edges, const bool select
   CustomData_reset(&mesh->edata);
   CustomData_add_layer(&mesh->edata, CD_MEDGE, CD_ASSIGN, new_edges.data(), new_totedge);
   mesh->totedge = new_totedge;
-  mesh->medge = new_edges.data();
   /* Explicitly clear edge maps, because that way it can be parallelized. */
   clear_hash_tables(edge_maps);