git.blender.org/blender.git
author     Sergey Sharybin <sergey.vfx@gmail.com>  2015-07-20 17:08:06 +0300
committer  Sergey Sharybin <sergey.vfx@gmail.com>  2015-07-20 23:29:26 +0300
commit     3d364896725db8336d785ba6cf977b62c0f2c0ce (patch)
tree       b189cdc4956b557d19b0c2ea976c6f48affa1ed2 /source
parent     2466c4f8cebd3977f29524d79050feff44b40fff (diff)
OpenSubdiv: Commit of OpenSubdiv integration into Blender
This commit contains all the remaining parts needed for the initial integration of OpenSubdiv into Blender's subdivision surface code. It includes both GPU and CPU backends, which work in the following way:

- When the SubSurf modifier is the last one in the modifier stack, the GPU pipeline of OpenSubdiv is used, making viewport performance as fast as possible. This requires a graphics card with GLSL 1.5 support; if this requirement is not met, no GPU pipeline is used at all.
- If SubSurf is not the last modifier, or if the DerivedMesh is being evaluated for rendering, the CPU limit evaluation API from OpenSubdiv is used. This only replaces the legacy evaluation code from CCGSubSurf_legacy, but keeps the CCG structures exactly the same as they have been for ages now.

This integration is fully guarded by ifdefs and not enabled by default, because there are several TODOs to be solved first:

- Face varying data interpolation is not really cleanly implemented for the GPU in OpenSubdiv 3.0. It is also not implemented for the limit evaluation API. This basically means we'll have a really hard time supporting UVs.
- Limit evaluation only works with adaptively subdivided meshes so far, which basically means all the points of the CCG are pushed to the limit. This gives a different result from the old code.
- There are some serious optimizations possible on the topology refiner creation, which would speed up initial OpenSubdiv mesh creation.
- There are some hardcoded assumptions in the GPU and DerivedMesh areas which could be generalized. That's something where Antony and Campbell can help, making the code structured in a way which is reusable by all planned viewport projects.
- There are also some workarounds in the dependency graph to make sure OpenGL buffers are only freed from the main thread.

Those who want to experiment with this code should grab the dev branch (NOT master) from https://github.com/Nazg-Gul/OpenSubdiv/tree/dev — there are some patches applied there which we're working on getting into upstream.
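To make the backend split concrete, the following condensed sketch shows how the pieces introduced in this patch fit together. The function names come from the hunks below; the wrapper itself is hypothetical and leaves out cache management, UV handling and error checks.

/* Hypothetical wrapper illustrating the flow added by this patch; the real code
 * lives in subsurf_ccg.c (subsurf_make_derived_from_derived and the ccgDM_draw*
 * callbacks). */
static void example_subsurf_flow(SubsurfModifierData *smd, DerivedMesh *dm, SubsurfFlags flags)
{
	/* GPU backend only when SubSurf is last in the stack, the user selected an
	 * OSD compute device and the GPU has the required features. */
	const bool use_gpu = subsurf_use_gpu_backend(flags);
	CCGSubSurf *ss = smd->mCache;

#ifdef WITH_OPENSUBDIV
	/* Skipping grids means no CCG geometry is built on the CPU at all. */
	ccgSubSurf_setSkipGrids(ss, use_gpu);
#endif
	ss_sync_from_derivedmesh(ss, dm, NULL, false);

#ifdef WITH_OPENSUBDIV
	if (use_gpu) {
		/* Draw path: refine on the GPU and draw the GL mesh directly. */
		if (ccgSubSurf_prepareGLMesh(ss, true)) {
			ccgSubSurf_drawGLMesh(ss, true, -1, -1);
		}
	}
#endif
}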
Diffstat (limited to 'source')
-rw-r--r--  source/blender/blenkernel/BKE_modifier.h | 6
-rw-r--r--  source/blender/blenkernel/BKE_subsurf.h | 5
-rw-r--r--  source/blender/blenkernel/CMakeLists.txt | 10
-rw-r--r--  source/blender/blenkernel/SConscript | 5
-rw-r--r--  source/blender/blenkernel/intern/CCGSubSurf.c | 98
-rw-r--r--  source/blender/blenkernel/intern/CCGSubSurf.h | 46
-rw-r--r--  source/blender/blenkernel/intern/CCGSubSurf_intern.h | 57
-rw-r--r--  source/blender/blenkernel/intern/DerivedMesh.c | 66
-rw-r--r--  source/blender/blenkernel/intern/scene.c | 45
-rw-r--r--  source/blender/blenkernel/intern/subsurf_ccg.c | 609
-rw-r--r--  source/blender/gpu/CMakeLists.txt | 4
-rw-r--r--  source/blender/gpu/GPU_draw.h | 5
-rw-r--r--  source/blender/gpu/GPU_material.h | 10
-rw-r--r--  source/blender/gpu/SConscript | 4
-rw-r--r--  source/blender/gpu/intern/gpu_codegen.c | 178
-rw-r--r--  source/blender/gpu/intern/gpu_codegen.h | 3
-rw-r--r--  source/blender/gpu/intern/gpu_draw.c | 73
-rw-r--r--  source/blender/gpu/intern/gpu_extensions.c | 100
-rw-r--r--  source/blender/gpu/intern/gpu_material.c | 89
-rw-r--r--  source/blender/gpu/intern/gpu_simple_shader.c | 3
-rw-r--r--  source/blender/gpu/shaders/gpu_shader_material.glsl | 5
-rw-r--r--  source/blender/gpu/shaders/gpu_shader_vertex.glsl | 21
-rw-r--r--  source/blender/makesdna/DNA_userdef_types.h | 13
-rw-r--r--  source/blender/makesrna/SConscript | 4
-rw-r--r--  source/blender/makesrna/intern/CMakeLists.txt | 7
-rw-r--r--  source/blender/makesrna/intern/SConscript | 4
-rw-r--r--  source/blender/makesrna/intern/rna_userdef.c | 75
-rw-r--r--  source/blender/modifiers/CMakeLists.txt | 4
-rw-r--r--  source/blender/modifiers/intern/MOD_subsurf.c | 31
-rw-r--r--  source/blender/windowmanager/CMakeLists.txt | 7
-rw-r--r--  source/blender/windowmanager/SConscript | 4
-rw-r--r--  source/blender/windowmanager/intern/wm_init_exit.c | 8
-rw-r--r--  source/blenderplayer/CMakeLists.txt | 4
-rw-r--r--  source/gameengine/Ketsji/BL_BlenderShader.cpp | 2
-rw-r--r--  source/gameengine/Rasterizer/RAS_OpenGLRasterizer/RAS_StorageIM.cpp | 2
35 files changed, 1428 insertions, 179 deletions
diff --git a/source/blender/blenkernel/BKE_modifier.h b/source/blender/blenkernel/BKE_modifier.h
index cc53f9409fd..ded6e13e003 100644
--- a/source/blender/blenkernel/BKE_modifier.h
+++ b/source/blender/blenkernel/BKE_modifier.h
@@ -116,6 +116,12 @@ typedef enum ModifierApplyFlag {
MOD_APPLY_IGNORE_SIMPLIFY = 1 << 3, /* Ignore scene simplification flag and use subdivisions
* level set in multires modifier.
*/
+ MOD_APPLY_ALLOW_GPU = 1 << 4, /* Allow modifier to be applied and stored on the GPU.
+ * Used by the viewport so that subsurf can
+ * happen on the GPU.
+ * The render pipeline (including viewport render) should
+ * keep the DM on the CPU.
+ */
} ModifierApplyFlag;
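The MOD_subsurf.c change that consumes this flag is only listed in the diffstat above; as a rough, hypothetical sketch (not the actual hunk), a modifier's apply callback would translate it along these lines:

/* Hypothetical sketch: a modifier's apply callback translating the new
 * MOD_APPLY_ALLOW_GPU flag into the subsurf-level SUBSURF_USE_GPU_BACKEND flag. */
static DerivedMesh *applyModifier_sketch(ModifierData *md, Object *ob,
                                         DerivedMesh *dm, ModifierApplyFlag flag)
{
	SubsurfModifierData *smd = (SubsurfModifierData *)md;
	SubsurfFlags subsurf_flags = 0;

	if (flag & MOD_APPLY_ALLOW_GPU) {
		/* Only the viewport passes this; rendering always stays on the CPU. */
		subsurf_flags |= SUBSURF_USE_GPU_BACKEND;
	}
	(void)ob;

	return subsurf_make_derived_from_derived(dm, smd, NULL, subsurf_flags);
}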
diff --git a/source/blender/blenkernel/BKE_subsurf.h b/source/blender/blenkernel/BKE_subsurf.h
index c1c96c8228c..93baca37100 100644
--- a/source/blender/blenkernel/BKE_subsurf.h
+++ b/source/blender/blenkernel/BKE_subsurf.h
@@ -57,7 +57,8 @@ typedef enum {
SUBSURF_IS_FINAL_CALC = 2,
SUBSURF_FOR_EDIT_MODE = 4,
SUBSURF_IN_EDIT_MODE = 8,
- SUBSURF_ALLOC_PAINT_MASK = 16
+ SUBSURF_ALLOC_PAINT_MASK = 16,
+ SUBSURF_USE_GPU_BACKEND = 32,
} SubsurfFlags;
struct DerivedMesh *subsurf_make_derived_from_derived(
@@ -99,7 +100,7 @@ typedef struct CCGDerivedMesh {
struct CCGSubSurf *ss;
int freeSS;
- int drawInteriorEdges, useSubsurfUv;
+ int drawInteriorEdges, useSubsurfUv, useGpuBackend;
struct {int startVert; struct CCGVert *vert; } *vertMap;
struct {int startVert; int startEdge; struct CCGEdge *edge; } *edgeMap;
diff --git a/source/blender/blenkernel/CMakeLists.txt b/source/blender/blenkernel/CMakeLists.txt
index 4b489a02459..349d1303da3 100644
--- a/source/blender/blenkernel/CMakeLists.txt
+++ b/source/blender/blenkernel/CMakeLists.txt
@@ -62,6 +62,8 @@ set(INC_SYS
set(SRC
intern/CCGSubSurf.c
intern/CCGSubSurf_legacy.c
+ intern/CCGSubSurf_opensubdiv.c
+ intern/CCGSubSurf_opensubdiv_converter.c
intern/CCGSubSurf_util.c
intern/DerivedMesh.c
intern/action.c
@@ -496,6 +498,14 @@ if(WITH_FREESTYLE)
add_definitions(-DWITH_FREESTYLE)
endif()
+if(WITH_OPENSUBDIV)
+ add_definitions(-DWITH_OPENSUBDIV)
+ list(APPEND INC_SYS
+ ../../../intern/opensubdiv
+ ${OPENSUBDIV_INCLUDE_DIRS}
+ )
+endif()
+
## Warnings as errors, this is too strict!
#if(MSVC)
# set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /WX")
diff --git a/source/blender/blenkernel/SConscript b/source/blender/blenkernel/SConscript
index 9d19e1c29b4..5e6d402115b 100644
--- a/source/blender/blenkernel/SConscript
+++ b/source/blender/blenkernel/SConscript
@@ -168,6 +168,11 @@ if env['WITH_BF_INTERNATIONAL']:
if env['WITH_BF_FREESTYLE']:
defs.append('WITH_FREESTYLE')
+if env['WITH_BF_OPENSUBDIV']:
+ defs.append('WITH_OPENSUBDIV')
+ incs += ' #intern/opensubdiv'
+ incs += ' ' + env['BF_OPENSUBDIV_INC']
+
if env['OURPLATFORM'] in ('win32-vc', 'win32-mingw', 'linuxcross', 'win64-vc', 'win64-mingw'):
incs += ' ' + env['BF_PTHREADS_INC']
incs += ' ../../../intern/utfconv'
diff --git a/source/blender/blenkernel/intern/CCGSubSurf.c b/source/blender/blenkernel/intern/CCGSubSurf.c
index a60e1fa3076..9ac6166606e 100644
--- a/source/blender/blenkernel/intern/CCGSubSurf.c
+++ b/source/blender/blenkernel/intern/CCGSubSurf.c
@@ -37,6 +37,11 @@
#include "CCGSubSurf_intern.h"
#include "BKE_subsurf.h"
+#ifdef WITH_OPENSUBDIV
+# include "opensubdiv_capi.h"
+# include "opensubdiv_converter_capi.h"
+#endif
+
#include "GL/glew.h"
/***/
@@ -299,6 +304,23 @@ CCGSubSurf *ccgSubSurf_new(CCGMeshIFC *ifc, int subdivLevels, CCGAllocatorIFC *a
ss->tempVerts = NULL;
ss->tempEdges = NULL;
+#ifdef WITH_OPENSUBDIV
+ ss->osd_evaluator = NULL;
+ ss->osd_mesh = NULL;
+ ss->osd_topology_refiner = NULL;
+ ss->osd_mesh_invalid = false;
+ ss->osd_coarse_coords_invalid = false;
+ ss->osd_vao = 0;
+ ss->skip_grids = false;
+ ss->osd_compute = 0;
+ ss->osd_uvs_invalid = true;
+ ss->osd_subsurf_uv = 0;
+ ss->osd_uv_index = -1;
+ ss->osd_next_face_ptex_index = 0;
+ ss->osd_coarse_coords = NULL;
+ ss->osd_num_coarse_coords = 0;
+#endif
+
return ss;
}
}
@@ -307,6 +329,24 @@ void ccgSubSurf_free(CCGSubSurf *ss)
{
CCGAllocatorIFC allocatorIFC = ss->allocatorIFC;
CCGAllocatorHDL allocator = ss->allocator;
+#ifdef WITH_OPENSUBDIV
+ if (ss->osd_evaluator != NULL) {
+ openSubdiv_deleteEvaluatorDescr(ss->osd_evaluator);
+ }
+ if (ss->osd_mesh != NULL) {
+ /* TODO(sergey): Make sure free happens from the main thread! */
+ openSubdiv_deleteOsdGLMesh(ss->osd_mesh);
+ }
+ if (ss->osd_vao != 0) {
+ glDeleteVertexArrays(1, &ss->osd_vao);
+ }
+ if (ss->osd_coarse_coords != NULL) {
+ MEM_freeN(ss->osd_coarse_coords);
+ }
+ if (ss->osd_topology_refiner != NULL) {
+ openSubdiv_deleteTopologyRefinerDescr(ss->osd_topology_refiner);
+ }
+#endif
if (ss->syncState) {
ccg_ehash_free(ss->oldFMap, (EHEntryFreeFP) _face_free, ss);
@@ -467,6 +507,9 @@ CCGError ccgSubSurf_initFullSync(CCGSubSurf *ss)
ss->tempEdges = MEM_mallocN(sizeof(*ss->tempEdges) * ss->lenTempArrays, "CCGSubsurf tempEdges");
ss->syncState = eSyncState_Vert;
+#ifdef WITH_OPENSUBDIV
+ ss->osd_next_face_ptex_index = 0;
+#endif
return eCCGError_None;
}
@@ -607,6 +650,9 @@ CCGError ccgSubSurf_syncVert(CCGSubSurf *ss, CCGVertHDL vHDL, const void *vertDa
ccg_ehash_insert(ss->vMap, (EHEntry *) v);
v->flags = 0;
}
+#ifdef WITH_OPENSUBDIV
+ v->osd_index = ss->vMap->numEntries - 1;
+#endif
}
if (v_r) *v_r = v;
@@ -789,6 +835,15 @@ CCGError ccgSubSurf_syncFace(CCGSubSurf *ss, CCGFaceHDL fHDL, int numVerts, CCGV
}
}
}
+#ifdef WITH_OPENSUBDIV
+ f->osd_index = ss->osd_next_face_ptex_index;
+ if (numVerts == 4) {
+ ss->osd_next_face_ptex_index++;
+ }
+ else {
+ ss->osd_next_face_ptex_index += numVerts;
+ }
+#endif
}
if (f_r) *f_r = f;
@@ -797,7 +852,18 @@ CCGError ccgSubSurf_syncFace(CCGSubSurf *ss, CCGFaceHDL fHDL, int numVerts, CCGV
static void ccgSubSurf__sync(CCGSubSurf *ss)
{
- ccgSubSurf__sync_legacy(ss);
+#ifdef WITH_OPENSUBDIV
+ /* TODO(sergey): This is because OSD evaluator does not support
+ * bilinear subdivision scheme at this moment.
+ */
+ if (ss->meshIFC.simpleSubdiv == false || ss->skip_grids == true) {
+ ccgSubSurf__sync_opensubdiv(ss);
+ }
+ else
+#endif
+ {
+ ccgSubSurf__sync_legacy(ss);
+ }
}
CCGError ccgSubSurf_processSync(CCGSubSurf *ss)
@@ -1128,15 +1194,39 @@ CCGError ccgSubSurf_stitchFaces(CCGSubSurf *ss, int lvl, CCGFace **effectedF, in
int ccgSubSurf_getNumVerts(const CCGSubSurf *ss)
{
- return ss->vMap->numEntries;
+#ifdef WITH_OPENSUBDIV
+ if (ss->skip_grids) {
+ return ccgSubSurf__getNumOsdBaseVerts(ss);
+ }
+ else
+#endif
+ {
+ return ss->vMap->numEntries;
+ }
}
int ccgSubSurf_getNumEdges(const CCGSubSurf *ss)
{
- return ss->eMap->numEntries;
+#ifdef WITH_OPENSUBDIV
+ if (ss->skip_grids) {
+ return ccgSubSurf__getNumOsdBaseEdges(ss);
+ }
+ else
+#endif
+ {
+ return ss->eMap->numEntries;
+ }
}
int ccgSubSurf_getNumFaces(const CCGSubSurf *ss)
{
- return ss->fMap->numEntries;
+#ifdef WITH_OPENSUBDIV
+ if (ss->skip_grids) {
+ return ccgSubSurf__getNumOsdBaseFaces(ss);
+ }
+ else
+#endif
+ {
+ return ss->fMap->numEntries;
+ }
}
CCGVert *ccgSubSurf_getVert(CCGSubSurf *ss, CCGVertHDL v)
diff --git a/source/blender/blenkernel/intern/CCGSubSurf.h b/source/blender/blenkernel/intern/CCGSubSurf.h
index 2b86a2a66b2..23f7e71a311 100644
--- a/source/blender/blenkernel/intern/CCGSubSurf.h
+++ b/source/blender/blenkernel/intern/CCGSubSurf.h
@@ -80,6 +80,9 @@ void ccgSubSurf_free (CCGSubSurf *ss);
CCGError ccgSubSurf_initFullSync (CCGSubSurf *ss);
CCGError ccgSubSurf_initPartialSync (CCGSubSurf *ss);
+#ifdef WITH_OPENSUBDIV
+CCGError ccgSubSurf_initOpenSubdivSync (CCGSubSurf *ss);
+#endif
CCGError ccgSubSurf_syncVert (CCGSubSurf *ss, CCGVertHDL vHDL, const void *vertData, int seam, CCGVert **v_r);
CCGError ccgSubSurf_syncEdge (CCGSubSurf *ss, CCGEdgeHDL eHDL, CCGVertHDL e_vHDL0, CCGVertHDL e_vHDL1, float crease, CCGEdge **e_r);
@@ -190,4 +193,47 @@ CCGFace* ccgFaceIterator_getCurrent (CCGFaceIterator *fi);
int ccgFaceIterator_isStopped (CCGFaceIterator *fi);
void ccgFaceIterator_next (CCGFaceIterator *fi);
+#ifdef WITH_OPENSUBDIV
+struct DerivedMesh;
+
+/* Check if topology changed and evaluators are to be re-created. */
+void ccgSubSurf_checkTopologyChanged(CCGSubSurf *ss, struct DerivedMesh *dm);
+
+/* Create topology refiner from the given derived mesh, which will later
+ * be used for GL mesh creation.
+ */
+void ccgSubSurf_prepareTopologyRefiner(CCGSubSurf *ss, struct DerivedMesh *dm);
+
+/* Make sure GL mesh exists, up to date and ready to draw. */
+bool ccgSubSurf_prepareGLMesh(CCGSubSurf *ss, bool use_osd_glsl);
+
+/* Draw given partitions of the GL mesh.
+ *
+ * TODO(sergey): fill_quads is actually an invariant and should be part
+ * of the prepare routine.
+ */
+void ccgSubSurf_drawGLMesh(CCGSubSurf *ss, bool fill_quads,
+ int start_partition, int num_partitions);
+
+/* Controls whether CCG grids are needed (meaning CPU evaluation) or fully
+ * GPU compute and draw is allowed.
+ */
+void ccgSubSurf_setSkipGrids(CCGSubSurf *ss, bool skip_grids);
+bool ccgSubSurf_needGrids(CCGSubSurf *ss);
+
+/* Set evaluator's face varying data from UV coordinates.
+ * Used for CPU evaluation.
+ */
+void ccgSubSurf_evaluatorSetFVarUV(CCGSubSurf *ss,
+ struct DerivedMesh *dm,
+ int layer_index);
+
+/* TODO(sergey): Temporary call to test things. */
+void ccgSubSurf_evaluatorFVarUV(CCGSubSurf *ss,
+ int face_index, int S,
+ float grid_u, float grid_v,
+ float uv[2]);
+
+#endif
+
#endif /* __CCGSUBSURF_H__ */
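Later in subsurf_ccg.c the drawing callbacks use these entry points in one small, fixed pattern; a condensed sketch follows (the helper name is illustrative, material setup and partial-draw options are trimmed):

/* Condensed from the ccgDM_drawEdges / ccgDM_drawFacesSolid hunks below. */
static void draw_gl_mesh_sketch(CCGSubSurf *ss, bool fill_quads, bool use_osd_glsl)
{
	/* Creates or refreshes the OpenSubdiv GL mesh; must run on the main
	 * thread since it touches OpenGL. */
	if (!ccgSubSurf_prepareGLMesh(ss, use_osd_glsl)) {
		return;
	}
	/* fill_quads selects faces vs. wire edges; -1, -1 draws all partitions. */
	ccgSubSurf_drawGLMesh(ss, fill_quads, -1, -1);
}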
diff --git a/source/blender/blenkernel/intern/CCGSubSurf_intern.h b/source/blender/blenkernel/intern/CCGSubSurf_intern.h
index 1689ac482ef..d80bdcdb7fc 100644
--- a/source/blender/blenkernel/intern/CCGSubSurf_intern.h
+++ b/source/blender/blenkernel/intern/CCGSubSurf_intern.h
@@ -161,6 +161,9 @@ typedef enum {
eSyncState_Edge,
eSyncState_Face,
eSyncState_Partial,
+#ifdef WITH_OPENSUBDIV
+ eSyncState_OpenSubdiv,
+#endif
} SyncState;
struct CCGSubSurf {
@@ -203,6 +206,60 @@ struct CCGSubSurf {
int lenTempArrays;
CCGVert **tempVerts;
CCGEdge **tempEdges;
+
+#ifdef WITH_OPENSUBDIV
+ /* Skip grids means no CCG geometry is created and subsurf can be
+ * done completely on the GPU.
+ */
+ bool skip_grids;
+
+ /* ** GPU backend. ** */
+
+ /* Compute device used by GL mesh. */
+ short osd_compute;
+ /* Coarse (base mesh) vertex coordinates.
+ *
+ * Filled in from the modifier stack and passed to OpenSubdiv compute
+ * on mesh display.
+ */
+ float (*osd_coarse_coords)[3];
+ int osd_num_coarse_coords;
+ /* Denotes whether coarse positions in the GL mesh are invalid.
+ * Used to avoid updating GL mesh coords on every redraw.
+ */
+ bool osd_coarse_coords_invalid;
+
+ /* GL mesh descriptor, used for refinement and draw. */
+ struct OpenSubdiv_GLMesh *osd_mesh;
+ /* Refiner which is used to create GL mesh.
+ *
+ * Refiner is created from the modifier stack and used later from the main
+ * thread to construct GL mesh to avoid threaded access to GL.
+ */
+ struct OpenSubdiv_TopologyRefinerDescr *osd_topology_refiner; /* Only used at synchronization stage. */
+ /* Denotes whether osd_mesh is invalid now due to topology changes and needs
+ * to be reconstructed.
+ *
+ * Reconstruction happens from main thread due to OpenGL communication.
+ */
+ bool osd_mesh_invalid;
+ /* Vertex array used for osd_mesh draw. */
+ unsigned int osd_vao;
+
+ /* ** CPU backend. ** */
+
+ /* Limit evaluator, used to evaluate CCG. */
+ struct OpenSubdiv_EvaluatorDescr *osd_evaluator;
+ /* Next PTex face index, used during CCG synchronization
+ * to fill in the PTex index of CCGFace.
+ */
+ int osd_next_face_ptex_index;
+
+ /* ** Needs review. ** */
+ bool osd_subsurf_uv;
+ int osd_uv_index;
+ bool osd_uvs_invalid;
+#endif
};
/* ** Utility macros ** */
diff --git a/source/blender/blenkernel/intern/DerivedMesh.c b/source/blender/blenkernel/intern/DerivedMesh.c
index 7582eb2324a..05ec83e70a9 100644
--- a/source/blender/blenkernel/intern/DerivedMesh.c
+++ b/source/blender/blenkernel/intern/DerivedMesh.c
@@ -1621,6 +1621,7 @@ static void mesh_calc_modifiers(
const bool useRenderParams, int useDeform,
const bool need_mapping, CustomDataMask dataMask,
const int index, const bool useCache, const bool build_shapekey_layers,
+ const bool allow_gpu,
/* return args */
DerivedMesh **r_deform, DerivedMesh **r_final)
{
@@ -1663,6 +1664,8 @@ static void mesh_calc_modifiers(
if (useCache)
app_flags |= MOD_APPLY_USECACHE;
+ if (allow_gpu)
+ app_flags |= MOD_APPLY_ALLOW_GPU;
if (useDeform)
deform_app_flags |= MOD_APPLY_USECACHE;
@@ -2327,9 +2330,9 @@ static void editbmesh_calc_modifiers(
}
if (mti->applyModifierEM)
- ndm = modwrap_applyModifierEM(md, ob, em, dm, MOD_APPLY_USECACHE);
+ ndm = modwrap_applyModifierEM(md, ob, em, dm, MOD_APPLY_USECACHE | MOD_APPLY_ALLOW_GPU);
else
- ndm = modwrap_applyModifier(md, ob, dm, MOD_APPLY_USECACHE);
+ ndm = modwrap_applyModifier(md, ob, dm, MOD_APPLY_USECACHE | MOD_APPLY_ALLOW_GPU);
ASSERT_IS_VALID_DM(ndm);
if (ndm) {
@@ -2449,6 +2452,23 @@ static void editbmesh_calc_modifiers(
MEM_freeN(deformedVerts);
}
+#ifdef WITH_OPENSUBDIV
+/* The idea is to skip CPU-side ORCO calculation when
+ * we'll be using the GPU backend of OpenSubdiv. This is so
+ * playback performance is kept as high as possible.
+ */
+static bool calc_modifiers_skip_orco(const Object *ob)
+{
+ const ModifierData *last_md = ob->modifiers.last;
+ if (last_md != NULL &&
+ last_md->type == eModifierType_Subsurf)
+ {
+ return true;
+ }
+ return false;
+}
+#endif
+
static void mesh_build_data(
Scene *scene, Object *ob, CustomDataMask dataMask,
const bool build_shapekey_layers, const bool need_mapping)
@@ -2458,8 +2478,15 @@ static void mesh_build_data(
BKE_object_free_derived_caches(ob);
BKE_object_sculpt_modifiers_changed(ob);
+#ifdef WITH_OPENSUBDIV
+ if (calc_modifiers_skip_orco(ob)) {
+ dataMask &= ~CD_MASK_ORCO;
+ }
+#endif
+
mesh_calc_modifiers(
scene, ob, NULL, false, 1, need_mapping, dataMask, -1, true, build_shapekey_layers,
+ true,
&ob->derivedDeform, &ob->derivedFinal);
DM_set_object_boundbox(ob, ob->derivedFinal);
@@ -2486,6 +2513,12 @@ static void editbmesh_build_data(Scene *scene, Object *obedit, BMEditMesh *em, C
BKE_editmesh_free_derivedmesh(em);
+#ifdef WITH_OPENSUBDIV
+ if (calc_modifiers_skip_orco(obedit)) {
+ dataMask &= ~CD_MASK_ORCO;
+ }
+#endif
+
editbmesh_calc_modifiers(
scene, obedit, em, dataMask,
&em->derivedCage, &em->derivedFinal);
@@ -2597,7 +2630,7 @@ DerivedMesh *mesh_create_derived_render(Scene *scene, Object *ob, CustomDataMask
DerivedMesh *final;
mesh_calc_modifiers(
- scene, ob, NULL, true, 1, false, dataMask, -1, false, false,
+ scene, ob, NULL, true, 1, false, dataMask, -1, false, false, false,
NULL, &final);
return final;
@@ -2608,7 +2641,7 @@ DerivedMesh *mesh_create_derived_index_render(Scene *scene, Object *ob, CustomDa
DerivedMesh *final;
mesh_calc_modifiers(
- scene, ob, NULL, true, 1, false, dataMask, index, false, false,
+ scene, ob, NULL, true, 1, false, dataMask, index, false, false, false,
NULL, &final);
return final;
@@ -2627,7 +2660,7 @@ DerivedMesh *mesh_create_derived_view(
ob->transflag |= OB_NO_PSYS_UPDATE;
mesh_calc_modifiers(
- scene, ob, NULL, false, 1, false, dataMask, -1, false, false,
+ scene, ob, NULL, false, 1, false, dataMask, -1, false, false, false,
NULL, &final);
ob->transflag &= ~OB_NO_PSYS_UPDATE;
@@ -2642,7 +2675,7 @@ DerivedMesh *mesh_create_derived_no_deform(
DerivedMesh *final;
mesh_calc_modifiers(
- scene, ob, vertCos, false, 0, false, dataMask, -1, false, false,
+ scene, ob, vertCos, false, 0, false, dataMask, -1, false, false, false,
NULL, &final);
return final;
@@ -2655,7 +2688,7 @@ DerivedMesh *mesh_create_derived_no_virtual(
DerivedMesh *final;
mesh_calc_modifiers(
- scene, ob, vertCos, false, -1, false, dataMask, -1, false, false,
+ scene, ob, vertCos, false, -1, false, dataMask, -1, false, false, false,
NULL, &final);
return final;
@@ -2668,7 +2701,7 @@ DerivedMesh *mesh_create_derived_physics(
DerivedMesh *final;
mesh_calc_modifiers(
- scene, ob, vertCos, false, -1, true, dataMask, -1, false, false,
+ scene, ob, vertCos, false, -1, true, dataMask, -1, false, false, false,
NULL, &final);
return final;
@@ -2682,7 +2715,7 @@ DerivedMesh *mesh_create_derived_no_deform_render(
DerivedMesh *final;
mesh_calc_modifiers(
- scene, ob, vertCos, true, 0, false, dataMask, -1, false, false,
+ scene, ob, vertCos, true, 0, false, dataMask, -1, false, false, false,
NULL, &final);
return final;
@@ -3400,9 +3433,18 @@ void DM_set_object_boundbox(Object *ob, DerivedMesh *dm)
{
float min[3], max[3];
- INIT_MINMAX(min, max);
-
- dm->getMinMax(dm, min, max);
+#ifdef WITH_OPENSUBDIV
+ /* TODO(sergey): Currently no way to access bounding box from hi-res mesh. */
+ if (dm->type == DM_TYPE_CCGDM) {
+ copy_v3_fl3(min, -1.0f, -1.0f, -1.0f);
+ copy_v3_fl3(max, 1.0f, 1.0f, 1.0f);
+ }
+ else
+#endif
+ {
+ INIT_MINMAX(min, max);
+ dm->getMinMax(dm, min, max);
+ }
if (!ob->bb)
ob->bb = MEM_callocN(sizeof(BoundBox), "DM-BoundBox");
diff --git a/source/blender/blenkernel/intern/scene.c b/source/blender/blenkernel/intern/scene.c
index 8f3a99cc051..19c2ff10901 100644
--- a/source/blender/blenkernel/intern/scene.c
+++ b/source/blender/blenkernel/intern/scene.c
@@ -89,6 +89,11 @@
#include "BKE_unit.h"
#include "BKE_world.h"
+#ifdef WITH_OPENSUBDIV
+# include "BKE_modifier.h"
+# include "CCGSubSurf.h"
+#endif
+
#include "DEG_depsgraph.h"
#include "RE_engine.h"
@@ -1345,6 +1350,11 @@ static void scene_do_rb_simulation_recursive(Scene *scene, float ctime)
*/
#define MBALL_SINGLETHREAD_HACK
+/* Need this because CCGDM holds some OpenGL resources. */
+#ifdef WITH_OPENSUBDIV
+# define OPENSUBDIV_GL_WORKAROUND
+#endif
+
#ifdef WITH_LEGACY_DEPSGRAPH
typedef struct StatisicsEntry {
struct StatisicsEntry *next, *prev;
@@ -1546,6 +1556,37 @@ static bool scene_need_update_objects(Main *bmain)
DAG_id_type_tagged(bmain, ID_AR); /* Armature */
}
+#ifdef OPENSUBDIV_GL_WORKAROUND
+/* CCG DerivedMesh currently holds some OpenGL handles, which can only be
+ * released from the main thread.
+ *
+ * Ideally we would use gpu_buffer_free, but it's a bit tricky because
+ * some buffers are only accessible from the OpenSubdiv side.
+ */
+static void scene_free_unused_opensubdiv_cache(Scene *scene)
+{
+ Base *base;
+ for (base = scene->base.first; base; base = base->next) {
+ Object *object = base->object;
+ if (object->type == OB_MESH && object->recalc & OB_RECALC_DATA) {
+ ModifierData *md = object->modifiers.last;
+ if (md != NULL && md->type == eModifierType_Subsurf) {
+ SubsurfModifierData *smd = (SubsurfModifierData *) md;
+ bool object_in_editmode = object->mode == OB_MODE_EDIT;
+ if (object_in_editmode && smd->mCache != NULL) {
+ ccgSubSurf_free(smd->mCache);
+ smd->mCache = NULL;
+ }
+ if (!object_in_editmode && smd->emCache != NULL) {
+ ccgSubSurf_free(smd->emCache);
+ smd->emCache = NULL;
+ }
+ }
+ }
+ }
+}
+#endif
+
static void scene_update_objects(EvaluationContext *eval_ctx, Main *bmain, Scene *scene, Scene *scene_parent)
{
TaskScheduler *task_scheduler = BLI_task_scheduler_get();
@@ -1564,6 +1605,10 @@ static void scene_update_objects(EvaluationContext *eval_ctx, Main *bmain, Scene
return;
}
+#ifdef OPENSUBDIV_GL_WORKAROUND
+ scene_free_unused_opensubdiv_cache(scene);
+#endif
+
state.eval_ctx = eval_ctx;
state.scene = scene;
state.scene_parent = scene_parent;
diff --git a/source/blender/blenkernel/intern/subsurf_ccg.c b/source/blender/blenkernel/intern/subsurf_ccg.c
index d419fc70be9..7d21b3304cd 100644
--- a/source/blender/blenkernel/intern/subsurf_ccg.c
+++ b/source/blender/blenkernel/intern/subsurf_ccg.c
@@ -77,6 +77,10 @@
#include "CCGSubSurf.h"
+#ifdef WITH_OPENSUBDIV
+# include "opensubdiv_capi.h"
+#endif
+
/* assumes MLoop's are layed out 4 for each poly, in order */
#define USE_LOOP_LAYOUT_FAST
@@ -88,7 +92,8 @@ static ThreadRWMutex origindex_cache_rwlock = BLI_RWLOCK_INITIALIZER;
static CCGDerivedMesh *getCCGDerivedMesh(CCGSubSurf *ss,
int drawInteriorEdges,
int useSubsurfUv,
- DerivedMesh *dm);
+ DerivedMesh *dm,
+ bool use_gpu_backend);
static int ccgDM_use_grid_pbvh(CCGDerivedMesh *ccgdm);
///
@@ -268,6 +273,7 @@ static int getFaceIndex(CCGSubSurf *ss, CCGFace *f, int S, int x, int y, int edg
}
}
+#ifndef WITH_OPENSUBDIV
static void get_face_uv_map_vert(UvVertMap *vmap, struct MPoly *mpoly, struct MLoop *ml, int fi, CCGVertHDL *fverts)
{
UvMapVert *v, *nv;
@@ -409,7 +415,96 @@ static int ss_sync_from_uv(CCGSubSurf *ss, CCGSubSurf *origss, DerivedMesh *dm,
return 1;
}
+#endif /* WITH_OPENSUBDIV */
+
+#ifdef WITH_OPENSUBDIV
+static void set_subsurf_ccg_uv(CCGSubSurf *ss,
+ DerivedMesh *dm,
+ DerivedMesh *result,
+ int layer_index)
+{
+ CCGFace **faceMap;
+ MTFace *tf;
+ MLoopUV *mluv;
+ CCGFaceIterator fi;
+ int index, gridSize, gridFaces, totface, x, y, S;
+ MLoopUV *dmloopuv = CustomData_get_layer_n(&dm->loopData, CD_MLOOPUV, layer_index);
+ /* need to update both CD_MTFACE & CD_MLOOPUV, hrmf, we could get away with
+ * just tface except applying the modifier then loses subsurf UVs */
+ MTFace *tface = CustomData_get_layer_n(&result->faceData, CD_MTFACE, layer_index);
+ MLoopUV *mloopuv = CustomData_get_layer_n(&result->loopData, CD_MLOOPUV, layer_index);
+
+ if (dmloopuv == NULL || (tface == NULL && mloopuv == NULL)) {
+ return;
+ }
+
+ ccgSubSurf_evaluatorSetFVarUV(ss, dm, layer_index);
+
+ /* get some info from CCGSubSurf */
+ totface = ccgSubSurf_getNumFaces(ss);
+ gridSize = ccgSubSurf_getGridSize(ss);
+ gridFaces = gridSize - 1;
+
+ /* make a map from original faces to CCGFaces */
+ faceMap = MEM_mallocN(totface * sizeof(*faceMap), "facemapuv");
+ for (ccgSubSurf_initFaceIterator(ss, &fi); !ccgFaceIterator_isStopped(&fi); ccgFaceIterator_next(&fi)) {
+ CCGFace *f = ccgFaceIterator_getCurrent(&fi);
+ faceMap[GET_INT_FROM_POINTER(ccgSubSurf_getFaceFaceHandle(f))] = f;
+ }
+
+ /* load coordinates from uvss into tface */
+ tf = tface;
+ mluv = mloopuv;
+ for (index = 0; index < totface; index++) {
+ CCGFace *f = faceMap[index];
+ int numVerts = ccgSubSurf_getFaceNumVerts(f);
+ for (S = 0; S < numVerts; S++) {
+ for (y = 0; y < gridFaces; y++) {
+ for (x = 0; x < gridFaces; x++) {
+ float grid_u = ((float)(x)) / (gridSize - 1),
+ grid_v = ((float)(y)) / (gridSize - 1);
+ float uv[2];
+ /* TODO(sergey): Evaluate all 4 corners. */
+ ccgSubSurf_evaluatorFVarUV(ss,
+ index,
+ S,
+ grid_u, grid_v,
+ uv);
+ if (tf) {
+ copy_v2_v2(tf->uv[0], uv);
+ copy_v2_v2(tf->uv[1], uv);
+ copy_v2_v2(tf->uv[2], uv);
+ copy_v2_v2(tf->uv[3], uv);
+ tf++;
+ }
+ if (mluv) {
+ copy_v2_v2(mluv[0].uv, uv);
+ copy_v2_v2(mluv[1].uv, uv);
+ copy_v2_v2(mluv[2].uv, uv);
+ copy_v2_v2(mluv[3].uv, uv);
+ mluv += 4;
+ }
+ }
+ }
+ }
+ }
+ MEM_freeN(faceMap);
+}
+static void set_subsurf_uv(CCGSubSurf *ss,
+ DerivedMesh *dm,
+ DerivedMesh *result,
+ int layer_index)
+{
+ if (!ccgSubSurf_needGrids(ss)) {
+ /* GPU backend is used, no need to evaluate UVs on CPU. */
+ /* TODO(sergey): Think of how to support edit mode of UVs. */
+ }
+ else {
+ set_subsurf_ccg_uv(ss, dm, result, layer_index);
+ }
+}
+#else /* WITH_OPENSUBDIV */
static void set_subsurf_uv(CCGSubSurf *ss, DerivedMesh *dm, DerivedMesh *result, int n)
{
CCGSubSurf *uvss;
@@ -490,6 +585,7 @@ static void set_subsurf_uv(CCGSubSurf *ss, DerivedMesh *dm, DerivedMesh *result,
ccgSubSurf_free(uvss);
MEM_freeN(faceMap);
}
+#endif /* WITH_OPENSUBDIV */
/* face weighting */
typedef struct FaceVertWeightEntry {
@@ -571,8 +667,10 @@ static void free_ss_weights(WeightTable *wtable)
MEM_freeN(wtable->weight_table);
}
-static void ss_sync_from_derivedmesh(CCGSubSurf *ss, DerivedMesh *dm,
- float (*vertexCos)[3], int useFlatSubdiv)
+static void ss_sync_ccg_from_derivedmesh(CCGSubSurf *ss,
+ DerivedMesh *dm,
+ float (*vertexCos)[3],
+ int useFlatSubdiv)
{
float creaseFactor = (float) ccgSubSurf_getSubdivisionLevels(ss);
#ifndef USE_DYNSIZE
@@ -672,6 +770,37 @@ static void ss_sync_from_derivedmesh(CCGSubSurf *ss, DerivedMesh *dm,
#endif
}
+#ifdef WITH_OPENSUBDIV
+static void ss_sync_osd_from_derivedmesh(CCGSubSurf *ss,
+ DerivedMesh *dm)
+{
+ ccgSubSurf_initFullSync(ss);
+ ccgSubSurf_prepareTopologyRefiner(ss, dm);
+ ccgSubSurf_processSync(ss);
+}
+#endif /* WITH_OPENSUBDIV */
+
+static void ss_sync_from_derivedmesh(CCGSubSurf *ss,
+ DerivedMesh *dm,
+ float (*vertexCos)[3],
+ int use_flat_subdiv)
+{
+#ifdef WITH_OPENSUBDIV
+ /* Reset all related descriptors if the actual mesh topology changed or if
+ * other evaluation-related settings changed.
+ */
+ ccgSubSurf_checkTopologyChanged(ss, dm);
+ if (!ccgSubSurf_needGrids(ss)) {
+ /* TODO(sergey): Use vertex coordinates and flat subdiv flag. */
+ ss_sync_osd_from_derivedmesh(ss, dm);
+ }
+ else
+#endif
+ {
+ ss_sync_ccg_from_derivedmesh(ss, dm, vertexCos, use_flat_subdiv);
+ }
+}
+
/***/
static int ccgDM_getVertMapIndex(CCGSubSurf *ss, CCGVert *v)
@@ -1651,6 +1780,16 @@ static void ccgDM_drawEdges(DerivedMesh *dm, bool drawLooseEdges, bool drawAllEd
int gridSize = ccgSubSurf_getGridSize(ss);
int useAging;
+#ifdef WITH_OPENSUBDIV
+ if (ccgdm->useGpuBackend) {
+ /* TODO(sergey): We currently only support drawing all edges. */
+ if (ccgSubSurf_prepareGLMesh(ss, true)) {
+ ccgSubSurf_drawGLMesh(ss, false, -1, -1);
+ }
+ return;
+ }
+#endif
+
CCG_key_top_level(&key, ss);
ccgdm_pbvh_update(ccgdm);
@@ -1722,6 +1861,13 @@ static void ccgDM_drawLooseEdges(DerivedMesh *dm)
int totedge = ccgSubSurf_getNumEdges(ss);
int i, j, edgeSize = ccgSubSurf_getEdgeSize(ss);
+#ifdef WITH_OPENSUBDIV
+ if (ccgdm->useGpuBackend) {
+ /* TODO(sergey): Needs implementation. */
+ return;
+ }
+#endif
+
CCG_key_top_level(&key, ss);
for (j = 0; j < totedge; j++) {
@@ -2301,7 +2447,34 @@ static void ccgDM_drawFacesSolid(DerivedMesh *dm, float (*partial_redraw_planes)
return;
}
-
+
+#ifdef WITH_OPENSUBDIV
+ if (ccgdm->useGpuBackend) {
+ CCGSubSurf *ss = ccgdm->ss;
+ DMFlagMat *faceFlags = ccgdm->faceFlags;
+ int new_matnr;
+ bool draw_smooth;
+ if (UNLIKELY(ccgSubSurf_prepareGLMesh(ss, setMaterial != NULL) == false)) {
+ return;
+ }
+ /* TODO(sergey): Single material currently. */
+ if (faceFlags) {
+ draw_smooth = (faceFlags[0].flag & ME_SMOOTH);
+ new_matnr = (faceFlags[0].mat_nr + 1);
+ }
+ else {
+ draw_smooth = true;
+ new_matnr = 1;
+ }
+ if (setMaterial) {
+ setMaterial(new_matnr, NULL);
+ }
+ glShadeModel(draw_smooth ? GL_SMOOTH : GL_FLAT);
+ ccgSubSurf_drawGLMesh(ss, true, -1, -1);
+ return;
+ }
+#endif
+
GPU_vertex_setup(dm);
GPU_normal_setup(dm);
GPU_triangle_setup(dm);
@@ -2334,6 +2507,30 @@ static void ccgDM_drawMappedFacesGLSL(DerivedMesh *dm,
short (*lnors)[4][3] = dm->getTessFaceDataArray(dm, CD_TESSLOOPNORMAL);
int a, i, do_draw, numVerts, matnr, new_matnr, totface;
+#ifdef WITH_OPENSUBDIV
+ if (ccgdm->useGpuBackend) {
+ int new_matnr;
+ bool draw_smooth;
+ GPU_draw_update_fvar_offset(dm);
+ if (UNLIKELY(ccgSubSurf_prepareGLMesh(ss, false) == false)) {
+ return;
+ }
+ /* TODO(sergey): Single material currently. */
+ if (faceFlags) {
+ draw_smooth = (faceFlags[0].flag & ME_SMOOTH);
+ new_matnr = (faceFlags[0].mat_nr + 1);
+ }
+ else {
+ draw_smooth = true;
+ new_matnr = 1;
+ }
+ glShadeModel(draw_smooth ? GL_SMOOTH : GL_FLAT);
+ setMaterial(new_matnr, &gattribs);
+ ccgSubSurf_drawGLMesh(ss, true, -1, -1);
+ return;
+ }
+#endif
+
CCG_key_top_level(&key, ss);
ccgdm_pbvh_update(ccgdm);
@@ -2506,6 +2703,13 @@ static void ccgDM_drawMappedFacesMat(DerivedMesh *dm,
short (*lnors)[4][3] = dm->getTessFaceDataArray(dm, CD_TESSLOOPNORMAL);
int a, i, numVerts, matnr, new_matnr, totface;
+#ifdef WITH_OPENSUBDIV
+ if (ccgdm->useGpuBackend) {
+ BLI_assert(!"Not currently supported");
+ return;
+ }
+#endif
+
CCG_key_top_level(&key, ss);
ccgdm_pbvh_update(ccgdm);
@@ -2678,6 +2882,16 @@ static void ccgDM_drawFacesTex_common(DerivedMesh *dm,
int mat_index;
int tot_element, start_element, tot_drawn;
+#ifdef WITH_OPENSUBDIV
+ if (ccgdm->useGpuBackend) {
+ if (ccgSubSurf_prepareGLMesh(ss, true) == false) {
+ return;
+ }
+ ccgSubSurf_drawGLMesh(ss, true, -1, -1);
+ return;
+ }
+#endif
+
CCG_key_top_level(&key, ss);
ccgdm_pbvh_update(ccgdm);
@@ -2847,6 +3061,26 @@ static void ccgDM_drawMappedFaces(DerivedMesh *dm,
int gridFaces = gridSize - 1, totface;
int prev_mat_nr = -1;
+#ifdef WITH_OPENSUBDIV
+ if (ccgdm->useGpuBackend) {
+ /* TODO(sergey): This is for cases when vertex colors or weights
+ * are being visualized. Currently we don't have CD layers for this data
+ * and here we only make sure there's no garbage displayed.
+ *
+ * In the future we'll either need CD layers for this data or need to pass
+ * this data as face-varying or vertex-varying data in the OSD mesh.
+ */
+ if (setDrawOptions == NULL) {
+ glColor3f(0.8f, 0.8f, 0.8f);
+ }
+ if (UNLIKELY(ccgSubSurf_prepareGLMesh(ss, true) == false)) {
+ return;
+ }
+ ccgSubSurf_drawGLMesh(ss, true, -1, -1);
+ return;
+ }
+#endif
+
CCG_key_top_level(&key, ss);
/* currently unused -- each original face is handled separately */
@@ -3016,6 +3250,13 @@ static void ccgDM_drawMappedEdges(DerivedMesh *dm,
CCGKey key;
int i, useAging, edgeSize = ccgSubSurf_getEdgeSize(ss);
+#ifdef WITH_OPENSUBDIV
+ if (ccgdm->useGpuBackend) {
+ BLI_assert(!"Not currently supported");
+ return;
+ }
+#endif
+
CCG_key_top_level(&key, ss);
ccgSubSurf_getUseAgeCounts(ss, &useAging, NULL, NULL, NULL);
@@ -3051,6 +3292,13 @@ static void ccgDM_drawMappedEdgesInterp(DerivedMesh *dm,
CCGEdgeIterator ei;
int i, useAging, edgeSize = ccgSubSurf_getEdgeSize(ss);
+#ifdef WITH_OPENSUBDIV
+ if (ccgdm->useGpuBackend) {
+ BLI_assert(!"Not currently supported");
+ return;
+ }
+#endif
+
CCG_key_top_level(&key, ss);
ccgSubSurf_getUseAgeCounts(ss, &useAging, NULL, NULL, NULL);
@@ -3145,9 +3393,11 @@ static void ccgDM_release(DerivedMesh *dm)
if (ccgdm->pmap_mem) MEM_freeN(ccgdm->pmap_mem);
MEM_freeN(ccgdm->edgeFlags);
MEM_freeN(ccgdm->faceFlags);
- MEM_freeN(ccgdm->vertMap);
- MEM_freeN(ccgdm->edgeMap);
- MEM_freeN(ccgdm->faceMap);
+ if (ccgdm->useGpuBackend == false) {
+ MEM_freeN(ccgdm->vertMap);
+ MEM_freeN(ccgdm->edgeMap);
+ MEM_freeN(ccgdm->faceMap);
+ }
MEM_freeN(ccgdm);
}
}
@@ -3681,74 +3931,8 @@ static void ccgDM_calcNormals(DerivedMesh *dm)
dm->dirty &= ~DM_DIRTY_NORMALS;
}
-static CCGDerivedMesh *getCCGDerivedMesh(CCGSubSurf *ss,
- int drawInteriorEdges,
- int useSubsurfUv,
- DerivedMesh *dm)
+static void set_default_ccgdm_callbacks(CCGDerivedMesh *ccgdm)
{
- CCGDerivedMesh *ccgdm = MEM_callocN(sizeof(*ccgdm), "ccgdm");
- CCGVertIterator vi;
- CCGEdgeIterator ei;
- CCGFaceIterator fi;
- int index, totvert, totedge, totface;
- int i;
- int vertNum, edgeNum, faceNum;
- int *vertOrigIndex, *faceOrigIndex, *polyOrigIndex, *base_polyOrigIndex, *edgeOrigIndex;
- short *edgeFlags;
- DMFlagMat *faceFlags;
- int *polyidx = NULL;
-#ifndef USE_DYNSIZE
- int *loopidx = NULL, *vertidx = NULL;
- BLI_array_declare(loopidx);
- BLI_array_declare(vertidx);
-#endif
- int loopindex, loopindex2;
- int edgeSize;
- int gridSize;
- int gridFaces, gridCuts;
- /*int gridSideVerts;*/
- int gridSideEdges;
- int numTex, numCol;
- int hasPCol, hasOrigSpace;
- int gridInternalEdges;
- WeightTable wtable = {NULL};
- /* MCol *mcol; */ /* UNUSED */
- MEdge *medge = NULL;
- /* MFace *mface = NULL; */
- MPoly *mpoly = NULL;
- bool has_edge_cd;
-
- DM_from_template(&ccgdm->dm, dm, DM_TYPE_CCGDM,
- ccgSubSurf_getNumFinalVerts(ss),
- ccgSubSurf_getNumFinalEdges(ss),
- ccgSubSurf_getNumFinalFaces(ss),
- ccgSubSurf_getNumFinalFaces(ss) * 4,
- ccgSubSurf_getNumFinalFaces(ss));
-
- CustomData_free_layer_active(&ccgdm->dm.polyData, CD_NORMAL,
- ccgdm->dm.numPolyData);
-
- numTex = CustomData_number_of_layers(&ccgdm->dm.loopData, CD_MLOOPUV);
- numCol = CustomData_number_of_layers(&ccgdm->dm.loopData, CD_MLOOPCOL);
- hasPCol = CustomData_has_layer(&ccgdm->dm.loopData, CD_PREVIEW_MLOOPCOL);
- hasOrigSpace = CustomData_has_layer(&ccgdm->dm.loopData, CD_ORIGSPACE_MLOOP);
-
- if (
- (numTex && CustomData_number_of_layers(&ccgdm->dm.faceData, CD_MTFACE) != numTex) ||
- (numCol && CustomData_number_of_layers(&ccgdm->dm.faceData, CD_MCOL) != numCol) ||
- (hasPCol && !CustomData_has_layer(&ccgdm->dm.faceData, CD_PREVIEW_MCOL)) ||
- (hasOrigSpace && !CustomData_has_layer(&ccgdm->dm.faceData, CD_ORIGSPACE)) )
- {
- CustomData_from_bmeshpoly(&ccgdm->dm.faceData,
- &ccgdm->dm.polyData,
- &ccgdm->dm.loopData,
- ccgSubSurf_getNumFinalFaces(ss));
- }
-
- /* We absolutely need that layer, else it's no valid tessellated data! */
- polyidx = CustomData_add_layer(&ccgdm->dm.faceData, CD_ORIGINDEX, CD_CALLOC,
- NULL, ccgSubSurf_getNumFinalFaces(ss));
-
ccgdm->dm.getMinMax = ccgDM_getMinMax;
ccgdm->dm.getNumVerts = ccgDM_getNumVerts;
ccgdm->dm.getNumEdges = ccgDM_getNumEdges;
@@ -3801,7 +3985,7 @@ static CCGDerivedMesh *getCCGDerivedMesh(CCGSubSurf *ss,
ccgdm->dm.foreachMappedEdge = ccgDM_foreachMappedEdge;
ccgdm->dm.foreachMappedLoop = ccgDM_foreachMappedLoop;
ccgdm->dm.foreachMappedFaceCenter = ccgDM_foreachMappedFaceCenter;
-
+
ccgdm->dm.drawVerts = ccgDM_drawVerts;
ccgdm->dm.drawEdges = ccgDM_drawEdges;
ccgdm->dm.drawLooseEdges = ccgDM_drawLooseEdges;
@@ -3820,14 +4004,22 @@ static CCGDerivedMesh *getCCGDerivedMesh(CCGSubSurf *ss,
ccgdm->dm.copy_gpu_data = ccgDM_copy_gpu_data;
ccgdm->dm.release = ccgDM_release;
-
- ccgdm->ss = ss;
- ccgdm->drawInteriorEdges = drawInteriorEdges;
- ccgdm->useSubsurfUv = useSubsurfUv;
+}
+
+static void create_ccgdm_maps(CCGDerivedMesh *ccgdm,
+ CCGSubSurf *ss)
+{
+ CCGVertIterator vi;
+ CCGEdgeIterator ei;
+ CCGFaceIterator fi;
+ int totvert, totedge, totface;
totvert = ccgSubSurf_getNumVerts(ss);
ccgdm->vertMap = MEM_mallocN(totvert * sizeof(*ccgdm->vertMap), "vertMap");
- for (ccgSubSurf_initVertIterator(ss, &vi); !ccgVertIterator_isStopped(&vi); ccgVertIterator_next(&vi)) {
+ for (ccgSubSurf_initVertIterator(ss, &vi);
+ !ccgVertIterator_isStopped(&vi);
+ ccgVertIterator_next(&vi))
+ {
CCGVert *v = ccgVertIterator_getCurrent(&vi);
ccgdm->vertMap[GET_INT_FROM_POINTER(ccgSubSurf_getVertVertHandle(v))].vert = v;
@@ -3835,7 +4027,10 @@ static CCGDerivedMesh *getCCGDerivedMesh(CCGSubSurf *ss,
totedge = ccgSubSurf_getNumEdges(ss);
ccgdm->edgeMap = MEM_mallocN(totedge * sizeof(*ccgdm->edgeMap), "edgeMap");
- for (ccgSubSurf_initEdgeIterator(ss, &ei); !ccgEdgeIterator_isStopped(&ei); ccgEdgeIterator_next(&ei)) {
+ for (ccgSubSurf_initEdgeIterator(ss, &ei);
+ !ccgEdgeIterator_isStopped(&ei);
+ ccgEdgeIterator_next(&ei))
+ {
CCGEdge *e = ccgEdgeIterator_getCurrent(&ei);
ccgdm->edgeMap[GET_INT_FROM_POINTER(ccgSubSurf_getEdgeEdgeHandle(e))].edge = e;
@@ -3843,13 +4038,60 @@ static CCGDerivedMesh *getCCGDerivedMesh(CCGSubSurf *ss,
totface = ccgSubSurf_getNumFaces(ss);
ccgdm->faceMap = MEM_mallocN(totface * sizeof(*ccgdm->faceMap), "faceMap");
- for (ccgSubSurf_initFaceIterator(ss, &fi); !ccgFaceIterator_isStopped(&fi); ccgFaceIterator_next(&fi)) {
+ for (ccgSubSurf_initFaceIterator(ss, &fi);
+ !ccgFaceIterator_isStopped(&fi);
+ ccgFaceIterator_next(&fi))
+ {
CCGFace *f = ccgFaceIterator_getCurrent(&fi);
ccgdm->faceMap[GET_INT_FROM_POINTER(ccgSubSurf_getFaceFaceHandle(f))].face = f;
}
+}
+
+/* Fill in all geometry arrays making it possible to access any
+ * hires data from the CPU.
+ */
+static void set_ccgdm_all_geometry(CCGDerivedMesh *ccgdm,
+ CCGSubSurf *ss,
+ DerivedMesh *dm,
+ bool useSubsurfUv)
+{
+ const int totvert = ccgSubSurf_getNumVerts(ss);
+ const int totedge = ccgSubSurf_getNumEdges(ss);
+ const int totface = ccgSubSurf_getNumFaces(ss);
+ int index;
+ int i;
+ int vertNum = 0, edgeNum = 0, faceNum = 0;
+ int *vertOrigIndex, *faceOrigIndex, *polyOrigIndex, *base_polyOrigIndex, *edgeOrigIndex;
+ short *edgeFlags = ccgdm->edgeFlags;
+ DMFlagMat *faceFlags = ccgdm->faceFlags;
+ int *polyidx = NULL;
+#ifndef USE_DYNSIZE
+ int *loopidx = NULL, *vertidx = NULL;
+ BLI_array_declare(loopidx);
+ BLI_array_declare(vertidx);
+#endif
+ int loopindex, loopindex2;
+ int edgeSize;
+ int gridSize;
+ int gridFaces, gridCuts;
+ int gridSideEdges;
+ int numTex, numCol;
+ int hasPCol, hasOrigSpace;
+ int gridInternalEdges;
+ WeightTable wtable = {NULL};
+ MEdge *medge = NULL;
+ MPoly *mpoly = NULL;
+ bool has_edge_cd;
- ccgdm->reverseFaceMap = MEM_callocN(sizeof(int) * ccgSubSurf_getNumFinalFaces(ss), "reverseFaceMap");
+ numTex = CustomData_number_of_layers(&ccgdm->dm.loopData, CD_MLOOPUV);
+ numCol = CustomData_number_of_layers(&ccgdm->dm.loopData, CD_MLOOPCOL);
+ hasPCol = CustomData_has_layer(&ccgdm->dm.loopData, CD_PREVIEW_MLOOPCOL);
+ hasOrigSpace = CustomData_has_layer(&ccgdm->dm.loopData, CD_ORIGSPACE_MLOOP);
+
+ /* We absolutely need that layer, else it's no valid tessellated data! */
+ polyidx = CustomData_add_layer(&ccgdm->dm.faceData, CD_ORIGINDEX, CD_CALLOC,
+ NULL, ccgSubSurf_getNumFinalFaces(ss));
edgeSize = ccgSubSurf_getEdgeSize(ss);
gridSize = ccgSubSurf_getGridSize(ss);
@@ -3857,11 +4099,7 @@ static CCGDerivedMesh *getCCGDerivedMesh(CCGSubSurf *ss,
gridCuts = gridSize - 2;
/*gridInternalVerts = gridSideVerts * gridSideVerts; - as yet, unused */
gridSideEdges = gridSize - 1;
- gridInternalEdges = (gridSideEdges - 1) * gridSideEdges * 2;
-
- vertNum = 0;
- edgeNum = 0;
- faceNum = 0;
+ gridInternalEdges = (gridSideEdges - 1) * gridSideEdges * 2;
/* mvert = dm->getVertArray(dm); */ /* UNUSED */
medge = dm->getEdgeArray(dm);
@@ -3869,10 +4107,6 @@ static CCGDerivedMesh *getCCGDerivedMesh(CCGSubSurf *ss,
mpoly = CustomData_get_layer(&dm->polyData, CD_MPOLY);
base_polyOrigIndex = CustomData_get_layer(&dm->polyData, CD_ORIGINDEX);
-
- /*CDDM hack*/
- edgeFlags = ccgdm->edgeFlags = MEM_callocN(sizeof(short) * totedge, "edgeFlags");
- faceFlags = ccgdm->faceFlags = MEM_callocN(sizeof(DMFlagMat) * totface, "faceFlags");
vertOrigIndex = DM_get_vert_data_layer(&ccgdm->dm, CD_ORIGINDEX);
edgeOrigIndex = DM_get_edge_data_layer(&ccgdm->dm, CD_ORIGINDEX);
@@ -3902,13 +4136,12 @@ static CCGDerivedMesh *getCCGDerivedMesh(CCGSubSurf *ss,
#ifdef USE_DYNSIZE
int loopidx[numVerts], vertidx[numVerts];
#endif
-
w = get_ss_weights(&wtable, gridCuts, numVerts);
ccgdm->faceMap[index].startVert = vertNum;
ccgdm->faceMap[index].startEdge = edgeNum;
ccgdm->faceMap[index].startFace = faceNum;
-
+
faceFlags->flag = mpoly ? mpoly[origIndex].flag : 0;
faceFlags->mat_nr = mpoly ? mpoly[origIndex].mat_nr : 0;
faceFlags++;
@@ -3932,7 +4165,6 @@ static CCGDerivedMesh *getCCGDerivedMesh(CCGSubSurf *ss,
CCGVert *v = ccgSubSurf_getFaceVert(f, s);
vertidx[s] = GET_INT_FROM_POINTER(ccgSubSurf_getVertVertHandle(v));
}
-
/*I think this is for interpolating the center vert?*/
w2 = w; // + numVerts*(g2_wid-1) * (g2_wid-1); //numVerts*((g2_wid-1) * g2_wid+g2_wid-1);
@@ -4003,7 +4235,7 @@ static CCGDerivedMesh *getCCGDerivedMesh(CCGSubSurf *ss,
CustomData_interp(&dm->loopData, &ccgdm->dm.loopData,
loopidx, w2, NULL, numVerts, loopindex2);
loopindex2++;
-
+
w2 = w + s * numVerts * g2_wid * g2_wid + ((y) * g2_wid + (x + 1)) * numVerts;
CustomData_interp(&dm->loopData, &ccgdm->dm.loopData,
loopidx, w2, NULL, numVerts, loopindex2);
@@ -4016,7 +4248,7 @@ static CCGDerivedMesh *getCCGDerivedMesh(CCGSubSurf *ss,
ccg_loops_to_corners(&ccgdm->dm.faceData, &ccgdm->dm.loopData,
&ccgdm->dm.polyData, loopindex2 - 4, faceNum, faceNum,
numTex, numCol, hasPCol, hasOrigSpace);
-
+
/*set original index data*/
if (faceOrigIndex) {
/* reference the index in 'polyOrigIndex' */
@@ -4123,26 +4355,149 @@ static CCGDerivedMesh *getCCGDerivedMesh(CCGSubSurf *ss,
vertNum++;
}
- ccgdm->dm.numVertData = vertNum;
- ccgdm->dm.numEdgeData = edgeNum;
- ccgdm->dm.numTessFaceData = faceNum;
- ccgdm->dm.numLoopData = loopindex2;
- ccgdm->dm.numPolyData = faceNum;
-
- /* All tessellated CD layers were updated! */
- ccgdm->dm.dirty &= ~DM_DIRTY_TESS_CDLAYERS;
-
#ifndef USE_DYNSIZE
BLI_array_free(vertidx);
BLI_array_free(loopidx);
#endif
free_ss_weights(&wtable);
+ BLI_assert(vertNum == ccgSubSurf_getNumFinalVerts(ss));
+ BLI_assert(edgeNum == ccgSubSurf_getNumFinalEdges(ss));
+ BLI_assert(loopindex2 == ccgSubSurf_getNumFinalFaces(ss) * 4);
+ BLI_assert(faceNum == ccgSubSurf_getNumFinalFaces(ss));
+
+}
+
+/* Fill in only geometry arrays needed for the GPU tessellation. */
+static void set_ccgdm_gpu_geometry(CCGDerivedMesh *ccgdm,
+ CCGSubSurf *ss,
+ DerivedMesh *dm)
+{
+ const int totface = ccgSubSurf_getNumFaces(ss);
+ MPoly *mpoly = CustomData_get_layer(&dm->polyData, CD_MPOLY);
+ int index;
+ DMFlagMat *faceFlags = ccgdm->faceFlags;
+
+ for (index = 0; index < totface; index++) {
+ faceFlags->flag = mpoly ? mpoly[index].flag : 0;
+ faceFlags->mat_nr = mpoly ? mpoly[index].mat_nr : 0;
+ faceFlags++;
+ }
+
+ /* TODO(sergey): Fill in edge flags. */
+}
+
+static CCGDerivedMesh *getCCGDerivedMesh(CCGSubSurf *ss,
+ int drawInteriorEdges,
+ int useSubsurfUv,
+ DerivedMesh *dm,
+ bool use_gpu_backend)
+{
+ const int totedge = ccgSubSurf_getNumEdges(ss);
+ const int totface = ccgSubSurf_getNumFaces(ss);
+ CCGDerivedMesh *ccgdm = MEM_callocN(sizeof(*ccgdm), "ccgdm");
+ int numTex, numCol;
+ int hasPCol, hasOrigSpace;
+
+ if (use_gpu_backend == false) {
+ DM_from_template(&ccgdm->dm, dm, DM_TYPE_CCGDM,
+ ccgSubSurf_getNumFinalVerts(ss),
+ ccgSubSurf_getNumFinalEdges(ss),
+ ccgSubSurf_getNumFinalFaces(ss),
+ ccgSubSurf_getNumFinalFaces(ss) * 4,
+ ccgSubSurf_getNumFinalFaces(ss));
+
+ numTex = CustomData_number_of_layers(&ccgdm->dm.loopData,
+ CD_MLOOPUV);
+ numCol = CustomData_number_of_layers(&ccgdm->dm.loopData,
+ CD_MLOOPCOL);
+ hasPCol = CustomData_has_layer(&ccgdm->dm.loopData,
+ CD_PREVIEW_MLOOPCOL);
+ hasOrigSpace = CustomData_has_layer(&ccgdm->dm.loopData,
+ CD_ORIGSPACE_MLOOP);
+
+ if (
+ (numTex && CustomData_number_of_layers(&ccgdm->dm.faceData,
+ CD_MTFACE) != numTex) ||
+ (numCol && CustomData_number_of_layers(&ccgdm->dm.faceData,
+ CD_MCOL) != numCol) ||
+ (hasPCol && !CustomData_has_layer(&ccgdm->dm.faceData,
+ CD_PREVIEW_MCOL)) ||
+ (hasOrigSpace && !CustomData_has_layer(&ccgdm->dm.faceData,
+ CD_ORIGSPACE)) )
+ {
+ CustomData_from_bmeshpoly(&ccgdm->dm.faceData,
+ &ccgdm->dm.polyData,
+ &ccgdm->dm.loopData,
+ ccgSubSurf_getNumFinalFaces(ss));
+ }
+
+ CustomData_free_layer_active(&ccgdm->dm.polyData, CD_NORMAL,
+ ccgdm->dm.numPolyData);
+
+ ccgdm->reverseFaceMap =
+ MEM_callocN(sizeof(int) * ccgSubSurf_getNumFinalFaces(ss),
+ "reverseFaceMap");
+
+ create_ccgdm_maps(ccgdm, ss);
+ }
+ else {
+ DM_from_template(&ccgdm->dm, dm, DM_TYPE_CCGDM,
+ 0, 0, 0, 0, dm->getNumPolys(dm));
+ CustomData_copy_data(&dm->polyData,
+ &ccgdm->dm.polyData,
+ 0, 0, dm->getNumPolys(dm));
+ }
+
+ set_default_ccgdm_callbacks(ccgdm);
+
+ ccgdm->ss = ss;
+ ccgdm->drawInteriorEdges = drawInteriorEdges;
+ ccgdm->useSubsurfUv = useSubsurfUv;
+ ccgdm->useGpuBackend = use_gpu_backend;
+
+ /* CDDM hack. */
+ ccgdm->edgeFlags = MEM_callocN(sizeof(short) * totedge, "edgeFlags");
+ ccgdm->faceFlags = MEM_callocN(sizeof(DMFlagMat) * totface, "faceFlags");
+
+ if (use_gpu_backend == false) {
+ set_ccgdm_all_geometry(ccgdm, ss, dm, useSubsurfUv != 0);
+ }
+ else {
+ set_ccgdm_gpu_geometry(ccgdm, ss, dm);
+ }
+
+ ccgdm->dm.numVertData = ccgSubSurf_getNumFinalVerts(ss);
+ ccgdm->dm.numEdgeData = ccgSubSurf_getNumFinalEdges(ss);
+ ccgdm->dm.numTessFaceData = ccgSubSurf_getNumFinalFaces(ss);
+ ccgdm->dm.numLoopData = ccgdm->dm.numTessFaceData * 4;
+ ccgdm->dm.numPolyData = ccgdm->dm.numTessFaceData;
+
+ /* All tessellated CD layers were updated! */
+ ccgdm->dm.dirty &= ~DM_DIRTY_TESS_CDLAYERS;
+
return ccgdm;
}
/***/
+static bool subsurf_use_gpu_backend(SubsurfFlags flags)
+{
+#ifdef WITH_OPENSUBDIV
+ /* Use the GPU backend if subsurf is the last modifier in the stack
+ * and the user chose to use any of the OSD compute devices,
+ * but also check if the GPU has all the needed features.
+ */
+ return
+ (flags & SUBSURF_USE_GPU_BACKEND) != 0 &&
+ (U.opensubdiv_compute_type != USER_OPENSUBDIV_COMPUTE_NONE) &&
+ (openSubdiv_supportGPUDisplay());
+#else
+ (void)flags;
+ return false;
+#endif
+}
+
struct DerivedMesh *subsurf_make_derived_from_derived(
struct DerivedMesh *dm,
struct SubsurfModifierData *smd,
@@ -4154,18 +4509,28 @@ struct DerivedMesh *subsurf_make_derived_from_derived(
int useSubsurfUv = smd->flags & eSubsurfModifierFlag_SubsurfUv;
int drawInteriorEdges = !(smd->flags & eSubsurfModifierFlag_ControlEdges);
CCGDerivedMesh *result;
+ bool use_gpu_backend = subsurf_use_gpu_backend(flags);
/* note: editmode calculation can only run once per
* modifier stack evaluation (uses freed cache) [#36299] */
if (flags & SUBSURF_FOR_EDIT_MODE) {
int levels = (smd->modifier.scene) ? get_render_subsurf_level(&smd->modifier.scene->r, smd->levels, false) : smd->levels;
+ /* TODO(sergey): Same as emCache below. */
+ if ((flags & SUBSURF_IN_EDIT_MODE) && smd->mCache) {
+ ccgSubSurf_free(smd->mCache);
+ smd->mCache = NULL;
+ }
+
smd->emCache = _getSubSurf(smd->emCache, levels, 3, useSimple | useAging | CCG_CALC_NORMALS);
- ss_sync_from_derivedmesh(smd->emCache, dm, vertCos, useSimple);
+#ifdef WITH_OPENSUBDIV
+ ccgSubSurf_setSkipGrids(smd->emCache, use_gpu_backend);
+#endif
+ ss_sync_from_derivedmesh(smd->emCache, dm, vertCos, useSimple);
result = getCCGDerivedMesh(smd->emCache,
drawInteriorEdges,
- useSubsurfUv, dm);
+ useSubsurfUv, dm, use_gpu_backend);
}
else if (flags & SUBSURF_USE_RENDER_PARAMS) {
/* Do not use cache in render mode. */
@@ -4180,7 +4545,7 @@ struct DerivedMesh *subsurf_make_derived_from_derived(
ss_sync_from_derivedmesh(ss, dm, vertCos, useSimple);
result = getCCGDerivedMesh(ss,
- drawInteriorEdges, useSubsurfUv, dm);
+ drawInteriorEdges, useSubsurfUv, dm, false);
result->freeSS = 1;
}
@@ -4212,23 +4577,41 @@ struct DerivedMesh *subsurf_make_derived_from_derived(
result = getCCGDerivedMesh(smd->mCache,
drawInteriorEdges,
- useSubsurfUv, dm);
+ useSubsurfUv, dm, false);
}
else {
CCGFlags ccg_flags = useSimple | CCG_USE_ARENA | CCG_CALC_NORMALS;
-
+ CCGSubSurf *prevSS = NULL;
+
if (smd->mCache && (flags & SUBSURF_IS_FINAL_CALC)) {
+#ifdef WITH_OPENSUBDIV
+ /* With OpenSubdiv enabled we always try to re-use the previous
+ * subsurf structure in order to save computation time, since
+ * re-creation is rather a complicated business.
+ *
+ * TODO(sergey): There was a good reason why the final calculation
+ * used to free the entire cached subsurf structure. That reason
+ * still needs to be investigated, to be sure we don't have
+ * regressions here.
+ */
+ prevSS = smd->mCache;
+#else
ccgSubSurf_free(smd->mCache);
smd->mCache = NULL;
+#endif
}
+
if (flags & SUBSURF_ALLOC_PAINT_MASK)
ccg_flags |= CCG_ALLOC_MASK;
- ss = _getSubSurf(NULL, levels, 3, ccg_flags);
+ ss = _getSubSurf(prevSS, levels, 3, ccg_flags);
+#ifdef WITH_OPENSUBDIV
+ ccgSubSurf_setSkipGrids(ss, use_gpu_backend);
+#endif
ss_sync_from_derivedmesh(ss, dm, vertCos, useSimple);
- result = getCCGDerivedMesh(ss, drawInteriorEdges, useSubsurfUv, dm);
+ result = getCCGDerivedMesh(ss, drawInteriorEdges, useSubsurfUv, dm, use_gpu_backend);
if (flags & SUBSURF_IS_FINAL_CALC)
smd->mCache = ss;
diff --git a/source/blender/gpu/CMakeLists.txt b/source/blender/gpu/CMakeLists.txt
index 23a2b77d1e7..328623f884f 100644
--- a/source/blender/gpu/CMakeLists.txt
+++ b/source/blender/gpu/CMakeLists.txt
@@ -92,6 +92,7 @@ set(SRC
intern/gpu_private.h
)
+data_to_c_simple(shaders/gpu_shader_geometry.glsl SRC)
data_to_c_simple(shaders/gpu_program_smoke_frag.glsl SRC)
data_to_c_simple(shaders/gpu_program_smoke_color_frag.glsl SRC)
data_to_c_simple(shaders/gpu_shader_material.glsl SRC)
@@ -127,6 +128,9 @@ if(WITH_IMAGE_DDS)
add_definitions(-DWITH_DDS)
endif()
+if(WITH_OPENSUBDIV)
+ add_definitions(-DWITH_OPENSUBDIV)
+endif()
blender_add_lib(bf_gpu "${SRC}" "${INC}" "${INC_SYS}")
diff --git a/source/blender/gpu/GPU_draw.h b/source/blender/gpu/GPU_draw.h
index 26db4058d34..0992f8e9d21 100644
--- a/source/blender/gpu/GPU_draw.h
+++ b/source/blender/gpu/GPU_draw.h
@@ -148,6 +148,11 @@ void GPU_create_smoke(struct SmokeModifierData *smd, int highres);
/* Delayed free of OpenGL buffers by main thread */
void GPU_free_unused_buffers(void);
+#ifdef WITH_OPENSUBDIV
+struct DerivedMesh;
+void GPU_draw_update_fvar_offset(struct DerivedMesh *dm);
+#endif
+
#ifdef __cplusplus
}
#endif
diff --git a/source/blender/gpu/GPU_material.h b/source/blender/gpu/GPU_material.h
index 5995366c095..dd08ed83e5a 100644
--- a/source/blender/gpu/GPU_material.h
+++ b/source/blender/gpu/GPU_material.h
@@ -206,8 +206,8 @@ GPUBlendMode GPU_material_alpha_blend(GPUMaterial *material, float obcol[4]);
/* High level functions to create and use GPU materials */
GPUMaterial *GPU_material_world(struct Scene *scene, struct World *wo);
-GPUMaterial *GPU_material_from_blender(struct Scene *scene, struct Material *ma);
-GPUMaterial *GPU_material_matcap(struct Scene *scene, struct Material *ma);
+GPUMaterial *GPU_material_from_blender(struct Scene *scene, struct Material *ma, bool use_opensubdiv);
+GPUMaterial *GPU_material_matcap(struct Scene *scene, struct Material *ma, bool use_opensubdiv);
void GPU_material_free(struct ListBase *gpumaterial);
void GPU_materials_free(void);
@@ -322,6 +322,12 @@ typedef struct GPUParticleInfo
float angular_velocity[3];
} GPUParticleInfo;
+#ifdef WITH_OPENSUBDIV
+struct DerivedMesh;
+void GPU_material_update_fvar_offset(GPUMaterial *gpu_material,
+ struct DerivedMesh *dm);
+#endif
+
#ifdef __cplusplus
}
#endif
diff --git a/source/blender/gpu/SConscript b/source/blender/gpu/SConscript
index ff5fb42c021..880a6d14e26 100644
--- a/source/blender/gpu/SConscript
+++ b/source/blender/gpu/SConscript
@@ -60,9 +60,13 @@ if env['WITH_BF_SMOKE']:
if env['WITH_BF_DDS']:
defs.append('WITH_DDS')
+if env['WITH_BF_OPENSUBDIV']:
+ defs.append('WITH_OPENSUBDIV')
+
# generated data files
import os
sources.extend((
+ os.path.join(env['DATA_SOURCES'], "gpu_shader_geometry.glsl.c"),
os.path.join(env['DATA_SOURCES'], "gpu_program_smoke_frag.glsl.c"),
os.path.join(env['DATA_SOURCES'], "gpu_program_smoke_color_frag.glsl.c"),
os.path.join(env['DATA_SOURCES'], "gpu_shader_simple_frag.glsl.c"),
diff --git a/source/blender/gpu/intern/gpu_codegen.c b/source/blender/gpu/intern/gpu_codegen.c
index 335342c7123..68b9e3845f7 100644
--- a/source/blender/gpu/intern/gpu_codegen.c
+++ b/source/blender/gpu/intern/gpu_codegen.c
@@ -56,7 +56,7 @@
extern char datatoc_gpu_shader_material_glsl[];
extern char datatoc_gpu_shader_vertex_glsl[];
extern char datatoc_gpu_shader_vertex_world_glsl[];
-
+extern char datatoc_gpu_shader_geometry_glsl[];
static char *glsl_material_library = NULL;
@@ -531,8 +531,19 @@ static int codegen_print_uniforms_functions(DynStr *ds, ListBase *nodes)
}
}
else if (input->source == GPU_SOURCE_ATTRIB && input->attribfirst) {
+#ifdef WITH_OPENSUBDIV
+ bool skip_opensubdiv = input->attribtype == CD_TANGENT;
+ if (skip_opensubdiv) {
+ BLI_dynstr_appendf(ds, "#ifndef USE_OPENSUBDIV\n");
+ }
+#endif
BLI_dynstr_appendf(ds, "varying %s var%d;\n",
GPU_DATATYPE_STR[input->type], input->attribid);
+#ifdef WITH_OPENSUBDIV
+ if (skip_opensubdiv) {
+ BLI_dynstr_appendf(ds, "#endif\n");
+ }
+#endif
}
}
}
@@ -633,6 +644,12 @@ static char *code_generate_fragment(ListBase *nodes, GPUOutput *output)
char *code;
int builtins;
+#ifdef WITH_OPENSUBDIV
+ GPUNode *node;
+ GPUInput *input;
+#endif
+
+
#if 0
BLI_dynstr_append(ds, FUNCTION_PROTOTYPES);
#endif
@@ -650,7 +667,35 @@ static char *code_generate_fragment(ListBase *nodes, GPUOutput *output)
if (builtins & GPU_VIEW_NORMAL)
BLI_dynstr_append(ds, "\tvec3 facingnormal = (gl_FrontFacing)? varnormal: -varnormal;\n");
-
+
+ /* Calculate tangent space. */
+#ifdef WITH_OPENSUBDIV
+ {
+ bool has_tangent = false;
+ for (node = nodes->first; node; node = node->next) {
+ for (input = node->inputs.first; input; input = input->next) {
+ if (input->source == GPU_SOURCE_ATTRIB && input->attribfirst) {
+ if (input->attribtype == CD_TANGENT) {
+ BLI_dynstr_appendf(ds, "#ifdef USE_OPENSUBDIV\n");
+ BLI_dynstr_appendf(ds, "\t%s var%d;\n",
+ GPU_DATATYPE_STR[input->type],
+ input->attribid);
+ if (has_tangent == false) {
+ BLI_dynstr_appendf(ds, "\tvec3 Q1 = dFdx(inpt.v.position.xyz);\n");
+ BLI_dynstr_appendf(ds, "\tvec3 Q2 = dFdy(inpt.v.position.xyz);\n");
+ BLI_dynstr_appendf(ds, "\tvec2 st1 = dFdx(inpt.v.uv);\n");
+ BLI_dynstr_appendf(ds, "\tvec2 st2 = dFdy(inpt.v.uv);\n");
+ BLI_dynstr_appendf(ds, "\tvec3 T = normalize(Q1 * st2.t - Q2 * st1.t);\n");
+ }
+ BLI_dynstr_appendf(ds, "\tvar%d = vec4(T, 1.0);\n", input->attribid);
+ BLI_dynstr_appendf(ds, "#endif\n");
+ }
+ }
+ }
+ }
+ }
+#endif
+
codegen_declare_tmps(ds, nodes);
codegen_call_functions(ds, nodes, output);
@@ -678,10 +723,21 @@ static char *code_generate_vertex(ListBase *nodes, const GPUMatType type)
for (node = nodes->first; node; node = node->next) {
for (input = node->inputs.first; input; input = input->next) {
if (input->source == GPU_SOURCE_ATTRIB && input->attribfirst) {
+#ifdef WITH_OPENSUBDIV
+ bool skip_opensubdiv = ELEM(input->attribtype, CD_MTFACE, CD_TANGENT);
+ if (skip_opensubdiv) {
+ BLI_dynstr_appendf(ds, "#ifndef USE_OPENSUBDIV\n");
+ }
+#endif
BLI_dynstr_appendf(ds, "attribute %s att%d;\n",
GPU_DATATYPE_STR[input->type], input->attribid);
BLI_dynstr_appendf(ds, "varying %s var%d;\n",
GPU_DATATYPE_STR[input->type], input->attribid);
+#ifdef WITH_OPENSUBDIV
+ if (skip_opensubdiv) {
+ BLI_dynstr_appendf(ds, "#endif\n");
+ }
+#endif
}
}
}
@@ -706,11 +762,29 @@ static char *code_generate_vertex(ListBase *nodes, const GPUMatType type)
for (input = node->inputs.first; input; input = input->next)
if (input->source == GPU_SOURCE_ATTRIB && input->attribfirst) {
if (input->attribtype == CD_TANGENT) { /* silly exception */
+#ifdef WITH_OPENSUBDIV
+ BLI_dynstr_appendf(ds, "#ifndef USE_OPENSUBDIV\n");
+#endif
BLI_dynstr_appendf(ds, "\tvar%d.xyz = normalize(gl_NormalMatrix * att%d.xyz);\n", input->attribid, input->attribid);
BLI_dynstr_appendf(ds, "\tvar%d.w = att%d.w;\n", input->attribid, input->attribid);
+#ifdef WITH_OPENSUBDIV
+ BLI_dynstr_appendf(ds, "#endif\n");
+#endif
}
- else
+ else {
+#ifdef WITH_OPENSUBDIV
+ bool is_mtface = input->attribtype == CD_MTFACE;
+ if (is_mtface) {
+ BLI_dynstr_appendf(ds, "#ifndef USE_OPENSUBDIV\n");
+ }
+#endif
BLI_dynstr_appendf(ds, "\tvar%d = att%d;\n", input->attribid, input->attribid);
+#ifdef WITH_OPENSUBDIV
+ if (is_mtface) {
+ BLI_dynstr_appendf(ds, "#endif\n");
+ }
+#endif
+ }
}
/* unfortunately special handling is needed here because we abuse gl_Color/gl_SecondaryColor flat shading */
else if (input->source == GPU_SOURCE_OPENGL_BUILTIN) {
@@ -738,6 +812,61 @@ static char *code_generate_vertex(ListBase *nodes, const GPUMatType type)
return code;
}
+static char *code_generate_geometry(ListBase *nodes, bool use_opensubdiv)
+{
+#ifdef WITH_OPENSUBDIV
+ if (use_opensubdiv) {
+ DynStr *ds = BLI_dynstr_new();
+ GPUNode *node;
+ GPUInput *input;
+ char *code;
+
+ /* Generate varying declarations. */
+ for (node = nodes->first; node; node = node->next) {
+ for (input = node->inputs.first; input; input = input->next) {
+ if (input->source == GPU_SOURCE_ATTRIB && input->attribfirst) {
+ if (input->attribtype == CD_MTFACE) {
+ BLI_dynstr_appendf(ds, "varying %s var%d;\n",
+ GPU_DATATYPE_STR[input->type],
+ input->attribid);
+ BLI_dynstr_appendf(ds, "uniform int fvar%d_offset;\n",
+ input->attribid);
+ }
+ }
+ }
+ }
+
+ BLI_dynstr_append(ds, datatoc_gpu_shader_geometry_glsl);
+
+ /* Generate varying assignments. */
+ for (node = nodes->first; node; node = node->next) {
+ for (input = node->inputs.first; input; input = input->next) {
+ if (input->source == GPU_SOURCE_ATTRIB && input->attribfirst) {
+ if (input->attribtype == CD_MTFACE) {
+ BLI_dynstr_appendf(ds,
+ "\tINTERP_FACE_VARYING_2(var%d, "
+ "fvar%d_offset, st);\n",
+ input->attribid,
+ input->attribid);
+ }
+ }
+ }
+ }
+
+ BLI_dynstr_append(ds, "}\n\n");
+ code = BLI_dynstr_get_cstring(ds);
+ BLI_dynstr_free(ds);
+
+ //if (G.debug & G_DEBUG) printf("%s\n", code);
+
+ return code;
+ }
+#else
+ UNUSED_VARS(nodes, use_opensubdiv);
+#endif
+ return NULL;
+}
+
void GPU_code_generate_glsl_lib(void)
{
DynStr *ds;
@@ -786,8 +915,28 @@ static void gpu_nodes_extract_dynamic_inputs(GPUPass *pass, ListBase *nodes)
/* attributes don't need to be bound, they already have
* an id that the drawing functions will use */
- if (input->source == GPU_SOURCE_ATTRIB ||
- input->source == GPU_SOURCE_BUILTIN ||
+ if (input->source == GPU_SOURCE_ATTRIB) {
+#ifdef WITH_OPENSUBDIV
+ /* We do need MTFace attributes for later, so we can
+ * update the face-varying variables' offset in the texture
+ * buffer for proper sampling from the shader.
+ *
+ * We don't do anything with the attribute itself, we
+ * only use it to learn which uniform name is to be
+ * updated.
+ *
+ * TODO(sergey): We could add an extra uniform input
+ * for the offset, which would be purely internal and
+ * would avoid having such exceptions.
+ */
+ if (input->attribtype != CD_MTFACE) {
+ continue;
+ }
+#else
+ continue;
+#endif
+ }
+ if (input->source == GPU_SOURCE_BUILTIN ||
input->source == GPU_SOURCE_OPENGL_BUILTIN)
{
continue;
@@ -811,6 +960,14 @@ static void gpu_nodes_extract_dynamic_inputs(GPUPass *pass, ListBase *nodes)
if (extract)
input->shaderloc = GPU_shader_get_uniform(shader, input->shadername);
+#ifdef WITH_OPENSUBDIV
+ if (input->source == GPU_SOURCE_ATTRIB &&
+ input->attribtype == CD_MTFACE)
+ {
+ extract = 1;
+ }
+#endif
+
/* extract nodes */
if (extract) {
BLI_remlink(&node->inputs, input);
@@ -1432,11 +1589,11 @@ static void gpu_nodes_prune(ListBase *nodes, GPUNodeLink *outlink)
GPUPass *GPU_generate_pass(ListBase *nodes, GPUNodeLink *outlink,
GPUVertexAttribs *attribs, int *builtins,
- const GPUMatType type, const char *UNUSED(name))
+ const GPUMatType type, const char *UNUSED(name), const bool use_opensubdiv)
{
GPUShader *shader;
GPUPass *pass;
- char *vertexcode, *fragmentcode;
+ char *vertexcode, *geometrycode, *fragmentcode;
#if 0
if (!FUNCTION_LIB) {
@@ -1454,7 +1611,8 @@ GPUPass *GPU_generate_pass(ListBase *nodes, GPUNodeLink *outlink,
/* generate code and compile with opengl */
fragmentcode = code_generate_fragment(nodes, outlink->output);
vertexcode = code_generate_vertex(nodes, type);
- shader = GPU_shader_create(vertexcode, fragmentcode, NULL, glsl_material_library, NULL, 0, 0, 0);
+ geometrycode = code_generate_geometry(nodes, use_opensubdiv);
+ shader = GPU_shader_create(vertexcode, fragmentcode, geometrycode, glsl_material_library, NULL, 0, 0, 0);
/* failed? */
if (!shader) {
@@ -1474,6 +1632,7 @@ GPUPass *GPU_generate_pass(ListBase *nodes, GPUNodeLink *outlink,
pass->output = outlink->output;
pass->shader = shader;
pass->fragmentcode = fragmentcode;
+ pass->geometrycode = geometrycode;
pass->vertexcode = vertexcode;
pass->libcode = glsl_material_library;
@@ -1490,8 +1649,9 @@ void GPU_pass_free(GPUPass *pass)
gpu_inputs_free(&pass->inputs);
if (pass->fragmentcode)
MEM_freeN(pass->fragmentcode);
+ if (pass->geometrycode)
+ MEM_freeN(pass->geometrycode);
if (pass->vertexcode)
MEM_freeN(pass->vertexcode);
MEM_freeN(pass);
}
-
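
(Orientation note, not from the patch.) For a node tree with a single CD_MTFACE attribute, say attribid 0, the geometry shader assembled by code_generate_geometry() above amounts to roughly the pieces below wrapped around the shared gpu_shader_geometry.glsl library; the concrete varying type comes from GPU_DATATYPE_STR[input->type], so vec2 here is only an assumption.

/* Illustrative only: approximate output of code_generate_geometry() for
 * a single CD_MTFACE attribute with attribid 0. */
static const char example_geometry_prologue[] =
	"varying vec2 var0;\n"          /* declaration pass */
	"uniform int fvar0_offset;\n";  /* face-varying offset uniform */
	/* ...datatoc_gpu_shader_geometry_glsl is appended in between... */
static const char example_geometry_epilogue[] =
	"\tINTERP_FACE_VARYING_2(var0, fvar0_offset, st);\n"
	"}\n\n";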
diff --git a/source/blender/gpu/intern/gpu_codegen.h b/source/blender/gpu/intern/gpu_codegen.h
index c6ed2e3f837..5aa187014ba 100644
--- a/source/blender/gpu/intern/gpu_codegen.h
+++ b/source/blender/gpu/intern/gpu_codegen.h
@@ -161,6 +161,7 @@ struct GPUPass {
struct GPUOutput *output;
struct GPUShader *shader;
char *fragmentcode;
+ char *geometrycode;
char *vertexcode;
const char *libcode;
};
@@ -170,7 +171,7 @@ typedef struct GPUPass GPUPass;
GPUPass *GPU_generate_pass(ListBase *nodes, struct GPUNodeLink *outlink,
struct GPUVertexAttribs *attribs, int *builtin,
- const GPUMatType type, const char *name);
+ const GPUMatType type, const char *name, const bool use_opensubdiv);
struct GPUShader *GPU_pass_shader(GPUPass *pass);
diff --git a/source/blender/gpu/intern/gpu_draw.c b/source/blender/gpu/intern/gpu_draw.c
index 04441fc1b20..af8b2a0f806 100644
--- a/source/blender/gpu/intern/gpu_draw.c
+++ b/source/blender/gpu/intern/gpu_draw.c
@@ -71,6 +71,7 @@
#include "BKE_node.h"
#include "BKE_object.h"
#include "BKE_scene.h"
+#include "BKE_subsurf.h"
#include "BKE_DerivedMesh.h"
#include "GPU_buffers.h"
@@ -82,6 +83,13 @@
#include "smoke_API.h"
+#ifdef WITH_OPENSUBDIV
+# include "DNA_mesh_types.h"
+# include "BKE_editmesh.h"
+
+# include "gpu_codegen.h"
+#endif
+
extern Material defmaterial; /* from material.c */
/* Text Rendering */
@@ -1357,7 +1365,7 @@ void GPU_free_images_old(void)
{
Image *ima;
static int lasttime = 0;
- int ctime = PIL_check_seconds_timer_i();
+ int ctime = (int)PIL_check_seconds_timer();
/*
* Run garbage collector once for every collecting period of time
@@ -1433,6 +1441,7 @@ static struct GPUMaterialState {
int lastmatnr, lastretval;
GPUBlendMode lastalphablend;
+ bool is_opensubdiv;
} GMS = {NULL};
/* fixed function material, alpha handed by caller */
@@ -1519,6 +1528,27 @@ void GPU_begin_object_materials(View3D *v3d, RegionView3D *rv3d, Scene *scene, O
const bool gamma = BKE_scene_check_color_management_enabled(scene);
const bool new_shading_nodes = BKE_scene_use_new_shading_nodes(scene);
const bool use_matcap = (v3d->flag2 & V3D_SHOW_SOLID_MATCAP) != 0; /* assumes v3d->defmaterial->preview is set */
+ bool use_opensubdiv = false;
+
+#ifdef WITH_OPENSUBDIV
+ {
+ DerivedMesh *derivedFinal = NULL;
+ Mesh *me = ob->data;
+ BMEditMesh *em = me->edit_btmesh;
+
+ if (em != NULL) {
+ derivedFinal = em->derivedFinal;
+ }
+ else {
+ derivedFinal = ob->derivedFinal;
+ }
+
+ if (derivedFinal != NULL && derivedFinal->type == DM_TYPE_CCGDM) {
+ CCGDerivedMesh *ccgdm = (CCGDerivedMesh *) derivedFinal;
+ use_opensubdiv = ccgdm->useGpuBackend;
+ }
+ }
+#endif
#ifdef WITH_GAMEENGINE
if (rv3d->rflag & RV3D_IS_GAME_ENGINE) {
@@ -1541,6 +1571,7 @@ void GPU_begin_object_materials(View3D *v3d, RegionView3D *rv3d, Scene *scene, O
GMS.gob = ob;
GMS.gscene = scene;
+ GMS.is_opensubdiv = use_opensubdiv;
GMS.totmat = use_matcap ? 1 : ob->totcol + 1; /* materials start from 1, default material is 0 */
GMS.glay = (v3d->localvd)? v3d->localvd->lay: v3d->lay; /* keep lamps visible in local view */
GMS.gscenelock = (v3d->scenelock != 0);
@@ -1572,7 +1603,7 @@ void GPU_begin_object_materials(View3D *v3d, RegionView3D *rv3d, Scene *scene, O
/* viewport material, setup in space_view3d, defaults to matcap using ma->preview now */
if (use_matcap) {
GMS.gmatbuf[0] = v3d->defmaterial;
- GPU_material_matcap(scene, v3d->defmaterial);
+ GPU_material_matcap(scene, v3d->defmaterial, use_opensubdiv);
/* do material 1 too, for displists! */
memcpy(&GMS.matbuf[1], &GMS.matbuf[0], sizeof(GPUMaterialFixed));
@@ -1590,7 +1621,7 @@ void GPU_begin_object_materials(View3D *v3d, RegionView3D *rv3d, Scene *scene, O
if (glsl) {
GMS.gmatbuf[0] = &defmaterial;
- GPU_material_from_blender(GMS.gscene, &defmaterial);
+ GPU_material_from_blender(GMS.gscene, &defmaterial, GMS.is_opensubdiv);
}
GMS.alphablend[0] = GPU_BLEND_SOLID;
@@ -1604,7 +1635,7 @@ void GPU_begin_object_materials(View3D *v3d, RegionView3D *rv3d, Scene *scene, O
if (ma == NULL) ma = &defmaterial;
/* create glsl material if requested */
- gpumat = glsl? GPU_material_from_blender(GMS.gscene, ma): NULL;
+ gpumat = glsl? GPU_material_from_blender(GMS.gscene, ma, GMS.is_opensubdiv): NULL;
if (gpumat) {
/* do glsl only if creating it succeed, else fallback */
@@ -1708,7 +1739,7 @@ int GPU_enable_material(int nr, void *attribs)
/* unbind glsl material */
if (GMS.gboundmat) {
if (GMS.is_alpha_pass) glDepthMask(0);
- GPU_material_unbind(GPU_material_from_blender(GMS.gscene, GMS.gboundmat));
+ GPU_material_unbind(GPU_material_from_blender(GMS.gscene, GMS.gboundmat, GMS.is_opensubdiv));
GMS.gboundmat = NULL;
}
@@ -1735,7 +1766,7 @@ int GPU_enable_material(int nr, void *attribs)
float auto_bump_scale;
- gpumat = GPU_material_from_blender(GMS.gscene, mat);
+ gpumat = GPU_material_from_blender(GMS.gscene, mat, GMS.is_opensubdiv);
GPU_material_vertex_attributes(gpumat, gattribs);
if (GMS.dob)
@@ -1802,7 +1833,7 @@ void GPU_disable_material(void)
glDisable(GL_CULL_FACE);
if (GMS.is_alpha_pass) glDepthMask(0);
- GPU_material_unbind(GPU_material_from_blender(GMS.gscene, GMS.gboundmat));
+ GPU_material_unbind(GPU_material_from_blender(GMS.gscene, GMS.gboundmat, GMS.is_opensubdiv));
GMS.gboundmat = NULL;
}
@@ -2108,3 +2139,31 @@ void GPU_state_init(void)
gpu_multisample(false);
}
+#ifdef WITH_OPENSUBDIV
+/* Update the face-varying variables' offset, which might differ
+ * from mesh to mesh even when the same material is shared.
+ */
+void GPU_draw_update_fvar_offset(DerivedMesh *dm)
+{
+ int i;
+
+ /* Sanity check to be sure we only do this for OpenSubdiv draw. */
+ BLI_assert(dm->type == DM_TYPE_CCGDM);
+ BLI_assert(GMS.is_opensubdiv);
+
+ for (i = 0; i < GMS.totmat; ++i) {
+ Material *material = GMS.gmatbuf[i];
+ GPUMaterial *gpu_material;
+
+ if (material == NULL) {
+ continue;
+ }
+
+ gpu_material = GPU_material_from_blender(GMS.gscene,
+ material,
+ GMS.is_opensubdiv);
+
+ GPU_material_update_fvar_offset(gpu_material, dm);
+ }
+}
+#endif
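
(Sketch, with assumptions.) The intended call site for GPU_draw_update_fvar_offset() is the OpenSubdiv draw path in subsurf_ccg.c, which is part of this commit but not of this excerpt, and the declaration is assumed to be the one added to GPU_draw.h here. A hypothetical hook on that path, run after GPU_begin_object_materials() has set up the GLSL materials, would look roughly like this:

#include "BKE_DerivedMesh.h"
#include "GPU_draw.h"

/* Hypothetical hook: make sure every bound GLSL material samples the
 * face-varying texture buffer at this mesh's offsets before drawing. */
static void example_before_osd_glsl_draw(DerivedMesh *dm)
{
#ifdef WITH_OPENSUBDIV
	if (dm->type == DM_TYPE_CCGDM) {
		GPU_draw_update_fvar_offset(dm);
	}
#else
	(void)dm;
#endif
}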
diff --git a/source/blender/gpu/intern/gpu_extensions.c b/source/blender/gpu/intern/gpu_extensions.c
index b757aff4bdb..c71b827f463 100644
--- a/source/blender/gpu/intern/gpu_extensions.c
+++ b/source/blender/gpu/intern/gpu_extensions.c
@@ -61,8 +61,9 @@
# include "BLI_winstuff.h"
#endif
-#define MAX_DEFINE_LENGTH 72
-#define MAX_EXT_DEFINE_LENGTH 280
+/* TODO(sergey): Find better default values for these constants. */
+#define MAX_DEFINE_LENGTH 1024
+#define MAX_EXT_DEFINE_LENGTH 1024
/* Extensions support */
@@ -1528,8 +1529,14 @@ static void shader_print_errors(const char *task, const char *log, const char **
fprintf(stderr, "%s\n", log);
}
-static const char *gpu_shader_version(void)
+static const char *gpu_shader_version(bool use_opensubdiv)
{
+#ifdef WITH_OPENSUBDIV
+ if (use_opensubdiv) {
+ return "#version 150";
+ }
+#endif
+
/* turn on glsl 1.30 for bicubic bump mapping and ATI clipping support */
if (GLEW_VERSION_3_0 &&
(GPU_bicubic_bump_support() || GPU_type_matches(GPU_DEVICE_ATI, GPU_OS_ANY, GPU_DRIVER_ANY)))
@@ -1543,9 +1550,15 @@ static const char *gpu_shader_version(void)
static void gpu_shader_standard_extensions(char defines[MAX_EXT_DEFINE_LENGTH])
{
+#ifdef WITH_OPENSUBDIV
+ strcat(defines, "#extension GL_ARB_texture_query_lod: enable\n"
+ "#extension GL_ARB_gpu_shader5 : enable\n"
+ "#extension GL_ARB_explicit_attrib_location : require\n");
+#else
/* need this extension for high quality bump mapping */
if (GPU_bicubic_bump_support())
strcat(defines, "#extension GL_ARB_texture_query_lod: enable\n");
+#endif
if (GPU_geometry_shader_support())
strcat(defines, "#extension GL_EXT_geometry_shader4: enable\n");
@@ -1556,7 +1569,8 @@ static void gpu_shader_standard_extensions(char defines[MAX_EXT_DEFINE_LENGTH])
}
}
-static void gpu_shader_standard_defines(char defines[MAX_DEFINE_LENGTH])
+static void gpu_shader_standard_defines(bool use_opensubdiv,
+ char defines[MAX_DEFINE_LENGTH])
{
/* some useful defines to detect GPU type */
if (GPU_type_matches(GPU_DEVICE_ATI, GPU_OS_ANY, GPU_DRIVER_ANY)) {
@@ -1571,6 +1585,28 @@ static void gpu_shader_standard_defines(char defines[MAX_DEFINE_LENGTH])
if (GPU_bicubic_bump_support())
strcat(defines, "#define BUMP_BICUBIC\n");
+
+#ifdef WITH_OPENSUBDIV
+ /* TODO(sergey): Check whether we are actually compiling a shader
+ * for an OpenSubdiv mesh.
+ */
+ if (use_opensubdiv) {
+ strcat(defines, "#define USE_OPENSUBDIV\n");
+
+ /* TODO(sergey): Not strictly speaking a define, but this is
+ * a global typedef which we don't have a better place to
+ * define yet.
+ */
+ strcat(defines, "struct VertexData {\n"
+ " vec4 position;\n"
+ " vec3 normal;\n"
+ " vec2 uv;"
+ "};\n");
+ }
+#else
+ UNUSED_VARS(use_opensubdiv);
+#endif
+
return;
}
@@ -1640,6 +1676,15 @@ void GPU_program_parameter_4f(GPUProgram *program, unsigned int location, float
GPUShader *GPU_shader_create(const char *vertexcode, const char *fragcode, const char *geocode, const char *libcode, const char *defines, int input, int output, int number)
{
+#ifdef WITH_OPENSUBDIV
+ /* TODO(sergey): Only used to add #version 150 to the geometry shader.
+ * Could safely be renamed to "use_geometry_code" since it's very
+ * likely any geometry code will want to use GLSL 1.5.
+ */
+ bool use_opensubdiv = geocode != NULL;
+#else
+ bool use_opensubdiv = false;
+#endif
GLint status;
GLcharARB log[5000];
GLsizei length = 0;
@@ -1671,7 +1716,7 @@ GPUShader *GPU_shader_create(const char *vertexcode, const char *fragcode, const
return NULL;
}
- gpu_shader_standard_defines(standard_defines);
+ gpu_shader_standard_defines(use_opensubdiv, standard_defines);
gpu_shader_standard_extensions(standard_extensions);
if (vertexcode) {
@@ -1679,7 +1724,7 @@ GPUShader *GPU_shader_create(const char *vertexcode, const char *fragcode, const
/* custom limit, may be too small, beware */
int num_source = 0;
- source[num_source++] = gpu_shader_version();
+ source[num_source++] = gpu_shader_version(use_opensubdiv);
source[num_source++] = standard_extensions;
source[num_source++] = standard_defines;
@@ -1702,13 +1747,25 @@ GPUShader *GPU_shader_create(const char *vertexcode, const char *fragcode, const
}
if (fragcode) {
- const char *source[6];
+ const char *source[7];
int num_source = 0;
- source[num_source++] = gpu_shader_version();
+ source[num_source++] = gpu_shader_version(use_opensubdiv);
source[num_source++] = standard_extensions;
source[num_source++] = standard_defines;
+#ifdef WITH_OPENSUBDIV
+ /* TODO(sergey): Move to fragment shader source code generation. */
+ if (use_opensubdiv) {
+ source[num_source++] =
+ "#ifdef USE_OPENSUBDIV\n"
+ "in block {\n"
+ " VertexData v;\n"
+ "} inpt;\n"
+ "#endif\n";
+ }
+#endif
+
if (defines) source[num_source++] = defines;
if (libcode) source[num_source++] = libcode;
source[num_source++] = fragcode;
@@ -1732,7 +1789,7 @@ GPUShader *GPU_shader_create(const char *vertexcode, const char *fragcode, const
const char *source[6];
int num_source = 0;
- source[num_source++] = gpu_shader_version();
+ source[num_source++] = gpu_shader_version(use_opensubdiv);
source[num_source++] = standard_extensions;
source[num_source++] = standard_defines;
@@ -1753,7 +1810,9 @@ GPUShader *GPU_shader_create(const char *vertexcode, const char *fragcode, const
return NULL;
}
- GPU_shader_geometry_stage_primitive_io(shader, input, output, number);
+ if (!use_opensubdiv) {
+ GPU_shader_geometry_stage_primitive_io(shader, input, output, number);
+ }
}
@@ -1762,6 +1821,18 @@ GPUShader *GPU_shader_create(const char *vertexcode, const char *fragcode, const
glAttachObjectARB(shader->object, lib->lib);
#endif
+#ifdef WITH_OPENSUBDIV
+ if (use_opensubdiv) {
+ glBindAttribLocation(shader->object, 0, "position");
+ glBindAttribLocation(shader->object, 1, "normal");
+ GPU_shader_geometry_stage_primitive_io(shader,
+ GL_LINES_ADJACENCY_EXT,
+ GL_TRIANGLE_STRIP,
+ 4);
+
+ }
+#endif
+
glLinkProgramARB(shader->object);
glGetObjectParameterivARB(shader->object, GL_OBJECT_LINK_STATUS_ARB, &status);
if (!status) {
@@ -1775,6 +1846,15 @@ GPUShader *GPU_shader_create(const char *vertexcode, const char *fragcode, const
return NULL;
}
+#ifdef WITH_OPENSUBDIV
+ /* TODO(sergey): Find a better place for this. */
+ {
+ glProgramUniform1i(shader->object,
+ glGetUniformLocation(shader->object, "FVarDataBuffer"),
+ 31); /* GL_TEXTURE31 */
+ }
+#endif
+
return shader;
}
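
(Sketch, not part of the patch.) With the change above, supplying geometry shader code is what flips GPU_shader_create() into the OpenSubdiv path: GLSL version 1.50, explicit position/normal attribute bindings and lines-adjacency to triangle-strip primitive I/O, which is why the trailing input/output/number arguments can simply be zero there. The *_src parameters below are placeholders.

#include "GPU_extensions.h"

/* Illustrative wrapper: create a shader with a geometry stage; the
 * primitive I/O arguments are ignored on the OpenSubdiv path. */
static GPUShader *example_create_osd_shader(const char *vert_src,
                                            const char *frag_src,
                                            const char *geom_src)
{
	return GPU_shader_create(vert_src, frag_src, geom_src,
	                         NULL, /* libcode */
	                         NULL, /* defines */
	                         0, 0, 0);
}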
diff --git a/source/blender/gpu/intern/gpu_material.c b/source/blender/gpu/intern/gpu_material.c
index 5db516daa26..5b647232934 100644
--- a/source/blender/gpu/intern/gpu_material.c
+++ b/source/blender/gpu/intern/gpu_material.c
@@ -119,6 +119,8 @@ struct GPUMaterial {
ListBase lamps;
bool bound;
+
+ bool is_opensubdiv;
};
struct GPULamp {
@@ -223,7 +225,8 @@ static int GPU_material_construct_end(GPUMaterial *material, const char *passnam
outlink = material->outlink;
material->pass = GPU_generate_pass(&material->nodes, outlink,
- &material->attribs, &material->builtins, material->type, passname);
+ &material->attribs, &material->builtins, material->type,
+ passname, material->is_opensubdiv);
if (!material->pass)
return 0;
@@ -1673,21 +1676,27 @@ static GPUNodeLink *gpu_material_preview_matcap(GPUMaterial *mat, Material *ma)
}
/* new solid draw mode with glsl matcaps */
-GPUMaterial *GPU_material_matcap(Scene *scene, Material *ma)
+GPUMaterial *GPU_material_matcap(Scene *scene, Material *ma, bool use_opensubdiv)
{
GPUMaterial *mat;
GPUNodeLink *outlink;
LinkData *link;
- for (link = ma->gpumaterial.first; link; link = link->next)
- if (((GPUMaterial*)link->data)->scene == scene)
- return link->data;
+ for (link = ma->gpumaterial.first; link; link = link->next) {
+ GPUMaterial *current_material = (GPUMaterial*)link->data;
+ if (current_material->scene == scene &&
+ current_material->is_opensubdiv == use_opensubdiv)
+ {
+ return current_material;
+ }
+ }
/* allocate material */
mat = GPU_material_construct_begin(ma);
mat->scene = scene;
mat->type = GPU_MATERIAL_TYPE_MESH;
-
+ mat->is_opensubdiv = use_opensubdiv;
+
if (ma->preview && ma->preview->rect[0]) {
outlink = gpu_material_preview_matcap(mat, ma);
}
@@ -1749,20 +1758,26 @@ GPUMaterial *GPU_material_world(struct Scene *scene, struct World *wo)
}
-GPUMaterial *GPU_material_from_blender(Scene *scene, Material *ma)
+GPUMaterial *GPU_material_from_blender(Scene *scene, Material *ma, bool use_opensubdiv)
{
GPUMaterial *mat;
GPUNodeLink *outlink;
LinkData *link;
- for (link = ma->gpumaterial.first; link; link = link->next)
- if (((GPUMaterial*)link->data)->scene == scene)
- return link->data;
+ for (link = ma->gpumaterial.first; link; link = link->next) {
+ GPUMaterial *current_material = (GPUMaterial*)link->data;
+ if (current_material->scene == scene &&
+ current_material->is_opensubdiv == use_opensubdiv)
+ {
+ return current_material;
+ }
+ }
/* allocate material */
mat = GPU_material_construct_begin(ma);
mat->scene = scene;
mat->type = GPU_MATERIAL_TYPE_MESH;
+ mat->is_opensubdiv = use_opensubdiv;
/* render pipeline option */
if (ma->mode & MA_TRANSP)
@@ -2250,7 +2265,8 @@ GPUShaderExport *GPU_shader_export(struct Scene *scene, struct Material *ma)
if (!GPU_glsl_support())
return NULL;
- mat = GPU_material_from_blender(scene, ma);
+ /* TODO(sergey): How to determine whether we need OSD or not here? */
+ mat = GPU_material_from_blender(scene, ma, false);
pass = (mat)? mat->pass: NULL;
if (pass && pass->fragmentcode && pass->vertexcode) {
@@ -2421,3 +2437,54 @@ void GPU_free_shader_export(GPUShaderExport *shader)
MEM_freeN(shader);
}
+#ifdef WITH_OPENSUBDIV
+void GPU_material_update_fvar_offset(GPUMaterial *gpu_material,
+ DerivedMesh *dm)
+{
+ GPUPass *pass = gpu_material->pass;
+ GPUShader *shader = (pass != NULL ? pass->shader : NULL);
+ ListBase *inputs = (pass != NULL ? &pass->inputs : NULL);
+ GPUInput *input;
+
+ if (shader == NULL) {
+ return;
+ }
+
+ GPU_shader_bind(shader);
+
+ for (input = inputs->first;
+ input != NULL;
+ input = input->next)
+ {
+ if (input->source == GPU_SOURCE_ATTRIB &&
+ input->attribtype == CD_MTFACE)
+ {
+ char name[64];
+ /* TODO(sergey): This will only work for as long as the names are
+ * consistent; we'll need to solve this properly in the future.
+ */
+ int layer_index;
+ int location;
+
+ if (input->attribname[0] != '\0') {
+ layer_index = CustomData_get_named_layer(&dm->loopData,
+ CD_MLOOPUV,
+ input->attribname);
+ }
+ else {
+ layer_index = CustomData_get_active_layer(&dm->loopData,
+ CD_MLOOPUV);
+ }
+
+ BLI_snprintf(name, sizeof(name),
+ "fvar%d_offset",
+ input->attribid);
+ location = GPU_shader_get_uniform(shader, name);
+ /* Multiply by 2 because we're offsetting the U and V variables. */
+ GPU_shader_uniform_int(shader, location, layer_index * 2);
+ }
+ }
+
+ GPU_shader_unbind();
+}
+#endif
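
(Consequence sketch, not from the patch.) Because the lookup above now matches on both the scene and the OpenSubdiv flag, the ma->gpumaterial cache can hold two independent variants of the same material at the same time:

#include "DNA_material_types.h"
#include "DNA_scene_types.h"
#include "GPU_material.h"

/* Both variants are compiled and cached separately; repeated calls with
 * the same (scene, use_opensubdiv) pair return the cached entry. */
static void example_request_both_variants(Scene *scene, Material *ma)
{
	GPUMaterial *mat_regular = GPU_material_from_blender(scene, ma, false);
	GPUMaterial *mat_osd = GPU_material_from_blender(scene, ma, true);
	(void)mat_regular;
	(void)mat_osd;
}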
diff --git a/source/blender/gpu/intern/gpu_simple_shader.c b/source/blender/gpu/intern/gpu_simple_shader.c
index b439a37f3c3..89d3c0f59df 100644
--- a/source/blender/gpu/intern/gpu_simple_shader.c
+++ b/source/blender/gpu/intern/gpu_simple_shader.c
@@ -152,7 +152,8 @@ static GPUShader *gpu_simple_shader(int options)
datatoc_gpu_shader_simple_vert_glsl,
datatoc_gpu_shader_simple_frag_glsl,
NULL,
- NULL, defines, 0, 0, 0);
+ NULL,
+ defines, 0, 0, 0);
if (shader) {
/* set texture map to first texture unit */
diff --git a/source/blender/gpu/shaders/gpu_shader_material.glsl b/source/blender/gpu/shaders/gpu_shader_material.glsl
index ee413c1e4de..12f55a955e3 100644
--- a/source/blender/gpu/shaders/gpu_shader_material.glsl
+++ b/source/blender/gpu/shaders/gpu_shader_material.glsl
@@ -2593,6 +2593,7 @@ void material_preview_matcap(vec4 color, sampler2D ima, vec4 N, vec4 mask, out v
vec3 normal;
vec2 tex;
+#ifndef USE_OPENSUBDIV
/* remap to 0.0 - 1.0 range. This is done because OpenGL 2.0 clamps colors
* between shader stages and we want the full range of the normal */
normal = vec3(2.0, 2.0, 2.0) * vec3(N.x, N.y, N.z) - vec3(1.0, 1.0, 1.0);
@@ -2600,6 +2601,10 @@ void material_preview_matcap(vec4 color, sampler2D ima, vec4 N, vec4 mask, out v
normal.z = 0.0;
}
normal = normalize(normal);
+#else
+ normal = inpt.v.normal;
+ mask = vec4(1.0, 1.0, 1.0, 1.0);
+#endif
tex.x = 0.5 + 0.49 * normal.x;
tex.y = 0.5 + 0.49 * normal.y;
diff --git a/source/blender/gpu/shaders/gpu_shader_vertex.glsl b/source/blender/gpu/shaders/gpu_shader_vertex.glsl
index b5d8dcc0f35..7e332706695 100644
--- a/source/blender/gpu/shaders/gpu_shader_vertex.glsl
+++ b/source/blender/gpu/shaders/gpu_shader_vertex.glsl
@@ -1,3 +1,11 @@
+#ifdef USE_OPENSUBDIV
+in vec3 normal;
+in vec4 position;
+
+out block {
+ VertexData v;
+} outpt;
+#endif
varying vec3 varposition;
varying vec3 varnormal;
@@ -8,10 +16,15 @@ varying float gl_ClipDistance[6];
void main()
{
- vec4 co = gl_ModelViewMatrix * gl_Vertex;
+#ifndef USE_OPENSUBDIV
+ vec4 position = gl_Vertex;
+ vec3 normal = gl_Normal;
+#endif
+
+ vec4 co = gl_ModelViewMatrix * position;
varposition = co.xyz;
- varnormal = normalize(gl_NormalMatrix * gl_Normal);
+ varnormal = normalize(gl_NormalMatrix * normal);
gl_Position = gl_ProjectionMatrix * co;
#ifdef CLIP_WORKAROUND
@@ -24,3 +37,7 @@ void main()
gl_ClipVertex = co;
#endif
+#ifdef USE_OPENSUBDIV
+ outpt.v.position = co;
+ outpt.v.normal = varnormal;
+#endif
diff --git a/source/blender/makesdna/DNA_userdef_types.h b/source/blender/makesdna/DNA_userdef_types.h
index 48ed3563d53..803c2922ed9 100644
--- a/source/blender/makesdna/DNA_userdef_types.h
+++ b/source/blender/makesdna/DNA_userdef_types.h
@@ -561,6 +561,9 @@ typedef struct UserDef {
short pie_menu_threshold; /* pie menu distance from center before a direction is set */
struct WalkNavigation walk_navigation;
+
+ short opensubdiv_compute_type;
+ char pad5[6];
} UserDef;
extern UserDef U; /* from blenkernel blender.c */
@@ -883,6 +886,16 @@ typedef enum eUserpref_VirtualPixel {
VIRTUAL_PIXEL_DOUBLE = 1,
} eUserpref_VirtualPixel;
+typedef enum eOpensubdiv_Compute_Type {
+ USER_OPENSUBDIV_COMPUTE_NONE = 0,
+ USER_OPENSUBDIV_COMPUTE_CPU = 1,
+ USER_OPENSUBDIV_COMPUTE_OPENMP = 2,
+ USER_OPENSUBDIV_COMPUTE_OPENCL = 3,
+ USER_OPENSUBDIV_COMPUTE_CUDA = 4,
+ USER_OPENSUBDIV_COMPUTE_GLSL_TRANSFORM_FEEDBACK = 5,
+ USER_OPENSUBDIV_COMPUTE_GLSL_COMPUTE = 6,
+} eOpensubdiv_Compute_Type;
+
#ifdef __cplusplus
}
#endif
diff --git a/source/blender/makesrna/SConscript b/source/blender/makesrna/SConscript
index f8eb2e1807c..52c01f6d476 100644
--- a/source/blender/makesrna/SConscript
+++ b/source/blender/makesrna/SConscript
@@ -139,6 +139,10 @@ if env['WITH_BF_FREESTYLE']:
defs.append('WITH_FREESTYLE')
incs += ' ../freestyle'
+if env['WITH_BF_OPENSUBDIV']:
+ incs += ' #/intern/opensubdiv'
+ defs.append('WITH_OPENSUBDIV')
+
if env['OURPLATFORM'] == 'linux':
cflags='-pthread'
diff --git a/source/blender/makesrna/intern/CMakeLists.txt b/source/blender/makesrna/intern/CMakeLists.txt
index e878e0877d2..bef2fa3084e 100644
--- a/source/blender/makesrna/intern/CMakeLists.txt
+++ b/source/blender/makesrna/intern/CMakeLists.txt
@@ -300,6 +300,13 @@ if(WITH_FREESTYLE)
add_definitions(-DWITH_FREESTYLE)
endif()
+if(WITH_OPENSUBDIV)
+ list(APPEND INC
+ ../../../../intern/opensubdiv
+ )
+ add_definitions(-DWITH_OPENSUBDIV)
+endif()
+
# Build makesrna executable
blender_include_dirs(
.
diff --git a/source/blender/makesrna/intern/SConscript b/source/blender/makesrna/intern/SConscript
index afa4ab3a67d..0e36a4c9712 100644
--- a/source/blender/makesrna/intern/SConscript
+++ b/source/blender/makesrna/intern/SConscript
@@ -154,6 +154,10 @@ if env['WITH_BF_FREESTYLE']:
defs.append('WITH_FREESTYLE')
incs += ' ../../freestyle'
+if env['WITH_BF_OPENSUBDIV']:
+ defs.append('WITH_OPENSUBDIV')
+ incs += ' #intern/opensubdiv'
+
if env['OURPLATFORM'] == 'linux':
cflags='-pthread'
diff --git a/source/blender/makesrna/intern/rna_userdef.c b/source/blender/makesrna/intern/rna_userdef.c
index cf327583d47..b2af13aa106 100644
--- a/source/blender/makesrna/intern/rna_userdef.c
+++ b/source/blender/makesrna/intern/rna_userdef.c
@@ -61,6 +61,19 @@ static EnumPropertyItem compute_device_type_items[] = {
};
#endif
+#ifdef WITH_OPENSUBDIV
+static EnumPropertyItem opensubdiv_compute_type_items[] = {
+ {USER_OPENSUBDIV_COMPUTE_NONE, "NONE", 0, "None", ""},
+ {USER_OPENSUBDIV_COMPUTE_CPU, "CPU", 0, "CPU", ""},
+ {USER_OPENSUBDIV_COMPUTE_OPENMP, "OPENMP", 0, "OpenMP", ""},
+ {USER_OPENSUBDIV_COMPUTE_OPENCL, "OPENCL", 0, "OpenCL", ""},
+ {USER_OPENSUBDIV_COMPUTE_CUDA, "CUDA", 0, "CUDA", ""},
+ {USER_OPENSUBDIV_COMPUTE_GLSL_TRANSFORM_FEEDBACK, "GLSL_TRANSFORM_FEEDBACK", 0, "GLSL Transform Feedback", ""},
+ {USER_OPENSUBDIV_COMPUTE_GLSL_COMPUTE, "GLSL_COMPUTE", 0, "GLSL Compute", ""},
+ { 0, NULL, 0, NULL, NULL}
+};
+#endif
+
static EnumPropertyItem audio_device_items[] = {
{0, "NONE", 0, "None", "Null device - there will be no audio output"},
#ifdef WITH_SDL
@@ -106,6 +119,10 @@ EnumPropertyItem navigation_mode_items[] = {
#include "CCL_api.h"
+#ifdef WITH_OPENSUBDIV
+# include "opensubdiv_capi.h"
+#endif
+
#ifdef WITH_SDL_DYNLOAD
# include "sdlew.h"
#endif
@@ -538,6 +555,54 @@ static EnumPropertyItem *rna_userdef_compute_device_itemf(bContext *UNUSED(C), P
}
#endif
+#ifdef WITH_OPENSUBDIV
+static EnumPropertyItem *rna_userdef_opensubdiv_compute_type_itemf(bContext *UNUSED(C), PointerRNA *UNUSED(ptr),
+ PropertyRNA *UNUSED(prop), bool *r_free)
+{
+ EnumPropertyItem *item = NULL;
+ int totitem = 0;
+ int evaluators = openSubdiv_getAvailableEvaluators();
+
+ RNA_enum_items_add_value(&item, &totitem, opensubdiv_compute_type_items, USER_OPENSUBDIV_COMPUTE_NONE);
+
+#define APPEND_COMPUTE(compute) \
+ if (evaluators & OPENSUBDIV_EVALUATOR_## compute) { \
+ RNA_enum_items_add_value(&item, &totitem, opensubdiv_compute_type_items, USER_OPENSUBDIV_COMPUTE_ ## compute); \
+ } ((void)0)
+
+ APPEND_COMPUTE(CPU);
+ APPEND_COMPUTE(OPENMP);
+ APPEND_COMPUTE(OPENCL);
+ APPEND_COMPUTE(CUDA);
+ APPEND_COMPUTE(GLSL_TRANSFORM_FEEDBACK);
+ APPEND_COMPUTE(GLSL_COMPUTE);
+
+#undef APPEND_COMPUTE
+
+ RNA_enum_item_end(&item, &totitem);
+ *r_free = true;
+
+ return item;
+}
+
+static void rna_userdef_opensubdiv_update(Main *bmain, Scene *UNUSED(scene), PointerRNA *UNUSED(ptr))
+{
+ Object *object;
+
+ for (object = bmain->object.first;
+ object;
+ object = object->id.next)
+ {
+ if (object->derivedFinal != NULL &&
+ object->derivedFinal->type == DM_TYPE_CCGDM)
+ {
+ DAG_id_tag_update(&object->id, OB_RECALC_OB);
+ }
+ }
+}
+
+#endif
+
static EnumPropertyItem *rna_userdef_audio_device_itemf(bContext *UNUSED(C), PointerRNA *UNUSED(ptr),
PropertyRNA *UNUSED(prop), bool *r_free)
{
@@ -4204,6 +4269,16 @@ static void rna_def_userdef_system(BlenderRNA *brna)
RNA_def_property_enum_funcs(prop, "rna_userdef_compute_device_get", NULL, "rna_userdef_compute_device_itemf");
RNA_def_property_ui_text(prop, "Compute Device", "Device to use for computation");
#endif
+
+#ifdef WITH_OPENSUBDIV
+ prop = RNA_def_property(srna, "opensubdiv_compute_type", PROP_ENUM, PROP_NONE);
+ RNA_def_property_flag(prop, PROP_ENUM_NO_CONTEXT);
+ RNA_def_property_enum_sdna(prop, NULL, "opensubdiv_compute_type");
+ RNA_def_property_enum_items(prop, opensubdiv_compute_type_items);
+ RNA_def_property_enum_funcs(prop, NULL, NULL, "rna_userdef_opensubdiv_compute_type_itemf");
+ RNA_def_property_ui_text(prop, "OpenSubdiv Compute Type", "Type of computer backend used with OpenSubdiv");
+ RNA_def_property_update(prop, NC_SPACE | ND_SPACE_PROPERTIES, "rna_userdef_opensubdiv_update");
+#endif
}
static void rna_def_userdef_input(BlenderRNA *brna)
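
(Expansion note.) One APPEND_COMPUTE(CPU) invocation in rna_userdef_opensubdiv_compute_type_itemf() above expands to roughly the fragment below (it lives inside that function, so item, totitem and evaluators come from its scope), which is why only evaluators actually compiled into the OpenSubdiv library show up in the preferences enum.

/* Expansion of APPEND_COMPUTE(CPU) inside the itemf callback above. */
if (evaluators & OPENSUBDIV_EVALUATOR_CPU) {
	RNA_enum_items_add_value(&item, &totitem,
	                         opensubdiv_compute_type_items,
	                         USER_OPENSUBDIV_COMPUTE_CPU);
}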
diff --git a/source/blender/modifiers/CMakeLists.txt b/source/blender/modifiers/CMakeLists.txt
index fde0c773ef0..ad230dede24 100644
--- a/source/blender/modifiers/CMakeLists.txt
+++ b/source/blender/modifiers/CMakeLists.txt
@@ -151,4 +151,8 @@ if(WITH_OPENNL)
)
endif()
+if(WITH_OPENSUBDIV)
+ add_definitions(-DWITH_OPENSUBDIV)
+endif()
+
blender_add_lib(bf_modifiers "${SRC}" "${INC}" "${INC_SYS}")
diff --git a/source/blender/modifiers/intern/MOD_subsurf.c b/source/blender/modifiers/intern/MOD_subsurf.c
index 2a62a9e6081..5a03da91b2d 100644
--- a/source/blender/modifiers/intern/MOD_subsurf.c
+++ b/source/blender/modifiers/intern/MOD_subsurf.c
@@ -101,17 +101,33 @@ static DerivedMesh *applyModifier(ModifierData *md, Object *ob,
const bool useRenderParams = (flag & MOD_APPLY_RENDER) != 0;
const bool isFinalCalc = (flag & MOD_APPLY_USECACHE) != 0;
+#ifdef WITH_OPENSUBDIV
+ const bool allow_gpu = (flag & MOD_APPLY_ALLOW_GPU) != 0;
+ const bool do_cddm_convert = useRenderParams;
+#else
+ const bool do_cddm_convert = useRenderParams || !isFinalCalc;
+#endif
+
if (useRenderParams)
subsurf_flags |= SUBSURF_USE_RENDER_PARAMS;
if (isFinalCalc)
subsurf_flags |= SUBSURF_IS_FINAL_CALC;
if (ob->mode & OB_MODE_EDIT)
subsurf_flags |= SUBSURF_IN_EDIT_MODE;
-
+
+#ifdef WITH_OPENSUBDIV
+ /* TODO(sergey): Not entirely correct: modifiers on top of SubSurf
+ * could be disabled, in which case SubSurf is effectively still
+ * the last enabled modifier.
+ */
+ if (md->next == NULL && allow_gpu && do_cddm_convert == false) {
+ subsurf_flags |= SUBSURF_USE_GPU_BACKEND;
+ }
+#endif
+
result = subsurf_make_derived_from_derived(derivedData, smd, NULL, subsurf_flags);
result->cd_flag = derivedData->cd_flag;
-
- if (useRenderParams || !isFinalCalc) {
+
+ if (do_cddm_convert) {
DerivedMesh *cddm = CDDM_copy(result);
result->release(result);
result = cddm;
@@ -129,6 +145,15 @@ static DerivedMesh *applyModifierEM(ModifierData *md, Object *UNUSED(ob),
DerivedMesh *result;
/* 'orco' using editmode flags would cause cache to be used twice in editbmesh_calc_modifiers */
SubsurfFlags ss_flags = (flag & MOD_APPLY_ORCO) ? 0 : (SUBSURF_FOR_EDIT_MODE | SUBSURF_IN_EDIT_MODE);
+#ifdef WITH_OPENSUBDIV
+ const bool allow_gpu = (flag & MOD_APPLY_ALLOW_GPU) != 0;
+ /* TODO(sergey): Not entirely correct: modifiers on top of SubSurf
+ * could be disabled, in which case SubSurf is effectively still
+ * the last enabled modifier.
+ */
+ if (md->next == NULL && allow_gpu) {
+ ss_flags |= SUBSURF_USE_GPU_BACKEND;
+ }
+#endif
result = subsurf_make_derived_from_derived(derivedData, smd, NULL, ss_flags);
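
(Caller-side sketch, with assumptions.) The GPU backend is only requested when the evaluation path permits it: the actual call sites that set MOD_APPLY_ALLOW_GPU live in DerivedMesh.c, which is part of this commit but not of this excerpt. A viewport-style evaluation would roughly look like this, with md, ob and dm as placeholders:

#include "DNA_modifier_types.h"
#include "DNA_object_types.h"
#include "BKE_DerivedMesh.h"
#include "BKE_modifier.h"

/* Sketch: only viewport evaluation passes MOD_APPLY_ALLOW_GPU, so render
 * evaluation keeps using the CPU limit surface and the CDDM conversion. */
static DerivedMesh *example_eval_subsurf_for_viewport(ModifierData *md,
                                                      Object *ob,
                                                      DerivedMesh *dm)
{
	const ModifierApplyFlag flag = MOD_APPLY_USECACHE | MOD_APPLY_ALLOW_GPU;
	return modwrap_applyModifier(md, ob, dm, flag);
}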
diff --git a/source/blender/windowmanager/CMakeLists.txt b/source/blender/windowmanager/CMakeLists.txt
index 3f5c86857b7..4014d370eb8 100644
--- a/source/blender/windowmanager/CMakeLists.txt
+++ b/source/blender/windowmanager/CMakeLists.txt
@@ -138,6 +138,13 @@ if(WITH_BUILDINFO)
add_definitions(-DWITH_BUILDINFO)
endif()
+if(WITH_OPENSUBDIV)
+ add_definitions(-DWITH_OPENSUBDIV)
+ list(APPEND INC
+ ../../../intern/opensubdiv
+ )
+endif()
+
if(WIN32)
if(WITH_INPUT_IME)
add_definitions(-DWITH_INPUT_IME)
diff --git a/source/blender/windowmanager/SConscript b/source/blender/windowmanager/SConscript
index 912d11762e1..47eee41cf16 100644
--- a/source/blender/windowmanager/SConscript
+++ b/source/blender/windowmanager/SConscript
@@ -86,4 +86,8 @@ if env['WITH_BF_PYTHON_SECURITY']:
if env['OURPLATFORM'] in ('linux', 'openbsd3', 'sunos5', 'freebsd7', 'freebsd8', 'freebsd9', 'aix4', 'aix5'):
defs.append("WITH_X11")
+if env['WITH_BF_OPENSUBDIV']:
+ defs.append("WITH_OPENSUBDIV")
+ incs += ' #intern/opensubdiv'
+
env.BlenderLib ( 'bf_windowmanager', sources, Split(incs), defines=defs, libtype=['core'], priority=[5] )
diff --git a/source/blender/windowmanager/intern/wm_init_exit.c b/source/blender/windowmanager/intern/wm_init_exit.c
index 0515cd01f6f..ad7044c6218 100644
--- a/source/blender/windowmanager/intern/wm_init_exit.c
+++ b/source/blender/windowmanager/intern/wm_init_exit.c
@@ -117,6 +117,10 @@
#include "BKE_sound.h"
#include "COM_compositor.h"
+#ifdef WITH_OPENSUBDIV
+# include "opensubdiv_capi.h"
+#endif
+
static void wm_init_reports(bContext *C)
{
ReportList *reports = CTX_wm_reports(C);
@@ -525,6 +529,10 @@ void WM_exit_ext(bContext *C, const bool do_python)
(void)do_python;
#endif
+#ifdef WITH_OPENSUBDIV
+ openSubdiv_cleanup();
+#endif
+
if (!G.background) {
GPU_global_buffer_pool_free();
GPU_free_unused_buffers();
diff --git a/source/blenderplayer/CMakeLists.txt b/source/blenderplayer/CMakeLists.txt
index a3210d2c25f..bf85d7daf32 100644
--- a/source/blenderplayer/CMakeLists.txt
+++ b/source/blenderplayer/CMakeLists.txt
@@ -211,6 +211,10 @@ endif()
list(APPEND BLENDER_SORTED_LIBS bf_intern_locale)
endif()
+ if(WITH_OPENSUBDIV)
+ list(APPEND BLENDER_SORTED_LIBS bf_intern_opensubdiv)
+ endif()
+
foreach(SORTLIB ${BLENDER_SORTED_LIBS})
set(REMLIB ${SORTLIB})
foreach(SEARCHLIB ${BLENDER_LINK_LIBS})
diff --git a/source/gameengine/Ketsji/BL_BlenderShader.cpp b/source/gameengine/Ketsji/BL_BlenderShader.cpp
index fabe49b668d..b5151615f3b 100644
--- a/source/gameengine/Ketsji/BL_BlenderShader.cpp
+++ b/source/gameengine/Ketsji/BL_BlenderShader.cpp
@@ -62,7 +62,7 @@ BL_BlenderShader::~BL_BlenderShader()
void BL_BlenderShader::ReloadMaterial()
{
- mGPUMat = (mMat) ? GPU_material_from_blender(mBlenderScene, mMat) : NULL;
+ mGPUMat = (mMat) ? GPU_material_from_blender(mBlenderScene, mMat, false) : NULL;
}
void BL_BlenderShader::SetProg(bool enable, double time, RAS_IRasterizer* rasty)
diff --git a/source/gameengine/Rasterizer/RAS_OpenGLRasterizer/RAS_StorageIM.cpp b/source/gameengine/Rasterizer/RAS_OpenGLRasterizer/RAS_StorageIM.cpp
index c978b908a6a..2cf6088629a 100644
--- a/source/gameengine/Rasterizer/RAS_OpenGLRasterizer/RAS_StorageIM.cpp
+++ b/source/gameengine/Rasterizer/RAS_OpenGLRasterizer/RAS_StorageIM.cpp
@@ -236,7 +236,7 @@ void RAS_StorageIM::IndexPrimitivesInternal(RAS_MeshSlot& ms, bool multi)
Material* blmat = current_polymat->GetBlenderMaterial();
Scene* blscene = current_polymat->GetBlenderScene();
if (!wireframe && blscene && blmat)
- GPU_material_vertex_attributes(GPU_material_from_blender(blscene, blmat), &current_gpu_attribs);
+ GPU_material_vertex_attributes(GPU_material_from_blender(blscene, blmat, false), &current_gpu_attribs);
else
memset(&current_gpu_attribs, 0, sizeof(current_gpu_attribs));
// DM draw can mess up blending mode, restore at the end