From 7d8a186335400cb7ad69d24aab89e7a215a70348 Mon Sep 17 00:00:00 2001
From: Lukas Stockner
Date: Tue, 14 Jan 2020 00:33:21 +0100
Subject: Fix T73133: UDIM texture count in Eevee is limited by OpenGL

Based on @fclem's suggestion in D6421, this commit implements support for
storing all tiles of a UDIM texture in a single 2D array texture on the GPU.

Previously, Eevee bound one OpenGL texture per tile, quickly running into
hardware limits with nontrivial UDIM texture sets. Workbench, meanwhile, had
no UDIM support at all, since reusing the per-tile approach there would have
required splitting the mesh by tile in addition to the texture.

With this commit, both Workbench and Eevee support huge numbers of tiles,
the eventual limits being GPU memory and ultimately
GL_MAX_ARRAY_TEXTURE_LAYERS, which tends to be in the thousands on modern
GPUs.

My initial plan was to have one array texture per unique tile size, but
managing the different textures and keeping everything consistent ended up
being far too complex. Therefore, we now use a simpler scheme that allocates
a texture large enough to fit the largest tile and packs all tiles into as
many layers as necessary. As a result, each UDIM texture binds only two
textures (one for the actual images, one for metadata) regardless of how
many tiles are used.

Note that this rolls back per-tile GPUTextures, meaning that we again have
per-Image GPUTextures as we did before the original UDIM commit, but now
with four types instead of two.

Reviewed By: fclem
Differential Revision: https://developer.blender.org/D6456
---
 source/blender/gpu/intern/gpu_texture.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/source/blender/gpu/intern/gpu_texture.c b/source/blender/gpu/intern/gpu_texture.c
index 497fc13a2c8..201194232db 100644
--- a/source/blender/gpu/intern/gpu_texture.c
+++ b/source/blender/gpu/intern/gpu_texture.c
@@ -1032,10 +1032,6 @@ GPUTexture *GPU_texture_create_buffer(eGPUTextureFormat tex_format, const GLuint
 
 GPUTexture *GPU_texture_from_bindcode(int textarget, int bindcode)
 {
-  /* see GPUInput::textarget: it can take two values - GL_TEXTURE_2D and GL_TEXTURE_CUBE_MAP
-   * these values are correct for glDisable, so textarget can be safely used in
-   * GPU_texture_bind/GPU_texture_unbind through tex->target_base */
-  /* (is any of this obsolete now that we don't glEnable/Disable textures?) */
   GPUTexture *tex = MEM_callocN(sizeof(GPUTexture), "GPUTexture");
   tex->bindcode = bindcode;
   tex->number = -1;
@@ -1052,12 +1048,8 @@ GPUTexture *GPU_texture_from_bindcode(int textarget, int bindcode)
   else {
     GLint w, h;
 
-    GLenum gettarget;
-
-    if (textarget == GL_TEXTURE_2D) {
-      gettarget = GL_TEXTURE_2D;
-    }
-    else {
+    GLenum gettarget = textarget;
+    if (textarget == GL_TEXTURE_CUBE_MAP) {
       gettarget = GL_TEXTURE_CUBE_MAP_POSITIVE_X;
     }
 
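
Note: as an illustration of the packing scheme described in the commit
message above, here is a minimal standalone OpenGL sketch. It is not code
from this commit: the Tile struct and pack_udim_tiles() are hypothetical
names, it simplifies the scheme to one tile per layer, and it assumes an
OpenGL 3+ context with entry points provided by a loader such as GLEW.

#include <GL/glew.h> /* Any GL loader providing 3.0+ entry points works. */

typedef struct Tile {
  int width, height;         /* Tiles may have different resolutions. */
  const unsigned char *rgba; /* Pixel data, 4 bytes per pixel. */
} Tile;

/* Allocate one 2D array texture sized to the largest tile and copy each
 * tile into its own layer. Smaller tiles fill only part of their layer;
 * a shader would use per-tile metadata to rescale UVs accordingly. */
static GLuint pack_udim_tiles(const Tile *tiles, int num_tiles)
{
  int max_w = 0, max_h = 0;
  for (int i = 0; i < num_tiles; i++) {
    max_w = (tiles[i].width > max_w) ? tiles[i].width : max_w;
    max_h = (tiles[i].height > max_h) ? tiles[i].height : max_h;
  }

  GLuint tex;
  glGenTextures(1, &tex);
  glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
  glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

  /* One layer per tile: the tile count is bounded by
   * GL_MAX_ARRAY_TEXTURE_LAYERS rather than by the number of available
   * texture binding points. */
  glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, max_w, max_h, num_tiles,
               0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

  for (int i = 0; i < num_tiles; i++) {
    /* Upload tile i into the lower-left corner of layer i. */
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, i,
                    tiles[i].width, tiles[i].height, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, tiles[i].rgba);
  }
  return tex;
}

In the commit itself, a second small texture carries the per-tile metadata
(e.g. which layer a tile lives in and how much of that layer it covers) so
the shader can remap UVs, which is why a UDIM texture always binds exactly
two textures regardless of tile count.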