git.blender.org/blender.git
author     Patrick Mours <pmours@nvidia.com>  2021-11-10 16:37:15 +0300
committer  Patrick Mours <pmours@nvidia.com>  2021-11-10 17:49:50 +0300
commit     f56562043521a5c160585aea3f28167b4d3bc77d (patch)
tree       af1e155ac9b25b6ad15acbe8dba8bb8a2d8edebf /intern/cycles/kernel/device/cuda/config.h
parent     a6e4cb092eb43b74379f99bdf82baab0db21603e (diff)
Fix T92985: CUDA errors with Cycles film convert kernels
rB3a4c8f406a3a3bf0627477c6183a594fa707a6e2 changed the macros that create the film convert kernel entry points, but in the process accidentally changed the parameter definition of one of them (which caused CUDA launch and misaligned address errors) and changed the implementation as well. This restores the correct implementation from before.

In addition, the `ccl_gpu_kernel_threads` macro did not work as intended and caused the generated launch bounds to end up with an incorrect input for the second parameter (it was set to "thread_num_registers" rather than the result of the block number calculation). I'm not entirely sure why, as the macro definition looked sound to me. Decided to simply go with two separate macros instead, to simplify and solve this.

Also changed how state is captured with the `ccl_gpu_kernel_lambda` macro slightly, to avoid a compiler warning (expression has no effect) that otherwise occurred.

Maniphest Tasks: T92985

Differential Revision: https://developer.blender.org/D13175
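To illustrate the launch-bounds behaviour this fix restores, here is a minimal, self-contained CUDA sketch. The macro bodies are taken from the patched config.h; the register-budget value, the kernel name "example_fill" and the host-side main() are hypothetical stand-ins for real Cycles kernels, so treat this as a sketch rather than the actual sources.

#include <cstdio>

/* Stand-in value for the per-multiprocessor register budget defined elsewhere
 * in config.h (the identifier matches the one used in the patch, the value is
 * assumed here). */
#define GPU_MULTIPRESSOR_MAX_REGISTERS 65536

/* Variant with an explicit per-thread register budget: the second
 * __launch_bounds__ argument is the minimum number of resident blocks per
 * multiprocessor, computed from that budget. */
#define ccl_gpu_kernel(block_num_threads, thread_num_registers) \
  extern "C" __global__ void __launch_bounds__(block_num_threads, \
                                               GPU_MULTIPRESSOR_MAX_REGISTERS / \
                                                   (block_num_threads * thread_num_registers))

/* Variant that only constrains the block size. */
#define ccl_gpu_kernel_threads(block_num_threads) \
  extern "C" __global__ void __launch_bounds__(block_num_threads)

#define ccl_gpu_kernel_signature(name, ...) kernel_gpu_##name(__VA_ARGS__)

/* Hypothetical kernel.  The two macros together expand to
 *   extern "C" __global__ void __launch_bounds__(256, 65536 / (256 * 48))
 *       kernel_gpu_example_fill(float *out, int n)
 * i.e. the second launch-bounds argument is the computed block count, not
 * "thread_num_registers" itself, which is what the fix restores. */
ccl_gpu_kernel(256, 48) ccl_gpu_kernel_signature(example_fill, float *out, int n)
{
  const int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {
    out[i] = 1.0f;
  }
}

int main()
{
  const int n = 1024;
  float *out = nullptr;
  cudaMalloc(&out, n * sizeof(float));
  kernel_gpu_example_fill<<<(n + 255) / 256, 256>>>(out, n);
  cudaDeviceSynchronize();
  cudaFree(out);
  printf("done\n");
  return 0;
}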
Diffstat (limited to 'intern/cycles/kernel/device/cuda/config.h')
-rw-r--r--  intern/cycles/kernel/device/cuda/config.h  17
1 file changed, 5 insertions, 12 deletions
diff --git a/intern/cycles/kernel/device/cuda/config.h b/intern/cycles/kernel/device/cuda/config.h
index e333fe90332..003881d7912 100644
--- a/intern/cycles/kernel/device/cuda/config.h
+++ b/intern/cycles/kernel/device/cuda/config.h
@@ -92,25 +92,19 @@
/* Compute number of threads per block and minimum blocks per multiprocessor
* given the maximum number of registers per thread. */
-
-#define ccl_gpu_kernel_threads(block_num_threads) \
- extern "C" __global__ void __launch_bounds__(block_num_threads)
-
-#define ccl_gpu_kernel_threads_registers(block_num_threads, thread_num_registers) \
+#define ccl_gpu_kernel(block_num_threads, thread_num_registers) \
extern "C" __global__ void __launch_bounds__(block_num_threads, \
GPU_MULTIPRESSOR_MAX_REGISTERS / \
(block_num_threads * thread_num_registers))
-/* allow ccl_gpu_kernel to accept 1 or 2 parameters */
-#define SELECT_MACRO(_1, _2, NAME, ...) NAME
-#define ccl_gpu_kernel(...) \
- SELECT_MACRO(__VA_ARGS__, ccl_gpu_kernel_threads_registers, ccl_gpu_kernel_threads)(__VA_ARGS__)
+#define ccl_gpu_kernel_threads(block_num_threads) \
+ extern "C" __global__ void __launch_bounds__(block_num_threads)
#define ccl_gpu_kernel_signature(name, ...) kernel_gpu_##name(__VA_ARGS__)
#define ccl_gpu_kernel_call(x) x
-/* define a function object where "func" is the lambda body, and additional parameters are used to
+/* Define a function object where "func" is the lambda body, and additional parameters are used to
* specify captured state */
#define ccl_gpu_kernel_lambda(func, ...) \
struct KernelLambda { \
@@ -119,8 +113,7 @@
{ \
return (func); \
} \
- } ccl_gpu_kernel_lambda_pass; \
- ccl_gpu_kernel_lambda_pass
+ } ccl_gpu_kernel_lambda_pass
/* sanity checks */
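For reference, here is a minimal host-side C++ sketch of how the reworked `ccl_gpu_kernel_lambda` is meant to be used. The operator() signature is simplified (the device qualifiers and the real signature from config.h are omitted), and the "kernel_index" member is purely illustrative; only the expansion shape comes from the patch. The old macro ended with "} ccl_gpu_kernel_lambda_pass; ccl_gpu_kernel_lambda_pass", so invoking it as a plain statement left a bare "ccl_gpu_kernel_lambda_pass;" behind, which is the discarded expression the compiler warned about. The new macro ends with the instance name, and captured state is filled in afterwards through that instance.

/* Simplified sketch of the new expansion, assumed signatures. */
#define ccl_gpu_kernel_lambda(func, ...) \
  struct KernelLambda { \
    __VA_ARGS__; \
    int operator()(const int state) const \
    { \
      return (func); \
    } \
  } ccl_gpu_kernel_lambda_pass

int main()
{
  /* Declare the function object; captured state is listed after the body. */
  ccl_gpu_kernel_lambda(state == kernel_index, int kernel_index);

  /* Fill in captured state through the named instance instead of chaining
   * off a trailing expression as the old macro required. */
  ccl_gpu_kernel_lambda_pass.kernel_index = 3;

  return ccl_gpu_kernel_lambda_pass(3) ? 0 : 1;
}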