git.blender.org/blender.git
author    Michael Jones <michael_p_jones@apple.com>  2021-10-14 15:53:40 +0300
committer Michael Jones <michael_p_jones@apple.com>  2021-10-14 18:14:43 +0300
commit    a0f269f682dab848afc80cd322d04a0c4a815cae (patch)
tree      0978b1888273fbaa2d14550bde484c5247fa89ff  /intern/cycles/kernel/kernel_accumulate.h
parent    47caeb8c26686e24ea7e694f94fabee44f3d2dca (diff)
Cycles: Kernel address space changes for MSL

This is the first of a sequence of changes to support compiling Cycles
kernels as MSL (Metal Shading Language) in preparation for a Metal GPU
device implementation.

MSL requires that all pointer types be declared with explicit address
space attributes (device, thread, etc.). There is already precedent for
this with Cycles' address space macros (ccl_global, ccl_private, etc.),
therefore the first step of MSL-enablement is to apply these
consistently. Line-for-line, this represents the largest change required
to enable MSL. Applying this change first will simplify future patches
as well as offering the emergent benefit of enhanced descriptiveness.

The vast majority of deltas in this patch fall into one of two cases:

- Ensuring ccl_private is specified for thread-local pointer types
- Ensuring ccl_global is specified for device-wide pointer types

Additionally, the ccl_addr_space qualifier can be removed. Prior to
Cycles X, ccl_addr_space was used as a context-dependent address space
qualifier, but now it is either redundant (e.g. in struct typedefs) or
can be replaced by ccl_global in the case of pointer types. Associated
function variants (e.g. lcg_step_float_addrspace) are also redundant.

In cases where address space qualifiers are chained with "const", this
patch places the address space qualifier first. The rationale for this
is that the choice of address space is likely to have the greater impact
on runtime performance and overall architecture.

The final part of this patch is the addition of a metal/compat.h header.
This is partially complete and will be extended in future patches,
paving the way for the full Metal implementation.

Ref T92212

Reviewed By: brecht

Maniphest Tasks: T92212

Differential Revision: https://developer.blender.org/D12864
Diffstat (limited to 'intern/cycles/kernel/kernel_accumulate.h')
-rw-r--r--  intern/cycles/kernel/kernel_accumulate.h | 20
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/intern/cycles/kernel/kernel_accumulate.h b/intern/cycles/kernel/kernel_accumulate.h
index f4d00e4c20c..dc0aa9356f7 100644
--- a/intern/cycles/kernel/kernel_accumulate.h
+++ b/intern/cycles/kernel/kernel_accumulate.h
@@ -32,7 +32,9 @@ CCL_NAMESPACE_BEGIN
* that only one of those can happen at a bounce, and so do not need to accumulate
* them separately. */
-ccl_device_inline void bsdf_eval_init(BsdfEval *eval, const bool is_diffuse, float3 value)
+ccl_device_inline void bsdf_eval_init(ccl_private BsdfEval *eval,
+ const bool is_diffuse,
+ float3 value)
{
eval->diffuse = zero_float3();
eval->glossy = zero_float3();
@@ -45,7 +47,7 @@ ccl_device_inline void bsdf_eval_init(BsdfEval *eval, const bool is_diffuse, flo
}
}
-ccl_device_inline void bsdf_eval_accum(BsdfEval *eval,
+ccl_device_inline void bsdf_eval_accum(ccl_private BsdfEval *eval,
const bool is_diffuse,
float3 value,
float mis_weight)
@@ -60,29 +62,29 @@ ccl_device_inline void bsdf_eval_accum(BsdfEval *eval,
}
}
-ccl_device_inline bool bsdf_eval_is_zero(BsdfEval *eval)
+ccl_device_inline bool bsdf_eval_is_zero(ccl_private BsdfEval *eval)
{
return is_zero(eval->diffuse) && is_zero(eval->glossy);
}
-ccl_device_inline void bsdf_eval_mul(BsdfEval *eval, float value)
+ccl_device_inline void bsdf_eval_mul(ccl_private BsdfEval *eval, float value)
{
eval->diffuse *= value;
eval->glossy *= value;
}
-ccl_device_inline void bsdf_eval_mul3(BsdfEval *eval, float3 value)
+ccl_device_inline void bsdf_eval_mul3(ccl_private BsdfEval *eval, float3 value)
{
eval->diffuse *= value;
eval->glossy *= value;
}
-ccl_device_inline float3 bsdf_eval_sum(const BsdfEval *eval)
+ccl_device_inline float3 bsdf_eval_sum(ccl_private const BsdfEval *eval)
{
return eval->diffuse + eval->glossy;
}
-ccl_device_inline float3 bsdf_eval_diffuse_glossy_ratio(const BsdfEval *eval)
+ccl_device_inline float3 bsdf_eval_diffuse_glossy_ratio(ccl_private const BsdfEval *eval)
{
/* Ratio of diffuse and glossy to recover proportions for writing to render pass.
* We assume reflection, transmission and volume scatter to be exclusive. */
@@ -96,7 +98,9 @@ ccl_device_inline float3 bsdf_eval_diffuse_glossy_ratio(const BsdfEval *eval)
* to render buffers instead of using per-thread memory, and to avoid the
* impact of clamping on other contributions. */
-ccl_device_forceinline void kernel_accum_clamp(const KernelGlobals *kg, float3 *L, int bounce)
+ccl_device_forceinline void kernel_accum_clamp(ccl_global const KernelGlobals *kg,
+ ccl_private float3 *L,
+ int bounce)
{
#ifdef __KERNEL_DEBUG_NAN__
if (!isfinite3_safe(*L)) {