- Add the use of DRWShaderLibrary to EEVEE's GLSL codebase to reduce code
complexity and duplication (see the sketch after this list).
- Split bsdf_common_lib.glsl into multiple sub-libraries which are now
shared with other engines.
- The surface shader code is now better organized and has its own files.
- Change the default world to use a material node tree and make the lookdev
shader clearer.
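A minimal sketch of the mechanism (the library file names below are
illustrative, not the exact split): a shader source declares its
dependencies with a pragma, and DRWShaderLibrary resolves each library
once at shader build time instead of its code being copy-pasted into
every engine's shaders:

  /* Each required library is injected once, in dependency order. */
  #pragma BLENDER_REQUIRE(common_math_lib.glsl)
  #pragma BLENDER_REQUIRE(common_view_lib.glsl)

  out vec4 fragColor;

  void main()
  {
    /* Functions from the required libraries are directly usable here. */
    fragColor = vec4(1.0);
  }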
Reviewed By: jbakker
Differential Revision: https://developer.blender.org/D8306
This revisits the render pipeline to support time slicing for better motion
blur.
We support accumulation with or without the post-process motion blur.
If the post-process is used, we reuse the last step's next-frame motion
data to avoid another scene re-evaluation.
This also adds support for hair motion blur, which is handled in a similar
way to mesh motion blur.
The total number of samples is distributed evenly across all time steps to
avoid sample weighting issues. For this reason, the sample count is
(internally) rounded up to the next multiple of the step count.
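A sketch of that rounding (this logic lives on the CPU side; it is written
here in GLSL-style integer arithmetic, and the names are hypothetical):

  /* Distribute the requested sample count over the motion blur steps.
   * Rounding up keeps every time step equally weighted. */
  int samples_per_step(int total_samples, int step_count)
  {
    /* Integer ceil division: e.g. 100 samples over 32 steps -> 4. */
    return (total_samples + step_count - 1) / step_count;
  }

  /* The effective total is samples_per_step * step_count,
   * e.g. 4 * 32 = 128 samples instead of the requested 100. */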
FX Motion Blur only: {F8632258}
FX Motion Blur + 4 time steps: {F8632260}
FX Motion Blur + 32 time steps: {F8632261}
Reviewed By: jbakker
Differential Revision: https://developer.blender.org/D8079
This adds object motion blur vectors for EEVEE, as well as better noise
reduction for the effect.
For TAA reprojection, we compute the motion vector on the fly from the
camera motion and the depth buffer. This makes it possible to store a
separate motion vector used only for the blurring, which is not useful for
TAA history fetching.
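A sketch of such an on-the-fly camera motion vector (the uniform names are
assumptions for illustration):

  uniform sampler2D depthBuffer;
  uniform mat4 currViewProjectionInverse;
  uniform mat4 prevViewProjection;

  /* Camera-induced motion for one pixel, reconstructed from depth only;
   * no per-object velocity is needed for TAA history fetching. */
  vec2 camera_motion_vector(vec2 uv)
  {
    float depth = texture(depthBuffer, uv).r;
    /* Reconstruct the world space position from NDC + depth. */
    vec4 ndc = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    vec4 world_pos = currViewProjectionInverse * ndc;
    world_pos /= world_pos.w;
    /* Reproject with the previous frame's matrix. */
    vec4 prev_ndc = prevViewProjection * world_pos;
    vec2 prev_uv = (prev_ndc.xy / prev_ndc.w) * 0.5 + 0.5;
    return uv - prev_uv;
  }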
Motion data is saved per object, and per geometry when using deformation
blur. We support deformation motion blur by saving the previous frame's
VBOs and modifying the geometry's actual GPUBatch to include these VBOs.
We store previous and next frame motion in the same motion vector buffer
(RG for previous, BA for next). This makes non-linear motion blur (like
rotating objects) less prone to outward/inward blur.
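A sketch of how the saved VBOs and the packed buffer fit together (the
attribute, uniform, and varying names are assumptions):

  /* Vertex stage: the GPUBatch is extended with the saved VBOs, exposed
   * as extra position attributes for the previous and next frames. */
  in vec3 pos;
  in vec3 prv;
  in vec3 nxt;

  uniform mat4 currModelViewProjectionMatrix;
  uniform mat4 prevModelViewProjectionMatrix;
  uniform mat4 nextModelViewProjectionMatrix;

  out vec4 currClip;
  out vec4 prevClip;
  out vec4 nextClip;

  void main()
  {
    currClip = currModelViewProjectionMatrix * vec4(pos, 1.0);
    prevClip = prevModelViewProjectionMatrix * vec4(prv, 1.0);
    nextClip = nextModelViewProjectionMatrix * vec4(nxt, 1.0);
    gl_Position = currClip;
  }

The fragment stage then packs both deltas into one RGBA target:

  in vec4 currClip;
  in vec4 prevClip;
  in vec4 nextClip;

  out vec4 motionOut;

  void main()
  {
    vec2 curr = currClip.xy / currClip.w;
    /* RG = motion relative to the previous frame, BA = to the next. */
    motionOut.xy = curr - prevClip.xy / prevClip.w;
    motionOut.zw = nextClip.xy / nextClip.w - curr;
  }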
We also improve the motion blur post-process to expand outside the objects'
borders. We use a tile-based approach, and the maximum blur size is set via
a new render setting.
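A sketch of the tile pass (the reduction and the names are illustrative):
each tile keeps its dominant motion vector so the gather pass can blur past
the moving object's silhouette:

  uniform sampler2D velocityBuffer; /* Full resolution packed motion. */
  uniform int tileSize;             /* Derived from the max blur setting. */

  out vec4 tileMaxOut;

  /* One invocation per tile: keep the longest previous (RG) and next (BA)
   * motion vectors found inside the tile. */
  void main()
  {
    ivec2 tile_corner = ivec2(gl_FragCoord.xy) * tileSize;
    vec4 max_motion = vec4(0.0);
    for (int i = 0; i < tileSize; i++) {
      for (int j = 0; j < tileSize; j++) {
        vec4 motion = texelFetch(velocityBuffer, tile_corner + ivec2(i, j), 0);
        if (dot(motion.xy, motion.xy) > dot(max_motion.xy, max_motion.xy)) {
          max_motion.xy = motion.xy;
        }
        if (dot(motion.zw, motion.zw) > dot(max_motion.zw, max_motion.zw)) {
          max_motion.zw = motion.zw;
        }
      }
    }
    tileMaxOut = max_motion;
  }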
We use a background reconstruction method that needs another setting
(Background Separation).
Sampling is done using a fixed set of 8 dithered samples per direction. The
final render samples will clear the noise, as with other stochastic effects.
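A sketch of the gather loop for one blur direction (the dither value and
the depth-based separation weight are simplified, and the names are
assumptions):

  uniform sampler2D colorBuffer;
  uniform sampler2D depthBuffer;
  uniform float backgroundSeparation; /* The new render setting. */

  #define MB_SAMPLES 8

  /* Accumulate 8 samples along one direction (previous or next motion).
   * The per-pixel dither (noise in [0..1]) turns banding into noise that
   * the final render samples average out. */
  vec4 gather_blur(vec2 uv, vec2 direction, float noise, float center_depth)
  {
    vec4 color_accum = vec4(0.0);
    float weight_accum = 0.0;
    for (int i = 0; i < MB_SAMPLES; i++) {
      float t = (float(i) + noise) / float(MB_SAMPLES);
      vec2 sample_uv = uv + direction * t;
      float sample_depth = texture(depthBuffer, sample_uv).r;
      /* Weight by depth difference so the reconstructed background does
       * not bleed through foreground objects. */
      float weight = clamp(
          1.0 - (center_depth - sample_depth) / backgroundSeparation, 0.0, 1.0);
      color_accum += texture(colorBuffer, sample_uv) * weight;
      weight_accum += weight;
    }
    return color_accum / max(weight_accum, 1e-8);
  }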
One caveat is that hair particles are not yet supported. Support will
come in another patch.
Reviewed By: jbakker
Differential Revision: https://developer.blender.org/D7297