Age | Commit message | Author
* If Preview Samples are set to 0 (unlimited) it now assumes 65536 instead of INT_MAX.
This doesn't affect regular sampling, you can still enter fixed values of 100k or whatever.
causes Blender crash
* Now OCL_init() returns error messages if the OpenCL library cannot be loaded.
character. Also remove some dead code (a return directly after another return).
would leak a little bit of memory for every window created.
system include.
an actual bug as far as I can tell.
* Make it more clear for the user what affects 3D View and Final render.
* Static / Dynamic BVH only affects viewport, BVH Cache only final. (see BlenderSync::get_scene_params)
buffers option: it requires specific tile sizes, and if they don't match what
OpenEXR expects, file saving can get stuck.
Now I've made support for this optional, with a bl_use_save_buffers property for
RenderEngine, set to False by default.
memory.
RGB color components gave non-grey results when you might not expect it.
What happens is that some of the color channels are zero in the direct light
pass because their channel is zero in the color pass. The direct light pass is
defined as lighting divided by the color pass, and we can't divide by zero. We
do a division after all samples are added together to ensure that multiplication
in the compositor gives the exact combined pass even with antialiasing, DoF, ..
Found a simple tweak here: instead of setting such channels to zero, it sets
them to the average of the other non-zero color channels, which makes the
results look like the expected grey.
Patch for Blender Internal made by Campbell.
Issue is caused by missing SSE flags for Clang compilers;
these flags were only set for GNU C compilers.
Added an if branch for Clang now, which contains the same
flags apart from -mfpmath=sse, because Clang was
warning that it's an unused argument.
OSX would probably need some further checks since it's
also using Clang. I've got no idea why it could have
worked for OSX before.
have been normalized to 0..1 range.
sampling.
switching between an integrated Intel and a dedicated NVidia card, to use the
dedicated card for Blender.
A more portable and general solution would be nice, but it's all I could find:
http://developer.download.nvidia.com/devzone/devcenter/gamegraphics/files/OptimusRenderingPolicies.pdf
running at the same time.
* Fix crash with negative values in Phong Ramp, and add some checks to survive INF and NAN values.
Patch by Brecht and myself.
* Some cleanup for casts.
changes.
* Assure SSE2 intrinsics are also used on SSE3 CPUs and x86.
* Reshuffle SSE #ifdefs to try to avoid compilation errors enabling SSE on 32 bit.
* Remove CUDA kernel launch size exception on Mac, is not needed.
* Make OSL file compilation quiet like c/cpp files.
* Avoid some unneeded int casts; they were only needed in the original Texture Nodes implementation, as custom1 and custom2 were shorts.
* kernel_sse2 was built without actual SSE2 intrinsics on x86 systems.
patch #35866 by Doug Gale to fix it.
button disappearing as soon as it's clicked. Workaround now is to make this an
operator. Thanks to Lukas and Campbell for tracking this down.
"Single Layer"
at the start, more clearly indicate what the render time of the last frame was, some
other tweaks for consistency.
texture coordinate that should automatically use the default normal or texture
coordinate appropriate for that node, rather than some fixed value specified by
the user.
on many platforms but is not assured everywhere.
the default, and by not setting it the user can override it with an environment
variable, for example:
export OSL_OPTIONS="optimize=0"
images with Open Shading Language enabled.
what looks like a compiler bug.
using a compiler older than CUDA 5.0, it will give a warning and skip this
architecture.
audio strip
way back to Pentium 4, using a slightly less efficient instruction.
Also ensure /Ox is used for Visual Studio for RelWithDebInfo builds.
* Add CUDA compiler version detection to cmake/scons/runtime
* Remove noinline in kernel_shader.h and reenable --use_fast_math if CUDA 5.x
is used, these were workarounds for CUDA 4.2 bugs
* Change max number of registers to 32 for sm 2.x (based on performance tests
from Martijn Berger and confirmed here), and also for NVidia OpenCL.
Overall it seems that with these changes and the latest CUDA 5.0 download,
performance is as good as or better than the 2.67b release with the scenes and
graphics cards I tested.
Compiler optimization was accidentally set to /Ox for debug build too.
Changed this to be /Od in Debug and /Ox in Release mode.
* Some tweaks to the material "Settings" panel.