
git.blender.org/blender.git
Age | Commit message | Author
2020-04-09 | TaskScheduler: Minor Preparations for TBB | Brecht Van Lommel
Tasks: move priority from task to task pool {rBf7c18df4f599fe39ffc914e645e504fcdbee8636}
Tasks: split task.c into task_pool.cc and task_iterator.c {rB4ada1d267749931ca934a74b14a82479bcaa92e0}

Differential Revision: https://developer.blender.org/D7385
2019-11-26 | BLI_task: Add pooled threaded index range iterator, Take II. | Bastien Montagne
This code allows pushing a set of different operations, all based on iterations over a range of indices, and then processing them all at once over multiple threads.

This commit also adds unit tests for both the old un-pooled and the new pooled `task_parallel_range` family of functions, as well as some basic performance tests. As expected, this is mainly interesting for a relatively low number of individual tasks.

E.g. performance tests on a 32-thread machine, for a set of 10 different tasks, show the following improvements when using the pooled version instead of ten sequential calls to `BLI_task_parallel_range()`:

| Num Items | Sequential | Pooled  | Speed-up |
| --------- | ---------- | ------- | -------- |
| 10K       | 365 us     | 138 us  | 2.5 x    |
| 100K      | 877 us     | 530 us  | 1.66 x   |
| 1000K     | 5521 us    | 4625 us | 1.25 x   |

Differential Revision: https://developer.blender.org/D6189

Note: Compared to yesterday's commit, this reworks atomic handling in the parallel iterator code and fixes a dummy double-free bug. We now only use the two critical values from atomic call results for synchronization, which is the proper way to do things. Reading a value after an atomic operation does not guarantee you will get the latest value in all cases (especially on Windows release builds, it seems).
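For readers unfamiliar with the API, a rough usage sketch of the pooled iterator follows. The `BLI_task_parallel_range_pool_*` names and signatures are inferred from this commit message rather than copied from BLI_task.h, so treat them as assumptions.

```c
/* Sketch only: the range-pool function names/signatures are assumed from the
 * commit description, not verified against BLI_task.h. */
#include "BLI_task.h"

static void op_a(void *userdata, const int i, const TaskParallelTLS *tls)
{
  float *values = userdata;
  values[i] += 1.0f;
  (void)tls;
}

static void op_b(void *userdata, const int i, const TaskParallelTLS *tls)
{
  float *values = userdata;
  values[i] *= 0.5f;
  (void)tls;
}

static void run_pooled(float *data_a, int len_a, float *data_b, int len_b)
{
  TaskParallelSettings settings;
  BLI_parallel_range_settings_defaults(&settings);

  /* Instead of two sequential BLI_task_parallel_range() calls, both ranges
   * are pushed into one pool and their chunks are scheduled together. */
  TaskParallelRangePool *range_pool = BLI_task_parallel_range_pool_init(&settings);
  BLI_task_parallel_range_pool_push(range_pool, 0, len_a, data_a, op_a, &settings);
  BLI_task_parallel_range_pool_push(range_pool, 0, len_b, data_b, op_b, &settings);
  BLI_task_parallel_range_pool_work_and_wait(range_pool);
  BLI_task_parallel_range_pool_free(range_pool);
}
```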
2019-11-25Revert "BLI_task: Add pooled threaded index range iterator."Bastien Montagne
This reverts commit f9028a3be1f77c01edca44a68894e2ba9d9cfb14. This is giving weird heisenbug crash on only Windows release builds... Reverting until we understand to issue.
2019-11-25Revert "Cleanup: Unused variable in release build mode"Bastien Montagne
This reverts commit e0cada951982093453a91b80342ce20c4f421fc8.
2019-11-25 | Cleanup: Unused variable in release build mode | Sergey Sharybin
Thanks Bastien for code review!
2019-11-25 | BLI_task: Add pooled threaded index range iterator. | Bastien Montagne
This code allows pushing a set of different operations, all based on iterations over a range of indices, and then processing them all at once over multiple threads.

This commit also adds unit tests for both the old un-pooled and the new pooled `task_parallel_range` family of functions, as well as some basic performance tests. As expected, this is mainly interesting for a relatively low number of individual tasks.

E.g. performance tests on a 32-thread machine, for a set of 10 different tasks, show the following improvements when using the pooled version instead of ten sequential calls to `BLI_task_parallel_range()`:

| Num Items | Sequential | Pooled  | Speed-up |
| --------- | ---------- | ------- | -------- |
| 10K       | 365 us     | 138 us  | 2.5 x    |
| 100K      | 877 us     | 530 us  | 1.66 x   |
| 1000K     | 5521 us    | 4625 us | 1.25 x   |

Differential Revision: https://developer.blender.org/D6189
2019-11-20 | Cleanup: comments | Campbell Barton
2019-10-30 | BLI_task: Add new generic `BLI_task_parallel_iterator()`. | Bastien Montagne
This new function is part of the 'parallel for loops' family. It takes an iterator callback to generate the items to be processed, in addition to the usual 'process' callback. This allows using the common BLI_task code for a wide range of custom iterators, without having to re-invent the wheel of tasks & data-chunk handling. It supports all the settings features of `BLI_task_parallel_range()`, including dynamic and static scheduling (the latter if the total number of items is known), TLS data and its finalize callback, etc.

One open question is whether we should provide user code with a spinlock by default, or force it to always handle its own synchronization mechanism. I kept it, since imho it will be needed very often, and generating one is pretty cheap even if unused.

----------

Additionally, this commit converts the (currently unused) `BLI_task_parallel_listbase()` to use that generic code. This was done mostly as a proof of concept, but performance-wise it shows some interesting data, roughly:

- Very light processing (that should not be threaded anyway) is several times slower, which is expected due to more overhead in the loop-management code.
- Heavier processing can be up to 10% quicker (probably thanks to the switch from dynamic to static scheduling, which greatly reduces the locking needed to fill in the per-task chunks of data). A similar speed-up in the non-threaded case comes as a surprise though; not sure what explains that.

While this conversion is not really needed, imho we should keep it (instead of the existing code for that function); it's easier to have complex handling logic in as few places as possible, both for maintaining and for improving it.

Note: this work was initially done to make D5372 possible... Unfortunately that one proved to be no better than the original code performance-wise.

Reviewed By: sergey

Differential Revision: https://developer.blender.org/D5371
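To make the 'iterator callback + process callback' split concrete, here is a sketch that walks a singly linked list. The callback shapes and the `BLI_task_parallel_iterator()` parameter order are assumptions based on this description, not a verified copy of the header.

```c
/* Sketch only: callback signatures and the parameter order of
 * BLI_task_parallel_iterator() are assumed, not taken from BLI_task.h. */
#include <stdbool.h>
#include "BLI_task.h"

typedef struct DemoNode {       /* Minimal stand-in for a linked-list item. */
  struct DemoNode *next;
  float value;
} DemoNode;

/* Iterator callback: produce the next item (NULL ends the iteration).
 * Called one item at a time, under the scheduler's own synchronization. */
static void node_iter(void *userdata, const TaskParallelTLS *tls,
                      void **r_next_item, int *r_next_index, bool *r_do_abort)
{
  DemoNode **cursor = userdata;
  *r_next_item = *cursor;
  if (*cursor != NULL) {
    *cursor = (*cursor)->next;
    (*r_next_index)++;
  }
  (void)tls;
  (void)r_do_abort;
}

/* Process callback: run from worker threads for each generated item. */
static void node_process(void *userdata, void *item, int index, const TaskParallelTLS *tls)
{
  DemoNode *node = item;
  node->value *= 2.0f;
  (void)userdata; (void)index; (void)tls;
}

static void double_all(DemoNode *head, const int tot_items)
{
  TaskParallelSettings settings;
  BLI_parallel_range_settings_defaults(&settings);
  DemoNode *cursor = head;
  /* Initial item/index and parameter order are assumed here. */
  BLI_task_parallel_iterator(&cursor, node_iter, NULL, 0, tot_items, node_process, &settings);
}
```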
2019-09-18 | BLI_tasks: simplify/generalize heuristic computing default chunk size. | Bastien Montagne
That code is simpler and more general (not limited to specific thread-count values). It still gives a similar default chunk size to what we had before, but handles smoother increase steps and higher numbers of threads by continuing to increase the chunk size. No functional changes expected from this commit.
2019-09-07 | Cleanup: use post increment/decrement | Campbell Barton
When the result isn't used, prefer post increment/decrement (already used nearly everywhere in Blender).
2019-09-05 | Mesh Batch Cache: Fix threading issue | Jacques Lucke
I believe the crash I experienced happened because:

1. The `extract_pos_nor_init` function is called.
2. Tasks are added to the task pool for `extract_pos_nor`.
3. The tasks begin to be executed while more tasks are added.
4. In some rare cases, all existing tasks are finished, but not all have been added yet.
5. This lets the task counter go down to zero.
6. This triggers a call to `extract_pos_nor_finish`.
7. Then more tasks are added, and in the end `extract_pos_nor_finish` is called again.

A solution is to use a task pool that is suspended when created. Unfortunately, there was an outdated comment that was probably the root cause of the issue.

Reviewers: fclem, sergey

Differential Revision: https://developer.blender.org/D5680
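A sketch of the suspended-pool pattern the fix relies on; the `BLI_task_pool_*` signatures below follow this era of the API as best recalled (per-task priority, a thread-id argument in the run callback) and should be read as assumptions, not the actual extractor code.

```c
/* Sketch of the suspended-pool pattern; signatures are recalled from this
 * era of BLI_task.h and may not match exactly. */
#include "BLI_task.h"

static void extract_chunk(TaskPool *pool, void *taskdata, int threadid)
{
  /* ... fill one chunk of the vertex buffer ... */
  (void)pool; (void)taskdata; (void)threadid;
}

static void schedule_extraction(TaskScheduler *scheduler, void *shared_data, int num_chunks)
{
  /* A pool created suspended runs nothing until work_and_wait(), so the
   * pending-task counter cannot reach zero while pushes are still in
   * flight (the race described in the steps above). */
  TaskPool *pool = BLI_task_pool_create_suspended(scheduler, shared_data);

  for (int i = 0; i < num_chunks; i++) {
    BLI_task_pool_push(pool, extract_chunk, shared_data, false, TASK_PRIORITY_HIGH);
  }

  BLI_task_pool_work_and_wait(pool); /* All tasks are queued; execution starts here. */
  BLI_task_pool_free(pool);
}
```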
2019-08-11 | Cleanup: spelling | Campbell Barton
2019-07-30 | BLI_task: Cleanup: rename some structs to make them more generic. | Bastien Montagne
TLS and Settings can be used by other types of parallel 'for loops', so removing 'Range' from their names. No functional changes expected here.
2019-07-30 | BLI_task: tweak default chunk size for `BLI_task_parallel_range()`. | Bastien Montagne
Previously we were setting it to 1 (aka no 'chunking'), to follow previous behavior. However, this is far from optimal, especially with CPUs that can have tens of threads nowadays.

Now we take a heuristic approach (inspired by the one already existing for `BLI_task_parallel_listbase()`), which tries to guesstimate the best chunk size based on several factors (number of threads/parallel tasks, total number of items, ...). I think this is a reasonable baseline; more optimization here would of course be possible.

Note that code that was already explicitly setting some value here won't be affected at all by this change.
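The commit does not reproduce its formula here, so the snippet below is only a hypothetical heuristic of the same general shape (guess a chunk size from item count and thread count), not Blender's actual code.

```c
/* Purely illustrative chunk-size guess; not the heuristic from this commit. */
static int guess_default_chunk_size(const int tot_items, const int num_threads)
{
  /* Aim for several chunks per thread so dynamic scheduling can balance
   * uneven work, while keeping per-chunk scheduling overhead low. */
  const int target_chunks = num_threads * 4;
  const int chunk_size = tot_items / target_chunks;
  return chunk_size > 1 ? chunk_size : 1;
}
```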
2019-05-26 | Fix: BLI_task_test deadlock on Windows. | Ray Molenkamp
This patch makes BLI_task_scheduler_create wait for all worker threads to have started before returning to the caller. For very short workloads (BLI_task_test) there is a chance that the worker threads have not fully started yet, and the main thread is calling pthread_join at the same time as pthread_setspecific is being called on the worker threads, which causes a deadlock on pthreads4w.

Differential Revision: https://developer.blender.org/D4936

Reviewed By: mont29, sergey, brecht
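The general "wait until every worker has signalled that it finished starting up" pattern looks roughly like the generic pthreads sketch below; it illustrates the idea only and is not the actual BLI_task_scheduler_create code.

```c
#include <pthread.h>

typedef struct StartupSync {
  pthread_mutex_t mutex;
  pthread_cond_t cond;
  int num_started;
  int num_expected;
} StartupSync;

/* Called by each worker right after its own set-up (e.g. after
 * pthread_setspecific()), before it enters the work loop. */
static void worker_signal_started(StartupSync *sync)
{
  pthread_mutex_lock(&sync->mutex);
  sync->num_started++;
  pthread_cond_signal(&sync->cond);
  pthread_mutex_unlock(&sync->mutex);
}

/* Called by the scheduler-creating thread before it returns the scheduler,
 * so callers can never join/tear down while workers are still starting. */
static void wait_for_all_workers(StartupSync *sync)
{
  pthread_mutex_lock(&sync->mutex);
  while (sync->num_started < sync->num_expected) {
    pthread_cond_wait(&sync->cond, &sync->mutex);
  }
  pthread_mutex_unlock(&sync->mutex);
}
```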
2019-04-21 | Cleanup: comments (long lines) in blenlib | Campbell Barton
2019-04-17 | ClangFormat: apply to source, most of intern | Campbell Barton
Apply clang format as proposed in T53211. For details on usage and instructions for migrating branches without conflicts, see: https://wiki.blender.org/wiki/Tools/ClangFormat
2019-03-27 | Cleanup: style, use braces for blenlib | Campbell Barton
2019-03-08 | Cleanup: spelling | Campbell Barton
2019-02-18 | doxygen: add newline after \file | Campbell Barton
While \file doesn't need an argument, it can't have another doxy command after it.
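The resulting file-header style, for reference:

```c
/** \file
 * \ingroup bli
 */
```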
2019-02-06 | Cleanup: remove redundant doxygen \file argument | Campbell Barton
Move \ingroup onto the same line to be more compact and make it clear the file is in the group.
2019-02-01 | Cleanup: remove redundant, invalid info from headers | Campbell Barton
BF-admins agree to remove header information that isn't useful, to reduce noise.

- BEGIN/END license blocks
  Developers should add non-license comments as separate comment blocks. No need for separator text.
- Contributors
  This is often invalid, outdated or misleading, especially when splitting files. It's more useful to use git-blame to find out who developed the code.

See P901 for a script to perform these edits.
2019-01-15 | Cleanup: comment line length (blenlib) | Campbell Barton
Prevents clang-format wrapping text before comments.
2018-12-12 | Merge branch 'master' into blender2.8 | Campbell Barton
2018-12-12 | Cleanup: use colon separator after parameter | Campbell Barton
Helps separate variable names from descriptive text. This was already used in some parts of the code; double spaces and dashes were used elsewhere.
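A short example of the colon style (the function and parameter names here are just placeholders):

```c
/**
 * Copy settings between scenes.
 *
 * \param scene_src: The scene to read settings from.
 * \param scene_dst: The scene to write settings into.
 * \param flag: Options controlling which settings are copied.
 */
```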
2018-12-04 | BLI_task: fix queue in work_and_wait, and support resetting. | Alexander Gavrilov
To make the pool more usable for running multiple stages of tasks, fix local queue handling in BLI_task_pool_work_and_wait. Specifically, after the wait loop the local queue should be empty, or the wait part of the function contract isn't fulfilled. Instead, check and run any tasks in the queue before the wait loop. Also, add a new function that resets the suspended state of the pool.
2018-12-04 | Merge branch 'master' into blender2.8 | Sergey Sharybin
2018-12-04 | Cleanup: Spelling | Sergey Sharybin
2018-11-20 | Merge branch 'master' into blender2.8 | Sergey Sharybin
2018-11-20 | Task scheduler: Optimize parallel loop over lists | Sergey Sharybin
The goal is to address a performance regression when going from a few threads to tens of threads. On systems with more than 32 CPU threads the threaded loop was actually harmful.

There are the following tweaks now:

- The chunk size is adaptive to the number of threads, which minimizes scheduling overhead.
- The number of tasks is adaptive to the list size and chunk size.

Performance comparison on the production shot:

| Number of threads | DEG time before | DEG time after |
| ----------------- | --------------- | -------------- |
| 44                | 0.09            | 0.02           |
| 32                | 0.055           | 0.025          |
| 16                | 0.025           | 0.025          |
| 8                 | 0.035           | 0.033          |
2018-11-08 | Cleanup, spelling | Sergey Sharybin
2018-09-02 | Cleanup: comment blocks | Campbell Barton
2018-09-02 | Cleanup: comment blocks | Campbell Barton
2018-03-13 | Cleanup: doxygen comments | Campbell Barton
2018-02-15 | Cleanup: rename BLI_thread.h API | Campbell Barton
- Use BLI_threadpool_ prefix for (deprecated) thread/listbase API.
- Use BLI_thread as prefix for other functions.

See P614 to apply instead of manually resolving conflicts.
2018-02-15 | Cleanup: use '_len' instead of '_size' w/ BLI API | Campbell Barton
- When returning the number of items in a collection use BLI_*_len().
- Keep _size() for size in bytes.
- Keep _count() for data structures that don't store length (hinting this isn't a simple getter).

See P611 to apply instead of manually resolving conflicts.
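For illustration, a couple of calls believed to follow the convention (the specific functions are examples, not an exhaustive or verified list):

```c
#include "BLI_ghash.h"
#include "BLI_listbase.h"

static void report_counts(GHash *gh, ListBase *lb)
{
  /* Number of items in a collection: *_len(). */
  const unsigned int num_entries = BLI_ghash_len(gh);

  /* *_count() hints the structure does not store its length
   * (the list is walked, so this is not a simple getter). */
  const int num_links = BLI_listbase_count(lb);

  /* *_size() stays reserved for sizes in bytes. */
  (void)num_entries;
  (void)num_links;
}
```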
2018-01-11 | Cleanup typo in comment. | Bastien Montagne
2018-01-10 | Task scheduler: Use more const qualifiers | Sergey Sharybin
2018-01-09 | Task scheduler: Use single thread branch when range fits into single chunk | Sergey Sharybin
2018-01-09 | Task scheduler: Fix wrong tasks calculation when chunk size is too big | Sergey Sharybin
2018-01-09 | Task scheduler: Use const qualifiers in parallel range | Sergey Sharybin
2018-01-09 | Task scheduler: Avoid over-allocation of tasks for parallel ranges | Sergey Sharybin
This seems to only cause extra threading overhead on systems with tens of threads, without actually solving anything.
2018-01-09 | Task scheduler: Add minimum number of iterations per thread in parallel range | Sergey Sharybin
The idea is to support the following: allow doing a parallel-for on a small range, each iteration of which takes lots of compute power, but limit such a range to a subset of threads. For example, on a machine with 44 threads we can occupy 4 threads to handle a range of 64 elements, 16 elements per thread, where each block of 16 elements is very complex to compute.

The idea should be to use this setting instead of the global use_threading flag, which is only based on the size of the array. Proper use of the new flag will improve threadability.

This commit only contains internal task scheduler changes; this setting is not used yet by any areas.
2018-01-09 | Task scheduler: Simplify parallel range function | Sergey Sharybin
Basically, split it up and avoid an extra abstraction level.
2018-01-09 | Task scheduler: Use single parallel range function with more flexible function | Sergey Sharybin
Now all the fine-tuning happens through a parallel range settings structure, which avoids passing long lists of arguments, allows the fine-tuning to be extended further, and avoids having lots of different functions which basically do the same thing.
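A sketch of the settings-structure style in use. It is written with the later struct names (`TaskParallelSettings`/`TaskParallelTLS`, see the 2019-07-30 rename above; at the time of this commit they still carried 'Range' in their names), and the `min_iter_per_thread` field name is an assumption tied to the "minimum number of iterations per thread" entry above.

```c
#include "BLI_task.h"

static void halve_cb(void *userdata, const int i, const TaskParallelTLS *tls)
{
  float *values = userdata;
  values[i] *= 0.5f;
  (void)tls;
}

static void halve_all(float *values, const int len)
{
  TaskParallelSettings settings;
  BLI_parallel_range_settings_defaults(&settings);
  /* Fine-tuning lives in the struct instead of extra function arguments. */
  settings.use_threading = (len > 1024);
  settings.min_iter_per_thread = 256; /* Field name assumed, see note above. */
  BLI_task_parallel_range(0, len, values, halve_cb, &settings);
}
```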
2018-01-09 | Task scheduler: Get rid of extended version of parallel range callback | Sergey Sharybin
Wrap all arguments into a TLS type of argument. This avoids some branching and also makes it easier to extend things in the future.
2017-12-22 | Task scheduler: Clarify why do we need an atomic add of 0 | Sergey Sharybin
2017-12-22 | Task scheduler: Start with suspended pool to avoid threading overhead on push | Sergey Sharybin
The idea is to avoid any threading overhead when we start pushing tasks in a loop, similarly to how we do it from the new dependency graph. Gives a couple percent of speedup here, but also improves scalability.
2017-11-23 | Add a new parallel looper for MemPool items to BLI_task. | Bastien Montagne
It merely uses the new thread-safe iterator system of mempool, quite straightforward.

Note that to avoid possible confusion with two void pointers as parameters of the callback, a dummy opaque struct pointer is used instead for the second parameter (the pointer generated by iteration over the mempool); callback functions must explicitly convert it to the expected real type.

Also added a basic gtest for this new feature.
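A sketch of how the new looper is meant to be called; the `BLI_task_parallel_mempool()` signature and the `MempoolIterData` opaque type are recalled from this era and should be treated as assumptions.

```c
/* Sketch only: signature and opaque iterator type are recalled, not verified. */
#include "BLI_mempool.h"
#include "BLI_task.h"

typedef struct MyElem {
  int flag;
} MyElem;

/* The second parameter is the opaque per-item pointer produced by the
 * mempool iteration; it must be cast back to the real element type. */
static void clear_flag_cb(void *userdata, MempoolIterData *iter)
{
  MyElem *elem = (MyElem *)iter;
  elem->flag = 0;
  (void)userdata;
}

static void clear_all_flags(BLI_mempool *elem_pool)
{
  BLI_task_parallel_mempool(elem_pool, NULL, clear_flag_cb, true);
}
```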
2017-11-23 | Cleanup: use signed atomic ops when needed. | Bastien Montagne