-rw-r--r-- classllfio__v2__xxx_1_1dynamic__thread__pool__group.html | 304
1 file changed, 304 insertions, 0 deletions
diff --git a/classllfio__v2__xxx_1_1dynamic__thread__pool__group.html b/classllfio__v2__xxx_1_1dynamic__thread__pool__group.html
new file mode 100644
index 00000000..cebf3f57
--- /dev/null
+++ b/classllfio__v2__xxx_1_1dynamic__thread__pool__group.html
@@ -0,0 +1,304 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "https://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
+<meta http-equiv="X-UA-Compatible" content="IE=9"/>
+<meta name="generator" content="Doxygen 1.8.17"/>
+<meta name="viewport" content="width=device-width, initial-scale=1"/>
+<title>LLFIO: llfio_v2_xxx::dynamic_thread_pool_group Class Reference</title>
+<link href="tabs.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="jquery.js"></script>
+<script type="text/javascript" src="dynsections.js"></script>
+<link href="navtree.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="resize.js"></script>
+<script type="text/javascript" src="navtreedata.js"></script>
+<script type="text/javascript" src="navtree.js"></script>
+<link href="search/search.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="search/searchdata.js"></script>
+<script type="text/javascript" src="search/search.js"></script>
+<link href="doxygen.css" rel="stylesheet" type="text/css" />
+</head>
+<body>
+<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
+<div id="titlearea">
+<table cellspacing="0" cellpadding="0">
+ <tbody>
+ <tr style="height: 56px;">
+ <td id="projectalign" style="padding-left: 0.5em;">
+ <div id="projectname">LLFIO
+ &#160;<span id="projectnumber">v2.00 late beta</span>
+ </div>
+ </td>
+ </tr>
+ </tbody>
+</table>
+</div>
+<!-- end header part -->
+<!-- Generated by Doxygen 1.8.17 -->
+<script type="text/javascript">
+/* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&amp;dn=gpl-2.0.txt GPL-v2 */
+var searchBox = new SearchBox("searchBox", "search",false,'Search');
+/* @license-end */
+</script>
+<script type="text/javascript" src="menudata.js"></script>
+<script type="text/javascript" src="menu.js"></script>
+<script type="text/javascript">
+/* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&amp;dn=gpl-2.0.txt GPL-v2 */
+$(function() {
+ initMenu('',true,false,'search.php','Search');
+ $(document).ready(function() { init_search(); });
+});
+/* @license-end */</script>
+<div id="main-nav"></div>
+</div><!-- top -->
+<div id="side-nav" class="ui-resizable side-nav-resizable">
+ <div id="nav-tree">
+ <div id="nav-tree-contents">
+ <div id="nav-sync" class="sync"></div>
+ </div>
+ </div>
+ <div id="splitbar" style="-moz-user-select:none;"
+ class="ui-resizable-handle">
+ </div>
+</div>
+<script type="text/javascript">
+/* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&amp;dn=gpl-2.0.txt GPL-v2 */
+$(document).ready(function(){initNavTree('classllfio__v2__xxx_1_1dynamic__thread__pool__group.html',''); initResizable(); });
+/* @license-end */
+</script>
+<div id="doc-content">
+<!-- window showing the filter options -->
+<div id="MSearchSelectWindow"
+ onmouseover="return searchBox.OnSearchSelectShow()"
+ onmouseout="return searchBox.OnSearchSelectHide()"
+ onkeydown="return searchBox.OnSearchSelectKey(event)">
+</div>
+
+<!-- iframe showing the search results (closed by default) -->
+<div id="MSearchResultsWindow">
+<iframe src="javascript:void(0)" frameborder="0"
+ name="MSearchResults" id="MSearchResults">
+</iframe>
+</div>
+
+<div class="header">
+ <div class="summary">
+<a href="#nested-classes">Classes</a> &#124;
+<a href="#pub-methods">Public Member Functions</a> &#124;
+<a href="#pub-static-methods">Static Public Member Functions</a> &#124;
+<a href="#friends">Friends</a> &#124;
+<a href="classllfio__v2__xxx_1_1dynamic__thread__pool__group-members.html">List of all members</a> </div>
+ <div class="headertitle">
+<div class="title">llfio_v2_xxx::dynamic_thread_pool_group Class Reference<span class="mlabels"><span class="mlabel">abstract</span></span></div> </div>
+</div><!--header-->
+<div class="contents">
+
+<p>Work group within the global dynamic thread pool.
+ <a href="classllfio__v2__xxx_1_1dynamic__thread__pool__group.html#details">More...</a></p>
+
+<p><code>#include &quot;dynamic_thread_pool_group.hpp&quot;</code></p>
+<table class="memberdecls">
+<tr class="heading"><td colspan="2"><h2 class="groupheader"><a name="nested-classes"></a>
+Classes</h2></td></tr>
+<tr class="memitem:"><td class="memItemLeft" align="right" valign="top">class &#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classllfio__v2__xxx_1_1dynamic__thread__pool__group_1_1io__aware__work__item.html">io_aware_work_item</a></td></tr>
+<tr class="memdesc:"><td class="mdescLeft">&#160;</td><td class="mdescRight">A work item which paces when it next executes according to i/o congestion. <a href="classllfio__v2__xxx_1_1dynamic__thread__pool__group_1_1io__aware__work__item.html#details">More...</a><br /></td></tr>
+<tr class="separator:"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:"><td class="memItemLeft" align="right" valign="top">class &#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classllfio__v2__xxx_1_1dynamic__thread__pool__group_1_1work__item.html">work_item</a></td></tr>
+<tr class="memdesc:"><td class="mdescLeft">&#160;</td><td class="mdescRight">An individual item of work within the work group. <a href="classllfio__v2__xxx_1_1dynamic__thread__pool__group_1_1work__item.html#details">More...</a><br /></td></tr>
+<tr class="separator:"><td class="memSeparator" colspan="2">&#160;</td></tr>
+</table><table class="memberdecls">
+<tr class="heading"><td colspan="2"><h2 class="groupheader"><a name="pub-methods"></a>
+Public Member Functions</h2></td></tr>
+<tr class="memitem:ab59c09d197cc2ab310375d6e0b4f06f8"><td class="memItemLeft" align="right" valign="top">virtual result&lt; void &gt;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classllfio__v2__xxx_1_1dynamic__thread__pool__group.html#ab59c09d197cc2ab310375d6e0b4f06f8">submit</a> (span&lt; <a class="el" href="classllfio__v2__xxx_1_1dynamic__thread__pool__group_1_1work__item.html">work_item</a> * &gt; work) noexcept=0</td></tr>
+<tr class="memdesc:ab59c09d197cc2ab310375d6e0b4f06f8"><td class="mdescLeft">&#160;</td><td class="mdescRight">Threadsafe. Submit one or more work items for execution. Note that you can submit more later. <a href="classllfio__v2__xxx_1_1dynamic__thread__pool__group.html#ab59c09d197cc2ab310375d6e0b4f06f8">More...</a><br /></td></tr>
+<tr class="separator:ab59c09d197cc2ab310375d6e0b4f06f8"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:ac66e72ede37599df150ee8506a92dd66"><td class="memItemLeft" align="right" valign="top"><a id="ac66e72ede37599df150ee8506a92dd66"></a>
+result&lt; void &gt;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classllfio__v2__xxx_1_1dynamic__thread__pool__group.html#ac66e72ede37599df150ee8506a92dd66">submit</a> (<a class="el" href="classllfio__v2__xxx_1_1dynamic__thread__pool__group_1_1work__item.html">work_item</a> *wi) noexcept</td></tr>
+<tr class="memdesc:ac66e72ede37599df150ee8506a92dd66"><td class="mdescLeft">&#160;</td><td class="mdescRight">This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. <br /></td></tr>
+<tr class="separator:ac66e72ede37599df150ee8506a92dd66"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:a1a5a7e9924b9e428e77d4167e716f57c"><td class="memItemLeft" align="right" valign="top"><a id="a1a5a7e9924b9e428e77d4167e716f57c"></a>
+virtual bool&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classllfio__v2__xxx_1_1dynamic__thread__pool__group.html#a1a5a7e9924b9e428e77d4167e716f57c">stopped</a> () const noexcept=0</td></tr>
+<tr class="memdesc:a1a5a7e9924b9e428e77d4167e716f57c"><td class="mdescLeft">&#160;</td><td class="mdescRight">Threadsafe. True if all the work previously submitted is complete. <br /></td></tr>
+<tr class="separator:a1a5a7e9924b9e428e77d4167e716f57c"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:a26d88fd329e5c0e04739b3214831a887"><td class="memItemLeft" align="right" valign="top"><a id="a26d88fd329e5c0e04739b3214831a887"></a>
+virtual result&lt; void &gt;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classllfio__v2__xxx_1_1dynamic__thread__pool__group.html#a26d88fd329e5c0e04739b3214831a887">wait</a> (<a class="el" href="structllfio__v2__xxx_1_1deadline.html">deadline</a> d={}) const noexcept=0</td></tr>
+<tr class="memdesc:a26d88fd329e5c0e04739b3214831a887"><td class="mdescLeft">&#160;</td><td class="mdescRight">Threadsafe. Wait for work previously submitted to complete, returning any failures by any work item. <br /></td></tr>
+<tr class="separator:a26d88fd329e5c0e04739b3214831a887"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:a1f8d01ba540996392dd4c8d87b8c6f41"><td class="memTemplParams" colspan="2"><a id="a1f8d01ba540996392dd4c8d87b8c6f41"></a>
+template&lt;class Rep , class Period &gt; </td></tr>
+<tr class="memitem:a1f8d01ba540996392dd4c8d87b8c6f41"><td class="memTemplItemLeft" align="right" valign="top">result&lt; bool &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="classllfio__v2__xxx_1_1dynamic__thread__pool__group.html#a1f8d01ba540996392dd4c8d87b8c6f41">wait_for</a> (const std::chrono::duration&lt; Rep, Period &gt; &amp;duration) const noexcept</td></tr>
+<tr class="memdesc:a1f8d01ba540996392dd4c8d87b8c6f41"><td class="mdescLeft">&#160;</td><td class="mdescRight">This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. <br /></td></tr>
+<tr class="separator:a1f8d01ba540996392dd4c8d87b8c6f41"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:a00fdd6c7fb86e9b1e967a5a5f0305816"><td class="memTemplParams" colspan="2"><a id="a00fdd6c7fb86e9b1e967a5a5f0305816"></a>
+template&lt;class Clock , class Duration &gt; </td></tr>
+<tr class="memitem:a00fdd6c7fb86e9b1e967a5a5f0305816"><td class="memTemplItemLeft" align="right" valign="top">result&lt; bool &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="classllfio__v2__xxx_1_1dynamic__thread__pool__group.html#a00fdd6c7fb86e9b1e967a5a5f0305816">wait_until</a> (const std::chrono::time_point&lt; Clock, Duration &gt; &amp;timeout) const noexcept</td></tr>
+<tr class="memdesc:a00fdd6c7fb86e9b1e967a5a5f0305816"><td class="mdescLeft">&#160;</td><td class="mdescRight">This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. <br /></td></tr>
+<tr class="separator:a00fdd6c7fb86e9b1e967a5a5f0305816"><td class="memSeparator" colspan="2">&#160;</td></tr>
+</table><table class="memberdecls">
+<tr class="heading"><td colspan="2"><h2 class="groupheader"><a name="pub-static-methods"></a>
+Static Public Member Functions</h2></td></tr>
+<tr class="memitem:ab9e2295ae9773e218e21cd2cd28355bf"><td class="memItemLeft" align="right" valign="top">static const char *&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classllfio__v2__xxx_1_1dynamic__thread__pool__group.html#ab9e2295ae9773e218e21cd2cd28355bf">implementation_description</a> () noexcept</td></tr>
+<tr class="memdesc:ab9e2295ae9773e218e21cd2cd28355bf"><td class="mdescLeft">&#160;</td><td class="mdescRight">A textual description of the underlying implementation of this dynamic thread pool group. <a href="classllfio__v2__xxx_1_1dynamic__thread__pool__group.html#ab9e2295ae9773e218e21cd2cd28355bf">More...</a><br /></td></tr>
+<tr class="separator:ab9e2295ae9773e218e21cd2cd28355bf"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:a3c4fac496df18522877ed70f86613212"><td class="memItemLeft" align="right" valign="top"><a id="a3c4fac496df18522877ed70f86613212"></a>
+static size_t&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classllfio__v2__xxx_1_1dynamic__thread__pool__group.html#a3c4fac496df18522877ed70f86613212">current_nesting_level</a> () noexcept</td></tr>
+<tr class="memdesc:a3c4fac496df18522877ed70f86613212"><td class="mdescLeft">&#160;</td><td class="mdescRight">Returns the work item nesting level which would be used if a new dynamic thread pool group were created within the current work item. <br /></td></tr>
+<tr class="separator:a3c4fac496df18522877ed70f86613212"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:a1184eb72e54c2c1070056e95f582d1c2"><td class="memItemLeft" align="right" valign="top"><a id="a1184eb72e54c2c1070056e95f582d1c2"></a>
+static <a class="el" href="classllfio__v2__xxx_1_1dynamic__thread__pool__group_1_1work__item.html">work_item</a> *&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classllfio__v2__xxx_1_1dynamic__thread__pool__group.html#a1184eb72e54c2c1070056e95f582d1c2">current_work_item</a> () noexcept</td></tr>
+<tr class="memdesc:a1184eb72e54c2c1070056e95f582d1c2"><td class="mdescLeft">&#160;</td><td class="mdescRight">Returns the work item the calling thread is running within, if any. <br /></td></tr>
+<tr class="separator:a1184eb72e54c2c1070056e95f582d1c2"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:aac4c23e6b02acabeebac08955fe264f7"><td class="memItemLeft" align="right" valign="top"><a id="aac4c23e6b02acabeebac08955fe264f7"></a>
+static uint32_t&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classllfio__v2__xxx_1_1dynamic__thread__pool__group.html#aac4c23e6b02acabeebac08955fe264f7">ms_sleep_for_more_work</a> () noexcept</td></tr>
+<tr class="memdesc:aac4c23e6b02acabeebac08955fe264f7"><td class="mdescLeft">&#160;</td><td class="mdescRight">Returns the number of milliseconds that a thread is without work before it is shut down. Note that this will be zero everywhere except on Linux when using our local thread pool implementation, because the system controls this value on Windows, Grand Central Dispatch etc. <br /></td></tr>
+<tr class="separator:aac4c23e6b02acabeebac08955fe264f7"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:af3df91fd2d5b6e0036267142f0c5af4a"><td class="memItemLeft" align="right" valign="top">static uint32_t&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classllfio__v2__xxx_1_1dynamic__thread__pool__group.html#af3df91fd2d5b6e0036267142f0c5af4a">ms_sleep_for_more_work</a> (uint32_t v) noexcept</td></tr>
+<tr class="memdesc:af3df91fd2d5b6e0036267142f0c5af4a"><td class="mdescLeft">&#160;</td><td class="mdescRight">Sets the number of milliseconds that a thread is without work before it is shut down, returning the value actually set. <a href="classllfio__v2__xxx_1_1dynamic__thread__pool__group.html#af3df91fd2d5b6e0036267142f0c5af4a">More...</a><br /></td></tr>
+<tr class="separator:af3df91fd2d5b6e0036267142f0c5af4a"><td class="memSeparator" colspan="2">&#160;</td></tr>
+</table><table class="memberdecls">
+<tr class="heading"><td colspan="2"><h2 class="groupheader"><a name="friends"></a>
+Friends</h2></td></tr>
+<tr class="memitem:acd9883ca1a476119de857fac1601332a"><td class="memItemLeft" align="right" valign="top"><a id="acd9883ca1a476119de857fac1601332a"></a>
+class&#160;</td><td class="memItemRight" valign="bottom"><b>dynamic_thread_pool_group_impl</b></td></tr>
+<tr class="separator:acd9883ca1a476119de857fac1601332a"><td class="memSeparator" colspan="2">&#160;</td></tr>
+</table>
+<a name="details" id="details"></a><h2 class="groupheader">Detailed Description</h2>
+<div class="textblock"><p>Work group within the global dynamic thread pool. </p>
+<p>Some operating systems provide a per-process global kernel thread pool capable of dynamically adjusting its kernel thread count to how many of the threads in the pool are currently blocked. The platform chooses the exact strategy used, but as an example of one strategy, the pool might keep creating new kernel threads so long as the total number of threads currently running and not blocked on page faults, i/o or syscalls is below the hardware concurrency. Similarly, if more threads are running and not blocked than the hardware concurrency, kernel threads might be removed from executing work. Such a strategy dynamically increases concurrency until all CPUs are busy, but reduces concurrency if more work is being done than there are CPUs available.</p>
+<p>Such dynamic kernel thread pools are excellent for CPU bound processing: you simply fire and forget work into them. For i/o bound processing, however, you must be careful, as there are gotchas. For non-seekable i/o, it is quite possible to have 100k handles upon which we do i/o. Doing i/o on 100k handles using a dynamic thread pool would in theory cause the creation of 100k kernel threads, which would not be wise. A much better solution is to use an <code>io_multiplexer</code> to await changes in large sets of i/o handles.</p>
+<p>For seekable i/o, the same problem applies, but worse again: an i/o bound workload would cause a rapid increase in the number of kernel threads, which by definition makes the i/o even more congested. The system basically runs off into pathological performance loss. You must therefore never naively do i/o bound work (e.g. with memory mapped files) from within a dynamic thread pool without employing some mechanism to force concurrency downwards if the backing storage is congested.</p>
+<h2><a class="anchor" id="autotoc_md0"></a>
+Work groups</h2>
+<p>Instances of this class contain zero or more work items. Each work item is asked for its next item of work, and if an item of work is available, that item of work is executed by the global kernel thread pool at a time of its choosing. No single work item is EVER executed concurrently with itself; each work item always executes sequentially with respect to itself. The only concurrency possible is <em>across</em> work items. Therefore, if you want to execute the same piece of code concurrently, you need to submit a separate work item for each degree of concurrency you desire (e.g. <code>std::thread::hardware_concurrency()</code> of them).</p>
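<p>As a minimal illustration of the "one work item per degree of concurrency" pattern above (LLFIO's own types are elided here; this models the idea with plain standard C++ threads, so every name in the sketch is an assumption, not LLFIO API):</p>

```cpp
#include <thread>
#include <vector>

// Hypothetical stand-in for a work item: each instance is only ever run by
// one executor at a time, so it needs no internal locking.
struct shard_counter {
  long local_total = 0;  // touched by exactly one thread
  void run_once() { ++local_total; }
};

// To get N-way concurrency, submit N separate work items: one per thread here.
long count_to(long n) {
  unsigned width = std::thread::hardware_concurrency();
  if (width == 0) width = 4;  // fallback if the query is unavailable
  std::vector<shard_counter> items(width);
  std::vector<std::thread> pool;
  for (unsigned i = 0; i < width; ++i) {
    pool.emplace_back([&, i] {
      // Each "work item" processes a strided shard of [0, n).
      for (long j = i; j < n; j += width) items[i].run_once();
    });
  }
  for (auto &t : pool) t.join();
  long total = 0;
  for (auto &it : items) total += it.local_total;
  return total;
}
```

<p>The point is that each <code>shard_counter</code> is only ever touched by one thread, mirroring the sequential-per-work-item guarantee: concurrency comes from submitting several items, never from re-entering one.</p>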
+<p>You can have as many or as few items of work as you like. You can dynamically submit additional work items at any time, except while a group is in the process of being stopped. The group of work items can be waited upon to complete, after which the work group is reset as if freshly constructed. You can also stop executing all the work items in the group, even if they have not fully completed. If any work item returns a failure, this is equivalent to a <code>stop()</code>, and the next <code>wait()</code> will return that error.</p>
+<p>Work items may create sub work groups as part of their operation. If they do so, the work items from such nested work groups are scheduled preferentially. This ensures good forward progress: if you have 100 work items, each of which does another 100 work items, you don't get 10,000 slowly progressing work items. Rather, the work items in the first set progress slowly, whereas the work items in the second set progress quickly.</p>
+<p><code>work_item::next()</code> may optionally set a deadline to delay when that work item ought to be processed again. Deadlines can be relative or absolute.</p>
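<p>A sketch of the pacing idea from the paragraph above. The real interface is declared in <code>dynamic_thread_pool_group.hpp</code>; the <code>relative_deadline</code> struct and the <code>next()</code> signature below are simplified stand-ins for illustration, not LLFIO's actual declarations:</p>

```cpp
#include <cstdint>

// Simplified stand-in for a relative deadline: a delay in nanoseconds.
struct relative_deadline {
  std::int64_t nsecs = 0;  // 0 means "run again as soon as possible"
};

// A work item that hands out `total` units of work, asking the pool to
// re-run it no sooner than 1ms after each unit (i.e. it paces itself).
struct paced_work_item {
  int total = 0;
  int handed_out = 0;
  // Returns the next unit of work, or -1 when there is no more work.
  std::intptr_t next(relative_deadline &d) {
    if (handed_out >= total) return -1;
    d.nsecs = 1'000'000;  // delay the next execution by at least 1ms
    return handed_out++;
  }
};
```

<p>An absolute deadline would instead carry a time point; the choice between relative and absolute is exactly the flexibility the text above describes.</p>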
+<h2><a class="anchor" id="autotoc_md1"></a>
+C++ 23 Executors</h2>
+<p>As elsewhere in LLFIO, this being a low level facility, we don't implement <a href="https://wg21.link/P0443">https://wg21.link/P0443</a> Executors; however, it is trivially easy to implement a dynamic equivalent to <code>std::static_thread_pool</code> using this class.</p>
+<h2><a class="anchor" id="autotoc_md2"></a>
+Implementation notes</h2>
+<h3><a class="anchor" id="autotoc_md3"></a>
+Microsoft Windows</h3>
+<p>On Microsoft Windows, the Win32 thread pool API is used (<a href="https://docs.microsoft.com/en-us/windows/win32/procthread/thread-pool-api">https://docs.microsoft.com/en-us/windows/win32/procthread/thread-pool-api</a>). This is an IOCP-aware thread pool which dynamically increases the number of kernel threads until none are blocked. If the number of running kernel threads exceeds twice the number of CPUs in the system, the number of kernel threads is dynamically reduced. The maximum number of kernel threads which will run simultaneously is 500. Note that the Win32 thread pool is shared across the process by multiple Windows facilities.</p>
+<p>Note that the Win32 thread pool has built in support for IOCP, so if you have a custom i/o multiplexer, you can use the global Win32 thread pool to execute i/o completions handling. See <code>CreateThreadpoolIo()</code> for more.</p>
+<p>No dynamic memory allocation is performed by this implementation outside of the initial <code>make_dynamic_thread_pool_group()</code>. The Win32 thread pool API may perform dynamic memory allocation internally, but that is outside our control.</p>
+<p>Overhead of LLFIO above the Win32 thread pool API is very low, statistically unmeasurable.</p>
+<h3><a class="anchor" id="autotoc_md4"></a>
+POSIX</h3>
+<p>If not on Linux, you will need libdispatch, which is detected by LLFIO's cmake during configuration. libdispatch is better known as Grand Central Dispatch, originally a Mac OS technology but since ported to a high quality kernel based implementation on recent FreeBSDs, and to a lower quality userspace based implementation on Linux. Generally libdispatch should be found automatically on Mac OS without additional effort; on FreeBSD it may need installing from ports; on Linux you would need to explicitly install <code>libdispatch-dev</code> or the equivalent. You can force the use of libdispatch in cmake by setting the cmake variable <code>LLFIO_USE_LIBDISPATCH</code> to On.</p>
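<p>Concretely, forcing libdispatch looks like the following at configure time (the build directory layout here is an assumption; only the <code>LLFIO_USE_LIBDISPATCH</code> variable comes from the text above):</p>

```shell
# From a build directory alongside the LLFIO source checkout,
# force the libdispatch (Grand Central Dispatch) backend:
cmake -DLLFIO_USE_LIBDISPATCH=On ..
```
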
+<p>Overhead of LLFIO above the libdispatch API is very low, statistically unmeasurable.</p>
+<h3><a class="anchor" id="autotoc_md5"></a>
+Linux</h3>
+<p>On Linux only, we have a custom userspace implementation with superior performance. A similar strategy to Microsoft Windows' approach is used: we dynamically increase the number of kernel threads until none are sleeping awaiting i/o. If the number of running kernel threads exceeds the number of CPUs in the system by more than three, the number of kernel threads is dynamically reduced. Note that <b>all</b> the kernel threads for the current process are considered, not just the kernel threads created by this thread pool implementation. Therefore, if you have alternative thread pool implementations (e.g. OpenMP, <code>std::async</code>), those are also included in the dynamic adjustment.</p>
+<p>As this is wholly implemented by this library, dynamic memory allocation occurs in the initial <code>make_dynamic_thread_pool_group()</code> and per thread creation, but otherwise the implementation does not perform dynamic memory allocations.</p>
+<p>After multiple rewrites, I eventually got this custom userspace implementation to outperform both ASIO and libdispatch. For larger work items the difference between all three is negligible; however, for smaller work items I benchmarked this custom userspace implementation as beating (non-dynamic) ASIO by approx 29% and Linux libdispatch by approx 52% (note that Linux libdispatch appears to have a scale up bug when work items are small and few; it is often less than half the performance of LLFIO's custom implementation). </p>
+</div><h2 class="groupheader">Member Function Documentation</h2>
+<a id="ab9e2295ae9773e218e21cd2cd28355bf"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#ab9e2295ae9773e218e21cd2cd28355bf">&#9670;&nbsp;</a></span>implementation_description()</h2>
+
+<div class="memitem">
+<div class="memproto">
+<table class="mlabels">
+ <tr>
+ <td class="mlabels-left">
+ <table class="memname">
+ <tr>
+ <td class="memname">static const char* llfio_v2_xxx::dynamic_thread_pool_group::implementation_description </td>
+ <td>(</td>
+ <td class="paramname"></td><td>)</td>
+ <td></td>
+ </tr>
+ </table>
+ </td>
+ <td class="mlabels-right">
+<span class="mlabels"><span class="mlabel">inline</span><span class="mlabel">static</span><span class="mlabel">noexcept</span></span> </td>
+ </tr>
+</table>
+</div><div class="memdoc">
+
+<p>A textual description of the underlying implementation of this dynamic thread pool group. </p>
+<p>The current possible underlying implementations are:</p>
+<ul>
+<li>"Grand Central Dispatch" (Mac OS, FreeBSD, Linux)</li>
+<li>"Linux native" (Linux)</li>
+<li>"Win32 thread pool (Vista+)" (Windows)</li>
+</ul>
+<p>Which one is chosen depends on what was detected at cmake configure time, and possibly what the host OS running the program binary supports. </p>
+
+</div>
+</div>
+<a id="af3df91fd2d5b6e0036267142f0c5af4a"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#af3df91fd2d5b6e0036267142f0c5af4a">&#9670;&nbsp;</a></span>ms_sleep_for_more_work()</h2>
+
+<div class="memitem">
+<div class="memproto">
+<table class="mlabels">
+ <tr>
+ <td class="mlabels-left">
+ <table class="memname">
+ <tr>
+ <td class="memname">static uint32_t llfio_v2_xxx::dynamic_thread_pool_group::ms_sleep_for_more_work </td>
+ <td>(</td>
+ <td class="paramtype">uint32_t&#160;</td>
+ <td class="paramname"><em>v</em></td><td>)</td>
+ <td></td>
+ </tr>
+ </table>
+ </td>
+ <td class="mlabels-right">
+<span class="mlabels"><span class="mlabel">inline</span><span class="mlabel">static</span><span class="mlabel">noexcept</span></span> </td>
+ </tr>
+</table>
+</div><div class="memdoc">
+
+<p>Sets the number of milliseconds that a thread is without work before it is shut down, returning the value actually set. </p>
+<p>Note that this will have no effect (and thus return zero) everywhere except on Linux when using our local thread pool implementation, because the system controls this value on Windows, Grand Central Dispatch etc. </p>
+
+</div>
+</div>
+<a id="ab59c09d197cc2ab310375d6e0b4f06f8"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#ab59c09d197cc2ab310375d6e0b4f06f8">&#9670;&nbsp;</a></span>submit()</h2>
+
+<div class="memitem">
+<div class="memproto">
+<table class="mlabels">
+ <tr>
+ <td class="mlabels-left">
+ <table class="memname">
+ <tr>
+ <td class="memname">virtual result&lt;void&gt; llfio_v2_xxx::dynamic_thread_pool_group::submit </td>
+ <td>(</td>
+ <td class="paramtype">span&lt; <a class="el" href="classllfio__v2__xxx_1_1dynamic__thread__pool__group_1_1work__item.html">work_item</a> * &gt;&#160;</td>
+ <td class="paramname"><em>work</em></td><td>)</td>
+ <td></td>
+ </tr>
+ </table>
+ </td>
+ <td class="mlabels-right">
+<span class="mlabels"><span class="mlabel">pure virtual</span><span class="mlabel">noexcept</span></span> </td>
+ </tr>
+</table>
+</div><div class="memdoc">
+
+<p>Threadsafe. Submit one or more work items for execution. Note that you can submit more later. </p>
+<p>Note that if the group is currently stopping, you cannot submit more work until the group has stopped. An error code comparing equal to <code>errc::operation_canceled</code> is returned if you try. </p>
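<p>The stop-while-submitting contract above can be sketched with <code>std::error_code</code> as a stand-in for LLFIO's <code>result&lt;void&gt;</code> error payload (the <code>try_submit</code> name and its <code>bool</code> parameter are purely illustrative, not LLFIO API):</p>

```cpp
#include <system_error>

// Illustrative model of the documented contract: submitting to a group that
// is currently stopping fails with errc::operation_canceled.
std::error_code try_submit(bool group_is_stopping) {
  if (group_is_stopping) {
    return std::make_error_code(std::errc::operation_canceled);
  }
  return {};  // success: an empty error_code
}

// Caller-side pattern: compare the returned error against the documented code.
bool submit_was_rejected(std::error_code ec) {
  return ec == std::errc::operation_canceled;
}
```

<p>The caller-side comparison is the important part: the documentation promises an error <em>comparing equal</em> to <code>errc::operation_canceled</code>, not a specific error category.</p>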
+
+</div>
+</div>
+<hr/>The documentation for this class was generated from the following file:<ul>
+<li>include/llfio/v2.0/dynamic_thread_pool_group.hpp</li>
+</ul>
+</div><!-- contents -->
+</div><!-- doc-content -->
+<!-- start footer part -->
+<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
+ <ul>
+ <li class="navelem"><a class="el" href="namespacellfio__v2__xxx.html">llfio_v2_xxx</a></li><li class="navelem"><a class="el" href="classllfio__v2__xxx_1_1dynamic__thread__pool__group.html">dynamic_thread_pool_group</a></li>
+ <li class="footer">Generated by
+ <a href="http://www.doxygen.org/index.html">
+ <img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.8.17 </li>
+ </ul>
+</div>
+</body>
+</html>