
Repository: github.com/marian-nmt/nccl.git
author     Nathan Luehr <nluehr@nvidia.com>  2016-01-21 05:19:45 +0300
committer  Przemek Tredak <ptredak@nvidia.com>  2016-01-22 00:00:21 +0300
commit     59663167711b70dabb52a426a9fca0d712e3cc95 (patch)
tree       1eb737503c3c63a0479ba2d02614ec09825cc814 /README.md
parent     130ee246e21d3f73c977eda496ac9c90c3aa520b (diff)
Added support for more than 8 GPUs.
Change-Id: Iaa1841036a7bfdad6ebec99fed0adcd2bbe6ffad
Reviewed-on: http://git-master/r/935459
Reviewed-by: Cliff Woolley <jwoolley@nvidia.com>
Tested-by: Przemek Tredak <ptredak@nvidia.com>
Diffstat (limited to 'README.md')
-rw-r--r--  README.md  |  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index e65e1ba..04f8fe1 100644
--- a/README.md
+++ b/README.md
@@ -4,7 +4,7 @@ Optimized primitives for collective multi-GPU communication.
## Introduction
-NCCL (pronounced "Nickel") is a stand-alone library of standard collective communication routines, such as all-gather, reduce, broadcast, etc., that have been optimized to achieve high bandwidth over PCIe. NCCL supports up to eight GPUs and can be used in either single- or multi-process (e.g., MPI) applications.
+NCCL (pronounced "Nickel") is a stand-alone library of standard collective communication routines, such as all-gather, reduce, broadcast, etc., that have been optimized to achieve high bandwidth over PCIe. NCCL supports an arbitrary number of GPUs installed in a single node and can be used in either single- or multi-process (e.g., MPI) applications.
## What's inside
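
The updated README wording describes NCCL as a library of collective routines (all-gather, reduce, broadcast, etc.) that a single process can drive across all GPUs in a node. For orientation, here is a minimal sketch of that single-process pattern using the public NCCL API (ncclCommInitAll, ncclAllReduce). It is illustrative only, not part of this commit: error checking, buffer initialization, and result verification are omitted, and argument types (e.g. the element count) vary slightly between NCCL releases.

```c
/* Minimal sketch: one process, one communicator per visible GPU,
 * in-place sum all-reduce of a float buffer on every device. */
#include <stdlib.h>
#include <cuda_runtime.h>
#include <nccl.h>

int main(void) {
    int ndev = 0;
    cudaGetDeviceCount(&ndev);

    ncclComm_t*   comms   = malloc(ndev * sizeof(ncclComm_t));
    cudaStream_t* streams = malloc(ndev * sizeof(cudaStream_t));
    float**       bufs    = malloc(ndev * sizeof(float*));
    int*          devs    = malloc(ndev * sizeof(int));
    const int count = 1 << 20;                      /* elements per GPU */

    /* Use every visible device; all communicators live in this process. */
    for (int i = 0; i < ndev; ++i) devs[i] = i;
    ncclCommInitAll(comms, ndev, devs);

    for (int i = 0; i < ndev; ++i) {
        cudaSetDevice(i);
        cudaStreamCreate(&streams[i]);
        cudaMalloc((void**)&bufs[i], count * sizeof(float));
    }

    /* Launch the collective on each device's stream. */
    for (int i = 0; i < ndev; ++i) {
        cudaSetDevice(i);
        ncclAllReduce(bufs[i], bufs[i], count, ncclFloat, ncclSum,
                      comms[i], streams[i]);
    }

    /* Wait for completion, then clean up. */
    for (int i = 0; i < ndev; ++i) {
        cudaSetDevice(i);
        cudaStreamSynchronize(streams[i]);
        cudaFree(bufs[i]);
        cudaStreamDestroy(streams[i]);
        ncclCommDestroy(comms[i]);
    }
    free(devs); free(bufs); free(streams); free(comms);
    return 0;
}
```

In a multi-process (e.g. MPI) setup the same collective calls apply, but each rank creates its own communicator with a shared unique ID instead of using ncclCommInitAll.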