github.com/marian-nmt/nccl.git
commit 5d4716a8a38da8563911d5a2a8f0bbbad19b7ca5
tree 02e71633c0b85412f69dbe5902b5f2cb49c05b18
parent aa8f669a3da902c2feb9eb3ca5e0af9ab8e5b713
author Sylvain Jeaugey <sjeaugey@nvidia.com> 2016-06-15 20:53:43 +0300
committer Sylvain Jeaugey <sjeaugey@nvidia.com> 2016-06-15 20:54:19 +0300

    Include link to blog post in README.md

 README.md | 1 +
 1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/README.md b/README.md
index e8c1e95..05ca7ae 100644
--- a/README.md
+++ b/README.md
@@ -5,6 +5,7 @@ Optimized primitives for collective multi-GPU communication.
## Introduction
NCCL (pronounced "Nickel") is a stand-alone library of standard collective communication routines, such as all-gather, reduce, broadcast, etc., that have been optimized to achieve high bandwidth over PCIe. NCCL supports an arbitrary number of GPUs installed in a single node and can be used in either single- or multi-process (e.g., MPI) applications.
+[This blog post](https://devblogs.nvidia.com/parallelforall/fast-multi-gpu-collectives-nccl/) provides details on NCCL functionality, goals, and performance.
## What's inside