author | Jianyu Huang <jianyuhuang@fb.com> | 2019-08-12 19:19:22 +0300
---|---|---
committer | Facebook Github Bot <facebook-github-bot@users.noreply.github.com> | 2019-08-12 19:25:22 +0300
commit | aceefe3e0cc59c6754c90d5f5ffe726666b1d0ac (patch) |
tree | fae9ceed76a484c591f2b94b44972d43406ef738 |
parent | 7b156071d8912dcf6711c88578c30f0f0d05d3a6 (diff) |
Update README.md with mentioning PyTorch (#116)
Summary:
As Title says.
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/116
Test Plan: CI
Differential Revision: D16747927
Pulled By: jianyuh
fbshipit-source-id: 6d60a12e11dad7da20ce0224de8bc611b2e44578
-rw-r--r-- | README.md | 6
1 file changed, 3 insertions(+), 3 deletions(-)
@@ -12,9 +12,9 @@ row-wise quantization and outlier-aware quantization.
 FBGEMM also exploits fusion opportunities in order to overcome the unique
 challenges of matrix multiplication at lower precision with bandwidth-bound
 operations.
 
-FBGEMM is used as a backend of Caffe2 quantized operators for x86 machines
-(https://github.com/pytorch/pytorch/tree/master/caffe2/quantization/server).
-We also plan to integrate FBGEMM into PyTorch.
+FBGEMM is used as a backend of Caffe2 and PyTorch quantized operators for x86 machines:
+* Caffe2: https://github.com/pytorch/pytorch/tree/master/caffe2/quantization/server
+* PyTorch: https://github.com/pytorch/pytorch/tree/master/aten/src/ATen/native/quantized/cpu
 
 ## Examples
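The diff above documents that FBGEMM serves as the x86 backend for PyTorch's quantized operators. As a minimal sketch of what that means in practice (assuming a PyTorch build with FBGEMM support, which is the default on x86), the backend can be selected and exercised like this:

```python
import torch

# Select FBGEMM as the engine for quantized kernels (default on x86 CPUs).
torch.backends.quantized.engine = "fbgemm"

# Quantize a float tensor to int8; operations on the quantized tensor
# dispatch to FBGEMM kernels on x86.
x = torch.randn(4, 4)
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.qint8)
print(torch.backends.quantized.engine)
```

The engine string and `quantize_per_tensor` call are the standard PyTorch quantization API; the tensor shape and quantization parameters here are arbitrary illustration values.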