
github.com/torch/torch.github.io.git
author    Soumith Chintala <soumith@fb.com>    2016-02-04 20:42:47 +0300
committer Soumith Chintala <soumith@fb.com>    2016-02-04 20:42:47 +0300
commit    71b9ada5037dde696897fb3032c175f82571e240 (patch)
tree      6f0bd8b03568e42a4423aefd17876a59fd69518c
parent    7f15ccdc9d4ba991a1c26111b0aa56e5e1953cf1 (diff)
minor changes (resnetblog)
-rw-r--r--  blog/_posts/2016-02-04-resnets.md | 3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/blog/_posts/2016-02-04-resnets.md b/blog/_posts/2016-02-04-resnets.md
index 1f5f63a..1747fc8 100644
--- a/blog/_posts/2016-02-04-resnets.md
+++ b/blog/_posts/2016-02-04-resnets.md
@@ -76,6 +76,7 @@ These experiments help verify the model’s correctness and uncover some interes
We trained variants of the 18, 34, 50, and 101-layer ResNet models on the ImageNet classification dataset.
What's notable is that we achieved error rates that were better than the published results by using a different data augmentation method.
+We are also training a 152-layer ResNet model, but the model has not finished converging at the time of this post.
We used the scale and aspect ratio augmentation described in "Going Deeper with Convolutions" instead of the scale augmentation described in the ResNet paper. With ResNet-34, this improved top-1 validation error by about 1.2 percentage points. We also used the color augmentation described in "Some Improvements on Deep Convolutional Neural Network Based Image Classification," but found that it had only a very small effect on ResNet-34.
@@ -147,7 +148,7 @@ The code for the CIFAR-10 ablation studies is at https://github.com/gcr/torch-res
# Pre-trained models
-We are releasing the ResNet-18, 34, 50 and 101 models for use by everyone in the community. We are hoping that this will help accelerate research in the community.
+We are releasing the ResNet-18, 34, 50 and 101 models for use by everyone in the community. We are hoping that this will help accelerate research in the community. We will release the 152-layer model when it finishes training.
The pre-trained models are available at https://github.com/facebook/fb.resnet.torch/tree/master/pretrained, which also [includes instructions for fine-tuning on your own datasets](https://github.com/facebook/fb.resnet.torch/tree/master/pretrained#fine-tuning-on-a-custom-dataset).
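The scale and aspect ratio augmentation from "Going Deeper with Convolutions" mentioned in the diff above can be sketched as follows. This is a minimal Python sketch, not the blog's actual training code (which is written in Torch/Lua in fb.resnet.torch); the function name, parameter defaults, and fallback behavior here are assumptions, with the 8%–100% area range and 3/4–4/3 aspect-ratio range taken from the GoogLeNet-style recipe.

```python
import math
import random

def sample_inception_crop(width, height, min_area=0.08, max_area=1.0,
                          min_ratio=3 / 4, max_ratio=4 / 3, max_attempts=10):
    """Sample a crop box (x, y, w, h) in GoogLeNet style: pick a random
    target area fraction and aspect ratio, derive the crop size, and place
    it at a random position. Falls back to a centered square if no sampled
    crop fits within the image after max_attempts tries."""
    area = width * height
    for _ in range(max_attempts):
        target_area = random.uniform(min_area, max_area) * area
        ratio = random.uniform(min_ratio, max_ratio)
        # For area A and aspect ratio r = w/h: w = sqrt(A*r), h = sqrt(A/r).
        w = int(round(math.sqrt(target_area * ratio)))
        h = int(round(math.sqrt(target_area / ratio)))
        if 0 < w <= width and 0 < h <= height:
            x = random.randint(0, width - w)
            y = random.randint(0, height - h)
            return x, y, w, h
    # Fallback: centered crop of the largest square that fits.
    side = min(width, height)
    return (width - side) // 2, (height - side) // 2, side, side
```

In a training pipeline the sampled region would then be resized to the network's input resolution (224x224 for these ImageNet models), so the network sees objects at varying scales and aspect ratios rather than only varying scales.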