github.com/moses-smt/vowpal_wabbit.git

author    Paul Mineiro <paul-github@mineiro.com>  2013-12-23 00:18:50 +0400
committer Paul Mineiro <paul-github@mineiro.com>  2013-12-23 00:18:50 +0400
commit    109de502f575a7caf8ea9f71eaf7500725616eef (patch)
tree      f7991bf47958b31eb8346deefc2ed1200d7e273d /demo
parent    a0701f50c9d5be3b0d4ffdb54e8d2d7e5f094125 (diff)

movielens demo
Diffstat (limited to 'demo')
 -rw-r--r--  demo/movielens/Makefile   | 13
 -rwxr-xr-x  demo/movielens/README.md  |  6
 2 files changed, 11 insertions(+), 8 deletions(-)
diff --git a/demo/movielens/Makefile b/demo/movielens/Makefile
index 58063571..9fb80147 100644
--- a/demo/movielens/Makefile
+++ b/demo/movielens/Makefile
@@ -25,7 +25,10 @@ ml-%/ratings.dat: ml-%.zip
ml-%.ratings.train.vw: ml-%/ratings.dat
@echo -n "preprocessing movielens $* ..."
@./ratings2vw ml-$*.ratings.pre.train.vw ml-$*.ratings.test.vw $<
- @sort -R ml-$*.ratings.pre.train.vw > ml-$*.ratings.train.vw
+ @perl -ne 'BEGIN { srand 8675309; }; \
+ print join "\t", rand (), $$_;' \
+ ml-$*.ratings.pre.train.vw | sort -k1 | \
+ cut -f2- > ml-$*.ratings.train.vw
@rm -f ml-$*.ratings.pre.train.vw
@echo " complete"
@@ -65,12 +68,12 @@ lrq.results: ml-1m.ratings.test.vw ml-1m.ratings.train.vw
@echo "*********************************************************"
@echo "* training low-rank interaction model (without dropout) *"
@echo "* *"
- @echo "* vw --lrq um5 ... *"
+ @echo "* vw --lrq um7 ... *"
@echo "*********************************************************"
@echo
@${VW} --loss_function quantile -l 0.1 -b 24 --passes 100 \
-k --cache_file $@.cache -d $(word 2,$+) --holdout_off \
- --l2 1e-6 --lrq um5 --adaptive --invariant -f $@.model
+ --l2 1e-6 --lrq um7 --adaptive --invariant -f $@.model
@echo "********************************************************"
@echo "* testing low-rank interaction model (without dropout) *"
@echo "********************************************************"
@@ -89,12 +92,12 @@ lrqdropout.results: ml-1m.ratings.test.vw ml-1m.ratings.train.vw
@echo "******************************************************"
@echo "* training low-rank interaction model (with dropout) *"
@echo "* *"
- @echo "* vw --lrq um10 --lrqdropout ... *"
+ @echo "* vw --lrq um14 --lrqdropout ... *"
@echo "******************************************************"
@echo
@${VW} --loss_function quantile -l 1 -b 24 --passes 100 \
-k --cache_file $@.cache -d $(word 2,$+) --holdout_off \
- --lrq um10 --lrqdropout --adaptive --invariant -f $@.model
+ --lrq um14 --lrqdropout --adaptive --invariant -f $@.model
@echo "*****************************************************"
@echo "* testing low-rank interaction model (with dropout) *"
@echo "*****************************************************"
diff --git a/demo/movielens/README.md b/demo/movielens/README.md
index 76998b58..c5c179d2 100755
--- a/demo/movielens/README.md
+++ b/demo/movielens/README.md
@@ -34,9 +34,9 @@ a bit of `--l2` regularization improves generalization.
### Demo Instructions ###
- `make shootout`: eventually produces three results indicating test MAE (mean absolute error) on movielens-1M for
- - linear: a model without any interactions. basically this creates a user bias and item bias fit. this is a surprisingly strong baseline in terms of MAE, but is useless for recommendation as it induces the same item ranking for all users. It achieves test MAE of 0.733 (at the time of this writing).
- - lrq: the linear model augmented with rank-5 interactions between users and movies, aka, "five latent factors". It achieves test MAE of 0.700. I determined that 5 was the best number to use through experimentation. The additional `vw` command-line flag vs. the linear model is `--l2 1e-6 --lrq um5`.
- - lrqdropout: the linear model augmented with rank-10 interactions between users and movies, and trained with dropout. It achieves test MAE of 0.692. Dropout effectively halves the number of latent factors, so unsurprisingly 10 factors seem to work best. The additional `vw` command-line flags vs. the linear model are `--lrq um10 --lrqdropout`.
+ - linear: a model without any interactions. basically this creates a user bias and item bias fit. this is a surprisingly strong baseline in terms of MAE, but is useless for recommendation as it induces the same item ranking for all users. It achieves test MAE of 0.731 (at the time of this writing).
+ - lrq: the linear model augmented with rank-7 interactions between users and movies, aka, "seven latent factors". It achieves test MAE of 0.699. I determined that 7 was the best number to use through experimentation. The additional `vw` command-line flags vs. the linear model are `--l2 1e-6 --lrq um7`.
+ - lrqdropout: the linear model augmented with rank-14 interactions between users and movies, and trained with dropout. It achieves test MAE of 0.691. Dropout effectively halves the number of latent factors, so unsurprisingly 14 factors seem to work best. The additional `vw` command-line flags vs. the linear model are `--lrq um14 --lrqdropout`.
- the first time you invoke `make shootout` there is a lot of other output. invoking it a second time will allow you to just see the cached results.
Details about how `vw` is invoked are in the `Makefile`.
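
Since the README quotes test MAE figures, one hedged way to compute such a number from a predictions file, assuming `vw -p` wrote one bare prediction per line and that the label is the first token of each test example (file names are illustrative):

    # Pair predictions with labels, then average the absolute error.
    paste lrq.predictions ml-1m.ratings.test.vw |
      awk '{ d = $1 - $2; if (d < 0) d = -d; err += d; n++ }
           END { printf "test MAE: %.3f\n", err / n }'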