github.com/moses-smt/vowpal_wabbit.git
author    Paul Mineiro <paul-github@mineiro.com>    2014-01-02 05:47:38 +0400
committer Paul Mineiro <paul-github@mineiro.com>    2014-01-02 05:47:38 +0400
commit    89540c155e272333edc45ceb24e98e7295c4e00c (patch)
tree      c07bedd121fd6e331cb80223641a560aad68133f /demo/movielens
parent    5f8f19adb47946cb2bf6f41962131be73a9ed878 (diff)
tweak movielens demo
Diffstat (limited to 'demo/movielens')
-rw-r--r--  demo/movielens/Makefile  | 4 ++--
-rwxr-xr-x  demo/movielens/README.md | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/demo/movielens/Makefile b/demo/movielens/Makefile
index f6ef4411..ece8d9e1 100644
--- a/demo/movielens/Makefile
+++ b/demo/movielens/Makefile
@@ -95,9 +95,9 @@ lrqdropout.results: ml-1m.ratings.test.vw ml-1m.ratings.train.vw
@echo "* vw --lrq um12 --lrqdropout ... *"
@echo "******************************************************"
@echo
- @${VW} --loss_function quantile -l 1 -b 24 --passes 100 \
+ @${VW} --loss_function quantile -l 0.45 -b 24 --passes 100 \
-k --cache_file $@.cache -d $(word 2,$+) --holdout_off \
- --lrq um12 --lrqdropout --adaptive --invariant -f $@.model
+ --lrq um14 --lrqdropout --adaptive --invariant -f $@.model
@echo "*****************************************************"
@echo "* testing low-rank interaction model (with dropout) *"
@echo "*****************************************************"
diff --git a/demo/movielens/README.md b/demo/movielens/README.md
index 4b6a49ab..674ca02b 100755
--- a/demo/movielens/README.md
+++ b/demo/movielens/README.md
@@ -35,7 +35,7 @@ You might find a bit of `--l2` regularization improves generalization.
- `make shootout`: eventually produces three results indicating test MAE (mean absolute error) on movielens-1M for
- linear: a model without any interactions. basically this creates a user bias and item bias fit. this is a surprisingly strong baseline in terms of MAE, but is useless for recommendation as it induces the same item ranking for all users. It achieves test MAE of 0.731.
- lrq: the linear model augmented with rank-7 interactions between users and movies, aka, "seven latent factors". It achieves test MAE of 0.698. I determined that 7 was the best number to use through experimentation. The additional `vw` command-line flags vs. the linear model are `--l2 1e-6 --lrq um7`. Performance is sensitive to the choice of `--l2` regularization strength.
- - lrqdropout: the linear model augmented with rank-12 interactions between users and movies, and trained with dropout. It achieves test MAE of 0.689. The additional `vw` command-line flags vs. the linear model are `--lrq um12 --lrqdropout`.
+ - lrqdropout: the linear model augmented with rank-14 interactions between users and movies, and trained with dropout. It achieves test MAE of 0.688. The additional `vw` command-line flags vs. the linear model are `--lrq um14 --lrqdropout`.
- the first time you invoke `make shootout` there is a lot of other output. invoking it a second time will allow you to just see the cached results.
Details about how `vw` is invoked are in the `Makefile`.
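The MAE numbers quoted above come from the demo's test step, which is not shown in this hunk. As a rough, hypothetical illustration only (not the Makefile's actual recipe), one way to compute test MAE from `vw` predictions, assuming untagged examples whose first token is the numeric rating label:

```
# Hypothetical sketch: score the held-out set with the trained model,
# then average |prediction - label| over the test examples.
vw -t -i lrqdropout.results.model -d ml-1m.ratings.test.vw -p predictions.txt
paste -d' ' predictions.txt ml-1m.ratings.test.vw \
  | awk '{d = $1 - $2; if (d < 0) d = -d; s += d} END {print s / NR}'
```

Because the model is trained with `--loss_function quantile`, its predictions target the conditional median, which is why MAE is the natural evaluation metric here.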