
github.com/marian-nmt/marian-regression-tests.git
author    Roman Grundkiewicz <rgrundkiewicz@gmail.com>  2021-01-25 20:59:31 +0300
committer Roman Grundkiewicz <rgrundkiewicz@gmail.com>  2021-01-25 20:59:31 +0300
commit    18c4e54806205a3a29b0a8435864d6312dccaacf (patch)
tree      2ce1544a29e5169c6b5e0da06c8a8567241d4a49
parent    712d5f5b9db45f0407ad43004b16b0d332f5df31 (diff)
Fix tests affected by changed logging formats
 tests/training/scheduler/log_epoch_e.expected          | 24
 tests/training/scheduler/log_epoch_t.expected          | 18
 tests/training/scheduler/log_epoch_u.expected          | 18
 tests/training/scheduler/test_logical_epoch.sh         |  2
 tests/training/scheduler/test_logical_epoch_labels.sh  |  2
 tests/training/scheduler/test_logical_epoch_updates.sh |  2
 6 files changed, 36 insertions(+), 30 deletions(-)
diff --git a/tests/training/scheduler/log_epoch_e.expected b/tests/training/scheduler/log_epoch_e.expected
index 87cf167..c262c26 100644
--- a/tests/training/scheduler/log_epoch_e.expected
+++ b/tests/training/scheduler/log_epoch_e.expected
@@ -1,20 +1,22 @@
Training started
-Seen 1542 samples
+Parameter type float32, optimization type float32, casting types false
+Allocating memory for Adam-specific shards
+Seen 1,542 samples
Starting data epoch 2 in logical epoch 1.000
-Ep. 1.000 : Up. 10 : Sen. 768 : Cost 9.68880177 * 61,315 after 61,315
-Seen 1542 samples
+Ep. 1.000 : Up. 10 : Sen. 768 : Cost 9.68879700 * 61,315 @ 6,851 after 61,315
+Seen 1,542 samples
Starting data epoch 3 in logical epoch 1.500
-Ep. 1.500 : Up. 20 : Sen. 1,536 : Cost 9.67091751 * 61,279 after 122,594
-Seen 1542 samples
+Ep. 1.500 : Up. 20 : Sen. 1,536 : Cost 9.67091274 * 61,279 @ 6,585 after 122,594
+Seen 1,542 samples
Starting data epoch 4 in logical epoch 2.000
-Seen 1542 samples
+Seen 1,542 samples
Starting data epoch 5 in logical epoch 2.500
-Ep. 2.500 : Up. 30 : Sen. 512 : Cost 9.65089989 * 54,621 after 177,215
-Seen 1542 samples
+Ep. 2.500 : Up. 30 : Sen. 512 : Cost 9.65089798 * 54,621 @ 7,219 after 177,215
+Seen 1,542 samples
Starting data epoch 6 in logical epoch 3.000
-Ep. 3.000 : Up. 40 : Sen. 1,280 : Cost 9.63199997 * 61,545 after 238,760
-Seen 1542 samples
+Ep. 3.000 : Up. 40 : Sen. 1,280 : Cost 9.63199615 * 61,545 @ 6,916 after 238,760
+Seen 1,542 samples
Starting data epoch 7 in logical epoch 3.500
Training finished
Saving model to log_epoch_e/model.npz
-Saving Adam parameters to log_epoch_e/model.npz.optimizer.npz
+Saving Adam parameters
diff --git a/tests/training/scheduler/log_epoch_t.expected b/tests/training/scheduler/log_epoch_t.expected
index 1f57c2e..0373755 100644
--- a/tests/training/scheduler/log_epoch_t.expected
+++ b/tests/training/scheduler/log_epoch_t.expected
@@ -1,12 +1,14 @@
Training started
-Ep. 2.258 : Up. 4 : Sen. 512 : Cost 9.69286919 * 13,547 after 13,547
-Ep. 3.400 : Up. 6 : Sen. 768 : Cost 9.68953419 * 6,851 after 20,398
-Ep. 5.131 : Up. 9 : Sen. 1,152 : Cost 9.68455887 * 10,387 after 30,785
-Ep. 6.793 : Up. 12 : Sen. 1,536 : Cost 9.68291855 * 9,975 after 40,760
-Seen 1542 samples
+Parameter type float32, optimization type float32, casting types false
+Allocating memory for Adam-specific shards
+Ep. 2.258 : Up. 4 : Sen. 512 : Cost 9.69286919 * 13,547 @ 3,630 after 13,547
+Ep. 3.400 : Up. 6 : Sen. 768 : Cost 9.68952084 * 6,851 @ 3,634 after 20,398
+Ep. 5.131 : Up. 9 : Sen. 1,152 : Cost 9.68455029 * 10,387 @ 3,526 after 30,785
+Ep. 6.793 : Up. 12 : Sen. 1,536 : Cost 9.68291855 * 9,975 @ 3,457 after 40,760
+Seen 1,542 samples
Starting data epoch 2 in logical epoch 6.819
-Ep. 8.472 : Up. 16 : Sen. 384 : Cost 9.67040443 * 10,074 after 50,834
-Ep. 10.219 : Up. 19 : Sen. 768 : Cost 9.66528606 * 10,481 after 61,315
+Ep. 8.472 : Up. 16 : Sen. 384 : Cost 9.67040443 * 10,074 @ 3,589 after 50,834
+Ep. 10.219 : Up. 19 : Sen. 768 : Cost 9.66527557 * 10,481 @ 3,634 after 61,315
Training finished
Saving model to log_epoch_t/model.npz
-Saving Adam parameters to log_epoch_t/model.npz.optimizer.npz
+Saving Adam parameters
diff --git a/tests/training/scheduler/log_epoch_u.expected b/tests/training/scheduler/log_epoch_u.expected
index a8855f2..14a3064 100644
--- a/tests/training/scheduler/log_epoch_u.expected
+++ b/tests/training/scheduler/log_epoch_u.expected
@@ -1,15 +1,17 @@
Training started
-Seen 1542 samples
+Parameter type float32, optimization type float32, casting types false
+Allocating memory for Adam-specific shards
+Seen 1,542 samples
Starting data epoch 2 in logical epoch 0.700
-Ep. 1.000 : Up. 10 : Sen. 768 : Cost 9.68880177 * 61,315 after 61,315
-Seen 1542 samples
+Ep. 1.000 : Up. 10 : Sen. 768 : Cost 9.68879700 * 61,315 @ 6,851 after 61,315
+Seen 1,542 samples
Starting data epoch 3 in logical epoch 1.400
-Ep. 2.000 : Up. 20 : Sen. 1,536 : Cost 9.67091751 * 61,279 after 122,594
-Seen 1542 samples
+Ep. 2.000 : Up. 20 : Sen. 1,536 : Cost 9.67091274 * 61,279 @ 6,585 after 122,594
+Seen 1,542 samples
Starting data epoch 4 in logical epoch 2.100
-Seen 1542 samples
+Seen 1,542 samples
Starting data epoch 5 in logical epoch 2.800
-Ep. 3.000 : Up. 30 : Sen. 512 : Cost 9.65089989 * 54,621 after 177,215
+Ep. 3.000 : Up. 30 : Sen. 512 : Cost 9.65089798 * 54,621 @ 7,219 after 177,215
Training finished
Saving model to log_epoch_u/model.npz
-Saving Adam parameters to log_epoch_u/model.npz.optimizer.npz
+Saving Adam parameters
diff --git a/tests/training/scheduler/test_logical_epoch.sh b/tests/training/scheduler/test_logical_epoch.sh
index fca9f52..dcfa95d 100644
--- a/tests/training/scheduler/test_logical_epoch.sh
+++ b/tests/training/scheduler/test_logical_epoch.sh
@@ -25,7 +25,7 @@ test -e log_epoch_e/model.npz
test -e log_epoch_e.log
# Compare actual and expected outputs
-cat log_epoch_e.log | $MRT_TOOLS/strip-timestamps.sh | grep -v '^\[' | sed 's/ : Time.*//' > log_epoch_e.out
+cat log_epoch_e.log | $MRT_TOOLS/strip-timestamps.sh | grep -v '^\[' | grep -v 'Synced' | sed 's/ : Time.*//' > log_epoch_e.out
$MRT_TOOLS/diff-nums.py log_epoch_e.out log_epoch_e.expected -p 0.01 -o log_epoch_e.diff
# Exit with success code
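The comparison step above relies on `$MRT_TOOLS/diff-nums.py`, which matches the numbers in the two files within a tolerance (`-p 0.01`) so that small cost fluctuations, like those in the expected-output changes in this commit, do not fail the test. A rough stand-alone sketch of that idea in awk (this is not the real tool's code; file names and line content are illustrative):

```shell
#!/bin/sh
# Rough sketch of tolerance-based numeric comparison, in the spirit of
# diff-nums.py -p 0.01. NOT the real tool's code; the file names and
# line content below are illustrative.
cat > a.txt <<'EOF'
Ep. 1.000 : Up. 10 : Cost 9.68880177 * 61,315
EOF
cat > b.txt <<'EOF'
Ep. 1.000 : Up. 10 : Cost 9.68879700 * 61,315
EOF

awk -v tol=0.01 '
  NR == FNR { a[FNR] = $0; next }        # first file: remember each line
  {
    n = split(a[FNR], x)
    m = split($0, y)
    if (n != m) bad = 1
    for (i = 1; i <= n && i <= m; i++) {
      gsub(/,/, "", x[i]); gsub(/,/, "", y[i])   # drop thousands separators
      if (x[i] + 0 == x[i]) {            # numeric field: absolute tolerance
        d = x[i] - y[i]; if (d < 0) d = -d
        if (d > tol) bad = 1
      } else if (x[i] != y[i]) {         # text field: must match exactly
        bad = 1
      }
    }
  }
  END { print (bad ? "MISMATCH" : "OK"); exit bad }
' a.txt b.txt > result.txt

cat result.txt
```

The two cost values differ only in the sixth decimal place, well inside the 0.01 tolerance, so the comparison passes even though a byte-wise `diff` would not.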
diff --git a/tests/training/scheduler/test_logical_epoch_labels.sh b/tests/training/scheduler/test_logical_epoch_labels.sh
index c37c0fa..acbeda6 100644
--- a/tests/training/scheduler/test_logical_epoch_labels.sh
+++ b/tests/training/scheduler/test_logical_epoch_labels.sh
@@ -25,7 +25,7 @@ test -e log_epoch_t/model.npz
test -e log_epoch_t.log
# Compare actual and expected outputs
-cat log_epoch_t.log | $MRT_TOOLS/strip-timestamps.sh | grep -v '^\[' | sed 's/ : Time.*//' > log_epoch_t.out
+cat log_epoch_t.log | $MRT_TOOLS/strip-timestamps.sh | grep -v '^\[' | grep -v 'Synced' | sed 's/ : Time.*//' > log_epoch_t.out
$MRT_TOOLS/diff-nums.py log_epoch_t.out log_epoch_t.expected -p 0.01 -o log_epoch_t.diff
# Exit with success code
diff --git a/tests/training/scheduler/test_logical_epoch_updates.sh b/tests/training/scheduler/test_logical_epoch_updates.sh
index 8582120..326d4a6 100644
--- a/tests/training/scheduler/test_logical_epoch_updates.sh
+++ b/tests/training/scheduler/test_logical_epoch_updates.sh
@@ -25,7 +25,7 @@ test -e log_epoch_u/model.npz
test -e log_epoch_u.log
# Compare actual and expected outputs
-cat log_epoch_u.log | $MRT_TOOLS/strip-timestamps.sh | grep -v '^\[' | sed 's/ : Time.*//' > log_epoch_u.out
+cat log_epoch_u.log | $MRT_TOOLS/strip-timestamps.sh | grep -v '^\[' | grep -v 'Synced' | sed 's/ : Time.*//' > log_epoch_u.out
$MRT_TOOLS/diff-nums.py log_epoch_u.out log_epoch_u.expected -p 0.01 -o log_epoch_u.diff
# Exit with success code
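Each script change in this commit adds `grep -v 'Synced'` to the normalization pipeline so that the new shard-synchronization log lines never reach the numeric diff. A minimal stand-alone sketch of that pipeline (file names and log text are illustrative; the first `sed` stands in for `$MRT_TOOLS/strip-timestamps.sh`):

```shell
#!/bin/sh
# Minimal sketch of the log normalization the tests run before comparison.
# File names and log lines are illustrative, not real marian output.
cat > sample.log <<'EOF'
[2021-01-25 20:59:31] Ep. 1.000 : Up. 10 : Sen. 768 : Cost 9.68879700 * 61,315 @ 6,851 : Time 7.41s
[2021-01-25 20:59:31] [memory] Synced shard parameters (illustrative message)
[2021-01-25 20:59:31] Seen 1,542 samples
EOF

# 1) strip the leading timestamp (stand-in for strip-timestamps.sh)
# 2) drop remaining bracketed info lines and the shard-sync messages
# 3) cut the timing suffix, which varies from run to run
sed 's/^\[[^]]*\] //' sample.log \
  | grep -v '^\[' \
  | grep -v 'Synced' \
  | sed 's/ : Time.*//' > sample.out

cat sample.out
```

Only the deterministic training-progress lines survive, which is exactly what the `.expected` files in this commit record.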