
github.com/TharinduDR/TransQuest.git
path: root/docs
author     TharinduDR <rhtdranasinghe@gmail.com>  2020-10-14 17:18:42 +0300
committer  TharinduDR <rhtdranasinghe@gmail.com>  2020-10-14 17:18:42 +0300
commit     3d02fe430b8b32152789b486b9731b3171e9a6d6 (patch)
tree       f527d38a225bb12c3cdcc3ebd9f1090c04dd11b3 /docs
parent     cb322b8c2a682e2fc16b5d128ba0bbc40dac5067 (diff)
033: Adding documentation
Diffstat (limited to 'docs')
-rw-r--r--  docs/architectures.md  8
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/docs/architectures.md b/docs/architectures.md
index bc59e81..63900be 100644
--- a/docs/architectures.md
+++ b/docs/architectures.md
@@ -9,7 +9,7 @@ The first architecture proposed uses a single XLM-R transformer model. The input
### Minimal Start for a MonoTransQuest Model
-First read your data in to a pandas dataframe and format it so that it has three columns with headers text_a, text_b and labels. text_a is the source text, text_b is the target text and labels are the quality scores. Then initiate and train the model like the following code.
+First read your data into a pandas dataframe and format it so that it has three columns with the headers text_a, text_b and labels. text_a is the source text, text_b is the target text, and labels are the quality scores. Then initialise and train the model as in the following code. train_df and eval_df are the pandas dataframes prepared with the above instructions.
```python
from transquest.algo.transformers.evaluation import pearson_corr, spearman_corr
@@ -22,9 +22,13 @@ model = QuestModel("xlmroberta", "xlm-roberta-large", num_labels=1, use_cuda=tor
model.train_model(train_df, eval_df=eval_df, pearson_corr=pearson_corr, spearman_corr=spearman_corr,
mae=mean_absolute_error)
```
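As a side note, below is a minimal sketch of how train_df and eval_df could be prepared; the column headers text_a, text_b and labels come from the description above, while the file names and the original column names (source, target, score) are hypothetical placeholders.
```python
import pandas as pd

# Hypothetical TSV files holding source sentences, target sentences and quality scores.
raw_train = pd.read_csv("train.tsv", sep="\t")
raw_eval = pd.read_csv("dev.tsv", sep="\t")

def to_transquest_format(df):
    # Rename the (assumed) original columns to the headers described above
    # and keep only the three required columns.
    renamed = df.rename(columns={"source": "text_a", "target": "text_b", "score": "labels"})
    return renamed[["text_a", "text_b", "labels"]]

train_df = to_transquest_format(raw_train)
eval_df = to_transquest_format(raw_eval)
```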
+An example transformer_config is available [here](https://github.com/TharinduDR/TransQuest/blob/master/examples/wmt_2020/ro_en/transformer_config.py). The best model will be saved to the path specified by "best_model_dir" in transformer_config. Then you can load it and do the predictions like this.
+```python
+model = QuestModel("xlmroberta", transformer_config["best_model_dir"], num_labels=1,
+ use_cuda=torch.cuda.is_available(), args=transformer_config)
-
+```
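For completeness, here is a sketch of the prediction step. It assumes the QuestModel import path used elsewhere in the repository and that predict() accepts a list of [text_a, text_b] pairs and returns predictions together with raw outputs; the sentence pair is a placeholder.
```python
import torch
from transquest.algo.transformers.run_model import QuestModel  # assumed import path

# Reload the best checkpoint saved during training (as in the snippet above).
model = QuestModel("xlmroberta", transformer_config["best_model_dir"], num_labels=1,
                   use_cuda=torch.cuda.is_available(), args=transformer_config)

# Each item is a [source, target] pair; the sentences here are placeholders.
to_predict = [["This is the source sentence.", "This is the target sentence."]]
predictions, raw_outputs = model.predict(to_predict)
print(predictions)  # assumed to hold one predicted quality score per input pair
```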
## SiameseTransQuest
![SiameseTransQuest Architecture](images/SiameseTransQuest.png)