github.com/TharinduDR/TransQuest.git
author     TharinduDR <rhtdranasinghe@gmail.com>  2021-03-19 22:50:44 +0300
committer  TharinduDR <rhtdranasinghe@gmail.com>  2021-03-19 22:50:44 +0300
commit     2760ce7d34ac55013cfe5b0c3bc81ffcac2a455a (patch)
tree       f9ce61a71cb6cfbeedf7c1c5eaff155e7cd3c105
parent     14ed555fdd09e258471109e0a294e50ee3f122f5 (diff)
056: Code Refactoring
-rw-r--r--  docs/architectures/sentence_level_architectures.md | 8
-rw-r--r--  docs/architectures/word_level_architecture.md      | 6
2 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/docs/architectures/sentence_level_architectures.md b/docs/architectures/sentence_level_architectures.md
index ce94585..9c9fa27 100644
--- a/docs/architectures/sentence_level_architectures.md
+++ b/docs/architectures/sentence_level_architectures.md
@@ -39,13 +39,13 @@ An example monotransquest_config is available [here.](https://github.com/Tharind
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", monotransquest_config["best_model_dir"], num_labels=1,
- use_cuda=torch.cuda.is_available(), args=monotransquest_config)
+ use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([[source, target]])
print(predictions)
```
-Predictions are the predicted quality scores. You will find more examples in [here.](https://tharindudr.github.io/TransQuest/examples/sentence_level/)
+Predictions are the predicted quality scores.
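The following is a minimal, self-contained sketch of the call above for scoring a small batch of translation pairs; the sentence pairs and the model directory are placeholders, and a trained MonoTransQuest model is assumed to exist at that path.

```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel

# Placeholder path: in practice use monotransquest_config["best_model_dir"],
# which must point to an already trained MonoTransQuest model.
best_model_dir = "outputs/best_model"

model = MonoTransQuestModel("xlmroberta", best_model_dir, num_labels=1,
                            use_cuda=torch.cuda.is_available())

# Each item is a [source, target] pair; the model returns one quality score per pair.
pairs = [
    ["52 mg wasserfreie Lactose .", "52 mg anhydrous lactose ."],
    ["România sanofi-aventis România S.R.L.", "Sanofi-Aventis România S. R. L."],
]
predictions, raw_outputs = model.predict(pairs)

for (source, target), score in zip(pairs, predictions):
    print(source, "->", target, ":", score)
```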
## SiameseTransQuest
The second approach proposed in this framework relies on a Siamese architecture where we feed the original text and the translation into two separate XLM-R transformer models.
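To make the idea concrete, here is a small conceptual sketch of the Siamese setup, not the TransQuest API itself: the same XLM-R encoder is applied separately to the source and to the translation, and the cosine similarity between the two pooled embeddings serves as the quality estimate. The model name and the mean pooling are illustrative choices.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Conceptual illustration only: one shared XLM-R encoder applied to the source
# and to the translation separately, as in a Siamese architecture.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder = AutoModel.from_pretrained("xlm-roberta-base")

def embed(text):
    # Mean-pool the token embeddings into a single sentence vector.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state
    return hidden.mean(dim=1).squeeze(0)

source = "52 mg wasserfreie Lactose ."
target = "52 mg anhydrous lactose ."

# Cosine similarity between the two embeddings acts as the quality score.
score = torch.cosine_similarity(embed(source), embed(target), dim=0)
print(float(score))
```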
@@ -121,4 +121,6 @@ test_data = SentencesDataset(examples=qe_reader.get_examples("test.tsv", test_fi
verbose=False)
```
-You will find the predictions in the test_result.txt file in the siamesetransquest_config['cache_dir'] folder. You can find more examples in [here.](https://tharindudr.github.io/TransQuest/examples/sentence_level) \ No newline at end of file
+You will find the predictions in the test_result.txt file inside the siamesetransquest_config['cache_dir'] folder; a short sketch for reading them follows below.
+
+Now that you know about the architectures in TransQuest, check how we can apply them to the WMT QE shared tasks [here.](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
\ No newline at end of file
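As noted above, SiameseTransQuest writes its test-set scores to test_result.txt under siamesetransquest_config['cache_dir']. A minimal sketch for loading those scores follows; the assumption that the file holds one score per line is mine, so adjust the parsing to the file you actually get.

```python
import os

# Assumption: cache_dir is the value of siamesetransquest_config['cache_dir'] used
# during evaluation, and test_result.txt holds one predicted quality score per line.
cache_dir = "temp/cache_dir"

with open(os.path.join(cache_dir, "test_result.txt")) as f:
    scores = [float(line.strip()) for line in f if line.strip()]

print("Loaded", len(scores), "predicted quality scores")
print(scores[:5])
```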
diff --git a/docs/architectures/word_level_architecture.md b/docs/architectures/word_level_architecture.md
index 0c79dc2..b6df13f 100644
--- a/docs/architectures/word_level_architecture.md
+++ b/docs/architectures/word_level_architecture.md
@@ -4,7 +4,7 @@ We have one architecture that is capable of providing word-level quality estimat
### Data Preparation
Prepare your data as a pandas dataframe in the following format; a short sketch that builds such a dataframe follows after the table.
-| source_column | target_column | source_tags_column | target_tags_column |
+| source | target | source_tags | target_tags |
| ----------------------------------------| ----------------------------------|--------------------|-------------------------------------|
| 52 mg wasserfreie Lactose . | 52 mg anhydrous lactose . | [OK OK OK OK OK] | [OK OK OK OK OK OK OK OK OK OK OK] |
| România sanofi-aventis România S.R.L. | Sanofi-Aventis România S. R. L. | [BAD OK OK OK] | [BAD BAD OK OK OK OK OK OK OK OK OK]|
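To make the expected input concrete, here is a minimal sketch that builds such a dataframe from the two rows above. Whether the tags are stored as space-separated strings or as lists is an assumption on my part (the table shows them in brackets), so match the format used in the released examples.

```python
import pandas as pd

# Two example rows in the format shown above: the tag columns hold OK/BAD labels
# for the source tokens and for the target tokens (including gap positions).
train_df = pd.DataFrame({
    "source": [
        "52 mg wasserfreie Lactose .",
        "România sanofi-aventis România S.R.L.",
    ],
    "target": [
        "52 mg anhydrous lactose .",
        "Sanofi-Aventis România S. R. L.",
    ],
    "source_tags": [
        "OK OK OK OK OK",
        "BAD OK OK OK",
    ],
    "target_tags": [
        "OK OK OK OK OK OK OK OK OK OK OK",
        "BAD BAD OK OK OK OK OK OK OK OK OK",
    ],
})

print(train_df.head())
```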
@@ -36,10 +36,12 @@ An example microtransquest_config is available [here.](https://github.com/Tharin
```python
from transquest.algo.word_level.microtransquest.run_model import MicroTransQuestModel
-model = MonoTransQuestModel("xlmroberta", monotransquest_config["best_model_dir"],
+model = MicroTransQuestModel("xlmroberta", microtransquest_config["best_model_dir"],
use_cuda=torch.cuda.is_available() )
sources_tags, targets_tags = model.predict([[source, target]], split_on_space=True)
```
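To round this off, a minimal sketch of the prediction call above with a concrete sentence pair; the model directory is a placeholder and a trained MicroTransQuest model is assumed to exist at that path.

```python
import torch
from transquest.algo.word_level.microtransquest.run_model import MicroTransQuestModel

# Placeholder path: in practice use microtransquest_config["best_model_dir"].
best_model_dir = "outputs/best_model"

model = MicroTransQuestModel("xlmroberta", best_model_dir,
                             use_cuda=torch.cuda.is_available())

source = "52 mg wasserfreie Lactose ."
target = "52 mg anhydrous lactose ."

# split_on_space=True treats the inputs as already whitespace-tokenised;
# the outputs are OK/BAD tags for the source tokens and the target tokens.
sources_tags, targets_tags = model.predict([[source, target]], split_on_space=True)

print(sources_tags)
print(targets_tags)
```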
+
+Now that you know about the word-level architecture in TransQuest, check how we can apply it to the WMT QE shared tasks [here.](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
\ No newline at end of file