Welcome to the mirror list, hosted at ThFree Co, Russian Federation.
index: github.com/stanfordnlp/stanza.git

Branches:
142_resources_patch
NFC
am-notes
aws_sagemaker_tooling
azshan
beam
bert_mix
charlm_cache
charlm_checkpoint
con_add_pattn
con_attn2
con_bigrams
con_checkpoint
con_classifier
con_classifier_reranking
con_focal
con_freeze
con_kbest
con_lattn
con_lattn2
con_mixed_pattn
con_mlp
con_mlp_inputs
con_multitask
con_pattn_lr
con_pattn_replace
con_restart_transitions
con_self_gan
con_shift_tags
con_shift_tags2
con_shift_transitions
con_simple_transformer
con_simple_unary
con_skip
con_trans
con_tree_lstm
con_tree_lstm2
con_tree_lstm3
con_tstack
con_vector_dropout
con_vit
con_warmup
con_warmup_2
con_warmup_lattn
dataloader_local
de_ner
dev
elmo2
elmo_many
fewer_cuda
fix_unit_tests
gh-pages
gh-pages-sent
hebrew_combined
hi-layered-ner
hi-ner-cleaned
hi-shuffle
hi_ner
hi_ner_final
inorder_unary
kazakh_ner
kk_trans
lattn_issue
m1
main
marathi
margin_penalty
masakhane
masks
masks2
masks3
ner_bert
ner_bert_copy
ner_wv
ninf_langid
nner
no_header_pt
numeric_re
ordered_dict
pattn_issue
pos_bert
pos_charlm
ppf_data
pydataloader
refactor_dataloader
refactor_lstm
refactor_tok
refactor_tokenizer
refactor_tokkenizer_2
runner-demo
semgrex_search_visualization
sentence_ids
sentiment
sentiment_charlm
sentiment_lstm
sentiment_trainer
sindhi
spanish_sent
t5
t5b
tagger_mha
thai-sybrnn
thai_ner
tiny_ud
token
tokens
tr_ner
trans_lm
tweet
ug_ner
update_stanza
updated_eval
updated_eval_2
vi_bert_last
visualization
wandb
word_lstm_pattn
wordinput-sentsegmenter
xpos
Commit log:

Age | Commit message | Author
2022-09-25 | Add a TREE_LSTM node combination method. [con_tree_lstm2] | John Bauer
2022-09-22 | Transformer stack using MHA instead of LSTM as an option for the transitions ... | John Bauer
2022-09-22 | Save the best_epoch as well | John Bauer
2022-09-21 | Always use trainer.best_f1 instead of keeping a variable best_f1. Otherwise,... | John Bauer
2022-09-20 | Clarify a bit of doc | John Bauer
2022-09-20 | remove 'lstm' from transition_stack and constituent_stack, since any updated ... | John Bauer
2022-09-20 | Add a split_size flag to make_lm_data.py to accommodate languages with smalle... | John Bauer
2022-09-20 | Fix up the broken None args.output feature | John Bauer
2022-09-19 | Add a bit of doc to LSTMTreeStack.__init__ | John Bauer
2022-09-19 | Refactor output() for the lstm_stacks. This makes it easier to refactor a ne... | John Bauer
2022-09-19 | Comment on something that can be removed later | John Bauer
2022-09-19 | Also refactor a constituent_lstm_stack. The unary transitions are a little w... | John Bauer
2022-09-19 | Refactor the LSTM for the Transitions to a separate class. Will make it simp... | John Bauer
2022-09-17 | Use the same lstm layer for transitions as well as constituents | John Bauer
2022-09-17 | Restrict MultilingualPipeline to only load a few models. Will hopefully save... | John Bauer
2022-09-17 | Maybe save a little GPU memory (fragmentation, at least) by reusing some larg... | John Bauer
2022-09-17 | Try some option combinations which will hopefully make the models take up les... | John Bauer
2022-09-17 | Also test sentence_boundary_vectors==none for the different transition embedd... | John Bauer
2022-09-17 | Transition start should be the same as the embedding dim, not the hidden dim ... | John Bauer
2022-09-17 | Save a best_f1 with the models so that we only resave a model when restarting... | John Bauer
2022-09-16 | Add a small bit of doc | John Bauer
2022-09-16 | Three lists appears slightly faster than one list of tuples | John Bauer
2022-09-16 | Update tokenizer to use int64 for indices into an Embedding? | John Bauer
2022-09-16 | Add a Hebrew charlm | John Bauer
2022-09-16 | Each of these arguments should now be baked into every model we released in 1... | John Bauer
2022-09-15 | Log notes about missing grad when using watch_regex | John Bauer
2022-09-15 | Bump version number to release a few small changes [HEAD] [v1.4.2] [main] | John Bauer
2022-09-15 | normalize and sort dependencies, add transformers extra (#1124) | Nicholas Bollweg
2022-09-15 | Hide the imports of SiLU and Mish from older versions of torch. #1120 | John Bauer
2022-09-15 | 3.9 is a supported version of python now | John Bauer
2022-09-15 | Stop requiring pytest for all installations. Instead we hopefully can hopefu... | John Bauer
2022-09-15 | Switch dict + list to OrderedDict | John Bauer
2022-09-14 | Update a couple versions in the README.md to better reflect reality - we supp... | John Bauer
2022-09-14 | Squeeze a little bit more - only use depparse in the depparse pipeline [v1.4.1] | John Bauer
2022-09-14 | Turn some multilingual pipeline tests into fixtures. Again, should save memory | John Bauer
2022-09-14 | Turn some pipelines getting built over and over into fixtures. Will make the... | John Bauer
2022-09-14 | Now there should be POS models which match the PL charlms as well | John Bauer
2022-09-14 | Simpler way to have PL charlm specific for NER | John Bauer
2022-09-14 | Try to reduce the scope on various pipelines to make the test suite less like... | John Bauer
2022-09-14 | Lower log level on some messages we don't want written to the pipeline | John Bauer
2022-09-14 | Oops, bugfix. Otherwise you get the whole dictionary for a language/model pa... | John Bauer
2022-09-13 | Temporarily don't include charlm in the POS models for PL - those haven't ret... | John Bauer
2022-09-13 | Add charlm to the sentiment dependencies when building resources.json | John Bauer
2022-09-13 | PL now has an NER model | John Bauer
2022-09-13 | Add a couple sentiment models for v1.4.1 | John Bauer
2022-09-13 | Add a tool to evaluate treebanks that are written out by a parser, such as wh... | John Bauer
2022-09-13 | Refactor a little bit. Make it so the scoring interface can handle either sc... | John Bauer
2022-09-13 | Default trees written with format _O | John Bauer
2022-09-12 | Don't double save_dir if the user gives save_dir as part of the model filename | John Bauer
2022-09-12 | Fix remove_optimizer mode | John Bauer