Tagging approaches covered:

- Semi-supervised condensed nearest neighbor
- Bidirectional LSTM-CRF with contextual string embeddings (Akbik et al.)
- Maximum entropy bidirectional easiest-first inference
- Maximum entropy cyclic dependency network
- Maximum entropy Markov model with external lexical information
- Perceptron with external lexical information (*) (Chrupała et al.)
- TnT, a statistical part-of-speech tagger (Brants 2000)

Notes:

(*) External lexical information: taken from the Lefff lexicon (Sagot 2010, Alexina project).
(*) TnT: accuracy is as reported by Giménez and Márquez (2004) for the given test collection. Brants (2000) reports 96.7% token accuracy and 85.5% unknown-word accuracy in a 10-fold cross-validation on the Penn WSJ corpus.
(**) GENiA: results are for models trained and tested on the given corpora, to be comparable to the other results. The distributed GENiA tagger is trained on a mixed training corpus and reaches 96.94% on WSJ and 98.26% on GENiA biomedical English.
(***) Extra data: whether system training exploited (usually large amounts of) extra unlabeled text beyond the standard supervised training data, for example through semi-supervised learning, self-training, or distributional-similarity features.

References:

- Akbik, Alan, Duncan Blythe and Roland Vollgraf. Contextual string embeddings for sequence labeling.
- Brants, Thorsten (2000). TnT - a statistical part-of-speech tagger. In Proceedings of the 6th Applied Natural Language Processing Conference.
- Chrupała, Grzegorz, Georgiana Dinu and Josef van Genabith.
- Collins, Michael (2002). Discriminative training methods for hidden Markov models: theory and experiments with perceptron algorithms.
- Constant, Matthieu, Isabelle Tellier, Denys Duchier, Yoann Dupont, Anthony Sigogne and Sylvie Billot. Intégrer des connaissances linguistiques dans un CRF : application à l'apprentissage d'un segmenteur-étiqueteur du français [Integrating linguistic knowledge into a CRF: application to learning a segmenter-tagger for French].
- Denis, Pascal and Benoît Sagot. Coupling an annotated corpus and a morphosyntactic lexicon for state-of-the-art POS tagging with less human effort.
- Giménez, Jesús and Lluís Márquez (2004). SVMTool: a general POS tagger generator based on support vector machines. In Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC'04).
- Manning, Christopher D. (2011). Part-of-speech tagging from 97% to 100%: is it time for some linguistics? In Alexander Gelbukh (ed.), Computational Linguistics and Intelligent Text Processing, 12th International Conference, CICLing 2011, Proceedings, Part I. Lecture Notes in Computer Science 6608.
- Seddah, Djamé, Grzegorz Chrupała, Özlem Çetinoğlu and Marie Candito (2010). Lemmatization and lexicalized statistical parsing of morphologically rich languages: the case of French. In Proceedings of SPMRL 2010 (NAACL 2010 workshop).
- Shen, Libin, Giorgio Satta and Aravind Joshi (2007). Guided learning for bidirectional sequence classification. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL 2007), pages 760-767.
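Several of the listed systems (the guided-learning tagger and the perceptron with external lexical information) build on the perceptron training for sequence models studied by Collins (2002). A minimal sketch of that idea, simplified to greedy left-to-right decoding rather than Viterbi search; the toy corpus, tag set, and feature templates here are illustrative assumptions, not the configuration of any listed system:

```python
# Sketch of perceptron training for tagging (after Collins 2002),
# simplified to greedy left-to-right decoding with local features.
# Corpus, tags, and feature templates below are toy assumptions.
from collections import defaultdict

def features(words, i, prev_tag):
    """Local feature templates: word, 3-char suffix, previous tag."""
    w = words[i]
    return [f"w={w}", f"suf3={w[-3:]}", f"prev={prev_tag}",
            f"w+prev={w}|{prev_tag}"]

class PerceptronTagger:
    def __init__(self, tags):
        self.tags = tags
        self.weights = defaultdict(float)  # (feature, tag) -> weight

    def score(self, feats, tag):
        return sum(self.weights[(f, tag)] for f in feats)

    def tag(self, words):
        out, prev = [], "<s>"
        for i in range(len(words)):
            feats = features(words, i, prev)
            prev = max(self.tags, key=lambda t: self.score(feats, t))
            out.append(prev)
        return out

    def train(self, corpus, epochs=5):
        """Perceptron update: promote gold-tag features, demote predicted."""
        for _ in range(epochs):
            for words, gold in corpus:
                prev = "<s>"
                for i, g in enumerate(gold):
                    feats = features(words, i, prev)
                    pred = max(self.tags, key=lambda t: self.score(feats, t))
                    if pred != g:
                        for f in feats:
                            self.weights[(f, g)] += 1.0
                            self.weights[(f, pred)] -= 1.0
                    prev = g  # condition on gold history during training

toy = [(["the", "dog", "barks"], ["DT", "NN", "VBZ"]),
       (["a", "cat", "sleeps"], ["DT", "NN", "VBZ"])]
tagger = PerceptronTagger(["DT", "NN", "VBZ"])
tagger.train(toy)
```

Systems in the table refine this basic scheme, e.g. with bidirectional or easiest-first decoding order, weight averaging, and features drawn from an external lexicon rather than only from the training corpus.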