Thursday, March 12, 2015

Why does MALLET LDA need --keep-sequence?



The MALLET documentation requires the --keep-sequence flag for topic model training (details at: http://ift.tt/11VBte5).


However, to my knowledge, standard LDA treats each document as a bag of words, since including bigrams would greatly enlarge the feature space. So why does MALLET require --keep-sequence for LDA training, and how does MALLET actually use that sequential information?
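For context, here is a minimal sketch of what --keep-sequence corresponds to in MALLET's Java API, as best I recall it (class names are from MALLET 2.x and should be treated as my recollection, not an authoritative answer): the flag makes the importer store each document as a FeatureSequence (an ordered list of token ids) rather than a FeatureVector (a bag-of-words count vector), and the trainer consumes the former.

```java
import java.util.ArrayList;
import java.util.regex.Pattern;

import cc.mallet.pipe.CharSequence2TokenSequence;
import cc.mallet.pipe.Pipe;
import cc.mallet.pipe.SerialPipes;
import cc.mallet.pipe.TokenSequence2FeatureSequence;
import cc.mallet.pipe.TokenSequenceLowercase;
import cc.mallet.pipe.iterator.StringArrayIterator;
import cc.mallet.topics.ParallelTopicModel;
import cc.mallet.types.InstanceList;

public class KeepSequenceSketch {
    public static void main(String[] args) throws Exception {
        // Build an import pipeline. TokenSequence2FeatureSequence is the
        // programmatic analogue of --keep-sequence: each document stays an
        // ordered list of token ids (a FeatureSequence), not a count vector.
        ArrayList<Pipe> pipes = new ArrayList<>();
        pipes.add(new CharSequence2TokenSequence(Pattern.compile("\\p{L}+")));
        pipes.add(new TokenSequenceLowercase());
        pipes.add(new TokenSequence2FeatureSequence());

        InstanceList instances = new InstanceList(new SerialPipes(pipes));
        instances.addThruPipe(new StringArrayIterator(new String[] {
            "the quick brown fox jumps over the lazy dog",
            "topic models treat documents as bags of words"
        }));

        // The Gibbs sampler keeps one topic assignment per token occurrence,
        // which presumably is why it needs the per-token representation even
        // though the model itself ignores word order.
        ParallelTopicModel model = new ParallelTopicModel(5); // 5 topics
        model.addInstances(instances);
        model.setNumIterations(50);
        model.estimate();
    }
}
```

If that reading is right, the sequence is about the data structure (per-token storage) rather than about modeling word order, but I would appreciate confirmation.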


Thank you for reading this post.



