Building dictionary of words from large text
Posted by LiorH on Stack Overflow, 2010-04-06
I have a text file containing posts in English and Italian. I would like to read the posts into a data matrix so that each row represents a post and each column a word. The cells in the matrix are the counts of how many times each word appears in the post. The dictionary should consist of either all the words in the whole file or a non-exhaustive English/Italian dictionary.
I know this is a common essential preprocessing step for NLP.
Does anyone know of a tool/project that can perform this task?
Someone mentioned Apache Lucene; do you know whether a Lucene index can be exported to a data structure similar to what I need?
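
For reference, here is a minimal sketch of the kind of matrix I mean, using scikit-learn's CountVectorizer (the posts are hard-coded for illustration; in practice they would be read from the file, one post per entry, and the exact tool is what I'm asking about):

    # Sketch of a document-term matrix: rows = posts, columns = words, cells = counts.
    from sklearn.feature_extraction.text import CountVectorizer

    posts = [
        "the cat sat on the mat",       # hypothetical English post
        "il gatto dorme sul divano",    # hypothetical Italian post
        "the cat and il gatto",
    ]

    vectorizer = CountVectorizer()            # default word tokenization and lowercasing
    matrix = vectorizer.fit_transform(posts)  # sparse matrix of word counts

    print(vectorizer.get_feature_names_out()) # the dictionary: all words seen in the posts
    print(matrix.toarray())                   # per-post word counts as a dense array

Restricting the columns to a fixed English/Italian word list would just mean passing that list as the vocabulary instead of letting it be built from the file.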