get_document_topics and get_term_topics in gensim

ldamodel in gensim has two methods: get_document_topics and get_term_topics.

Despite their use in this gensim notebook tutorial, I am not quite sure how to interpret the output of get_term_topics, so I created the code below to show what I mean:

from gensim import corpora, models

texts = [['human', 'interface', 'computer'],
 ['survey', 'user', 'computer', 'system', 'response', 'time'],
 ['eps', 'user', 'interface', 'system'],
 ['system', 'human', 'system', 'eps'],
 ['user', 'response', 'time'],
 ['trees'],
 ['graph', 'trees'],
 ['graph', 'minors', 'trees'],
 ['graph', 'minors', 'survey']]

# build the corpus, dict and train the model
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
model = models.ldamodel.LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, 
                                 random_state=0, chunksize=2, passes=10)

# show the topics
topics = model.show_topics()
for topic in topics:
    print(topic)
### (0, '0.159*"system" + 0.137*"user" + 0.102*"response" + 0.102*"time" + 0.099*"eps" + 0.090*"human" + 0.090*"interface" + 0.080*"computer" + 0.052*"survey" + 0.030*"minors"')
### (1, '0.267*"graph" + 0.216*"minors" + 0.167*"survey" + 0.163*"trees" + 0.024*"time" + 0.024*"response" + 0.024*"eps" + 0.023*"user" + 0.023*"system" + 0.023*"computer"')

# get_document_topics for a document with a single token 'user'
text = ["user"]
bow = dictionary.doc2bow(text)
print "get_document_topics", model.get_document_topics(bow)
### get_document_topics [(0, 0.74568415806946331), (1, 0.25431584193053675)]

# get_term_topics for the token user
print "get_term_topics: ", model.get_term_topics("user", minimum_probability=0.000001)
### get_term_topics:  [(0, 0.1124525558321441), (1, 0.006876306738765027)]

For get_document_topics, the output makes sense: the two probabilities sum to 1.0, and the topic in which user has the higher weight (per model.show_topics()) also has the higher probability.

But for get_term_topics, questions arise:

  • Why don't the probabilities sum to 1.0?
  • Numerically, the topic in which user has the higher weight (per model.show_topics()) also has the higher number here, but what does that number mean?
  • Why should we use get_term_topics at all, when get_document_topics provides (presumably) the same functionality with a meaningful result?


1 answer


I was working on LDA topic modeling and came across this post. I had created two topics, say topic1 and topic2.

Top 10 words for each topic:

topic1: 0.009*"would" + 0.008*"experi" + 0.008*"need" + 0.007*"like" + 0.007*"code" + 0.007*"work" + 0.006*"think" + 0.006*"make" + 0.006*"one" + 0.006*"get"

topic2: 0.027*"ierr" + 0.018*"line" + 0.014*"0.0e+00" + 0.010*"error" + 0.009*"defin" + 0.009*"norm" + 0.006*"call" + 0.005*"type" + 0.005*"de" + 0.005*"warn"

In the end, I took one document to determine its closest topic.

# doc is a list of raw document strings; lda is the trained LdaModel
for d in doc:
    bow = dictionary.doc2bow(d.split())
    t = lda.get_document_topics(bow)

and the output is [(0, 0.88935698141006414), (1, 0.1106430185899358)].



To answer your first question: the probabilities do sum to 1.0 for a document, and that is what get_document_topics computes. The documentation clearly states that it returns the topic distribution for the given document, as a list of (topic_id, topic_probability) 2-tuples.
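As a rough sanity check with the toy model from the question (note that get_document_topics drops topics below a minimum_probability cutoff, so the sum only comes out at ~1.0 once the cutoff is lowered):

# the per-document topic probabilities form a distribution over topics
bow = dictionary.doc2bow(["user"])
dist = model.get_document_topics(bow, minimum_probability=0.0)
print(sum(prob for _, prob in dist))  # ~1.0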

Also, I tried get_term_topics for the keyword "ierr":

t = lda.get_term_topics("ierr", minimum_probability=0.000001)

and the result is [(1, 0.027292299843400435)], which is nothing more than the contribution of that word toward defining each topic, which makes sense.
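To see why those numbers need not sum to 1.0, here is a rough illustration against the toy model from the question (get_term_topics reads from a closely related internal matrix, so treat the values as illustrative rather than identical):

# each ROW of the topic-term matrix is a distribution over the whole
# vocabulary, so rows sum to 1.0 ...
topic_term = model.get_topics()   # shape: (num_topics, vocab_size)
print(topic_term.sum(axis=1))     # ~[1.0, 1.0]

# ... while a single word's COLUMN (its weight within each topic) carries
# no sum-to-1 constraint; this per-topic weight is what get_term_topics reports
word_id = dictionary.token2id["user"]
print(topic_term[:, word_id])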

So, you can tag a document based on its topic distribution using get_document_topics, and you can judge the importance of a word from its per-topic contribution using get_term_topics.
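For example, a minimal sketch of that tagging step, reusing the toy corpus and model from the question:

# label every document in the corpus with its most probable topic
for doc_id, bow in enumerate(corpus):
    dist = model.get_document_topics(bow)
    top_topic, top_prob = max(dist, key=lambda pair: pair[1])
    print("doc %d -> topic %d (p=%.3f)" % (doc_id, top_topic, top_prob))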

Hope this helps.
