Memory error in Word2vec while loading freebase-skipgram model

I am trying to use word2vec with the pretrained Freebase skip-gram model, but I am unable to load the model due to a memory error.

Here is a code snippet for it:

import gensim
from gensim import models
model = gensim.models.Word2Vec()
model = models.Word2Vec.load_word2vec_format('freebase-vectors-skipgram1000.bin.gz', binary=True)

      

I am getting the following error:

MemoryError                               Traceback (most recent call last)
<ipython-input-40-a1cfacf48c94> in <module>()
      1 model = gensim.models.Word2Vec()
----> 2 model = models.Word2Vec.load_word2vec_format('freebase-vectors-skipgram1000.bin.gz', binary=True)

/../../word2vec.pyc in load_word2vec_format(cls, fname, fvocab, binary, norm_only)
    583             vocab_size, layer1_size = map(int, header.split())  # throws for invalid file format
    584             result = Word2Vec(size=layer1_size)
--> 585             result.syn0 = zeros((vocab_size, layer1_size), dtype=REAL)
    586             if binary:
    587                 binary_len = dtype(REAL).itemsize * layer1_size

MemoryError: 

      

But the same approach works fine with the Google News model:

model = gensim.models.Word2Vec()
model = models.Word2Vec.load_word2vec_format('GoogleNews-vectors-negative300.bin.gz', binary=True)

      

I can't figure out why. Does the Freebase model require much more memory than the Google News one? It doesn't seem like it should. Am I missing something?
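For reference, the allocation that fails in the traceback is the syn0 matrix of shape (vocab_size, layer1_size) in float32, so the loader needs roughly vocab_size × vector_size × 4 bytes up front. Below is a minimal sketch to check that number from the file header without loading the whole model; it assumes the standard word2vec binary layout (the first line holds the two sizes), and estimate_syn0_bytes is just an illustrative helper, not part of gensim:

import gzip

def estimate_syn0_bytes(path):
    # First line of a word2vec-format file: b"<vocab_size> <vector_size>\n"
    with gzip.open(path, 'rb') as fin:
        vocab_size, vector_size = map(int, fin.readline().split())
    # load_word2vec_format allocates a float32 (4-byte) matrix of this shape
    return vocab_size * vector_size * 4

print(estimate_syn0_bytes('freebase-vectors-skipgram1000.bin.gz') / 1024.0 ** 3, 'GiB')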

1 answer


I figured it out, and it was indeed the Freebase model's memory requirement. The Freebase vectors are 1000-dimensional (versus 300 for the Google News model), so the matrix gensim allocates is considerably larger. On my 8 GB machine the load failed while another IPython notebook was running; closing all other processes and notebooks finally let me load it.
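If closing other processes isn't enough, one possible workaround (assuming a recent gensim release, where the loader is also available on KeyedVectors and accepts a limit argument) is to read only the first N vectors, which caps the size of the allocated matrix:

from gensim.models import KeyedVectors

# Read only the first 500,000 vectors to bound memory use;
# the limit value here is arbitrary and just for illustration.
model = KeyedVectors.load_word2vec_format(
    'freebase-vectors-skipgram1000.bin.gz',
    binary=True,
    limit=500000,
)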


