Read large (1.5GB) file in h2o R

I am using the h2o package for modeling in R. For this, I want to read a dataset of about 1.5 GB using h2o.importFile(). I start an H2O server with the lines

library(h2oEnsemble)
h2o.init(max_mem_size = '1499m',nthreads=-1)


This produces the following log:

H2O is not running yet, starting it now...
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) Client VM (build 25.121-b13, mixed mode)

Starting H2O JVM and connecting: . Connection successful!

R is connected to the H2O cluster: 
H2O cluster uptime:         3 seconds 665 milliseconds 
H2O cluster version:        3.10.4.8 
H2O cluster version age:    28 days, 14 hours and 36 minutes  
H2O cluster name:           H2O_started_from_R_Lucifer_jvn970 
H2O cluster total nodes:    1 
H2O cluster total memory:   1.41 GB 
H2O cluster total cores:    4 
H2O cluster allowed cores:  4 
H2O cluster healthy:        TRUE 
H2O Connection ip:          localhost 
H2O Connection port:        54321 
H2O Connection proxy:       NA 
H2O Internal Security:      FALSE 
R Version:                  R version 3.3.2 (2016-10-31) 


The following line gives me an error:

train = h2o.importFile(path = normalizePath("C:\\Users\\All data\\traindt.rds"))

DistributedException from localhost/127.0.0.1:54321, caused by java.lang.AssertionError

DistributedException from localhost/127.0.0.1:54321, caused by java.lang.AssertionError
at water.MRTask.getResult(MRTask.java:478)
at water.MRTask.getResult(MRTask.java:486)
at water.MRTask.doAll(MRTask.java:402)
at water.parser.ParseDataset.parseAllKeys(ParseDataset.java:246)
at water.parser.ParseDataset.access$000(ParseDataset.java:27)
at water.parser.ParseDataset$ParserFJTask.compute2(ParseDataset.java:195)
at water.H2O$H2OCountedCompleter.compute(H2O.java:1315)
at jsr166y.CountedCompleter.exec(CountedCompleter.java:468)
at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
Caused by: java.lang.AssertionError
at water.parser.Categorical.addKey(Categorical.java:41)
at water.parser.FVecParseWriter.addStrCol(FVecParseWriter.java:127)
at water.parser.CsvParser.parseChunk(CsvParser.java:133)
at water.parser.Parser.readOneFile(Parser.java:187)
at water.parser.Parser.streamParseZip(Parser.java:217)
at water.parser.ParseDataset$MultiFileParseTask.streamParse(ParseDataset.java:907)
at water.parser.ParseDataset$MultiFileParseTask.map(ParseDataset.java:856)
at water.MRTask.compute2(MRTask.java:601)
at water.H2O$H2OCountedCompleter.compute1(H2O.java:1318)
at water.parser.ParseDataset$MultiFileParseTask$Icer.compute1(ParseDataset$MultiFileParseTask$Icer.java)
at water.H2O$H2OCountedCompleter.compute(H2O.java:1314)
... 5 more

Error: DistributedException from localhost/127.0.0.1:54321, caused by java.lang.AssertionError


Any help on fixing this issue? Note: assigning memory larger than 1499 MB also gives me an error (cannot allocate memory), even though I am working on a machine with 16 GB of RAM.

Edit: I downloaded 64-bit Java and converted my file to a CSV file. I was then able to set max_mem_size to '5g' and the problem was resolved.

For those facing the same problem: 1. Download the latest 64-bit JDK. 2. Run the following line:

h2o.init(max_mem_size = '5g',nthreads=-1)
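
To confirm the new JVM actually picked up the larger heap, the cluster summary can be reprinted from R (a minimal check using h2o.clusterInfo() from the h2o package):

library(h2o)  # assumes the cluster above was started with max_mem_size = '5g'

# reprint the connection summary; "H2O cluster total memory" should now be close to 5 GB
h2o.clusterInfo()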




2 answers


train = h2o.importFile(path = normalizePath("C:\\Users\\All data\\traindt.rds"))


Are you trying to load an .rds file? That is a binary R format that h2o.importFile() cannot read, so it won't work. You will need to save your training data in a cross-platform storage format (such as CSV or SVMLight) if you want to read it directly into H2O. If you don't have a copy in another format, just save one from R:

# read the `train` data.frame from the .rds file (use readRDS, not load, for .rds)
train <- readRDS("C:\\Users\\All data\\traindt.rds")

# save as CSV
write.csv(train, "C:\\Users\\All data\\traindt.csv", row.names = FALSE)

# import the CSV directly into the H2O cluster
train <- h2o.importFile(path = normalizePath("C:\\Users\\All data\\traindt.csv"))
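
Note also that base write.csv can be slow for a frame this size; data.table::fwrite is a commonly used, much faster alternative (a sketch, assuming the data.table package is installed and the same paths as above):

library(data.table)

# read the .rds into R, then write the CSV with the multi-threaded writer
train <- readRDS("C:\\Users\\All data\\traindt.rds")
fwrite(train, "C:\\Users\\All data\\traindt.csv")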




Another option is to load the .rds file into R and use the as.h2o() function:

# read the `train` data.frame from the .rds file
train <- readRDS("C:\\Users\\All data\\traindt.rds")

# push it from the R session into the H2O cluster
hf <- as.h2o(train)
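
One caveat with this route: as.h2o() pushes the whole data.frame from the R session into the H2O cluster, so for a ~1.5 GB dataset the data briefly lives in memory twice, which is why h2o.importFile() on a CSV is usually the better choice here. A small sketch of freeing the R-side copy once the H2OFrame exists (continuing from the block above):

# the data now lives in the H2O cluster as `hf`; drop the R copy to free memory
rm(train)
gc()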




You are running 32-bit Java, which limits how much memory H2O can use. One clue is that it won't start with a higher max_mem_size. Another clue is that the log says "Client VM".

You want 64-bit Java instead. The 64-bit version will say "Server VM". You can download the Java SE 8 JDK here:



http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
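
A quick way to check which Java is on the PATH, straight from R (this just shells out to java -version; look for "64-Bit Server VM" rather than "Client VM" in the banner):

# print the Java version banner to the console
system("java -version")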

Based on what you described, I recommend setting max_mem_size = '6g' or more, which will work fine on a 16 GB machine once 64-bit Java is installed.
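
After installing the 64-bit JDK, shut down the 32-bit cluster and restart it so the new JVM and the larger heap take effect (a sketch, assuming the connection from the question is still active):

library(h2o)

# stop the old 32-bit cluster, then restart on the 64-bit JVM with more memory
h2o.shutdown(prompt = FALSE)
h2o.init(max_mem_size = '6g', nthreads = -1)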
