mlcp does not load a large number of files from a directory

EDIT BELOW

We are using MarkLogic Content Pump (mlcp) to load data into a MarkLogic 8 database. We have a dev environment in which everything works fine, and a prod environment in which mlcp never produces an estimate of the number of files to process.

We have 2.1 million JSON documents that we want to load.
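For reference, the directory import is invoked roughly like this; host, port, credentials, and paths below are placeholders, not our real values:

mlcp.sh import -host localhost -port 8140 \
    -username user -password pass \
    -mode local \
    -input_file_path /data/json-docs \
    -output_collections incoming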

On the dev server (ML8 + CentOS 6) we see the following:

15/07/13 13:19:35 INFO contentpump.ContentPump: Hadoop library version: 2.0.0-alpha
15/07/13 13:19:35 INFO contentpump.LocalJobRunner: Content type is set to MIXED.  The format of the  inserted documents will be determined by the MIME  type specification configured on MarkLogic Server.
15/07/13 13:19:35 WARN util.KerberosName: Kerberos krb5 configuration not found, setting default realm to empty
15/07/13 13:23:06 INFO input.FileInputFormat: Total input paths to process : 2147329
15/07/13 13:24:08 INFO contentpump.LocalJobRunner:  completed 0%
15/07/13 13:34:43 INFO contentpump.LocalJobRunner:  completed 1%
15/07/13 13:43:42 INFO contentpump.LocalJobRunner:  completed 2%
15/07/13 13:51:15 INFO contentpump.LocalJobRunner:  completed 3%


It finishes fine and the data loads correctly.

When we use the same data on a different machine, the prod server (ML8 + CentOS 7), we get:

15/07/14 17:02:21 INFO contentpump.ContentPump: Hadoop library version: 2.6.0
15/07/14 17:02:21 INFO contentpump.LocalJobRunner: Content type is set to MIXED.  The format of the  inserted documents will be determined by the MIME  type specification configured on MarkLogic Server.


Besides the different OS, the prod server also has a newer mlcp build (Hadoop library version 2.6.0 instead of 2.0.0-alpha). If we use the same command to import a directory with 2,000 files, it works on prod as well.

The job appears to get stuck counting the number of input files to process.

What could be the problem?
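To rule out the filesystem itself, one quick check (paths illustrative) is whether a plain traversal of the input directory is fast outside of mlcp:

# If this also hangs or crawls, the problem is the filesystem/mount,
# not mlcp's input-path enumeration.
time find /data/json-docs -type f | wc -l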

EDIT: We put mlcp in DEBUG mode and tested with a small archive (sample2.zip).

result:

[ashraf@77-72-150-125 ~]$ mlcp.sh import -host localhost -port 8140 -username ashraf -password duurz44m -input_file_path /home/ashraf/sample2.zip -input_compressed true  -mode local -output_uri_replace  "\".*,''\"" -output_uri_prefix incoming/linkedin/ -output_collections incoming,incoming/linkedin -output_permissions slush-dikw-node-role,read
15/07/16 16:36:31 DEBUG contentpump.ContentPump: Command: IMPORT
15/07/16 16:36:31 DEBUG contentpump.ContentPump: Arguments: -host localhost -port 8140 -username ashraf -password duurz44m -input_file_path /home/ashraf/sample2.zip -input_compressed true -mode local -output_uri_replace ".*,''" -output_uri_prefix incoming/linkedin/ -output_collections incoming,incoming/linkedin -output_permissions slush-dikw-node-role,read 
15/07/16 16:36:31 INFO contentpump.ContentPump: Hadoop library version: 2.6.0
15/07/16 16:36:31 DEBUG contentpump.ContentPump: Running in: localmode
15/07/16 16:36:31 INFO contentpump.LocalJobRunner: Content type is set to MIXED.  The format of the  inserted documents will be determined by the MIME  type specification configured on MarkLogic Server.
15/07/16 16:36:32 DEBUG contentpump.LocalJobRunner: Thread pool size: 4
15/07/16 16:36:32 INFO input.FileInputFormat: Total input paths to process : 1
15/07/16 16:36:33 DEBUG contentpump.LocalJobRunner: Thread Count for Split#0 : 4
15/07/16 16:36:33 DEBUG contentpump.CompressedDocumentReader: Starting file:/home/ashraf/sample2.zip
15/07/16 16:36:33 DEBUG contentpump.MultithreadedMapper: Running with 4 threads
15/07/16 16:36:33 DEBUG mapreduce.ContentWriter: Connect to localhost
15/07/16 16:36:33 DEBUG mapreduce.ContentWriter: Connect to localhost
15/07/16 16:36:33 DEBUG mapreduce.ContentWriter: Connect to localhost
15/07/16 16:36:33 DEBUG mapreduce.ContentWriter: Connect to localhost
15/07/16 16:36:34 INFO contentpump.LocalJobRunner:  completed 0%
15/07/16 16:36:39 INFO contentpump.LocalJobRunner:  completed 100%
2015-07-16 16:39:11.483 WARNING [19] (AbstractRequestController.runRequest): Error parsing HTTP headers: Premature EOF, partial header line read: ''
15/07/16 16:39:12 DEBUG contentpump.CompressedDocumentReader: Closing file:/home/ashraf/sample2.zip
15/07/16 16:39:12 INFO contentpump.LocalJobRunner: com.marklogic.contentpump.ContentPumpStats: 
15/07/16 16:39:12 INFO contentpump.LocalJobRunner: ATTEMPTED_INPUT_RECORD_COUNT: 1993
15/07/16 16:39:12 INFO contentpump.LocalJobRunner: SKIPPED_INPUT_RECORD_COUNT: 0
15/07/16 16:39:12 INFO contentpump.LocalJobRunner: Total execution time: 160 sec


Only the first JSON file ends up in the database; is the rest lost?

Could it be a newline issue in the JSON files?

(AbstractRequestController.runRequest): Error parsing HTTP headers: Premature EOF, partial header line read: ''
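To rule out malformed JSON, something along these lines could validate every document in the archive (a sketch; assumes jq is installed):

# Unpack the test archive and report any file jq fails to parse.
unzip -o sample2.zip -d sample2
for f in sample2/*.json; do
    jq empty "$f" >/dev/null 2>&1 || echo "invalid JSON: $f"
done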


Any clues would be great.

Hugo



1 answer


I cannot tell what is going on from this alone; this may be a case for MarkLogic Support. Could you send them or me an email with more details (and possibly the files)?

As a workaround: it's not hard to use the same version of mlcp on prod as you used on dev. Just put it next to the other one (or wherever you like) and make sure you reference that copy (hint: Roxy has an mlcp-home setting for this).
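A sketch of that workaround; the archive name and install path are assumptions, use whatever build you have on dev:

# Unpack the dev mlcp build next to the prod one and call it by full path.
unzip mlcp-dev-build.zip -d /opt
/opt/mlcp-dev-build/bin/mlcp.sh import -host localhost -port 8140 \
    -username user -password pass \
    -mode local -input_file_path /data/json-docs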



You might also consider zipping the JSON documents and using the -input_compressed option.
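Roughly like this, with illustrative paths:

# Zip the documents, then load the single archive with -input_compressed.
cd /data && zip -r json-docs.zip json-docs
mlcp.sh import -host localhost -port 8140 \
    -username user -password pass \
    -mode local \
    -input_file_path /data/json-docs.zip \
    -input_compressed true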

HTH!
