Read a large document with the MarkLogic Node.js API
I am trying to read XML / JSON documents from MarkLogic using Node.js. I downloaded the Node.js client API from the following URL.
https://github.com/marklogic/node-client-api
It works great for small documents, around 500 KB, but our requirement is to read large documents, for example 2 MB or 10 MB.
This leads to the two cases below:
Case 1: When I read a document with the MarkLogic Node.js API, I expect the stream to deliver more than one chunk, but I only ever get a single chunk back, so this does not work for large documents.
// Set up the MarkLogic database client (same connection details as in case 2).
var marklogic = require('marklogic');
var db = marklogic.createDatabaseClient({
  host: '<Host server>',
  port: 8007,
  user: '<Username>',
  password: '<password>',
  authType: 'DIGEST'
});

var chunks = 0;
var length = 0;
db.documents.read("test.xml").stream('chunked').
  on('data', function(chunk) {
    console.log(chunk);
    console.log(chunk.length);
    chunks++;
    length += chunk.length;
  }).
  on('error', function(error) {
    console.log(JSON.stringify(error));
  }).
  on('end', function() {
    console.log('read ' + chunks + ' chunks of ' + length + ' length');
    console.log('done');
  });
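For what it is worth, this is roughly how I intend to reassemble the chunks once streaming works as expected (just a sketch; buffers and fullDocument are my own names):

var buffers = [];
db.documents.read("test.xml").stream('chunked').
  on('data', function(chunk) {
    buffers.push(chunk);                      // each chunk arrives as a Buffer
  }).
  on('error', function(error) {
    console.log(JSON.stringify(error));
  }).
  on('end', function() {
    // Reassemble the full document from the collected chunks.
    var fullDocument = Buffer.concat(buffers).toString();
    console.log('assembled ' + fullDocument.length + ' characters');
  });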
Case 2: When I read the same large documents (2 MB or 10 MB) using the "http-digest-client" package, it works fine and I get the full XML as a response.
var digest = require('http-digest-client')('<Username>', '<password>');
digest.request({
  host: '<Host server>',
  path: '/v1/documents?uri=test.xml',
  port: 8007,
  method: 'GET'
}, function (res) {
  reply(res);
});
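For completeness, the way I confirm the full XML arrives in case 2 is by buffering the standard Node.js response stream before replying (a sketch; collectBody is my own helper, called inside the request callback above instead of reply(res)):

// res is a plain http.IncomingMessage, so its body can be buffered as usual.
function collectBody(res, done) {
  var parts = [];
  res.on('data', function(part) {
    parts.push(part);                         // body arrives in pieces
  });
  res.on('end', function() {
    done(Buffer.concat(parts).toString());    // the complete XML document
  });
}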
I also tested with a large document using the read-stream example from the repository (see the URL below), but I get the same result described in case 1 above.
https://github.com/marklogic/node-client-api/blob/develop/examples/read-stream.js#L27
As per my requirement, I would like to read a large document using the MarkLogic Node.js API (case 1).
- How can I read a large document using the MarkLogic Node.js API?
- Is it possible to increase the pool memory size or any other memory size?
- Is this problem related to memory size?