MongoDB + Mongoose QueryStream - after document change

I am trying to use Mongoose and its QueryStream in a scheduling application, but I may not understand how it works. I read this question here on SO, [New Mongoose QueryStream results], and it seems that I am correct, but someone please explain:

If I filter a request like this -

Model.find().stream()

when I add or change something that matches the .find() condition, it should emit a "data" event, right? Or am I completely wrong in my understanding of this?

For example, I'm trying to look at some data like:

 Events.find({'title':/^word/}).stream();


I change the titles in the mongo shell and don't see any changes in the stream.

Can someone explain why?



1 answer


Your understanding is indeed wrong: the stream is only the output stream of the current query's results, not something that "listens for new data". What is returned here is basically a node streaming interface, offered as an alternative to iterating a "cursor", or to the direct translation to an array that mongoose methods do by default.

So the "stream" doesn't "watch" anything. It's just a different way to deal with the normal results of a query, but in a way that doesn't "slurp" all the results into memory at once. Instead, it uses event listeners to handle each result as it is retrieved from the server cursor.

What you are actually talking about is a "tailable cursor", or a variation of it. In basic MongoDB operations, a tailable cursor can be implemented on a capped collection. This is a special type of collection with specific rules, so it may not be suitable for your purposes. Capped collections are intended for insert-only workloads, which typically suit event queues.

In a model that uses a capped collection (and only where a capped collection is set), you implement it like this:

var query = Events.find({ "title": /^word/ }).sort({ "$natural": -1 }).limit(1);
var stream = query.tailable({ "awaitdata": true }).stream();

// fires on each document received
stream.on("data", function(data) {
    console.log(data);
});

// fires if the cursor dies, e.g. when the collection is not capped
stream.on("error", function(err) {
    console.error(err);
});




"awaitdata" is an important option alongside "tailable", as it is what tells the query cursor to remain open and "tail" additions to the collection that match the query condition. But for this to work, your collection must be capped.

An alternative and more convenient approach is to do something like meteor does, where the "capped collection" that is actually tailed is the MongoDB oplog. This requires a replica set, but there is nothing wrong with having a single node as a replica set on its own. It is simply unwise to do this in production.

This is more involved than the simple answer, but the basic idea is that the "oplog" is a capped collection that you can "tail" for all writes to the database. The data from each event is then inspected to determine whether it was a write to the collection you want to monitor. That information can then be used to query the new data and do something like return the updated or new results to the client via a websocket or similar.

But the stream itself is just a stream. To "watch" for changes in a collection, you either need to implement it as a capped collection, or consider implementing a process based on watching the oplog, as described.







