Running the Twitter streaming API in the background on OpenShift

I have a JavaScript script using the twit library that successfully connects to Twitter's streaming API. My question is: what would be the best way to set this up so it runs continuously? Or would it be better to switch to the search API? I'm just trying to collect tweets matching a few keywords and save them to MongoLab. Would a cron job be better for this? I plan to use OpenShift to host the streaming and processing.

I'm looking for guidance on the best route so I don't have to constantly monitor the process and check that it is still collecting tweets.

Thanks!

var Twit = require('twit');
var MongoClient = require('mongodb').MongoClient;

var T = new Twit({
    consumer_key: '***',
    consumer_secret: '***',
    access_token: '***',
    access_token_secret: '***'
});

var url = "***";

MongoClient.connect(url, function (err, db) {
    if (err) {
        console.log(err);
        return;
    }
    var col = db.collection('test');

    // filter the public stream on keywords
    var stream = T.stream('statuses/filter', { track: ['#food', 'drinks'] });

    stream.on('tweet', function (data) {
        // log the tweet id (logging the raw object would print "[object Object]")
        console.log("tweet: " + data.id);
        col.insert(data, function (err, result) {
            if (!err) {
                console.log("insert successful on tweet: " + data.id);
            } else {
                console.log(err);
            }
        });
    });
});
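Since the goal is to run unattended, one option (a sketch, not something I've run on OpenShift) is to handle the stream's lifecycle events and reconnect with a backoff. The `'connected'` and `'disconnect'` events are part of the twit library; the `backoffDelay` helper and its constants are my own illustration:

var Twit = require('twit');

// Hypothetical helper: exponential backoff capped at five minutes.
function backoffDelay(attempt) {
    var base = 5000;     // 5 s initial delay
    var cap = 300000;    // 5 min ceiling
    return Math.min(base * Math.pow(2, attempt), cap);
}

// Sketch of wiring it into the stream from the snippet above:
// var attempt = 0;
// stream.on('connected', function () { attempt = 0; });
// stream.on('disconnect', function (msg) {
//     stream.stop();
//     setTimeout(function () { stream.start(); }, backoffDelay(attempt++));
// });

console.log(backoffDelay(0));  // 5000
console.log(backoffDelay(3));  // 40000
console.log(backoffDelay(10)); // 300000 (capped)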


1 answer


Someone else might be able to provide a better answer, but I think using the OpenShift cron cartridge is the way to go. You can configure it to run jobs on your cartridge at set intervals, so you don't have to run them manually. Here is an OpenShift blog article to get you started: https://blog.openshift.com/getting-started-with-cron-jobs-on-openshift/
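If you do go the cron route, note that cron pairs more naturally with the REST search API than with a long-lived stream: each run queries recent matches, saves them, and exits. The streaming `track` keywords would need to be joined into a single search query; the `trackToQuery` helper below is my own illustration (Twitter's search syntax ORs terms explicitly), and the commented usage assumes the `T` instance from the question:

// Hypothetical helper: turn the stream's `track` array into a search-API query.
function trackToQuery(track) {
    return track.join(' OR ');
}

console.log(trackToQuery(['#food', 'drinks'])); // "#food OR drinks"

// Sketch of a one-shot, cron-friendly run against twit's REST endpoint:
// T.get('search/tweets', { q: trackToQuery(['#food', 'drinks']), count: 100 },
//     function (err, data) {
//         // insert data.statuses into Mongo, then close the connection and exit
//     });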


