When should I use a run loop in my program, and why?

My requirement is to call an API to request new information from my server every 6 seconds, so I wrote my code as shown below:

- (void)myBackgroundThread {
    // Poll until cancelled, sleeping six seconds between requests.
    while (!self.isCancelled) {
        [self callMyAPI];
        [NSThread sleepForTimeInterval:6];
    }
}

But today I found out that the Foundation framework provides run loops, so I can rewrite my code as below:

- (void)myBackgroundThread {
    NSTimer *timer = [NSTimer timerWithTimeInterval:6
                                             target:self
                                           selector:@selector(callMyAPI)
                                           userInfo:nil
                                            repeats:YES];
    // The run loop retains the timer once it is added (and
    // timerWithTimeInterval: returns an autoreleased object), so no
    // explicit release is needed.
    [[NSRunLoop currentRunLoop] addTimer:timer forMode:NSDefaultRunLoopMode];
    while (!self.isCancelled) {
        [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode
                                 beforeDate:[NSDate distantFuture]];
    }
}

However, I don't know whether this is better for my job than my original version. If so, why? And how can I measure the difference in efficiency (or other properties) between the two?

Thanks!

1 answer


I don't think there is any need to create a new run loop for the timer at all. I would suggest one of two approaches:

  • Schedule the NSTimer on the main run loop, but have the method it
    calls dispatch the actual request to the background (see the sketch
    after this list).

  • Create a dispatch timer that runs on a designated background
    dispatch queue. To do this, define a dispatch timer property:

    @property (nonatomic, strong) dispatch_source_t timer;
    

    and then instantiate the dispatch timer source and start it running on your designated GCD queue:

    dispatch_queue_t queue = dispatch_queue_create("com.domain.app.polltimer", NULL);
    self.timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);

    // Fire immediately, then every kPollFrequencySeconds, with one
    // second of leeway.
    dispatch_source_set_timer(self.timer,
                              dispatch_walltime(NULL, 0),
                              kPollFrequencySeconds * NSEC_PER_SEC,
                              1ull * NSEC_PER_SEC);

    dispatch_source_set_event_handler(self.timer, ^{
        <#code to be run upon timer event#>
    });

    // Dispatch sources start out suspended, so resume to start the timer.
    dispatch_resume(self.timer);
    
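For the first option, a minimal sketch might look like this (startPolling, pollTimer, and pollTimerFired: are illustrative names, not part of the questioner's code; callMyAPI is the questioner's method):

@property (nonatomic, strong) NSTimer *pollTimer;

- (void)startPolling {
    // scheduledTimerWithTimeInterval: adds the timer to the current run
    // loop (call this from the main thread), so no extra run-loop
    // management is needed.
    self.pollTimer = [NSTimer scheduledTimerWithTimeInterval:6.0
                                                      target:self
                                                    selector:@selector(pollTimerFired:)
                                                    userInfo:nil
                                                     repeats:YES];
}

- (void)pollTimerFired:(NSTimer *)timer {
    // The timer fires on the main thread; hop to a background queue
    // before doing any networking.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [self callMyAPI];
    });
}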

There are times when creating a new run loop is useful, but in this simple scenario it seems unnecessary.


Having said that, it probably doesn't make sense to use a timer to kick off a network request every six seconds. Instead, you probably want to start the next request six seconds after the last one finishes. For a variety of reasons, your server might not respond within six seconds, and you don't want overlapping requests in those scenarios (which can happen if your requests are asynchronous).

So I would be inclined to have the completion block of callMyAPI simply do something like:

// Six seconds after this response arrives, issue the next poll request.
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(6.0 * NSEC_PER_SEC)), queue, ^{
    <#code to issue next request#>
});

This eliminates the need for timers (and custom run loops) entirely.
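To make the pattern concrete, here is a minimal sketch of a self-rescheduling poll. It assumes a hypothetical callMyAPIWithCompletion: method, an asynchronous variant of the questioner's callMyAPI that invokes its block when the response arrives:

- (void)pollServer {
    __weak typeof(self) weakSelf = self;
    [self callMyAPIWithCompletion:^{
        // Schedule the next request six seconds after this one finishes,
        // so requests can never overlap.
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(6.0 * NSEC_PER_SEC)),
                       dispatch_get_main_queue(), ^{
            [weakSelf pollServer];
        });
    }];
}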


Finally, if you really need to detect changes on the server at this rate, it may call for a completely different server architecture. For example, if you are checking every six seconds to see whether something has changed on the server, you might consider a socket-based implementation or push notifications. In both of those approaches, the server tells the client apps when a significant event occurs, rather than having the app, like Bart Simpson in the back seat of the car, constantly ask, "Are we there yet?"

The right architecture is probably a function of how often the server data can change and what the client app's requirements are.
