Why isn't my application using all cores on Mac OS X?

I have a simple pthread program that (should) spawn many threads that just spin around and consume the CPU. However, I never see this program take up more than 1 of my 4 cores on my Mac OS X Mavericks laptop.

Theories as to why this is happening:

  • Does this mean that the OS does not allow a single process to take over the machine?
  • Does the OS X scheduler have very strong processor affinity?
  • Is this a kernel setting that I can tweak somewhere?
  • Is OS X's pthread implementation flawed?

I have no idea.

I ask because I also have a serious application (written in D) for which I would like to use the whole processor for parallel work, but even the simplest pthread program does not go beyond 1 core.

#include <pthread.h>
#include <stdio.h>

/* Each thread runs this: a long loop that prints its counter. */
void *waste_time(void* a) {
    for (int i = 0; i < 10000000; i++) {
        printf("%d\n", i);
    }

    return NULL;
}

int main(void) {
    const int threads = 100;
    pthread_t thread[threads];

    /* Spawn the worker threads... */
    for (int i = 0; i < threads; i++) {
        pthread_create(&thread[i], NULL, waste_time, NULL);
    }

    /* ...and wait for them all to finish. */
    for (int i = 0; i < threads; i++) {
        pthread_join(thread[i], NULL);
    }

    return 0;
}


1 answer


Take the printf() call out of the waste_time() loop - it is called 10,000,000 times! Get rid of it entirely, or move it outside the loop, just before the return.



As it stands, waste_time() is dominated by that C library call, which does I/O and takes an internal lock to avoid catastrophic multithreaded access to stdout. That locking serializes almost everything your process does, which is why the OS can run it (mostly) on a single core.
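For example, a minimal sketch of the adjusted function might look like the following (the volatile counter is just one assumed way to keep the compiler from optimizing away the now-empty loop; it is not part of the original answer):

void *waste_time(void *a) {
    /* Pure CPU work: no per-iteration I/O, so no contention on the stdout lock. */
    volatile int i;
    for (i = 0; i < 10000000; i++) {
        /* spin */
    }

    /* One print per thread, outside the hot loop. */
    printf("done spinning\n");
    return NULL;
}

With the loop no longer blocking on I/O, the 100 threads become CPU-bound and should spread across all available cores.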
