What is the relative latency difference between in-process, same-machine, and cross-machine calls?

Ignoring payload size, what is the relative difference in latency between an in-process call (say in C++ or Java), a socket call to a process on the same machine, and a socket call to a process on a different machine? This can be expressed as a minimum latency in ns/ms or in relative orders of magnitude.

I'm looking for something similar to this:

http://duartes.org/gustavo/blog/post/what-your-computer-does-while-you-wait

... but one that extends to in-process and network calls (assume a fast intranet).



2 answers


These numbers aren't precise, but they give a rough sense of the relationships:

method call - ~100s of ns

synchronized method call - ~1,000s of ns

reflective method call - low ~10,000s of ns

loopback socket call (process on the same machine) - ~30,000-150,000 ns

local network (LAN) - 1-2 ms

Internet - 30-100 ms
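If you want to sanity-check the in-process numbers yourself, here is a minimal Java micro-benchmark sketch (the class and method names are just placeholders I picked). It is not a rigorous harness; a tool like JMH handles JIT warm-up and dead-code elimination far better, so treat the output as a rough indication only.

    import java.lang.reflect.Method;

    public class CallLatency {
        static long counter;                    // side effect so the JIT can't drop the calls

        static void plainCall()             { counter++; }
        static synchronized void syncCall() { counter++; }

        public static void main(String[] args) throws Exception {
            final int n = 10_000_000;
            Method reflective = CallLatency.class.getDeclaredMethod("plainCall");

            // Warm-up so the JIT compiles the hot paths before we time anything.
            for (int i = 0; i < n; i++) { plainCall(); syncCall(); reflective.invoke(null); }

            long t0 = System.nanoTime();
            for (int i = 0; i < n; i++) plainCall();
            System.out.printf("plain method call:  %6.1f ns%n", (System.nanoTime() - t0) / (double) n);

            t0 = System.nanoTime();
            for (int i = 0; i < n; i++) syncCall();
            System.out.printf("synchronized call:  %6.1f ns%n", (System.nanoTime() - t0) / (double) n);

            t0 = System.nanoTime();
            for (int i = 0; i < n; i++) reflective.invoke(null);
            System.out.printf("reflective call:    %6.1f ns%n", (System.nanoTime() - t0) / (double) n);

            System.out.println("counter = " + counter);  // keep the side effect observable
        }
    }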



Would pinging your local machine and pinging the remote machine give you a sense of the relationship? An in-process method call is, of course, on a completely different scale.
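Ping only exercises ICMP, so for the same-machine socket number a closer comparison is a tiny TCP echo over the loopback interface. A rough Java sketch, assuming an arbitrary port 9999; point the client at a remote host instead to get the cross-machine figure:

    import java.io.*;
    import java.net.*;

    public class LoopbackRoundTrip {
        public static void main(String[] args) throws Exception {
            // Single-connection echo server bound to loopback (port 9999 is arbitrary).
            ServerSocket server = new ServerSocket(9999, 1, InetAddress.getLoopbackAddress());
            Thread echo = new Thread(() -> {
                try (Socket s = server.accept()) {
                    s.setTcpNoDelay(true);
                    InputStream in = s.getInputStream();
                    OutputStream out = s.getOutputStream();
                    int b;
                    while ((b = in.read()) != -1) { out.write(b); out.flush(); }
                } catch (IOException ignored) { }
            });
            echo.start();

            try (Socket client = new Socket(InetAddress.getLoopbackAddress(), 9999)) {
                client.setTcpNoDelay(true);          // don't let Nagle batch the 1-byte pings
                OutputStream out = client.getOutputStream();
                InputStream in = client.getInputStream();

                // Warm-up round trips before timing.
                for (int i = 0; i < 1_000; i++) { out.write(1); out.flush(); in.read(); }

                final int rounds = 10_000;
                long t0 = System.nanoTime();
                for (int i = 0; i < rounds; i++) { out.write(1); out.flush(); in.read(); }
                System.out.printf("loopback TCP round trip: %.0f ns%n",
                                  (System.nanoTime() - t0) / (double) rounds);
            }
            server.close();
        }
    }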









