Creating many sockets in ZMQ - 'Too many open files' error
I am trying to create many inproc:// sockets from the same context in C. I can create 2036 sockets; when I try to create more, zmq_socket() returns NULL and zmq_errno() reports 24, 'Too many open files'. How can I create more than 2036 sockets? Moreover, inproc:// forces me to use only one context.
There are a few things I don't understand:
- the sockets are inproc, so why do they take up files?
- increasing ZMQ_MAX_SOCKETS doesn't help; the system file limit is the limiting factor
- I can't increase the file limit with ulimit on my Mac, and I have found no workaround
// the code is actually in cython and can be found here:
A multi-part answer:
inproc:// does not force you to have a common Context() instance, but it is convenient to have one, since signal/message transmission then involves no data transfers at all: it is zero-copy, just manipulating pointers to memory blocks in RAM, which is very fast.
Gathering the ZeroMQ-related facts first: there are about 70,000 ~ 200,000 file descriptors available for "sockets", governed by O/S kernel settings, but your posted goals are higher. Much higher.
Considering that your public git document for the multi-agent ABCE Project is about nanosecond shaving, an HPC-domain-class solution is needed (cit./emphasis added:) the impressive number of 1.073.545.225 agents is far more than even the most complex supercomputer can fit into memory, so next to that, a few small hundreds of thousands of file descriptors will not take you far.
At the same time, your project faces several challenges.
Peeling away the layers of the task, one by one:
File descriptors (FD) - Linux O / S level - System-wide restrictions:
To see the actual as-is state:
# cat /proc/sys/fs/file-max
To raise the system-wide limit, edit /etc/sysctl.conf:
# vi /etc/sysctl.conf
Add the config directive like this:
fs.file-max = 100000
Save and close the file.
Users need to log out and log back in for the changes to take effect, or simply enter the following command:
# sysctl -p
Check your settings with the command:
# cat /proc/sys/fs/file-max
User-specific (max) file descriptor (FD) limits:
Each user additionally has a per-user pair of limits ( soft-limit, hard-limit ):
# su - ABsinthCE
$ ulimit -Hn
$ ulimit -Sn
However, you can set specific limits for the user ABsinthCE (or any other user) by editing the file /etc/security/limits.conf, type:
# vi /etc/security/limits.conf
Set the user ABsinthCE appropriate soft and hard limits:
ABsinthCE soft nofile 123456
ABsinthCE hard nofile 234567
Nothing is free: each file descriptor takes up some kernel memory, so at some point you can and will run out of it. A few hundred thousand file descriptors are not a problem for server deployments that use event-driven architectures (epoll on Linux), but forget about trying to grow anywhere near the quoted 1.073.545.225 level.
Today it is possible to use a private HPC machine (not cloud illusion) with ~ 50-500 TB RAM.
However, the multi-agent architecture of the Project application should be re-designed so that it does not break on extreme resource allocations (made merely for the sake of syntactic simplicity). Professional multi-agent simulators are, precisely because of extreme scaling, very, VERY CONSERVATIVE about the resources locked per agent instance.
Thus the best results (in both performance and timing) are to be expected from direct memory-mapped operations. The inproc:// transport class is fine and does not require the Context() instance to allocate any IO threads (there is no data pump at all if only the inproc:// transport class is used), which is very efficient for the fast prototyping phase. The same approach would be risky to scale up to the significantly higher levels expected in production.
Scaling synchronised session throughput and accelerating simulated time are the next set of goals, aimed at growing the static scale of multi-agent-based simulations and raising simulator performance.
For the serious nanosecond hunt, follow the HPC insights of Bloomberg's excellent guru, John Lakos. Either pre-allocate (a common best practice in the RTOS domain) and do not allocate at runtime at all, or follow John's remarkable benchmark-tested ideas presented at ACCU 2017.
You can change these using sysctl (tested on Yosemite and El Capitan), but the problem is knowing what needs to be changed. Here is a post on this topic: Increasing the maximum number of tcp/ip connections in Linux. macOS is BSD 4.x based, and the BSD sysctl man pages are available online.
Note: sysctl is a private interface on iOS.