Shared memory performance and protection from other processes

I am trying to implement a JIT compiler (I have very ugly hobbies).

I would like to have one main process that stores some constant variables, and a second process (which is compiled just in time) that does some calculations and can read and write those constant variables.

The second process can be modified and recompiled, but the constant variables must remain the same between executions of the second process.

My first question is, is shared memory the right tool for this? (Also from a performance standpoint, because I want the execution to be as fast as possible.)

My second question is, if I use shared memory as described in shm_overview(7), it seems to me that any other process with the same UID can access it. How can I prevent this? I would like only the two processes above to have access to this shared memory.

3 answers


An alternative architecture that you may want to consider is dynamic loading. Instead of two processes, you have only the first; it uses dlopen() to load your newly compiled code. It calls the entry point of this "library", and that code has access to the entire address space, including the constant variables. When it returns, you unload the library, ready for the next "run".



Creating such a loadable library and calling it is quite simple, and it is faster than spawning a whole new process. There are no permission issues, since your single process decides what to load and run.
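
To make this concrete, here is a minimal sketch of the dlopen() approach. The file name jit_out.so and the entry point run() are assumptions made up for illustration; the real JIT would choose its own names and compile step.

    /* Minimal sketch of the dlopen() approach. "jit_out.so" and run() are
     * hypothetical names; error handling is kept short. */
    #include <dlfcn.h>
    #include <stdio.h>

    /* The constant variables live in the main process; code loaded with
     * dlopen() runs in the same address space and can simply receive a
     * pointer to them (or reference exported symbols directly). */
    static const double constants[] = { 1.0, 2.0, 3.0 };

    int main(void)
    {
        /* The compile step (e.g. cc -shared -fPIC -o jit_out.so jit_out.c)
         * would have run just before this point. */
        void *handle = dlopen("./jit_out.so", RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        int (*run)(const double *, int) =
            (int (*)(const double *, int))dlsym(handle, "run");
        if (!run) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            dlclose(handle);
            return 1;
        }

        printf("result = %d\n", run(constants, 3));

        dlclose(handle);   /* unload, ready for the next recompiled version */
        return 0;
    }

The loaded jit_out.c would only need to define int run(const double *constants, int n); on older glibc you also link the loader with -ldl.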

  • Yes, shared memory is the right tool for this. It will act (looking at the big picture) like a file that processes can read and write, with the differences that:

    • shared memory will be more efficient and
    • shared memory will not survive reboots.
  • I do not know of any hard way to limit the shared memory segment to only selected processes, excluding others with the same UID. Typically, if you own something, you have complete control over it, and processes with the same UID have the same access*. But if you create a shared memory segment with shmget() using IPC_PRIVATE as the key, it will be a little more difficult for unrelated processes to find it: the segment can only be attached via the id that shmget() returns. For some other process to discover that id, it would have to run ipcs and parse its output. You do, however, need a way to make the id available to your second process (the one that was compiled just in time), possibly as a command-line parameter or an environment variable; see the sketch after this list.
    _______________
    * except for differences in access caused by different GIDs or group memberships.
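
A minimal sketch of that idea, assuming the second program is an executable called ./jit_child and that the segment id is handed over in an environment variable named CONST_SHMID (both names are made up for illustration):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* IPC_PRIVATE: the segment has no key, only the id returned here.
         * Mode 0600 keeps other users out; processes with the same UID can
         * still attach if they learn the id (e.g. by parsing ipcs output). */
        int shmid = shmget(IPC_PRIVATE, 16 * sizeof(double), IPC_CREAT | 0600);
        if (shmid == -1) { perror("shmget"); return 1; }

        double *constants = shmat(shmid, NULL, 0);
        if (constants == (void *)-1) { perror("shmat"); return 1; }
        constants[0] = 3.14159;               /* fill in the constant variables */

        /* Hand the id to the just-compiled process through the environment. */
        char idbuf[32];
        snprintf(idbuf, sizeof idbuf, "%d", shmid);
        setenv("CONST_SHMID", idbuf, 1);

        if (fork() == 0) {
            execl("./jit_child", "jit_child", (char *)NULL);
            perror("execl");
            _exit(127);
        }
        wait(NULL);      /* recompile and re-run the child as often as needed;
                            the segment and its contents persist in between */

        shmdt(constants);
        shmctl(shmid, IPC_RMID, NULL);        /* remove when finally done */
        return 0;
    }

The child only has to do shmid = atoi(getenv("CONST_SHMID")) and then shmat(shmid, NULL, 0) to see the same constants.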





I would like only the above two processes to have access to this shared memory.

This is not realistic. Unless you resort to some additional security framework (grsecurity, SELinux and friends), the privileges defined by the standard UNIX environment are such that another process running with the same UID can take complete control of your processes, including stopping/restarting them, killing them, tracing them, and inspecting and modifying their entire memory. So even if you manage to hide the shared memory from the standard SHM discovery mechanisms in some way, you cannot completely prevent other processes from interfering.
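
To illustrate the point, here is a sketch (assuming a classic setup without extra hardening such as Yama's ptrace_scope; the pid and address are command-line placeholders) of how any same-UID process can attach to yours and rewrite its memory with ptrace(2):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(int argc, char **argv)
    {
        if (argc < 3) { fprintf(stderr, "usage: %s pid addr\n", argv[0]); return 2; }
        pid_t pid = (pid_t)atoi(argv[1]);
        unsigned long addr = strtoul(argv[2], NULL, 0);

        /* Attaching is permitted for any process with the same UID, barring
         * additional restrictions from Yama, SELinux, etc. */
        if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) { perror("attach"); return 1; }
        waitpid(pid, NULL, 0);                 /* target is now stopped */

        /* Read one word of the target's memory, then overwrite it. */
        long word = ptrace(PTRACE_PEEKDATA, pid, (void *)addr, NULL);
        printf("word at %#lx was %#lx\n", addr, word);
        ptrace(PTRACE_POKEDATA, pid, (void *)addr, (void *)0x42L);

        ptrace(PTRACE_DETACH, pid, NULL, NULL);
        return 0;
    }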
