Can you overflow the message queue of an Erlang process?

I'm still in the Erlang learning phase, so I might be wrong, but this is how I understood the process message queue.

A process can sit in a main receive loop, handling certain types of messages, and later move into a second receive loop that waits for a different message type. If messages intended for the first loop arrive while the process is in the second loop, they are simply queued and ignored for the time being; the process only handles the messages it can match in the loop it is currently in. When it re-enters the first receive loop, it starts scanning the queue from the beginning again and processes the messages it can now match.

Now my question is: if this is how Erlang works and I have understood it correctly, what happens when a malicious process sends a bunch of messages that the receiving process will never handle? Will the queue eventually overflow and make the process fail, or how should I deal with this? I'll give an example to illustrate what I mean.

Now, if the malicious process holds on to the Pid and repeatedly does

Pid ! {maliciousdata, LotsOfData}

will these messages get filtered out, since they will never be processed, or will they just queue up?

startproc() -> firstloop(InitValues).

firstloop(Values) ->
  receive
    retrieveinformation ->
      WaitingList=askforinformation(),
      retrieveloop(WaitingList);
    dostuff ->
      NewValues=doingstuff(),
      firstloop(NewValues);
    sendmeyourdata ->
      sendingdata(Values),
      firstloop(Values)
  end.

retrieveloop([],Values) -> firstloop(Values).
retrieveloop(WaitingList,Values) ->
  receive
    {hereismyinformation,Id,MyInfo} ->
      NewValues=dosomethingwithinfo(Id,MyInfo),
      retrieveloop(lists:delete(Id, WaitingList), NewValues)
  end.
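And the kind of malicious sender I have in mind would be something like this (just a sketch I made up to illustrate; flood/2 and the payload are invented):

flood(_Pid, 0) -> ok;
flood(Pid, N) ->
  Pid ! {maliciousdata, lists:duplicate(1024, N)},  % never matched by any receive clause above
  flood(Pid, N - 1).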

      



2 answers


There is no hard cap on the number of messages, and there is no built-in memory limit on a process's mailbox, but you can of course run out of memory if you accumulate billions of messages (or a few truly huge ones).

Long before you run out of memory because of a huge mailbox, you will either notice that selective receives are taking a long time (not that selective receive is a good pattern to follow most of the time...), or you will innocently peek into the process's message queue and realize you have opened Pandora's box in your terminal.
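For example, you can see how bad it is from the shell (a sketch; Pid is whatever process you suspect, and the count is invented):

erlang:process_info(Pid, message_queue_len).   % how many messages are waiting
%% -> {message_queue_len, 4811033}
erlang:process_info(Pid, messages).            % the Pandora's box: the messages themselves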



In the Erlang world this is usually treated as a throttling and monitoring problem. If you can't keep up and your problem is parallelizable, you need more workers. If you are already maxing out your hardware, you need more efficient code. If you are maxing out your hardware, can't get any more of it, and are still overwhelmed, then you need to decide how to implement pushback or load shedding.
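As a rough sketch of load shedding (the threshold and handle_work/2 are made up), a worker can check its own queue length and simply drop work when it is too far behind:

worker_loop(State) ->
  receive
    {work, Item} ->
      %% Check our own mailbox length; shed load if we are too far behind.
      {message_queue_len, Len} = erlang:process_info(self(), message_queue_len),
      if
        Len > 10000 -> worker_loop(State);                   % drop Item on the floor
        true        -> worker_loop(handle_work(Item, State)) % handle_work/2 is a stand-in
      end;
    _Other ->
      worker_loop(State)                                     % ignore anything unexpected
  end.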



Unfortunately, there is no "message queue overflow"; the queue will simply grow until the VM crashes due to a memory allocation error.

The solution is to discard any invalid messages in the main loop, because you should not be receiving any {hereismyinformation, _, _} there except the ones you asked for via askforinformation(), thanks to the blocking nature of your process.



startproc() -> firstloop(InitValues).

firstloop(Values) ->
  receive
    retrieveinformation ->
      WaitingList=askforinformation(),
      retrieveloop(WaitingList, Values); % I assume you meant retrieveloop/2 here
    dostuff ->
      NewValues=doingstuff(),
      firstloop(NewValues);
    sendmeyourdata ->
      sendingdata(Values),
      firstloop(Values);
    _ -> 
      firstloop(Values) % you can't get {hereismyinformation, _,_} here so we can drop any invalid message
  end.

retrieveloop([],Values) -> firstloop(Values).
retrieveloop(WaitingList,Values) ->
  receive
    {hereismyinformation,Id,MyInfo} ->
      NewValues=dosomethingwithinfo(Id,MyInfo),
      retrieveloop(lists:delete(Id, WaitingList), NewValues)
  end.

      

Unexpected messages are not really the problem, because they are easy to avoid; the real problem is when the process's queue grows faster than it can be processed. There is a good framework for production systems that deals with this particular problem.
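Pushback can be as simple as making senders wait for a reply before they send more, so they can never run ahead of the receiver; a minimal sketch (the message shapes are my own, and this is essentially what gen_server:call gives you anyway):

%% The caller blocks until the server has handled the request, so its
%% sending rate is limited by the server's processing rate.
request(Server, Payload) ->
  Ref = make_ref(),
  Server ! {request, self(), Ref, Payload},
  receive
    {reply, Ref, Result} -> Result
  after 5000 ->
    {error, timeout}
  end.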
