Simulate a real UDP application and measure traffic load in OMNeT++

So I am experimenting with OMNeT++ and I have created a data center with a Fat Tree topology. Now I want to see how a UDP application behaves as it would in real life. I used the INET Framework and its UDP video streaming client/server applications (UDPVideoStreamCli / UDPVideoStreamSvr).

So my question is:

My network is working fine; that is not the problem. I want to measure the file received by the client and compare it with what the server sent. But even though I have put a lot of traffic onto the network (multiple UDP and TCP applications), the received data rate is exactly the same as the transmitted data rate, whereas in real life this rate would vary in such a dynamic environment. So how can I achieve realistic conditions in OMNeT++ (perhaps with packet loss, latency, etc.) so that I can measure these loads?

The .ini file I used:

[Config UDPStreamMultiple]


**.Pod[2].racks[1].servers[1].vms[2].numUdpApps = 1
**.Pod[2].racks[1].servers[1].vms[2].udpApp[0].typename = "UDPVideoStreamSvr"
**.Pod[2].racks[1].servers[1].vms[2].udpApp[0].localPort = 1000
**.Pod[2].racks[1].servers[1].vms[2].udpApp[0].sendInterval = 1s
**.Pod[2].racks[1].servers[1].vms[2].udpApp[0].packetLen = 20480B
**.Pod[2].racks[1].servers[1].vms[2].udpApp[0].videoSize = 512000B

**.Pod[3].racks[0].servers[0].vms[0].numUdpApps = 1
**.Pod[3].racks[0].servers[0].vms[0].udpApp[0].typename = "UDPVideoStreamSvr"
**.Pod[3].racks[0].servers[0].vms[0].udpApp[0].localPort = 1000
**.Pod[3].racks[0].servers[0].vms[0].udpApp[0].sendInterval = 1s
**.Pod[3].racks[0].servers[0].vms[0].udpApp[0].packetLen = 2048B
**.Pod[3].racks[0].servers[0].vms[0].udpApp[0].videoSize = 51200B


**.Pod[0].racks[0].servers[0].vms[0].numUdpApps = 1
**.Pod[0].racks[0].servers[0].vms[0].udpApp[0].typename = "UDPVideoStreamCli"
**.Pod[0].racks[0].servers[0].vms[0].udpApp[0].serverAddress = "20.0.0.47"
**.Pod[0].racks[0].servers[0].vms[0].udpApp[0].serverPort = 1000


**.Pod[1].racks[0].servers[0].vms[1].numUdpApps = 1
**.Pod[1].racks[0].servers[0].vms[1].udpApp[0].typename = "UDPVideoStreamCli"
**.Pod[1].racks[0].servers[0].vms[1].udpApp[0].serverAddress = "20.0.0.49"
**.Pod[1].racks[0].servers[0].vms[1].udpApp[0].serverPort = 1000

**.Pod[2].racks[0].servers[0].vms[1].numUdpApps = 1
**.Pod[2].racks[0].servers[0].vms[1].udpApp[0].typename = "UDPVideoStreamCli"
**.Pod[2].racks[0].servers[0].vms[1].udpApp[0].serverAddress = "20.0.0.49"
**.Pod[2].racks[0].servers[0].vms[1].udpApp[0].serverPort = 1000

**.Pod[2].racks[1].servers[0].vms[1].numUdpApps = 1
**.Pod[2].racks[1].servers[0].vms[1].udpApp[0].typename = "UDPVideoStreamCli"
**.Pod[2].racks[1].servers[0].vms[1].udpApp[0].serverAddress = "20.0.0.49"
**.Pod[2].racks[1].servers[0].vms[1].udpApp[0].serverPort = 1000


Thanks in advance.



1 answer


So, after a lot of research, thinking and testing, I managed to achieve the desired goal. I did the following:

Since the whole data center topology is built to reduce the load on the routers and to balance traffic, it is hard to take the measurements I needed there, so I created a small network instead: just three nodes (StandardHost from the INET Framework) and a simple UDP application that goes from node A to node B through a host in the middle (i.e. nodeA ---> midHost ---> nodeB). A few lines are needed in the .ini file, for example:



**.ppp[*].queueType = "DropTailQueue"
**.ppp[*].queue.frameCapacity = 50
**.ppp[*].numOutputHooks = 1
**.ppp[*].outputHook[*].typename = "ThruputMeter"
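
To actually see drops and queuing delay with this setup, one variation (a sketch using standard omnetpp.ini iteration syntax; the capacity values are just examples, not from the original answer) is to sweep the queue capacity across runs, so small queues start dropping packets under load:

```ini
# sweep the PPP queue capacity across runs; small values force tail drops
**.ppp[*].queue.frameCapacity = ${cap=10,25,50,100}
# run each configuration several times with different RNG seeds
repeat = 3
```

Each run then gives you a different loss/latency operating point at which to compare the client's received data rate with the server's transmitted one.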


These lines configure the queues on the links between the nodes and can be adapted to one's needs (for example by adjusting the frame capacity or the queue type). With this small network you can easily provoke congestion and collect the metrics you want. Hopefully this helps anyone who wants to do the same.
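If you would rather keep the original Fat Tree topology, an alternative sketch (assuming OMNeT++'s built-in `ned.DatarateChannel`; the channel name here is made up) is to give the links themselves a delay and a packet error rate in NED, so loss and latency appear even without congestion:

```ned
// LossyLink is a hypothetical name; datarate, delay and per are
// standard parameters of ned.DatarateChannel
channel LossyLink extends ned.DatarateChannel
{
    datarate = 100Mbps; // link speed
    delay = 50us;       // propagation delay
    per = 0.001;        // drop roughly 0.1% of packets at random
}
```

Using such a channel on the server-to-switch links makes the received data rate diverge from the transmitted one even at light load.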
