Running a graphical display on multiple systems, keeping them in sync

I have a series of systems on a local network all running a synchronized display routine. For example, think of a chorus line. The routine they perform is fixed. I have each "client" download the entire routine, then check in with the central "server" at fixed points in the routine for synchronization. The routine itself is mundane, with maybe 20 possible instructions.

Each client runs the same routine, but at any given moment they can be doing completely different things. One part of the chorus line can be kicking left while the other kicks right, but always in time with each other. Clients can join and drop out at any time, but they are all assigned a part. If there is no one to run a part, it simply doesn't get run.

This is all coded in C#/.NET.

The client display is a Windows Forms application. The server accepts TCP connections and then services them in a loop, keeping a master clock of what is going on. Each client sends a signal saying "I have reached sync point 32" (or 19, or 5, or whatever), waits for the server's acknowledgment, and then carries on. Or the server may reply "No, you need to start at sync point 15".
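For illustration, here is a minimal sketch of that sync-point handshake on the client side, assuming a simple line-based protocol. The `SyncClient` class and the `SYNC`/`GO`/`GOTO` messages are made up for the example, not taken from the actual code:

```csharp
using System.IO;
using System.Net.Sockets;

class SyncClient
{
    readonly TcpClient _tcp;
    readonly StreamReader _reader;
    readonly StreamWriter _writer;

    public SyncClient(string host, int port)
    {
        _tcp = new TcpClient(host, port);
        NetworkStream stream = _tcp.GetStream();
        _reader = new StreamReader(stream);
        _writer = new StreamWriter(stream) { AutoFlush = true };
    }

    // Blocks until the server acknowledges the sync point.
    // Returns the sync point the client should continue from.
    public int ReportSyncPoint(int point)
    {
        _writer.WriteLine("SYNC " + point);
        string reply = _reader.ReadLine();      // e.g. "GO" or "GOTO 15"
        if (reply != null && reply.StartsWith("GOTO "))
            return int.Parse(reply.Substring(5));
        return point;                           // "GO": carry on from here
    }
}
```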

This all works great. There is a small lag between the first and last clients reaching a sync point, but it's hardly noticeable. It has been running for several months.

Then the specification changed.

Clients now need to respond to commands from the server in real time - this is no longer a pre-set dance program. The server will be sending instructions and the dance program will be assembled on the fly. So I get to redesign the protocol, the servicing loops, and the instruction encoding.

My toolbox includes anything in the standard .NET 3.5 toolkit. Installing new software is a pain, since there are so many systems (clients) involved.

I'm looking for suggestions on how to keep the clients in sync (some kind of locking mechanism? UDP? broadcast?), how to propagate the "dance program", anything that would make this easier than a hand-rolled TCP/IP scheme.

Please be aware that there are time and speed constraints. I could put the dance program in a network database, but I would be writing instructions rapidly and there would be many readers using a fairly heavyweight protocol (DBI, SqlClient, etc.) just to fetch a small bit of text. That seems too complicated. And I would still need something to keep them all in sync.

Suggestions? Opinions? Wild-ass guesses? Code examples?

PS: I can't really mark an answer as "correct" (since there is no single "correct" answer), but good suggestions will get +1 votes.



1 answer


I did something similar (quite a long time ago) with a synchronized bank of 4 displays, each driven by its own system, receiving messages from a central server.

The architecture we finally settled on, after a fair amount of testing, used a single "master". In your case, this would mean having one of your 20 clients act as the master and connecting it to the server over TCP.

The server then sends the entire series of commands for the routine through that one machine.

That machine then used UDP to send real-time instructions to each of the other machines (the 19 other clients on its local network) to update their displays. We used UDP for a couple of reasons: there was less overhead, which helped keep overall resource usage down. Also, since you are updating in real time, if one or two "frames" got out of sync, it was never noticeable, at least not noticeable enough for our purposes (a human sitting and interacting with the system).
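As a rough sketch of that fan-out, assuming one instruction byte per client; the `MasterFanOut` class, the endpoint list, and the one-byte-per-client layout are assumptions for illustration, not the original system:

```csharp
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

class MasterFanOut
{
    readonly UdpClient _udp = new UdpClient();
    readonly List<IPEndPoint> _clients;   // one endpoint per dancer/part, in part order

    public MasterFanOut(List<IPEndPoint> clients)
    {
        _clients = clients;
    }

    // frame holds one instruction byte per client, in part order.
    // Each client gets only its own byte; lost datagrams are simply skipped frames.
    public void Send(byte[] frame)
    {
        for (int i = 0; i < _clients.Count && i < frame.Length; i++)
        {
            byte[] datagram = new byte[] { frame[i] };
            _udp.Send(datagram, datagram.Length, _clients[i]);
        }
    }
}
```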



The key to making this work, however, is having an intelligent means of communication between the main server and the "master" machine - you want to keep the bandwidth as low as possible. In a case like yours, I would probably come up with a single binary blob that holds the current instruction for all 20 machines in its smallest possible form (maybe 20 bytes, or 40 bytes if you need them, etc.). The "master" machine then worries about relaying it to the other 19 machines as well as to itself.
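One possible layout for such a blob, purely as an assumption for illustration, is a small frame counter followed by one opcode byte per machine:

```csharp
class FrameBlob
{
    public const int MachineCount = 20;

    // instructions[i] is the opcode (one of ~20 commands) for machine i.
    public static byte[] Pack(ushort frameNumber, byte[] instructions)
    {
        byte[] blob = new byte[2 + MachineCount];
        blob[0] = (byte)(frameNumber >> 8);
        blob[1] = (byte)(frameNumber & 0xFF);
        for (int i = 0; i < MachineCount; i++)
            blob[2 + i] = instructions[i];
        return blob;
    }

    public static void Unpack(byte[] blob, out ushort frameNumber, out byte[] instructions)
    {
        frameNumber = (ushort)((blob[0] << 8) | blob[1]);
        instructions = new byte[MachineCount];
        System.Array.Copy(blob, 2, instructions, 0, MachineCount);
    }
}
```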

There are a few nice things about this approach - it is much easier for the server to feed timing to one machine in a cluster than to every machine in the cluster. It also lets you, for example, have a single centralized server "drive" multiple clusters efficiently, without ridiculous hardware requirements anywhere. And it keeps the client code very simple. The client just has to listen for a UDP datagram and do whatever it says - in your case, it sounds like that would be one of roughly 20 commands, so the client stays very simple.
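A minimal sketch of such a client loop, assuming each datagram carries just this client's instruction byte (as in the fan-out sketch above); the port and the `execute` callback are placeholders:

```csharp
using System.Net;
using System.Net.Sockets;

class DisplayClient
{
    public static void Listen(int port, System.Action<byte> execute)
    {
        using (UdpClient udp = new UdpClient(port))
        {
            IPEndPoint from = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                byte[] datagram = udp.Receive(ref from);   // blocks until a frame arrives
                if (datagram.Length > 0)
                    execute(datagram[0]);                  // one of ~20 commands
            }
        }
    }
}
```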

The "master" server is the most complex one. In our implementation, we actually had the same client code as the other 19 (as separate processes), and one "translation" that took a blob, split it into 20 pieces, and transmitted. It was pretty easy to write and worked very well.
