Architecture Considerations: Centralized Server vs. Localized Server Approach

My question has to do with the architecture of the application I'm currently working on. We currently install a server locally at each field site. That server receives data from the client, does some processing on it, and generates an output; a receipt is printed based on the output, and the output is then stored in a centralized database that downloads from the local server on the client boxes on an hourly schedule.

I want to know whether it is good practice to install the server locally on each client box, or whether a centralized server would be the better approach. When I asked, it was suggested that if we were to use a centralized server, latency would increase and speed would suffer, because every client request would hit the same server, increasing execution time and greatly reducing throughput.

Note:

The business application is logistics delivery and supply. The application generates all routing, rating, and other label-related information needed to send a package from source to destination. For example, companies like Apple and Dell ship millions and millions of packages, and this server does all the work of labeling, routing, and rating ... Hopefully this makes the picture clearer :)

Here, the client process handles millions and millions of transactions and therefore requires very high throughput.

Thanks.



4 answers


It depends on what system you have and what your requirements are.

One of the benefits of a centralized server model is that you can scale the number of clients and the number of servers independently to make the most of your hardware, and it also gives you redundancy if one of your servers crashes. SOA web services fit this model, for example. The trade-off is increased latency: if you have real-time systems with SLAs that require responses within a few milliseconds, then this is probably not the way to go.



Since it seems like you are after very quick response times, what you currently have is a perfectly reasonable design.

Synchronizing data with the central database on a schedule could be done differently: if you are looking for something closer to real time, a message queue might work, and it would probably make things simpler as well.
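As a rough illustration (not the answerer's implementation), here is a minimal in-process sketch in Python of the queue idea: each finished transaction is enqueued, and a background worker forwards it toward the central database as it arrives, so the hot path never waits on the upload. The names here (`send_to_central_db`, the `uploaded` list standing in for the central database) are assumptions for the example; in production you would use a durable broker rather than an in-memory queue.

```python
import queue
import threading

tx_queue = queue.Queue()
uploaded = []  # stand-in for the central database

def send_to_central_db(record):
    # Placeholder: in practice, an INSERT over a pooled DB connection
    # or a publish to a durable message broker.
    uploaded.append(record)

def forwarder():
    # Drains the queue in the background, one record at a time.
    while True:
        record = tx_queue.get()
        if record is None:      # sentinel: shut down cleanly
            break
        send_to_central_db(record)

worker = threading.Thread(target=forwarder, daemon=True)
worker.start()

# The transaction path just enqueues and returns, so label printing
# is never blocked by the central upload.
tx_queue.put({"package_id": 1, "label": "ROUTE-A"})
tx_queue.put({"package_id": 2, "label": "ROUTE-B"})
tx_queue.put(None)
worker.join()
```

Because the queue decouples the two sides, a slow or briefly unavailable central database only delays the uploads, not the local transactions.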



Client-server environments (including web front ends) have advantages and disadvantages, so the context of your application is critical. In your scenario you have distributed servers, so the workload is balanced; however, maintaining each of those servers (software, operations, reliability, etc.) is a nightmare. A centralized server gives you better serviceability, monitoring, and so on, but it also takes on a much larger workload.

The answer for your situation depends a lot on the needs of your application. While millions of transactions sounds like a lot, a well-designed application can handle that load reasonably well. However, you may be sending a significant amount of data with these transactional requests, which can make the process cumbersome and unreliable. Again, the application context is very important.

Based on the notes you provided, it sounds like a local server process handles transactions in real time but asynchronously uploads the data to the central DB on a schedule, minimizing its load. That is certainly not a bad approach, although it does add complexity to the environment.
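The "process locally, upload on a schedule" pattern described above can be sketched as follows. `local_store` and `central_db` are stand-in lists for the two databases, and the scheduler itself (cron, a timer thread, etc.) is left out; the point is that the real-time path touches only local state, while the batch path ships everything accumulated since the last run in one go.

```python
local_store = []   # stand-in for the local server's database
central_db = []    # stand-in for the central database

def record_transaction(tx):
    # Real-time path: touches only the local store, so the client
    # never waits on the network.
    local_store.append(tx)

def scheduled_sync():
    # Batch path: run hourly by a scheduler; one bulk upload replaces
    # thousands of individual network round trips.
    batch = list(local_store)
    local_store.clear()
    central_db.extend(batch)
    return len(batch)

record_transaction({"package_id": 1})
record_transaction({"package_id": 2})
sent = scheduled_sync()
```

The cost, as the answer notes, is that the central database is always up to one sync interval behind the field.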



I will gladly edit my answer if you can provide more details on your application.

Hope it helps.



Both approaches can work successfully.

The disadvantage of a store-and-forward system is that you will not have up-to-date information in a central location about what is happening at the shipping stations. The technical drawbacks of a more fully connected, centralized system are not necessarily latency and transaction throughput, since those can be addressed with more resources (a cost issue, not a technical one); rather, a fully connected system has more points of failure and no local fallback options.

On the cost side, although thicker clients lower bandwidth costs, administering those clients raises management costs. Management costs, even when mitigated, are labor and ancillary costs, and those often outweigh the raw technology costs.



As others have said, it all depends on what you are doing.

However, the most important thing to look at is how many times you cross machine boundaries. If you can minimize that, you will be in very good shape. In general, I would avoid RPC mechanisms wherever possible, since each remote call means crossing the machine boundary twice :)
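A back-of-the-envelope sketch of why boundary crossings dominate: with an assumed 5 ms round trip (the figure is illustrative, not from the question), rating 1000 packages with one remote call each costs 5 seconds of pure network time, while batching 500 per request costs 10 ms.

```python
ROUND_TRIP_MS = 5  # assumed network round-trip cost per remote call

def cost_per_item_rpc(n_packages):
    # One remote call per package: n round trips.
    return n_packages * ROUND_TRIP_MS

def cost_batched_rpc(n_packages, batch_size=500):
    # Packages grouped into batches: ceil(n / batch_size) round trips.
    n_calls = -(-n_packages // batch_size)  # ceiling division
    return n_calls * ROUND_TRIP_MS

per_item = cost_per_item_rpc(1000)  # 1000 round trips
batched = cost_batched_rpc(1000)    # 2 round trips
```

The arithmetic is trivial, but it is exactly this multiplier that makes a chatty per-package protocol against a central server so much slower than either batching or local processing.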

The problem with having a "server" on every local machine is simple: how do you maintain consistent state?

Also, your network topology will be an important factor. If everything is on a local subnet (ideally on the same switch), latency will not be an issue unless your networking code is badly designed. If you are going over the cloud, that is a different story.







