Best practices for setting up a shared database for a distributed C# application

I would like to know my options for the following scenario:

I have a C# WinForms application (developed in VS 2010) distributed to a number of offices around the country. The application communicates with a C# web service hosted on the core server in a separate location, and there is a single database (SQL Server 2012) in yet another location. (All servers run Windows Server 2008.)

The head office (where we are) uses the same interface to manage certain information in the database, which should be available to all offices in real time. At the same time, any data the offices change should be readily available to us at headquarters, as we have a real-time dashboard web application that tracks statistics across all sites.

Currently, users are complaining about the speed of the application; they say it is very slow. We operate in a business-critical environment where every minute of waiting can mean the loss of a customer.

I have researched the following options, but I don't come from a database background, so I'm not sure which is the best route for my scenario.

  • Terminal Services / sessions (which I just implemented at headquarters, and they say it is a great improvement, although there is a terrible lag on some users' desktops, which is not nice to work with).
  • Transactional replication (sounds quite plausible for my scenario, but it would require every office to have its own SQL Server database on its own server, and they have a tendency to "mess around" and break whatever is left to them! We also can't simply take over all of their servers, since they are franchises with their own IT staff on site).

I currently cache a lot of data when the application starts, but that takes 2-3 minutes to complete, which is just not acceptable!

Does anyone have any ideas?

1 answer


Since everything runs through a web service, there is no need to deploy additional SQL Server instances for the clients; the web service cannot talk to those local databases unless the web service itself is also deployed locally.

Before suggesting any specific improvements, you need to measure where your bottlenecks occur. What is the latency between the various clients and the web service, and then between the web service and the database? Are there waits inside the database itself? Once you know the worst offender, improve that first, then work your way down the list.
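To make this concrete, one rough sketch of timing each hop separately with a `Stopwatch` (the service proxy `MyServiceClient` and its `GetCustomers` call are hypothetical placeholders for your own generated proxy):

```csharp
using System;
using System.Diagnostics;

class LatencyProbe
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        // var data = new MyServiceClient().GetCustomers();  // full client -> WS -> DB round trip
        sw.Stop();
        Console.WriteLine("Round trip: {0} ms", sw.ElapsedMilliseconds);

        // Log the same kind of measurement inside the web service, around just
        // its DB call; the difference between the two numbers tells you whether
        // the time is spent on the wire, in the service, or in the database.
    }
}
```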

Some general thoughts though:



  • Move the WS closer to the database.
  • Cache data at the web-service level to save DB calls.
  • Profile the WS calls and try to reduce the bandwidth they use.
  • If the lookup data doesn't change that frequently, keep a local SQL Server CE copy to cache it and use the Microsoft Sync Framework to sync it with SQL Server.
  • Use SQL Server CE for everything on the client machine and use a background process to sync between the client and the WS.
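For the SQL Server CE options, a minimal sketch of what a local lookup read could look like. The file name `Cache.sdf`, the `Branches` table, and the method name are all assumptions; in practice the Sync Framework (or your own background process) would keep the local file in step with the central database:

```csharp
using System.Data.SqlServerCe;  // reference System.Data.SqlServerCe.dll

class LookupCache
{
    // Hypothetical lookup: reads from the local SQL CE file instead of
    // crossing the network to the central SQL Server.
    public static string GetBranchName(int branchId)
    {
        using (var conn = new SqlCeConnection("Data Source=Cache.sdf"))
        {
            conn.Open();
            using (var cmd = new SqlCeCommand(
                "SELECT Name FROM Branches WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", branchId);
                return (string)cmd.ExecuteScalar();  // local read, no network hop
            }
        }
    }
}
```

The point of the design is that every read the client makes repeatedly (branch lists, product catalogs, etc.) costs a local disk read instead of a WAN round trip; only the sync process pays the network cost, and it can do so in the background.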

UPDATE Two more thoughts after your comment. If your web service payloads are large, you can try adding compression to the web service (if it is not already implemented).
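On the client side, one piece of that is asking for gzip and letting the framework decompress transparently (the URL below is a placeholder, and the server side — IIS or your WCF host — must have compression enabled as well for this to help):

```csharp
using System.Net;

class CompressedCall
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/service");

        // Sends "Accept-Encoding: gzip, deflate" and decompresses the
        // response automatically before your code reads it.
        request.AutomaticDecompression =
            DecompressionMethods.GZip | DecompressionMethods.Deflate;

        using (var response = request.GetResponse())
        {
            // The stream arrives compressed over the wire but reads as
            // plain data here.
        }
    }
}
```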

You can also update your client to make the WS calls asynchronously, either on a background thread or, if you are using .NET 4.5, with async / await. This will at least let the user keep using the UI, though it won't by itself fix the data-load times.
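A sketch of what that looks like in a WinForms handler on .NET 4.5; `FetchCustomersFromService` stands in for your real (blocking) web-service call:

```csharp
using System.Threading.Tasks;
using System.Windows.Forms;

public partial class MainForm : Form
{
    private async void MainForm_Load(object sender, System.EventArgs e)
    {
        // The blocking call runs on a thread-pool thread, so the UI
        // stays responsive while it is in flight.
        var customers = await Task.Run(() => FetchCustomersFromService());

        // Execution resumes here back on the UI thread, so it is safe
        // to bind the result to controls, e.g.:
        // customerGrid.DataSource = customers;
    }

    private object FetchCustomersFromService()
    {
        // Placeholder for the real web-service proxy call.
        return null;
    }
}
```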
