Best Practice for Calling Web APIs

Maybe this question has already been asked, but I haven't found a definitive answer. Let's say I have a Web API 2.0 application hosted on IIS. As I understand it, the best practice (to avoid deadlocks on the client) is to always use async methods for HttpClient calls made from GUI event handlers. That's fine, and it works. But what is the best practice for a client application that has no GUI (e.g. a Windows Service or console application) and can only make the call from synchronous methods? In that case, I am using the following logic:

void MySyncMethodOnMyWindowServiceApp()
{
    // Block on the async call from the synchronous entry point
    List<MyClass> list = GetDataAsync().Result.ToObject<List<MyClass>>();
}

async Task<JArray> GetDataAsync()
{
    var response = await Client.GetAsync(<...>).ConfigureAwait(false);
    return await response.Content.ReadAsAsync<JArray>().ConfigureAwait(false);
}

      

Unfortunately, this can still deadlock on the client, and it happens randomly on random machines.

The client application stops at this point and never returns:

var response = await Client.GetAsync(<...>).ConfigureAwait(false);

      

+3




3 answers


If it's something that can run in the background and doesn't need to be synchronous, try wrapping the code that calls the async method in Task.Run(). I'm not sure whether that will solve the "deadlock" problem (if it isn't actually a deadlock, it's a different problem), but if you don't have async all the way through, I'm not sure there is much advantage to async/await unless you push the work onto a background thread. I had a case where adding Task.Run() in several places around calls to async methods (in my case, in an MVC controller that I changed to be asynchronous) not only improved performance but also improved reliability under heavier load (I'm not sure it was a true deadlock, but it certainly seemed like something similar).

You will find that using Task.Run() is seen by some as a bad way to do this, but I really couldn't see a better way in my situation, and it genuinely looked like an improvement. Perhaps this is one of those cases where there is the perfect way to do it versus the way that makes it work in the imperfect situation you are in. :-)

[Update in response to requests for code]

So, as everyone else says, you should try to be "async all the way down". In my case, my data layer was not async, but my UI was. So I went async as far down as I could, and then wrapped my calls in Task.Run where it made sense. I think the trick is whether it makes sense for things to run in parallel; otherwise you are effectively still synchronous (using async and then immediately blocking while waiting for the response). I had several reads that I could do in parallel.

In the example above, I think you have to go asynchronous as far down as you can, and then at some point determine where you can spin off a task and perform the operation independently of the other code. Say you have an operation that saves data, but you don't need to wait for a response: you kick off the save and you're done. The only thing you have to watch out for is not closing the program without waiting for that thread/task to complete (see the sketch below). Where this makes sense in your code is up to you.
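As a rough sketch of that last point (keeping a handle on the background work so the process does not exit before it finishes), something like the following could work. The MyBackgroundSaver class and its SaveDataAsync method are hypothetical names, not from the original code, and Task.CompletedTask assumes .NET 4.6 or later.

class MyBackgroundSaver
{
    private Task _pendingSave = Task.CompletedTask;

    public void QueueSave(MyClass item)
    {
        // Fire off the save without awaiting it, but keep a handle to the task.
        _pendingSave = Task.Run(() => SaveDataAsync(item));
    }

    public void Shutdown()
    {
        // Before the process exits, block until the background save has finished.
        _pendingSave.Wait();
    }

    private async Task SaveDataAsync(MyClass item)
    {
        // ...real persistence logic would go here; Task.Delay is just a placeholder...
        await Task.Delay(100);
    }
}

A real implementation would need to track multiple outstanding saves (for example with Task.WhenAll), but the idea is the same: keep the task, and wait on it before shutting down.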

The syntax is pretty simple. I took the existing code, changed the controller action to async, and returned Task<T> where it previously returned T.

var myTask = Task.Run(() =>
{
    // ...some code that can run independently... In my case, loading data.
    // Return the loaded data from this lambda so it can be read via myTask.Result below.
});
// ...other code that can run at the same time as the above...

await Task.WhenAll(myTask, otherTask); // otherTask: another task started the same way
// ...or...
await myTask;

// At this point, the result is available from the task
myDataValue = myTask.Result;

      

See MSDN for possible examples:   https://msdn.microsoft.com/en-us/library/hh195051(v=vs.110).aspx



[Update 2, more relevant to the original question]

Let's say your data read is exposed as an asynchronous method:

private async Task<MyClass> Read()

      

You can call it, hold on to the task, and await it when you are ready:

var runTask = Read();
// ...do other code that can run in parallel with the read...

var myClass = await runTask; // the MyClass result becomes available here

      

So, purely for the purpose of calling async code, which is what the original poster is asking about, I don't think you need Task.Run(); although unless you are inside an async method you can't use "await", so you need an alternative way to block and wait for the task.
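For completeness, here is a minimal sketch of what that looks like from a synchronous caller, using the Read() method above. The blocking call at the end is the "alternative to await": GetAwaiter().GetResult() behaves like Wait()/Result but rethrows the original exception instead of an AggregateException.

void MySyncCaller()
{
    // Start the async operation; it begins running immediately.
    Task<MyClass> runTask = Read();

    // ...do other synchronous work here while the read is in flight...

    // Block until the task completes and get its result.
    MyClass result = runTask.GetAwaiter().GetResult();
}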

The catch is that without running any code in parallel it gains you little, so whether multithreading makes sense for your workload is still the real question.
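A small sketch of what "several reads in parallel" can look like when the reads themselves are async methods. ReadCustomersAsync, ReadOrdersAsync, Customer and Order are hypothetical names used only for illustration.

// Start both reads without awaiting, so they run concurrently.
Task<List<Customer>> customersTask = ReadCustomersAsync();
Task<List<Order>> ordersTask = ReadOrdersAsync();

// Await both; total time is roughly the slower of the two rather than the sum.
await Task.WhenAll(customersTask, ordersTask);

List<Customer> customers = customersTask.Result;
List<Order> orders = ordersTask.Result;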

+2




Using Task<T>.Result is the equivalent of Wait: it performs a synchronous block on the thread. Having async methods in the WebApi and then having every caller block on them effectively makes the WebApi methods synchronous. Under load, you will hit thread starvation as soon as the number of simultaneous blocking waits exceeds the server/application thread pool.

So, remember the "async all the down" rule of thumb. You want the long running work (getting the List collection) to be asynchronous. If the caller needs to be in sync, you want to make this conversion from async to sync (using result or wait) as close to ground as possible. Keep their long async process and make sure to keep the sync part as short as possible. This will drastically reduce thread blocking time.

So, for example, you can do something like this.



void MySyncMethodOnMyWindowServiceApp()
{
    // The sync-over-async boundary stays as close to the top-level caller as possible
    List<MyClass> myClasses = GetMyClassCollectionAsync().Result;
}

async Task<List<MyClass>> GetMyClassCollectionAsync()
{
    var data = await GetDataAsync(); // <- long running call to remote WebApi?
    return data.ToObject<List<MyClass>>();
}

      

The key point is that during the long-running task the thread is released rather than blocked, because await is used.

Also, don't confuse responsiveness with scalability. Both are valid reasons for using async. Yes, responsiveness (avoiding blocking the UI thread) is one reason to use async, and you are correct that it does not apply to a backend service, but that is not why async is used in WebApi. WebApi is a GUI-less interface too; if the only benefit of asynchronous code were UI-layer responsiveness, WebApi code would be synchronous from start to finish.

The other reason for using async is scalability (avoiding thread starvation), which is why WebApi calls tend to be async. Keeping the long-running processes async helps IIS make better use of a limited number of threads. By default, there are only 12 worker threads per core. This can be raised, but it is not a magic bullet, as threads are relatively expensive (about 1 MB of overhead per thread). Awaiting lets you do more with less: more concurrent long-running processes on fewer threads before starvation occurs.
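To make the scalability point concrete, here is a minimal sketch of an async Web API 2 controller action. IMyDataService is a hypothetical dependency standing in for whatever produces the data, and GetMyClassCollectionAsync is the method from the example above; while the await is pending, the request's worker thread is returned to the IIS thread pool instead of sitting blocked.

using System.Collections.Generic;
using System.Threading.Tasks;
using System.Web.Http;

public class MyDataController : ApiController
{
    private readonly IMyDataService _service; // hypothetical data service

    public MyDataController(IMyDataService service)
    {
        _service = service;
    }

    // Async action: the worker thread goes back to the pool during the awaited call.
    public async Task<IHttpActionResult> Get()
    {
        List<MyClass> data = await _service.GetMyClassCollectionAsync().ConfigureAwait(false);
        return Ok(data);
    }
}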

+1




The problem you are having is caused by something else. Your use of ConfigureAwait(false) already prevents the classic deadlock here. Find and fix the actual error and you're fine.
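For reference, the classic deadlock being ruled out here only happens when a blocking caller holds a captured synchronization context. A minimal sketch, assuming a GUI or classic ASP.NET context; requestUri is a placeholder, and Client is the HttpClient from the question.

// Library method WITHOUT ConfigureAwait(false): each await captures the current
// synchronization context and tries to resume on it.
async Task<JArray> GetDataWithContextAsync(string requestUri)
{
    var response = await Client.GetAsync(requestUri);
    return await response.Content.ReadAsAsync<JArray>();
}

// A GUI or ASP.NET caller that blocks with .Result holds the context's only thread,
// while the continuations above are queued to resume on that same thread: deadlock.
void Caller()
{
    var data = GetDataWithContextAsync("http://example/api/data").Result;
}

With .ConfigureAwait(false) on the awaits, as in the question's code, the continuations run on thread pool threads instead, so this particular deadlock cannot occur; and a console app or Windows Service has no such context to capture in the first place, which is why the hang must come from something else.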

See "Should we switch to using asynchronous I/O by default?", to which the answer is no. You should decide case by case and choose async when the benefits outweigh the costs. It is important to understand that asynchronous I/O has a performance cost of its own. In non-GUI scenarios, only a few specific workloads benefit from asynchronous I/O at all. The benefits can be huge, but only in those cases.

Here's another helpful post: fooobar.com/questions/264633/...

0








