Partial file uploads are automatically removed

I have C# code that uploads files to my Apache server via HttpWebRequest. While the upload is in progress, I can use ls -la on the server to watch the file size grow.

Now if I, for example, pull the network cable out of my computer, the partial upload remains on the server.

However, if I just close my C# application, the partially uploaded file gets deleted!

I guess this is because my streams are being closed gracefully. How can I prevent this behavior? I want partial uploads to remain on the server regardless of how the uploading application behaves.

I tried aborting my request stream in a destructor, as well as calling System.Environment.Exit(1); neither had any effect.

+3




6 answers


Pulling out a network cable is never equivalent to interrupting a stream or closing a socket, because it is a failure at a lower OSI layer.



Whenever the application is closed, the network session is terminated and any pending operation is canceled. I don't think there is a workaround, unless you programmatically split the file transfer into smaller chunks and save them as you go (this way you get a manual incremental transfer, but it requires some server-side support for assembling the chunks).
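As a rough sketch of that approach in C#, assuming a hypothetical server endpoint that appends each POSTed chunk to the target file (the URL, the endpoint behavior, and the chunk size are all illustrative, not part of the original answer):

using System.IO;
using System.Net;

class ChunkedUploader
{
    const int ChunkSize = 64 * 1024;

    static void Upload(string path, string url)
    {
        var buffer = new byte[ChunkSize];
        using (var file = File.OpenRead(path))
        {
            int read;
            while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Each chunk is a complete, self-contained request, so the
                // server keeps everything received so far even if the client
                // dies before sending the next chunk.
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "POST";
                request.ContentLength = read;
                using (var body = request.GetRequestStream())
                    body.Write(buffer, 0, read);
                request.GetResponse().Close();
            }
        }
    }
}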

+3




Write a simple HTTP proxy that keeps accepting connections but never closes the connection to your server.

Even easier with netcat 1.10 (although this will only accept one connection):



nc -q $FOREVER -l -p 12345 -c 'nc $YOUR_SERVER 80'

Then connect your C# client to localhost:12345.
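If you would rather stay in C#, here is a minimal sketch of such a proxy (the backend host is a placeholder; connections are deliberately never closed, so the server side never sees a graceful shutdown):

using System.Net;
using System.Net.Sockets;
using System.Threading;

class LingeringProxy
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 12345);
        listener.Start();
        while (true)
        {
            var client = listener.AcceptTcpClient();
            var server = new TcpClient("your.server.example", 80); // placeholder host
            new Thread(() => Pump(client.GetStream(), server.GetStream())).Start();
            new Thread(() => Pump(server.GetStream(), client.GetStream())).Start();
        }
    }

    static void Pump(NetworkStream from, NetworkStream to)
    {
        var buffer = new byte[8192];
        try
        {
            int n;
            while ((n = from.Read(buffer, 0, buffer.Length)) > 0)
                to.Write(buffer, 0, n);
        }
        catch (System.IO.IOException) { }
        // Deliberately no Close()/Dispose(): the backend connection stays
        // open until the proxy process itself exits.
    }
}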

+1




This might be a silly suggestion, but what if you call Process.GetCurrentProcess().Kill(); while the application is closing?
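A minimal sketch of that idea, assuming a console application (Kill() terminates the process immediately, so no finalizers or Dispose() calls get a chance to close the upload socket cleanly):

using System.Diagnostics;

class Program
{
    static void Main()
    {
        // ... start the upload ...

        // On shutdown, kill the process outright instead of returning from
        // Main, so nothing closes the connection gracefully.
        Process.GetCurrentProcess().Kill();
    }
}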

0




Before writing code to handle partial uploads, start by testing whether enabling keepalives in your Apache configuration solves your problem of ending up with partial uploads.

This can result in fewer disconnects and therefore less partial data to deal with. Such disconnects can be client- or server-related, but often they are caused by an intermediate node such as a firewall. Keepalives generate constant "dummy" traffic (0 bytes of data), advertising to all parties that the connection is still alive.

For a large site under heavy concurrent load, keepalives are a bad thing, so they are disabled by default: the option complicates connection management for Apache, prevents some connection-handling optimizations, and adds a small amount of extra network traffic. But perhaps you have a specialized use case where this is not a concern.

Keepalives will not help you at all if your clients are simply prone to crashing early (that is, if you always observe steady upload progress right up to the failure). They can help you greatly if the problem is network-related.

They will also help if your clients generate data gradually, with long delays between uploaded chunks.
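On the client side, .NET exposes TCP keepalives per connection through ServicePoint.SetTcpKeepAlive. A minimal sketch (the URL and the timing values are illustrative):

using System.Net;

class KeepAliveExample
{
    static HttpWebRequest CreateRequest()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://your.server.example/upload");
        // Send empty keepalive probes after 30 seconds of idleness, then once
        // per second, so intermediaries such as firewalls see a live connection.
        request.ServicePoint.SetTcpKeepAlive(true, 30000, 1000);
        return request;
    }
}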

0




Have you checked whether your application reaches

void FinishUpload(IAsyncResult result) {…}

( line 240 ) when the application is interrupted/killed? If so, you may not be responding to the callback. It's a little messy, but it might give you a place to start digging.
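For reference, here is a sketch of the asynchronous pattern this answer refers to, with FinishUpload as the completion callback (the URL is a placeholder, and whether this matches the asker's code at line 240 is an assumption):

using System;
using System.Net;

class AsyncUpload
{
    static void Start()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://your.server.example/upload");
        request.Method = "POST";
        // ... write the request body, then:
        request.BeginGetResponse(FinishUpload, request);
    }

    static void FinishUpload(IAsyncResult result)
    {
        var request = (HttpWebRequest)result.AsyncState;
        try
        {
            using (var response = (HttpWebResponse)request.EndGetResponse(result))
                Console.WriteLine("Upload finished: " + response.StatusCode);
        }
        catch (WebException ex)
        {
            // A breakpoint or log line here shows whether the callback still
            // runs when the application is interrupted or killed.
            Console.WriteLine("Upload aborted: " + ex.Status);
        }
    }
}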

0




Does Apache support the SendChunked property of HttpWebRequest?

If so, it's worth a try.

http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.sendchunked.aspx
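A sketch of what that looks like on the client side (the URL is a placeholder; SendChunked and AllowWriteStreamBuffering are existing HttpWebRequest properties):

using System.IO;
using System.Net;

class ChunkedTransferExample
{
    static void Upload(string path)
    {
        var request = (HttpWebRequest)WebRequest.Create("http://your.server.example/upload");
        request.Method = "POST";
        request.SendChunked = true;                // Transfer-Encoding: chunked
        request.AllowWriteStreamBuffering = false; // stream bytes as they are written
        using (var body = request.GetRequestStream())
        using (var file = File.OpenRead(path))
            file.CopyTo(body); // available since .NET 4
        request.GetResponse().Close();
    }
}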

0








