REST WCF - Stream loading is VERY slow with 65,535-byte (64 KB) chunks that cannot be modified

We have a WCF method, exposed via REST, that returns a stream. We compared a regular download (from a website) with the WCF method and measured the following for a 70 MB file:

  • on a regular site, loading takes ~10 seconds (block size: 1 MB)
  • with the WCF method, it takes ~20 seconds (the block size is ALWAYS 65,535 bytes)

We have a user flow that actually spills over into another product, which makes the time difference even worse: 1 minute for a regular site versus 2 minutes through WCF.

Because we need to support very large files, this becomes crucial.

We resorted to debugging and found that WCF always calls the returned stream's Read method with a block size of 65,535 bytes, which is what causes the sluggishness.
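
To show how this can be observed, here is a minimal diagnostic sketch (our own illustration, not code from the service; the class name ReadSizeLoggingStream is just a placeholder): a pass-through stream that logs the count WCF passes to every Read call. Wrap the stream you return from the service method with it; every logged count comes out as 65,535.

    using System;
    using System.IO;

    // Hypothetical pass-through wrapper: logs the chunk size of every Read call.
    public class ReadSizeLoggingStream : Stream
    {
        private readonly Stream m_Inner;

        public ReadSizeLoggingStream(Stream inner) { m_Inner = inner; }

        public override int Read(byte[] buffer, int offset, int count)
        {
            Console.WriteLine("WCF requested {0} bytes", count); // prints 65535 every time
            return m_Inner.Read(buffer, offset, count);
        }

        public override bool CanRead  { get { return m_Inner.CanRead; } }
        public override bool CanSeek  { get { return false; } }
        public override bool CanWrite { get { return false; } }
        public override long Length   { get { return m_Inner.Length; } }
        public override long Position
        {
            get { return m_Inner.Position; }
            set { throw new NotSupportedException(); }
        }

        public override void Flush() { m_Inner.Flush(); }
        public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
        public override void SetLength(long value) { throw new NotSupportedException(); }
        public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }

        protected override void Dispose(bool disposing)
        {
            if (disposing) { m_Inner.Dispose(); }
            base.Dispose(disposing);
        }
    }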

We tried several server configurations, such as this:

Endpoint:

   <endpoint address="Download" binding="webHttpBinding" bindingConfiguration="webDownloadHttpBindingConfig"  behaviorConfiguration="web" contract="IAPI" />
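
(Note: the behaviorConfiguration="web" referenced above is not shown in the post; presumably it maps to a standard webHttp endpoint behavior along these lines - an assumption, included only for completeness:)

    <behaviors>
      <endpointBehaviors>
        <behavior name="web">
          <webHttp/>
        </behavior>
      </endpointBehaviors>
    </behaviors>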


Binding:

    <binding name="webDownloadHttpBindingConfig" maxReceivedMessageSize="20000000" maxBufferSize="20000000" transferMode="Streamed">
      <readerQuotas maxDepth="32" maxStringContentLength="20000000" maxArrayLength="20000000" maxBytesPerRead="20000000" maxNameTableCharCount="20000000"/>
      <security mode="Transport">
        <transport clientCredentialType="None" proxyCredentialType="None" realm=""/>
      </security>
    </binding>

The client is a plain REST client (it cannot use a WCF binding; we don't want to reference WCF) and is built like this:

    System.Net.HttpWebRequest request = (HttpWebRequest)WebRequest.Create(CombineURI(BaseURL, i_RelativeURL));

    request.Proxy = null; // We are not using a proxy
    request.Timeout = i_Timeout;
    request.Method = i_MethodType;
    request.ContentType = i_ContentType;

    string actualResult = string.Empty;
    TResult result = default(TResult);
    if (!string.IsNullOrEmpty(m_AuthenticationToken))
    {
        request.Headers.Add(ControllerConsts.AUTH_HEADER_KEY, m_AuthenticationToken);
    }

    using (var response = request.GetResponse())
    {
        using (Stream responseStream = response.GetResponseStream())
        {
            byte[] buffer = new byte[1048576]; // 1 MB client-side read buffer

            int read;
            while ((read = responseStream.Read(buffer, 0, buffer.Length)) > 0)
            {
                o_Stream.Write(buffer, 0, read);
            }
        }
    }

Basically, we are just copying stream to stream.
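
(For reference, since .NET 4 the manual loop above is equivalent to Stream.CopyTo with an explicit buffer size:)

    using (var response = request.GetResponse())
    using (Stream responseStream = response.GetResponseStream())
    {
        // Same copy as above; 1048576 is the buffer size in bytes (1 MB).
        responseStream.CopyTo(o_Stream, 1048576);
    }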

So whatever we do, the server ALWAYS gets a chunk size of 65,535 bytes (we tried several client/server configurations).

What are we missing?

Thanks!

== EDIT 8/4/15 Microsoft's answer == Hi, we've worked with Microsoft on this case; this is their answer:

When a WCF client calls a WCF method that returns a stream, it actually gets a reference to a MessageBodyStream instance. MessageBodyStream ultimately relies on WebResponseInputStream to do the actual reading, through this relationship graph:

  • MessageBodyStream has a member, message, that refers to an InternalByteStreamMessage instance
  • InternalByteStreamMessage has a member, bodyWriter, that refers to a StreamBasedStreamedBodyWriter instance
  • StreamBasedStreamedBodyWriter has a member, stream, that refers to a MaxMessageSizeStream instance
  • MaxMessageSizeStream has a member, stream, that refers to a WebResponseInputStream instance

When you call Read() on the stream, WebResponseInputStream.Read() is ultimately called (you can verify this yourself by setting a breakpoint in Visual Studio; one caveat: the "Just My Code" option under Debugging must be disabled for the breakpoint to be hit). The relevant part of WebResponseInputStream.Read() looks like this:

    return BaseStream.Read(buffer, offset, Math.Min(count, maxSocketRead));

where maxSocketRead is defined as 64 KB. The comment above maxSocketRead says: "To avoid bloating the kernel buffers, we throttle our reads. Http.sys handles this for us, but System.Net does not." In other words, if you request too large a read, you exceed the kernel's own buffer size and performance drops because the kernel has to do extra work.

Is this causing a performance bottleneck? No, it shouldn't. Reading too few bytes at a time (say, 256 bytes) would degrade performance, but 64 KB should be a value that delivers good performance. In such cases the real bottleneck is usually the network bandwidth, not how quickly the client reads the data. To maximize performance, it is important that the read loop is as tight as possible (in other words, that there are no significant delays between reads). Also keep in mind that objects larger than 85,000 bytes (roughly 80 KB) go on the Large Object Heap (LOH) in .NET, which has less efficient memory management than the "normal" heap (it is not compacted under normal conditions, so memory fragmentation can occur).
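
To illustrate the LOH point with a small sketch of our own: arrays of 85,000 bytes or more are allocated on the Large Object Heap, which the GC reports as generation 2 even for brand-new objects. The 1 MB client buffer in the code above is such an object (usually harmless for a single long-lived buffer, but worth knowing):

    using System;

    class LohDemo
    {
        static void Main()
        {
            byte[] small = new byte[80 * 1024];   // 81,920 bytes: small-object heap
            byte[] large = new byte[1024 * 1024]; // 1 MB: Large Object Heap

            Console.WriteLine(GC.GetGeneration(small)); // 0 - starts in gen 0
            Console.WriteLine(GC.GetGeneration(large)); // 2 - LOH objects report gen 2
        }
    }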

1 answer


We worked with Microsoft on this case; their full answer is quoted in the question edit above (== EDIT 8/4/15 ==).

Possible solution: cache large amounts in memory (e.g., using a MemoryStream): while WCF calls your custom stream's Read, read ahead from the source and cache about 1 MB internally, more or less, whatever you want.

Then, when the 1 MB (or whatever value) is used up, serve WCF's reads from it and keep refilling the cache in large chunks, as sketched below.

This has not been tested, but I think it should address the performance issue.
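
One untested way to implement this on the service side is to wrap the source stream in a BufferedStream: WCF's repeated 64 KB Read() calls are then served from a 1 MB in-memory buffer that BufferedStream refills from the source in reads of up to 1 MB. (GetSourceStream is a placeholder for however the file is actually opened.)

    using System.IO;

    public Stream Download(string i_FileID)
    {
        Stream source = GetSourceStream(i_FileID); // placeholder: however the file is opened

        // WCF still calls Read() in 64 KB chunks, but BufferedStream refills its
        // internal buffer from the source in reads of up to 1 MB, so the (slow)
        // source sees far fewer, larger reads.
        return new BufferedStream(source, 1024 * 1024);
    }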
