FileSystemWatcher InternalBufferOverflow
I get a System.IO.InternalBufferOverflowException ("too many changes at once") when I try to monitor a folder on a Distributed File System (DFS) path. It works fine when the FileSystemWatcher monitors a local path, or a network path that is not on that filesystem.
I can get events for 1000+ files on a local path without a BufferOverflow exception, but when I copy a file to a folder on DFS I can't even get an event for one (to clarify: I get an error caused by ...).
I have already tried setting:
fileSystemWatcher.InternalBufferSize = 65536;
I'm not sure if this will help you, but the path looks like this:
\\corpnet\cloud\\Network\testFolder\myFolderToMonitor
Edit 1: I'm not sure why there are two double slashes in the path. I can monitor without problems any folder up to \\corpnet\cloud. I get errors when I try to monitor any folder starting with
...\\Network\...
Any hints from you are appreciated.
Thanks.
Sure, too many changes at once: this is the fire-hose problem. You have already increased the buffer size to the maximum allowed; Windows does not permit more. The buffer is allocated from precious memory, the kernel memory pool.
It could be a very active file server, but more often this is caused by a problem in your code. You are not drinking from the fire hose fast enough. It is imperative that your event handlers return as quickly as possible, so the buffer is emptied fast enough to keep up with the rate of change on the file server.
This mistake is made very often: a typical implementation does something unwise inside the handler, like copying the file, reading it, or looping until the file can be opened. Expensive stuff, and the open-in-a-loop mistake is especially common, because the file is very rarely usable at the moment the event fires; whatever application is modifying the file usually still has it open. There is no upper limit on how long it can hold the file lock. Obviously that will always eventually result in a buffer overflow.
So a properly written FileSystemWatcher event handler does nothing but quickly push the passed file path into a thread-safe queue, which should never take more than a microsecond. A separate thread then drains that queue, dealing with the possibility that the file cannot be opened yet.
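A minimal sketch of that pattern might look like the following. The watched path, the retry delay, and the class/method names are illustrative assumptions, not values from the question; the queue-and-worker structure is the part the answer describes.

```csharp
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

class WatcherSketch
{
    // Thread-safe queue shared between the event handler and the worker.
    static readonly BlockingCollection<string> Queue = new BlockingCollection<string>();

    static void Main()
    {
        var watcher = new FileSystemWatcher(@"\\server\share\folder"); // hypothetical path
        watcher.InternalBufferSize = 65536;                 // largest useful size
        // Handler does nothing but enqueue the path and return immediately.
        watcher.Created += (s, e) => Queue.Add(e.FullPath);
        watcher.EnableRaisingEvents = true;

        Task.Run(() => Consume());                          // drain on another thread
        Console.ReadLine();
    }

    static void Consume()
    {
        foreach (var path in Queue.GetConsumingEnumerable())
        {
            // The writer may still hold a lock when the event fires,
            // so retry until the file can actually be opened.
            while (true)
            {
                try
                {
                    using (var fs = File.Open(path, FileMode.Open,
                                              FileAccess.Read, FileShare.None))
                    {
                        // process the file here
                    }
                    break;
                }
                catch (IOException)
                {
                    Thread.Sleep(100);  // still locked; wait briefly and retry
                }
            }
        }
    }
}
```

`BlockingCollection<string>.Add` is cheap enough to keep the handler fast, and `GetConsumingEnumerable` blocks the worker thread until new paths arrive, so no polling of the queue itself is needed.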
I have the same problem. No problem watching a local folder and dropping 5 new files into it. But when I watch the network folder, I get the error "Too many changes in the directory at once". There are only 5 files in total.
Have you found a fix yet?
I can't really change the code, so my interim fix is to have the files go to a temp folder and poll that folder. When new files appear, I move them one at a time with a 500 ms delay.
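A rough sketch of that workaround might look like the following. The folder names and the one-second poll interval are assumptions; only the move-one-file-per-500-ms idea comes from the comment above.

```csharp
using System;
using System.IO;
using System.Threading;

class PollingSketch
{
    static void Main()
    {
        const string temp   = @"C:\temp\incoming";        // hypothetical staging folder
        const string target = @"\\server\share\watched";  // hypothetical watched folder

        while (true)
        {
            // Move whatever has accumulated, one file at a time,
            // so the watcher on the target folder sees a slow trickle
            // of changes instead of a burst.
            foreach (var file in Directory.GetFiles(temp))
            {
                File.Move(file, Path.Combine(target, Path.GetFileName(file)));
                Thread.Sleep(500);   // the commenter's 500 ms spacing
            }
            Thread.Sleep(1000);      // assumed poll interval
        }
    }
}
```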