In an ISAPI filter, what is a good approach to have a shared log file across multiple processes?

I have an ISAPI filter that runs on IIS6 or 7. When there are multiple worker processes ("Web Garden"), the filter will be loaded and run in each w3wp.exe.

How can I efficiently allow the filter to log its actions in a single summary log file?

  • Log messages from different (parallel) processes must not interleave with each other. In other words, a single log message emitted by any of the w3wp.exe processes should end up as one contiguous line in the log file.

  • There should be minimal contention for the log file. The websites can serve 100 requests per second.

  • Strict time ordering is preferred. In other words, if w3wp.exe process #1 emits a message at t1, then process #2 emits a message at t2, then process #1 emits a message at t3, the messages should appear in that time order in the log file.

The current approach I have is that each process has a separate log file. This has obvious disadvantages.

Some ideas:

  • Make one of the w3wp.exe processes the "owner of the log file" and funnel all log messages through that special process. This has problems when that process gets recycled.

  • Use an OS mutex to protect access to the log file. Is that performant enough? In that case each w3wp.exe would hold a FILE* open on the same filesystem file. Would I have to fflush the log file after every write? Would this work at all? (A minimal sketch of this idea appears below.)
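
For illustration, here is a minimal sketch of the mutex idea, assuming a placeholder mutex name and log path and omitting error handling; the point is that the flush happens while the mutex is still held, which is what keeps each message on one contiguous line:

    #include <windows.h>
    #include <cstdio>

    static HANDLE g_hLogMutex = NULL;   // named, so it is shared across processes
    static FILE*  g_pLogFile  = NULL;   // each process keeps its own FILE* open

    bool InitLog()
    {
        g_hLogMutex = CreateMutexA(NULL, FALSE, "Global\\MyIsapiLogMutex");
        if (g_hLogMutex == NULL)
            return false;
        return fopen_s(&g_pLogFile, "C:\\logs\\filter.log", "a") == 0;
    }

    void WriteLogLine(const char* line)
    {
        if (g_hLogMutex == NULL || g_pLogFile == NULL)
            return;
        WaitForSingleObject(g_hLogMutex, INFINITE);
        fprintf(g_pLogFile, "%s\n", line);
        fflush(g_pLogFile);   // flush while still holding the mutex, otherwise
                              // buffered data from different processes can interleave
        ReleaseMutex(g_hLogMutex);
    }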

Any suggestions?

+2




7 replies


At first I was going to say that I like your current approach best, because each process shares nothing, but then I realized they are probably all writing to the same hard drive anyway. So there is still a bottleneck where contention occurs. Or maybe the OS and the disk controllers are really smart about that?

I think you want the logging not to slow down the threads that are doing the real work.

So start another process on the same machine (at lower priority?) that actually writes the log messages to disk. Communicate with that process not via UDP as suggested elsewhere, but via memory shared between the processes, also known, loosely, as a memory-mapped file. Read up on memory-mapped files. At my company we found that memory-mapped files are much faster than TCP/IP loopback for communication on the same box, so I would guess they are faster than UDP too.

What you actually put in that shared memory could be, for starters, a std::queue whose push and pop are protected by a mutex. Your ISAPI threads would grab the mutex to put things onto the queue. The logging process would grab the mutex, pop things off the queue, release the mutex, and then write the entries to disk. The mutex protects only the updates to the shared memory, not the file writes, so in theory the mutex is held for a shorter time, creating less of a bottleneck.
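
Purely as a sketch of that idea, the shared memory could hold a fixed-size ring buffer rather than an actual std::queue (standard containers cannot safely live in shared memory without custom allocators). The object names and sizes below are placeholders, and the pop side in the logging process is omitted:

    #include <windows.h>

    const int SLOT_COUNT = 1024;   // arbitrary capacity
    const int SLOT_SIZE  = 512;    // arbitrary maximum message length

    struct LogRing {
        LONG head;                          // next slot the producers write
        LONG tail;                          // next slot the logger process reads
        char slots[SLOT_COUNT][SLOT_SIZE];  // the messages themselves
    };

    static HANDLE   g_hMutex = NULL;
    static LogRing* g_pRing  = NULL;

    bool OpenSharedLog()
    {
        // Every w3wp.exe and the logger process open the same named objects.
        // A fresh mapping backed by the page file starts out zero-filled,
        // so head and tail both begin at 0.
        g_hMutex = CreateMutexA(NULL, FALSE, "Global\\FilterLogMutex");
        HANDLE hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                         0, (DWORD)sizeof(LogRing), "Global\\FilterLogRing");
        if (g_hMutex == NULL || hMap == NULL)
            return false;
        g_pRing = (LogRing*)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);
        return g_pRing != NULL;
    }

    void PushLogLine(const char* line)   // called by the ISAPI threads
    {
        WaitForSingleObject(g_hMutex, INFINITE);   // protects the ring only, not the disk
        LONG next = (g_pRing->head + 1) % SLOT_COUNT;
        if (next != g_pRing->tail) {               // if the ring is full, drop the message
            lstrcpynA(g_pRing->slots[g_pRing->head], line, SLOT_SIZE);
            g_pRing->head = next;
        }
        ReleaseMutex(g_hMutex);
    }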



The logging process could even reorder the entries by their timestamps before writing them.

Here's another option: continue to keep a separate log file for each process, but use a logging thread inside each process, so that the time-critical main threads don't have to wait for the logging to complete before carrying on.
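
A bare-bones version of such a per-process logging thread might look like this (C++11 threading is used here just for brevity; error handling is omitted):

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    class AsyncLogger {
    public:
        explicit AsyncLogger(const char* path)
            : file_(std::fopen(path, "a")), stop_(false),
              writer_(&AsyncLogger::Run, this) {}

        ~AsyncLogger()
        {
            { std::lock_guard<std::mutex> lock(m_); stop_ = true; }
            cv_.notify_one();
            writer_.join();
            if (file_) std::fclose(file_);
        }

        // Called from the time-critical request threads: just queue and return.
        void Log(const std::string& line)
        {
            { std::lock_guard<std::mutex> lock(m_); queue_.push(line); }
            cv_.notify_one();
        }

    private:
        void Run()   // the background thread does the actual disk I/O
        {
            std::unique_lock<std::mutex> lock(m_);
            while (!stop_ || !queue_.empty()) {
                cv_.wait(lock, [this] { return stop_ || !queue_.empty(); });
                while (!queue_.empty()) {
                    std::string line = queue_.front();
                    queue_.pop();
                    lock.unlock();                        // write outside the lock
                    if (file_) std::fprintf(file_, "%s\n", line.c_str());
                    lock.lock();
                }
                if (file_) std::fflush(file_);
            }
        }

        FILE*                   file_;
        bool                    stop_;
        std::queue<std::string> queue_;
        std::mutex              m_;
        std::condition_variable cv_;
        std::thread             writer_;
    };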

The problem with everything I've written here is that the whole system (the hardware, the OS, the way multi-core L1/L2 caches behave, your software) is too complex to predict reliably just by thinking about it. Build a few simple proofs of concept, instrument them with some timing, and try them out on real hardware.

+2




Would logging to a database be an option here?



+1




I have used a UDP-based logging system in the past and was happy with that solution.

Log messages are sent over UDP to a log-collecting process, which is responsible for saving them to a file periodically.
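
For what it's worth, the sending side of a scheme like that can be very small. The sketch below assumes the collector listens on 127.0.0.1 port 5140 (an arbitrary choice) and that plain text lines are sent:

    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <cstring>
    #pragma comment(lib, "ws2_32.lib")

    static SOCKET      g_sock = INVALID_SOCKET;
    static sockaddr_in g_collector;

    bool InitUdpLog()
    {
        WSADATA wsa;
        if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
            return false;
        g_sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
        memset(&g_collector, 0, sizeof(g_collector));
        g_collector.sin_family = AF_INET;
        g_collector.sin_port   = htons(5140);                    // arbitrary port
        inet_pton(AF_INET, "127.0.0.1", &g_collector.sin_addr);  // collector on the same box
        return g_sock != INVALID_SOCKET;
    }

    void SendLogDatagram(const char* message)
    {
        // Fire-and-forget: the worker thread never blocks on the collector process.
        sendto(g_sock, message, (int)strlen(message), 0,
               (sockaddr*)&g_collector, sizeof(g_collector));
    }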

I don't know whether it will hold up in your high-performance context, but I was satisfied with it in a less demanding application.

Hope this helps.

+1




Instead of fiddling with a mutex, you can simply use the Win32 file-locking mechanisms, LockFile() and UnlockFile().
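
Something along these lines, using the Ex variants because they can block until the lock becomes free (the handle is assumed to be opened once per process with CreateFile and FILE_APPEND_DATA):

    #include <windows.h>
    #include <cstring>

    // hFile would be opened once per process, e.g. with
    // CreateFileA(path, FILE_APPEND_DATA, FILE_SHARE_READ | FILE_SHARE_WRITE, ...).
    void WriteLockedLogLine(HANDLE hFile, const char* line)
    {
        OVERLAPPED ov = {0};   // lock region starts at offset 0

        // Take an exclusive lock over the whole file for the duration of one write.
        LockFileEx(hFile, LOCKFILE_EXCLUSIVE_LOCK, 0, MAXDWORD, MAXDWORD, &ov);

        DWORD written = 0;
        WriteFile(hFile, line, (DWORD)strlen(line), &written, NULL);
        WriteFile(hFile, "\r\n", 2, &written, NULL);

        UnlockFileEx(hFile, 0, MAXDWORD, MAXDWORD, &ov);
    }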

0




My suggestion is to send messages asynchronously (UDP) to a single process that is responsible for writing the log.
That process would have:
- one receiver thread that queues incoming messages;
- one thread that removes messages from the queue and places them in a time-ordered list;
- one thread that monitors the list and writes to the file only the messages older than a minimum age, to prevent a delayed message from ending up out of order (a rough sketch of this piece follows below).
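
A rough sketch of that third piece, the time-ordered holding buffer, might look like this; the holding window is passed in by the caller, and synchronization between the threads is omitted:

    #include <cstdio>
    #include <ctime>
    #include <map>
    #include <string>

    class OrderedLogWriter {
    public:
        // Called by the thread that pops messages off the receive queue.
        void Add(time_t timestamp, const std::string& line)
        {
            pending_.insert(std::make_pair(timestamp, line));
        }

        // Called periodically by the monitoring thread: write out only the entries
        // older than the holding window, so a late datagram can still slot in order.
        void Flush(FILE* fp, time_t now, int holdSeconds)
        {
            std::multimap<time_t, std::string>::iterator it = pending_.begin();
            while (it != pending_.end() && it->first <= now - holdSeconds) {
                std::fprintf(fp, "%s\n", it->second.c_str());
                pending_.erase(it++);
            }
            std::fflush(fp);
        }

    private:
        std::multimap<time_t, std::string> pending_;   // kept sorted by timestamp
    };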

0




You could keep logging to separate files and find/write a tool that merges them afterwards (perhaps automated, or simply run on demand whenever you want to consume the logs).
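
If every log line starts with a sortable timestamp, such a merge tool can be almost trivial. A toy sketch, assuming the logs fit in memory:

    #include <algorithm>
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <vector>

    // Usage: merge_logs w3wp1.log w3wp2.log ... > merged.log
    int main(int argc, char* argv[])
    {
        std::vector<std::string> lines;
        for (int i = 1; i < argc; ++i) {
            std::ifstream in(argv[i]);
            std::string line;
            while (std::getline(in, line))
                lines.push_back(line);
        }
        // Timestamps at the start of each line sort lexicographically into time order.
        std::stable_sort(lines.begin(), lines.end());
        for (size_t j = 0; j < lines.size(); ++j)
            std::cout << lines[j] << "\n";
        return 0;
    }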

0




Event Tracing for Windows, included in Windows Vista and later, provides a good way to do this.

Excerpt:

Event Tracing for Windows (ETW) is an efficient kernel-level tracing facility that lets you log kernel or application-defined events to a log file. You can consume the events in real time or from a log file and use them to debug an application or to determine where performance issues are occurring in the application.
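
As a minimal sketch, a filter could emit its lines through the user-mode ETW API like this; the provider GUID is a placeholder (generate your own), and the events would be collected by a trace session created with a tool such as logman:

    #include <windows.h>
    #include <evntprov.h>
    #pragma comment(lib, "advapi32.lib")

    // Placeholder provider GUID: generate your own (e.g. with uuidgen).
    static const GUID MyProviderGuid =
        { 0x12345678, 0x1234, 0x1234,
          { 0x12, 0x34, 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc } };

    static REGHANDLE g_hProvider = 0;

    bool InitEtw()
    {
        return EventRegister(&MyProviderGuid, NULL, NULL, &g_hProvider) == ERROR_SUCCESS;
    }

    void EtwLog(const wchar_t* message)
    {
        // Level 4 = informational, keyword 0 = no keyword filtering.
        EventWriteString(g_hProvider, 4, 0, message);
    }

    void ShutdownEtw()
    {
        EventUnregister(g_hProvider);
    }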

0

