Slower/cached Linux file system required
Posted by Chopper3 on Server Fault, published 2011-11-23.
I know it sounds odd, but I need a slower or cached filesystem.
I have a lot of firewalls that are syslogging their data to a pair of Linux VMs, which write the messages to files on their 'local' (actually FC SAN-attached) ext3-formatted disks and also forward them to our Splunk servers.
The problem is that the syslog server is writing these messages as hundreds, sometimes thousands, of tiny ~4k writes per second back to our FC SAN. It can handle this workload right now, but our firewall traffic is going to grow by at least 5000% (really) in the coming months, and that will be a pain for the SAN - I want to fix the root cause before it becomes a problem.
So I need some help figuring out a way of getting these writes cached, or held off from the 'physical' disks in some way, so that the VMs fire off larger but less frequent writes. There's no way of avoiding the writes themselves, but there's no need to do so many tiny ones.
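To give an idea of what I mean by "larger but less frequent": if the fix ends up living in the syslog daemon rather than the filesystem, and assuming the VMs run rsyslog (that's an assumption on my part - syslog-ng has rough equivalents), this is the kind of output buffering I'm picturing. The facility and file path below are just placeholders:

    # /etc/rsyslog.conf - buffer log writes in memory instead of writing each message
    # (legacy omfile directives; facility and path are placeholders)
    $OMFileAsyncWriting on        # write via a background buffer rather than inline
    $OMFileIOBufferSize 64k       # larger in-memory buffer per output file (default 4k)
    $OMFileFlushInterval 10       # flush the buffer at most every 10 seconds
    $OMFileFlushOnTXEnd off       # don't force a flush at the end of every batch

    # traditional '-' prefix means "omit sync after each write"
    local4.*    -/var/log/firewalls.log

The flush interval is the knob that trades a few seconds of logs at risk if the VM dies against far fewer, larger writes going back to the SAN.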
I've looked at the various ext3 options, setting noatime and nodiratime, but that hasn't made much of a dent in the problem. Obviously I'm investigating other file systems too, but I thought I'd throw this out there in case others have the same problem in the future.
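For the record, this is the general shape of what I've been poking at on the filesystem side - an fstab line with a longer journal commit interval, plus the vm.dirty_* sysctls that control how long Linux holds written data in the page cache before flushing it out. The device, mount point and numbers are placeholders, not a recommendation:

    # /etc/fstab - let ext3 batch journal commits instead of the 5-second default
    /dev/mapper/log-vol  /var/log  ext3  noatime,nodiratime,data=writeback,commit=60  0 2

    # /etc/sysctl.conf - hold dirty pages in RAM longer so writeback happens in batches
    vm.dirty_ratio = 40                  # % of RAM that may be dirty before writers block
    vm.dirty_background_ratio = 20       # % of RAM dirty before background flushing starts
    vm.dirty_expire_centisecs = 6000     # dirty data may sit in the cache for up to 60 s
    vm.dirty_writeback_centisecs = 1500  # flusher threads wake every 15 s

The obvious trade-off is that anything still sitting in the page cache or journal is lost if a VM dies, which matters for log data the firewall team cares about.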
Oh, and I can't just forward these messages to Splunk; our firewall team insists they be kept in their original format for diagnostic purposes.