Best Linux filesystem for working with tens of thousands of files without overloading the system's I/O
Posted by mhambra on Server Fault, 2011-02-12.
Hi all. It is known that certain AMD64 Linux systems become unresponsive under heavy disk I/O (see the Gentoo forums thread "AMD64 system slow/unresponsive during disk access (Part 2)"); unfortunately, I have one of those systems.
I want to put the /var/tmp/portage and /usr/portage trees on a separate partition, but which filesystem should I choose for it?
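Roughly, this is the layout I have in mind for /etc/fstab; /dev/sda7 is just a hypothetical spare partition, and the fstype field is exactly what this question is about:

    # hypothetical: one spare partition holds both trees, fstype still to be decided
    /dev/sda7         /usr/portage       <fstype>   noatime   0 0
    # expose a subdirectory of it as the build area via a bind mount
    /usr/portage/tmp  /var/tmp/portage   none       bind      0 0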
Requirements:
* for journaling: performance is preferred over data safety on reads/writes
* optimized for reading/writing tens of thousands of small files
Candidates:
* ext2 without any journaling
* BtrFS
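For reference, this is roughly how I would create and mount each candidate; /dev/sda7 is again a placeholder device, and the -i value is only a guess at a sensible bytes-per-inode ratio for lots of small files:

    # ext2: no journal by design; lower bytes-per-inode for many small files
    mkfs.ext2 -i 4096 /dev/sda7
    mount -o noatime /dev/sda7 /usr/portage

    # btrfs: mkfs defaults, tuning happens via mount options
    mkfs.btrfs /dev/sda7
    mount -o noatime /dev/sda7 /usr/portage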
In Phoronix tests, BtrFS demonstrated good random-access performance (far better than XFS, so it may also be less CPU-hungry). Unpacking operations seemed faster with XFS there, but in my own testing, unpacking a kernel tree onto XFS made my system about 51% slower to respond, regardless of any renice'd processes and/or I/O schedulers.
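For completeness, this is roughly how I measured that; the kernel tarball path and the sda device name are just examples from my setup:

    cd /usr/portage/tmp                          # candidate filesystem mounted here
    time tar xjf ~/linux-2.6.37.tar.bz2          # time the unpack itself
    ionice -c3 tar xjf ~/linux-2.6.37.tar.bz2    # same unpack at idle I/O priority (CFQ only)
    cat /sys/block/sda/queue/scheduler           # confirm which I/O scheduler is active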
Why not ReiserFS? I Googled this (query: reiserfs ext2 cpu):
1 Apr 2006 ... Surprisingly, the ReiserFS and the XFS used significantly more CPU to remove file tree (86% and 65%) when other FS used about 15% (Ext3 and ...
Is it still the same now?