Ensuring a repeatable directory ordering in Linux
Posted by Paul Biggar on Server Fault
Published on 2012-07-10T01:06:15Z
I run a hosted continuous integration company, and we run our customers' code on Linux. Each time we run the code, we run it in a separate virtual machine. A frequent problem is that a customer's tests sometimes fail because of the directory ordering of their code as checked out on the VM.
Let me go into more detail. On OSX, the HFS+ file system ensures that directories are always traversed in the same order. Programmers who use OSX assume that if it works on their machine, it must work everywhere. But it often doesn't work on Linux, because Linux file systems do not offer ordering guarantees when traversing directories.
As an example, consider two files, a.rb and b.rb, where a.rb defines MyObject and b.rb uses MyObject. If a.rb is loaded first, everything works. If b.rb is loaded first, it tries to reference MyObject before it is defined, and fails.
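To make the failure mode concrete, here is a minimal sketch (the lib/ directory, the loader script, and the greet method are placeholders, not the customer's actual code):

    # lib/a.rb -- defines the constant
    class MyObject
      def greet
        "hello"
      end
    end

    # lib/b.rb -- references the constant at load time
    puts MyObject.new.greet

    # loader.rb -- requires every .rb file in whatever order the directory listing returns.
    # Dir.entries makes no ordering promise, so b.rb may come back before a.rb,
    # and the require then fails with NameError (uninitialized constant MyObject).
    Dir.entries("lib").grep(/\.rb$/).each do |name|
      require File.expand_path(File.join("lib", name))
    end

Adding .sort after the grep would make the load order deterministic inside the application, but since this is the customers' code rather than ours, that fix isn't in our hands.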
Worse still, it doesn't always just fail. Because directory ordering on Linux is not deterministic, the order can differ between machines. This means that sometimes the tests pass and sometimes they fail, which is the worst possible result.
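A quick way to see this on a given machine is to print the raw readdir order next to a sorted one; a small sketch (the "spec" directory name is just an example) might be:

    # order_check.rb -- compare the order the file system returns with a sorted order
    dir = ARGV.fetch(0, "spec")
    raw = Dir.entries(dir).reject { |e| e.start_with?(".") }
    puts "raw order:    #{raw.join(', ')}"
    puts "sorted order: #{raw.sort.join(', ')}"

On HFS+ the two lines typically match; on ext4 the raw line can differ from one machine to the next.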
So my question is: is there a way to make file system ordering repeatable? Some flag to ext4, perhaps, that says it will always traverse directories in a fixed order? Or maybe a different file system that offers this guarantee?