2013-01-25

Spec vs Test

Moored to a bollard


One of the things I've been busy doing this week is tightening the filesystem contract for Hadoop, as defined in FileSystemContractBaseTest: there are some assumptions in there that were clearly "so obvious" nobody thought them worth testing (a sketch of two of these checks follows the list):
  1. that isFile() is true for a file of size zero & size n for n>0
  2. that you can't rename a directory under itself -to an immediate child, or to a descendant at depth n>1
  3. that you can't rename the root directory to anything
  4. that if you write a file in one case, you can't read it using a filename in a different case (breaks on local for NTFS and HFS+)
  5. that if you have a file in one case and write a file with the same name in a different case, you have two files with different case values.
  6. that you can successfully read a file whose bytes span the full 0-255 range.
  7. That when you overwrite an existing file with new data, and read in that file again, you get the new data back.
  8. You can have long filenames and paths
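
As a concrete example, here is roughly what checks #1 and #7 look like as JUnit 3 test methods in the style of FileSystemContractBaseTest. The fs field and the path() helper come from the base class; the method names and test paths are my own illustration, not the actual code:

    public void testZeroByteFileIsFile() throws Exception {
      Path path = path("/test/zero.txt");
      fs.create(path, true).close();            // write an empty file
      assertTrue("a zero-byte object must be a file", fs.isFile(path));
    }

    public void testOverwriteReplacesData() throws Exception {
      Path path = path("/test/overwrite.txt");
      FSDataOutputStream out = fs.create(path, true);
      out.write("old data".getBytes());
      out.close();
      out = fs.create(path, true);              // overwrite the file
      out.write("new data".getBytes());
      out.close();
      FSDataInputStream in = fs.open(path);
      byte[] buf = new byte[8];
      in.readFully(0, buf);                     // positioned read of all 8 bytes
      in.close();
      assertEquals("new data", new String(buf));
    }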
Of these tests, #1 and #7 are directly related to problems I've encountered implementing a Hadoop filesystem for OpenStack, in collaboration with Rackspace and Mirantis. #1: OpenStack Swift doesn't differentiate a file of size zero from an empty directory -both are 0-byte objects, which only become directories when there are objects with path names under that object's name. #7: eventual consistency -overwrite an existing file and a subsequent read may still return the old data.
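
To make the Swift problem concrete, here's the shape of the ambiguity in a hypothetical minimal blobstore client -none of these types or methods come from the actual hadoop-openstack code:

    import java.util.List;

    /** Hypothetical minimal blobstore client, for illustration only. */
    interface BlobStore {
      /** Length of the object at key, or -1 if there is no such object. */
      long size(String key);
      /** Keys starting with the given prefix, up to limit entries. */
      List<String> list(String prefix, int limit);
    }

    class DirectoryInference {
      /**
       * A zero-byte object is ambiguous in a flat store: it may be an
       * empty file or a directory marker. The only way to tell is to
       * look for children under "key/".
       */
      static boolean isDirectory(BlobStore store, String key) {
        return !store.list(key + "/", 1).isEmpty();
      }

      static boolean isEmptyFile(BlobStore store, String key) {
        return store.size(key) == 0 && !isDirectory(store, key);
      }
    }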

The others? Things I thought of as I went along: looking at assumptions in the tests (the byte ranges of generated test data), and at assumptions so implicit nobody had bothered to specify them -case logic, mv dir dir/subdir being banned, etc.- plus experience with Windows NT filesystems.
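
That rename ban, for instance, reduces to a check along these lines (the method name is mine; rename() follows the FileSystem convention of returning false on failure):

    public void testRenameDirIntoOwnChildFails() throws Exception {
      Path parent = path("/test/parent");
      fs.mkdirs(parent);
      Path child = new Path(parent, "child");
      assertFalse("rename of a directory under itself must fail",
          fs.rename(parent, child));
      assertTrue("source directory must survive the failed rename",
          fs.exists(parent));
    }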

This is important, because FileSystemContractBaseTest is effectively the definition of the expected behaviour of a Hadoop-compatible filesystem.

It's in a medium-level procedural language, but it can be automatically verified by machines, at least for the test datasets provided, and we can hook it up to Jenkins for automated tests.
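
Binding a filesystem to the contract is just subclassing. A rough sketch of what the Swift binding might look like -the real test wiring may differ, and the URI here is made up:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class TestSwiftFileSystemContract extends FileSystemContractBaseTest {
      @Override
      protected void setUp() throws Exception {
        Configuration conf = new Configuration();
        // illustrative URI; endpoint and credentials would come from the config
        fs = FileSystem.get(URI.create("swift://container.service/"), conf);
        super.setUp();
      }
    }

After that, Jenkins runs it like any other JUnit suite.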

And when an implementation of FileSystem fails any of the tests, we can point to it and say "what are you going to do about that?"

If there is a weakness, it's dataset size. HDFS will let you create a file of size >1PB if you have a datacentre with the capacity and the time, but our tests don't go anywhere near that big. Even the tests against S3 & OpenStack don't (currently) try to push up files >4GB to see how that gets handled. I think I'll add a test-time property to let you choose the file size for a new test, testBigFilesAreHandled().
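
A sketch of that test, with the size coming from a system property -the property name "test.fs.file.size" is one I've just invented:

    public void testBigFilesAreHandled() throws Exception {
      // default to 4GB; override with -Dtest.fs.file.size=...
      // (assumes the size is a multiple of the 1MB block below)
      long size = Long.getLong("test.fs.file.size", 4L * 1024 * 1024 * 1024);
      Path path = path("/test/bigfile.dat");
      byte[] block = new byte[1024 * 1024];   // 1MB of zeros per write
      FSDataOutputStream out = fs.create(path, true);
      try {
        for (long written = 0; written < size; written += block.length) {
          out.write(block);
        }
      } finally {
        out.close();
      }
      assertEquals("file length must match the bytes written",
          size, fs.getFileStatus(path).getLen());
    }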

The point is: tests are specifications. If you write the tests after the code, they may simply be verifying your assumptions; but if you write them first, or spend some time thinking "what are the foundational expectations -byte ranges, case sensitivity, etc.?", you can come up with more ideas about what is wanted -and more specifications to write.

Your tests can't prove that your implementation really, truly matches all the requirements of the specification, and it's really hard to test some of the concurrency aspects (how do you simulate the deletion or renaming of a sub-tree of a directory that is in the process of being renamed or deleted?). Code walkthroughs are the best we can do in Java today.
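
The closest I can get is brute force: kick off the delete in a second thread and assert only the invariants that must hold under any interleaving. A sketch (imports as in the base test) -this can demonstrate a race, never prove its absence:

    public void testRenameRacesDelete() throws Exception {
      final Path src = path("/test/race/src");
      final Path dst = path("/test/race/dst");
      fs.mkdirs(new Path(src, "subtree"));
      Thread deleter = new Thread(new Runnable() {
        public void run() {
          try {
            fs.delete(new Path(src, "subtree"), true);
          } catch (IOException ignored) {
            // failing mid-race is itself an acceptable outcome
          }
        }
      });
      deleter.start();
      fs.rename(src, dst);    // may succeed or fail; both are legal here
      deleter.join();
      // whatever the interleaving, the store must not end up corrupt:
      // the tree is at the source, at the destination, or gone -never both
      assertFalse("directory present at both source and destination",
          fs.exists(src) && fs.exists(dst));
    }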

Despite those limits, for the majority of the code in an application, tests + reviews are mostly adequate. Which is why I've stated before: the tests are the closest thing we have to a specification of Hadoop's behaviour, other than the de facto behaviour of what gets released by the ASF as a non-alpha, non-beta release of the Apache Hadoop source tree (with some binaries created for convenience).

Where are we weak?
  • testing of concurrency handling.
  • failure modes, especially in the distributed bit.
  • behaviour in systems and networks that are in some way "wrong" -Hadoop contains some implicit expectations about the skills of whoever installs it.
I'd love to know how better to test this stuff, or at least prove valid behaviour in Java with the Java5+ memory model on an x86 part, 1GbE Ethernet within the cluster (with small or jumbo frames), and something external talking to the cluster from another network.

Suggestions?

[photo: van crashed into a wall due to snow and ice on Nine Tree Hill, tied to a bollard with some tape to stop it sliding any further]
