For most of our tests, we included figures for a native disk-based file system because disk hardware performance can be a significant factor. Since Cryptfs is a stackable file system, we included figures for Wrapfs and for lofs, to be used as a base for evaluating the cost of stacking. When using lofs, Wrapfs, or Cryptfs, we mounted them over a local disk-based file system. CFS and TCFS are two encryption file systems based on NFS, so we also included the performance of native NFS. All NFS mounts used the local host as both server and client (i.e., mounting localhost:/path on /mnt), and used protocol version 2 over a UDP transport, with a user-space NFS server. CFS was configured to use Blowfish (the same cipher as Cryptfs), but we had to configure TCFS to use DES, because it does not support Blowfish.
For the first set of tests, we measured the time it took to perform 10
successive builds of a large package (Am-utils) and
averaged the elapsed times. These results are listed in Table
1. For these tests, the standard deviation did not
exceed 0.8% of the mean.
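The averaging procedure above can be sketched as follows. The elapsed times in this sketch are hypothetical placeholders, not our measurements; the measured results appear in Table 1.

```python
import statistics

# Hypothetical elapsed times (in seconds) for 10 successive Am-utils
# builds; the actual measured results are summarized in Table 1.
elapsed = [185.2, 184.9, 185.5, 185.1, 184.8,
           185.3, 185.0, 185.4, 184.7, 185.6]

mean = statistics.mean(elapsed)
stdev = statistics.stdev(elapsed)       # sample standard deviation
rel_stdev = 100.0 * stdev / mean        # expressed as a percentage of the mean

print(f"mean = {mean:.2f}s, stdev = {rel_stdev:.2f}% of mean")
```

For our real measurements, the relative standard deviation computed this way never exceeded 0.8%.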
Wrapfs is the baseline for evaluating the performance impact of the encryption algorithm, because the only difference between Wrapfs and Cryptfs is that the latter encrypts and decrypts data and file names. Cryptfs adds an overhead of 9.5-12.2% over Wrapfs. That is a significant overhead but is unavoidable. It is the cost of the Blowfish encryption code, which, while designed as a fast software cipher, is still CPU intensive.
Next, we compare the three encryption file systems. Cryptfs is 40-52% faster than TCFS. Since TCFS uses DES and Cryptfs uses Blowfish, however, it is more proper to compare Cryptfs to CFS. Still, Cryptfs is 12-30% faster than CFS. Because both CFS and Cryptfs use the same encryption algorithm, most of the difference between them stems from the extra context switches that CFS incurs.
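The relative-speed figures quoted above follow the usual convention: "X is N% faster than Y" means Y's elapsed time exceeds X's by N%. With hypothetical build times (the measured ones are in Table 1), the computation looks like:

```python
def percent_faster(slower, faster):
    """How much faster `faster` is than `slower`, in percent:
    the excess of the slower elapsed time over the faster one,
    relative to the faster one."""
    return 100.0 * (slower - faster) / faster

# Hypothetical elapsed build times in seconds, not measured values.
cryptfs, cfs, tcfs = 200.0, 250.0, 290.0

print(f"Cryptfs vs CFS:  {percent_faster(cfs, cryptfs):.0f}% faster")
print(f"Cryptfs vs TCFS: {percent_faster(tcfs, cryptfs):.0f}% faster")
```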
For the second set of tests we performed microbenchmarks on the file systems listed in Table 1, specifically reading and writing of small and large files. These tests were designed to isolate and show the performance difference between Cryptfs, CFS, and TCFS for individual file system operations. Table 2 summarizes some of these results.
A complete and detailed analysis of the results listed in Table 2 is beyond the scope of this paper, and would have to take into account the size and effectiveness of the operating system's page and buffer caches. Nevertheless, these results clearly show that Cryptfs improves performance by anywhere from 43% to over an order of magnitude. Additional performance analysis of Cryptfs is available elsewhere.
To test the performance of Usenetfs, we set up a test Usenet news server and configured it with test directories containing increasingly larger numbers of files. We then compared the performance of typical news-server operations when these large directories were managed by Usenetfs and when they were not (i.e., when they resided directly on ext2fs).
We performed 1000 random lookups of articles in large directories. When the directory had fewer than 2000 articles, Usenetfs added a small overhead of 70-80 milliseconds. The performance of ext2fs continued to degrade linearly, and when the directory had over 250,000 articles, performance of Usenetfs was over 100 times faster. When we performed sequential lookups, thus involving kernel caches, Usenetfs's performance was only two times better than ext2fs's for directories with 500 or more articles.
The results for deleting and adding new articles showed that Usenetfs's performance remained almost flat for all directory sizes we tested, while ext2fs's performance degraded linearly. With just 10,000 articles in the directory, adding or deleting articles was more than 10 times faster with Usenetfs.
Since Usenetfs uses 1000 additional directories for each managed directory, we expected the performance of reading a directory to be worse. Usenetfs took an almost constant 500 milliseconds to read a managed directory, while ext2fs's performance once again degraded linearly. It is not until there are over 100,000 articles in the directory that Usenetfs's readdir is faster than ext2fs's. Although Usenetfs's performance also starts to degrade linearly after a certain directory size, this is not a problem because the algorithm can easily be tuned and extended.
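The directory management described above can be sketched as follows. The exact bucketing scheme is described in the Usenetfs paper; this sketch assumes, purely for illustration, that an article is placed in one of 1000 small subdirectories named after the three least significant digits of its article number.

```python
import os.path

def managed_path(newsgroup_dir, article_number):
    """Map an article to a small subdirectory of its newsgroup directory.

    Illustrative scheme only (an assumption, not Usenetfs's exact layout):
    bucket by the three least significant digits of the article number,
    yielding 1000 small subdirectories instead of one huge flat directory.
    """
    bucket = f"{article_number % 1000:03d}"
    return os.path.join(newsgroup_dir, bucket, str(article_number))

print(managed_path("/var/spool/news/misc/test", 1234567))
# → /var/spool/news/misc/test/567/1234567
```

Under such a scheme, a lookup touches only a subdirectory of roughly 1/1000th the size, which is why lookup and add/delete times stay nearly flat while a flat ext2fs directory degrades linearly.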
The last test we performed took all of the above factors into account. Once again, we built a large package, this time on a busy news server configured to manage the top 6 newsgroups using Usenetfs. This test was designed to measure the reserve capacity of the news server: how much CPU time was freed by using Usenetfs. With Usenetfs, compile times improved by an average of 22%. During periods of heavy activity on the news server, such as article expirations, compile times improved by a factor of 2-3. Additional performance analysis of Usenetfs is available elsewhere.
Table 3 shows the overall estimated times that it
took us to develop the file systems mentioned in this paper. Since the
first ports were for Linux 2.0, they took longer, as we were also learning
our way around Linux and stackable file systems in general. The bulk of the
time was spent initially on porting the Wrapfs template. Using this
template, the other file systems were implemented faster.
Another interesting measure of the complexity of Wrapfs is the size of its code. The total number of source code lines for Wrapfs in Linux 2.0 is 2157, but that number grew by more than 50% to 3279 lines when we ported Wrapfs to the 2.1 kernel. This is a testament to the unfortunate complexity that Linux 2.1 added, mostly due to the integration with the dentry concept.
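The growth figure can be checked directly from the two line counts given above:

```python
lines_2_0 = 2157   # Wrapfs source lines under Linux 2.0
lines_2_1 = 3279   # Wrapfs source lines under Linux 2.1

growth = 100.0 * (lines_2_1 - lines_2_0) / lines_2_0
print(f"growth: {growth:.1f}%")  # → growth: 52.0%
```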