You need to tune the reiserfs parameters for this.
Note: these numbers are in units of 4k blocks.
JOURNAL_TRANS_MAX must be less than JOURNAL_BLOCK_COUNT, and must not exceed the default (1024). Every time a transaction starts, the log needs at least JOURNAL_TRANS_MAX log blocks available; if there aren't enough free log blocks, existing transactions are flushed first. The default ratio is
JOURNAL_BLOCK_COUNT / JOURNAL_TRANS_MAX = 8
If the ratio is 1, you get more or less synchronous updates to metadata, and things get very slow. As you try different values for JOURNAL_BLOCK_COUNT, try JOURNAL_TRANS_MAX values that give ratios of 2, 4, and 8.
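The ratio arithmetic above can be sketched as a small planning aid (this helper is ours, not part of reiserfs):

```python
# Candidate JOURNAL_TRANS_MAX values for a given journal size,
# using the ratios suggested above (2, 4, and 8).
# All figures are in units of 4k blocks; purely illustrative.

DEFAULT_TRANS_MAX = 1024  # JOURNAL_TRANS_MAX must not exceed this

def candidate_trans_max(journal_block_count):
    """Return {ratio: JOURNAL_TRANS_MAX} for ratios 2, 4, and 8."""
    out = {}
    for ratio in (2, 4, 8):
        trans_max = journal_block_count // ratio
        # must stay below JOURNAL_BLOCK_COUNT and at or below the default
        if 0 < trans_max < journal_block_count:
            out[ratio] = min(trans_max, DEFAULT_TRANS_MAX)
    return out

print(candidate_trans_max(512))   # -> {2: 256, 4: 128, 8: 64}
```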
JOURNAL_MAX_BATCH controls how large a joinable transaction can grow. To keep overhead low, multiple transactions are combined into one before they are written to the log. This number must be less than JOURNAL_TRANS_MAX.
In theory, the smallest possible JOURNAL_BLOCK_COUNT or JOURNAL_TRANS_MAX size is around 48 blocks. Transactions this size will be slow, as a lot of the journal speed comes from the batching described above.
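Taken together, the constraints described so far can be checked with a short sketch (the 48-block floor is the in-theory minimum mentioned above; the function name is ours):

```python
# Sanity-check a proposed reiserfs journal tuning (sizes in 4k blocks).
# Encodes the constraints described in the text; purely illustrative.

MIN_BLOCKS = 48           # rough in-theory floor for either size
DEFAULT_TRANS_MAX = 1024  # JOURNAL_TRANS_MAX must not exceed this

def check_journal_params(block_count, trans_max, max_batch):
    """Return a list of constraint violations (empty if the tuning looks sane)."""
    problems = []
    if trans_max >= block_count:
        problems.append("JOURNAL_TRANS_MAX must be less than JOURNAL_BLOCK_COUNT")
    if trans_max > DEFAULT_TRANS_MAX:
        problems.append("JOURNAL_TRANS_MAX must not exceed the default (1024)")
    if max_batch >= trans_max:
        problems.append("JOURNAL_MAX_BATCH must be less than JOURNAL_TRANS_MAX")
    if block_count < MIN_BLOCKS or trans_max < MIN_BLOCKS:
        problems.append("sizes below ~48 blocks defeat transaction batching")
    return problems

print(check_journal_params(512, 128, 127))  # -> []
```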
You might also want to shrink RESERVED_FOR_PRESERVE_LIST. It is 500 right now, but in theory could be set to 0. It used to be space used only by the preserve lists, which no longer exist. We really need to do tests with this value at 0.
Some users have done a little benchmarking and found:
JOURNAL_BLOCK_COUNT 512
JOURNAL_TRANS_MAX   128
JOURNAL_MAX_BATCH   127
works pretty well. A journal block count of 256 gave poor performance with just about every other combination of parameters.
But the performance will depend on how hard you hit the metadata. If you don't do a lot of work on small files (<16k), or if you don't do many file creations/deletions, smaller journal sizes might work well for you.