|
|
(7 intermediate revisions by one user not shown) |
Line 1: |
Line 1: |
− | <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> | + | <font color="red">This page is a disaster, do we even want to clean this up? It's all stale benchmarks anyway :-\</font> |
− | <html> | + | |
− | <head>
| + | |
− | <BASE HREF="http://www.namesys.com.wstub.archive.org/benchmarks.html">
| + | |
| | | |
− | <title>Benchmarks Of Reiser4</title>
| + | __TOC__ |
− | </head>
| + | |
| | | |
− | <body>
| + | == Benchmarks Of Reiser4 == |
− | <h1>Benchmarks Of ReiserFS Version 4</h1>
| + | |
| | | |
− | <body>
| + | The <tt>htree</tt> (<tt>-O dir_index</tt>) feature is the recent attempt by ext3 developers to handle large directories as well as ReiserFS by using better than linear search algorithms. One of the interesting results in this benchmark was that <tt>htree</tt> does bad things to ext3 performance, at least for this benchmark. This means that trying to have usable performance for large directories with ext3 can severely impact your performance for the non-large case. |
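To see why a linear directory search hurts at scale, here is a toy model of the complexity argument only (an illustration, not ext3 or htree code): an unindexed directory scans every entry until it finds a match, while an htree-style hashed index scans only one small bucket.

```python
# Toy illustration of why linear directory search degrades with size.
# Models lookup cost only; this is not ext3/htree code.

def linear_lookup_cost(entries, name):
    # Unindexed directory: scan entries in order until the name matches.
    for i, e in enumerate(entries):
        if e == name:
            return i + 1          # comparisons performed
    return len(entries)

def hashed_lookup_cost(index, name, nbuckets):
    # htree-style idea: a hash selects a small bucket, then only that
    # bucket is scanned.
    bucket = index[hash(name) % nbuckets]
    return bucket.index(name) + 1 if name in bucket else len(bucket)

names = [f"file{i:06d}" for i in range(100_000)]
nbuckets = 1024
index = [[] for _ in range(nbuckets)]
for n in names:
    index[hash(n) % nbuckets].append(n)

target = "file099999"
print(linear_lookup_cost(names, target))            # 100000: the name is last
print(hashed_lookup_cost(index, target, nbuckets))  # only its bucket is scanned
```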
− | <hr>
| + | |
− | <H1>Remarks</H1>
| + | |
− | <p> | + | |
− | Htree (-O dir_index) is the recent attempt by ext3 developers to
| + | |
− | handle large directories as well as reiserfs by using better than | + | |
− | linear search algorithms. One of the interesting results in this | + | |
− | benchmark was that htree does bad things to ext3 performance, at least | + | |
− | for this benchmark. This means that trying to have usable performance | + | |
− | for large directories with ext3 can severely impact your performance for the | + | |
− | non-large case. | + | |
− | <p>
| + | |
− | You'll note that in our latest benchmark at the top here we use larger
| + | |
− | filesets. It seems that ext3 does a poor job of utilizing its write
| + | |
− | cache for the case where the fileset uses a lot of memory without
| + | |
− | exceeding it, and by increasing the size of the fileset we get a
| + | |
− | fairer (read, better for ext3) benchmark for the create phase. The
| + | |
− | use of filesets small enough to barely fit into RAM for the create
| + | |
− | (but not the copy) phase was due to my being lax in supervising the
| + | |
− | benchmarking, but it did reveal something interesting. Probably
| + | |
− | Andrew Morton will fix that pretty quick --- it's most likely not a
| + | |
− | deep fix to make like fixing htrees would be.
| + | |
− | <p>
| + | |
− | If anyone knows where the tail combining patch for ext3 went to, let
| + | |
− | us know so we can benchmark that.... good tail combining performance
| + | |
− | is not trivial to get right and I am wondering if there is a
| + | |
− | performance reason it did not go in.
| + | |
− | <p>
| + | |
− | Keep in mind that these benchmarks are still evolving and maturing,
| + | |
− | and I need to give the mongo code a complete review again as it has
| + | |
− | been worked on by others quite a bit. Note that while I like the
| + | |
− | mongo benchmarks, those who are concerned it may be stacked in our
| + | |
− | favor can look at the benchmarks run by others on lkml, one of which
| + | |
− | is at the bottom of this, which while not as elaborate and detailed as
| + | |
− | mongo, comes up with roughly the same result.
| + | |
| | | |
− | <p>
| |
− | Andrew Morton wrote some beautiful readahead code in VM, many thanks
| |
− | to him for what it contributes to V4 performance, unfortunately it
| |
− | should be confessed that these benchmarks utterly fail to measure its
| |
− | cleverness for real world usage patterns. In fact, these benchmarks
| |
− | basically access everything once in each pass, which is not at all
| |
− | realistic in representing typical server workloads. So understand
| |
− | them as validly illuminating some aspects of performance, not all
| |
− | aspects, if you could be so generous.
| |
| | | |
− | <p> | + | You'll note that in our latest benchmark at the top here we use larger filesets. It seems that ext3 does a poor job of utilizing its write cache for the case where the fileset uses a lot of memory without exceeding it, and by increasing the size of the fileset we get a fairer (read, better for ext3) benchmark for the create phase. The use of filesets small enough to barely fit into RAM for the create (but not the copy) phase was due to my being lax in supervising the benchmarking, but it did reveal something interesting. Probably Andrew Morton will fix that pretty quick --- it's most likely not a deep fix to make like fixing <tt>htree</tt> would be. |
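The sizing point above generalizes: if the created fileset fits in RAM, the create phase mostly measures the write cache rather than the filesystem. A small helper sketching that rule (a hypothetical helper, not part of the mongo suite):

```python
# Hypothetical sizing helper for the point above: pick a fileset
# comfortably larger than RAM so the create phase cannot be absorbed
# entirely by the write cache.  Not part of the mongo suite.

def fileset_bytes(ram_bytes, safety_factor=2.0):
    return int(ram_bytes * safety_factor)

ram_kb = 516312                      # e.g. a machine reporting "mem total = 516312 KB"
print(fileset_bytes(ram_kb * 1024))  # suggested fileset size in bytes
```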
| | | |
− | We ran data ordered ext3 benchmarks at the suggestion of Andrew
| + | If anyone knows where the tail combining patch for ext3 went to, let us know so we can benchmark that. Good tail combining performance is not trivial to get right, and I am wondering if there is a
− | Morton, but they came out slower for this benchmark. We need to
| + | performance reason it did not go in. |
− | increase the base size range to 8k and run again.
| + | |
| | | |
− | <p>
| + | Keep in mind that these benchmarks are still evolving and maturing, and I need to give the mongo code a complete review again as it has been worked on by others quite a bit. Note that while I like the |
− | V4 is a fully atomic filesystem, keep in mind that these performance
| + | mongo benchmarks, those who are concerned they may be stacked in our favor can look at the benchmarks run by others on lkml; one of these, at the bottom of this page, is not as elaborate and detailed as mongo but comes up with roughly the same result.
− | numbers are with every FS operation performed as a fully atomic
| + | |
− | transaction. We are the first to make that performance effective to
| + | |
− | do. Look for a user space transactions interface to come out soon....
| + | |
| | | |
− | <p>
| + | Andrew Morton wrote some beautiful readahead code in VM; many thanks to him for what it contributes to V4 performance. Unfortunately, it should be confessed that these benchmarks utterly fail to measure its
− | Finally, remember that reiser4 is more space efficient than V3, the df
| + | cleverness for real world usage patterns. In fact, these benchmarks basically access everything once in each pass, which is not at all realistic in representing typical server workloads. So understand |
− | measurements are there for looking at....;-)
| + | them as validly illuminating some aspects of performance, not all aspects, if you could be so generous. |
− | <hr>
| + | |
− | <ul>
| + | |
| | | |
− | <li><font color=red>linux-2.6.15-mm4</font> : mongo <a
| + | We ran data ordered ext3 benchmarks at the suggestion of Andrew Morton, but they came out slower for this benchmark. We need to increase the base size range to 8k and run again. |
− | href="#mongo.2.6.15-mm4"> comparison</a>
| + | |
− | <tt>ext3 vs reiser4 with "unixfile" regular file plugin and reiser4
| + | |
− | with "cryptcompress" regular file plugin</tt> </li>
| + | |
| | | |
− |
| + | [[Reiser4]] is a fully atomic filesystem; keep in mind that these performance numbers are with every FS operation performed as a fully atomic transaction. We are the first to make that performance-effective. Look for a user space transactions interface to come out soon.
− | <li>linux-2.6.11 : mongo <a
| + | |
− | href="#mongo.2.6.11"> comparison</a> against
| + | |
− | <tt>xfs and ext2</tt> </li>
| + | |
| | | |
− | <li>linux-2.6.8.1-mm3 : mongo <a
| + | Finally, remember that Reiser4 is more space efficient than [[ReiserFS]]; the <tt>df(1)</tt> measurements are there to look at. ;-)
− | href="#mongo.2.6.8.1-mm3"> comparison</a> against
| + | |
− | <tt>ext3</tt> </li>
| + | |
| | | |
− | <li>2004.03.26 slow.c <a href="#slow.2004.03.26">comparison</a> against
| + | === mongo 2.6.15-mm4 === |
− | <tt>ext2, ext3</tt> </li>
| + | |
− |
| + | |
− | <li>2003.11.20 mongo <a href="#mongo.2003.11.20">comparison</a> against
| + | |
− | <tt>ext3</tt> </li>
| + | |
| | | |
− | <li>Bonnie++ <a href="#bonnie++.2003.09.30">comparison</a> of <tt>reiser4</tt> and <tt>ext3</tt> done at 2003.09.30.
| + | [[Mongo]] comparison, ext3 vs reiser4 with "unixfile" regular file plugin and reiser4 with "cryptcompress" regular file plugin |
− | </li>
| + | |
| | | |
− | <li>2003.09.25 mongo <a href="#mongo.2003.09.25">comparison</a> against
| + | Comparative results of mongo benchmark for ext3 vs reiser4 with "unixfile" regular file plugin vs reiser4 with [ftp://ftp.namesys.com/pub/tmp/cryptcompress_patches cryptcompress] regular file plugin. |
− | <tt>ext3</tt> </li>
| + | |
| | | |
− | <!--
| |
− | <li>2003.08.28 mongo <a href="#mongo.2003.08.28">comparison</a> against
| |
− | <tt>ext3</tt> </li>
| |
| | | |
− | <li>2003.08.27 mongo <a href="#mongo.2003.08.27">comparison</a> against
| + | * 2.6.15-mm4 #1 Sat Feb 11 20:00:11 MSK 2006 |
− | <tt>ext3</tt> </li>
| + | * cryptcompress-4.patch |
− | | + | * mem total = 516312 KB |
− | <li>2003.08.26 mongo <a href="#mongo.2003.08.26">comparison</a> against
| + | * Intel(R) Xeon(TM) CPU 2.40GHz, running UP kernel |
− | <tt>ext3</tt> </li>
| + | |
− | | + | |
− | <li>2003.08.18 mongo <a href="#mongo.2003.08.18">comparison</a> against
| + | |
− | <tt>ext3</tt> </li>
| + | |
− | | + | |
− | <li>2003.08.12 mongo <a href="#mongo.2003.08.12">comparison</a> against
| + | |
− | <tt>ext3</tt> </li>
| + | |
− | -->
| + | |
− | <li>Older mongo <a href="#mongo.2003.08.28">results</a> (2003.08.28).</li>
| + | |
− | | + | |
− | <li>mongo <a href="#mongo.2003.07.10">results</a> obtained before
| + | |
− | LinuxTAG (2003.07.10). Here reiser4 is compared with reiserfs.</li>
| + | |
− | | + | |
− | | + | |
− | <li>External benchmarks <a href="#grant">by Grant Miner</a>.</li>
| + | |
− | | + | |
− | </ul>
| + | |
− | | + | |
− | | + | |
− | <hr>
| + | |
− | <a name="mongo.2.6.15-mm4"></a>
| + | |
− | linux-2.6.15-mm4 <a href="benchmarks/mongo_readme.html">mongo</a> results
| + | |
− | | + | |
− | <p><b>Comparative results of mongo benchmark for ext3 vs reiser4 with "unixfile" regular
| + | |
− | file plugin vs reiser4 with "cryptcompress" regular file plugin</b>
| + | |
− | <p>
| + | |
− | <p>The cryptcompress patch against 2.6.15-mm4 and new version of reiser4progs are from
| + | |
− | <br>
| + | |
− | ftp://ftp.namesys.com/pub/tmp/cryptcompress_patches
| + | |
− | | + | |
− | </p>
| + | |
− | | + | |
− | <dl>
| + | |
− | <dt>reiser4 </dt>
| + | |
− | <dd>2.6.15-mm4 cryptcompress-4.patch</dd>
| + | |
− | <dt>mem total</dt>
| + | |
− | <dd>516312</dd>
| + | |
− | <dt>machine </dt>
| + | |
− | <dd>Intel(R) Xeon(TM) CPU 2.40GHz, <b>running UP kernel</b></dd>
| + | |
− | <dt>kernel </dt>
| + | |
− | | + | |
− | <dd>2.6.15-mm4 #1 Sat Feb 11 20:00:11 MSK 2006</dd>
| + | |
− | <dt>date </dt>
| + | |
− | <dd>Sat Feb 11 21:03:21 2006</dd>
| + | |
− | <dd>Sat Feb 11 21:18:43 2006</dd>
| + | |
− | <dd>Sat Feb 11 21:37:52 2006</dd>
| + | |
− | </dl>
| + | |
| | | |
| <p>Legend:</p> | | <p>Legend:</p> |
Line 367: |
Line 249: |
| | | |
| <td bgcolor=#E0E0C0 align=right><tt><font color=red> 2.685 </font></tt></td> | | <td bgcolor=#E0E0C0 align=right><tt><font color=red> 2.685 </font></tt></td> |
− | </tt></td> | + | </tt> |
− | </tr>
| + | <tr><td bgcolor=black colspan=13><font color=white> |
− | <tr><td bgcolor=black colspan=13><font color=white></td></tr> | + | |
− | <tr><td colspan=13 align=right>
| + | |
| <tr> <td colspan=13 bgcolor=#303030><b><font color=white>DIR=/mnt1 GAMMA=0.2 WRITE_BUFFER=131072 PHASE_APPEND=off SYNC=off PHASE_DELETE=rm NPROC=1 DEV=/dev/hda9 DD_MBCOUNT=5000 FILE_SIZE=8192 REP_COUNTER=1 PHASE_COPY=cp INFO_R4=2.6.15-mm4 cryptcompress-4.patch PHASE_READ=find BYTES=1024000000 PHASE_OVERWRITE=off PHASE_MODIFY=off </td></tr> | | <tr> <td colspan=13 bgcolor=#303030><b><font color=white>DIR=/mnt1 GAMMA=0.2 WRITE_BUFFER=131072 PHASE_APPEND=off SYNC=off PHASE_DELETE=rm NPROC=1 DEV=/dev/hda9 DD_MBCOUNT=5000 FILE_SIZE=8192 REP_COUNTER=1 PHASE_COPY=cp INFO_R4=2.6.15-mm4 cryptcompress-4.patch PHASE_READ=find BYTES=1024000000 PHASE_OVERWRITE=off PHASE_MODIFY=off </td></tr> |
− | <tr><td colspan=13 align=right> <font size=-2>Produced by <a href=http://namesys.com/benchmarks/mongo_readme.html>Mongo</a> benchmark suite.</font></td></tr>
| |
− | </table>
| |
| | | |
− | <!--
| + | Legend: <font color="green">green</font> color means the result is |
− | <p><b>Legend:</b> <font color="green">green</font> color means the result is
| + | |
| better (less) than reference value from the first column, results | | better (less) than reference value from the first column, results |
| marked as <font color="red">red</font> are worse than reference value, | | marked as <font color="red">red</font> are worse than reference value, |
| best results are <u>underlined</u> other results which fit into 2% | | best results are <u>underlined</u> other results which fit into 2% |
− | margin of the best result are underlined also.</p> | + | margin of the best result are underlined also. |
− | --><p><a href="http://www.namesys.com/intbenchmarks/mongo/06.02.11.belka.crc/charts/comp.html">The same results in the charts</a></p>
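The red/green ratio convention these tables use can be reproduced mechanically: the reference configuration's column holds absolute elapsed times, and every other configuration is reported as its time divided by the reference. A sketch with made-up numbers (not the Mongo tooling itself):

```python
# Sketch of the ratio convention in the tables above: the reference
# column holds absolute elapsed times; other configurations are shown
# as other/reference.  Ratio > 1.0 means the reference filesystem was
# faster on that row.  The numbers here are made up for illustration.

def ratio_row(reference_time, other_times):
    return {name: round(t / reference_time, 3) for name, t in other_times.items()}

row = ratio_row(50.0, {"fs_b": 101.0, "fs_c": 40.0})
for name, r in sorted(row.items()):
    verdict = "reference faster" if r > 1.0 else "reference slower"
    print(name, r, verdict)   # fs_b 2.02 reference faster / fs_c 0.8 reference slower
```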
| + | |
| | | |
| + | === mongo 2.6.11 === |
| | | |
− | <hr>
| + | [[mongo]] comparison against xfs and ext2 |
− | <a name="mongo.2.6.11"></a>
| + | |
− | linux-2.6.11 <a href="benchmarks/mongo_readme.html">mongo</a> results
| + | |
| | | |
| | | |
Line 677: |
Line 552: |
| | | |
| | | |
− | <hr>
| + | === mongo 2.6.8.1-mm3 === |
− | <a name="mongo.2.6.8.1-mm3"></a>
| + | |
| + | [[mongo]] comparison against ext3 |
| | | |
− | linux-2.6.8.1-mm3 <a href="benchmarks/mongo_readme.html">mongo</a> results
| |
| | | |
| <dl> | | <dl> |
Line 1,055: |
Line 930: |
| | | |
| | | |
− | <hr>
| + | === slow.c 2004-03-26 === |
− | <a name="slow.2004.03.26">2004.03.26 slow.c benchmark results</a>
| + | |
| + | [[:File:Slow.c.txt|slow.c]] comparison against ext2 and ext3, 2004-03-26 |
| + | |
| <p> | | <p> |
| This is <a href="http://www.jburgess.uklinux.net/slow.c">slow.c</a> benchmark results for the latest 2004.03.26 reiser4 snapshot. | | This is <a href="http://www.jburgess.uklinux.net/slow.c">slow.c</a> benchmark results for the latest 2004.03.26 reiser4 snapshot. |
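As described, slow.c stresses the filesystem by writing several data streams at once; the essence of that access pattern can be sketched as follows (an approximation of the workload shape only, not the original program):

```python
# Minimal sketch of a slow.c-style workload: several files are written
# round-robin in small chunks, forcing the filesystem to interleave
# block allocation across streams.  Approximates the access pattern
# only; this is not the original slow.c.
import os
import tempfile

def interleaved_streams(directory, nstreams=4, chunks=256, chunk_size=4096):
    paths = [os.path.join(directory, f"stream{i}") for i in range(nstreams)]
    files = [open(p, "wb") for p in paths]
    data = b"x" * chunk_size
    for _ in range(chunks):            # round-robin: one chunk per stream
        for f in files:
            f.write(data)
    for f in files:
        f.close()
    return paths

with tempfile.TemporaryDirectory() as d:
    for p in interleaved_streams(d):
        print(p, os.path.getsize(p))   # each stream: chunks * chunk_size bytes
```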
Line 1,192: |
Line 1,069: |
| </table> | | </table> |
| | | |
| + | === mongo 2003-11-20 === |
| | | |
− | <hr>
| + | [[mongo]] comparison against ext3, 2003-11-20 |
− | | + | |
− | <a name="mongo.2003.11.20"></a>2003.11.20 <a href="benchmarks/mongo_readme.html">mongo</a> results
| + | |
| | | |
| <dl> | | <dl> |
Line 1,514: |
Line 1,390: |
| </table> | | </table> |
| | | |
− | <hr>
| |
| | | |
− | <a name="mongo.2003.09.25"></a>2003.09.25 <a href="benchmarks/mongo_readme.html">mongo</a> results
| + | === mongo 2003-09-25 === |
| + | |
| + | [[mongo]] comparison against ext3, 2003-09-25 |
| | | |
| <dl> | | <dl> |
Line 1,837: |
Line 1,714: |
| </table> | | </table> |
| | | |
− | <hr>
| + | |
− | <a name="mongo.2003.08.28"></a>2003.08.28 <a href="benchmarks/mongo_readme.html">mongo</a> results
| + | === mongo 2003-08-28 === |
| + | |
| + | [[mongo]] comparison against ext3, 2003-08-28 |
| | | |
| <body text=black> | | <body text=black> |
Line 2,159: |
Line 2,038: |
| | | |
| | | |
− | <hr>
| + | === mongo 2003-08-27 === |
| + | |
| + | [[mongo]] comparison against ext3 |
| | | |
− | <a name="mongo.2003.08.27"></a>2003.08.27 <a href="benchmarks/mongo_readme.html">mongo</a> results
| |
| | | |
| <dl> | | <dl> |
Line 2,720: |
Line 2,600: |
| </table> | | </table> |
| | | |
− | <hr>
| |
| | | |
− | <a name="mongo.2003.08.26"></a>2003.08.26 <a href="benchmarks/mongo_readme.html">mongo</a> results
| + | === mongo 2003-08-26 === |
| + | |
| + | [[mongo]] comparison against ext3 |
| | | |
| | | |
Line 3,014: |
Line 2,895: |
| </table> | | </table> |
| | | |
− | <hr>
| + | |
− | <a name="mongo.2003.08.18"></a>2003.08.18 <a href="benchmarks/mongo_readme.html">mongo</a> results
| + | === mongo 2003-08-18 ===
| + | |
| + | [[mongo]] comparison against ext3
| | | |
| <dl> | | <dl> |
Line 3,305: |
Line 3,188: |
| </table> | | </table> |
| | | |
− | <hr>
| + | === mongo 2003-08-12 ===
| + | |
| + | [[mongo]] comparison against ext3 |
| | | |
− | <a name="mongo.2003.08.12"></a>2003.08.12 <a href="benchmarks/mongo_readme.html">mongo</a> results
| |
| <dl> | | <dl> |
| <dt>mem total</dt> | | <dt>mem total</dt> |
Line 3,598: |
Line 3,482: |
| </table> | | </table> |
| | | |
− | <hr>
| + | |
− | <p>
| + | === mongo 2003-07-10 === |
− | <a name="mongo.2003.07.23"></a>
| + | |
− | Below is older (2003.07.23) mongo results.
| + | [[mongo]] comparison, reiserfs vs. reiser4, 2003-07-10, obtained before [http://mail.fsfeurope.org/pipermail/booth/2003-February/000083.html LinuxTAG 2003] |
− | </p>
| + | |
| | | |
| <table cols=10 cellpadding=2 cellspacing=2 noborder> | | <table cols=10 cellpadding=2 cellspacing=2 noborder> |
Line 4,734: |
Line 4,618: |
| </tbody></table> | | </tbody></table> |
| | | |
− | <hr>
| + | |
− | <a name="bonnie++.2003.09.30">
| + | === bonnie++ 2003-09-30 === |
− | This is bonnie++ output for reiser4 and ext3. This has been done in an attempt | + | |
− | to analyze <a href="http://fsbench.netnation.com/">results</a> obtained by | + | Bonnie++ comparison, ext3 vs reiser4 (2003-09-30) |
− | Mike Benoit. | + | |
| + | This is bonnie++ output for reiser4 and ext3. This has been done in an attempt to analyze <a href="http://fsbench.netnation.com/">results</a> obtained by Mike Benoit. |
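For context on what a bonnie++ sequential-write figure means, here is a greatly simplified sketch of such a throughput measurement (an illustration only; bonnie++ itself does much more, including per-character IO, rewrites, seeks, and file-creation tests):

```python
# Greatly simplified sketch of a sequential-write throughput
# measurement of the kind bonnie++ reports.  Not bonnie++ itself.
import os
import tempfile
import time

def write_throughput(path, total_mb=16, chunk_size=1 << 20):
    data = b"\0" * chunk_size
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())           # include the time to reach the disk
    elapsed = time.perf_counter() - start
    return total_mb / elapsed          # MB/s

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    name = tmp.name
print(f"{write_throughput(name):.1f} MB/s")
os.remove(name)
```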
| | | |
| Hardware specs: | | Hardware specs: |
Line 5,253: |
Line 5,138: |
| </html> | | </html> |
| | | |
− | [[category:ReiserFS]] | + | [[category:Reiser4]] |
| + | [[category:formatting-fixes-needed]] |
Table presents absolute values (of elapsed time, CPU usage, CPU utilization,
disk usage) for reiser4 with "cryptcompress" regular file plugin, and ratios
against this reiser4 for reiser4 with "unixfile" regular file plugin and ext3. Red number means ratio is larger
than 1.0, that is, reiser4 with "cryptcompress" regular file plugin is better in this test. Green number means that it loses in this test.
</td></tr>
|
A.MKFS=mkfs.reiser4 -y -o create=create_ccreg40,compressMode=col8 MOUNT_OPTIONS=noatime FSTYPE=reiser4 |
</tr>
B.MKFS=mkfs.reiser4 -y MOUNT_OPTIONS=noatime FSTYPE=reiser4
(unixfile regular file plugin) |
</tr>
C.MOUNT_OPTIONS=noatime,data=ordered FSTYPE=ext3 |
</tr>
#0:</td></tr>
|
</td>
| REAL_TIME</td>
| CPU_TIME</td>
| CPU_UTIL</td>
| DF</td>
</tr>
|
</td>
| A</td> | B/A </td> | C/A </td>
| A</td> | B/A </td> | C/A </td>
| A</td> | B/A </td> | C/A </td>
| A</td> | B/A </td> | C/A </td>
</tr>
|
CREATE</td>
| 53.36</td>
| 1.234 </td>
| 4.249 </td>
</tt></td>
| 28.79</td>
| 0.493 </td>
| 1.108 </td>
</tt></td>
| 94.36</td>
| 0.255 </td>
| 0.155 </td>
</tt></td>
| 775856</td>
| 2.550 </td>
| 2.825 </td>
</tt></td>
</tr>
|
COPY</td>
| 137.6</td>
| 1.543 </td>
| 2.931 </td>
</tt></td>
| 40.91</td>
| 0.716 </td>
| 0.975 </td>
</tt></td>
| 59.94</td>
| 0.257 </td>
| 0.183 </td>
</tt></td>
| 1551756</td>
| 2.550 </td>
| 2.825 </td>
</tt></td>
</tr>
|
READ</td>
| 161.17</td>
| 1.087 </td>
| 1.077 </td>
</tt></td>
| 48.35</td>
| 0.433 </td>
| 0.195 </td>
</tt></td>
| 33.23</td>
| 0.487 </td>
| 0.291 </td>
</tt></td>
| 1551756</td>
| 2.550 </td>
| 2.825 </td>
</tt></td>
</tr>
|
STATS</td>
| 24.12</td>
| 0.936 </td>
| 0.927 </td>
</tt></td>
| 6.76</td>
| 0.941 </td>
| 0.624 </td>
</tt></td>
| 27.97</td>
| 1.005 </td>
| 0.676 </td>
</tt></td>
| 1551756</td>
| 2.550 </td>
| 2.825 </td>
</tt></td>
</tr>
|
DELETE</td>
| 155.26</td>
| 1.091 </td>
| 0.989 </td>
</tt></td>
| 38.76</td>
| 0.824 </td>
| 0.108 </td>
</tt></td>
| 26.33</td>
| 0.758 </td>
| 0.104 </td>
</tt></td>
| 4</td>
| 1.000 </td>
| 0.000 </td>
</tt></td>
</tr>
|
#1:DD_MBCOUNT=5000 </td></tr>
|
</td>
| REAL_TIME</td>
| CPU_TIME</td>
| CPU_UTIL</td>
| DF</td>
</tr>
|
</td>
| A</td> | B/A </td> | C/A </td>
| A</td> | B/A </td> | C/A </td>
| A</td> | B/A </td> | C/A </td>
| A</td> | B/A </td> | C/A </td>
</tr>
|
dd_writing_largefile</td>
| 116.02</td>
| 1.430 </td>
| 1.553 </td>
</tt></td>
| 38.65</td>
| 0.514 </td>
| 0.619 </td>
</tt></td>
| 92.86</td>
| 0.155 </td>
| 0.149 </td>
</tt></td>
| 1909012</td>
| 2.682 </td>
| 2.685 </td>
</tt></td>
</tr>
|
dd_reading_largefile</td>
| 153.76</td>
| 0.996 </td>
| 1.001 </td>
</tt></td>
| 58.11</td>
| 0.192 </td>
| 0.147 </td>
</tt></td>
| 38.73</td>
| 0.224 </td>
| 0.152 </td>
</tt></td>
| 1909012</td>
| 2.682 </td>
| 2.685 </td>
</tt>
|
|
DIR=/mnt1 GAMMA=0.2 WRITE_BUFFER=131072 PHASE_APPEND=off SYNC=off PHASE_DELETE=rm NPROC=1 DEV=/dev/hda9 DD_MBCOUNT=5000 FILE_SIZE=8192 REP_COUNTER=1 PHASE_COPY=cp INFO_R4=2.6.15-mm4 cryptcompress-4.patch PHASE_READ=find BYTES=1024000000 PHASE_OVERWRITE=off PHASE_MODIFY=off </td></tr>
Legend: green color means the result is
better (less) than reference value from the first column, results
marked as red are worse than reference value,
best results are underlined other results which fit into 2%
margin of the best result are underlined also.
[edit] mongo 2.6.11
mongo comparison against xfs and ext2
- reiser4 </dt>
- reiser4-for-2.6.11-5.patch from <a href="ftp://ftp.namesys.com/pub/reiser4-for-2.6/2.6.11">ftp://ftp.namesys.com/pub/reiser4-for-2.6/2.6.11</a> </dd>
- mem total</dt>
- 254496</dd>
- machine </dt>
- bones</dd>
- kernel </dt>
- 2.6.11-reiser4-5 #2 SMP Sat Jun 4 20:06:47 MSD 2005</dd>
- date </dt>
- Fri Jun 17 23:52:17 2005</dd>
In this test 81% of files are chosen from the 0-10k size range and 19% from
the 10-100k size range.
Legend:
- A reiser4
- B reiserfs v3 (notail)
- C ext2
- D xfs default
Table presents absolute values (of elapsed time, CPU usage, CPU utilization,
disk usage) for reiser4, and ratios against reiser4 for all other
configurations. Red number means ratio is larger
than 1.0, that is, reiser4 is better in this test. Green number means that reiser4 loses in this test.
</td></tr>
|
A.FSTYPE=reiser4 |
</tr>
B.FSTYPE=reiserfs MOUNT_OPTIONS=notail |
</tr>
C.FSTYPE=ext2 |
</tr>
D.MKFS=mkfs.xfs -f FSTYPE=xfs |
</tr>
#0:</td></tr>
| </td>
| REAL_TIME</td>
| CPU_TIME</td>
| CPU_UTIL</td>
| DF</td>
</tr>
| </td>
| A</td> | B/A </td> | C/A </td> | D/A </td>
| A</td> | B/A </td> | C/A </td> | D/A </td>
| A</td> | B/A </td> | C/A </td> | D/A </td>
| A</td> | B/A </td> | C/A </td> | D/A </td>
</tr>
|
CREATE</td>
| 66.12</td>
| 2.022 </td>
| 2.686 </td>
| 4.288 </td>
</tt></td>
| 34.98</td>
| 0.901 </td>
| 1.114 </td>
| 1.445 </td>
</tt></td>
| 29.86</td>
| 0.424 </td>
| 0.398 </td>
| 0.398 </td>
</tt></td>
| 1623204</td>
| 1.086 </td>
| 1.107 </td>
| 1.098 </td>
</tt></td>
</tr>
|
COPY</td>
| 187.77</td>
| 1.438 </td>
| 1.751 </td>
| 3.733 </td>
</tt></td>
| 44.8</td>
| 0.883 </td>
| 1.124 </td>
| 1.161 </td>
</tt></td>
| 14.85</td>
| 0.606 </td>
| 0.611 </td>
| 0.353 </td>
</tt></td>
| 3245428</td>
| 1.087 </td>
| 1.107 </td>
| 1.098 </td>
</tt></td>
</tr>
|
READ</td>
| 151.01</td>
| 1.459 </td>
| 1.113 </td>
| 1.978 </td>
</tt></td>
| 44.34</td>
| 0.607 </td>
| 0.470 </td>
| 1.535 </td>
</tt></td>
| 18.54</td>
| 0.444 </td>
| 0.500 </td>
| 0.724 </td>
</tt></td>
| 3245428</td>
| 1.087 </td>
| 1.107 </td>
| 1.098 </td>
</tt></td>
</tr>
|
STATS</td>
| 22.04</td>
| 1.314 </td>
| 0.812 </td>
| 2.871 </td>
</tt></td>
| 8.61</td>
| 0.698 </td>
| 0.571 </td>
| 4.591 </td>
</tt></td>
| 20.11</td>
| 0.528 </td>
| 0.709 </td>
| 1.579 </td>
</tt></td>
| 3245428</td>
| 1.087 </td>
| 1.107 </td>
| 1.098 </td>
</tt></td>
</tr>
|
DELETE</td>
| 108.77</td>
| 0.313 </td>
| 1.193 </td>
| 3.071 </td>
</tt></td>
| 41</td>
| 0.637 </td>
| 0.091 </td>
| 1.795 </td>
</tt></td>
| 21.45</td>
| 1.795 </td>
| 0.077 </td>
| 0.556 </td>
</tt></td>
| 4</td>
| 0.000 </td>
| 0.000 </td>
| 14877.000 </td>
</tt></td>
</tr>
| #1:DD_MBCOUNT=5000 </td></tr>
| </td>
| REAL_TIME</td>
| CPU_TIME</td>
| CPU_UTIL</td>
| DF</td>
</tr>
| </td>
| A</td> | B/A </td> | C/A </td> | D/A </td>
| A</td> | B/A </td> | C/A </td> | D/A </td>
| A</td> | B/A </td> | C/A </td> | D/A </td>
| A</td> | B/A </td> | C/A </td> | D/A </td>
</tr>
|
dd_writing_largefile</td>
| 536.06</td>
| 1.005 </td>
| 1.017 </td>
| 0.982 </td>
</tt></td>
| 122.28</td>
| 0.826 </td>
| 0.819 </td>
| 0.806 </td>
</tt></td>
| 14.99</td>
| 0.771 </td>
| 0.711 </td>
| 0.742 </td>
</tt></td>
| 5120008</td>
| 1.001 </td>
| 1.001 </td>
| 1.012 </td>
</tt></td>
</tr>
|
dd_reading_largefile</td>
| 145.32</td>
| 1.031 </td>
| 0.965 </td>
| 0.982 </td>
</tt></td>
| 157.51</td>
| 0.947 </td>
| 0.890 </td>
| 0.880 </td>
</tt></td>
| 57.01</td>
| 0.901 </td>
| 0.909 </td>
| 0.884 </td>
</tt></td>
| 5120008</td>
| 1.001 </td>
| 1.001 </td>
| 1.012 </td>
</tt></td>
</tr>
| </td></tr>
|
| INFO_R4=2.6.11 + reiser4-5 REP_COUNTER=1 DEV=/dev/hda5 DD_MBCOUNT=5000 PHASE_OVERWRITE=off FILE_SIZE=8192 NPROC=3 PHASE_READ=find PHASE_DELETE=rm PHASE_APPEND=off WRITE_BUFFER=131072 DIR=/mnt1 PHASE_MODIFY=off BYTES=1024000000 PHASE_COPY=cp GAMMA=0.2 SYNC=off </td></tr>
| Produced by <a href=http://namesys.com/benchmarks/mongo_readme.html>Mongo</a> benchmark suite.</td></tr>
</table>
[edit] mongo 2.6.8.1-mm3
mongo comparison against ext3
- reiser4 </dt>
- large key</dd>
- mem total</dt>
- 254324</dd>
- machine </dt>
- bones</dd>
- kernel </dt>
- 2.6.8.1-mm3 #3 SMP Mon Aug 23 19:33:13 MSD 2004</dd>
- date </dt>
- Tue Aug 31 15:47:51 2004</dd>
In this test 81% of files are chosen from the 0-10k size range and 19% from
the 10-100k size range.
Legend:
- A reiser4
- B reiser4, extents only
- C reiserfs v3 (notail)
- D ext3 in data=writeback mode (meta-data only journalling)
- E ext3 in data=journal mode
- F ext3 in data=ordered mode
<img src="http://www.namesys.com/intbenchmarks/mongo/04.08.26/256MB.RAM/one-thread-8k.g02.charts/CREATE.0.png">
<img src="http://www.namesys.com/intbenchmarks/mongo/04.08.26/256MB.RAM/one-thread-8k.g02.charts/COPY.0.png">
<img src="http://www.namesys.com/intbenchmarks/mongo/04.08.26/256MB.RAM/one-thread-8k.g02.charts/READ.0.png">
<img src="http://www.namesys.com/intbenchmarks/mongo/04.08.26/256MB.RAM/one-thread-8k.g02.charts/STATS.0.png">
<img src="http://www.namesys.com/intbenchmarks/mongo/04.08.26/256MB.RAM/one-thread-8k.g02.charts/DELETE.0.png">
<img src="http://www.namesys.com/intbenchmarks/mongo/04.08.26/256MB.RAM/one-thread-8k.g02.charts/dd_writing_largefile.1.png">
<img src="http://www.namesys.com/intbenchmarks/mongo/04.08.26/256MB.RAM/one-thread-8k.g02.charts/dd_reading_largefile.1.png">
Table presents absolute values (of elapsed time, CPU usage, CPU utilization,
disk usage) for reiser4, and ratios against reiser4 for all other
configurations. Red number means ratio is larger
than 1.0, that is, reiser4 is better in this test. Green number means that reiser4 loses in this test.
</td></tr>
|
A.FSTYPE=reiser4 |
</tr>
B.FSTYPE=reiser4 MKFS=mkfs.reiser4 -q -o extent=extent40 |
</tr>
C.MOUNT_OPTIONS=notail FSTYPE=reiserfs |
</tr>
D.MOUNT_OPTIONS="data=writeback" FSTYPE=ext3 |
</tr>
E.MOUNT_OPTIONS="data=journal" FSTYPE=ext3 |
</tr>
F.MOUNT_OPTIONS="data=ordered" FSTYPE=ext3 |
</tr>
#0:</td></tr>
| </td>
| REAL_TIME</td>
| CPU_TIME</td>
| CPU_UTIL</td>
| DF</td>
</tr>
| </td>
| A</td> | B/A </td> | C/A </td> | D/A </td> | E/A </td> | F/A </td>
| A</td> | B/A </td> | C/A </td> | D/A </td> | E/A </td> | F/A </td>
| A</td> | B/A </td> | C/A </td> | D/A </td> | E/A </td> | F/A </td>
| A</td> | B/A </td> | C/A </td> | D/A </td> | E/A </td> | F/A </td>
</tr>
|
CREATE</td>
| 91.6</td>
| 0.988 </td>
| 1.983 </td>
| 2.592 </td>
| 3.010 </td>
| 2.256 </td>
</tt></td>
| 31.13</td>
| 0.965 </td>
| 0.826 </td>
| 2.577 </td>
| 2.529 </td>
| 2.802 </td>
</tt></td>
| 22.63</td>
| 0.981 </td>
| 0.350 </td>
| 0.791 </td>
| 0.738 </td>
| 1.000 </td>
</tt></td>
| 1978440</td>
| 1.000 </td>
| 1.088 </td>
| 1.108 </td>
| 1.108 </td>
| 1.108 </td>
</tt></td>
</tr>
|
COPY</td>
| 219.5</td>
| 0.968 </td>
| 1.674 </td>
| 2.241 </td>
| 2.105 </td>
| 1.819 </td>
</tt></td>
| 54.04</td>
| 0.938 </td>
| 0.792 </td>
| 1.694 </td>
| 2.004 </td>
| 1.860 </td>
</tt></td>
| 16.01</td>
| 0.996 </td>
| 0.460 </td>
| 0.663 </td>
| 0.839 </td>
| 0.890 </td>
</tt></td>
| 3956708</td>
| 1.000 </td>
| 1.088 </td>
| 1.108 </td>
| 1.108 </td>
| 1.108 </td>
</tt></td>
</tr>
|
READ</td>
| 187.34</td>
| 1.007 </td>
| 1.617 </td>
| 1.282 </td>
| 1.295 </td>
| 1.250 </td>
</tt></td>
| 38.61</td>
| 1.002 </td>
| 0.711 </td>
| 0.615 </td>
| 0.622 </td>
| 0.615 </td>
</tt></td>
| 13.05</td>
| 0.995 </td>
| 0.441 </td>
| 0.520 </td>
| 0.517 </td>
| 0.533 </td>
</tt></td>
| 3956708</td>
| 1.000 </td>
| 1.088 </td>
| 1.108 </td>
| 1.108 </td>
| 1.108 </td>
</tt></td>
</tr>
|
STATS</td>
| 23.71</td>
| 0.968 </td>
| 1.162 </td>
| 0.943 </td>
| 0.943 </td>
| 0.943 </td>
</tt></td>
| 10.91</td>
| 0.944 </td>
| 0.717 </td>
| 0.661 </td>
| 0.674 </td>
| 0.658 </td>
</tt></td>
| 24.46</td>
| 0.971 </td>
| 0.587 </td>
| 0.700 </td>
| 0.707 </td>
| 0.697 </td>
</tt></td>
| 3956708</td>
| 1.000 </td>
| 1.088 </td>
| 1.108 </td>
| 1.108 </td>
| 1.108 </td>
</tt></td>
</tr>
|
DELETE</td>
| 156.84</td>
| 0.993 </td>
| 0.233 </td>
| 1.264 </td>
| 1.270 </td>
| 1.216 </td>
</tt></td>
| 53.05</td>
| 0.938 </td>
| 0.440 </td>
| 0.209 </td>
| 0.215 </td>
| 0.214 </td>
</tt></td>
| 18.23</td>
| 0.947 </td>
| 1.758 </td>
| 0.157 </td>
| 0.160 </td>
| 0.167 </td>
</tt></td>
| 4</td>
| 1.000 </td>
| 0.000 </td>
| 0.000 </td>
| 0.000 </td>
| 0.000 </td>
</tt></td>
</tr>
#1: DD_MBCOUNT=768
{| border="1" cellpadding="2"
|-
!
! colspan="6" | REAL_TIME
! colspan="6" | CPU_TIME
! colspan="6" | CPU_UTIL
! colspan="6" | DF
|-
!
! A !! B/A !! C/A !! D/A !! E/A !! F/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A
|-
| dd_writing_largefile || 30.09 || 1.006 || 1.286 || 1.342 || 2.473 || 1.311 || 5.24 || 0.996 || 0.966 || 1.286 || 1.393 || 1.437 || 11.43 || 0.994 || 0.631 || 0.796 || 0.655 || 0.967 || 786436 || 1.000 || 1.001 || 1.001 || 1.001 || 1.001
|-
| dd_reading_largefile || 28.38 || 0.969 || 1.010 || 0.980 || 0.982 || 0.999 || 4.37 || 0.979 || 1.014 || 0.911 || 0.895 || 0.936 || 8.88 || 1.030 || 0.922 || 0.858 || 0.854 || 0.867 || 786436 || 1.000 || 1.001 || 1.001 || 1.001 || 1.001
|}

Test parameters: <tt>REP_COUNTER=1 PHASE_COPY=cp INFO_R4=2.6.8.1-mm3 + parse_options.patch FILE_SIZE=8192 DEV=/dev/hda6 PHASE_MODIFY=off DD_MBCOUNT=768 PHASE_APPEND=off PHASE_OVERWRITE=off SYNC=off DIR=/mnt1 PHASE_DELETE=rm NPROC=1 BYTES=1024000000 GAMMA=0.2 PHASE_READ=find WRITE_BUFFER=131072</tt>

Produced by the <a href="http://namesys.com/">Mongo</a> benchmark suite.
</table>
== slow.c 2004-03-26 ==

slow.c comparison against ext2 and ext3, 2004-03-26.

These are <a href="http://www.jburgess.uklinux.net/slow.c">slow.c</a> benchmark results for the latest 2004.03.26 reiser4 snapshot.
<b>slow.c</b> is a simple program by Jon Burgess which writes
and reads multiple data streams. For the details and the source code, see
<a href="http://marc.theaimsgroup.com/?l=linux-kernel&m=107652683608384&w=2">
the discussion</a> on the linux-kernel mailing list.
* kernel : 2.6.5-rc2
* RAM : 256Mb
* reiser4 : <a href="http://www.namesys.com/snapshots/2004.03.26/">2004.03.26 snapshot</a>
Hardware specs (dual CPU AMD Athlon(tm) 1.4GHz; /proc/cpuinfo excerpt):
 processor  : 1
 vendor_id  : AuthenticAMD
 cpu family : 6
 model      : 6
 model name : AMD Athlon(tm) Processor
 stepping   : 2
 cpu MHz    : 1460.098
 cache size : 256 KB
 bogomips   : 2916.35
 # hdparm /dev/hda6
 /dev/hda6:
  multcount    = 16 (on)
  IO_support   = 1 (32-bit)
  unmaskirq    = 1 (on)
  using_dma    = 1 (on)
  keepsettings = 0 (off)
  readonly     = 0 (off)
  readahead    = 256 (on)
  geometry     = 65535/16/63, sectors = 35937342, start = 84164598
 # hdparm -t /dev/hda6
 /dev/hda6:
  Timing buffered disk reads: 84 MB in 3.07 seconds = 27.39 MB/sec
 # hdparm -i /dev/hda
 /dev/hda:
  Model=IC35L060AVER07-0, FwRev=ER6OA44A, SerialNo=SZPTZMB6154
  Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs }
  RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=40
  BuffType=DualPortCache, BuffSize=1916kB, MaxMultSect=16, MultSect=16
  CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=120103200
  IORDY=on/off, tPIO={min:240,w/IORDY:120}, tDMA={min:120,rec:120}
  PIO modes:  pio0 pio1 pio2 pio3 pio4
  DMA modes:  mdma0 mdma1 mdma2
  UDMA modes: udma0 udma1 udma2
  AdvancedPM=yes: disabled (255) WriteCache=enabled
  Drive conforms to: ATA/ATAPI-5 T13 1321D revision 1
  * signifies the current active mode
<!--
(500Mb of data)
test : ./slow foo 500
Results :
==============================================================
| 1 stream | 2 streams
--------------+-----------------------------------------------
| WRITE READ | WRITE READ
--------------+-----------------------------------------------
ext2 25.08Mb/s 27.08Mb/s 13.72Mb/s 14.04Mb/s
reiser4 26.31Mb/s 26.99Mb/s 24.03Mb/s 26.84Mb/s
reiser4-extents 25.28Mb/s 27.40Mb/s 24.12Mb/s 26.85Mb/s
ext3-ordered 20.99Mb/s 26.40Mb/s 12.01Mb/s 13.34Mb/s
ext3-journal 10.13Mb/s 24.48Mb/s 8.87Mb/s 13.26Mb/s
reiserfs 20.42Mb/s 27.67Mb/s 12.98Mb/s 13.13Mb/s
reiserfs-notail 20.07Mb/s 27.58Mb/s 13.04Mb/s 13.25Mb/s
==============================================================
-->
Test: <tt>./slow foo 1000</tt> (1000Mb of data). Results:
<!--
==============================================================================================================
| 1 stream | 2 streams | 4 streams | 8 stream
--------------+-----------------------------------------------------------------------------------------------
| WRITE READ | WRITE READ | WRITE READ | WRITE READ
--------------+-----------------------------------------------------------------------------------------------
ext2 24.66Mb/s 27.56Mb/s 13.40Mb/s 13.67Mb/s 7.73Mb/s 6.94Mb/s 6.69Mb/s 3.52Mb/s
reiser4 25.42Mb/s 27.71Mb/s 23.96Mb/s 26.34Mb/s 24.55Mb/s 26.58Mb/s 24.90Mb/s 26.76Mb/s
reiser4-extents 25.60Mb/s 27.68Mb/s 24.19Mb/s 25.92Mb/s 25.24Mb/s 27.12Mb/s 25.39Mb/s 26.72Mb/s
ext3-ordered 20.05Mb/s 26.46Mb/s 11.06Mb/s 13.12Mb/s 9.63Mb/s 6.76Mb/s 10.02Mb/s 3.48Mb/s
ext3-journal 10.10Mb/s 26.81Mb/s 8.87Mb/s 13.08Mb/s 8.59Mb/s 6.84Mb/s 8.14Mb/s 3.47Mb/s
reiserfs 20.19Mb/s 27.48Mb/s 12.69Mb/s 13.03Mb/s 8.27Mb/s 6.84Mb/s 7.87Mb/s 4.13Mb/s
reiserfs-notail 20.31Mb/s 27.10Mb/s 12.74Mb/s 13.09Mb/s 8.33Mb/s 6.89Mb/s 7.87Mb/s 4.17Mb/s
=============================================================================================================
-->
<img src="intbenchmarks/slow/04.03.25-int.snapshot.bones/wr.1.png">
<img src="intbenchmarks/slow/04.03.25-int.snapshot.bones/wr.2.png">
<img src="intbenchmarks/slow/04.03.25-int.snapshot.bones/wr.4.png">
<img src="intbenchmarks/slow/04.03.25-int.snapshot.bones/wr.8.png">
<img src="intbenchmarks/slow/04.03.25-int.snapshot.bones/rd.1.png">
<img src="intbenchmarks/slow/04.03.25-int.snapshot.bones/rd.2.png">
<img src="intbenchmarks/slow/04.03.25-int.snapshot.bones/rd.4.png">
<img src="intbenchmarks/slow/04.03.25-int.snapshot.bones/rd.8.png">
== mongo 2003-11-20 ==

mongo comparison against ext3.
; mem total : 255716
; machine : belka
; kernel : 2.6.0-test9 #2 SMP Thu Nov 20 16:08:42 MSK 2003
; date : Thu Nov 20 16:16:50 2003
In this test 80% of files are chosen from the 0-8k size range, 16% from
the 0-80k size range, 0.8 x 4% from the 0-800k size range, and so on. Most
files are small; most bytes are in large files.
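The size distribution described above can be stated recursively: with probability 0.8 a file falls in the base range, and each successive decade is ten times larger and five times less likely. A small Python sketch of a sampler for it; the function name is ours, and mongo's actual generator may differ in details:

```python
import random

def mongo_file_size(base=8192, gamma=0.2, rng=random):
    """Sample a file size: with probability 0.8 uniform in [0, base),
    with probability 0.8 * 0.2 uniform in [0, 10 * base), and so on.
    gamma=0.2 matches the GAMMA=0.2 mongo parameter."""
    tier = 0
    while rng.random() < gamma:   # escalate to the next decade with prob. gamma
        tier += 1
    return rng.randrange(base * 10 ** tier)
```

With <tt>base=8192</tt> this reproduces the 80% / 16% / 3.2% tier weights quoted above.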
Legend:
* A : reiser4
* B : reiser4, extents only
* C : reiserfs v3
* D : ext3 in data=writeback mode (metadata-only journalling)
* E : ext3 in data=journal mode
* F : ext3 in data=ordered mode
* G : ext3 with htree (hashed directories)

The table presents absolute values (of elapsed time, CPU usage, and disk
usage) for reiser4, and ratios against reiser4 for all other
configurations. A red number means the ratio is larger than 1.0, that is,
reiser4 is better in that test; a green number means reiser4 loses.
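Concretely, every cell other than column A is just the other configuration's absolute measurement divided by reiser4's. A tiny sketch of how the printed cells are derived (the helper name is ours):

```python
def ratio_row(absolute, others):
    """Given reiser4's absolute value and the other configurations'
    absolute values, produce the A, B/A, C/A, ... cells as printed
    in the tables (ratios to three decimal places)."""
    return [f"{absolute:g}"] + [f"{v / absolute:.3f}" for v in others]

# Hypothetical example: if reiser4's CREATE took 21.81s and another
# configuration took 86.87s, that configuration's cell prints as 3.983.
```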
Configurations:
* A : INFO_R4= FSTYPE=reiser4
* B : INFO_R4= MKFS=mkfs.reiser4 -q -o policy=extents FSTYPE=reiser4
* C : FSTYPE=reiserfs
* D : MOUNT_OPTIONS=data=writeback FSTYPE=ext3
* E : MOUNT_OPTIONS=data=journal FSTYPE=ext3
* F : MOUNT_OPTIONS=data=ordered FSTYPE=ext3
* G : MKFS=mkfs.ext3 -O dir_index MOUNT_OPTIONS=data=ordered FSTYPE=ext3

#0:
{| border="1" cellpadding="2"
|-
!
! colspan="7" | REAL_TIME
! colspan="7" | CPU_TIME
! colspan="7" | DF
|-
!
! A !! B/A !! C/A !! D/A !! E/A !! F/A !! G/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A !! G/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A !! G/A
|-
| CREATE || 21.81 || 1.171 || 3.983 || 3.253 || 3.702 || 3.161 || 3.212 || 6.38 || 1.130 || 1.020 || 2.461 || 2.461 || 2.354 || 0.851 || 607612 || 1.091 || 1.035 || 1.107 || 1.107 || 1.107 || 1.107
|-
| COPY || 64.37 || 1.089 || 3.046 || 1.980 || 1.834 || 1.929 || 6.246 || 11.55 || 1.047 || 0.797 || 1.590 || 1.725 || 1.542 || 0.698 || 1214992 || 1.091 || 1.034 || 1.107 || 1.107 || 1.107 || 1.108
|-
| READ || 45.38 || 1.026 || 3.406 || 1.248 || 1.307 || 1.232 || 7.192 || 10.13 || 0.934 || 0.517 || 0.454 || 0.453 || 0.444 || 0.504 || 1214992 || 1.091 || 1.034 || 1.107 || 1.107 || 1.107 || 1.108
|-
| STATS || 5.74 || 1.030 || 3.413 || 1.014 || 1.033 || 1.021 || 1.634 || 2.34 || 1.000 || 0.936 || 0.761 || 0.791 || 0.774 || 0.744 || 1214992 || 1.091 || 1.034 || 1.107 || 1.107 || 1.107 || 1.108
|-
| DELETE || 46.94 || 0.424 || 0.520 || 1.017 || 1.043 || 0.956 || 1.315 || 14.19 || 0.743 || 0.443 || 0.200 || 0.206 || 0.201 || 0.234 || 4 || 1.000 || 0.000 || 0.000 || 0.000 || 0.000 || 0.000
|}

#1: DD_MBCOUNT=768
{| border="1" cellpadding="2"
|-
!
! colspan="7" | REAL_TIME
! colspan="7" | CPU_TIME
! colspan="7" | DF
|-
!
! A !! B/A !! C/A !! D/A !! E/A !! F/A !! G/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A !! G/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A !! G/A
|-
| dd_writing_largefile || 29.33 || 1.026 || 1.184 || 1.102 || 2.499 || 1.097 || 1.098 || 2.61 || 1.008 || 0.659 || 1.437 || 2.054 || 1.556 || 1.571 || 786436 || 1.000 || 1.001 || 1.001 || 1.001 || 1.001 || 1.001
|-
| dd_reading_largefile || 22.96 || 1.000 || 1.056 || 1.003 || 1.004 || 1.003 || 1.006 || 2.26 || 0.991 || 0.912 || 0.796 || 0.765 || 0.779 || 0.783 || 786436 || 1.000 || 1.001 || 1.001 || 1.001 || 1.001 || 1.001
|}

Test parameters: <tt>NPROC=1 DIR=/mnt/testfs SYNC=off PHASE_COPY=cp REP_COUNTER=1 GAMMA=0.2 PHASE_OVERWRITE=off FILE_SIZE=8192 BYTES=512000000 PHASE_APPEND=off PHASE_READ=find DEV=/dev/hdb3 DD_MBCOUNT=768 WRITE_BUFFER=131072 PHASE_DELETE=rm PHASE_MODIFY=off</tt>

Produced by the <a href="http://namesys.com/benchmarks/mongo_readme.html">Mongo</a> benchmark suite.
== mongo 2003-09-25 ==

mongo comparison against ext3.
; mem total : 255048
; machine : belka
; kernel : 2.6.0-test5 #33 SMP Thu Sep 25 15:45:38 MSD 2003
; date : Thu Sep 25 15:57:38 2003
In this test 80% of files are chosen from the 0-8k size range, 16% from
the 0-80k size range, 0.8 x 4% from the 0-800k size range, and so on. Most
files are small; most bytes are in large files.

Legend:
* A : reiser4
* B : reiser4, extents only
* C : reiserfs v3
* D : ext3 in data=writeback mode (metadata-only journalling)
* E : ext3 in data=journal mode
* F : ext3 in data=ordered mode
* G : ext3 with htree (hashed directories)

The table presents absolute values (of elapsed time, CPU usage, and disk
usage) for reiser4, and ratios against reiser4 for all other
configurations. A red number means the ratio is larger than 1.0, that is,
reiser4 is better in that test; a green number means reiser4 loses.
Configurations:
* A : INFO_R4= FSTYPE=reiser4
* B : INFO_R4= MKFS=mkfs.reiser4 -q -o policy=extents FSTYPE=reiser4
* C : FSTYPE=reiserfs
* D : MOUNT_OPTIONS=data=writeback FSTYPE=ext3
* E : MOUNT_OPTIONS=data=journal FSTYPE=ext3
* F : MOUNT_OPTIONS=data=ordered FSTYPE=ext3
* G : MKFS=mkfs.ext3 -O dir_index MOUNT_OPTIONS=data=ordered FSTYPE=ext3

#0:
{| border="1" cellpadding="2"
|-
!
! colspan="7" | REAL_TIME
! colspan="7" | CPU_TIME
! colspan="7" | DF
|-
!
! A !! B/A !! C/A !! D/A !! E/A !! F/A !! G/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A !! G/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A !! G/A
|-
| CREATE || 23.57 || 1.158 || 3.714 || 3.263 || 3.234 || 3.020 || 3.376 || 6.66 || 1.075 || 0.947 || 2.240 || 2.357 || 2.264 || 0.835 || 608548 || 1.090 || 1.034 || 1.105 || 1.105 || 1.105 || 1.106
|-
| COPY || 64.98 || 1.083 || 3.050 || 2.023 || 1.810 || 1.908 || 6.850 || 12.18 || 1.057 || 0.776 || 1.507 || 1.603 || 1.518 || 0.743 || 1216784 || 1.090 || 1.033 || 1.105 || 1.105 || 1.105 || 1.106
|-
| READ || 44.65 || 1.028 || 3.733 || 1.237 || 1.114 || 1.179 || 7.694 || 10.28 || 0.933 || 0.590 || 0.608 || 0.593 || 0.608 || 0.620 || 1216784 || 1.090 || 1.033 || 1.105 || 1.105 || 1.105 || 1.106
|-
| STATS || 5.88 || 0.998 || 3.139 || 0.981 || 1.020 || 0.929 || 1.655 || 2.29 || 0.987 || 0.900 || 0.747 || 0.782 || 0.747 || 0.755 || 1216784 || 1.090 || 1.033 || 1.105 || 1.105 || 1.105 || 1.106
|-
| DELETE || 46.65 || 0.438 || 0.504 || 1.109 || 1.023 || 1.022 || 1.376 || 14.19 || 0.746 || 0.431 || 0.206 || 0.211 || 0.211 || 0.232 || 4 || 1.000 || 0.000 || 0.000 || 0.000 || 0.000 || 0.000
|}

#1: DD_MBCOUNT=768
{| border="1" cellpadding="2"
|-
!
! colspan="7" | REAL_TIME
! colspan="7" | CPU_TIME
! colspan="7" | DF
|-
!
! A !! B/A !! C/A !! D/A !! E/A !! F/A !! G/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A !! G/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A !! G/A
|-
| dd_writing_largefile || 30.78 || 1.017 || 1.177 || 1.063 || 2.394 || 1.066 || 1.056 || 3.11 || 0.981 || 0.553 || 1.180 || 1.701 || 1.296 || 1.318 || 786436 || 1.000 || 1.001 || 1.001 || 1.001 || 1.001 || 1.001
|-
| dd_reading_largefile || 22.96 || 1.001 || 1.045 || 1.005 || 1.005 || 1.004 || 1.006 || 2.41 || 0.996 || 0.867 || 0.739 || 0.718 || 0.739 || 0.722 || 786436 || 1.000 || 1.001 || 1.001 || 1.001 || 1.001 || 1.001
|}

Test parameters: <tt>NPROC=1 DIR=/mnt/testfs SYNC=off PHASE_COPY=cp REP_COUNTER=1 GAMMA=0.2 PHASE_OVERWRITE=off FILE_SIZE=8192 BYTES=512000000 PHASE_APPEND=off PHASE_READ=find DEV=/dev/hdb3 DD_MBCOUNT=768 WRITE_BUFFER=131072 PHASE_DELETE=rm PHASE_MODIFY=off</tt>

Produced by the <a href="http://namesys.com/benchmarks/mongo_readme.html">Mongo</a> benchmark suite.
== mongo 2003-08-28 ==

mongo comparison against ext3.
; mem total : 256276
; machine : belka
; kernel : 2.6.0-test4 #194 SMP Thu Aug 28 17:18:47 MSD 2003
; date : Thu Aug 28 17:20:18 2003
In this test 80% of files are chosen from the 0-8k size range, 16% from
the 0-80k size range, 0.8 x 4% from the 0-800k size range, and so on. Most
files are small; most bytes are in large files.

Legend (this run has seven configurations, matching the A-G columns below):
* A : reiser4
* B : reiser4, extents only
* C : reiserfs v3
* D : ext3 in data=writeback mode (metadata-only journalling)
* E : ext3 in data=journal mode
* F : ext3 in data=ordered mode
* G : ext3 with htree (hashed directories)

The table presents absolute values (of elapsed time, CPU usage, and disk
usage) for reiser4, and ratios against reiser4 for all other
configurations. A red number means the ratio is larger than 1.0, that is,
reiser4 is better in that test; a green number means reiser4 loses.
Configurations:
* A : INFO_R4= FSTYPE=reiser4
* B : INFO_R4= MKFS=mkfs.reiser4 -q -o policy=extents FSTYPE=reiser4
* C : FSTYPE=reiserfs
* D : MOUNT_OPTIONS=data=writeback FSTYPE=ext3
* E : MOUNT_OPTIONS=data=journal FSTYPE=ext3
* F : MOUNT_OPTIONS=data=ordered FSTYPE=ext3
* G : MKFS=mkfs.ext3 -O dir_index MOUNT_OPTIONS=data=ordered FSTYPE=ext3

#0:
{| border="1" cellpadding="2"
|-
!
! colspan="7" | REAL_TIME
! colspan="7" | CPU_TIME
! colspan="7" | DF
|-
!
! A !! B/A !! C/A !! D/A !! E/A !! F/A !! G/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A !! G/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A !! G/A
|-
| CREATE || 21.94 || 1.056 || 3.957 || 3.049 || 3.430 || 3.399 || 3.558 || 6.7 || 1.104 || 0.913 || 2.213 || 2.334 || 2.345 || 0.821 || 608452 || 1.091 || 1.034 || 1.105 || 1.105 || 1.105 || 1.106
|-
| COPY || 64.05 || 1.078 || 3.112 || 1.964 || 1.703 || 2.022 || 7.356 || 11.37 || 1.039 || 0.819 || 1.538 || 1.692 || 1.568 || 0.708 || 1216572 || 1.091 || 1.033 || 1.106 || 1.106 || 1.106 || 1.106
|-
| READ || 52.53 || 1.072 || 2.882 || 1.056 || 1.126 || 1.124 || 7.158 || 9.8 || 0.914 || 0.538 || 0.489 || 0.467 || 0.456 || 0.551 || 1216572 || 1.091 || 1.033 || 1.106 || 1.106 || 1.106 || 1.106
|-
| STATS || 5.82 || 0.973 || 3.251 || 1.040 || 1.009 || 1.048 || 1.641 || 2.29 || 0.991 || 0.926 || 0.755 || 0.742 || 0.751 || 0.734 || 1216572 || 1.091 || 1.033 || 1.106 || 1.106 || 1.106 || 1.106
|-
| DELETE || 46.96 || 0.409 || 0.491 || 0.949 || 0.988 || 0.987 || 1.382 || 13.89 || 0.734 || 0.453 || 0.210 || 0.204 || 0.202 || 0.238 || 4 || 1.000 || 0.000 || 0.000 || 0.000 || 0.000 || 0.000
|}

#1: DD_MBCOUNT=768
{| border="1" cellpadding="2"
|-
!
! colspan="7" | REAL_TIME
! colspan="7" | CPU_TIME
! colspan="7" | DF
|-
!
! A !! B/A !! C/A !! D/A !! E/A !! F/A !! G/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A !! G/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A !! G/A
|-
| dd_writing_largefile || 26.1 || 1.006 || 1.205 || 1.066 || 2.353 || 1.068 || 1.070 || 3.18 || 1.028 || 0.547 || 1.173 || 1.708 || 1.327 || 1.296 || 786436 || 1.000 || 1.001 || 1.001 || 1.001 || 1.001 || 1.001
|-
| dd_reading_largefile || 18.99 || 1.009 || 1.072 || 1.009 || 1.007 || 1.006 || 1.008 || 2.12 || 1.000 || 0.925 || 0.877 || 0.844 || 0.830 || 0.811 || 786436 || 1.000 || 1.001 || 1.001 || 1.001 || 1.001 || 1.001
|}

Test parameters: <tt>NPROC=1 DIR=/mnt/testfs SYNC=off PHASE_COPY=cp REP_COUNTER=1 GAMMA=0.2 PHASE_OVERWRITE=off FILE_SIZE=8192 BYTES=512000000 PHASE_APPEND=off PHASE_READ=find DEV=/dev/hdb3 DD_MBCOUNT=768 WRITE_BUFFER=131072 PHASE_DELETE=rm PHASE_MODIFY=off</tt>

Produced by the <a href="http://namesys.com/benchmarks/mongo_readme.html">Mongo</a> benchmark suite.
== mongo 2003-08-27 ==

mongo comparison against ext3.
; mem total : 256276
; machine : belka
; kernel : 2.6.0-test4 #189 SMP Wed Aug 27 20:36:51 MSD 2003
; date : Wed Aug 27 20:44:02 2003
In this test 80% of files are chosen from the 0-8k size range, 16% from
the 0-80k size range, 0.8 x 4% from the 0-800k size range, and so on. Most
files are small; most bytes are in large files.

Legend:
* A : reiser4
* B : reiser4, extents only
* C : ext3 in data=writeback mode (metadata-only journalling)
* D : ext3 in data=journal mode
* E : ext3 in data=ordered mode
* F : ext3 with htree (hashed directories)

The table presents absolute values (of elapsed time, CPU usage, and disk
usage) for reiser4, and ratios against reiser4 for all other
configurations. A red number means the ratio is larger than 1.0, that is,
reiser4 is better in that test; a green number means reiser4 loses.
Configurations:
* A : INFO_R4= FSTYPE=reiser4
* B : INFO_R4= MKFS=mkfs.reiser4 -q -o policy=extents FSTYPE=reiser4
* C : MOUNT_OPTIONS=data=writeback FSTYPE=ext3
* D : MOUNT_OPTIONS=data=journal FSTYPE=ext3
* E : MOUNT_OPTIONS=data=ordered FSTYPE=ext3
* F : MKFS=mkfs.ext3 -O dir_index MOUNT_OPTIONS=data=ordered FSTYPE=ext3

#0:
{| border="1" cellpadding="2"
|-
!
! colspan="6" | REAL_TIME
! colspan="6" | CPU_TIME
! colspan="6" | DF
|-
!
! A !! B/A !! C/A !! D/A !! E/A !! F/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A
|-
| CREATE || 22.41 || 1.108 || 3.673 || 3.325 || 2.975 || 3.213 || 7.66 || 1.069 || 1.347 || 1.415 || 1.410 || 0.708 || 635264 || 1.096 || 1.110 || 1.110 || 1.110 || 1.111
|-
| COPY || 90.92 || 1.099 || 1.471 || 1.221 || 1.470 || 4.989 || 12.14 || 1.068 || 1.066 || 1.241 || 1.094 || 0.668 || 1269840 || 1.096 || 1.110 || 1.110 || 1.110 || 1.112
|-
| READ || 82.21 || 1.063 || 0.861 || 0.852 || 0.791 || 4.417 || 10.57 || 0.914 || 0.400 || 0.428 || 0.402 || 0.534 || 1269840 || 1.096 || 1.110 || 1.110 || 1.110 || 1.112
|-
| STATS || 8.52 || 0.993 || 0.822 || 0.816 || 0.811 || 1.335 || 2.96 || 0.997 || 0.561 || 0.564 || 0.584 || 0.608 || 1269840 || 1.096 || 1.110 || 1.110 || 1.110 || 1.112
|-
| DELETE || 69.69 || 0.301 || 0.749 || 0.717 || 0.659 || 0.912 || 14.73 || 0.703 || 0.208 || 0.207 || 0.213 || 0.237 || 4 || 1.000 || 0.000 || 0.000 || 0.000 || 0.000
|}

#1: DD_MBCOUNT=768
{| border="1" cellpadding="2"
|-
!
! colspan="6" | REAL_TIME
! colspan="6" | CPU_TIME
! colspan="6" | DF
|-
!
! A !! B/A !! C/A !! D/A !! E/A !! F/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A
|-
| dd_writing_largefile || 25.85 || 1.000 || 1.092 || 2.335 || 1.085 || 1.095 || 3.27 || 0.982 || 1.159 || 1.648 || 1.251 || 1.254 || 786436 || 1.000 || 1.001 || 1.001 || 1.001 || 1.001
|-
| dd_reading_largefile || 19 || 0.999 || 1.005 || 1.007 || 1.007 || 1.007 || 2.18 || 0.963 || 0.807 || 0.803 || 0.789 || 0.803 || 786436 || 1.000 || 1.001 || 1.001 || 1.001 || 1.001
|}

Test parameters: <tt>NPROC=1 DIR=/mnt/testfs SYNC=off PHASE_COPY=cp REP_COUNTER=1 GAMMA=0.2 PHASE_OVERWRITE=off FILE_SIZE=8000 BYTES=512000000 PHASE_APPEND=off PHASE_READ=find DEV=/dev/hdb3 DD_MBCOUNT=768 WRITE_BUFFER=131072 PHASE_DELETE=rm PHASE_MODIFY=off</tt>

Produced by the <a href="http://namesys.com/benchmarks/mongo_readme.html">Mongo</a> benchmark suite.
This is the same test as above, but with base file size 4k, that is,
in this test 80% of files are chosen from the 0-4k size range, 16%
from the 0-40k size range, 0.8 x 4% from the 0-400k size range, etc.
; mem total : 255580
; machine : belka
; kernel : 2.6.0-test4 #176 SMP Tue Aug 26 19:09:38 MSD 2003
; date : Wed Aug 27 12:41:54 2003
Configurations:
* A : INFO_R4= FSTYPE=reiser4
* B : INFO_R4= MKFS=mkfs.reiser4 -q -o policy=extents FSTYPE=reiser4
* C : MOUNT_OPTIONS=data=writeback FSTYPE=ext3
* D : MOUNT_OPTIONS=data=journal FSTYPE=ext3
* E : MOUNT_OPTIONS=data=ordered FSTYPE=ext3
* F : MKFS=mkfs.ext3 -O dir_index MOUNT_OPTIONS=data=ordered FSTYPE=ext3

#0:
{| border="1" cellpadding="2"
|-
!
! colspan="6" | REAL_TIME
! colspan="6" | CPU_TIME
! colspan="6" | DF
|-
!
! A !! B/A !! C/A !! D/A !! E/A !! F/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A
|-
| CREATE || 33.86 || 1.223 || 1.305 || 2.895 || 1.549 || 1.298 || 14.11 || 1.118 || 1.967 || 2.046 || 2.045 || 0.647 || 789424 || 1.208 || 1.180 || 1.180 || 1.180 || 1.181
|-
| COPY || 119.68 || 1.228 || 1.237 || 1.397 || 1.277 || 7.061 || 23.05 || 1.108 || 1.484 || 1.683 || 1.515 || 0.691 || 1578216 || 1.208 || 1.180 || 1.180 || 1.180 || 1.182
|-
| READ || 118.5 || 1.217 || 1.041 || 1.065 || 1.020 || 6.585 || 19.84 || 0.993 || 0.436 || 0.446 || 0.431 || 0.540 || 1578216 || 1.208 || 1.180 || 1.180 || 1.180 || 1.182
|-
| STATS || 24.69 || 0.951 || 0.677 || 0.696 || 0.677 || 1.151 || 7.75 || 1.008 || 0.590 || 0.582 || 0.583 || 0.645 || 1578216 || 1.208 || 1.180 || 1.180 || 1.180 || 1.182
|-
| DELETE || 114.49 || 0.438 || 0.174 || 0.188 || 0.177 || 0.257 || 32.64 || 0.790 || 0.193 || 0.199 || 0.194 || 0.223 || 4 || 1.000 || 0.000 || 0.000 || 0.000 || 0.000
|}

#1: DD_MBCOUNT=768
{| border="1" cellpadding="2"
|-
!
! colspan="6" | REAL_TIME
! colspan="6" | CPU_TIME
! colspan="6" | DF
|-
!
! A !! B/A !! C/A !! D/A !! E/A !! F/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A
|-
| dd_writing_largefile || 26.24 || 1.002 || 1.066 || 2.311 || 1.056 || 1.063 || 3.25 || 0.997 || 1.138 || 1.622 || 1.286 || 1.298 || 786436 || 1.000 || 1.001 || 1.001 || 1.001 || 1.001
|-
| dd_reading_largefile || 19.04 || 0.994 || 1.002 || 1.003 || 1.002 || 1.001 || 2.08 || 1.038 || 0.870 || 0.870 || 0.870 || 0.837 || 786436 || 1.000 || 1.001 || 1.001 || 1.001 || 1.001
|}

Test parameters: <tt>NPROC=1 DIR=/mnt/testfs SYNC=off PHASE_COPY=cp REP_COUNTER=1 GAMMA=0.2 PHASE_OVERWRITE=off FILE_SIZE=4000 BYTES=512000000 PHASE_APPEND=off PHASE_READ=find DEV=/dev/hdb3 DD_MBCOUNT=768 WRITE_BUFFER=131072 PHASE_DELETE=rm PHASE_MODIFY=off</tt>

Produced by the <a href="http://namesys.com/benchmarks/mongo_readme.html">Mongo</a> benchmark suite.
== mongo 2003-08-26 ==

mongo comparison against ext3.
; mem total : 904048
; machine : belka
; kernel : 2.6.0-test4 #176 SMP Tue Aug 26 19:09:38 MSD 2003
; date : Tue Aug 26 19:34:39 2003
In this test 80% of files are chosen from the 0-4k size range, 16% from
the 0-40k size range, 0.8 x 4% from the 0-400k size range, and so on. Most
files are small; most bytes are in large files.

Legend:
* A : reiser4
* B : reiser4, extents only
* C : ext3 in data=writeback mode (metadata-only journalling)
* D : ext3 in data=journal mode
* E : ext3 in data=ordered mode
* F : ext3 with htree (hashed directories)

The table presents absolute values (of elapsed time, CPU usage, and disk
usage) for reiser4, and ratios against reiser4 for all other
configurations. A red number means the ratio is larger than 1.0, that is,
reiser4 is better in that test; a green number means reiser4 loses.
Configurations:
* A : INFO_R4= FSTYPE=reiser4
* B : INFO_R4= MKFS=mkfs.reiser4 -q -o policy=extents FSTYPE=reiser4
* C : MOUNT_OPTIONS=data=writeback FSTYPE=ext3
* D : MOUNT_OPTIONS=data=journal FSTYPE=ext3
* E : MOUNT_OPTIONS=data=ordered FSTYPE=ext3
* F : MKFS=mkfs.ext3 -O dir_index MOUNT_OPTIONS=data=ordered FSTYPE=ext3

#0:
{| border="1" cellpadding="2"
|-
!
! colspan="6" | REAL_TIME
! colspan="6" | CPU_TIME
! colspan="6" | DF
|-
!
! A !! B/A !! C/A !! D/A !! E/A !! F/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A
|-
| CREATE || 27.6 || 1.311 || 1.567 || 3.538 || 1.668 || 1.566 || 13.55 || 1.166 || 2.035 || 2.162 || 2.189 || 0.670 || 788884 || 1.208 || 1.181 || 1.181 || 1.181 || 1.182
|-
| COPY || 113.71 || 1.237 || 1.167 || 1.460 || 1.227 || 7.387 || 23.13 || 1.169 || 1.498 || 1.691 || 1.591 || 0.709 || 1577560 || 1.208 || 1.181 || 1.181 || 1.181 || 1.183
|-
| READ || 111.51 || 1.239 || 1.157 || 1.176 || 1.096 || 7.017 || 20.76 || 1.042 || 0.424 || 0.415 || 0.416 || 0.521 || 1577560 || 1.208 || 1.181 || 1.181 || 1.181 || 1.183
|-
| STATS || 20.22 || 1.034 || 0.834 || 0.827 || 0.832 || 1.439 || 7.47 || 1.009 || 0.590 || 0.585 || 0.584 || 0.631 || 1577560 || 1.208 || 1.181 || 1.181 || 1.181 || 1.183
|-
| DELETE || 110.98 || 0.437 || 0.183 || 0.180 || 0.185 || 0.277 || 33.03 || 0.838 || 0.196 || 0.192 || 0.193 || 0.221 || 4 || 1.000 || 0.000 || 0.000 || 0.000 || 0.000
|}

#1: DD_MBCOUNT=768
{| border="1" cellpadding="2"
|-
!
! colspan="6" | REAL_TIME
! colspan="6" | CPU_TIME
! colspan="6" | DF
|-
!
! A !! B/A !! C/A !! D/A !! E/A !! F/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A
! A !! B/A !! C/A !! D/A !! E/A !! F/A
|-
| dd_writing_largefile || 26.03 || 1.000 || 1.096 || 2.340 || 1.092 || 1.080 || 3.48 || 1.011 || 1.083 || 1.583 || 1.187 || 1.190 || 786436 || 1.000 || 1.001 || 1.001 || 1.001 || 1.001
|-
| dd_reading_largefile || 19 || 0.995 || 1.001 || 0.999 || 1.001 || 0.999 || 2.28 || 1.018 || 0.741 || 0.737 || 0.741 || 0.724 || 786436 || 1.000 || 1.001 || 1.001 || 1.001 || 1.001
|}

Test parameters: <tt>NPROC=1 DIR=/mnt/testfs SYNC=off PHASE_COPY=cp REP_COUNTER=1 GAMMA=0.2 PHASE_OVERWRITE=off FILE_SIZE=4000 BYTES=512000000 PHASE_APPEND=off PHASE_READ=find DEV=/dev/hdb3 DD_MBCOUNT=768 WRITE_BUFFER=131072 PHASE_DELETE=rm PHASE_MODIFY=off</tt>

Produced by the <a href="http://namesys.com/benchmarks/mongo_readme.html">Mongo</a> benchmark suite.
== mongo 2003-08-18 ==

mongo comparison against ext3.
; mem total : 255992
; machine : belka
; kernel : 2.6.0-test3 #37 SMP Mon Aug 18 18:12:14 MSD 2003
; date : Mon 18 Aug 2003 20:24:16
In this test 80% of files are chosen from the 0-8k size range, 16% from
the 0-80k size range, 0.8 x 4% from the 0-800k size range, etc. Most
files are small, most bytes are in large files.
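The size distribution above can be sketched in a few lines. This is an illustrative model only, not the actual mongo source; the function name and its defaults are made up for the example:

```python
import random

def mongo_file_size(base=8192, gamma=0.2, rng=random.random):
    """Sample one file size: a fraction (1 - gamma) of files is drawn
    uniformly from [0, base); of the remainder, (1 - gamma) from
    [0, 10 * base); and so on, one decade per step."""
    limit = base
    while rng() < gamma:        # with probability gamma, widen the range by 10x
        limit *= 10
    return int(rng() * limit)

random.seed(0)
sizes = [mongo_file_size() for _ in range(100_000)]
small = sum(s < 8192 for s in sizes) / len(sizes)
# small comes out close to 0.8: most files are small, while the rare
# large files carry most of the bytes.
```

With GAMMA=0.2 this gives 80% of files below 8k, 16% in the next decade, and so on, matching the description above.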
Legend:
* A: reiser4
* B: reiser4, extents only
* C: ext3 in data=writeback mode (metadata-only journalling)
* D: ext3 in data=journal mode
* E: ext3 in data=ordered mode
* F: ext3 with htree (hashed directories)

The tables present absolute values (elapsed time, CPU usage, and disk usage) for reiser4, and ratios against reiser4 for all other configurations. A ratio greater than 1.0 means reiser4 is better in that test; a ratio below 1.0 means reiser4 loses. (On the original page such numbers were colored red and green respectively.)
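The ratio columns are just the other configuration's measurement divided by the reiser4 baseline. A minimal sketch; the raw 41.47 s figure for C is back-computed from the table's C/A CREATE ratio and is only illustrative:

```python
# Hypothetical raw elapsed times in seconds; A (reiser4) is the baseline.
real_time_create = {"A": 29.16, "C": 41.47}  # 41.47 back-computed from C/A = 1.422

def ratios(row, baseline="A"):
    """Divide every measurement by the baseline, rounded as in the tables."""
    base = row[baseline]
    return {k: round(v / base, 3) for k, v in row.items() if k != baseline}

print(ratios(real_time_create))  # {'C': 1.422}
```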
 A.INFO_R4= FSTYPE=reiser4
 B.INFO_R4=ext MKFS=mkfs.reiser4 -q -o policy=extents FSTYPE=reiser4
 C.MOUNT_OPTIONS=data=writeback FSTYPE=ext3
 D.MOUNT_OPTIONS=data=journal FSTYPE=ext3
 E.MOUNT_OPTIONS=data=ordered FSTYPE=ext3
 F.MKFS=mkfs.ext3 -O dir_index MOUNT_OPTIONS=data=ordered FSTYPE=ext3

#0:

{| class="wikitable"
! phase !! colspan="6" | REAL_TIME !! colspan="6" | CPU_TIME !! colspan="6" | DF
|-
! !! A !! B/A !! C/A !! D/A !! E/A !! F/A !! A !! B/A !! C/A !! D/A !! E/A !! F/A !! A !! B/A !! C/A !! D/A !! E/A !! F/A
|-
| CREATE || 29.16 || 1.220 || 1.422 || 3.779 || 1.491 || 1.645 || 13.52 || 1.182 || 2.013 || 2.087 || 1.997 || 0.657 || 789364 || 1.208 || 1.180 || 1.180 || 1.180 || 1.181
|-
| COPY || 119.64 || 1.211 || 1.191 || 1.473 || 1.230 || 7.288 || 21.98 || 1.152 || 1.515 || 1.746 || 1.520 || 0.695 || 1578116 || 1.208 || 1.180 || 1.180 || 1.180 || 1.182
|-
| READ || 116.55 || 1.213 || 1.177 || 1.025 || 1.134 || 6.850 || 18.35 || 1.035 || 0.447 || 0.436 || 0.431 || 0.569 || 1578116 || 1.208 || 1.180 || 1.180 || 1.180 || 1.182
|-
| STATS || 21.65 || 1.050 || 0.779 || 0.811 || 0.782 || 1.358 || 7.56 || 1.001 || 0.599 || 0.612 || 0.611 || 0.638 || 1578116 || 1.208 || 1.180 || 1.180 || 1.180 || 1.182
|-
| DELETE || 112.37 || 0.434 || 0.179 || 0.198 || 0.177 || 0.281 || 30.62 || 0.851 || 0.205 || 0.205 || 0.203 || 0.230 || 4 || 1.000 || 0.000 || 0.000 || 0.000 || 0.000
|}
#1: DD_MBCOUNT=768

{| class="wikitable"
! phase !! colspan="6" | REAL_TIME !! colspan="6" | CPU_TIME !! colspan="6" | DF
|-
! !! A !! B/A !! C/A !! D/A !! E/A !! F/A !! A !! B/A !! C/A !! D/A !! E/A !! F/A !! A !! B/A !! C/A !! D/A !! E/A !! F/A
|-
| dd_writing_largefile || 26.11 || 1.011 || 1.090 || 2.388 || 1.076 || 1.083 || 3.25 || 0.945 || 1.092 || 1.640 || 1.255 || 1.231 || 786436 || 1.000 || 1.001 || 1.001 || 1.001 || 1.001
|-
| dd_reading_largefile || 19.09 || 1.005 || 0.999 || 0.996 || 1.004 || 1.011 || 2.09 || 1.019 || 0.847 || 0.856 || 0.833 || 0.842 || 786436 || 1.000 || 1.001 || 1.001 || 1.001 || 1.001
|}

 NPROC=1 DIR=/mnt/testfs SYNC=off PHASE_COPY=cp REP_COUNTER=1 GAMMA=0.2 PHASE_OVERWRITE=off FILE_SIZE=4000 BYTES=512000000 PHASE_APPEND=off PHASE_READ=find DEV=/dev/hdb3 DD_MBCOUNT=768 WRITE_BUFFER=131072 PHASE_DELETE=rm PHASE_MODIFY=off

Produced by the [http://namesys.com/benchmarks/mongo_readme.html Mongo] benchmark suite.
== mongo, 2003-08-12 ==

mongo comparison against ext3

; mem total : 513284
; machine : strelka
; kernel : 2.6.0-test2 #52 SMP Tue Aug 12 15:17:12 MSD 2003
; date : Tue Aug 12 15:38:47 2003

This is a comparison of the latest (2003-08-12) version of reiser4 with ext3. Reiser4 is an atomic filesystem, so the comparison with the data journalling mode of ext3 is the fairest, but since most users run ext3 in data ordering mode, we compare against that also.
In this test 80% of files are chosen from the 0-8k size range, 16% from
the 0-80k size range, 0.8 x 4% from the 0-800k size range, etc. Most
files are small, most bytes are in large files.
Legend:
* A: reiser4
* B: ext3 in data=writeback mode (metadata-only journalling)
* C: ext3 in data=journal mode
* D: ext3 in data=ordered mode
* E: ext3 with htree (hashed directories)
* F: ext3 with support for filetypes in readdir()

The tables present absolute values (elapsed time, CPU usage, and disk usage) for reiser4, and ratios against reiser4 for all other configurations. A ratio greater than 1.0 means reiser4 is better in that test; a ratio below 1.0 means reiser4 loses. (On the original page such numbers were colored red and green respectively.)
 A.INFO_R4= MKFS=/usr/local/sbin/mkfs.reiser4 -qf FSTYPE=reiser4
 B.MOUNT_OPTIONS=data=writeback FSTYPE=ext3
 C.MOUNT_OPTIONS=data=journal FSTYPE=ext3
 D.MOUNT_OPTIONS=data=ordered FSTYPE=ext3
 E.MKFS=/usr/local/sbin/mkfs.ext3 -O dir_index MOUNT_OPTIONS=data=ordered FSTYPE=ext3
 F.MKFS=/usr/local/sbin/mkfs.ext3 -O filetype MOUNT_OPTIONS=data=ordered FSTYPE=ext3

#0:

{| class="wikitable"
! phase !! colspan="6" | REAL_TIME !! colspan="6" | CPU_TIME !! colspan="6" | DF
|-
! !! A !! B/A !! C/A !! D/A !! E/A !! F/A !! A !! B/A !! C/A !! D/A !! E/A !! F/A !! A !! B/A !! C/A !! D/A !! E/A !! F/A
|-
| CREATE || 14.06 || 3.317 || 3.248 || 3.050 || 3.016 || 3.077 || 5.3 || 1.558 || 1.692 || 1.602 || 0.823 || 1.592 || 458224 || 1.107 || 1.107 || 1.107 || 1.107 || 1.107
|-
| COPY || 43.62 || 1.982 || 1.733 || 2.033 || 6.685 || 1.904 || 9.19 || 1.163 || 1.286 || 1.230 || 0.706 || 1.200 || 916172 || 1.107 || 1.107 || 1.107 || 1.108 || 1.107
|-
| READ || 39.86 || 1.091 || 1.091 || 1.140 || 6.003 || 1.119 || 8.22 || 0.467 || 0.454 || 0.464 || 0.529 || 0.443 || 916172 || 1.107 || 1.107 || 1.107 || 1.108 || 1.107
|-
| STATS || 1.54 || 1.987 || 1.896 || 1.942 || 2.649 || 1.883 || 0.26 || 2.115 || 2.115 || 2.115 || 1.385 || 1.962 || 916172 || 1.107 || 1.107 || 1.107 || 1.108 || 1.107
|-
| DELETE || 37.85 || 0.833 || 0.825 || 0.867 || 1.133 || 0.760 || 11.11 || 0.223 || 0.223 || 0.220 || 0.254 || 0.222 || 4 || 0.000 || 0.000 || 0.000 || 0.000 || 0.000
|}
#1: DD_MBCOUNT=500

{| class="wikitable"
! phase !! colspan="6" | REAL_TIME !! colspan="6" | CPU_TIME !! colspan="6" | DF
|-
! !! A !! B/A !! C/A !! D/A !! E/A !! F/A !! A !! B/A !! C/A !! D/A !! E/A !! F/A !! A !! B/A !! C/A !! D/A !! E/A !! F/A
|-
| dd_writing_largefile || 42.15 || 1.062 || 2.534 || 1.066 || 1.071 || 1.073 || 7.86 || 1.094 || 1.500 || 1.206 || 1.211 || 1.198 || 512004 || 1.001 || 1.001 || 1.001 || 1.001 || 1.001
|-
| dd_reading_largefile || 36.5 || 1.005 || 1.008 || 1.005 || 1.007 || 1.007 || 4.7 || 0.745 || 0.732 || 0.743 || 0.736 || 0.734 || 512004 || 1.001 || 1.001 || 1.001 || 1.001 || 1.001
|}

 NPROC=1 DIR=/data1 SYNC=off PHASE_COPY=cp REP_COUNTER=3 GAMMA=0.2 PHASE_OVERWRITE=off PHASE_STATS=find FILE_SIZE=8192 BYTES=134217728 PHASE_APPEND=off PHASE_READ=find DEV=/dev/hdb1 DD_MBCOUNT=500 WRITE_BUFFER=131072 PHASE_DELETE=rm PHASE_MODIFY=off

Produced by the [http://namesys.com/benchmarks/mongo_readme.html Mongo] benchmark suite.
== mongo, 2003-07-10 ==

mongo comparison against ext3, 2003-07-10, obtained before LinuxTag 2003

Legend:
* A: reiser4
* B: ext3 with data journalling
* C: ext3
#0:

{| class="wikitable"
! phase !! colspan="3" | REAL_TIME !! colspan="3" | CPU_TIME !! colspan="3" | DF
|-
! !! A !! B/A !! C/A !! A !! B/A !! C/A !! A !! B/A !! C/A
|-
| CREATE || 14.19 || 3.221 || 3.592 || 5.66 || 1.610 || 1.475 || 458692 || 1.106 || 1.106
|-
| COPY || 49.01 || 1.586 || 1.783 || 9.08 || 1.308 || 1.176 || 916668 || 1.106 || 1.106
|-
| READ || 43.39 || 0.970 || 1.017 || 8.1 || 0.452 || 0.453 || 916668 || 1.106 || 1.106
|-
| STATS || 1.93 || 1.534 || 1.549 || 0.27 || 2.000 || 1.963 || 916668 || 1.106 || 1.106
|-
| DELETE || 40.13 || 0.797 || 0.837 || 11.26 || 0.217 || 0.210 || 4 || 0.000 || 0.000
|}
#1: DD_MBCOUNT=500

{| class="wikitable"
! phase !! colspan="3" | REAL_TIME !! colspan="3" | CPU_TIME !! colspan="3" | DF
|-
! !! A !! B/A !! C/A !! A !! B/A !! C/A !! A !! B/A !! C/A
|-
| dd_writing_largefile || 42.27 || 2.527 || 1.057 || 7.78 || 1.497 || 1.189 || 512004 || 1.001 || 1.001
|-
| dd_reading_largefile || 36.57 || 1.005 || 1.005 || 4.8 || 0.760 || 0.777 || 512004 || 1.001 || 1.001
|}

 NPROC=1 DIR=/data1 SYNC=off PHASE_COPY=cp REP_COUNTER=3 GAMMA=0.2 PHASE_OVERWRITE=off PHASE_STATS=find FILE_SIZE=8192 BYTES=134217728 PHASE_APPEND=off PHASE_READ=find DEV=/dev/hdb1 DD_MBCOUNT=500 WRITE_BUFFER=131072 PHASE_DELETE=rm PHASE_MODIFY=off

Produced by the [http://namesys.com/benchmarks/mongo_readme.html Mongo] benchmark suite.
The benchmarks below are older ones from just before LinuxTag. In these, note that gamma is the fraction of files that are larger than the base size by 10x. It is set either to 0.2 (as in the benchmark above), in an attempt to mimic observed real usage patterns, or to 0, in an attempt to measure a file size range's performance qualities in isolation. Note that V3 performs poorly in the 0-8k size range, and V4 performs well. This is the result of deep design changes you can read about at http://www.namesys.com/v4/v4.html.

; mem total : 513748
; machine : strelka
; kernel : 2.5.74 #213 SMP Thu Jul 10 22:53:23 MSD 2003
; date : Thu Jul 10 22:48:56 2003
; .config : [http://www.namesys.com/intbenchmarks/mongo/03.07.11.nikita/.config here]
; NPROC : 1
; DIR : /data1
; SYNC : off
; REP_COUNTER : 3
; All phases are in readdir order
; BYTES : 100M
; DEV : /dev/hdb1
; WRITE_BUFFER : '''256k'''

Everywhere '''A''' is reiserfs (V3) and '''B''' is reiser4; a B/A ratio below 1.0 means reiser4 is better. (On the original page such numbers were colored green.)
'''median file size 8k''' (GAMMA=0.2, FILE_SIZE=8192)

{| class="wikitable"
! phase !! colspan="2" | REAL_TIME !! colspan="2" | CPU_TIME !! colspan="2" | DF
|-
! !! A !! B/A !! A !! B/A !! A !! B/A
|-
| CREATE || 41.26 || 0.246 || 3.93 || 0.908 || 321632 || 0.961
|-
| COPY || 154.09 || 0.504 || 5.17 || 1.217 || 642624 || 0.962
|-
| APPEND || 282.09 || 0.573 || 6.6 || 1.392 || 944428 || 0.980
|-
| MODIFY || 284.52 || 0.986 || 3.29 || 1.489 || 943592 || 0.981
|-
| OVERWRITE || 298.19 || 1.263 || 5.33 || 1.608 || 943548 || 0.968
|-
| READ || 245.22 || 0.940 || 3.85 || 1.753 || 943548 || 0.968
|-
| STATS || 20.58 || 0.099 || 0.48 || 1.292 || 943548 || 0.968
|}

[http://www.namesys.com/intbenchmarks/mongo/03.07.11.nikita/8k.heavy.v3.profile A profile] [http://www.namesys.com/intbenchmarks/mongo/03.07.11.nikita/8k.heavy.v4.profile B profile]
'''median file size 4k''' (GAMMA=0.2, FILE_SIZE=4096)

{| class="wikitable"
! phase !! colspan="2" | REAL_TIME !! colspan="2" | CPU_TIME !! colspan="2" | DF
|-
! !! A !! B/A !! A !! B/A !! A !! B/A
|-
| CREATE || 117.32 || 0.176 || 15.57 || 0.758 || 667652 || 1.000
|-
| COPY || 524.67 || 0.365 || 19.16 || 1.059 || 1332856 || 1.002
|-
| APPEND || 1068.43 || 0.363 || 31.27 || 0.937 || 2073420 || 0.950
|-
| MODIFY || 1081.23 || 0.670 || 18.61 || 1.048 || 2066536 || 0.953
|-
| OVERWRITE || 1050.55 || 0.885 || 22.81 || 1.017 || 2066424 || 0.948
|-
| READ || 974.43 || 0.644 || 12.28 || 1.635 || 2066424 || 0.948
|-
| STATS || 83.44 || 0.075 || 1.26 || 0.802 || 2066424 || 0.948
|}

[http://www.namesys.com/intbenchmarks/mongo/03.07.11.nikita/4k.heavy.v3.profile A profile] [http://www.namesys.com/intbenchmarks/mongo/03.07.11.nikita/4k.heavy.v4.profile B profile]
'''maximal file size 4k''' (GAMMA=0.0, FILE_SIZE=4096)

{| class="wikitable"
! phase !! colspan="2" | REAL_TIME !! colspan="2" | CPU_TIME !! colspan="2" | DF
|-
! !! A !! B/A !! A !! B/A !! A !! B/A
|-
| CREATE || 77.34 || 0.309 || 21.86 || 0.938 || 452252 || 0.923
|-
| COPY || 412.28 || 0.300 || 35.11 || 1.013 || 893408 || 0.934
|-
| APPEND || 1198.9 || 0.164 || 67.06 || 0.694 || 1631992 || 0.749
|-
| MODIFY || 1305.14 || 0.351 || 43.77 || 0.762 || 1613124 || 0.758
|-
| OVERWRITE || 1390.94 || 0.239 || 44.22 || 0.777 || 1610948 || 0.759
|-
| READ || 1093.6 || 0.256 || 19.46 || 1.743 || 1610948 || 0.759
|-
| STATS || 115.76 || 0.200 || 2.6 || 0.735 || 1610948 || 0.759
|}

[http://www.namesys.com/intbenchmarks/mongo/03.07.11.nikita/100.heavy.v3.profile A profile] [http://www.namesys.com/intbenchmarks/mongo/03.07.11.nikita/100.heavy.v4.profile B profile]
'''median file size 8k''' (GAMMA=0.2, FILE_SIZE=8192)

{| class="wikitable"
! phase !! colspan="2" | REAL_TIME !! colspan="2" | CPU_TIME !! colspan="2" | DF
|-
! !! A !! B/A !! A !! B/A !! A !! B/A
|-
| CREATE || 40.54 || 0.248 || 4.01 || 0.895 || 321632 || 0.961
|-
| COPY || 152.82 || 0.506 || 5.2 || 1.215 || 642624 || 0.962
|-
| READ || 141.8 || 0.563 || 3.03 || 1.762 || 642624 || 0.962
|-
| STATS || 14.91 || 0.084 || 0.59 || 1.051 || 642624 || 0.962
|}
'''median file size 4k''' (GAMMA=0.2, FILE_SIZE=4096)

{| class="wikitable"
! phase !! colspan="2" | REAL_TIME !! colspan="2" | CPU_TIME !! colspan="2" | DF
|-
! !! A !! B/A !! A !! B/A !! A !! B/A
|-
| CREATE || 115.6 || 0.174 || 14.84 || 0.772 || 667652 || 1.000
|-
| COPY || 528.83 || 0.361 || 18.91 || 1.058 || 1332856 || 1.002
|-
| READ || 532.06 || 0.372 || 10.87 || 1.589 || 1332856 || 1.002
|-
| STATS || 51.99 || 0.069 || 1.67 || 0.581 || 1332856 || 1.002
|}
'''maximal file size 4k''' (GAMMA=0.0, FILE_SIZE=4096)

{| class="wikitable"
! phase !! colspan="2" | REAL_TIME !! colspan="2" | CPU_TIME !! colspan="2" | DF
|-
! !! A !! B/A !! A !! B/A !! A !! B/A
|-
| CREATE || 77.5 || 0.309 || 22.24 || 0.910 || 452252 || 0.923
|-
| COPY || 415.84 || 0.297 || 34.9 || 1.009 || 893408 || 0.934
|-
| READ || 469.97 || 0.273 || 20.14 || 1.454 || 893408 || 0.934
|-
| STATS || 65.49 || 0.162 || 3.09 || 0.599 || 893408 || 0.934
|}
=== Mongo benchmark results: create, copy, read, stats, delete phases ===

; reiser4 : ChangeSet@1.1095, 2003-07-10 15:22:17+04:00, god@laputa.namesys.com, oops. ChangeSet@1.1094, 2003-07-10 15:14:06+04:00, god@laputa.namesys.com, repairing compilation damage.
; mem total : 256624
; machine : belka
; kernel : 2.5.74 #28 Thu Jul 10 18:36:03 MSD 2003
; date : Thu Jul 10 19:21:06 2003
; .config : [http://namesys.com/intbenchmarks/mongo/03.07.11.light/dot.config .config]

 A.INFO_R4=test FSTYPE=reiser4
 B.INFO_R4=test FSTYPE=reiser4 MKFS=mkfs.reiser4 -q -e extent40
 C.FSTYPE=reiserfs
 D.FSTYPE=reiserfs MOUNT_OPTIONS=notail
 E.FSTYPE=ext3
 F.FSTYPE=ext3 MOUNT_OPTIONS=data=journal
#0: FILE_SIZE=4000

{| class="wikitable"
! phase !! colspan="6" | REAL_TIME !! colspan="6" | CPU_TIME !! colspan="6" | DF
|-
! !! A !! B/A !! C/A !! D/A !! E/A !! F/A !! A !! B/A !! C/A !! D/A !! E/A !! F/A !! A !! B/A !! C/A !! D/A !! E/A !! F/A
|-
| CREATE || 20.47 || 1.404 || 3.037 || 2.024 || 2.513 || 3.324 || 12.72 || 1.143 || 1.270 || 0.873 || 0.615 || 0.606 || 416332 || 1.934 || 1.088 || 1.909 || 1.858 || 1.858
|-
| COPY || 65.25 || 1.484 || 2.953 || 2.020 || 1.986 || 2.267 || 21.98 || 1.032 || 1.098 || 0.732 || 0.529 || 0.699 || 832640 || 1.934 || 1.088 || 1.910 || 1.858 || 1.858
|-
| READ || 75.56 || 1.349 || 2.868 || 2.218 || 1.902 || 1.925 || 17.36 || 1.213 || 0.745 || 0.857 || 0.695 || 0.681 || 832640 || 1.934 || 1.088 || 1.910 || 1.858 || 1.858
|-
| STATS || 132.18 || 0.996 || 0.963 || 0.994 || 0.967 || 0.950 || 2.63 || 0.977 || 0.970 || 0.989 || 0.981 || 1.008 || 832640 || 1.934 || 1.088 || 1.910 || 1.858 || 1.858
|-
| DELETE || 85.32 || 0.627 || 1.239 || 0.442 || 0.403 || 0.449 || 33.57 || 0.856 || 0.780 || 0.623 || 0.157 || 0.154 || 4 || 1.000 || 0.000 || 0.000 || 0.000 || 0.000
|}
#1: FILE_SIZE=8000

{| class="wikitable"
! phase !! colspan="6" | REAL_TIME !! colspan="6" | CPU_TIME !! colspan="6" | DF
|-
! !! A !! B/A !! C/A !! D/A !! E/A !! F/A !! A !! B/A !! C/A !! D/A !! E/A !! F/A !! A !! B/A !! C/A !! D/A !! E/A !! F/A
|-
| CREATE || 15.07 || 1.009 || 8.875 || 1.709 || 2.237 || 3.321 || 8.62 || 0.945 || 1.932 || 0.729 || 0.517 || 0.522 || 399788 || 1.000 || 1.243 || 1.461 || 1.434 || 1.434
|-
| COPY || 52.24 || 1.007 || 4.998 || 1.492 || 1.562 || 1.879 || 13.42 || 1.026 || 1.264 || 0.700 || 0.487 || 0.635 || 799488 || 1.000 || 1.243 || 1.461 || 1.434 || 1.434
|-
| READ || 60.91 || 1.013 || 3.738 || 1.606 || 1.333 || 1.340 || 11.66 || 1.018 || 0.526 || 0.749 || 0.547 || 0.547 || 799488 || 1.000 || 1.243 || 1.461 || 1.434 || 1.434
|-
| STATS || 126.53 || 0.951 || 0.958 || 0.991 || 1.004 || 0.966 || 2.57 || 1.023 || 1.027 || 0.988 || 1.016 || 1.012 || 799488 || 1.000 || 1.243 || 1.461 || 1.434 || 1.434
|-
| DELETE || 73.21 || 1.116 || 0.746 || 0.242 || 0.301 || 0.396 || 19.93 || 1.013 || 0.584 || 0.530 || 0.126 || 0.123 || 4 || 1.000 || 0.000 || 0.000 || 0.000 || 0.000
|}

 PHASE_APPEND=off NPROC=1 DIR=/mnt/testfs SYNC=off REP_COUNTER=3 GAMMA=0.0 PHASE_OVERWRITE=off DEV=/dev/hdb3 WRITE_BUFFER=4096 BYTES=128000000 PHASE_MODIFY=off

Produced by the [http://namesys.com/benchmarks/mongo_readme.html Mongo] benchmark suite.
=== dd of a large file phase ===

; reiser4 : ChangeSet@1.1095, 2003-07-10 15:22:17+04:00, god@laputa.namesys.com, oops. ChangeSet@1.1094, 2003-07-10 15:14:06+04:00, god@laputa.namesys.com, repairing compilation damage.
; mem total : 256624
; machine : belka
; kernel : 2.5.74 #28 Thu Jul 10 18:36:03 MSD 2003
; date : Thu Jul 10 21:36:22 2003
; .config : [http://namesys.com/intbenchmarks/mongo/03.07.11.light/dot.config .config]

 A.INFO_R4=test FSTYPE=reiser4
 B.INFO_R4=test FSTYPE=reiser4 MKFS=mkfs.reiser4 -q -e extent40
 C.FSTYPE=reiserfs
 D.FSTYPE=reiserfs MOUNT_OPTIONS=notail
 E.FSTYPE=ext3
 F.FSTYPE=ext3 MOUNT_OPTIONS=data=journal
#0: DD_MBCOUNT=768

{| class="wikitable"
! phase !! colspan="6" | REAL_TIME !! colspan="6" | CPU_TIME !! colspan="6" | DF
|-
! !! A !! B/A !! C/A !! D/A !! E/A !! F/A !! A !! B/A !! C/A !! D/A !! E/A !! F/A !! A !! B/A !! C/A !! D/A !! E/A !! F/A
|-
| dd_writing_largefile || 76.29 || 0.997 || 1.137 || 1.149 || 1.062 || 2.217 || 7.47 || 1.027 || 0.545 || 0.549 || 0.803 || 0.835 || 786432 || 1.000 || 1.001 || 1.001 || 1.001 || 1.001
|}

 NPROC=1 DIR=/mnt/testfs SYNC=off REP_COUNTER=3 GAMMA=0.0 DD_MBCOUNT=768 DEV=/dev/hdb3 WRITE_BUFFER=4096 FILE_SIZE=8000 BYTES=128000000

Produced by the [http://namesys.com/benchmarks/mongo_readme.html Mongo] benchmark suite.
== bonnie++, 2003-09-30 ==

Bonnie++ comparison, ext3 vs reiser4 (2003-09-30)

This is bonnie++ output for reiser4 and ext3. It was produced in an attempt to analyze the [http://fsbench.netnation.com/ results] obtained by Mike Benoit.
Hardware specs (dual CPU with hyper-threading, 128M of memory):

 processor  : 3
 vendor_id  : GenuineIntel
 cpu family : 15
 model      : 2
 model name : Intel(R) Xeon(TM) CPU 2.40GHz
 stepping   : 7
 cpu MHz    : 2379.253
 cache size : 512 KB
 bogomips   : 4751.36
HDD:

 # hdparm /dev/hdb1
 /dev/hdb1:
  multcount    = 16 (on)
  IO_support   = 0 (default 16-bit)
  unmaskirq    = 0 (off)
  using_dma    = 1 (on)
  keepsettings = 0 (off)
  readonly     = 0 (off)
  readahead    = 256 (on)
  geometry     = 65535/16/63, sectors = 117226242, start = 63
 # hdparm -t /dev/hdb1
 /dev/hdb1:
  Timing buffered disk reads: 64 MB in 1.60 seconds = 39.91 MB/sec
 # hdparm -i /dev/hdb
 /dev/hdb:
  Model=ST360021A, FwRev=3.19, SerialNo=3HR173RB
  Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs RotSpdTol>.5% }
  RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4
  BuffType=unknown, BuffSize=2048kB, MaxMultSect=16, MultSect=16
  CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=117231408
  IORDY=on/off, tPIO={min:240,w/IORDY:120}, tDMA={min:120,rec:120}
  PIO modes: pio0 pio1 pio2 pio3 pio4
  DMA modes: mdma0 mdma1 mdma2
  UDMA modes: udma0 udma1 udma2 udma3 udma4 *udma5
  AdvancedPM=no WriteCache=enabled
  Drive conforms to: device does not report version: 1 2 3 4 5
 ./bonnie++ -s 1g -n 10 -x 5

 Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
 Machine       Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
 v4.128M         1G 19903  89 37911  20 15392  11 13624  58 41807  12 131.0   0
 v4.128M         1G 19965  89 37600  20 15845  11 13730  58 41751  12 130.0   0
 v4.128M         1G 19937  89 37746  20 15404  11 13624  58 41793  12 132.1   0
 v4.128M         1G 19998  89 37184  19 15007  10 13393  56 41611  11 130.2   0
 v4.128M         1G 19771  89 37679  20 15206  11 13466  57 41808  11 130.2   1
 ext3.128M       1G 21236  99 37258  22 11357   4 13460  56 41748   6 120.0   0
 ext3.128M       1G 20821  99 36838  23 12176   5 13154  55 40671   6 120.7   0
 ext3.128M       1G 20755  99 37032  24 12069   4 12908  54 40851   5 120.2   0
 ext3.128M       1G 20651  99 37094  24 11817   5 13038  54 40842   6 121.3   0
 ext3.128M       1G 20928  99 37300  23 12287   4 13067  55 41404   6 120.1   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
 files:max:min      /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 v4.128M        10  18503 100 +++++ +++  9488  99 10158  99 +++++ +++ 11635  99
 v4.128M        10  19760  99 +++++ +++  9696  99 10441 100 +++++ +++ 11831  99
 v4.128M        10  19583 100 +++++ +++  9672 100 10597  99 +++++ +++ 11846 100
 v4.128M        10  19720 100 +++++ +++  9577  99 10126 100 +++++ +++ 11924 100
 v4.128M        10  19682 100 +++++ +++  9683 100 10461 100 +++++ +++ 11834 100
 ext3.128M      10   3279  97 +++++ +++ +++++ +++  3406 100 +++++ +++  8951  95
 ext3.128M      10   3303  98 +++++ +++ +++++ +++  3423  99 +++++ +++  8558  96
 ext3.128M      10   3317  98 +++++ +++ +++++ +++  3402 100 +++++ +++  8721  93
 ext3.128M      10   3325  98 +++++ +++ +++++ +++  3390 100 +++++ +++  9242 100
 ext3.128M      10   3315  97 +++++ +++ +++++ +++  3439 100 +++++ +++  8896  96

 ./bonnie++ -f -d . -s 3072 -n 10:100000:10:10 -x 1

 Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
 Machine       Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
 v4              3G           37579  19 15657  11           41531  11 105.8   0
 v4              3G           37993  20 15478  11           41632  11 105.4   0
 ext3            3G           35221  22 10987   4           41105   6  90.9   0
 ext3            3G           35099  22 11517   4           41416   6  90.7   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
 files:max:min      /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 v4  10:100000:10/10  570  39   746  17  1435  23   513  40   104   2   951  15
 v4  10:100000:10/10  566  40   765  17  1385  23   509  41   104   2   904  14
 ext3 10:100000:10/10 221   8   364   4   853   4   204   7    99   1   306   2
 ext3 10:100000:10/10 221   7   368   4   839   5   206   7    91   1   309   2
== Benchmarks by Grant Miner ==

Benchmarks performed by [mailto:mine0057@mrs.umn.edu Grant Miner]. He used the [http://epoxy.mrs.umn.edu/~minerg/fstests/bench.scm bench.scm] script (requires [http://www.scsh.net/ scsh]). Results copied from http://epoxy.mrs.umn.edu/~minerg/fstests/results.html.

Kernel 2.6.0-test3; mkfs ran with default options.

Each test has three columns: the canonical name of the test with the time the test took in seconds, then system CPU time, then user CPU time. The last columns are "total" (total time), "sys" (total system time), "usr" (total user time), and "total cpu" (the sum of total sys and total usr). '''All values are in seconds, thus lower is better.'''
Filesystem Performance

{| class="wikitable"
! fs !! bigdir !! sys !! usr !! cp !! sys !! usr !! cp2 !! sys !! usr !! cp3 !! sys !! usr !! cp4 !! sys !! usr !! cp5 !! sys !! usr !! rm !! sys !! usr !! rm2 !! sys !! usr !! rm3 !! sys !! usr !! sync !! sys !! usr !! total !! sys !! usr !! total cpu
|-
| reiserfs || 40.03 || 12.22 || 0.76 || 77.75 || 10.72 || 0.45 || 62.9 || 10.82 || 0.43 || 60.26 || 11.03 || 0.43 || 61.33 || 11.13 || 0.43 || 66.08 || 11.31 || 0.45 || 10.86 || 3.74 || 0.07 || 4.62 || 3.36 || 0.09 || 8.22 || 3.5 || 0.09 || 1.78 || 0.03 || 0.00 || 393.83 || 77.86 || 3.2 || 81.06
|-
| jfs || 47.2 || 8.9 || 0.77 || 109.75 || 5.5 || 0.3 || 110.71 || 5.49 || 0.35 || 114.69 || 5.6 || 0.29 || 117.97 || 5.65 || 0.35 || 125.48 || 5.82 || 0.29 || 38.68 || 0.74 || 0.05 || 16.25 || 1.08 || 0.07 || 37.46 || 0.74 || 0.04 || 0.07 || 0.00 || 0.00 || 718.26 || 39.52 || 2.51 || 42.03
|-
| xfs || 44.77 || 13.3 || 0.94 || 105.36 || 13.33 || 0.53 || 110.27 || 14.36 || 0.5 || 110.17 || 14.37 || 0.51 || 111.03 || 14.43 || 0.53 || 118.84 || 14.87 || 0.55 || 31.85 || 6.44 || 0.15 || 15.2 || 5.45 || 0.14 || 34.32 || 5.87 || 0.14 || 0.03 || 0.00 || 0.00 || 681.84 || 102.42 || 3.99 || 106.41
|-
| reiser4 || 33.51 || 10.85 || 0.69 || 33.9 || 10.65 || 0.65 || 32.9 || 10.79 || 0.67 || 34.0 || 10.87 || 0.65 || 33.62 || 10.87 || 0.69 || 31.31 || 10.83 || 0.76 || 17.45 || 4.07 || 0.3 || 11.54 || 4.49 || 0.3 || 13.08 || 4.27 || 0.27 || 0.52 || 0.00 || 0.00 || 241.83 || 77.69 || 4.98 || 82.67
|-
| ext3 || 38.79 || 9.35 || 0.7 || 91.57 || 7.21 || 0.36 || 62.6 || 7.44 || 0.36 || 62.74 || 7.5 || 0.37 || 60.62 || 7.52 || 0.34 || 69.82 || 7.59 || 0.39 || 26.21 || 1.67 || 0.05 || 8.73 || 1.66 || 0.04 || 13.79 || 1.63 || 0.06 || 4.76 || 0.01 || 0.00 || 439.63 || 51.58 || 2.67 || 54.25
|-
| ext2 || 32.78 || 7.61 || 0.64 || 37.28 || 5.24 || 0.34 || 43.55 || 5.34 || 0.35 || 45.41 || 5.34 || 0.37 || 47.72 || 5.48 || 0.34 || 50.5 || 5.41 || 0.32 || 16.28 || 0.67 || 0.06 || 7.54 || 0.66 || 0.05 || 15.31 || 0.71 || 0.05 || 0.24 || 0.00 || 0.00 || 296.61 || 36.46 || 2.52 || 38.98
|}
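The "total" columns can be cross-checked by summing the per-test columns. A small sketch, using the reiserfs figures transcribed from the table above:

```python
# (real, sys, usr) for bigdir, cp, cp2, cp3, cp4, cp5, rm, rm2, rm3, sync,
# taken from the reiserfs row of the table.
tests = [
    (40.03, 12.22, 0.76), (77.75, 10.72, 0.45), (62.9, 10.82, 0.43),
    (60.26, 11.03, 0.43), (61.33, 11.13, 0.43), (66.08, 11.31, 0.45),
    (10.86, 3.74, 0.07), (4.62, 3.36, 0.09), (8.22, 3.5, 0.09),
    (1.78, 0.03, 0.0),
]
total = round(sum(t[0] for t in tests), 2)   # total elapsed time: 393.83
sys_t = round(sum(t[1] for t in tests), 2)   # total system CPU time: 77.86
usr_t = round(sum(t[2] for t in tests), 2)   # total user CPU time: 3.2
total_cpu = round(sys_t + usr_t, 2)          # total cpu = sys + usr: 81.06
```

The recomputed sums match the reiserfs "total", "sys", "usr", and "total cpu" cells exactly, confirming the column convention described above.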
----
[mailto:reiser@namesys.com Hans Reiser]

Last modified: Thu Nov 20 17:51:10 MSK 2003