Old 12-24-2009, 11:59 AM
Teran McKinney
 
Default benchmark results

Which I/O scheduler are you using? I'm pretty sure ReiserFS is a
little less deadlock-prone with CFQ or another scheduler than with
deadline, but deadline usually gives the best results for me
(especially for JFS).

Thanks,
Teran

On Thu, Dec 24, 2009 at 10:31, Christian Kujau <lists@nerdbynature.de> wrote:
> I've had the chance to use a testsystem here and couldn't resist running a
> few benchmark programs on them: bonnie++, tiobench, dbench and a few
> generic ones (cp/rm/tar/etc...) on ext{234}, btrfs, jfs, ufs, xfs, zfs.
>
> All with standard mkfs/mount options and +noatime for all of them.
>
> Here are the results, no graphs - sorry:
> * http://nerdbynature.de/benchmarks/v40z/2009-12-22/
>
> Reiserfs is locking up during dbench, so I removed it from the
> config, here are some earlier results:
>
> * http://nerdbynature.de/benchmarks/v40z/2009-12-21/bonnie.html
>
> Bonnie++ couldn't complete on nilfs2; only the generic tests
> and tiobench were run. As nilfs2, ufs and zfs don't support xattrs, dbench
> could not be run on those filesystems.
>
> Short summary, AFAICT:
>   - btrfs, ext4 are the overall winners
>   - xfs too, but creating/deleting many files was *very* slow
>   - if you only need speed, but no cool features or journaling, ext2
>     is still a good choice
>
> Thanks,
> Christian.
> --
> BOFH excuse #84:
>
> Someone is standing on the ethernet cable, causing a kink in the cable
> --
> To unsubscribe from this list: send the line "unsubscribe reiserfs-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>

_______________________________________________
Ext3-users mailing list
Ext3-users@redhat.com
https://www.redhat.com/mailman/listinfo/ext3-users
 
Old 12-24-2009, 12:05 PM
 
Default benchmark results

> I've had the chance to use a testsystem here and couldn't
> resist

Unfortunately there seems to be an overproduction of rather
meaningless file system "benchmarks"...

> running a few benchmark programs on them: bonnie++, tiobench,
> dbench and a few generic ones (cp/rm/tar/etc...) on ext{234},
> btrfs, jfs, ufs, xfs, zfs. All with standard mkfs/mount options
> and +noatime for all of them.

> Here are the results, no graphs - sorry: [ ... ]

After having a glance, I suspect that your tests could be
enormously improved, and doing so would reduce the pointlessness of
the results.

A couple of hints:

* In the "generic" test the 'tar' test bandwidth is exactly the
same ("276.68 MB/s") for nearly all filesystems.

* There are read transfer rates higher than the one reported by
'hdparm' which is "66.23 MB/sec" (comically enough *all* the
read transfer rates your "benchmarks" report are higher).

BTW the use of Bonnie++ is also usually a symptom of a poor
understanding of file system benchmarking.

On the plus side, test setup context is provided in the "env"
directory, which is rare enough to be commendable.

> Short summary, AFAICT:
> - btrfs, ext4 are the overall winners
> - xfs too, but creating/deleting many files was *very* slow

Maybe, and these conclusions are sort of plausible (though I prefer
JFS and XFS, for different reasons); however, they are not supported
by your results, which seem to me to lack much meaning: what is being
measured is far from clear, and in particular it does not seem to be
file system performance, or at least not an aspect of filesystem
performance that relates to common usage.

I think that it is rather better to run a few simple operations
(like the "generic" test) properly (unlike the "generic" test), to
give a feel for how well the basic operations of the file system
design are implemented.

Profiling file system performance with a meaningful full-scale
benchmark is a rather difficult task, requiring great intellectual
fortitude and lots of time.

> - if you need only fast but no cool features or
> journaling, ext2 is still a good choice

That, however, is a generally valid conclusion, but with a very,
very important qualification: for freshly loaded filesystems.
Also with several other important qualifications, but "freshly
loaded" is a pet peeve of mine :-).

 
Old 12-24-2009, 10:46 PM
Evgeniy Polyakov
 
Default benchmark results

Hi Ted.

On Thu, Dec 24, 2009 at 04:27:56PM -0500, tytso@mit.edu wrote:
> > Unfortunately there seems to be an overproduction of rather
> > meaningless file system "benchmarks"...
>
> One of the problems is that very few people are interested in writing
> or maintaining file system benchmarks, except for file system
> developers --- but many of them are more interested in developing (and
> unfortunately, in some cases, promoting) their file systems than they
> are in doing a good job maintaining a good set of benchmarks. Sad but
> true...

Hmmmm... I suppose there should be a link to such a set here?
No link? Then I suppose the benchmark results are pretty much in sync
with what they are supposed to show.

> > * In the "generic" test the 'tar' test bandwidth is exactly the
> > same ("276.68 MB/s") for nearly all filesystems.
> >
> > * There are read transfer rates higher than the one reported by
> > 'hdparm' which is "66.23 MB/sec" (comically enough *all* the
> > read transfer rates your "benchmarks" report are higher).
>
> If you don't do a "sync" after the tar, then in most cases you will be
> measuring the memory bandwidth, because data won't have been written
> to disk. Worse yet, it tends to skew the results of the what happens
> afterwards (*especially* if you aren't running the steps of the
> benchmark in a script).

It depends on the size of the untarred object; for a Linux kernel
tarball and the common several gigs of RAM it is perfectly valid not
to run a sync after the tar, since writeback will take care of it.
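To illustrate the point being argued here, a minimal sketch (not from the original thread; paths and sizes are toy placeholders): timing only the untar mostly measures the page cache, and the writeback cost only becomes visible if the trailing sync is timed as well.

```shell
# Sketch: time an untar with and without the trailing sync.
# Without the sync, the elapsed time largely reflects memory bandwidth.
src=$(mktemp -d); dst=$(mktemp -d)
dd if=/dev/zero of="$src/blob" bs=1M count=8 2>/dev/null
tar -C "$src" -cf "$src/blob.tar" blob

t0=$(date +%s%N)
tar -C "$dst" -xf "$src/blob.tar"    # likely lands in the page cache only
t1=$(date +%s%N)
sync                                 # now the data is actually written out
t2=$(date +%s%N)
echo "untar: $(( (t1 - t0) / 1000000 )) ms, sync: $(( (t2 - t1) / 1000000 )) ms"
rm -rf "$src" "$dst"
```

On a machine with plenty of free RAM the second number can dwarf the first, which is exactly why timing the tar alone says little about the disk.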

> > BTW the use of Bonnie++ is also usually a symptom of a poor
> > misunderstanding of file system benchmarking.
>
> Dbench is also a really nasty benchmark. If it's tuned correctly, you
> are measuring memory bandwidth and the hard drive light will never go
> on. :-) The main reason why it was interesting was that it and tbench
> were used to model a really bad industry benchmark, netbench, which at
> one point a number of years ago I/T managers used to decide which CIFS
> server they would buy[1]. So it was useful for Samba developers who were
> trying to do competitive benchmarks, but it's not a very accurate
> benchmark for measuring real-life file system workloads.
>
> [1] http://samba.org/ftp/tridge/dbench/README

I was not able to resist writing a small note: no matter what,
whatever benchmark is run, it _does_ show system behaviour under one
condition or another. And when the system behaves rather badly, it is
quite a common comment that the benchmark was useless. But it did show
that the system has a problem, even if a rarely triggered one.

Not an ext4 nitpick of course.

--
Evgeniy Polyakov

 
Old 12-25-2009, 03:11 PM
 
Default benchmark results

On Fri, Dec 25, 2009 at 02:46:31AM +0300, Evgeniy Polyakov wrote:
> > [1] http://samba.org/ftp/tridge/dbench/README
>
> I was not able to resist writing a small note: no matter what,
> whatever benchmark is run, it _does_ show system behaviour under one
> condition or another. And when the system behaves rather badly, it is
> quite a common comment that the benchmark was useless. But it did show
> that the system has a problem, even if a rarely triggered one.

If people are using benchmarks to improve file system, and a benchmark
shows a problem, then trying to remedy the performance issue is a good
thing to do, of course. Sometimes, though, the case which is
demonstrated by a poor benchmark is an extremely rare corner case that
doesn't accurately reflect common real-life workloads --- and if
addressing it results in a tradeoff which degrades much more common
real-life situations, then that would be a bad thing.

In situations where benchmarks are used competitively, it's rare that
it's actually a *problem*. Instead it's much more common that a
developer is trying to prove that their file system is *better* to
gullible users who think that a single one-dimensional number is
enough for them to choose file system X over file system Y.

For example, if I wanted to play that game and tell people that ext4
is better, I might pick this graph:

http://btrfs.boxacle.net/repository/single-disk/2.6.29-rc2/2.6.29-rc2/2.6.29-rc2_Mail_server_simulation._num_threads=32.html

On the other hand, this one shows ext4 as the worst compared to all
other file systems:

http://btrfs.boxacle.net/repository/single-disk/2.6.29-rc2/2.6.29-rc2/2.6.29-rc2_Large_file_random_writes_odirect._num_threads=8.html

Benchmarking, like statistics, can be extremely deceptive, and if
people do things like carefully ordering a tar file so the files are
optimal for a file system, it's fair to ask whether that's a common
thing for people to be doing (either unpacking tarballs in general, or
unpacking tarballs whose files have been carefully ordered for a
particular file system). When it's the only number used by a file
system developer when trying to convince users they should use their
file system, at least in my humble opinion it becomes downright
dishonest.

- Ted

 
Old 12-25-2009, 03:14 PM
 
Default benchmark results

On Thu, Dec 24, 2009 at 05:52:34PM -0800, Christian Kujau wrote:
>
> Well, I do "sync" after each operation, so the data should be on disk, but
> that doesn't mean it'll clear the filesystem buffers - but this doesn't
> happen that often in the real world either. Also, all filesystems were tested
> equally (I hope), yet some filesystems perform better than others - even
> if all the content copied/tar'ed/removed would perfectly well fit into the
> machine's RAM.

Did you include the "sync" in part of what you timed? Peter was quite
right --- the fact that the measured bandwidth in your "cp" test is
five times faster than the disk bandwidth as measured by hdparm, and
many file systems had exactly the same bandwidth, makes me very
suspicious that what was being measured was primarily memory bandwidth
--- and not very useful when trying to measure file system
performance.

- Ted

 
Old 12-25-2009, 03:22 PM
Larry McVoy
 
Default benchmark results

On Fri, Dec 25, 2009 at 11:14:53AM -0500, tytso@mit.edu wrote:
> On Thu, Dec 24, 2009 at 05:52:34PM -0800, Christian Kujau wrote:
> >
> > Well, I do "sync" after each operation, so the data should be on disk, but
> > that doesn't mean it'll clear the filesystem buffers - but this doesn't
> > happen that often in the real world either. Also, all filesystems were tested
> > equally (I hope), yet some filesystems perform better than others - even
> > if all the content copied/tar'ed/removed would perfectly well fit into the
> > machine's RAM.
>
> Did you include the "sync" in part of what you timed? Peter was quite
> right --- the fact that the measured bandwidth in your "cp" test is
> five times faster than the disk bandwidth as measured by hdparm, and
> many file systems had exactly the same bandwidth, makes me very
> suspicious that what was being measured was primarily memory bandwidth
> --- and not very useful when trying to measure file system
> performance.

Dudes, sync() doesn't flush the fs cache, you have to unmount for that.
Once upon a time Linux had an ioctl() to flush the fs buffers, I used
it in lmbench.

ioctl(fd, BLKFLSBUF, 0);

No idea if that is still supported, but sync() is a joke for benchmarking.
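For what it's worth, a hedged sketch of how one might issue the same flush from a script today: util-linux's blockdev --flushbufs sends this same BLKFLSBUF ioctl to a block device. The device path below is a placeholder, and root is required, so the snippet guards and degrades cleanly.

```shell
# Sketch: drop the block-layer buffer cache via BLKFLSBUF
# (blockdev --flushbufs). /dev/sdX is a placeholder device.
dev=/dev/sdX
if [ -b "$dev" ] && [ "$(id -u)" = 0 ]; then
    sync                          # write out dirty data first
    blockdev --flushbufs "$dev"   # BLKFLSBUF: discard clean buffers too
    flushed=yes
else
    flushed=no                    # non-root or placeholder device: skip
fi
```

Note this still leaves the page cache alone, which is the distinction Ted draws in the next message.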
--
Larry McVoy lm at bitmover.com http://www.bitkeeper.com

 
Old 12-25-2009, 03:33 PM
 
Default benchmark results

On Fri, Dec 25, 2009 at 08:22:38AM -0800, Larry McVoy wrote:
>
> Dudes, sync() doesn't flush the fs cache, you have to unmount for that.
> Once upon a time Linux had an ioctl() to flush the fs buffers, I used
> it in lmbench.
>
> ioctl(fd, BLKFLSBUF, 0);
>
> No idea if that is still supported, but sync() is a joke for benchmarking.

Depends on what you are trying to do ("flush" has multiple meanings,
so using it can be ambiguous). BLKFLSBUF will write out any dirty
buffers, *and* empty the buffer cache. I use it when benchmarking
e2fsck optimization. It doesn't do anything for the page cache. If
you are measuring the time to write a file, using fsync() or sync()
will include the time to actually write the data to disk. It won't
empty caches, though; if you are going to measure reads as well as
writes, then you'll probably want to do something like "echo 3 >
/proc/sys/vm/drop_caches".
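A small shell sketch (not from the thread) of the sequence described here, guarded so it degrades cleanly when not run as root:

```shell
# Sketch: empty the caches between the write and read phases of a
# benchmark. Writing to drop_caches requires root.
sync                                       # flush dirty pages to disk first
if [ -w /proc/sys/vm/drop_caches ] &&
   echo 3 > /proc/sys/vm/drop_caches 2>/dev/null; then
    dropped=yes                            # 1=pagecache, 2=dentries+inodes, 3=both
else
    dropped=no                             # not root (or restricted container)
fi
```

The sync matters: drop_caches only discards clean pages, so without it dirty data would simply stay cached.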

- Ted

 
Old 12-25-2009, 05:42 PM
Christian Kujau
 
Default benchmark results

On Fri, 25 Dec 2009 at 11:14, tytso@mit.edu wrote:
> Did you include the "sync" in part of what you timed?

In my "generic" tests[0] I do "sync" after each of the cp/tar/rm
operations.

> Peter was quite
> right --- the fact that the measured bandwidth in your "cp" test is
> five times faster than the disk bandwidth as measured by hdparm, and
> many file systems had exactly the same bandwidth, makes me very
> suspicious that what was being measured was primarily memory bandwidth

That's right, and that's what I replied to Peter on jfs-discussion[1]:

>> * In the "generic" test the 'tar' test bandwidth is exactly the
>> same ("276.68 MB/s") for nearly all filesystems.
True, because I'm tarring up ~2.7GB of content while the box is equipped
with 8GB of RAM. So it *should* be the same for all filesystems, as
Linux could easily hold all this in its caches. Still, jfs and zfs
manage to be slower than the rest.

> --- and not very useful when trying to measure file system
> performance.

For the bonnie++ tests I chose an explicit filesize of 16GB, two times
the size of the machine's RAM, to make sure it tests the *disks'*
performance. And to be consistent across one benchmark run, I should
have copied/tarred/removed 16GB as well. However, I decided not to do
that, but to *use* the filesystem buffers instead of ignoring them.
After all, it's not about disk performance (that's what hdparm could
be for) but filesystem performance (or comparison, more exactly) - and
I'm not excited by the fact that almost all filesystems are copying at
~276MB/s, but I'm wondering why zfs is 13 times slower when copying
data, or why xfs takes 200 seconds longer than other filesystems while
handling the same size as all the others. So no, please don't compare
the bonnie++ results against my "generic" results within these results
- as they're (obviously, I thought) taken with different
parameters/content sizes.
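A rough sketch of such a generic cp/tar/rm loop, in the spirit of the fs-bench.sh script linked as [0] below (illustrative only; the temp directories and tiny data set here are placeholders for the ~2.7GB tree used in the actual runs). Each operation is followed by a sync so writeback is included in the reported time:

```shell
# Sketch: time cp, tar and rm, syncing after each operation.
data=$(mktemp -d); work=$(mktemp -d)
dd if=/dev/zero of="$data/file" bs=1M count=4 2>/dev/null

for op in cp tar rm; do
    start=$(date +%s)
    case $op in
        cp)  cp -a "$data" "$work/copy" ;;
        tar) tar -C "$work" -cf "$work/copy.tar" copy ;;
        rm)  rm -rf "$work/copy" "$work/copy.tar" ;;
    esac
    sync                              # include writeback in the timing
    echo "$op: $(( $(date +%s) - start ))s"
done
rm -rf "$data" "$work"
```

With a data set that fits in RAM, most of each measured interval ends up being the sync, which is precisely the page-cache effect discussed above.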

Christian.

[0] http://nerdbynature.de/benchmarks/v40z/2009-12-22/env/fs-bench.sh.txt
[1] http://tinyurl.com/yz6x2sj
--
BOFH excuse #85:

Windows 95 undocumented "feature"

 
Old 12-25-2009, 05:51 PM
Christian Kujau
 
Default benchmark results

On Fri, 25 Dec 2009 at 08:22, Larry McVoy wrote:
> Dudes, sync() doesn't flush the fs cache, you have to unmount for that.

Thanks Larry, that was exactly my point[0] too; I should add that to the
results page to avoid further confusion or mistaken assumptions:

> Well, I do "sync" after each operation, so the data should be on
> disk, but that doesn't mean it'll clear the filesystem buffers
> - but this doesn't happen that often in the real world too.

I realize however that on the same results page the bonnie++ tests were
run with a filesize *specifically* set to no longer utilize the
filesystem buffers but to measure *disk* performance, while my "generic"
tests do something else - and thus cannot be compared to the bonnie++ or
hdparm results.

> No idea if that is still supported, but sync() is a joke for benchmarking.

I was using "sync" to make sure that the data "should" be on the disks
now; I did not want to flush the filesystem buffers during the "generic"
tests.

Thanks,
Christian.

[0] http://www.spinics.net/lists/linux-ext4/msg16878.html
--
BOFH excuse #210:

We didn't pay the Internet bill and it's been cut off.

 
Old 12-25-2009, 05:56 PM
Christian Kujau
 
Default benchmark results

On Fri, 25 Dec 2009 at 11:33, tytso@mit.edu wrote:
> caches, though; if you are going to measure read as well as writes,
> then you'll probably want to do something like "echo 3 >
> /proc/sys/vm/drop_caches".

Thanks for the hint; I couldn't find sys/vm/drop_caches documented in
Documentation/ but it's good to know there's a way to flush all these
caches via this knob. Maybe I should add this to those "generic" tests
to be more comparable to the other benchmarks.

Christian.
--
BOFH excuse #210:

We didn't pay the Internet bill and it's been cut off.

 
