10-12-2012, 11:06 PM
Michael Zugelder
 
PROBLEM: read starvation during writeback

Hi,

On Fri, 2012-10-12 at 22:34 +0200, Milan Broz wrote:
> On 10/12/2012 09:37 PM, Michael Zugelder wrote:
> > Testing setup:
> > * Fedora 17, stock 3.5.4-2.fc17 kernel and a self-compiled 3.6.1 kernel
> > * 320 GiB USB hard drive (sdb)
>
> I guess that USB is the key factor here... I remember to have similar
> problem some time ago even without dmcrypt.
>
> Is it reproducible with the same kernel cfg but with internal disk?

I noticed this problem on my encrypted root partition and used the
USB device to reproduce it: it's much easier to trigger writeback on a
slow 20 MiB/s device, and it leaves the root device usable while the
tests run.

My root device (SATA2, Samsung SSD 830, aes-xts-plain, btrfs):
3463 seeks/second, 0.29 ms random access time

During writeback:
0 seeks/second, 4285.71 ms random access time

> You can also test a completely fake underlying device,
> using the device-mapper zero target:
> dmsetup create dev_zero --table "0 <sector count> zero"
> (All writes are dropped and all reads return zero in this case.)
>
> Is there any starvation with this setup? (It shouldn't.)

Using the zero target alone, no issues (192286 seeks/second).

> Btw you can use cryptsetup with cipher "null" to simplify this
> (I added it to cryptsetup to test exactly such scenarios).

Neat, but it doesn't work with the device-mapper zero target. Using raw
dmsetup with crypto_null results in a nice test case:

Preparation:
# dmsetup create dev_zero --table "0 $((1024*1024*1024)) zero"
# dmsetup create nullcrypt --table "0 $((1024*1024*1024)) crypt cipher_null - 0 /dev/mapper/dev_zero 0"
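For reference, the fields of the crypt table line break down as follows
(my annotation of the dm-crypt target's table format; the concrete values
are the ones used above):

```shell
# dm-crypt table line: <start> <length> crypt <cipher> <key> <iv_offset> <device> <offset>
TABLE="0 $((1024*1024*1024)) crypt cipher_null - 0 /dev/mapper/dev_zero 0"
# 0                    -> start sector of the mapping
# $((1024*1024*1024))  -> length in 512-byte sectors (2^30 sectors = 512 GiB)
# cipher_null          -> cipher spec, cipher[-mode[-iv]]
# -                    -> key in hex; "-" works here because cipher_null takes no key
# 0                    -> IV offset
# /dev/mapper/dev_zero -> backing device
# 0                    -> sector offset into the backing device

# The mapping itself is then created with (root required):
#   dmsetup create nullcrypt --table "$TABLE"
```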

Now some writes:
# dd if=/dev/zero of=/dev/mapper/nullcrypt bs=1M

Then try to read something:
# seeker /dev/mapper/nullcrypt
8260 seeks/second, 0.12 ms random access time

# dd if=/dev/mapper/nullcrypt of=/dev/null count=1 skip=355154
512 bytes (512 B) copied, 18.0695 s, 0.0 kB/s

For some stretches, reads are fine (note the relatively low average
random access time), but occasionally a single read takes multiple
seconds. A benchmark showing min/max/avg/med/stdev values for random
reads would be nice.
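Most of those statistics are easy to collect by timing a batch of
single-sector dd reads and summarizing with awk. A sketch of my own (not
part of the thread; assumes bash, GNU date with nanosecond %N, and a
1 GiB test device as set up above):

```shell
#!/bin/bash
# Sketch (mine, not from the thread): time N random single-sector reads
# from a device and print min/avg/max/stdev of the latency in ms.

latstats() {
    # stdin: one latency value (ms) per line; stdout: summary statistics
    awk '{ sum += $1; sumsq += $1 * $1
           if (NR == 1 || $1 < min) min = $1
           if (NR == 1 || $1 > max) max = $1 }
         END { avg = sum / NR
               printf "min/avg/max/stdev = %s/%.1f/%s/%.1f ms\n",
                      min, avg, max, sqrt(sumsq / NR - avg * avg) }'
}

random_read_bench() {
    local dev=$1 n=${2:-50} sectors=${3:-2097152}   # 2097152 = 1 GiB of sectors
    for ((i = 0; i < n; i++)); do
        local skip=$((RANDOM * RANDOM % sectors))   # pseudo-random sector
        local t0=$(date +%s%N)
        dd if="$dev" of=/dev/null bs=512 count=1 skip="$skip" 2>/dev/null
        echo $(( ($(date +%s%N) - t0) / 1000000 ))  # elapsed milliseconds
    done | latstats
}

# Example (as root): random_read_bench /dev/mapper/nullcrypt 100
```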

> > * writeback induced by running 'dd if=/dev/zero of=$target bs=1M'
>
> Any change if you use oflag=direct ? (iow using direct io)

No issues when using direct IO (25054 seeks/second, no obvious spikes)
in the zero target + cipher_null test from above.

> > I experimented a bit with the other device mapper targets, namely linear
> > and stripe, but both worked completely fine. I also tried putting a
> > linear mapping above dm-crypt, with no impact on performance. Comparing
> > the content of the /sys/block/$DEV files of the linear mapping and
> > dm-crypt, there are no differences beside the name, dev no, stats,
> > uevent and inflight files.
>
> There is a crucial difference between linear/stripe and dmcrypt:
> linear just remaps IO to the underlying device, while dmcrypt queues
> operations (using a kernel workqueue) and creates full bio clones.
> So the comparison here is IMHO not very helpful.

Okay, I just wanted to rule out a general device mapper problem.

> There are two internal dmcrypt queues, but I think that the problem
> is triggered by some combination with the USB storage backend.

Results above seem to indicate otherwise.

> > Any pointers would be appreciated, I haven't found much on the web about
> > this issue.
>
> Btw there was a proposed rewrite of internal dmcrypt queues, if you have time,
> you can try if it changes anything for your use case.
> Patches in dm-devel archive
> http://www.redhat.com/archives/dm-devel/2012-August/msg00210.html

Seems interesting, I'll try it out tomorrow.


Thanks,
Michael

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
 
10-13-2012, 08:56 PM
Michael Zugelder
 
PROBLEM: read starvation during writeback

Hi,

I have a small update on my previously posted test case. I noticed
'cryptsetup -c null' uses cipher_null in ecb mode, and in that case, I
don't observe any read starvation. I'm not sure what the default cipher
mode is (cryptsetup uses cbc-essiv:sha256), but I can reproduce the
problem using just 'cipher_null-cbc-plain'.

The revised test case is as follows:

Preparation:
# dmsetup create dev_zero --table "0 $((1024*1024*1024)) zero"
# dmsetup create cipher_null-cbc-plain --table "0 $((1024*1024*1024)) crypt cipher_null-cbc-plain - 0 /dev/mapper/dev_zero 0"
# dmsetup create cipher_null-ecb --table "0 $((1024*1024*1024)) crypt cipher_null-ecb - 0 /dev/mapper/dev_zero 0"

Testing cipher_null-cbc-plain:
# dd if=/dev/zero bs=1M of=/dev/mapper/cipher_null-cbc-plain
# ioping -R /dev/mapper/cipher_null-cbc-plain
1 requests completed in 9530.3 ms, 0 iops, 0.0 mb/s

Note that for some reason, dd writes extremely slowly (below 100 MB/s on
my machine) in this test.

On the other hand, cipher_null-ecb works fine:
# dd if=/dev/zero bs=1M of=/dev/mapper/cipher_null-ecb
# ioping -R /dev/mapper/cipher_null-ecb
32337 requests completed in 3000.0 ms, 54042 iops, 211.1 mb/s
min/avg/max/mdev = 0.0/0.0/18.8/0.2 ms

dd writes at around 850 MB/s in that case (1.8 GB/s directly to the null
target).

I tried similar benchmarks using aes instead of cipher_null, and found
no bothersome spikes with aes-ecb, aes-cbc-plain, or
aes-cbc-essiv:sha256. But aes-xts-plain sometimes drops down to 5 iops
(over the course of 3 seconds).

So there is probably something very wrong with cipher_null-cbc-plain
being an order of magnitude slower than aes-cbc-plain, but the cbc mode
itself doesn't seem to cause the problem.

I also ran the benchmarks on an old Athlon 64 X2 3800+ box running
kernel 3.2 and could reproduce the problem on the very first try for
every cipher spec I tried (cipher_null-cbc-plain, cipher_null-ecb,
aes-cbc-plain, aes-ecb, aes-xts-plain). The best I could achieve was 5
iops. Interestingly, though, I didn't observe the really huge spikes
(upwards of 20 seconds) that I see on my 1st gen Core i5 notebook.
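The per-spec runs above could be automated along these lines (a sketch
of my own, not from the thread; assumes root, ioping, and the dev_zero
mapping from the preparation step; the aes specs would additionally need
a real hex key in place of "-"):

```shell
#!/bin/bash
# Sketch: for each cipher spec, create a mapping over dev_zero, start a
# streaming write, and measure random-read latency with ioping.
SECTORS=$((1024*1024*1024))

table_for() {   # dm-crypt table line for a given cipher spec (empty key)
    echo "0 $SECTORS crypt $1 - 0 /dev/mapper/dev_zero 0"
}

bench_spec() {
    local spec=$1
    dmsetup create "bench-$spec" --table "$(table_for "$spec")"
    dd if=/dev/zero bs=1M of="/dev/mapper/bench-$spec" &   # induce writeback
    local writer=$!
    ioping -R "/dev/mapper/bench-$spec"                    # random reads
    kill "$writer"; wait "$writer" 2>/dev/null
    dmsetup remove "bench-$spec"
}

if [ "$(id -u)" -eq 0 ] && [ -e /dev/mapper/dev_zero ]; then
    for spec in cipher_null-ecb cipher_null-cbc-plain; do
        echo "=== $spec ==="
        bench_spec "$spec"
    done
fi
```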

On Sat, 2012-10-13 at 01:06 +0200, Michael Zugelder wrote:
> On Fri, 2012-10-12 at 22:34 +0200, Milan Broz wrote:
> > Btw there was a proposed rewrite of internal dmcrypt queues, if you have time,
> > you can try if it changes anything for your use case.
> > Patches in dm-devel archive
> > http://www.redhat.com/archives/dm-devel/2012-August/msg00210.html
>
> Seems interesting, I'll try it out tomorrow.

I tried the patch set earlier today, but had the same issues when
reading while writing to a 'cipher_null' mapping.

Anything else I could do to diagnose this problem?


Michael

