Today I upgraded to the new xen and kernel packages that just came out, as I'd been looking forward to all the IO cleanup supposedly in xen 3.0.2. Imagine my disappointment when I found out that my domU's virtual disks still seem to be throttled at around 2 MB/s sustained writes for some reason. My dom0 is "much" better at ~14 MB/s sustained writes.
Is this what other people are seeing? It's painful.
Hi,
On Mon, 2006-05-22 at 20:26 -0700, Ben wrote:
Today I upgraded to the new xen and kernel packages that just came out, as I'd been looking forward to all the IO cleanup supposedly in xen 3.0.2. Imagine my disappointment when I found out that my domU's virtual disks still seem to be throttled at around 2 MB/s sustained writes for some reason. My dom0 is "much" better at ~14 MB/s sustained writes.
Are you using a file or device-backed virtual device?
--Stephen
Hi,
On Tue, 2006-05-23 at 11:54 +0100, Stephen C. Tweedie wrote:
On Mon, 2006-05-22 at 20:26 -0700, Ben wrote:
Today I upgraded to the new xen and kernel packages that just came out, as I'd been looking forward to all the IO cleanup supposedly in xen 3.0.2. Imagine my disappointment when I found out that my domU's virtual disks still seem to be throttled at around 2 MB/s sustained writes for some reason. My dom0 is "much" better at ~14 MB/s sustained writes.
Are you using a file or device-backed virtual device?
On two local guests on a rawhide box:
Device-backed guest (LVM on fast SATA disk):
  dom0 hdparm -t: 53.70 MB/sec
  domU hdparm -t: 53.61 MB/sec

File-backed guest:
  dom0 hdparm -t: 85.7138 MB/sec
  domU hdparm -t: 75.2901 MB/sec
(The file is a sparse file that's only about 50% full, so there's actually less data being read in that case.)
The file-backed domain definitely takes a lot more dom0 CPU time to serve, but they are both pretty fast.
Cheers, Stephen
Hi,
On Tue, 2006-05-23 at 13:31 +0100, Stephen C. Tweedie wrote:
File-backed guest:
  dom0 hdparm -t: 85.7138 MB/sec
  domU hdparm -t: 75.2901 MB/sec
That was rawhide; I get similar performance from current FC5:
domU hdparm -t: 77.99 MB/sec
and identical results testing the raw disk rate with lmdd.
--Stephen
On Tue, 2006-05-23 at 13:31 +0100, Stephen C. Tweedie wrote:
File-backed guest:
  dom0 hdparm -t: 85.7138 MB/sec
  domU hdparm -t: 75.2901 MB/sec
That was rawhide; I get similar performance from current FC5:
domU hdparm -t: 77.99 MB/sec
and identical results testing the raw disk rate with lmdd.
Bleah.. didn't CC the list:
Mine:
DBD - 5G LVM partition on a 200G 7200RPM IDE disk:

[root@xm-fc5-001 ~]# dd if=/dev/zero of=/home/foo bs=4k count=25000
25000+0 records in
25000+0 records out
102400000 bytes (102 MB) copied, 2.06112 seconds, 49.7 MB/s

DBD - dedicated 120G 7200RPM IDE disk (phy device set as xvdb):

[root@xm-fc5-001 ~]# dd if=/dev/zero of=/export/foo bs=4k count=25000
25000+0 records in
25000+0 records out
102400000 bytes (102 MB) copied, 0.719008 seconds, 142 MB/s

hdparm returns similar results. The first physical device is shared with four other domUs.
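Worth noting: a 142 MB/s figure from a single 7200RPM IDE disk is faster than the platters can physically stream, so dd without a sync flag is largely measuring the page cache. A sketch of a fairer comparison, using a throwaway temp file rather than the paths above:

```shell
# Stand-in test file; the post above wrote to /home/foo and /export/foo.
TESTFILE=$(mktemp /tmp/ddtest.XXXXXX)

# As run above: dd returns as soon as the data is in the page cache,
# so the reported rate can exceed the disk's real speed.
dd if=/dev/zero of="$TESTFILE" bs=4k count=25000

# With conv=fdatasync, dd flushes the data to disk before reporting,
# giving a figure much closer to the true sustained write rate.
dd if=/dev/zero of="$TESTFILE" bs=4k count=25000 conv=fdatasync

rm -f "$TESTFILE"
```

The second number is the one to compare between dom0 and domU when hunting for a throttled blkback path.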
On two local guests on a rawhide box:
Device-backed guest (LVM on fast SATA disk):
  dom0 hdparm -t: 53.70 MB/sec
  domU hdparm -t: 53.61 MB/sec

File-backed guest:
  dom0 hdparm -t: 85.7138 MB/sec
  domU hdparm -t: 75.2901 MB/sec
I guess "hdparm -t" is not very meaningful, but anyway:

dom0: hdparm -t: 149.17 MB/sec
domU: hdparm -t: 97.07 MB/sec

However, real performance is somewhat lower, at about 70 MB/s for dom0 and about 33 MB/s for the domU.
Are you using LVM-backed domUs?
High numbers like this give me hope that my problems are not inherent in Xen.
On Tue, 23 May 2006, Felipe Alfaro Solana wrote:
On two local guests on a rawhide box:
Device-backed guest (LVM on fast SATA disk):
  dom0 hdparm -t: 53.70 MB/sec
  domU hdparm -t: 53.61 MB/sec

File-backed guest:
  dom0 hdparm -t: 85.7138 MB/sec
  domU hdparm -t: 75.2901 MB/sec
I guess "hdparm -t" is not very meaningful, but anyway:

dom0: hdparm -t: 149.17 MB/sec
domU: hdparm -t: 97.07 MB/sec

However, real performance is somewhat lower, at about 70 MB/s for dom0 and about 33 MB/s for the domU.
Hi,
On Tue, 2006-05-23 at 10:34 -0700, Ben wrote:
Are you using LVM-backed domUs?
The numbers I listed as being "LVM on fast SATA disk" are LVM-backed, yes. :-) The ones listed as "file-backed" were on a separate dom0 ext3 filesystem dedicated to xen images, but that filesystem also uses LVM underneath.
That host is a dual-core box, but I also tried with the "nosmp" option in case there are performance problems stemming from dom0 and domU competing for a single CPU. Performance was largely unchanged.
Cheers, Stephen
That sounds similar to my setup, except that my RAID-5 is hardware and pretends to be a SCSI device.
Which version of xen are you using? I'm on xen-3.0.2-0.FC5.3, with kernels kernel-xen0-2.6.16-1.2122_FC5 and kernel-xenU-2.6.16-1.2122_FC5.
On Tue, 23 May 2006, Felipe Alfaro Solana wrote:
Are you using LVM-backed domUs?
Yes... dom0 manages LVM laid over a RAID-5 array spread over four 300GB SATA disks, and each domU is contained within an LVM logical volume. Anyway, hdparm -t numbers tend to be very high, but real throughput is much lower.
That sounds similar to my setup, except that my RAID-5 is hardware and pretends to be a SCSI device.
Which version of xen are you using? I'm on xen-3.0.2-0.FC5.3, with kernels kernel-xen0-2.6.16-1.2122_FC5 and kernel-xenU-2.6.16-1.2122_FC5.
I'm using the Xen 3.0.2 hypervisor and xen kernel from the XenSource download page, installed on a dom0 running Red Hat Enterprise Linux ES 4.1 Update 3.
Hi,
On Tue, 2006-05-23 at 22:44 +0200, Felipe Alfaro Solana wrote:
I'm using the Xen 3.0.2 hypervisor and xen kernel from the XenSource download page, installed on a dom0 running Red Hat Enterprise Linux ES 4.1 Update 3.
You might want to try the FC-5 kernel-xen0 and xen. FC-5 has a change to the default sched-sedf scheduler parameters which can make for much fairer CPU balance between dom0 and domU.
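For anyone staying on the stock XenSource build, the same kind of tuning can be attempted by hand. A sketch, assuming the xm sched-sedf syntax of Xen 3.0.x; the numbers are illustrative only (the exact FC-5 defaults aren't given in this thread), and these commands of course only run on a Xen host:

```shell
# List current SEDF scheduler parameters for running domains.
xm sched-sedf

# Illustrative values only: give domain 1 a 20ms period and 5ms slice,
# with extratime enabled so it can soak up otherwise-idle CPU.
# Assumed argument order (Xen 3.0.x): domain period slice latency extratime weight
xm sched-sedf 1 20000000 5000000 0 1 0
```

If dom0 is being starved while serving domU block IO, giving dom0 itself a larger slice (or extratime) is the usual first experiment.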
--Stephen
...except I'm using that kernel, and my performance is much worse than Felipe's?
On Tue, 23 May 2006, Stephen C. Tweedie wrote:
Hi,
On Tue, 2006-05-23 at 22:44 +0200, Felipe Alfaro Solana wrote:
I'm using the Xen 3.0.2 hypervisor and xen kernel from the XenSource download page, installed on a dom0 running Red Hat Enterprise Linux ES 4.1 Update 3.
You might want to try the FC-5 kernel-xen0 and xen. FC-5 has a change to the default sched-sedf scheduler parameters which can make for much fairer CPU balance between dom0 and domU.
--Stephen
Hi
On Tue 23-May-2006 at 01:34:31PM -0700, Ben wrote:
my RAID-5 is hardware and pretends to be a SCSI device.
If it's a 3ware RAID card, this could well be the bottleneck -- in my experience they tend to have very high iowait and get really slow when there is a lot of data being moved about...
Chris
It's an Adaptec card, and I've seen it push over 25 MB/s before I went to Xen, so it's not the card.
On May 24, 2006, at 1:53 AM, Chris Croome wrote:
Hi
On Tue 23-May-2006 at 01:34:31PM -0700, Ben wrote:
my RAID-5 is hardware and pretends to be a SCSI device.
If it's a 3ware RAID card, this could well be the bottleneck -- in my experience they tend to have very high iowait and get really slow when there is a lot of data being moved about...
Chris
Well, this gives me hope that I'm suffering from some kind of misconfiguration issue. I'm using LVM-backed domUs, and I see:

dom0 hdparm -t: 53.68 MB/sec
domU hdparm -t: 18.39 MB/sec
Unfortunately, I don't know where to look to figure out what's going on. Any suggestions?
On May 23, 2006, at 5:31 AM, Stephen C. Tweedie wrote:
Hi,
On two local guests on a rawhide box:
Device-backed guest (LVM on fast SATA disk):
  dom0 hdparm -t: 53.70 MB/sec
  domU hdparm -t: 53.61 MB/sec
Hi,
On Tue, 2006-05-23 at 08:55 -0700, Ben wrote:
Well, this gives me hope that I'm suffering from some kind of misconfiguration issue. I'm using LVM-backed domUs, and I see:

dom0 hdparm -t: 53.68 MB/sec
domU hdparm -t: 18.39 MB/sec
"top" and "xm top", perhaps? If you have fast storage and slow CPUs, you may be seeing the system struggle to keep the IO pipeline between domU and dom0 full enough to saturate the disk, for example.
Another thing, are you using "phy:" or "file:" on the domU config?
--Stephen
I'm running one domU on my dom0, and both are largely idle. They're running on a dual-core Opteron, so I don't suspect the CPU is the bottleneck.
I'm using phy: in my domU config.
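For reference, the two backend types being discussed correspond to disk lines like these in a domU config file (volume and image names here are made-up examples, not from this thread):

```
# Device-backed: a dom0 block device (e.g. an LVM volume) exported via blkback
disk = [ 'phy:VolGroup00/domU-root,xvda,w' ]

# File-backed: a loopback-attached image file served from dom0's filesystem
disk = [ 'file:/var/lib/xen/images/domU.img,xvda,w' ]
```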
I notice (with large scp copies, at least) that writing to the domU starts off at network speed, but then slows down to about 2 MB/s after ~10 seconds, presumably as buffers fill up. Looking at vmstat on the domU shows 100% CPU iowait, while the dom0 at the same time is almost 100% idle, with the occasional 1% system.
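The stall pattern described above is easy to watch from both domains at once; a minimal sketch, run in each guest while the copy is in progress:

```shell
# One sample per second for 10 seconds. In the symptom described here,
# the domU's "wa" (iowait) column sits near 100 while dom0's "id" (idle)
# column stays near 100 -- i.e. the domU is blocked on IO that dom0
# isn't actually busy servicing.
vmstat 1 10
```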
Have any of you tweaked your xen scheduling?
On Tue, 23 May 2006, Stephen C. Tweedie wrote:
Hi,
On Tue, 2006-05-23 at 08:55 -0700, Ben wrote:
Well, this gives me hope that I'm suffering from some kind of misconfiguration issue. I'm using LVM-backed domUs, and I see:

dom0 hdparm -t: 53.68 MB/sec
domU hdparm -t: 18.39 MB/sec
"top" and "xm top", perhaps? If you have fast storage and slow CPUs, you may be seeing the system struggle to keep the IO pipeline between domU and dom0 full enough to saturate the disk, for example.
Another thing, are you using "phy:" or "file:" on the domU config?
--Stephen