
    Need to Improve Disk Utilization on XenServer 7.2

    IT Discussion
    Tags: xp, xenserver, xenserver 7.2, storage, iops
    • scottalanmiller @Obsolesce

      @tim_g said in Need to Improve Disk Utilization on XenServer 7.2:

      A co-worker has the same laptop as I do, except the 15" version. Everything else is the exact same.

      He's using EXT4 + LVM, and qcow2 for virtual disks.

      I'm using XFS + LVM, and RAW (.img) for virtual disks.

      My Win10 VM gets twice the I/O that his does. Neither of us had anything running in the background.

      I know this doesn't help with Xen, but food for thought (we're both using M.2 SSDs). Mine was over 2K MB/s reads and 400 MB/s writes; his was 1K reads and 200 writes.

      Edit: We're both running Fedora 26. At the time I was running GNOME and he was running Cinnamon.

      But both on KVM, right? No Xen involved?

      ObsolesceO 1 Reply Last reply Reply Quote 0
      • scottalanmillerS
        scottalanmiller @krisleslie
        last edited by

        @krisleslie said in Need to Improve Disk Utilization on XenServer 7.2:

        @tim_g what would be the comparison in IOPS between a RAID 10 of spinning rust and 1 SSD?

        A SATA 7200 RPM drive is ~100 IOPS. So four of them in RAID 10 is 400 Read IOPS.

        A typical SATA SSD is 10K - 100K IOPS.

        You would need hundreds of SATA drives in a massive RAID 10 with a huge cache to come close to a single $100 SSD, let alone a nice M.2 drive.
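A quick back-of-envelope sketch of that math, using the nominal figures from the post (not benchmarks) and the usual RAID 10 assumptions (reads scale with drive count, writes carry a penalty of 2):

```python
import math

# Nominal per-device figures from the discussion above (not benchmarks).
HDD_IOPS = 100          # ~7200 RPM SATA drive
SATA_SSD_IOPS = 10_000  # conservative low end for a SATA SSD

def raid10_read_iops(drives, per_drive_iops):
    """Reads can be served by either side of a mirror, so they scale with drive count."""
    return drives * per_drive_iops

def raid10_write_iops(drives, per_drive_iops):
    """Each write lands on both halves of a mirror pair (write penalty of 2)."""
    return drives * per_drive_iops // 2

print(raid10_read_iops(4, HDD_IOPS))        # 400 read IOPS, as in the post
print(raid10_write_iops(4, HDD_IOPS))       # 200 write IOPS
print(math.ceil(SATA_SSD_IOPS / HDD_IOPS))  # 100 drives to match even a low-end SSD on reads
```

Even taking the conservative 10K figure, matching one SSD's read IOPS takes a hundred spindles, which is where the "hundreds of drives" claim comes from at the higher SSD figures.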

        • Obsolesce @scottalanmiller

          @scottalanmiller said in Need to Improve Disk Utilization on XenServer 7.2:

          @tim_g said in Need to Improve Disk Utilization on XenServer 7.2:

          A co-worker has the same laptop as I do, except the 15" version. Everything else is the exact same.

          He's using EXT4 + LVM, and qcow2 for virtual disks.

          I'm using XFS + LVM, and RAW (.img) for virtual disks.

          My Win10 VM gets twice the I/O that his does. Neither of us had anything running in the background.

          I know this doesn't help with Xen, but food for thought (we're both using M.2 SSDs). Mine was over 2K MB/s reads and 400 MB/s writes; his was 1K reads and 200 writes.

          Edit: We're both running Fedora 26. At the time I was running GNOME and he was running Cinnamon.

          But both on KVM, right? No Xen involved?

          Correct.

          • FATeknollogee @Obsolesce

            @tim_g said in Need to Improve Disk Utilization on XenServer 7.2:

            A co-worker has the same laptop as I do, except the 15" version. Everything else is the exact same.

            He's using EXT4 + LVM, and qcow2 for virtual disks.

            I'm using XFS + LVM, and RAW (.img) for virtual disks.

            Why RAW (.img)? I thought qcow2 was preferred.

            • Obsolesce @FATeknollogee

              @fateknollogee said in Need to Improve Disk Utilization on XenServer 7.2:

              @tim_g said in Need to Improve Disk Utilization on XenServer 7.2:

              A co-worker has the same laptop as I do, except the 15" version. Everything else is the exact same.

              He's using EXT4 + LVM, and qcow2 for virtual disks.

              I'm using XFS + LVM, and RAW (.img) for virtual disks.

              Why RAW (.img)? I thought qcow2 was preferred.

              I don't need any special features like snapshotting or anything like that, only performance. RAW is presented as-is to the VM and gives the best I/O performance.

              I can always convert if need be, and there are other ways of snapshotting/checkpointing.

              There are many other differences too, but it's better to Google comparisons than to have me try to quickly explain them while preoccupied.

              • stacksofplates @Obsolesce

                @tim_g said in Need to Improve Disk Utilization on XenServer 7.2:

                @fateknollogee said in Need to Improve Disk Utilization on XenServer 7.2:

                @tim_g said in Need to Improve Disk Utilization on XenServer 7.2:

                A co-worker has the same laptop as I do, except the 15" version. Everything else is the exact same.

                He's using EXT4 + LVM, and qcow2 for virtual disks.

                I'm using XFS + LVM, and RAW (.img) for virtual disks.

                Why RAW (.img)? I thought qcow2 was preferred.

                I don't need any special features like snapshotting or anything like that, only performance. RAW is presented as-is to the VM and gives the best I/O performance.

                I can always convert if need be, and there are other ways of snapshotting/checkpointing.

                There are many other differences too, but it's better to Google comparisons than to have me try to quickly explain them while preoccupied.

                If you preallocate the qcow2s, you get close to raw speeds.

                • FATeknollogee

                  For what I'm doing, qcow2 is fine, except that I can't snapshot "q35/uefi" guests; a fix is coming sometime in the future.

                  • krisleslie

                    Scott, I knew it was a vast difference, but not 100x! Dude, a 4-SSD RAID 10 is basically all you need!

                    • scottalanmiller @krisleslie

                      @krisleslie said in Need to Improve Disk Utilization on XenServer 7.2:

                      Scott, I knew it was a vast difference, but not 100x! Dude, a 4-SSD RAID 10 is basically all you need!

                      Absolutely. That's why no one uses RAID 10 for SSDs; it doesn't make sense when the leap in performance is so big. That's why RAID 5 is about all that is used.

                      • BRRABill @scottalanmiller

                        @scottalanmiller said

                        You can also install the GUI on the server and have local management tools. Obviously, managing purely remotely is better. But as this is a desktop anyway, local management tools are not out of the question, and you can switch later once you are comfortable with it. There is no lock-in to your GUI or tool choices like with Hyper-V.

                        "Obviously"

                        Hey do you consider cockpit a GUI?

                        • krisleslie @BRRABill

                          @scottalanmiller
                          Hey Scott, I'm going to switch off my SATA 7200 RPM spinning rust, probably tonight. I'll go ahead and switch to an SSD (luckily I have a bunch sitting around). With that in mind, I'm trying to wrap my head around the per-machine performance impact of an SSD vs. an HDD. I realize the available IOPS get split across my VMs with spinning rust. Do my IOPS drop per VM with an SSD?

                          Also, just to give some context: on my two Dell R530s at work (thank God; remember those oldy goldy days, SAM? haha), I went with SAS 7200s on my H700 with 512 MB cache. The thing runs like a champ with just four 2 TB HDDs in RAID 10. I have literally no IOPS problems that I've experienced, and I have about 30 VMs running. With that in mind, does it make sense at work to consider the swap to full SSD?

                          The only server I have that needs pure storage is a Ubiquiti NVR; short of that, nothing else runs slow or has even blipped at me. No startup sprawl either, which, looking back at my old craptacular Dell T110 tower, is what killed it.

                          • jmoore @scottalanmiller

                            @scottalanmiller said in Need to Improve Disk Utilization on XenServer 7.2:

                            @krisleslie said in Need to Improve Disk Utilization on XenServer 7.2:

                            Scott, I knew it was a vast difference, but not 100x! Dude, a 4-SSD RAID 10 is basically all you need!

                            Absolutely. That's why no one uses RAID 10 for SSDs; it doesn't make sense when the leap in performance is so big. That's why RAID 5 is about all that is used.

                            If you used 4 SSDs in a RAID 10, would you still get ever-increasing levels of performance?

                            • scottalanmiller @BRRABill

                              @brrabill said in Need to Improve Disk Utilization on XenServer 7.2:

                              @scottalanmiller said

                              You can also install the GUI on the server and have local management tools. Obviously, managing purely remotely is better. But as this is a desktop anyway, local management tools are not out of the question, and you can switch later once you are comfortable with it. There is no lock-in to your GUI or tool choices like with Hyper-V.

                              "Obviously"

                              Hey do you consider cockpit a GUI?

                              Yes, do you consider it local?

                              • scottalanmiller @jmoore

                                @jmoore said in Need to Improve Disk Utilization on XenServer 7.2:

                                @scottalanmiller said in Need to Improve Disk Utilization on XenServer 7.2:

                                @krisleslie said in Need to Improve Disk Utilization on XenServer 7.2:

                                Scott, I knew it was a vast difference, but not 100x! Dude, a 4-SSD RAID 10 is basically all you need!

                                Absolutely. That's why no one uses RAID 10 for SSDs; it doesn't make sense when the leap in performance is so big. That's why RAID 5 is about all that is used.

                                If you used 4 SSDs in a RAID 10, would you still get ever-increasing levels of performance?

                                RAID is RAID. The fact that the disks are SSDs isn't a factor to the RAID system.

                                What can be a factor is a RAID controller that caps out lower than your RAID subsystem.
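That controller bottleneck can be sketched as a simple min(): the array delivers the lower of what the drives add up to and what the controller can push. The figures below are made up for illustration, not taken from any datasheet:

```python
def effective_iops(array_iops: int, controller_cap: int) -> int:
    """Effective ceiling of a RAID array sitting behind a hardware controller."""
    return min(array_iops, controller_cap)

# Four hypothetical 50K-IOPS SSDs in RAID 10 (reads) behind a controller capped at 120K:
print(effective_iops(4 * 50_000, 120_000))  # 120000 -- the controller is the bottleneck

# The same controller never limits a 4-drive spinning-rust array:
print(effective_iops(4 * 100, 120_000))     # 400 -- the drives are the bottleneck
```

This is why the cap rarely matters with spinning disks but can with SSDs: the drives outrun the controller instead of the other way around.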

                                • scottalanmiller @krisleslie

                                  @krisleslie said in Need to Improve Disk Utilization on XenServer 7.2:

                                  @scottalanmiller
                                  Hey Scott, I'm going to switch off my SATA 7200 RPM spinning rust, probably tonight. I'll go ahead and switch to an SSD (luckily I have a bunch sitting around). With that in mind, I'm trying to wrap my head around the per-machine performance impact of an SSD vs. an HDD. I realize the available IOPS get split across my VMs with spinning rust. Do my IOPS drop per VM with an SSD?

                                  No, SSDs do not have contention.

                                  • krisleslie @jmoore

                                    @jmoore
                                    In short, yes, RAID 10 with SSDs would be astronomically faster. I think of it like this:
                                    four spinning-rust HDDs at 7200 RPM will only net you a max of 400 read IOPS, though you can keep adding HDDs in pairs to keep improving the speed. I assume that doesn't include overhead. If you have a server and want pure storage first and speed second, sticking with RAID 10 lets you keep incrementally improving.

                                    But putting that in the context of what Scott is saying, I would have to use something like 10 arrays of hard drives to equal the performance of 1 SSD! That also increases risk; what if a drive fails? That's a lot of drives to babysit!

                                    I did my digging: 550 MB/s is roughly where most consumer SSDs, and I assume some enterprise drives that don't use NVMe, cap out. That's with a conservative base of 10,000 IOPS, going up to 2 million IOPS!

                                    https://kb.sandisk.com/app/answers/detail/a_id/16376/~/sandisk-ultra-ii-ssd-specifications

                                    I guess looking at it from a different perspective, I would find little need for many small companies to ever go past 4 SSDs in RAID 10.

                                    To get the equivalent of that speed with spinning disks, you would have to stuff in so many internal and external RAID controllers that you would pay far more than needed, even if you went with 15K SAS!

                                    • scottalanmiller @krisleslie

                                      @krisleslie said in Need to Improve Disk Utilization on XenServer 7.2:

                                      Also, just to give some context: on my two Dell R530s at work (thank God; remember those oldy goldy days, SAM? haha), I went with SAS 7200s on my H700 with 512 MB cache. The thing runs like a champ with just four 2 TB HDDs in RAID 10. I have literally no IOPS problems that I've experienced, and I have about 30 VMs running. With that in mind, does it make sense at work to consider the swap to full SSD?

                                      So each NL-SAS there has 20% more IOPS than its SATA counterpart. Then RAID 10 on top of that. Then the "million IOPS" cache on top of that. Your base RAID there is getting nearly 5x the IOPS of your new machine, and then that cache makes it act many, many times that size. It's dramatic.
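The "nearly 5x" figure works out from the nominal numbers used earlier in the thread (100 IOPS for a 7200 RPM SATA drive, ~20% more for NL-SAS); this is rough arithmetic, not a benchmark:

```python
# Nominal per-drive figures from the thread, not measurements.
SATA_IOPS = 100                      # baseline 7200 RPM SATA drive
NLSAS_IOPS = int(SATA_IOPS * 1.2)    # NL-SAS: ~20% more IOPS, per the post

array_read_iops = 4 * NLSAS_IOPS     # four drives in RAID 10; reads scale with drive count
print(NLSAS_IOPS)                    # 120 IOPS per NL-SAS drive
print(array_read_iops)               # 480 read IOPS before any cache
print(array_read_iops / SATA_IOPS)   # 4.8x a single SATA drive -- the "nearly 5x"
```

The controller cache then multiplies the effective figure well beyond that, which is why the old R530 setup feels so much faster than the raw spindle math suggests.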

                                      As far as if it is worth moving to SSD, all depends on if more IOPS would be beneficial or not. If you have plenty, what value would faster storage bring?

                                      • scottalanmiller @krisleslie

                                        @krisleslie said in Need to Improve Disk Utilization on XenServer 7.2:

                                        I guess looking at it from a different perspective, I would find little need for many small companies to ever go past 4 SSDs in RAID 10.

                                        Why do we keep mentioning SSDs in RAID 10? Don't even look at that. Three SSDs in RAID 5.
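Scott's point can be framed with the standard RAID write-penalty math (RAID 5 pays 4 I/Os per write, RAID 10 pays 2); the per-SSD figure below is hypothetical, just to show the scale:

```python
SSD_IOPS = 50_000  # hypothetical per-SSD figure, mid-range for SATA SSDs

def raid_write_iops(drives, per_drive_iops, write_penalty):
    """Aggregate write IOPS for an array with the given RAID write penalty."""
    return drives * per_drive_iops // write_penalty

print(raid_write_iops(3, SSD_IOPS, 4))  # 37500 write IOPS: three SSDs in RAID 5
print(raid_write_iops(4, SSD_IOPS, 2))  # 100000 write IOPS: four SSDs in RAID 10
```

Even with RAID 5's heavier write penalty, three SSDs deliver tens of thousands of write IOPS, far beyond any small-business spinning array, while using one fewer drive and keeping two-thirds of raw capacity instead of half.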

                                        • krisleslie @scottalanmiller

                                          @scottalanmiller

                                          Little value, I think, but the NVR still needs to be tuned. We keep running into errors, but it seems the UBNT NVR is a RAM hog, so in theory just adding another 16 GB of RAM will make it perform better. We have about 3-4 people getting into the system simultaneously. It's built with 4 cores, 16 GB RAM, and 1 TB of space, and we're only at about 500 GB of usage.

                                          • scottalanmiller @krisleslie

                                            @krisleslie said in Need to Improve Disk Utilization on XenServer 7.2:

                                            @scottalanmiller

                                            Little value, I think, but the NVR still needs to be tuned.

                                            SSDs don't need to be tuned here. It's like talking about how you have to tune your muscle car, then deciding to get a rocket instead, but still feeling like you need to "tweak" the rocket, even though a minute ago the muscle car was going to do the job.
