
    RAID Performance Calculators

    IT Discussion
    46 Posts 9 Posters 9.4k Views
    • coliver

      Can an SSD drive saturate the SATA connection it is attached to? Or are they not that fast yet?

      I know most enterprises will probably start moving to PCIe SSD drives or at least a controller to integrate them.

    • DustinB3403

        Well, SATA supports up to 6 Gb/s.

        By my calculations I can push 4.4 GB/min or 700 MB/s (write).

        So I don't think so.
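A quick unit sanity check on the figures in this post may help; this is a minimal sketch assuming decimal units (1 GB = 1000 MB) and SATA III's 8b/10b encoding, which leaves roughly 80% of the 6 Gb/s line rate for data:

```python
# Convert the SATA link rate and a GB/min figure into comparable MB/s numbers.

def sata3_usable_mb_s(line_rate_gbps=6.0, encoding_efficiency=0.8):
    """SATA III line rate is 6 Gb/s; 8b/10b encoding leaves ~80% for payload."""
    return line_rate_gbps * 1e9 * encoding_efficiency / 8 / 1e6  # bits -> MB/s

def gb_per_min_to_mb_s(gb_per_min):
    """Convert a GB/min throughput figure to MB/s (decimal units)."""
    return gb_per_min * 1000 / 60

print(sata3_usable_mb_s())      # ~600 MB/s usable per SATA III link
print(gb_per_min_to_mb_s(4.4))  # ~73 MB/s
```

Note that 4.4 GB/min works out to roughly 73 MB/s, not 700 MB/s, so the two figures quoted in the post don't agree with each other; the later replies dig into why throughput numbers like these are the wrong lens anyway.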

    • coliver @DustinB3403

          @DustinB3403 said:

          Well, SATA supports up to 6 Gb/s.

          By my calculations I can push 4.4 GB/min or 700 MB/s (write).

          So I don't think so.

          Thanks.

    • Dashrender @DustinB3403

            @DustinB3403 said:

            Well, SATA supports up to 6 Gb/s.

            By my calculations I can push 4.4 GB/min or 700 MB/s (write).

            So I don't think so.

            Not a single drive. But an array definitely can.

    • DustinB3403 @Dashrender

              @Dashrender my calculations are for a 12 disk RAID 5 array.

              A bigger system might.

    • scottalanmiller @DustinB3403

                @DustinB3403 said:

                Yeah, I did the math on the SSD drives above, and the rate in IOPS is 4.4 GB/m.

                Drive performance is not measured in GB/m. It is measured in IOPS.

    • scottalanmiller

                  Looking at throughput numbers for drives is almost always useless. If you are building a streaming video server or a backup target that takes a single backup stream at a time, okay, there is a time where throughput can matter. But it is rare.

                  There is a reason why IOPS is the only number generally used when talking storage performance - because it is the only one of significance. It is only because of this that things like SANs have any hope of working as they have terribly slow throughput bottlenecks between them and the servers that they support. But most businesses can run from iSCSI over 1GigE wires. Why? Because it is the IOPS that matter, rarely the throughput.

                  If you look at throughput numbers, you will come up with some crazily dangerous comparisons that will lead you in some terrible decision making directions.

    • JaredBusch @scottalanmiller

                    @scottalanmiller said:

                    There is a reason why IOPS is the only number generally used when talking storage performance - because it is the only one of significance.

                    I've not even bothered to count how many times he has been told that in this thread yet he keeps not listening.

    • scottalanmiller @JaredBusch

                      @JaredBusch said:

                      @scottalanmiller said:

                      There is a reason why IOPS is the only number generally used when talking storage performance - because it is the only one of significance.

                      I've not even bothered to count how many times he has been told that in this thread yet he keeps not listening.

                      Speaking to him, he was confused: he thought that GB/m was an IOPS measurement and did not realize that IOPS is itself the unit.

    • scottalanmiller

                        So how to figure out the IOPS of an array? Start with getting the IOPS numbers from the drives. If we are dealing with 50,000 Read IOPS and 100,000 Write IOPS you just use the formula from the below link and you will get the rough IOPS number:

                        RAID Performance Math

                        Now you have to deal with the total capacity of the RAID controller: you can only push so many IOPS before the RAID controller itself becomes the bottleneck.
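The kind of formula the linked post describes can be sketched roughly as follows. This is a minimal illustration, not the linked article's exact math: the single per-drive IOPS figure, the 80/20 read/write split, and the standard write penalties (4 for RAID 5, 2 for RAID 10) are all assumptions for the example:

```python
# Rough blended-IOPS estimate for a RAID array using the common
# write-penalty formula: raw IOPS divided by the penalty-weighted mix.

def array_iops(n_disks, drive_iops, read_fraction, write_penalty):
    raw = n_disks * drive_iops  # total raw IOPS across all spindles
    return raw / (read_fraction + (1 - read_fraction) * write_penalty)

# Hypothetical example: 12 drives at 10,000 IOPS each, 80% reads, RAID 5.
print(array_iops(12, 10_000, 0.8, 4))  # 75000.0 blended IOPS
```

The write penalty models the extra back-end operations each front-end write costs (read-modify-write of data plus parity on RAID 5), which is why heavier write mixes drag the blended number down sharply.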

    • scottalanmiller

                          This article helps explain the issues with the RAID controller limits...

                          http://mangolassi.it/topic/2072/testing-the-limits-of-the-dell-h710-raid-controller-with-ssd

    • DustinB3403

                            OK, so I read through the articles provided. Thank you @scottalanmiller.

                            In doing the math, I have one remaining question: should I use the QD1 or QD32 read/write performance markers?

                            I did the math with the QD1 IOPS of 10000/40000 respectively, with an 80/20 ratio as a baseline. (I'm sure this needs to be verified with DPACK though.)

                            But if I do the math with QD32, I'm guessing I'll get a dramatically different number, since QD32 is 197,000/88,000 respectively.
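One way (among several) to fold separate per-drive read and write IOPS into a single array estimate is a weighted harmonic mean with the RAID 5 write penalty applied to the write term. This is only a sketch of that particular blending choice, using the QD1 and QD32 per-drive figures quoted above and the assumed 80/20 mix; a different blending method will give different numbers, which is part of why rough figures in threads like this rarely line up exactly:

```python
# Blend separate read/write per-drive IOPS into a RAID 5 array estimate.
# write_penalty=4 models RAID 5's read-modify-write cost on each write.

def raid5_blended_iops(n_disks, read_iops, write_iops,
                       read_fraction=0.8, write_penalty=4):
    write_fraction = 1 - read_fraction
    # Weighted harmonic mean of per-drive service rates, penalty included.
    per_drive = 1 / (read_fraction / read_iops +
                     write_fraction * write_penalty / write_iops)
    return n_disks * per_drive

low = raid5_blended_iops(12, 10_000, 40_000)    # QD1 per-drive figures
high = raid5_blended_iops(12, 197_000, 88_000)  # QD32 per-drive figures
print(low, high)
```

As the replies below note, a real workload lands somewhere between the two queue-depth extremes, so the useful output here is the range, not either endpoint.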

    • Dashrender

                              How many IOPS do you need? Assuming you're not in an IOPS deficit now, DPACK should tell you that too. But if you are in a deficit today, it will be much harder to know.

                              If you have the time and resources, you could try throwing an SSD in a system, loading it up with your workload, and seeing what DPACK tells you then...

    • DustinB3403

                                With QD1 on a RAID 5 12 disk array I'm looking at 48,000 IOPS.

                                If I use QD32 on a RAID 5 12 disk array I'm looking at 525,600 IOPS.

                                Can someone clarify this?

    • scottalanmiller @DustinB3403

                                  @DustinB3403 said:

                                  In doing the math, I have 1 remaining question. Should I use QD1 or QD32 read/write performance markers?

                                   That's tough. You can only determine that from measuring your actual usage, and in reality you need a big blend of numbers. I'd start by getting a number from both to provide a range of reasonable possibilities.

    • scottalanmiller @DustinB3403

                                    @DustinB3403 said:

                                    With QD1 on a RAID 5 12 disk array I'm looking at 48,000 IOPS.

                                    If I use QD32 on a RAID 5 12 disk array I'm looking at 525,600 IOPS.

                                    Can someone clarify this?

                                    You'll likely be somewhere very much in the middle. These are kind of the best case and worst case numbers on a very large curve.

    • DustinB3403 @Dashrender

                                       @Dashrender We likely need very few IOPS. We don't run any intensive applications, mostly just network shares and domain functions.

    • scottalanmiller

                                         One of the secrets of IOPS, and of many, many things in IT, is that there is no real answer in terms of a single number; the numbers are rough estimates. What you should actually get is a big curve or 3D surface that represents IOPS under different conditions.

    • DustinB3403

                                          Should I really be concerned about this?

                                           The goal is to run VMs and host network share data off of two XenServer hosts. Or should I simply state that "on the low end we'd see 48K IOPS, up to 525K, which is still mountains faster than the SR disks we have now in standalone servers"?

    • Dashrender @DustinB3403

                                            @DustinB3403 said:

                                             @Dashrender We likely need very few IOPS. We don't run any intensive applications, mostly just network shares and domain functions.

                                             What is your current IOPS availability? Are you having drive related issues today? If not, then as long as you match the current number (nearly impossible not to blow it away with SSDs) you should be golden.

                                             If I had 8 SAS 15K drives in RAID 10 today and replaced them with 8 SSDs in RAID 5, I personally wouldn't even look at the IOPS numbers, as they would be at least 10x, and probably 100-1000x, more than before.
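A ballpark version of that comparison, using the standard write-penalty formula: the per-drive figures here (175 IOPS for a 15K SAS disk, 10,000 IOPS for a SATA SSD) are assumed typical values for illustration, not measurements from any specific drive:

```python
# Compare the two hypothetical arrays described above: 8x 15K SAS in
# RAID 10 (write penalty 2) vs 8x SSD in RAID 5 (write penalty 4).

def array_iops(n_disks, drive_iops, read_fraction, write_penalty):
    return n_disks * drive_iops / (read_fraction + (1 - read_fraction) * write_penalty)

sas_raid10 = array_iops(8, 175, 0.8, 2)      # ~1,167 blended IOPS
ssd_raid5 = array_iops(8, 10_000, 0.8, 4)    # 50,000 blended IOPS
print(sas_raid10, ssd_raid5, ssd_raid5 / sas_raid10)
```

Even with RAID 5's heavier write penalty working against the SSDs, the SSD array comes out roughly 40x ahead under these assumptions, which is the point being made: the replacement wins by such a margin that precise IOPS math stops mattering.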
