    Help choosing replacement Hyper-V host machines and connected storage

    IT Discussion
    storage virtualization hyper-v
    • scottalanmiller @ryan from xbyte

      @ryan-from-xbyte said:

      If you don't need CPU performance and only need storage, I would recommend the R510. If you need the spindles, then the 24x2.5" R720xd would be a good fit. For raw capacity, the R510 will give you a cheaper option.

      For storage capacity the R510 could definitely do it. I think that the R720xd with the 12x LFF drive option is probably where he needs to be. Enough drive capacity to do the 16TB usable and the improved CPU performance without breaking the bank.

      • JohnFromSTL @scottalanmiller

        @scottalanmiller said:

        @ryan-from-xbyte said:

        You don't even need to go with an R720xd when compared to the R910. You can go with an R620 and get 10x 2.5" drives. Cluster a couple of those together and you are far better off than with an R910, and you will save a fortune on power.

        He's going to need LFF drives, I think, for this. He needs 16TB per node, from what I can tell, which is pretty big. If we have 2TB NL-SAS drives in RAID 6 that will require ten LFF bays just to hit the 16TB usable number.

        I could probably shave 2-3TB off the SQL and Oracle VMs. I've included more just in case we need the space as databases are migrated from one version to another.

        • scottalanmiller @JohnFromSTL

          @JohnFromSTL said:

          I could probably shave 2-3TB off the SQL and Oracle VMs. I've included more just in case we need the space as databases are migrated from one version to another.

          You don't want to cut these things too closely.

          • scottalanmiller

            One thing we have not considered is database storage performance. Yes, we can get the capacity we need with NL-SAS or even SATA (ugh) on RAID 6, and that will be pretty decent for read performance, but write performance will be really weak, especially for a database.

            • Reid Cooper

              What about 10K SAS drives? Are any available that are big enough for that? What about 1.8TB 10K SAS?

              • StrongBad

                Moving from 2TB drives up to 4TB RE drives won't get you 10K RPM spindles, but the move from RAID 6 to RAID 10 will help a lot with database performance.

                http://www.amazon.com/SAS-Enterprise-Hard-Drive-WD4001FYYG/dp/B0090UGQ2C

                $203 for a 4TB RE SAS drive.

                • scottalanmiller

                  And for databases you will definitely want SAS over SATA, the access patterns heavily favour the SAS protocol. You could see as much as a 20% difference in performance between the protocols alone.

                  RAID 10 will at least double the write performance over RAID 6 for database writes in nearly all cases. As these are probably going to be twelve-bay, not ten-bay, servers, populating all twelve bays will improve storage performance too.
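                  The write-penalty arithmetic behind that comparison can be sketched quickly. This is a back-of-envelope model; the ~75 random IOPS per 7,200 RPM spindle figure is an assumed rule of thumb, and controller write cache will change the real-world numbers:

```python
# Back-of-envelope random-write IOPS for a 12-bay array.
# Standard RAID write penalties: RAID 10 costs 2 physical writes
# per logical write, RAID 6 costs 6.

def raid_write_iops(drives: int, iops_per_drive: float, penalty: int) -> float:
    """Aggregate random-write IOPS after the RAID write penalty."""
    return drives * iops_per_drive / penalty

NL_SAS_IOPS = 75  # assumed figure for a 7,200 RPM NL-SAS drive

print(raid_write_iops(12, NL_SAS_IOPS, penalty=2))  # RAID 10: 450.0
print(raid_write_iops(12, NL_SAS_IOPS, penalty=6))  # RAID 6:  150.0
```

                  In this naive model the gap is actually about 3:1, so "at least double" is the conservative way to put it.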

                  • scottalanmiller

                    With RAID 10 and twelve bays you would get:

                    2TB Drives: 12TB Usable
                    3TB Drives: 18TB Usable
                    4TB Drives: 24TB Usable

                    So likely 3TB drives will make sense as that would be a nice amount of extra overhead.
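                    Those numbers are just the standard RAID arithmetic; a quick sketch (sizes in marketed TB, so formatted capacity will come in somewhat lower):

```python
# Usable capacity in a 12-bay chassis: RAID 10 keeps half the raw
# capacity; RAID 6 loses two drives' worth to parity.

def raid10_usable(drives: int, size_tb: int) -> int:
    return drives // 2 * size_tb

def raid6_usable(drives: int, size_tb: int) -> int:
    return (drives - 2) * size_tb

for size in (2, 3, 4):
    print(f"{size}TB drives: RAID 10 = {raid10_usable(12, size)}TB, "
          f"RAID 6 = {raid6_usable(12, size)}TB")
# RAID 10 gives 12TB, 18TB, and 24TB usable, matching the list above
```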

                    • StrongBad

                      $280 for genuine Dell 4TB NL-SAS with the tray and everything.

                      • scottalanmiller

                        What is your current setup for storage? How many IOPS do you have available to your systems today?

                        • StrongBad

                          $350 for the 3TB NL-SAS drives, so I'm thinking it would be better to just stick with the 4TB ones.

                          • Reid Cooper

                            Okay then, I guess the smaller 3TB drives are just a silly choice. I can only imagine that this is because they are in short supply or something.

                            • StrongBad

                              So I'm bored and looking into drive options. Here is another one...

                              1TB NL-SAS 2.5" drive. This would work in the other R720xd with the 25x 2.5" bays. At $260 each they would more than double the storage cost, but they would let you chase performance by having lots of spindles. Even so, I think RAID 6 is the only option that fits enough capacity into the chassis, so other than more spindles it does not work out very well. And they are still just 7,200 RPM NL-SAS, so not super fast.

                              • StrongBad

                                Here is the more interesting small drive option: 900GB 10K SAS 2.5" for $280. You would need even more of these and they are not cheap per GB but they are a lot faster than the NL-SAS options.

                                25 of these in an R720xd would be $7,000 just for the drives in one of the two servers. So that is the entire budget just for drives. With RAID 10 you could get 10.8TB, so not enough to even consider it. RAID 6 would be the only option and that would be 20.7TB which is plenty. So you could use one of the drives as a hot spare or buy a few fewer drives to save money, but then you would be losing performance again.
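                                The cost and capacity figures for the 900GB option work out as follows (a sketch using the drive count and prices quoted in this thread):

```python
# 25x 900GB 10K SAS drives at $280 each in an R720xd.
drives, price_usd, size_tb = 25, 280, 0.9

total_cost = drives * price_usd           # drive cost for one server
raid10_tb = (drives - 1) // 2 * size_tb   # RAID 10 needs an even drive count
raid6_tb = (drives - 2) * size_tb         # two drives' worth lost to parity

print(f"${total_cost}, RAID 10: {raid10_tb:.1f}TB, RAID 6: {raid6_tb:.1f}TB")
# $7000, RAID 10: 10.8TB, RAID 6: 20.7TB
```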

                                • StrongBad

                                  1.8TB 10K SAS 2.5" drives. This would fix the RAID 10 issue, but it is NOT cheap. That's not a good option.

                                  • StrongBad

                                    Okay, I think that we have a pretty good round-up of the options at this point. Until we have more information from the OP (more budget, new requirements, or specific disk requirements), this seems to be the consensus:

                                    Solution 1: The bare-bones, cost-saving option: a two-node cluster of Dell R510s or R720xds, each with 8x 4TB NL-SAS drives in RAID 10 (adding drives for performance as needed, up to 12). The 4TB 3.5" drives are just too cheap not to use, and RAID 10 probably makes them reasonably fast even though they are NL-SAS rather than 10K. Use Hyper-V and StarWind to do the clustering and failover. Might be able to come in somewhere around the assumed budget limits.

                                    Solution 2: The expensive but easy approach: Scale, with three nodes and everything included (hyperconverged) in a single package. A fraction of the work to set up or maintain. Will grow easily in the future. Likely far more expensive than the OP can justify.

                                    • Reid Cooper

                                      I agree, that seems to be where we are. RAID 10 makes sense since that workload is almost all database, and there don't seem to be affordable 10K SAS drive options, so we are a bit stuck there. There are not many options with those kinds of budgetary constraints; kind of just have to do what has to be done.

                                      • JohnFromSTL @scottalanmiller

                                        @scottalanmiller said:

                                        What is your current setup for storage? How many IOPS do you have available to your systems today?

                                        I haven't obtained those numbers yet from our current servers.

                                        I neglected to mention that we are using physical servers for SQL 2008 R2 and SQL 2012.

                                        SQL 2008 R2:
                                        PowerEdge 2950 Gen-III
                                        CPU: (2) Xeon E5450 @ 3.00GHz, 2993 MHz
                                        RAM: 32GB
                                        PERC 6/i, RAID-5
                                        (4) x 146.8GB Seagate Savvio 15K.2 SAS 15K RPM 16MB Cache 6Gb/s 2.5"
                                        (4) x 146.8GB Hitachi Ultrastar C10K147 SAS 10K RPM 16MB Cache 3Gb/s 2.5"

                                        SQL 2012:
                                        PowerEdge 2950, Gen-II
                                        CPU: (2) x Xeon X5365 @ 3.00GHz, 2993 MHz
                                        RAM: 32GB
                                        PERC 5/i, RAID-10
                                        (4) x 600GB Toshiba AL13SXB600N SAS 15K RPM 64MB Cache 6Gb/s 2.5"

                                        • JohnFromSTL @StrongBad

                                          @StrongBad said:

                                          Okay, I think that we have a pretty good round-up of the options at this point. Until we have more information from the OP (more budget, new requirements, or specific disk requirements), this seems to be the consensus:

                                          Solution 1: The bare-bones, cost-saving option: a two-node cluster of Dell R510s or R720xds, each with 8x 4TB NL-SAS drives in RAID 10 (adding drives for performance as needed, up to 12). The 4TB 3.5" drives are just too cheap not to use, and RAID 10 probably makes them reasonably fast even though they are NL-SAS rather than 10K. Use Hyper-V and StarWind to do the clustering and failover. Might be able to come in somewhere around the assumed budget limits.

                                          Solution 2: The expensive but easy approach: Scale, with three nodes and everything included (hyperconverged) in a single package. A fraction of the work to set up or maintain. Will grow easily in the future. Likely far more expensive than the OP can justify.

                                          Would the CPU options on the Dell R510 or R720xd provide enough horsepower for the VMs?

                                          • JohnFromSTL @scottalanmiller

                                            @scottalanmiller said:

                                            If the R910 is maxing out at, say, 20% CPU, then my guess is that an R720xd will do the trick to take over its load. The R720xd has two faster procs versus the R910's four. Not only are the individual procs faster, but moving from quad procs down to dual procs gains a small amount of efficiency on its own. So faster procs, more efficient proc usage, and half the total proc count... it seems like you will be okay.

                                            I apologize if I said R910, but I'm actually using 2 x PowerEdge R900 as Hyper-V host machines.
