    ZFS Based Storage for Medium VMware Workload

    SAM-SD
    zfs storage virtualization filesystems raid
    • scottalanmiller @Dashrender

      @Dashrender said:

      Any reason that all of these solutions couldn't be done with XByte purchased systems?

      Only that he is an HP shop and they are Dell.

      • Dashrender @scottalanmiller

        @scottalanmiller said:

        @Dashrender said:

        Any reason that all of these solutions couldn't be done with XByte purchased systems?

        Only that he is an HP shop and they are Dell.

        Is there an HP equivalent?

        • scottalanmiller @Dashrender

          @Dashrender said:

          @scottalanmiller said:

          @Dashrender said:

          Any reason that all of these solutions couldn't be done with XByte purchased systems?

          Only that he is an HP shop and they are Dell.

          Is there an HP equivalent?

          Nearly everything in their lineups has an equivalent that is close on the other side.

          • Dashrender @scottalanmiller

            @scottalanmiller said:

            @Dashrender said:

            @scottalanmiller said:

            @Dashrender said:

            Any reason that all of these solutions couldn't be done with XByte purchased systems?

            Only that he is an HP shop and they are Dell.

            Is there an HP equivalent?

            Nearly everything in their lineups has an equivalent that is close on the other side.

            Well, I mainly meant the secondary market/refurbished area. I knew that HP and Dell have mostly equivalent server lineups.

            • scottalanmiller

              Oh, I see. ServerMonkey would be a place to start.

              • donaldlandru @scottalanmiller

                @scottalanmiller said:

                @donaldlandru said:

                @scottalanmiller said:

                @donaldlandru said:

                Which I can add for as cheap as $5k with RED drives or $10k with Seagate SAS drives.

                WD makes RE and Red drives. Don't call them RED, it is hard to tell if you are meaning to say RE or Red. The Red Pro and SE drives fall between the Red and the RE drives in the lineup. Red and RE drives are not related. RE comes in SAS, Red is SATA only.

                It's all in a name. When I say REDs I am referring to the WD Red 1TB NAS hard drive, 2.5" WD10JFCX. When I say Seagate I am referring to the Seagate Savvio 10K.5 900 GB 10000 RPM SAS 6 Gb/s ST9900805SS.

                Edit: I don't always use WD NAS (RED) drives, but when I do I use the WDIDLE tool to fix that problem

                Boy those have gotten cheap!

                http://www.newegg.com/Product/Product.aspx?Item=N82E16822236600

                But they will be terribly slow. Those are 5400 RPM SATA drives.

                This is why I made my comment earlier about not using the "RED" drives; they don't have the Pro in a 2.5" form factor. However, the Savvios are twice the speed at 4x the price.

                • donaldlandru @scottalanmiller

                  @scottalanmiller said:

                  @donaldlandru said:

                  I agree we do lack true HA on the production side, as there is a single weak link (one storage array); the solution here depends on our move to Office 365, as that would take most of the operations load off of the network and change the requirements completely.

                  Good deal. We use O365, it is mostly great.

                  Hopefully I can sell them on Office 365 this time around (third time's a charm), but that is for a different thread.

                  • scottalanmiller

                    If you need an Office 365 partner, you know where to find one 😉

                    cough.... cough.... @ntg

                    • bhershen

                      There is a software-defined (ZFS) solution, Nexenta, that is specifically organized for enterprise use and includes comprehensive hardware and software support. Using a Supermicro reference architecture, a single head node (HA optional), 200 GB of dedicated L2ARC & ZIL cache, RAID-Z2, a 3-year NBD on-site hardware warranty, and 3-year 24x7 software support would run under your $15K budget.

                      Nexenta includes:
                      • Certified for Virtual/VDI/Cluster/Big Data/Cloud/Archive & Data Protection environments
                      • Standard functionality includes hybrid storage pools (HDD, SSD, flash), auto-tiering, in-line compression/de-duplication, replication, and unlimited snapshots; only 2 plug-in options if required: High Availability and FC support
                      • Uses scalable read & write cache to accelerate read/write performance, leveraging low-cost spinning disk while also allowing 10K/15K/SSD pools to achieve demanding IOPS and throughput
                      • Unmatched data integrity: continuous integrity checking, built-in self-healing, 256-bit checksums with copy-on-write, seamless handling of intermittently faulty devices, and single/dual/triple-parity RAID or RAID 10
                      • Perpetual licensing with incremental capacity expansion; no replacement of core equipment, minimizing TCO and the cost of growth at a fraction of dedicated hardware solutions.

                      International Computer Concepts (www.ICC-USA.com) has been building high-performance/high-density compute & storage for commercial, government, research, and education customers and is a premier integrator of NexentaStor. For more info, I can be reached at:
                      Brian Hershenhouse, [email protected], or 877 422-8729 x109.

                      • scottalanmiller

                        I'm pretty sure he actually mentioned Nexenta in the original post. That was the SAM-SD option that he was considering.

                        • bhershen

                          Hi Scott,
                          Donald mentioned SM and referenced generic ZFS solutions (could be Oracle, OpenIndiana, FreeBSD, etc.), which have uncoordinated hardware, software, and support. Nexenta is packaged to compete with EMC, NetApp, etc. as primary storage in the commercial market.
                          If you would like to get an overview, please feel free to ping me.
                          Best.

                          • scottalanmiller

                            You are correct, sir, I must have imagined it.

                            • scottalanmiller

                              Following up on this one as I see it getting some traffic today. How did this project go, @donaldlandru?

                              • StorageNinja (Vendor) @donaldlandru

                                @donaldlandru said in ZFS Based Storage for Medium VMware Workload:

                                Ok, so a little background: the storage situation at my organization is the weakest link in our network. Currently we have a single HP MSA P2000 with 12 spindles (7200 rpm) serving two separate ESXi clusters.

                                This is not a lot of IOPS. The laptop I'm typing this on easily has 100x more IOPS than this disk configuration.
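
                                To put rough numbers on that (the per-spindle figure, the RAID 6 assumption, and the 70/30 mix below are rules of thumb, not measurements from this array):

                                ```python
                                # Back-of-envelope random IOPS for 12 x 7.2k spindles behind RAID 6.
                                SPINDLES = 12
                                IOPS_PER_SPINDLE = 75        # common planning estimate for a 7.2k drive
                                RAID6_WRITE_PENALTY = 6      # each logical write costs ~6 disk I/Os
                                READ_FRACTION = 0.7          # assumed 70/30 read/write mix

                                raw = SPINDLES * IOPS_PER_SPINDLE
                                effective = raw / (READ_FRACTION + (1 - READ_FRACTION) * RAID6_WRITE_PENALTY)

                                print(f"raw spindle IOPS: {raw}")                  # 900
                                print(f"effective at 70/30 mix: {effective:.0f}")  # ~360
                                ```

                                Either way you slice it, almost 150 VMs are sharing a few hundred effective IOPS.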

                                It is not uncommon for us to max out the disk I/O on 12 spindles sharing the load of almost 150 virtual machines, and everyone is on board that something needs to be changed.

                                Yep. Go all-flash.

                                Here is what the business cares about in a solution: something reliable that provides the resources the development environments need to operate effectively (read: we do not do performance testing in-house, as by its very nature it is very much your-mileage-may-vary depending on your deployment situation).

                                If you're a VMware shop doing QA testing, there are some workflows you can do with Linked Clones and Instant Clones (no I/O overhead, 400 ms to clone a VM with zero memory or disk, as it runs a journal log for both) to reduce disk load, speed up processes, and in general make everyone's life easier.
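
                                A linked clone is one starting point for that workflow. A minimal pyVmomi sketch, assuming an existing vCenter connection; the function and its arguments are illustrative, and the parent VM must already have a snapshot:

                                ```python
                                # Linked clone off an existing snapshot: the clone shares the parent's
                                # base disk, so creating it costs almost no space or I/O.
                                from pyVmomi import vim

                                def linked_clone(parent_vm, name, folder, pool):
                                    # Child delta disks backed by the parent's snapshot disk
                                    relocate = vim.vm.RelocateSpec(pool=pool,
                                                                   diskMoveType="createNewChildDiskBacking")
                                    spec = vim.vm.CloneSpec(location=relocate,
                                                            snapshot=parent_vm.snapshot.currentSnapshot,
                                                            powerOn=True)
                                    return parent_vm.CloneVM_Task(folder=folder, name=name, spec=spec)
                                ```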

                                In addition to the business requirements, I have added my own requirements that my boss agrees with and blesses.

                                1. Operations and Development must be on separate storage devices

                                This isn't necessary at 150 VMs. Just get something all-flash, and if it is that big of a deal that people are running I/O burners, get something that has QoS as an option to smack them down.

                                2. Storage systems must be built of business-class hardware (no RED drives -- although I would allow this in a future Veeam backup storage target)

                                Think hard before using Reds with Veeam. Reverse incrementals and roll-ups use random I/O, and your backup windows will hate you.
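
                                Rough math on why: a reverse incremental costs about three I/Os on the target per changed block (read the old block, write it into the rollback file, write the new block). Every input below is an illustrative assumption:

                                ```python
                                # Reverse-incremental backup window on a small NAS-drive target.
                                changed_gb = 200          # nightly changed data, assumed
                                block_kb = 512            # backup block size, assumed
                                ios_per_block = 3         # read old + write rollback + write new
                                drives = 4                # small RAID 10 target, assumed
                                iops_per_drive = 75       # random IOPS estimate for a 5400/7200 RPM Red

                                blocks = changed_gb * 1024 * 1024 // block_kb
                                total_ios = blocks * ios_per_block
                                hours = total_ios / (drives * iops_per_drive) / 3600
                                print(f"{total_ios:,} target I/Os -> ~{hours:.1f} h of pure random I/O")
                                ```

                                And it scales linearly: triple the change rate, or drop to a two-drive mirror, and the window blows out.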

                                Requirements for development storage

                                • 9+ TiB of usable storage
                                • Support a minimum of 1100 random IOPS (what our current system is peaking at)
                                • Disks must be in some kind of array (ZFS, RAID, mdadm, etc.)

                                Proposed solutions:

                                #1, a.k.a. the safe option
                                HP StoreVirtual 4530 with 12 TB (7.2k) spindles in RAID 6 -- this is our vendor recommendation. It is an HP Renew quote with 3 years of 5x9, next-day on-site support for ~$15,000.

                                Wait, is this a single-node StoreVirtual? Also, the IOPS on this will be awful (7.2k drives are slow).

                                Less performance than solution #2 out of the box
                                More expensive to upgrade later (additional shelves and drives at HP prices)
                                All used hardware
                                It's worse than that, as you have to buy not just HP parts but licensing too.

                                #2 ZFS solution, ~$10,000
                                24 spindles, 900 GB (7.2k SAS), in 12 mirrored vdevs

                                To my knowledge no one makes 900GB 7.2K SAS drives.

                                Based on Supermicro SC216E16 chassis
                                X9SRH-7F Motherboard
                                Intel E5-1620v2 CPU
                                64 GB of RAM
                                No L2ARC or ZIL planned

                                Then why the hell would you use ZFS?

                                Dual 10gig NICs

                                Pros
                                Better performance out of the box (twice the spindle count; see the back-of-envelope check below)
                                Non-vendor-specific parts mean upgrades require less investment
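
                                A back-of-envelope check on those pros against the stated requirements (75 IOPS per 7.2k spindle is a planning assumption): mirrored vdevs serve reads from both halves of each mirror, while every write lands on both disks of a vdev.

                                ```python
                                # 24 x 900 GB drives in 12 two-way mirrored vdevs: capacity and rough IOPS.
                                drives, vdevs = 24, 12
                                drive_bytes = 900e9              # 900 GB, decimal as marketed
                                iops_per_drive = 75              # planning assumption for a 7.2k spindle

                                usable_tib = vdevs * drive_bytes / 2**40   # mirrors halve raw capacity
                                read_iops = drives * iops_per_drive        # reads spread across both halves
                                write_iops = vdevs * iops_per_drive        # writes hit both disks per vdev

                                print(f"usable:     {usable_tib:.1f} TiB")   # ~9.8 TiB
                                print(f"read IOPS:  {read_iops}")            # 1800
                                print(f"write IOPS: {write_iops}")           # 900, vs. the 1100 IOPS target
                                ```

                                Capacity squeaks past the 9+ TiB requirement before ZFS overhead, but the pool only clears 1100 IOPS on read-heavy mixes.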

                                Cons
                                Alright, tear me apart, tell me I am wrong, or provide any other useful feedback. The biggest concerns I have exist on both platforms (drives fail, controllers fail, data goes bad, etc.) and have to be mitigated either way. That is what we have backups for. In my opinion the HP gets me the following things:

                                1. The "ability" to purchase a support contract
                                2. Next-day on-site of a tech or parts if needed

                                Be careful with NBD parts: a failure on Thursday afternoon can really mean Monday afternoon and still be within SLA, as it's based on when the fault was isolated.

                                With the $4000 saved by not buying the HP support contract I can buy a duplicate Supermicro system and a couple of extra hard drives, and have the same level of protection.

                                Note: this is my first time posting an actual give-me-feedback topic; I tried to include all the information I felt was relevant. If more is needed I can provide it.

                                You're a VMware shop; curious if you looked at using VSAN? You could go all-flash and get inline dedupe and compression, which makes all-flash cheaper than 10K drives at that point.

                                • StorageNinja (Vendor) @Dashrender

                                  @Dashrender If you put enterprise-grade gear in a REAL datacenter, it's scary how long it will run without failure. Now, if this stuff is going in some Nolo Telco dataslum, or his closet, yeah, stuff dies all the time.

                                  • StorageNinja (Vendor) @Dashrender

                                    @Dashrender said in ZFS Based Storage for Medium VMware Workload:

                                    @dafyre said:

                                    @donaldlandru said:

                                    The politics are likely to be harder to play as we just renewed our SnS for both Essentials and Essentials plus in January for three years.
                                    <snip>
                                    Another important piece of information with the local storage is that everything is based on 2.5" disks -- and all but two servers only have two bays each, getting any really kind of local storage without going external direct attached (non-shared) is going to be a challenge.

                                    He brings up a good point about the 2 bays and 2.5" drives... Do they even make 4 / 6 TB drives in 2.5" form yet?

                                    If not, would it be worth getting an external DAS shelf for each of the servers?

                                    It's been 15 years, but I've seen DAS shelves that can be split between two hosts. Assuming those are still made, and there are enough disk slots, that would save a small amount.

                                    They make flash drives in that size. Actual large-capacity, small-format magnetic drives? No.
                                    The bigger issue is that at this level of VM density you run out of IOPS before you run out of capacity.

                                    Shared DAS shelves, though, don't handle RAID; they are designed for some type of controller to manage them, handle disk locking, etc.

                                    BTW, you can limit IOPS on a VM in vSphere if you just have a noisy-neighbor problem:
                                    https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1038241
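
                                    The same per-disk limit from that KB can also be set programmatically. A minimal pyVmomi sketch; the vCenter details and VM name are placeholders:

                                    ```python
                                    # Set a per-disk IOPS limit on a VM -- the same knob as VMware KB 1038241.
                                    import ssl
                                    from pyVim.connect import SmartConnect, Disconnect
                                    from pyVmomi import vim

                                    ctx = ssl._create_unverified_context()        # lab convenience; use real certs
                                    si = SmartConnect(host="vcenter.example.com",           # placeholder
                                                      user="administrator@vsphere.local",   # placeholder
                                                      pwd="secret", sslContext=ctx)
                                    content = si.RetrieveContent()
                                    view = content.viewManager.CreateContainerView(
                                        content.rootFolder, [vim.VirtualMachine], True)
                                    vm = next(v for v in view.view if v.name == "noisy-dev-vm")  # placeholder

                                    spec = vim.vm.ConfigSpec(deviceChange=[])
                                    for dev in vm.config.hardware.device:
                                        if isinstance(dev, vim.vm.device.VirtualDisk):
                                            dev.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
                                                limit=1000)                   # cap at 1000 IOPS (-1 = unlimited)
                                            spec.deviceChange.append(vim.vm.device.VirtualDeviceSpec(
                                                operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
                                                device=dev))

                                    vm.ReconfigVM_Task(spec)                  # returns a task to monitor
                                    Disconnect(si)
                                    ```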

                                    • StorageNinja (Vendor) @scottalanmiller

                                      @scottalanmiller said in ZFS Based Storage for Medium VMware Workload:

                                      That $1200 number was based off of Essentials. Just saw that you have Essentials Plus. What is that for? Eliminating that will save you many thousands of dollars! This just went from a "little win" to a major one!

                                      Essentials Plus SnS includes 24/7 flat-rate support. Getting that on other platforms is a hell of a lot more expensive. It's ~$1200 a year (Essentials is only like $600, and its software-updates-only renewal is like $100).

                                      • StorageNinja (Vendor) @scottalanmiller

                                        @scottalanmiller

                                        VSAN is more than high availability.

                                        1. Dedupe and compression, combined with distributed erasure codes, make all-flash cheaper than hybrid in most cases (a rough sketch of the capacity math follows this list).

                                        2. Distributed RAID/RAIN, so you don't have to mirror mirrors (doubles capacity efficiency vs. a P4xxx-type design).

                                        3. On-the-fly ability to change availability. Have a test/dev workload you don't care about? FTT=0 and that sucker is RAID 0. It's moving to production? Change to FTT=1 and, without disruption, it will now be protected with either RAID 1 or distributed RAID 5 (all-flash). It's become really damn important? Change to FTT=2 and get a triple mirror or RAID 6. You can even adjust stripe width, etc. on the fly. Moving ephemeral workloads to FTT=0 saves a lot of space.

                                        4. Non-disruptive expansion (no RAID expansion limits; just add drives or hosts as you need).

                                        5. Self-healing. When you have 4 nodes and 1 node fails, it will use free capacity on the other hosts to re-protect mirrored data.
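
                                        A rough sketch of the capacity math behind points 1-3 (the 2:1 dedupe/compression ratio is an illustrative assumption and varies by data set):

                                        ```python
                                        # Raw capacity needed to hold 10 TB of VM data under different VSAN policies.
                                        data_tb = 10.0
                                        dedupe_compress = 2.0      # assumed 2:1 space efficiency, data-set dependent
                                        policies = {
                                            "FTT=0 (RAID 0)":          1.0,    # no protection
                                            "FTT=1 mirror (RAID 1)":   2.0,    # two full copies
                                            "FTT=1 erasure (RAID 5)":  4 / 3,  # 3+1 stripe, all-flash only
                                            "FTT=2 erasure (RAID 6)":  1.5,    # 4+2 stripe, all-flash only
                                        }
                                        for name, overhead in policies.items():
                                            print(f"{name:26s} -> {data_tb * overhead / dedupe_compress:4.1f} TB raw flash")
                                        ```

                                        That is how all-flash with FTT=1 erasure coding (~6.7 TB raw here) can undercut a hybrid mirror that needs 20 TB raw and gets no dedupe.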

                                        It may be overkill for what he's doing, but the product has some opex benefits (no managing LUNs, you get 24/7 support with it, etc.).

                                        Also, I saw a comment about 2 hosts. You can technically deploy VSAN in a 2-node configuration (it just needs a witness VM to run elsewhere).

                                        One thing I will say is that for a shop with 100+ VMs and a dev-heavy environment, you're bound to be hemorrhaging money from wasted labor, with people waiting on things because of how IOPS-starved this environment is. To show the value of "fast" storage, grab a good flash drive, put it in one of the hosts (behind a proper Smart Array or HBA, not some B-model garbage), put a few VMs on it, and see if people notice the difference. They'll be fighting each other to get you budget for more flash...
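
                                        If you run that demo, fio is the right tool for the before/after numbers, but even a crude probe makes the point. A dependency-free sketch to run inside a Linux VM on each datastore (the test file path is a placeholder):

                                        ```python
                                        # Crude 4K random-read probe; run once on spinning disk, once on flash.
                                        import mmap, os, random, time

                                        PATH = "/data/testfile.bin"   # placeholder: a multi-GB file on the datastore
                                        BLOCK, SAMPLES = 4096, 2000

                                        fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)   # bypass the Linux page cache
                                        size = os.fstat(fd).st_size
                                        buf = mmap.mmap(-1, BLOCK)                      # page-aligned buffer for O_DIRECT

                                        start = time.perf_counter()
                                        for _ in range(SAMPLES):
                                            os.lseek(fd, random.randrange(size // BLOCK) * BLOCK, os.SEEK_SET)
                                            os.readv(fd, [buf])                         # one aligned 4 KiB random read
                                        elapsed = time.perf_counter() - start
                                        os.close(fd)

                                        print(f"~{SAMPLES / elapsed:.0f} random-read IOPS, "
                                              f"{elapsed / SAMPLES * 1e6:.0f} us average latency")
                                        ```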

                                        • StorageNinja (Vendor) @donaldlandru

                                          @donaldlandru Cuts licensing for VSAN in half (single CPU)

                                          • StorageNinja (Vendor) @scottalanmiller

                                            @scottalanmiller said in ZFS Based Storage for Medium VMware Workload:

                                            WD makes RE and Red drives. Don't call them RED, it is hard to tell if you are meaning to say RE or Red. The Red Pro and SE drives fall between the Red and the RE drives in the lineup. Red and RE drives are not related. RE comes in SAS, Red is SATA only.

                                            They just re-branded the consumer side (IronWolf, and brought back Barracuda with a FireCuda cache drive). I was looking at them and realized they have like a .05 DWPD rating, which actually makes their write endurance worse than the cheapest of enterprise TLC drives.
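
                                            For context on what 0.05 DWPD means day to day (the drive size below is an illustrative assumption):

                                            ```python
                                            # What 0.05 DWPD means in practice over a 5-year warranty period.
                                            capacity_tb = 4.0        # illustrative drive size
                                            dwpd = 0.05
                                            years = 5

                                            daily_writes_gb = capacity_tb * 1000 * dwpd
                                            lifetime_tbw = capacity_tb * dwpd * 365 * years
                                            print(f"{daily_writes_gb:.0f} GB/day sustained, "
                                                  f"~{lifetime_tbw:.0f} TBW over {years} years")
                                            ```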
