    The VSA is the Ugly Result of Legacy Vendor Lock-Out

Category: Self Promotion
Tags: scale, scale hc3, scale blog, hyperconvergence
• travisdh1 @Deleted74295

      @Breffni-Potter said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:

      @Aconboy said

In HC3, we not only eliminated the SAN, we did so without using a VSA at all, so those "reserved" resources go directly into actually running VMs, all the while streamlining the IO path so that there is a dramatic reduction in the number of hops it takes to do things like change a period to a comma.

      I want one NOW 😉

      ftfy

• thwr

        Just curious: How exactly does your product differ from StarWind Virtual SAN in this context?

• scottalanmiller @thwr

          @thwr said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:

          Just curious: How exactly does your product differ from StarWind Virtual SAN in this context?

The really, really high-level technical difference is that the VSA / VSAN approach is a layer on top of the hypervisor that has to run as a guest workload. The Scale system puts the storage layer in the same spot that a normal filesystem/LVM would be. It is part of the hypervisor natively and acts just like a filesystem or DRBD. It isn't that it has zero overhead, but it has extremely little, as it's just part of the hypervisor itself.

StarWind will vary heavily from ESXi to Hyper-V, as it requires a full VM on one and not on the other.
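
To make the hop count concrete, here is a rough sketch of a guest write under the two models (purely illustrative; the stage names are my own simplification, not taken from Scale or StarWind documentation):

```python
# Illustrative comparison of a guest write path: VSA model vs. a storage
# layer that lives inside the hypervisor. Stage names are generic.

VSA_PATH = [
    "guest VM issues write",
    "hypervisor virtual SCSI/virtio layer",
    "forwarded to the VSA guest VM (iSCSI/NFS over the virtual switch)",
    "VSA's own guest storage stack",
    "back down through the hypervisor's physical disk driver",
    "physical disk",
]

NATIVE_PATH = [
    "guest VM issues write",
    "hypervisor virtual SCSI/virtio layer",
    "in-hypervisor storage layer (filesystem/DRBD-like)",
    "physical disk",
]

print(f"VSA model:    {len(VSA_PATH)} hops")
print(f"Native model: {len(NATIVE_PATH)} hops")
```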

• thwr @scottalanmiller

            @scottalanmiller said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:

            @thwr said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:

            Just curious: How exactly does your product differ from StarWind Virtual SAN in this context?

The really, really high-level technical difference is that the VSA / VSAN approach is a layer on top of the hypervisor that has to run as a guest workload. The Scale system puts the storage layer in the same spot that a normal filesystem/LVM would be. It is part of the hypervisor natively and acts just like a filesystem or DRBD. It isn't that it has zero overhead, but it has extremely little, as it's just part of the hypervisor itself.

StarWind will vary heavily from ESXi to Hyper-V, as it requires a full VM on one and not on the other.

            Ah ok, thx

• Aconboy @thwr

@thwr @breffni-potter @travisdh1 - we have just released our 1150 platform, which brings all features and functionality, with both flash and spinning disk, in at a price point under $30k USD for a complete 3-node cluster.

• travisdh1 @Aconboy

                @Aconboy said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:

@thwr @breffni-potter @travisdh1 - we have just released our 1150 platform, which brings all features and functionality, with both flash and spinning disk, in at a price point under $30k USD for a complete 3-node cluster.

                Trust me, if we needed more than a single server I'd have a cluster!

• thwr @Aconboy

                  @Aconboy said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:

@thwr @breffni-potter @travisdh1 - we have just released our 1150 platform, which brings all features and functionality, with both flash and spinning disk, in at a price point under $30k USD for a complete 3-node cluster.

                  Could you give us some numbers? Like what's included in the 30k? Storage capacity, CPU cores, NICs, upgrade paths...

• Aconboy @thwr

@thwr Sure thing.
The 1150 ships with a baseline of 8 Broadwell cores per node (E5-2620 v4), upgradable to the E5-2640 v4 with 10 cores per node. It ships with 64 GB RAM per node, upgradable to 256 GB. Each node carries either a 480 GB, 960 GB, or 1.92 TB eMLC SSD plus three 1, 2, or 4 TB NL-SAS drives, and quad gigabit or quad 10 GbE NICs. All features and functionality are included (HA, DR, multi-site replication, up to 5,982 snapshots per VM, auto-tiering with HEAT staging and destaging, and automatic prioritization of workload IO, to name a few). All 1150 nodes can be joined with all other Scale node families and generations, both forward and back, so upgrade paths are not artificially limited.
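
For rough sizing, here is a quick back-of-the-envelope on cluster capacity from those drive options (my own arithmetic, not official Scale sizing; the 2x replication factor for "usable" is my assumption):

```python
# Back-of-the-envelope raw capacity for a 3-node 1150 cluster, using the
# per-node drive options listed above. The replication factor is an
# assumption for illustration, not a figure from Scale.
from itertools import product

NODES = 3
SSD_TB = [0.48, 0.96, 1.92]   # one eMLC SSD per node
HDD_TB = [1, 2, 4]            # NL-SAS drive size, three per node
HDDS_PER_NODE = 3
REPLICATION = 2               # assumed

for ssd, hdd in product(SSD_TB, HDD_TB):
    raw = NODES * (ssd + HDDS_PER_NODE * hdd)
    print(f"{ssd:4.2f} TB SSD + 3x {hdd} TB NL-SAS per node: "
          f"{raw:5.2f} TB raw, ~{raw / REPLICATION:5.2f} TB usable")
```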

• thwr @Aconboy

                      @Aconboy said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:

@thwr Sure thing.
The 1150 ships with a baseline of 8 Broadwell cores per node (E5-2620 v4), upgradable to the E5-2640 v4 with 10 cores per node. It ships with 64 GB RAM per node, upgradable to 256 GB. Each node carries either a 480 GB, 960 GB, or 1.92 TB eMLC SSD plus three 1, 2, or 4 TB NL-SAS drives, and quad gigabit or quad 10 GbE NICs. All features and functionality are included (HA, DR, multi-site replication, up to 5,982 snapshots per VM, auto-tiering with HEAT staging and destaging, and automatic prioritization of workload IO, to name a few). All 1150 nodes can be joined with all other Scale node families and generations, both forward and back, so upgrade paths are not artificially limited.

Sounds good. What about deduplication / compression? For example, I've got a small 3-node Hyper-V cluster right now with roughly 15 TB of (more or less hot) storage. 70% of the VMs are 2008 R2 (will be upgraded to 2016 next year), the rest is Linux and BSD.

• scottalanmiller

No dedupe or compression on the Scale storage. But you can always do that at a higher layer, with the OS or whatever, if you need it. That works in most cases.

• KOOLER Vendor @scottalanmiller

                          @scottalanmiller said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:

No dedupe or compression on the Scale storage. But you can always do that at a higher layer, with the OS or whatever, if you need it. That works in most cases.

Right! Windows Server has dedupe now, so a VM with WS2012R2 will do the trick.

• KOOLER Vendor @travisdh1

                            @travisdh1 said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:

Alright, I have to ask. Is StarWind able to get hardware-level drive access like this in Hyper-V? @KOOLER (sorry, forgetting the others around here with StarWind.)

On Hyper-V we'll run a mix of kernel-mode drivers and user-land services, and we'll get direct access to the hardware.

On VMware we'll run through the hypervisor and will "talk" eventually to a VMDK as the data container.

• KOOLER Vendor @travisdh1

                              @travisdh1 said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:

                              @Breffni-Potter said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:

Is this really the case? I'm sceptical that a VMware or Hyper-V or even a XenServer-based system would have that huge a difference in performance requirements compared with a Scale system.

                              "24 vCores and up to 300GB RAM (depending on the vendor) just to power the VSA’s and boot themselves vs HES using a fraction of a core per node and 6GB RAM total. Efficiency matters."

                              Is this genuine or is it a flippant example? If it's genuine...shut up and take my money.

From StarWind's LSFS FAQ:
"How much RAM do I need for LSFS device to function properly?
4.6 MB of RAM per 1 GB of LSFS device with disabled deduplication,
7.6 MB of RAM per 1 GB of LSFS device with enabled deduplication."

So, yeah, it could easily eat up that much RAM: ~7.6 GB of RAM per TB of storage.

                              I didn't spot the CPU recommendation, but I know it's beefy.

You don't always use LSFS with StarWind.

And if you use LSFS, you don't always enable dedupe.

And we're offloading hash tables to NVMe flash now, so the upcoming update will have ZERO overhead for dedupe.

Supported combinations are:

1. Flash for capacity and RAM for hash tables => FAAAAAAAAAST !!

2. Spinning disk for capacity and NVMe flash for hash tables => somewhat slower, but that's because of the spinning disk, of course.
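
For anyone who wants to sanity-check those FAQ numbers against a real box (say, the 15 TB Hyper-V cluster mentioned earlier), the arithmetic is simple enough (my own worked example, not StarWind sizing guidance):

```python
# RAM overhead for a StarWind LSFS device, using the figures quoted from
# the LSFS FAQ above: 4.6 MB/GB without dedupe, 7.6 MB/GB with dedupe.

def lsfs_ram_gb(device_tb: float, dedupe: bool) -> float:
    mb_per_gb = 7.6 if dedupe else 4.6
    return device_tb * 1024 * mb_per_gb / 1024  # MB -> GB

for tb in (1, 15):
    print(f"{tb:>2} TB LSFS device: "
          f"{lsfs_ram_gb(tb, dedupe=False):5.1f} GB RAM without dedupe, "
          f"{lsfs_ram_gb(tb, dedupe=True):5.1f} GB with dedupe")
```

Which is where the "~7.6 GB of RAM per TB" rule of thumb comes from.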

• travisdh1

                                Thanks @KOOLER.

• StorageNinja Vendor @KOOLER

@KOOLER The limits, if I recall, are 1 TB max file size, no virtual machines, post-process only, and a 32 KB block size (but variable block at least, right?). 2016 should raise the limit.

The advantage of doing data reduction on the back end is that you can dedupe out common application and OS files between virtual machines. That said, flash is so cheap (~55 cents per GB for enterprise-grade storage) that throwing hardware at the problem has its advantages...
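
To put the "throw hardware at it" argument in numbers (my own arithmetic, using the ~55 cents/GB figure above; the 2:1 dedupe ratio is a purely hypothetical assumption):

```python
# Rough cost comparison: buying raw enterprise flash vs. relying on dedupe.
# The $0.55/GB price is from the post above; the dedupe ratio is assumed.

FLASH_USD_PER_GB = 0.55
DEDUPE_RATIO = 2.0        # assumed; varies wildly by workload

def flash_cost_usd(capacity_tb: float) -> float:
    return capacity_tb * 1024 * FLASH_USD_PER_GB

logical_tb = 15           # e.g. the 15 TB cluster mentioned earlier
print(f"No dedupe:  {logical_tb} TB of flash ~= ${flash_cost_usd(logical_tb):,.0f}")
print(f"2:1 dedupe: {logical_tb / DEDUPE_RATIO:.1f} TB of flash ~= "
      f"${flash_cost_usd(logical_tb / DEDUPE_RATIO):,.0f}")
```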

• StorageNinja Vendor

"They did this for the sole reason that this was the only way to continue providing their solutions based on the legacy vendors and their lock-out and lack of access."

Not sure where this information came from.
PernixData, SanDisk's FlashSoft, and ScaleIO have all used kernel modules with vSphere...

The reason these vendors use VSAs is a combination of factors, the largest of which is that writing kernel code is hard...
