    ServerBear Specs on Scale HC3

    IT Discussion
    Tags: scale, scale hc3, serverbear, performance, monitoring, centos 7, centos, linux, hc2000

    • coliver

      It's oddly satisfying to know that you can relate RAID to almost anything... we had a hamster thread not too long ago as well.

    • Alex Sage

        What are the specs on the servers?

        I am trying to think of how you would create a poor man's cluster....

        Couldn't I just create a XenServer Cluster with Xen Orchestra, and get the same thing?

    • scottalanmiller @Dashrender

          @Dashrender said:

          I don't know what RAIN is - so what is the usable storage?

          It's mirrored, so cut it in half 🙂

    • scottalanmiller @Dashrender

            @Dashrender said:

            @scottalanmiller said:

            @hobbit666 said:

            Not silly money, what sort of storage comes with the basic starter kit?

New units are right around the corner so I don't want to say for sure, but the HC1000 and HC2000 clusters both have 12x SATA drives in RAIN throughout the cluster. So the performance varies a little, but the IOPS are more or less what you can see above. The capacity is around 21.6TB raw.

            I don't know what RAIN is - so what is the usable storage?

RAIN is Redundant Array of Independent Nodes. The redundancy and/or mirroring (both in this case) are done at the node level, not at a disk pair level. So in many ways, like capacity, it acts just like RAID 10, but the performance balancing and survivability are different.
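
If you want the arithmetic spelled out, here is a rough sketch (assuming plain two-way mirroring, which is what applies here; other RAIN systems may keep more copies, and 21.6TB is just the raw figure quoted above):

```python
# Back-of-the-envelope usable capacity for replica-based RAIN. With plain
# mirroring every block is stored twice, so usable space is half of raw,
# just like RAID 10. This is only the rough math, not Scale's allocator.
def usable_tb(raw_tb: float, copies: int = 2) -> float:
    return raw_tb / copies

print(usable_tb(21.6))     # 10.8 TB usable from the ~21.6TB raw quoted above
print(usable_tb(21.6, 3))  # 7.2 TB if a system kept three copies instead
```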

    • scottalanmiller @Dashrender

              @Dashrender said:

              so what is the usable storage?

              Outside of the Scale world, there are RAIN systems that are not mirrored, so RAIN itself does not mean a specific utilization rate.

    • scottalanmiller @Alex Sage

                @aaronstuder said:

                What are the specs on the servers?

The HC2000 in question (we have the fastest one that there is, the very latest unit with the Winchesters, technically an HC2100) is built on Dell R430 single-CPU nodes with 64GB of RAM per node.

    • scottalanmiller @Alex Sage

                  @aaronstuder said:

                  Couldn't I just create a XenServer Cluster with Xen Orchestra, and get the same thing?

Nowhere close, I'm afraid. The thing that makes the Scale cluster important is the RAIN-based, scale-out RLS system on which it is built. XS with XO would give you the same basic "single pane of glass" interface stuff, but you aren't getting a Scale for the interface. It's mostly the storage, and secondarily the HA management, that make it valuable.

                  The RAIN storage here mirrors at the block level across the cluster providing a very high durability storage layer. And very importantly that's a native storage layer, in the kernel. There is no VSA here, this is a more advanced and more powerful approach. The storage layer runs right in the hypervisor kernel.

Then on top of that, there is integrated storage and compute management, so both layers know what the other layer is doing. Performance, fault, and capacity data are all transparent between the two. So this provides a level of storage performance, scale-out, and reliability that you cannot easily replicate on your own.

    • scottalanmiller

Now assuming that you do want to take the "poor man's" approach and try to build something like this on your own, of course there are tools for that. Using either KVM or Xen, you can add a scale-out storage layer to that. The two key ones on the market are Gluster and Ceph. You would have to build that component yourself. DRBD is great for two nodes but is not scale out; it's a good product but a different animal here. So you need to build your own durable, HA scale-out storage layer on which to run Xen or KVM, then layer the HA and management on top of that.
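
To give a feel for the DIY Gluster route, here is a rough sketch of the kind of commands involved in standing up a three-way replicated volume for VM storage (host names and brick paths are placeholders, and this just prints the commands rather than running them):

```python
# Sketch of a DIY scale-out storage layer: a replica-3 Gluster volume spanning
# three hypervisor nodes, so every file has a copy on every node. Names and
# paths below are made up, not a recipe for a real build.
NODES = ["kvm1", "kvm2", "kvm3"]                 # hypothetical KVM/Xen hosts
BRICK = "/data/glusterfs/vmstore/brick1"         # hypothetical brick path
VOLUME = "vmstore"

commands = [f"gluster peer probe {node}" for node in NODES[1:]]  # build the pool
bricks = " ".join(f"{node}:{BRICK}" for node in NODES)
commands.append(f"gluster volume create {VOLUME} replica 3 {bricks}")
commands.append(f"gluster volume start {VOLUME}")

for cmd in commands:
    print(cmd)
```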

    • Dashrender

                      @scottalanmiller said:

which it is built. XS with XO would give you the same basic "single pane of glass" interface stuff, but you aren't getting a Scale for

How do you have three nodes and only lose 50% of the storage, yet lose nothing when a node fails?

    • scottalanmiller @Dashrender

                        @Dashrender said:

                        @scottalanmiller said:

which it is built. XS with XO would give you the same basic "single pane of glass" interface stuff, but you aren't getting a Scale for

How do you have three nodes and only lose 50% of the storage, yet lose nothing when a node fails?

                        RAIN mirroring 🙂 In RAID terms, think of network RAID 1+.

    • scottalanmiller

                          The blocks are mirrored. No matter what you write to one node, it is replicated to at least one additional node. But no node is a "pair".
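
A toy illustration of that idea, just to show mirroring without fixed pairs (this is not Scale's actual placement algorithm):

```python
import itertools

# Toy block placement: every block gets two copies on two different nodes, but
# the pairings rotate across the whole cluster instead of tying node A to node B.
NODES = ["node1", "node2", "node3", "node4"]
PAIRS = list(itertools.combinations(NODES, 2))   # every possible node pairing

def place(block_id):
    return PAIRS[block_id % len(PAIRS)]

for block in range(8):
    primary, mirror = place(block)
    print(f"block {block}: {primary} + {mirror}")
# Each node ends up sharing copies with every other node, so when one node dies
# a surviving copy of each of its blocks exists somewhere else in the cluster.
```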

    • Dashrender @scottalanmiller

                            @scottalanmiller said:

                            @Dashrender said:

                            @scottalanmiller said:

which it is built. XS with XO would give you the same basic "single pane of glass" interface stuff, but you aren't getting a Scale for

How do you have three nodes and only lose 50% of the storage, yet lose nothing when a node fails?

                            RAIN mirroring 🙂 In RAID terms, think of network RAID 1+.

                            Is RAIN mirroring always 50% available? I know I know.. mirroring kinda implies that, but I have to ask the question anyway.

                            Also, what manages this? I assume this management is also mirrored out over the nodes?

    • scottalanmiller @Dashrender

                              @Dashrender said:

                              Is RAIN mirroring always 50% available? I know I know.. mirroring kinda implies that, but I have to ask the question anyway.

                              Yes, mirroring is always 50%. RAIN is not always mirroring.

    • scottalanmiller @Dashrender

                                @Dashrender said:

                                Also, what manages this? I assume this management is also mirrored out over the nodes?

                                Each node has a completely independent management system. You can log into any of the nodes to manage the cluster. It's a multi-master system. So fully HA there, not even a blip if a node goes down (unless you are looking at the interface for that specific node.)

    • dafyre @scottalanmiller

                                  @scottalanmiller said:

                                  The RAIN storage here mirrors at the block level across the cluster providing a very high durability storage layer. And very importantly that's a native storage layer, in the kernel. There is no VSA here, this is a more advanced and more powerful approach. The storage layer runs right in the hypervisor kernel.

                                  A couple of my colleagues told me they lost TWO nodes in their 4 node scale cluster... and everything kept right on trucking... and these units are the 3-year old models.

    • mlnews @dafyre

                                    @dafyre said:

                                    @scottalanmiller said:

                                    The RAIN storage here mirrors at the block level across the cluster providing a very high durability storage layer. And very importantly that's a native storage layer, in the kernel. There is no VSA here, this is a more advanced and more powerful approach. The storage layer runs right in the hypervisor kernel.

                                    A couple of my colleagues told me they lost TWO nodes in their 4 node scale cluster... and everything kept right on trucking... and these units are the 3-year old models.

That can easily work as long as you don't use your capacity all the way up. If you lose them one at a time and it has time to rebalance, you would be all set in any case. If you lose two at exactly the same time, it's a bit more a matter of luck 🙂
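
A quick toy simulation of exactly that point, with two copies per block on a four node cluster (purely illustrative, not how the real rebalancing works):

```python
import random

def build_cluster(node_names, n_blocks=1000):
    # Each block is mirrored on two distinct nodes (two-way RAIN mirroring).
    nodes = set(node_names)
    blocks = [set(random.sample(sorted(nodes), 2)) for _ in range(n_blocks)]
    return nodes, blocks

def fail_node(nodes, blocks, node, rebalance):
    # Remove a node. With rebalance=True (spare capacity and time to recover),
    # any block left with a single copy is re-mirrored onto a surviving node.
    nodes.discard(node)
    lost = 0
    for copies in blocks:
        copies.discard(node)
        if not copies:
            lost += 1                    # both copies were on failed nodes
        elif rebalance and len(copies) == 1:
            copies.add(random.choice(sorted(nodes - copies)))
    return lost

# Lose two of four nodes one at a time, with time to rebalance in between.
nodes, blocks = build_cluster(["n1", "n2", "n3", "n4"])
seq = fail_node(nodes, blocks, "n1", True) + fail_node(nodes, blocks, "n2", True)
print("sequential failures, blocks lost:", seq)       # 0 -- all set

# Lose the same two nodes at exactly the same moment: any block that happened
# to have both copies on that pair is gone, so it comes down to luck.
nodes, blocks = build_cluster(["n1", "n2", "n3", "n4"])
sim = fail_node(nodes, blocks, "n1", False) + fail_node(nodes, blocks, "n2", False)
print("simultaneous failures, blocks lost:", sim)     # roughly 1 in 6 blocks
```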

    • dafyre @mlnews

                                      @mlnews said:

                                      @dafyre said:

                                      @scottalanmiller said:

                                      The RAIN storage here mirrors at the block level across the cluster providing a very high durability storage layer. And very importantly that's a native storage layer, in the kernel. There is no VSA here, this is a more advanced and more powerful approach. The storage layer runs right in the hypervisor kernel.

                                      A couple of my colleagues told me they lost TWO nodes in their 4 node scale cluster... and everything kept right on trucking... and these units are the 3-year old models.

That can easily work as long as you don't use your capacity all the way up. If you lose them one at a time and it has time to rebalance, you would be all set in any case. If you lose two at exactly the same time, it's a bit more a matter of luck 🙂

                                      It was a whole lotta luck for them, lol. They're not running at full capacity, so they were safe.

    • scottalanmiller @dafyre

                                        @dafyre said:

                                        @mlnews said:

                                        @dafyre said:

                                        @scottalanmiller said:

                                        The RAIN storage here mirrors at the block level across the cluster providing a very high durability storage layer. And very importantly that's a native storage layer, in the kernel. There is no VSA here, this is a more advanced and more powerful approach. The storage layer runs right in the hypervisor kernel.

                                        A couple of my colleagues told me they lost TWO nodes in their 4 node scale cluster... and everything kept right on trucking... and these units are the 3-year old models.

That can easily work as long as you don't use your capacity all the way up. If you lose them one at a time and it has time to rebalance, you would be all set in any case. If you lose two at exactly the same time, it's a bit more a matter of luck 🙂

                                        It was a whole lotta luck for them, lol. They're not running at full capacity, so they were safe.

One of the nice things about scale out is that it also often means "scale back". So if you are not overusing the system, it can rebalance as you fail back to a smaller system.

                                        You can also replicate entire clusters for even more reliability.

    • hobbit666

The more I read about Scale and people's views, the more I want it!

Now to just get the board and management to think we need to refresh our hardware, lol.

    • Alex Sage

I guess I just don't get it. This is basically just Dell servers running a VSAN, connected over 10Gb networking. I don't see what's so special about it.
