
    Starwind/Server Limitations

      Jimmy9008

      Hi folks,

      I have a three node Starwind vSAN made up of 3 x R740XD servers each with 14 SAS SSD drives. The Starwind storage is presented to a Windows Failover Cluster, which also runs on these three nodes.

      As Starwind can scale up, I am looking to populate 10 x spare slots in each server with more SSDs and create additional Cluster Storage.

      The thing is though, this storage will not be used by the Failover Cluster on the existing hosts. I am looking to purchase additional hosts, add them to the iSCSI network, and build a new cluster using the vSAN storage on the existing nodes.

      Would we potentially see any performance issues here? The iSCSI network is 10 GbE, and I figure the new servers would be seeing the same performance as if they were connected to a physical SAN. But in this case, it's just virtual. My worry is that the existing hosts are already running VMs, running Starwind, and holding the data, so could they become a bottleneck?

      I plan to run Live Optics to see the current performance. Anything I should look out for?
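
      For reference, here is roughly what I plan to watch on the hosts while Live Optics runs. A minimal sketch using psutil (my own addition, not part of Live Optics); the interval is arbitrary:

      ```python
      # Sample per-NIC and per-disk throughput at a fixed interval.
      # Requires: pip install psutil
      import time
      import psutil

      INTERVAL = 5  # seconds between samples (arbitrary)

      prev_net = psutil.net_io_counters(pernic=True)
      prev_disk = psutil.disk_io_counters(perdisk=True)

      while True:
          time.sleep(INTERVAL)
          net = psutil.net_io_counters(pernic=True)
          disk = psutil.disk_io_counters(perdisk=True)

          # Per-NIC MB/s, to spot a saturated iSCSI link
          for nic, cur in net.items():
              if nic not in prev_net:
                  continue
              out = (cur.bytes_sent - prev_net[nic].bytes_sent) / INTERVAL / 1e6
              inc = (cur.bytes_recv - prev_net[nic].bytes_recv) / INTERVAL / 1e6
              print(f"{nic}: {out:.1f} MB/s out, {inc:.1f} MB/s in")

          # Per-disk MB/s, to spot hot drives under the vSAN
          for name, cur in disk.items():
              if name not in prev_disk:
                  continue
              r = (cur.read_bytes - prev_disk[name].read_bytes) / INTERVAL / 1e6
              w = (cur.write_bytes - prev_disk[name].write_bytes) / INTERVAL / 1e6
              print(f"{name}: {r:.1f} MB/s read, {w:.1f} MB/s write")

          prev_net, prev_disk = net, disk
      ```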

      Best,
      Jim

        Jimmy9008

        As vSAN could be running on three dedicated hosts, with compute connecting over iSCSI anyway, this won't be much of an issue?

          DustinB3403

          Paging @Oksana

            scottalanmiller @Jimmy9008

            @Jimmy9008 said in Starwind/Server Limitations:

            I figure the new servers would be seeing the same performance as if they were connected to a physical SAN.

            Each to its own extremely low latency, extremely high performance SAN, yes. So... nothing like a SAN, basically, lol.

              scottalanmiller @Jimmy9008

              @Jimmy9008 said in Starwind/Server Limitations:

              My worry is that the existing hosts are already running VMs, running Starwind, and holding the data, so could they become a bottleneck?

              The bottleneck scaling up is your switch. Just make sure you don't exhaust the backplane.
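
              To put rough numbers on that (a sketch; the per-drive throughput is an assumed ballpark, not a measured figure):

              ```python
              # Back-of-napkin ceiling check: aggregate local SSD throughput
              # vs. one 10 GbE link. The 500 MB/s per-drive figure is an
              # assumption; check the spec sheet for your actual SAS SSDs.
              DRIVES_PER_NODE = 14      # from the OP: 14 SAS SSDs per R740XD
              PER_DRIVE_MBS = 500       # assumed sequential MB/s per drive
              LINK_GBPS = 10            # iSCSI network speed from the OP

              local_mbs = DRIVES_PER_NODE * PER_DRIVE_MBS   # ~7,000 MB/s inside the box
              link_mbs = LINK_GBPS * 1000 / 8               # ~1,250 MB/s on the wire

              print(f"Local array ceiling: {local_mbs:,.0f} MB/s")
              print(f"10 GbE link ceiling: {link_mbs:,.0f} MB/s")
              # Every compute-only host adds another flow like this through the
              # same switch, which is why the switch backplane is what you watch.
              ```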

                Jimmy9008 @scottalanmiller

                @scottalanmiller said in Starwind/Server Limitations:

                @Jimmy9008 said in Starwind/Server Limitations:

                My worry is that the existing hosts are already running VMs, running Starwind, and holding the data, so could they become a bottleneck?

                The bottleneck scaling up is your switch. Just make sure you don't exhaust the backplane.

                I'll take a look. Traffic is quite light, but I'll see what metrics I can get.

                With the vSAN, having the second set of drives in the chassis as the storage for the second cluster, would we expect to see a bottleneck on the hosts at all? I am running Live Optics to see what they are currently doing...

                  scottalanmiller @Jimmy9008

                  @Jimmy9008 said in Starwind/Server Limitations:

                  The thing is though, this storage will not be used by the Failover Cluster on the existing hosts. I am looking to purchase additional hosts, add them to the iSCSI network, and build a new cluster using the vSAN storage on the existing nodes.

                  Here is the big question.... why?

                    Jimmy9008 @scottalanmiller

                    @scottalanmiller said in Starwind/Server Limitations:

                    @Jimmy9008 said in Starwind/Server Limitations:

                    The thing is though, this storage will not be used by the Failover Cluster on the existing hosts. I am looking to purchase additional hosts, add them to the iSCSI network, and build a new cluster using the vSAN storage on the existing nodes.

                    Here is the big question.... why?

                    Why, which part?

                      scottalanmiller @Jimmy9008

                      @Jimmy9008 said in Starwind/Server Limitations:

                      With the vSAN, having the second set of drives in the chassis as the storage for the second cluster, would we expect to see a bottleneck on the hosts at all? I am running Live Optics to see what they are currently doing...

                      Oh yes, I didn't notice all the details. The new hosts that don't have their own storage will just be using a SAN. Actually using a SAN in the "don't do that" kind of way that we always say. Will it work? Yes. But it's just a SAN. Not a vSAN, a SAN.

                        Jimmy9008 @scottalanmiller

                        @scottalanmiller said in Starwind/Server Limitations:

                        @Jimmy9008 said in Starwind/Server Limitations:

                        With the vSAN, having the second set of drives in the chassis as the storage for the second cluster, would we expect to see a bottleneck on the hosts at all? I am running Live Optics to see what they are currently doing...

                        Oh yes, I didn't notice all the details. The new hosts that don't have their own storage will just be using a SAN. Actually using a SAN in the "don't do that" kind of way that we always say. Will it work? Yes. But it's just a SAN. Not a vSAN, a SAN.

                        Correct. It's a SAN, not a vSAN. But in this case, as the storage presented to the cluster over the network from Starwind is redundant, it's better than a single SAN. I just want to be sure that the additional I/O from the new cluster won't cause any performance issues for the existing cluster that sits on the same hosts.

                          Jimmy9008

                          So, the cluster storage is mirrored between vSAN host 1 and vSAN host 2, and is then attached to the cluster. Plus, redundant switches. So in this case there is no IPOD design, as we can lose one of anything and stay up.

                            scottalanmiller @Jimmy9008

                            @Jimmy9008 said in Starwind/Server Limitations:

                            But in this case, as the storage presented to the cluster over the network from Starwind is redundant, it's better than a single SAN.

                            Oh yes, it's a SAN cluster, but you are "always" supposed to have a SAN cluster as a starting point when using a SAN anyway.

                            The only real factor here is that it "already exists." Except it doesn't, actually: you are adding drives specifically to use them in this way rather than putting those drives in the machines that will use them.

                              scottalanmiller @Jimmy9008

                              @Jimmy9008 said in Starwind/Server Limitations:

                              So, the cluster storage is mirrored between vSAN host 1 and vSAN host 2, and is then attached to the cluster. Plus, redundant switches. So in this case there is no IPOD design, as we can lose one of anything and stay up.

                              You are misunderstanding how an IPOD works. It's still an IPOD; it's still an inverted pyramid of doom. It's better than a totally misdesigned IPOD, a "proper IPOD", but it is still 100% an IPOD.

                                scottalanmiller @Jimmy9008

                                @Jimmy9008 in your design, your normal Starwind nodes have one point of failure, no dependencies. But on the "other" nodes, without their own storage, they depend on the SAN, the switches, and themselves. Three points of failure, instead of one.

                                Why not put the drives directly in the nodes and avoid that? What's the reason to put their drives remotely?
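
                                To put rough numbers on the dependency chain (a sketch; the per-component availability figures are made-up illustrations, not vendor numbers):

                                ```python
                                # Serial dependencies multiply: a node is only as available as the
                                # product of everything it cannot run without. The 99.9% figures
                                # per component are illustrative assumptions.
                                HOST = 0.999     # the node itself
                                SWITCH = 0.999   # the iSCSI switch layer
                                SAN = 0.999      # the remote StarWind storage, as a compute-only node sees it

                                with_local_drives = HOST             # one point of failure
                                compute_only = HOST * SWITCH * SAN   # three points of failure

                                print(f"Node with local drives: {with_local_drives:.4%}")
                                print(f"Compute-only node:      {compute_only:.4%}")
                                ```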

                                  coliver @scottalanmiller

                                  @scottalanmiller said in Starwind/Server Limitations:

                                  @Jimmy9008 in your design, your normal Starwind nodes have one point of failure, no dependencies. But on the "other" nodes, without their own storage, they depend on the SAN, the switches, and themselves. Three points of failure, instead of one.

                                  Why not put the drives directly in the nodes and avoid that? What's the reason to put their drives remotely?

                                  Licensing from the sounds of it.

                                    Jimmy9008 @coliver

                                    @coliver said in Starwind/Server Limitations:

                                    @scottalanmiller said in Starwind/Server Limitations:

                                    @Jimmy9008 in your design, your normal Starwind nodes have one point of failure, no dependencies. But on the "other" nodes, without their own storage, they depend on the SAN, the switches, and themselves. Three points of failure, instead of one.

                                    Why not put the drives directly in the nodes and avoid that? What's the reason to put their drives remotely?

                                    Licensing from the sounds of it.

                                    Correct, by adding to the existing vSAN, that storage side is under the Starwind SLA/Support.

                                      scottalanmiller @Jimmy9008

                                      @Jimmy9008 said in Starwind/Server Limitations:

                                      @coliver said in Starwind/Server Limitations:

                                      @scottalanmiller said in Starwind/Server Limitations:

                                      @Jimmy9008 in your design, your normal Starwind nodes have one point of failure, no dependencies. But on the "other" nodes, without their own storage, they depend on the SAN, the switches, and themselves. Three points of failure, instead of one.

                                      Why not put the drives directly in the nodes and avoid that? What's the reason to put their drives remotely?

                                      Licensing from the sounds of it.

                                      Correct, by adding to the existing vSAN, that storage side is under the Starwind SLA/Support.

                                      I see.

                                        Dashrender @Jimmy9008

                                        @Jimmy9008 said in Starwind/Server Limitations:

                                        @coliver said in Starwind/Server Limitations:

                                        @scottalanmiller said in Starwind/Server Limitations:

                                        @Jimmy9008 in your design, your normal Starwind nodes have one point of failure, no dependencies. But on the "other" nodes, without their own storage, they depend on the SAN, the switches, and themselves. Three points of failure, instead of one.

                                        Why not put the drives directly in the nodes and avoid that? What's the reason to put their drives remotely?

                                        Licensing from the sounds of it.

                                        Correct, by adding to the existing vSAN, that storage side is under the Starwind SLA/Support.

                                        I was wondering if this was the case upon reading the OP.

                                          StarWindEngineer

                                          @Jimmy9008
                                          Taking into account the cluster specification you mentioned, StarWind is not the bottleneck; the network configuration might be. For similar StarWind HyperConverged Appliance setups we install at least 25 GbE network adapters and switches, though 40 GbE would be preferable. I would recommend benchmarking the network to see whether the 10 GbE network is the bottleneck for your environment.

                                          As emphasized above, use Live Optics to get a picture of your production's current performance utilization. Additionally, you can use diskspd to benchmark storage performance and network utilization.
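
                                          As a starting point, here is how to turn those measurements into a network sanity check (a sketch; the IOPS and block size values are placeholders, replace them with your measured peaks):

                                          ```python
                                          # Convert a measured workload (IOPS x block size) into wire bandwidth
                                          # and compare it with link capacity. Inputs are placeholders; use the
                                          # peak values Live Optics / diskspd report for your environment.
                                          PEAK_IOPS = 50_000    # placeholder: measured peak IOPS
                                          BLOCK_KB = 64         # placeholder: dominant I/O size in KB
                                          LINK_GBPS = 10        # current iSCSI link speed

                                          required_mbs = PEAK_IOPS * BLOCK_KB / 1024    # MB/s the workload moves
                                          link_mbs = LINK_GBPS * 1000 / 8               # ideal MB/s per 10 GbE link

                                          print(f"Workload needs: {required_mbs:,.0f} MB/s")
                                          print(f"10 GbE offers:  {link_mbs:,.0f} MB/s per link")
                                          if required_mbs > 0.7 * link_mbs:
                                              print("Close to saturation - 25/40 GbE would be the safer choice.")
                                          ```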

                                          Please tag or DM me should you have any questions.

                                          Have a nice day!
