    Cannot decide between 1U servers for growing company

    IT Discussion
Aconboy @ntoxicator

      @ntoxicator said:

      NOTE:

Just spoke with folks at Oracle sales; we had a conference call to discuss the X5-2 servers and specs. Awaiting pricing.

Also noticed a LOT of IBM System x servers on eBay, newer ones at that. Not a good sign. It also relates back to how IBM didn't trust their own servers.

I am not super surprised, as I was looking at the specs on the x3250 M5 yesterday and was floored by how outdated they are compared to Dell, HP, Supermicro, etc.

ntoxicator @coliver

        @coliver

        I'm aware of this - and that is the point I was getting across.

With an iSCSI initiator in the guest, I COULD attach the LUN as a local disk over a direct connection and take advantage of near-full network speed with less overhead.

In my opinion, there would be more overhead with:

iSCSI LUN attached to Xen hypervisor > VM > attached as local disk. Unless pass-through?

Furthermore, the issue stands.

With the primary data being on the Citrix XenServer host as a local disk (iSCSI LUN storage), if I were to migrate to NFS storage mounted to the XenServer host, I would attach it as a NEW disk to that virtual machine, mount it within Windows, and format it. Then I'd be stuck 'xcopy'-ing the data and permissions over to this new storage drive.

This is an issue now, as the Citrix XenServer host has storage tied to our original Synology 4-bay NAS.

I've been wanting to move ALL our LUNs and data to our newer, larger Synology NAS, and then use the original 4-bay for replication/backup.
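(A rough sketch of that copy step, if it ever has to happen: robocopy with /COPYALL carries the NTFS ACLs across, which plain xcopy handles less well. The drive letters and paths here are hypothetical placeholders:)

    # Hedged sketch: copy data plus NTFS permissions to the new disk.
    # D:\ (old iSCSI-backed disk) and E:\ (new disk) are placeholders.
    import subprocess

    def copy_with_acls(src=r"D:\Data", dst=r"E:\Data"):
        # /E copies subdirectories (including empty ones), /COPYALL copies
        # data + ACLs + owner + auditing info, /R:1 /W:1 keeps retries short.
        cmd = ["robocopy", src, dst, "/E", "/COPYALL", "/R:1", "/W:1"]
        rc = subprocess.call(cmd)
        return rc < 8  # robocopy exit codes 0-7 indicate success

    if __name__ == "__main__":
        print("copy ok:", copy_with_acls())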

coliver @ntoxicator

          @ntoxicator said:

Still confused as to why local storage is being recommended over centralized storage on a NAS?

Because a standard NAS isn't any more reliable than a standard server... mostly because they are standard servers with special software thrown on top. Why would you worry about a server node dying but not your storage node?

scottalanmiller @ntoxicator

            @ntoxicator said:

Also noticed a LOT of IBM System x servers on eBay, newer ones at that. Not a good sign. It also relates back to how IBM didn't trust their own servers.

Now that IBM doesn't make or support IBM servers, even for existing customers... the one reason that people had for selecting them is gone.

scottalanmiller @ntoxicator

              @ntoxicator said:

              @coliver

              I'm aware of this - and that is the point I was getting across.

With an iSCSI initiator in the guest, I COULD attach the LUN as a local disk over a direct connection and take advantage of near-full network speed with less overhead.

In my opinion, there would be more overhead.

Oh absolutely, there is more overhead. But that overhead is trivial, and it gets handled in a more reliable way (Linux iSCSI is more reliable than Windows iSCSI, storage is better handled at the host than at the guest, and networking has less overhead at the host than at the guest), so this is generally considered not to be a factor at all. More important are fragility and manageability.

              What if you need to pause a VM... how will the VM know to tell the SAN to freeze in this way?

scottalanmiller @ntoxicator

                @ntoxicator said:

I've been wanting to move ALL our LUNs and data to our newer, larger Synology NAS, and then use the original 4-bay for replication/backup.

                Synology is Supermicro gear. It's just a normal server. If you are okay with having a normal lower end enterprise server on which everything rests, why have the other servers at all? Why not go down to a single server for everything? What's the purpose of the additional servers?

coliver @ntoxicator

                  @ntoxicator said:

In my opinion, there would be more overhead with:

iSCSI LUN attached to Xen hypervisor > VM > attached as local disk. Unless pass-through?

                  Slightly more overhead... probably an immeasurable amount. At the same time you are going against best practices and defeating many of the advantages of virtualization in one fell swoop by not attaching the storage to your hypervisor and presenting a virtual disk to the VM.
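(For reference, the host-side attachment described here looks roughly like the following on XenServer. The target IP, IQN, and SCSI ID are made-up placeholders; real values come from an sr-probe against the NAS:)

    # Hedged sketch: attach the iSCSI LUN to the XenServer *host* as a
    # storage repository (SR), instead of running an iSCSI initiator in the
    # guest; VMs then just receive ordinary virtual disks from this SR.
    import subprocess

    subprocess.check_call([
        "xe", "sr-create",
        "name-label=synology-lun1",
        "type=lvmoiscsi",
        "shared=true",
        "content-type=user",
        "device-config:target=192.168.1.50",                           # placeholder IP
        "device-config:targetIQN=iqn.2000-01.com.synology:nas.lun1",   # placeholder IQN
        "device-config:SCSIid=3600a0b8...",  # placeholder; discover via xe sr-probe
    ])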

scottalanmiller @ntoxicator

                    @ntoxicator said:

With the primary data being on the Citrix XenServer host as a local disk (iSCSI LUN storage), if I were to migrate to NFS storage mounted to the XenServer host, I would attach it as a NEW disk to that virtual machine, mount it within Windows, and format it. Then I'd be stuck 'xcopy'-ing the data and permissions over to this new storage drive.

Yes, sadly using SAN instead of NAS introduces all kinds of complications, because all data has to be processed through another machine to be useful - including doing transfers of the data.

However, as long as you don't start attaching directly to the guests, you can use Storage XenMotion (XenServer's equivalent of Storage vMotion) to do this move at the block level without needing to deal with xcopy or anything of the sort. XenServer can do this for you - one of the big, critical reasons you don't attach storage to the guests is that you lose the protections and features that the hypervisor provides.
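(A sketch of what that looks like from the xe CLI, wrapped in Python; both UUIDs are placeholders that you'd look up with "xe vm-disk-list" and "xe sr-list" on the host:)

    # Hedged sketch: live-migrate a virtual disk to a new SR with
    # Storage XenMotion. UUIDs below are hypothetical placeholders.
    import subprocess

    def live_migrate_disk(vdi_uuid, dest_sr_uuid):
        # Moves the virtual disk block-by-block while the VM keeps running,
        # so there is no xcopy step and no guest-visible downtime.
        subprocess.check_call([
            "xe", "vdi-pool-migrate",
            f"uuid={vdi_uuid}",
            f"sr-uuid={dest_sr_uuid}",
        ])

    live_migrate_disk("11111111-2222-...", "33333333-4444-...")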

ntoxicator

                      @scottalanmiller

Thank you for the insight... great points from you and everyone.

For centralized storage:

Right now it's essentially a single Synology NAS (serving out NFS and iSCSI LUNs).

I have two (2) Synology NASes, but one is directly tied to the Citrix XenServer host and its storage needs. The 2nd, larger Synology NAS is tied to both the Citrix XenServer host (NFS) and also the Proxmox storage.

The goal was to migrate ALL data off the old NAS to the new, larger NAS. But between other limitations and the storage size growing so rapidly, it became very difficult.

The company bitches to me about any downtime, as users will randomly want to work remotely or from home. So telling the CEO that I need to migrate 2TB of data over the network to the new storage pool, and that it will take 10 hours, is like pulling teeth.
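(The 10-hour figure is roughly what a single gigabit path predicts once overhead bites, e.g.:)

    # Back-of-envelope: time to push 2 TB over one gigabit link.
    size_bytes = 2e12                 # 2 TB of data
    for rate_mb_s in (110, 55):       # ~wire speed vs. ~50% effective throughput
        hours = size_bytes / (rate_mb_s * 1e6) / 3600
        print(f"{rate_mb_s} MB/s -> {hours:.1f} h")
    # ~5 h at wire speed; ~10 h once protocol overhead and contention factor in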

The ultimate goal in the new setup I was planning (meaning WAS):

2 - Synology 12-bay NAS units, data replicated between them

                      2 - 3 NODE servers for housing the Virtual Machines

scottalanmiller @coliver

                        @coliver said:

                        @ntoxicator said:

In my opinion, there would be more overhead with:

iSCSI LUN attached to Xen hypervisor > VM > attached as local disk. Unless pass-through?

                        Slightly more overhead... probably an immeasurable amount. At the same time you are going against best practices and defeating many of the advantages of virtualization in one fell swoop by not attaching the storage to your hypervisor and presenting a virtual disk to the VM.

As I was writing out the downsides, I realized I'm not actually sure that it is more overhead. Because the iSCSI has to be processed in software by the VM rather than in hardware by the host, there is more network overhead in doing it in the guest.

coliver @scottalanmiller

                          @scottalanmiller said:

                          @coliver said:

                          @ntoxicator said:

In my opinion, there would be more overhead with:

iSCSI LUN attached to Xen hypervisor > VM > attached as local disk. Unless pass-through?

                          Slightly more overhead... probably an immeasurable amount. At the same time you are going against best practices and defeating many of the advantages of virtualization in one fell swoop by not attaching the storage to your hypervisor and presenting a virtual disk to the VM.

As I was writing out the downsides, I realized I'm not actually sure that it is more overhead. Because the iSCSI has to be processed in software by the VM rather than in hardware by the host, there is more network overhead in doing it in the guest.

Right, I agree with this. I assumed that it would be slightly more processing overhead for the hypervisor, but since it would be doing it anyway, it wouldn't be anything additional.

scottalanmiller @ntoxicator

                            @ntoxicator said:

The goal was to migrate ALL data off the old NAS to the new, larger NAS. But between other limitations and the storage size growing so rapidly, it became very difficult.

                            XenServer should be able to do that with no downtime. Did you look into its features for moving storage while it is running?

scottalanmiller @coliver

                              @coliver said:

                              @scottalanmiller said:

                              @coliver said:

                              @ntoxicator said:

In my opinion, there would be more overhead with:

iSCSI LUN attached to Xen hypervisor > VM > attached as local disk. Unless pass-through?

                              Slightly more overhead... probably an immeasurable amount. At the same time you are going against best practices and defeating many of the advantages of virtualization in one fell swoop by not attaching the storage to your hypervisor and presenting a virtual disk to the VM.

As I was writing out the downsides, I realized I'm not actually sure that it is more overhead. Because the iSCSI has to be processed in software by the VM rather than in hardware by the host, there is more network overhead in doing it in the guest.

Right, I agree with this. I assumed that it would be slightly more processing overhead for the hypervisor, but since it would be doing it anyway, it wouldn't be anything additional.

Networking in the host is more efficient than in the guest. And both networking and storage are more efficient in Linux and Xen than in Windows. So double bonus on efficiency.

ntoxicator

I've moved VMs on Citrix XenServer, and storage to another LUN, back when I installed the 2nd Synology.

It saturated the network.

The current Supermicro 1U server only has two Intel NICs. I have them bonded via XenCenter with LACP enabled.
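(Worth noting: an LACP bond hashes each flow onto a single member link, so one big storage stream still tops out at 1 Gb/s no matter how many NICs are in the bond. A toy illustration of the idea:)

    # Toy model of LACP link selection: one hash per flow, so a single
    # iSCSI/NFS stream always lands on the same 1 GbE member link.
    def lacp_member(src_ip: str, dst_ip: str, n_links: int = 2) -> int:
        # Real bonds hash MAC/IP/port fields; this simple digit sum is
        # only to show that the same pair always picks the same link.
        key = sum(int(octet) for octet in (src_ip + "." + dst_ip).split("."))
        return key % n_links

    # Every packet of this one storage flow uses a single NIC of the bond:
    print(lacp_member("10.0.0.5", "10.0.0.50"))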

Aconboy @coliver

                                  @coliver said:

                                  @scottalanmiller said:

                                  @coliver said:

                                  @ntoxicator said:

In my opinion, there would be more overhead with:

iSCSI LUN attached to Xen hypervisor > VM > attached as local disk. Unless pass-through?

                                  Slightly more overhead... probably an immeasurable amount. At the same time you are going against best practices and defeating many of the advantages of virtualization in one fell swoop by not attaching the storage to your hypervisor and presenting a virtual disk to the VM.

As I was writing out the downsides, I realized I'm not actually sure that it is more overhead. Because the iSCSI has to be processed in software by the VM rather than in hardware by the host, there is more network overhead in doing it in the guest.

Right, I agree with this. I assumed that it would be slightly more processing overhead for the hypervisor, but since it would be doing it anyway, it wouldn't be anything additional.

Guys, this is largely the reason we architected HC3 the way we did - to give you the flexibility and HA of SAN/NAS without the complexity or overhead of VSAs and storage protocols. This is also why we built it specifically for the SMB and mid-market, at a price that makes sense specifically for our target market (not trying to sound too "salesy," but this is exactly why we built the platform).

ntoxicator

Also, my goal was to migrate to NFS storage, away from iSCSI, as dealing with a RAW image or .qcow2 image file is a hell of a lot easier.
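(That part checks out: with file-backed storage the disk is just a file, so inspection and conversion are one-liners. A sketch on the Proxmox/KVM side, with made-up file names:)

    # Hedged sketch: file-backed images are easy to inspect and convert.
    # File names below are hypothetical placeholders.
    import subprocess

    # Show format, virtual size, and allocation of an image
    subprocess.check_call(["qemu-img", "info", "vm-100-disk-1.raw"])

    # Convert raw -> qcow2 (e.g. to gain snapshots / thin provisioning)
    subprocess.check_call([
        "qemu-img", "convert", "-f", "raw", "-O", "qcow2",
        "vm-100-disk-1.raw", "vm-100-disk-1.qcow2",
    ])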

ntoxicator

                                      @Aconboy said:

                                      HC3

How long has Scale's HC3 been available? Today is my first time hearing of it and being introduced to the option.

Aconboy @ntoxicator

                                        @ntoxicator said:

                                        @Aconboy said:

                                        HC3

How long has Scale's HC3 been available? Today is my first time hearing of it and being introduced to the option.

We began it in 2008, and first customer ship was in late 2011. We have north of 5,000 units in the field across 1,800 or so customer sites. Take a look at www.scalecomputing.com

dafyre

The Scale systems are excellent. I know NTG has one. I worked with their systems a couple of years ago, and the performance was night and day vs. VMware with a similarly sized SAN. Their systems work really well.

scottalanmiller @ntoxicator

                                            @ntoxicator said:

I've moved VMs on Citrix XenServer, and storage to another LUN, back when I installed the 2nd Synology.

It saturated the network.

The current Supermicro 1U server only has two Intel NICs. I have them bonded via XenCenter with LACP enabled.

Yeah, storage migrations will do that 🙂 That's why you want block storage on a dedicated SAN network when possible, so that it uses its own "back channel" and doesn't impact other things.
