    Cannot decide between 1U servers for growing company

    IT Discussion
    • DashrenderD
      Dashrender @ntoxicator
      last edited by

      @ntoxicator said:

      shit me for getting torn to shreds on here. Pissing contest.

      It's much easier to verbalize than to type out exact specifics.

      What I meant by "I just seen as Windows iSCSI initiator working much better; more manageable and not limited."

      Am I currently using the Windows iSCSI initiator? No.
      Do I wish I was using it? Yes.

      Why: Because I feel it would be easier to manage and connect an iSCSI LUN as localized storage and data storage. The larger 2TB storage holds all the Windows network shares and user profile data... that's the problem.

      So disconnect the LUN from the XenServer and connect it directly to ProxMox, then give that drive to the Windows VM. Does that not work in ProxMox?

      scottalanmillerS 1 Reply Last reply Reply Quote 0
      • AconboyA
        Aconboy @ntoxicator
        last edited by

        @ntoxicator said:

        So if anyone can explain to me

        To do away with centralized storage, such as what we have now and what I've been moving to. I suppose this is what I've grown used to.

        In order to have localized storage at the node/hypervisor level, would one or many of the hypervisors be storing all the data and sharing it out over NFS? Then it's replicated between them, probably with DRBD storage?

        However, how would this be done with Citrix XenServer, for instance?

        There are a couple of methods used in the hyperconverged space. Two different schools of thought emerged on how best to simplify the architecture while maintaining the benefits of virtualization.

        1. Simply virtualize the SAN and its controllers - also known as pulling the SAN into the servers. The VSA, or Virtual SAN Appliance, approach was developed to move the SAN up into the host servers through the use of a virtual machine. This did in fact simplify things like implementation and management by eliminating the separate SAN. However, it didn't do much to simplify the data path or regain efficiency. The VSA consumed significant host resources (CPU and RAM), still used storage protocols, and complicated the path to disk by turning the IO path from application->RAM->Disk into application->RAM->hypervisor->RAM->SAN controller VM->RAM->hypervisor->RAM->write-cache SSD->Disk (the two paths are compared hop-for-hop in the sketch after this list).

        2. Eliminate the dedicated servers, storage protocol overhead, resources consumed and associated gear by moving the hypervisor directly into the OS of the storage platform as a set of kernel modules, thereby simplifying the architecture dramatically while regaining the efficiency originally promised by virtualization.
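
A minimal sketch of the hop comparison above; the lists simply restate the two data paths from this post, nothing here is measured:

# Sketch only: count the hops in the two IO paths described above.
vsa_path = ["application", "RAM", "hypervisor", "RAM", "SAN controller VM",
            "RAM", "hypervisor", "RAM", "write-cache SSD", "disk"]
in_kernel_path = ["application", "RAM", "disk"]

print(f"VSA approach:       {len(vsa_path) - 1} hops")        # 9 hops
print(f"in-kernel approach: {len(in_kernel_path) - 1} hops")  # 2 hops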

        1 Reply Last reply Reply Quote 0
        • coliverC
          coliver @ntoxicator
          last edited by

          @ntoxicator said:

          So if anyone can explain to me

          To do away with centralized storage, such as what we have now and what I've been moving to. I suppose this is what I've grown used to.

          In order to have localized storage at the node/hypervisor level, would one or many of the hypervisors be storing all the data and sharing it out over NFS? Then it's replicated between them, probably with DRBD storage?

          However, how would this be done with Citrix XenServer, for instance?

          You could also look at Starwinds Virtual SAN, which could do this as well.

          scottalanmillerS 1 Reply Last reply Reply Quote 1
          • DashrenderD
            Dashrender @scottalanmiller
            last edited by

            @scottalanmiller said:

            It is that it is on a LUN now that is limiting you. If it was on a NAS instead of a SAN, you'd have more options.

            Scott, where's your link explaining the difference?

            coliverC scottalanmillerS 2 Replies Last reply Reply Quote 0
            • coliverC
              coliver @Dashrender
              last edited by coliver

              @Dashrender said:

              @scottalanmiller said:

              It is that it is on a LUN now that is limiting you. If it was on a NAS instead of a SAN, you'd have more options.

              Scott, where's your link explaining the difference?

              One is block, the other is file?
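
A rough way to see that difference from a Linux client, sketched in Python; the device and mount paths below are made up for illustration, not taken from this thread:

import os

# SAN / iSCSI: the host sees a raw block device (a LUN). The client owns the
# filesystem on it, so two hosts cannot safely mount it at once without a
# clustered filesystem sitting on top.
fd = os.open("/dev/sdb", os.O_RDONLY)   # hypothetical LUN presented by the SAN
raw = os.read(fd, 512)                  # just bytes; no files, no server-side locking
os.close(fd)

# NAS / NFS or SMB: the NAS owns the filesystem and arbitrates access, so many
# clients can open files on the same export at the same time.
with open("/mnt/nas/profiles/user1/ntuser.dat", "rb") as f:  # hypothetical NFS mount
    header = f.read(512)

That is the heart of Scott's point above: the 2TB of shares and profile data is easier to work with when it sits behind a file protocol on a NAS than when it is locked inside a LUN that only one host can own at a time.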

              DashrenderD 1 Reply Last reply Reply Quote 0
              • DashrenderD
                Dashrender @coliver
                last edited by

                @coliver said:

                @Dashrender said:

                @scottalanmiller said:

                It is that it is on a LUN now that is limiting you. If it was on a NAS instead of a SAN, you'd have more options.

                Scott, where's your link explaining the difference?

                One is block, the other is file?

                The purpose of my post was to show that the OP is using his NAS as both a NAS and a SAN at the same time, so that Scott's point wasn't lost when he talked about using a SAN versus a NAS.

                1 Reply Last reply Reply Quote 1
                • scottalanmillerS
                  scottalanmiller @ntoxicator
                  last edited by

                  @ntoxicator said:

                  So if anyone can explain to me

                  To do away with centralized storage, such as what we have now and what I've been moving to. I suppose this is what I've grown used to.

                  In order to have localized storage at the node/hypervisor level, would one or many of the hypervisors be storing all the data and sharing it out over NFS? Then it's replicated between them, probably with DRBD storage?

                  However, how would this be done with Citrix XenServer, for instance?

                  There is no NFS, SMB, iSCSI, or anything else involved in the DRBD scenario. It's raw block storage replication all the way down in the stack; you never need to touch the high-level stuff. The storage is visible to both nodes at the same time locally. No sharing protocols at all.

                  With XenServer you could do this yourself, but for best results you'd likely use HA-Lizard, which is a set of tools around DRBD on XenServer that handles all of the complication for you. Not only is it free, but the HA-Lizard team participates here in the community a little.
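
For a sense of what sits underneath that, here is a rough sketch of the kind of two-node DRBD resource definition involved, written out from Python purely for illustration; the hostnames, disks, and addresses are hypothetical, and HA-Lizard generates and manages the real configuration for you:

# Illustration only: roughly what a minimal two-node DRBD resource looks like.
# Every hostname, disk, and address below is made up.
DRBD_RESOURCE = """
resource vm_storage {
    protocol C;                     # synchronous: a write completes only after
                                    # it is on disk on BOTH nodes
    on xen-node1 {
        device    /dev/drbd0;       # replicated block device the host uses
        disk      /dev/sdb;         # local disk backing it
        address   10.10.10.1:7788;  # replication link between the two hosts
        meta-disk internal;
    }
    on xen-node2 {
        device    /dev/drbd0;
        disk      /dev/sdb;
        address   10.10.10.2:7788;
        meta-disk internal;
    }
}
"""

with open("/etc/drbd.d/vm_storage.res", "w") as f:
    f.write(DRBD_RESOURCE)

The point is that each host just sees /dev/drbd0 as a local disk; the replication happens at the block layer underneath, with no NFS, SMB, or iSCSI anywhere in the path.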

                  1 Reply Last reply Reply Quote 0
                  • scottalanmillerS
                    scottalanmiller @Dashrender
                    last edited by

                    @Dashrender said:

                    @ntoxicator said:

                    shit me for getting torn to shreds on here. Pissing contest.

                    It's much easier to verbalize than to type out exact specifics.

                    What I meant by "I just seen as Windows iSCSI initiator working much better; more manageable and not limited."

                    Am I currently using the Windows iSCSI initiator? No.
                    Do I wish I was using it? Yes.

                    Why: Because I feel it would be easier to manage and connect an iSCSI LUN as localized storage and data storage. The larger 2TB storage holds all the Windows network shares and user profile data... that's the problem.

                    So disconnect the LUN from the XenServer and connect it directly to ProxMox, then give that drive to the Windows VM. Does that not work in ProxMox?

                    SANs do not work that way.

                    DashrenderD 1 Reply Last reply Reply Quote 0
                    • scottalanmillerS
                      scottalanmiller @coliver
                      last edited by

                      @coliver said:

                      @ntoxicator said:

                      So if anyone can explain to me

                      To do away with centralized storage, such as what we have now and what I've been moving to. I suppose this is what I've grown used to.

                      In order to have localized storage at the node/hypervisor level, would one or many of the hypervisors be storing all the data and sharing it out over NFS? Then it's replicated between them, probably with DRBD storage?

                      However, how would this be done with Citrix XenServer, for instance?

                      You could also look at Starwinds Virtual SAN, which could do this as well.

                      But it is limited to Hyper-V for best results, and it can make do in the VMware world but is inferior (though you shouldn't be looking at ESXi anyway, so it's not a big deal).

                      coliverC 1 Reply Last reply Reply Quote 0
                      • DashrenderD
                        Dashrender @scottalanmiller
                        last edited by

                        @scottalanmiller said:

                        @Dashrender said:

                        @ntoxicator said:

                        shit me for getting torn to shreds on here. Pissing contest.

                        It's much easier to verbalize than to type out exact specifics.

                        What I meant by "I just seen as Windows iSCSI initiator working much better; more manageable and not limited."

                        Am I currently using the Windows iSCSI initiator? No.
                        Do I wish I was using it? Yes.

                        Why: Because I feel it would be easier to manage and connect an iSCSI LUN as localized storage and data storage. The larger 2TB storage holds all the Windows network shares and user profile data... that's the problem.

                        So disconnect the LUN from the XenServer and connect it directly to ProxMox, then give that drive to the Windows VM. Does that not work in ProxMox?

                        SANs do not work that way.

                        Do we need another thread for an explanation of why not? I don't understand why not, at least not as stated.

                        scottalanmillerS 1 Reply Last reply Reply Quote 1
                        • coliverC
                          coliver @scottalanmiller
                          last edited by

                          @scottalanmiller said:

                          @coliver said:

                          @ntoxicator said:

                          So if anyone can explain to me

                          To do away with centralized storage, such as what we have now and what I've been moving to. I suppose this is what I've grown used to.

                          In order to have localized storage at the node/hypervisor level, would one or many of the hypervisors be storing all the data and sharing it out over NFS? Then it's replicated between them, probably with DRBD storage?

                          However, how would this be done with Citrix XenServer, for instance?

                          You could also look at Starwinds Virtual SAN, which could do this as well.

                          But it is limited to Hyper-V for best results, and it can make do in the VMware world but is inferior (though you shouldn't be looking at ESXi anyway, so it's not a big deal).

                          Doesn't work well with Xen? I thought they supported it.

                          scottalanmillerS 1 Reply Last reply Reply Quote 0
                          • scottalanmillerS
                            scottalanmiller @Dashrender
                            last edited by

                            @Dashrender said:

                            @scottalanmiller said:

                            It is that it is on a LUN now that is limiting you. If it was on a NAS instead of a SAN, you'd have more options.

                            Scott, where's your link explaining the difference?

                            http://www.smbitjournal.com/2013/02/comparing-san-and-nas/

                            1 Reply Last reply Reply Quote 1
                            • scottalanmillerS
                              scottalanmiller @Dashrender
                              last edited by

                              @Dashrender said:

                              @scottalanmiller said:

                              @Dashrender said:

                              @ntoxicator said:

                              shit me for getting torn to shreds on here. Pissing contest.

                              It's much easier to verbalize than to type out exact specifics.

                              What I meant by "I just seen as Windows iSCSI initiator working much better; more manageable and not limited."

                              Am I currently using the Windows iSCSI initiator? No.
                              Do I wish I was using it? Yes.

                              Why: Because I feel it would be easier to manage and connect an iSCSI LUN as localized storage and data storage. The larger 2TB storage holds all the Windows network shares and user profile data... that's the problem.

                              So disconnect the LUN from the XenServer and connect it directly to ProxMox, then give that drive to the Windows VM. Does that not work in ProxMox?

                              SANs do not work that way.

                              Do we need another thread for an explanation of why not? I don't understand why not, at least not as stated.

                              Sure, just ask the question and I'll respond. I'm racing to keep up today between a busy site, kids all over the place, and my dad visiting 🙂 Ask the full question about moving from Xen to KVM, because that's where the rub is.

                              1 Reply Last reply Reply Quote 1
                              • scottalanmillerS
                                scottalanmiller
                                last edited by

                                As I think about it, the LUN disconnect / reconnect might work, since it is Linux handling the connection on both ends. That assumes you have Xen with Linux, but that's a safe assumption.
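
A minimal sketch of what that reconnect involves on the Linux side, assuming the open-iscsi initiator; the portal address and target IQN are placeholders, not values from this thread:

import subprocess

PORTAL = "192.168.1.50:3260"                     # hypothetical SAN portal
TARGET = "iqn.2006-01.com.example:storage.lun1"  # hypothetical target IQN

def run(cmd):
    """Print and run an initiator command."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Discover the targets the portal offers, then log in to the one we want.
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])

# The LUN then shows up as a local block device (e.g. /dev/sdX) that the new
# hypervisor can hand to a VM -- as long as it was cleanly detached from the
# old host first, since a LUN must never be written by two hosts at once.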

                                1 Reply Last reply Reply Quote 0
                                • J
                                  Jason Banned
                                  last edited by

                                  We use lots of 1U servers.

                                  The only servers we have that are 2U are backup appliances (which have 40TB of storage).

                                  However, that being said, 400 is relatively small, and when I worked at companies that small we usually used 2U servers with local replicated storage.

                                  Also, do not use ProxMox. Just don't; it's a toy.

                                  1 Reply Last reply Reply Quote 1
                                  • scottalanmillerS
                                    scottalanmiller @coliver
                                    last edited by

                                    @coliver said:

                                    @scottalanmiller said:

                                    @coliver said:

                                    @ntoxicator said:

                                    So if anyone can explain to me

                                    To do away with centralized storage, such as what we have now and what I've been moving to. I suppose this is what I've grown used to.

                                    In order to have localized storage at the node/hypervisor level, would one or many of the hypervisors be storing all the data and sharing it out over NFS? Then it's replicated between them, probably with DRBD storage?

                                    However, how would this be done with Citrix XenServer, for instance?

                                    You could also look at Starwinds Virtual SAN, which could do this as well.

                                    But it is limited to Hyper-V for best results, and it can make do in the VMware world but is inferior (though you shouldn't be looking at ESXi anyway, so it's not a big deal).

                                    Doesn't work well with Xen? I thought they supported it.

                                    If it can be used with Xen, which I am not aware of being supported or available, it would only be able to do so in the "VMware"-style fallback VM mode, which is vastly inferior to DRBD. It's their integration with Hyper-V in the Dom0 that makes them super powerful there. Definitely not happening on Xen today.

                                    1 Reply Last reply Reply Quote 0
                                    • ntoxicatorN
                                      ntoxicator
                                      last edited by

                                      Ok, so I'll scratch the Proxmox idea. You're right, it probably wouldn't scale. Meh.

                                      But I do know, and have been planning, to migrate away from iSCSI over to NFS storage for XenServer. I already started... but the issue is migrating the storage from the current local disk to the new local disk that gets assigned to the VM.

                                      Still 100% confused on local replicated storage.

                                      Scott made a comment on XenServer with HA-Lizard.

                                      But wouldn't all that storage replication STILL be handled over the 1GbE backbone?!

                                      scottalanmillerS M DustinB3403D 4 Replies Last reply Reply Quote 0
                                      • scottalanmillerS
                                        scottalanmiller @ntoxicator
                                        last edited by

                                        @ntoxicator said:

                                        Scott made a comment on XenServer with HA-Lizard.

                                        But wouldn't all that storage replication STILL be handled over the 1GbE backbone?!

                                        Yes, the ONLY way to avoid that is to abandon HA and move on. HA requires certain things that you can't get away from even if you improve the architecture.

                                        Our Scale cluster has a dedicated 10GigE SAN network, for example, to overcome that. Huge throughput, and it shares nothing with other functions.
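
To put rough numbers on the 1GbE vs 10GbE question (theoretical line rates only; real throughput will be lower once protocol overhead and other traffic are counted):

# Back-of-the-envelope only: theoretical line rates, decimal units.
data_tb = 2   # the ~2TB of shares/profile data mentioned in this thread

for name, gbps in {"1GbE": 1, "10GbE": 10}.items():
    mb_per_s = gbps * 1000 / 8                       # Gbit/s -> MB/s
    hours = data_tb * 1_000_000 / mb_per_s / 3600    # time for a full copy
    print(f"{name}: ~{mb_per_s:.0f} MB/s, full 2TB copy in ~{hours:.1f} h")

# 1GbE:  ~125 MB/s,  roughly 4.4 h for a full 2TB pass
# 10GbE: ~1250 MB/s, roughly 0.4 h

Day to day, DRBD only ships changed blocks, so a 1GbE link mostly has to keep up with the write rate; it is the initial sync and any full resync after a failure where a dedicated 10GbE link really pays off.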

                                        DashrenderD 1 Reply Last reply Reply Quote 1
                                        • ntoxicatorN
                                          ntoxicator
                                          last edited by gjacobse

                                          UPDATE

                                          Oracle just got back to me on pricing. Made me puke.

                                          10K PER server for a F*** simple 1U box? What f[moderated] is going on with this market? Have I completely lost touch?

                                          scottalanmillerS coliverC 2 Replies Last reply Reply Quote 1
                                          • DashrenderD
                                            Dashrender @scottalanmiller
                                            last edited by

                                            @scottalanmiller said:

                                            @ntoxicator said:

                                            Scott made a comment on XenServer with HA-Lizard.

                                            But wouldn't all that storage replication STILL be handled over the 1GbE backbone?!

                                            Yes, the ONLY way to avoid that is to abandon HA and move on. HA requires certain things that you can't get away from even if you improve the architecture.

                                            Our Scale cluster has a dedicated 10GigE SAN network, for example, to overcome that. Huge throughput, and it shares nothing with other functions.

                                            It doesn't have to be. @ntoxicator was talking about installing a 10GbE network for replication.

                                            1 Reply Last reply Reply Quote 0