
    Cannot decide between 1U servers for growing company

    IT Discussion
    • scottalanmiller @Minion-Queen

      @Minion-Queen said:

      And to be clear, NTG partners with Scale and many other vendors, so that we can offer the most and best solutions for clients to choose from (what is best depends on each client).

      Yes, we work quite hard to remain neutral, and an important part of that is disclosure 🙂

    • ntoxicator

      The server quotes came back.

      The Lenovo System x servers are decently priced per 1U; the ThinkServers are the cheapest.

      Still waiting on pricing from Cisco.

      Oracle dropped its price down to nearly $8k for a 1U server, but they only offer minimal 600GB SAS drives?? wtf

      I'd like to have new server prices to present to the CEO/finance, and then also look at used servers. I really like what xByte has in inventory.

    • brianlittlejohn @ntoxicator

      @ntoxicator The next server I buy will be from xByte... unless I go Scale.

    • Aconboy @ntoxicator

      @ntoxicator Pricing seems relatively OK on the 3250 M5s; my concern is that they seem a bit dated on specs vs. others, and the BIOS on them is a huge PITA in my opinion. Are you still thinking of rolling your own, or is a prebuilt solution still on the radar for you?

    • ntoxicator

      I could easily build our own again using Supermicro hardware. It's trusty; the only downside is the warranty and such.

      But I suppose, if running 2-3 servers in HA within XenServer, it would be a non-issue if one of them had a hardware problem.

    • Aconboy @ntoxicator

      @ntoxicator Agreed on Supermicro: good gear, but the company is built to work with vendors, not so much end users. On the Scale side, we don't use Xen; there's too much overhead on the L3->L1 calls. We use KVM at the base and then made it cluster-aware.

    • ntoxicator

      And here I was getting blasted about Proxmox and KVM... I personally feel KVM is superior.

      It's just that XenServer uses the Xen hypervisor and packages its own features, etc. I don't know all the details, just the 10,000-foot view.

    • Aconboy @ntoxicator

      @ntoxicator I can go over what we do with you, no strings attached. I have a web demo that I do weekly on Thursdays at 1:30 Central Time: http://bit.ly/HC3LiveDemo if you want to come. As far as KVM goes, being that it is a pair of kernel modules, it allows us to do tons of stuff that we otherwise couldn't.
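
      For context on the "pair of kernel modules" point: on a Linux host with hardware virtualization enabled, KVM loads as the generic kvm module plus a vendor-specific one (kvm_intel or kvm_amd). A minimal, illustrative check in Python (just a sketch, nothing Scale-specific):

          # List the loaded kernel modules whose names start with "kvm".
          with open("/proc/modules") as modules:
              kvm_modules = [line.split()[0] for line in modules if line.split()[0].startswith("kvm")]
          print(kvm_modules)  # e.g. ['kvm_intel', 'kvm'] on an Intel host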

    • ntoxicator

      I'll see if I can join. I have another meeting at 2 PM EST; if I'm available, I'll hop on.

    • Aconboy @ntoxicator

      @ntoxicator If you can, great; if not, that's fine. I do them pretty much every week.

    • scottalanmiller @ntoxicator

      @ntoxicator said:

      And here I was getting blasted about Proxmox and KVM... I personally feel KVM is superior.

      It's just that XenServer uses the Xen hypervisor and packages its own features, etc. I don't know all the details, just the 10,000-foot view.

      XenServer is made by the Xen team at the Linux Foundation, just as KVM is. Both KVM and Xen come from the same team. XenServer and XCP are just the Linux Foundation's packaging of Xen as a full product, rather than just a component like Xen itself that you need to build your own system around.

      KVM is very good, as is Xen. The biggest difference is that KVM lacks the ecosystem, so if you want it you normally get it packaged by someone else, while Xen you normally get as XenServer. Just think of XenServer as the reference distro of Xen.

    • ntoxicator

      Thank you. Wonderful explanation.

    • scottalanmiller

      No problem 🙂

      Some extra info... the other big providers of packaged Xen systems are Ubuntu, SUSE, and Oracle. The big backers of KVM are Red Hat and IBM.

      Big clouds using Xen: Amazon, Rackspace, and IBM.
      Big clouds using KVM: Digital Ocean and Vultr.

      Xen is more powerful "out of the box." KVM is more extensible.

      Xen is more performant for Linux workloads; KVM is more performant for Windows workloads. Both are super fast, and performance is not normally a deciding factor.

      Besides Scale, lots of other vendors build on KVM as well, for similar reasons. One vendor that we work with regularly that uses KVM because of the ease of automation is Unitrends.

    • scottalanmiller

      Proxmox I avoid; KVM I do not 😉 It's Proxmox itself that is the issue there, not that it is built on KVM.

    • ntoxicator

      You're the man. Amazing information here; it goes a long way.

      Do you think there would be an issue upgrading the current XenServer node to 6.5? It's presently on 6.0.

      I have a 6.1 ISO sitting here right now that some other nodes were running, but I migrated those to Proxmox for testing/development.

      I'm always worried something will 'break'.

    • DustinB3403 @ntoxicator

      @ntoxicator said:

      So if anyone can explain it to me: to do away with centralized storage such as what we have now (and what I've been moving toward; I suppose it's what I've grown used to), and to have localized storage at the node/hypervisor level instead, one or many of the hypervisors would be storing all the data and sharing it out over NFS? Then it's replicated between them, probably with DRBD storage?

      However, how would this be done with Citrix XenServer, for instance?

      So I just wrote up an entire quote on this for my org.

      Your XenServers would have enough capacity (storage) to run everything you have today, plus room for growth. You build the Xen installation on both hosts and then configure them into a XenServer pool.

      This allows the VMs to migrate between the two (or more) hosts in the event you need to work on one of them.

      For free 2-node HA, look into HA-Lizard.
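
      As a rough sketch of what the pool-join step looks like with the standard XenServer xe CLI (the address and credentials below are placeholders, not anything from this thread), wrapped in Python:

          import subprocess

          def xe(*args):
              """Run an xe command on the local XenServer host and return its output."""
              return subprocess.run(("xe",) + args, capture_output=True, text=True,
                                    check=True).stdout.strip()

          # On the second host: join the pool whose master is at 10.0.0.1
          # (placeholder address and credentials; substitute your own).
          xe("pool-join",
             "master-address=10.0.0.1",
             "master-username=root",
             "master-password=CHANGE_ME")

          # Once pooled, VMs can be moved between hosts for maintenance, e.g.
          # xe("vm-migrate", "vm=<vm-uuid>", "host=<other-host-uuid>", "live=true")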

    • DustinB3403 @ntoxicator

      @ntoxicator said:

      But wouldn't all that storage replication STILL be handled over the 1GbE backbone??!

      Bond the Ethernet interfaces together or install a 10GbE NIC in each host. 🙂
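
      For reference, bonding two NICs on a XenServer host is also done through the xe CLI; a minimal sketch in the same vein (the PIF UUIDs are placeholders you would look up with "xe pif-list" on your own host):

          import subprocess

          def xe(*args):
              """Run an xe command and return its output."""
              return subprocess.run(("xe",) + args, capture_output=True, text=True,
                                    check=True).stdout.strip()

          # Create a network for the bond, then bond two physical interfaces (PIFs) onto it.
          network_uuid = xe("network-create", "name-label=bond0")
          xe("bond-create",
             "network-uuid=" + network_uuid,
             "pif-uuids=<pif-uuid-of-eth0>,<pif-uuid-of-eth1>")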

    • ntoxicator

      The Ethernet is already bonded.

      I could install 10GbE cards in the servers, but the NAS would still be the limitation, as it does not support PCIe cards.

    • coliver @ntoxicator

      @ntoxicator said:

      The Ethernet is already bonded.

      I could install 10GbE cards in the servers, but the NAS would still be the limitation, as it does not support PCIe cards.

      He's saying you should get rid of the NAS and have all the storage on the servers.

    • DustinB3403 @ntoxicator

      @ntoxicator said:

      The Ethernet is already bonded.

      I could install 10GbE cards in the servers, but the NAS would still be the limitation, as it does not support PCIe cards.

      But your NAS isn't used as a place for your VMs to reside. As a backup target, sure, but the NAS is worthless in the case of local storage on your Xen hosts.
