
    XenServer, local storage, and redundancy/backups

    IT Discussion
    Tags: xenserver, backup, redundancy
    • DustinB3403 @scottalanmiller

      @scottalanmiller Thank you for the clarification.

    • Kelly @scottalanmiller

        @scottalanmiller said:

        Can you break down the hardware better? I'm unclear if you have an OpenStack compute structure AND a CEPH one or if that is all on the same hardware?

        If the former, why not keep CEPH and only move the top layer from OpenStack to XS?

        Also, what is driving the move away from OpenStack? Just a desire for simplicity?

        I have four hosts that are all OpenStack and Ceph Nodes. Bad design on all counts. I wish they were separated. The current hardware requirements preclude that at this point. My ultimate goal is to move the storage to dedicated hardware, perhaps utilizing Ceph then, but until then I need to get a working virtualization platform.

    • Kelly

      As for the driving factor, it is that we're trying to simplify our infrastructure. It looked like we might be able to achieve this using Mirantis to package up OpenStack, but we're having issues getting their deployment tools to work. If I had more time to play with things I might fight it to the point of working, but we're currently running in a semi-crippled state (one of the hosts removed itself from the cloud), and I need to get something up and running sooner rather than later that can still be a longish-term solution. We don't really need a private cloud. It is mostly a convenience, and at this point it appears not to be worth the overhead to set up and maintain.

    • travisdh1 @Kelly

            @Kelly said:

            @scottalanmiller said:

        Can you break down the hardware better? I'm unclear if you have an OpenStack compute structure AND a CEPH one or if that is all on the same hardware?

            If the former, why not keep CEPH and only move the top layer from OpenStack to XS?

            Also, what is driving the move away from OpenStack? Just a desire for simplicity?

            I have four hosts that are all OpenStack and Ceph Nodes. Bad design on all counts. I wish they were separated.

            Why do you feel the need to separate the storage and compute? What business reason exists to justify the added cost and management headache?

      I get that you currently have a management headache, and I do like the idea of moving to something more reliable. XenServer with HA-Lizard and Xen Orchestra would be a great drop-in replacement; it's what I'm migrating to, at least.

    • scottalanmiller @Kelly

              @Kelly said:

              @scottalanmiller said:

        Can you break down the hardware better? I'm unclear if you have an OpenStack compute structure AND a CEPH one or if that is all on the same hardware?

              If the former, why not keep CEPH and only move the top layer from OpenStack to XS?

              Also, what is driving the move away from OpenStack? Just a desire for simplicity?

              I have four hosts that are all OpenStack and Ceph Nodes. Bad design on all counts. I wish they were separated. The current hardware requirements preclude that at this point. My ultimate goal is to move the storage to dedicated hardware, perhaps utilizing Ceph then, but until then I need to get a working virtualization platform.

      Gotcha, okay. I was hoping that the CEPH infrastructure could remain. But I guess not.

    • scottalanmiller @Kelly

                @Kelly said:

        As for the driving factor, it is that we're trying to simplify our infrastructure. It looked like we might be able to achieve this using Mirantis to package up OpenStack, but we're having issues getting their deployment tools to work. If I had more time to play with things I might fight it to the point of working, but we're currently running in a semi-crippled state (one of the hosts removed itself from the cloud), and I need to get something up and running sooner rather than later that can still be a longish-term solution. We don't really need a private cloud. It is mostly a convenience, and at this point it appears not to be worth the overhead to set up and maintain.

                I totally get that private cloud isn't likely to make sense with just four nodes, that's pretty crazy 🙂

                Was just thinking of what might be easy going forward.

    • scottalanmiller

                  Do you really need HA? HA adds complication. Although there is an option here... with four nodes you could do TWO HA-Lizard clusters and I think put it all under XO for a single pane of glass. Not as nice as a single four node cluster, but free and works with what you have, more or less.

    • scottalanmiller

                    The mismatched local drives will pose a challenge for local RAID. You can use them, but you will get crippled performance.

    • Kelly @scottalanmiller

                      @scottalanmiller said:

                      The mismatched local drives will pose a challenge for local RAID. You can use them, but you will get crippled performance.

      It looks like there is an onboard 8-port LSI SAS controller in them, so I might be able to do RAID. My desire for HA is not necessarily for HA itself, but more for survivability if a single drive fails, since I wasn't planning to run any kind of RAID so as not to lose storage capacity.

    • scottalanmiller @Kelly

                        @Kelly said:

        It looks like there is an onboard 8-port LSI SAS controller in them, so I might be able to do RAID. My desire for HA is not necessarily for HA itself, but more for survivability if a single drive fails, since I wasn't planning to run any kind of RAID so as not to lose storage capacity.

                        You are going to lose storage capacity to either RAID or RAIN. Can't do any sort of failover without losing capacity. The simplest thing, if you are okay with it, would be to do RAID 6 or RAID 10 (depending on the capacity that you are willing to lose) using MD software RAID and not do HA but just run each machine individually. Use XO to manage them all as a pool.
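
A minimal sketch of what that MD software RAID setup could look like on one host. The device names and the config path are assumptions (the eight drives are guessed to show up as /dev/sdb through /dev/sdi); confirm with `lsblk` on the actual hosts before running anything:

```shell
# Assumed device names -- verify with lsblk before touching anything.
# RAID 6 across all eight drives: usable capacity of six drives,
# survives any two simultaneous drive failures.
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]

# Alternative, RAID 10: usable capacity of four drives, generally
# better write performance.
# mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[b-i]

# Record the array so it assembles on boot (path varies by distro;
# CentOS-based XenServer uses /etc/mdadm.conf).
mdadm --detail --scan >> /etc/mdadm.conf
```

The resulting /dev/md0 would then back the local storage repository on that host.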

    • Reid Cooper @Kelly

                          @Kelly You would have to have a SAS controller of some type in there for the drives that are attached for CEPH now.

    • Kelly @Reid Cooper

                            @Reid-Cooper said:

                            @Kelly You would have to have a SAS controller of some type in there for the drives that are attached for CEPH now.

                            There is an onboard controller, but it isn't running any RAID configuration that I can tell.

    • scottalanmiller @Kelly

                              @Kelly said:

                              There is an onboard controller, but it isn't running any RAID configuration that I can tell.

                              It would not be for CEPH. CEPH is a RAIN system, there would be no RAID. But what it was doing isn't an issue. What we care about going forward is what we can do. The SAS controller has the drives attached and that's all that we would care about when looking at the software RAID from MD. The SAS controller isn't what provides the RAID, it is just what attaches the drives.

    • Kelly

                                So, to sum up, the recommendation would be to run XS independently on each of the four, configuring RAID 1 or 10, if possible, and then use XO to manage it all? Is that correct?

    • scottalanmiller @Kelly

                                  @Kelly said:

                                  So, to sum up, the recommendation would be to run XS independently on each of the four, configuring RAID 1 or 10, if possible, and then use XO to manage it all? Is that correct?

                                  That is what I am thinking. Or RAID 6 for more capacity. With eight drives, RAID 6 might be a decent option.
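
To put rough numbers on that trade-off, a back-of-the-envelope sketch (the 2 TB drive size is a placeholder, and since the drives are mismatched, each one effectively counts as the smallest drive in the array):

```python
def raid_usable(n_drives: int, drive_tb: float, level: int) -> float:
    """Rough usable capacity for common RAID levels.
    Mismatched drives all count as the smallest drive in the set."""
    if level == 6:        # two drives' worth of parity
        return (n_drives - 2) * drive_tb
    if level == 10:       # mirrored pairs
        return n_drives // 2 * drive_tb
    raise ValueError("unsupported RAID level")

# Eight drives of a hypothetical 2 TB each:
print(raid_usable(8, 2, 6))   # RAID 6  -> 12 TB usable
print(raid_usable(8, 2, 10))  # RAID 10 -> 8 TB usable
```

With eight drives, RAID 6 gives six drives' worth of capacity against RAID 10's four, at the cost of slower writes during rebuilds.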

    • Dashrender

      How much storage do you have in CEPH now? What kind of resiliency do you have today? I'm completely unfamiliar with CEPH; if you lose a drive, I'm assuming you don't lose an entire array?

    • scottalanmiller @Dashrender

                                      @Dashrender said:

        How much storage do you have in CEPH now? What kind of resiliency do you have today? I'm completely unfamiliar with CEPH; if you lose a drive, I'm assuming you don't lose an entire array?

      CEPH is RAIN. Very advanced. Compare it to Gluster, Lustre, Exablox or Scale's storage.

    • Kelly @Dashrender

                                        @Dashrender said:

        How much storage do you have in CEPH now? What kind of resiliency do you have today? I'm completely unfamiliar with CEPH; if you lose a drive, I'm assuming you don't lose an entire array?

      It is a very cool tool. As @scottalanmiller said, it is a RAIN (I hadn't heard that term before yesterday). Basically it is software that writes to multiple nodes (it is even slow-link aware) and lets you turn commodity hardware into a resilient storage system. I would consider keeping it, but as near as I can tell it is not able to coexist with XS.

      As for total storage, it is pretty low. Each host has less than 20 TB in absolute terms. Since Ceph requires three writes, I'm getting quite a bit less than that in usable capacity.
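
The "three writes" is Ceph's default replicated pool size of 3, so raw capacity divides by roughly three. A back-of-the-envelope sketch, using the 20 TB per-host figure above as an upper bound and ignoring Ceph's own overhead:

```python
# Replicated Ceph pool: usable ~= raw / replica count (default size = 3).
def ceph_usable_tb(raw_tb_per_host: float, hosts: int, replica_size: int = 3) -> float:
    return raw_tb_per_host * hosts / replica_size

# Four hosts at (at most) a hypothetical 20 TB raw each:
print(ceph_usable_tb(20, 4))  # roughly 26.7 TB usable cluster-wide
```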

    • StrongBad

                                          I am pretty sure that CEPH and Xen can coexist, but I don't know about with XenServer. Doing so would likely be very awkward at best.

    • travisdh1 @StrongBad

                                            @StrongBad said:

                                            I am pretty sure that CEPH and Xen can coexist, but I don't know about with XenServer. Doing so would likely be very awkward at best.

                                            Possibly. Now I want to go experiment with XenServer and CEPH.
