Solved: XenServer - Specify cores per VM

IT Discussion
Tags: xen, xenserver

• scottalanmiller

  This is called "core affinity," and it is very bad except in extremely specific use cases: very specific NUMA tuning, or trading throughput for latency in things like low-latency trading (which does not get virtualized). 99.999% of the time this would be a very bad idea, and it is not exposed for a reason: people would cripple their installs. You don't want this unless you really, really understand NUMA, CPU cache hit behavior, and the fact that load balancing will go out the window.

  Basically, the system is tuned for throughput. Core affinity is tuning against throughput. There are use cases for that, but extremely few.
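
  For anyone curious what this looks like in practice, XenServer does expose vCPU pinning through the CLI via the VCPUs-params:mask parameter (shown here for reference only, not as a recommendation). A minimal sketch; the VM name and CPU mask are just illustrative:

    # Look up the VM's UUID by its name-label ("test-vm" is a made-up example)
    VM_UUID=$(xe vm-list name-label=test-vm --minimal)

    # Pin the VM's vCPUs to physical CPUs 0 and 2 (applied when the VM next boots)
    xe vm-param-set uuid="$VM_UUID" VCPUs-params:mask=0,2

    # Remove the mask so the scheduler can float the vCPUs again
    xe vm-param-remove uuid="$VM_UUID" param-name=VCPUs-params param-key=mask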

• DustinB3403

  Thanks for the answer.

  I'm not looking to do it on my installs, but I was curious whether it was even doable.

  Thanks for explaining.

• Dashrender

  Did you just listen to Security Now? Or read about the new bleed flaw? lol

• DustinB3403

  @Dashrender Nope, just a random thought.

• scottalanmiller @Dashrender

  @Dashrender said:

    Did you just listen to Security Now? Or read about the new bleed flaw? lol

  Why, what were they talking about?

• travisdh1 @scottalanmiller

  Adding to what @scottalanmiller already said: I learned about this back in the early 2000s, in SGI's sysadmin training courses. The OS had a way to assign a process to a CPU, but they told you not to bother, because the process scheduler would still assign other processes to that core. The only "correct" way to assign processes to cores was from within the program itself; SGI actually had a library for many programming languages that would properly communicate with the process scheduler in the OS. Of course, the largest single-system-image deployment any of the sysadmins in that class ran was a 2,000-CPU machine (this was 2001), spread out among lots of racks, which caused delays whenever a process got scheduled onto random processors and memory banks all across the system. Today's x86 processors and operating systems at least don't suffer nearly as much from the "a CPU is requesting a memory page that is 300 feet away" problem!

• scottalanmiller @travisdh1

  @travisdh1 Linux does not have that problem today; process affinity and pinning work without a problem. The issue is that it is like using a separate hard drive for every process rather than sharing a RAID array: there are very specific times when it makes sense, but for 99.99% of workloads, trying it will cripple your system.
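
  For reference, on Linux this is exposed through the standard tools. A minimal sketch; the workload name, PID, and CPU numbers are purely illustrative:

    # Launch a command pinned to CPUs 0 and 1
    taskset -c 0,1 ./my_workload

    # Show, then restrict, the CPU affinity of an already-running process
    taskset -cp 12345
    taskset -cp 0,1 12345

    # NUMA-aware pinning: bind both CPU and memory allocation to NUMA node 0
    numactl --cpunodebind=0 --membind=0 ./my_workload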

• travisdh1 @scottalanmiller

  @scottalanmiller said:

    @travisdh1 Linux does not have that problem today; process affinity and pinning work without a problem. The issue is that it is like using a separate hard drive for every process rather than sharing a RAID array: there are very specific times when it makes sense, but for 99.99% of workloads, trying it will cripple your system.

  Right. They were doing things a little oddly just because of the physical size of the systems at the time.

• Dashrender @scottalanmiller

  @scottalanmiller said:

    @Dashrender said:

      Did you just listen to Security Now? Or read about the new bleed flaw? lol

    Why, what were they talking about?

  A new hack called CacheBleed: the ability for one process to detect cache conflicts inside a hyper-threaded core and, through them, extract things like PGP private keys.

  It requires things like what Dustin is asking about; I thought his question was apropos, considering I had just heard about it early yesterday.

• scottalanmiller @Dashrender

  @Dashrender said:

    @scottalanmiller said:

      @Dashrender said:

        Did you just listen to Security Now? Or read about the new bleed flaw? lol

      Why, what were they talking about?

    A new hack called CacheBleed: the ability for one process to detect cache conflicts inside a hyper-threaded core and, through them, extract things like PGP private keys.

    It requires things like what Dustin is asking about; I thought his question was apropos, considering I had just heard about it early yesterday.

  I see. That is really targeted at processes on a CPU, not at VMs. Not that there isn't a theory that VMs could be affected, but there are a few factors:

  • CacheBleed requires that you know a LOT about the other processes that are running in order to figure out what is causing the cache to react that way.
  • It requires that your workloads remain on a single CPU.

  Process affinity would actually dramatically increase this risk rather than lower it. The best defense is the native process-floating behavior, because you never know what other workload shares your cache.

  In cloud environments you are generally protected because even if you discover a key, you never know which key you discovered. It would be like walking through a field and finding a house key with no markings on it: you assume it opens a door somewhere, but which door, and where?

• Dashrender

  Yep, that's why it's currently not a real concern.

• scottalanmiller @Dashrender

  @Dashrender said:

    Yep, that's why it's currently not a real concern.

  And only certain Intel CPUs. AMD users are in the clear right now.
