
    Containers on Bare Metal

    IT Discussion — tags: containers, bare metal
    • scottalanmiller

      LXD is what we use. Very fast, very mature, and good tools for it.
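
      For anyone new to it, a minimal LXD quick-start looks roughly like this (a sketch assuming LXD is installed, e.g. via snap; the image alias and container name are just examples):

      ```shell
      lxd init --auto                 # one-time setup with default storage and networking
      lxc launch ubuntu:22.04 web1    # pull an Ubuntu image and start a container named web1
      lxc exec web1 -- hostname       # run a command inside the container
      lxc list                        # show running containers and their addresses
      ```

      Note that lxc here is the LXD client command, not classic LXC.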

      • Emad R @scottalanmiller

        @scottalanmiller

        Nice. Do you run them with Ceph storage, or do you simply go with the default ZFS?

        • scottalanmiller @Emad R

          @Emad-R said in Containers on Bare Metal:

          @scottalanmiller

          Nice. Do you run them with Ceph storage, or do you simply go with the default ZFS?

          ZFS isn't the default on any system that I know of. But definitely not Ceph; Ceph isn't very performant unless you do a lot of extra work (StarWind makes a Ceph acceleration product). ZFS was only the default for Solaris Zones, not LXD. Many LXD installs don't have ZFS as an option. We are normally on XFS.
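
          LXD lets you pick the storage backend per pool, so this is configurable rather than fixed. A hedged sketch (the pool names and sizes below are made up, and each driver needs its host tooling installed):

          ```shell
          lxc storage create pool1 dir                 # plain directory backend, works anywhere
          lxc storage create pool2 zfs size=20GB       # loop-backed ZFS pool (needs the ZFS utilities)
          lxc storage create pool3 btrfs size=20GB     # loop-backed btrfs pool
          lxc launch ubuntu:22.04 c1 --storage pool2   # place a container on a specific pool
          lxc storage list                             # show configured pools and their drivers
          ```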

          • Emad R @scottalanmiller

            @scottalanmiller

            https://lxd.readthedocs.io/en/latest/clustering/
            https://lxd.readthedocs.io/en/latest/storage/

            I think the latest versions, especially with clustering, recommend ZFS storage, which is nice because it can now be added easily as a FUSE filesystem.
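
            As a sketch of how that looks in practice, lxd init can bootstrap a cluster node non-interactively from a preseed file; the address, node name, and pool below are hypothetical:

            ```shell
            # preseed.yaml (hypothetical first node of a cluster):
            #   config:
            #     core.https_address: 10.0.0.1:8443
            #   cluster:
            #     server_name: node1
            #     enabled: true
            #   storage_pools:
            #   - name: local
            #     driver: zfs
            lxd init --preseed < preseed.yaml   # apply the preseed to this node
            ```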

            • stacksofplates @scottalanmiller

              @scottalanmiller said in Containers on Bare Metal:

              LXD is what we use. Very fast, very mature, and good tools for it.

              @Emad-R Yeah, LXD has taken the OCI image idea and applied it to LXC. LXC was doing something kind of like that later on: when you ran lxc-create -t download, it would look at a text file with links to tarballs to download. LXD has incorporated images from the beginning, which has given them a lot of flexibility, like updating and layering.
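
              To make the contrast concrete, here is roughly what each workflow looks like (the distro releases and names are illustrative):

              ```shell
              # Classic LXC: the download template fetches a prebuilt rootfs tarball
              lxc-create -t download -n c1 -- -d ubuntu -r focal -a amd64

              # LXD: images are first-class objects with their own store and commands
              lxc image list images: debian    # browse the public image server
              lxc launch images:debian/12 c2   # launch straight from an image
              lxc image list                   # show locally cached images
              ```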

              • Emad R @Emad R

                @Emad-R

                Very good read:

                https://linuxhint.com/lxd-vs-docker/

                • scottalanmiller @Emad R

                  @Emad-R said in Containers on Bare Metal:

                  @Emad-R

                  Very good read:

                  https://linuxhint.com/lxd-vs-docker/

                  That is a good way to break them down; I liked that.

                  • StorageNinja (Vendor)

                    A few things...

                    1. Google and AWS don't bother running them on bare metal. While some people do, they tend to be shops that like running lots of Linux on bare metal, and for them it's an OS/platform choice rather than a hypervisor vs. non-hypervisor choice. The majority of the containers in people's datacenters and in the cloud are in VMs.

                    2. With the Project Pacific announcement at VMworld, VMware called out that they get better performance with their container runtime in a virtual machine than on bare-metal Linux container hosts. (This makes sense once you understand that the vSphere scheduler does a better job at NUMA-aware packing than the Linux kernel does. Kit explained this on my podcast last week if anyone cares to listen.)

                    3. I run them on bare metal on my Pi4 cluster because I'm still waiting on drivers and EFI to be written for it so I can run a proper hypervisor on them.

                    • Emad R @StorageNinja

                      @StorageNinja

                      I would like to hear more about your Pi 4 cluster since the Pi 4 is fairly new. Any links, hints, or suggested products?

                      • StorageNinja (Vendor) @Emad R

                        @Emad-R Eh, I got 6 of them with the maximum memory (4 GB). Also looking to acquire some beefier ARM platforms that I can run experimental ESXi builds on. https://shop.solid-run.com/product/SRM8040S00D16GE008S00CH/ has caught my eye, but there are a few other ARM packages that are also reasonably priced and have different capabilities (Jetson, etc., from Nvidia for CUDA). Was really hoping Rancher would sort out an ARM install, but eh, I might end up running that on my Intel NUCs.

                        • scottalanmiller @StorageNinja

                          @StorageNinja said in Containers on Bare Metal:

                          Also looking to acquire some beefier ARM platforms that I can run experimental ESXi builds on. - https://shop.solid-run.com/product/SRM8040S00D16GE008S00CH/ has caught my eye

                          Now this looks really sweet. That's some cool stuff... both the hardware and ESXi on ARM. $459 is a little high for that CPU and only 16GB, but not horrible.
