    Networking and 1U Colocation

    IT Discussion
    Tags: colocation, networking, virtualization, software defined network
    • JaredBusch
      last edited by

      Honestly, if it were me, I would just add a NIC to your host on the LAN and let it reside on your LAN like everything else.

      I know other things were mentioned and recommended out of paranoid security concerns, but realistically, that mitigates such a small risk that I generally find it not worth the effort.

      You have to go through the thought exercise to determine that, though.

      • EddieJennings @JaredBusch
        last edited by

        @jaredbusch said in Networking and 1U Colocation:

        Honestly, if it were me, I would just add a NIC to your host on the LAN and let it reside on your LAN like everything else.

        I know other things were mentioned and recommended out of paranoid security concerns, but realistically, that mitigates such a small risk that I generally find it not worth the effort.

        You have to go through the thought exercise to determine that, though.

        There's a gap in my understanding. The LAN interface of the VyOS VM is connected to virbr1. For the host, do you mean creating an interface and [attaching] it to virbr1, or assigning an IP address to virbr1?

        This is how the virtual networks appear on the host (from nmcli).

        virbr0: connected to virbr0
        	"virbr0"
        	bridge, 52:54:00:55:91:EB, sw, mtu 1500
        	inet4 192.168.122.1/24
        
        virbr1: connected to virbr1
        	"virbr1"
        	bridge, 52:54:00:BF:E7:FB, sw, mtu 1500
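        For concreteness, the second reading (giving the host itself an address on virbr1) might look like this with nmcli; the 192.168.100.2/24 address here is a placeholder, not something from my actual config:

        ```sh
        # Hypothetical: put the host on the VM LAN by addressing virbr1 directly.
        # The address/subnet are placeholders; use whatever the VyOS LAN subnet is.
        nmcli connection modify virbr1 ipv4.method manual ipv4.addresses 192.168.100.2/24
        nmcli connection up virbr1
        ```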
        
        • stacksofplates
          last edited by stacksofplates

          If you're not having other people connect to it and it's just for testing, I'd just let the connection go to the host (SSH and Cockpit) and then join all of your VMs to ZeroTier.
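          For anyone following along, joining a VM to ZeroTier is roughly this; the network ID below is a placeholder, not a real network:

          ```sh
          # On each VM, with the zerotier-one service installed and running.
          sudo zerotier-cli join 0123456789abcdef   # placeholder 16-digit network ID
          sudo zerotier-cli listnetworks            # shows the network once the controller authorizes it
          ```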

          • EddieJennings @stacksofplates
            last edited by EddieJennings

            @stacksofplates said in Networking and 1U Colocation:

            If you're not having other people connect to it and it's just for testing, I'd just let the connection go to the host (SSH and Cockpit) and then join all of your VMs to ZeroTier.

            Would you expose your hypervisor to the Internet with no firewall in between?

            • stacksofplates @EddieJennings
              last edited by stacksofplates

              @eddiejennings said in Networking and 1U Colocation:

              @stacksofplates said in Networking and 1U Colocation:

              If you're not having other people connect to it and it's just for testing, I'd just let the connection go to the host (SSH and Cockpit) and then join all of your VMs to ZeroTier.

              Would you expose your hypervisor to the Internet with no firewall in between?

              For your lab, as long as you use strong SSH keys, I don't see an issue with it. I've not tried it, but you should be able to set your hosts.allow to permit only your workstation's ZeroTier IP address. You could also just use an SSH tunnel for Cockpit if you want to use it.
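              Assuming sshd on the host is built with TCP wrappers support (common on the distros of that era), the files would look something like this; the ZeroTier address is a placeholder:

              ```
              # /etc/hosts.allow -- permit SSH only from the workstation's ZeroTier IP
              sshd: 10.147.17.25

              # /etc/hosts.deny -- refuse SSH from everywhere else
              sshd: ALL
              ```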

              • stacksofplates
                last edited by

                You can also do extra hardening with something like SCAP.
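                As a sketch, an OpenSCAP run against a scap-security-guide profile looks like this; the data-stream path and profile name assume a CentOS 7 host with scap-security-guide installed:

                ```sh
                # Evaluate the host against the SSG "standard" profile and write an HTML report.
                sudo oscap xccdf eval \
                  --profile xccdf_org.ssgproject.content_profile_standard \
                  --report /tmp/scap-report.html \
                  /usr/share/xml/scap/ssg/content/ssg-centos7-ds.xml
                ```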

                • EddieJennings
                  last edited by

                  The story has evolved a bit, as Colocation America gave me a /29 network rather than a /30, so it's possible that I could just assign a public IP to the other physical NIC on my server -- though that doesn't seem like good practice.

                  It seems like there has to be a way for my host to be able to access the Internet through one of the guests.

                  • stacksofplates @EddieJennings
                    last edited by

                    @eddiejennings said in Networking and 1U Colocation:

                    The story has evolved a bit, as Colocation America gave me a /29 network rather than a /30, so it's possible that I could just assign a public IP to the other physical NIC on my server -- though that doesn't seem like good practice.

                    It seems like there has to be a way for my host to be able to access the Internet through one of the guests.

                    The only way to do that is a full bridge. Either a normal bridge or an OVS bridge.
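                    A minimal sketch of a normal Linux bridge with nmcli; the physical interface name eno1 is an assumption, and doing this over the colo link risks locking yourself out:

                    ```sh
                    # Create the bridge and enslave the physical NIC to it (interface names assumed).
                    nmcli connection add type bridge ifname br0 con-name br0
                    nmcli connection add type bridge-slave ifname eno1 master br0
                    nmcli connection up br0
                    # VMs then attach to br0 instead of a NAT'd virbr network.
                    ```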

                    • stacksofplates
                      last edited by

                      I just tested it on my one hypervisor. If I set hosts.allow to my laptop's ZT address and hosts.deny to ALL, I can still SSH to the KVM host over ZT.

                      • EddieJennings @stacksofplates
                        last edited by

                        @stacksofplates said in Networking and 1U Colocation:

                        I just tested it on my one hypervisor. If I set hosts.allow to my laptop's ZT address and hosts.deny to ALL, I can still SSH to the KVM host over ZT.

                        So applying that to my scenario, one of your KVM host's NICs would have a public IP address, correct?

                        There was one point you made that I missed: eventually, there will be others connecting to the VMs. I'm planning on running a Nextcloud VM, a PBX, and Zimbra.

                        • EddieJennings
                          last edited by

                          This looks like it worked. I added this line to the appropriate network using virsh net-edit:

                          <route address='0.0.0.0' prefix='0' gateway='192.168.100.1'/> (yes, the final subnet decision was to use 192.168.100.0/24).

                          That created a default route, which shows up with ip route show. If I can get DNS resolution, then I'm all set 😄 .
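                          For context, that <route> line sits inside the libvirt network definition, roughly like this; the name and the <ip> stanza here are illustrative, not copied from my actual config:

                          ```xml
                          <network>
                            <name>lan</name>
                            <bridge name='virbr1'/>
                            <ip address='192.168.100.2' netmask='255.255.255.0'/>
                            <route address='0.0.0.0' prefix='0' gateway='192.168.100.1'/>
                          </network>
                          ```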

                          • EddieJennings
                            last edited by

                            And for DNS, this worked.

                            nmcli connection mod virbr1 ipv4.dns "8.8.8.8"
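                            To make it stick and confirm it took, something like:

                            ```sh
                            nmcli connection up virbr1                # re-apply the profile
                            nmcli -g ipv4.dns connection show virbr1  # should print 8.8.8.8
                            ```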

                            • Obsolesce @EddieJennings
                              last edited by

                              @eddiejennings said in Networking and 1U Colocation:

                              @stacksofplates said in Networking and 1U Colocation:

                              If you're not having other people connect to it and it's just for testing, I'd just let the connection go to the host (SSH and Cockpit) and then join all of your VMs to ZeroTier.

                              Would you expose your hypervisor to the Internet with no firewall in between?

                              I forget what hypervisor you're doing and don't feel like scrolling up, so I'm assuming KVM.

                              But I see no reason to treat the hypervisor much differently than a VPS, which is basically directly exposed to the public too.

                              For your hypervisor, you can do what I do for my VPS: ONLY allow SSH, only key-based access, and no root login via SSH. Also make sure you have logwatch and fail2ban going.
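                              The sshd_config lines for that (standard OpenSSH options) are:

                              ```
                              # /etc/ssh/sshd_config
                              PasswordAuthentication no
                              PubkeyAuthentication yes
                              PermitRootLogin no
                              ```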

                              • Obsolesce
                                last edited by

                                Another good idea is to use something to keep your hypervisor in a specified state, such as SaltStack. That's what I use on my VPS, so I always know a bunch of specific things are ALWAYS in check.
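                                A minimal Salt state sketch for pinning SSH configuration; the file paths and the managed-file source are assumptions, not my actual states:

                                ```yaml
                                # /srv/salt/ssh.sls -- keep sshd installed, configured, and running
                                openssh-server:
                                  pkg.installed: []

                                /etc/ssh/sshd_config:
                                  file.managed:
                                    - source: salt://files/sshd_config   # placeholder source path

                                sshd:
                                  service.running:
                                    - enable: True
                                    - watch:
                                      - file: /etc/ssh/sshd_config
                                ```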

                                • stacksofplates @Obsolesce
                                  last edited by

                                  @tim_g said in Networking and 1U Colocation:

                                  fail2ban

                                  Fail2ban does nothing for key-based access. The connection is denied before fail2ban even sees it.
