    Greenfield Kubernetes Architecture and Security

    IT Discussion
    • IRJ
      last edited by IRJ

      Let's say your organization has 50-100 different applications running on Kubernetes. Historically, each cluster has run one application.

      You have the ability to greenfield and re-architect how everything is built.

      1. Would you keep one cluster per application and use network policies to control data flow?

      2. Would you break up clusters similarly to how you'd separate a 3- or 4-tier web app? One advantage of this approach is that you could perhaps keep DevOps engineers from accessing database clusters at all. The disadvantage, of course, is complexity on the network side.

      3. Would you create a few Kubernetes clusters, separate applications by namespace, and use network policies to filter traffic? (There's a rough sketch of such a policy below.)

      Note: For the sake of discussion, Kubernetes will be hosted on a major CSP (AWS, Azure, or GCP), so there is no need to worry about hardware requirements for this topic.
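
      To make option 3 concrete, here's a rough sketch of the kind of NetworkPolicy I mean (the namespace names, labels, and port are made up for illustration): every pod in an app's namespace only accepts traffic from its own namespace plus one named peer namespace.

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: app1-allow-frontend
        namespace: app1                        # hypothetical per-app namespace
      spec:
        podSelector: {}                        # applies to every pod in the namespace
        policyTypes:
          - Ingress
        ingress:
          - from:
              - podSelector: {}                # traffic from within the same namespace
              - namespaceSelector:
                  matchLabels:
                    kubernetes.io/metadata.name: frontend   # one named peer namespace
            ports:
              - protocol: TCP
                port: 8080                     # assumed app port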

      • JaredBusch @IRJ
        last edited by JaredBusch

        @irj said in Greenfield Kubernetes Architecture and Security:

        your organization has 50-100 different applications running on Kubernetes

        The organizations I work with have 1-3 applications running their business and nothing using Kubernetes.

        @irj said in Greenfield Kubernetes Architecture and Security:

        create a few Kubernetes clusters, separate applications by namespace, and use network policies to filter traffic?

        But, to me, I would go this route. Of course, that is without enough knowledge of Kubernetes best practices. But I like separating things along logical lines like namespace, task, department group, etc.

        • IRJ @JaredBusch
          last edited by

          @jaredbusch said in Greenfield Kubernetes Architecture and Security:

          @irj said in Greenfield Kubernetes Architecture and Security:

           create a few Kubernetes clusters, separate applications by namespace, and use network policies to filter traffic?

           But, to me, I would go this route. Of course, that is without enough knowledge of Kubernetes best practices. But I like separating things along logical lines like namespace, task, department group, etc.

          Interestingly enough that was also a recommendation from someone who I consider a Kubernetes guru.

           One thing I am thinking about is controlling access via IAM accounts on a CSP. It's easier to separate roles and permissions when you can split them into different projects or VPCs on a major cloud provider. I am still learning Kubernetes, but I wonder how much effort it would be to manage permissions at the namespace level.
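
           Just to think it through, a namespace-level permission setup would look roughly like this (all names here are hypothetical): a Role scoped to one app's namespace, bound to a group that the CSP's IAM/OIDC integration maps users into.

           apiVersion: rbac.authorization.k8s.io/v1
           kind: Role
           metadata:
             name: app1-developer
             namespace: app1                  # hypothetical per-app namespace
           rules:
             - apiGroups: ["", "apps"]
               resources: ["pods", "deployments", "services", "configmaps"]
               verbs: ["get", "list", "watch", "create", "update", "patch"]
           ---
           apiVersion: rbac.authorization.k8s.io/v1
           kind: RoleBinding
           metadata:
             name: app1-developer-binding
             namespace: app1
           subjects:
             - kind: Group
               name: app1-devs                # group mapped in from the CSP's IAM/OIDC
               apiGroup: rbac.authorization.k8s.io
           roleRef:
             kind: Role
             name: app1-developer
             apiGroup: rbac.authorization.k8s.io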

          • stacksofplates @IRJ
            last edited by stacksofplates

            @irj said in Greenfield Kubernetes Architecture and Security:

            Let's say your organization has 50-100 different applications running on Kubernetes. Historically, each cluster has run one application.

            You have the ability to greenfield and re-architect how everything is built.

            1. Would you keep one cluster per application and use network policies to control data flow?

            2. Would you break up clusters similarly to how you'd separate a 3- or 4-tier web app? One advantage of this approach is that you could perhaps keep DevOps engineers from accessing database clusters at all. The disadvantage, of course, is complexity on the network side.

            3. Would you create a few Kubernetes clusters, separate applications by namespace, and use network policies to filter traffic?

            Note: For the sake of discussion, Kubernetes will be hosted on a major CSP (AWS, Azure, or GCP), so there is no need to worry about hardware requirements for this topic.

            1 will get really expensive and complicated really fast.

             2 is complicated on the networking side, but less complicated in that you need fewer RoleBindings (also more expensive).

             3 makes the most sense but adds complexity with ServiceAccounts and RoleBindings. Let the namespaces be the logical separation. Use a mesh like Istio/Kuma for mTLS. If you pay for Kuma, you get OPA integration in the sidecar with a CRD for the policy; if you use Istio, you still get OPA, but I believe it's a ConfigMap that you need to load into a central OPA (I can't remember). This way you can define policy for each app, but your app doesn't need to understand how the authentication mechanisms work.
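
             For reference, the mesh-wide mTLS part of that is tiny on the Istio side. A sketch, assuming Istio is installed with istio-system as its root namespace:

             apiVersion: security.istio.io/v1beta1
             kind: PeerAuthentication
             metadata:
               name: default
               namespace: istio-system        # root namespace, so this applies mesh-wide
             spec:
               mtls:
                 mode: STRICT                 # sidecars reject any plain-text traffic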

            I'd recommend Rancher for an easier RBAC solution and more logical separation of projects on top of namespaces.

            • stacksofplates
              last edited by stacksofplates

               Here's an example of a Rego policy for OPA:

               package envoy.authz

               import input.attributes.request.http as http_request

               default allow = false

               # Pull the bearer token out of the Authorization header and return its claims
               token = claimInfo {
                   parts := split(http_request.headers.authorization, " ")
                   claims := io.jwt.decode(parts[1])
                   claimInfo := claims[1]
               }

               # Fetch the record being requested from the app itself (cached for an hour)
               checkRecord = response {
                   response := http.send({
                       "url": sprintf("http://localhost:8080%s", [http_request.path]),
                       "method": "GET",
                       "force_cache": true,
                       "force_cache_duration_seconds": 3600
                   })
               }

               allow {
                   requester_is_owner
               }

               allow {
                   method_is_post
               }

               method_is_post {
                   http_request.method == "POST"
               }

               # Reads are only allowed when the record's owner matches the token's subject
               requester_is_owner {
                   checkRecord.body.username == token.sub
               }
              

               The awesome thing about this is that your app doesn't need to understand roles, users, etc. OPA requests the record from the app, takes the JWT from the request, and compares the record's owner (stored in username) to the sub claim in the token. If they don't match, you get a 403; if they do match, the record is returned to you. The app just needs to return the record and doesn't care about auth.

              You can also use OPA as a K8s admission controller to verify that resources have correct annotations, labels, policies, etc. It's a really awesome tool.
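
              As a rough sketch of the admission-controller side (using Gatekeeper; the template and kind names here are invented, and Gatekeeper's library ships ready-made equivalents), a ConstraintTemplate whose Rego rejects namespaces missing a team label, plus the Constraint that applies it:

              apiVersion: templates.gatekeeper.sh/v1beta1
              kind: ConstraintTemplate
              metadata:
                name: k8srequiredteamlabel
              spec:
                crd:
                  spec:
                    names:
                      kind: K8sRequiredTeamLabel
                targets:
                  - target: admission.k8s.gatekeeper.sh
                    rego: |
                      package k8srequiredteamlabel

                      # Flag any admitted object that is missing a "team" label
                      violation[{"msg": msg}] {
                        not input.review.object.metadata.labels.team
                        msg := "every resource needs a team label"
                      }
              ---
              apiVersion: constraints.gatekeeper.sh/v1beta1
              kind: K8sRequiredTeamLabel
              metadata:
                name: require-team-label
              spec:
                match:
                  kinds:
                    - apiGroups: [""]
                      kinds: ["Namespace"]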

              • IRJ @stacksofplates
                last edited by

                @stacksofplates said in Greenfield Kubernetes Architecture and Security:


                1 will get really expensive and complicated really fast.

                 2 is complicated on the networking side, but less complicated in that you need fewer RoleBindings (also more expensive).

                 3 makes the most sense but adds complexity with ServiceAccounts and RoleBindings. Let the namespaces be the logical separation. Use a mesh like Istio/Kuma for mTLS. If you pay for Kuma, you get OPA integration in the sidecar with a CRD for the policy; if you use Istio, you still get OPA, but I believe it's a ConfigMap that you need to load into a central OPA (I can't remember). This way you can define policy for each app, but your app doesn't need to understand how the authentication mechanisms work.

                 1.) Is that because you have masters for each cluster? But if you combined all the clusters, your masters would still need to scale out, right? Why is it complicated? It seems to me that for organizing backups and for administration it's probably the easiest.

                 2.) You are saying you'd have a cluster called postgresql and then have namespaces like app1-postgresql, app2-postgresql, app3-postgresql, etc. If you're backing up an entire application, you would need to create some type of orchestration to restore multiple clusters simultaneously to bring the application back up.

                3.) I need to do some research and reading on this before I can ask more questions 📖

                • IRJ
                  last edited by IRJ

                   Also, another related question: would you even use Kubernetes for databases, or would it be better to use a hosted service like RDS?

                  • stacksofplates @IRJ
                    last edited by

                    @irj said in Greenfield Kubernetes Architecture and Security:

                    @stacksofplates said in Greenfield Kubernetes Architecture and Security:


                    1 will get really expensive and complicated really fast.

                     2 is complicated on the networking side, but less complicated in that you need fewer RoleBindings (also more expensive).

                     3 makes the most sense but adds complexity with ServiceAccounts and RoleBindings. Let the namespaces be the logical separation. Use a mesh like Istio/Kuma for mTLS. If you pay for Kuma, you get OPA integration in the sidecar with a CRD for the policy; if you use Istio, you still get OPA, but I believe it's a ConfigMap that you need to load into a central OPA (I can't remember). This way you can define policy for each app, but your app doesn't need to understand how the authentication mechanisms work.

                     1.) Is that because you have masters for each cluster? But if you combined all the clusters, your masters would still need to scale out, right? Why is it complicated? It seems to me that for organizing backups and for administration it's probably the easiest.

                     2.) You are saying you'd have a cluster called postgresql and then have namespaces like app1-postgresql, app2-postgresql, app3-postgresql, etc. If you're backing up an entire application, you would need to create some type of orchestration to restore multiple clusters simultaneously to bring the application back up.

                    3.) I need to do some research and reading on this before I can ask more questions 📖

                     1. It would be more complicated because you lose the aspects of Kube that make it helpful, like service discovery. If you wanted that, you'd have to have your mesh span multiple clusters. And if you don't have a mesh, you'd have to use an ingress for every single thing that talks to your app in your cluster. And it's a big waste of resources.

                     2. I prob wouldn't recommend 2 at all. If you're running Kube in a company like yours, you should be using microservices. Each microservice should just have its own database, which could be either a document store or a single table (or a couple if you really need a relation). This way the engineers can access their own DB and access other info through the API contract of the other microservices. 3-tier apps are kind of legacy at this point.

                    • stacksofplates @IRJ
                      last edited by

                      @irj said in Greenfield Kubernetes Architecture and Security:

                       Also, another related question: would you even use Kubernetes for databases, or would it be better to use a hosted service like RDS?

                       It depends. It's valid either way. However, things like DynamoDB can get stupid expensive really quickly, so it's valuable to run those in-cluster and just pay for the PVCs used.
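
                       To illustrate the cost point, the in-cluster option boils down to the database pods claiming block storage through PVCs. A sketch, assuming EKS with its default gp2 EBS StorageClass (names and size are illustrative):

                       apiVersion: v1
                       kind: PersistentVolumeClaim
                       metadata:
                         name: app1-postgres-data
                         namespace: app1             # hypothetical per-app namespace
                       spec:
                         accessModes:
                           - ReadWriteOnce
                         storageClassName: gp2       # assumes EKS's default EBS StorageClass
                         resources:
                           requests:
                             storage: 20Gi           # you pay for this volume, not a managed-DB instance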
