Posts by stacksofplates

    • Static site in a CI/CD Pipeline

      So there are a multitude of tools you could use for this, but here's what I'll be using:

      • GitLab
      • GitLab CI/CD
      • Asciidoctor

      I prefer Asciidoctor over Markdown because it has a standard, so different projects aren't implementing their own versions of the tool. A good example of an Asciidoctor site is the Groovy documentation: http://docs.groovy-lang.org/docs/next/html/documentation/

      This is assuming you have a repository created in GitLab and you know how to use Git.

      Let's create our sample Asciidoctor page:

      = Test Site
      [email protected]
      :toc2:
      :sectnums:
      :icons: font
      :experimental: 
      
      == This is a header
      Here's some sample text. It's pretty cool. Enabling experimental lets you use keyboard icons like when you press 
      kbd:[Ctrl + J]
      
      NOTE: This is a test site.
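
      If you have the Asciidoctor gem installed, you can preview the page locally before setting up any CI:

      # renders site.adoc to site.html in the same directory
      asciidoctor site.adoc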
      

      To set up your pipeline, you will need to create a .gitlab-ci.yml file.

      Here's the contents:

      image: asciidoctor/docker-asciidoctor
      
      html:
        script: 
          - asciidoctor site.adoc
        artifacts:
          paths:
            - site.html
      

      Just adding these will allow you to run the CI/CD process on shared runners. This uses the Asciidoctor Docker image to convert the site into an HTML page. It doesn't deploy it anywhere; it's just stored as an artifact in GitLab. However, you could deploy it to another server or wherever you like.
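
      If you'd rather not install the gem, you can also reproduce the conversion locally with the same image the pipeline uses (a sketch, assuming Docker is installed; the image looks for your files under /documents):

      docker run --rm -v "$(pwd):/documents" asciidoctor/docker-asciidoctor asciidoctor site.adoc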

      I created all of this in a test repository for you to browse. Here's the link https://gitlab.com/hooksie1/test-pipeline-site

      To get the artifact, go to the CI/CD section on the left and click Pipelines. Then you can view the job and its build step with the artifact.

      posted in IT Discussion
    • RE: Training Sessions

      I like @JaredBusch's idea of a PBX and VoIP

      posted in MangoCon
    • I won a Spiceworks shirt!

      I got it in the mail today, and didn't know what it was because it was just in a postal plastic bag. Great surprise!

      posted in Water Closet
    • Small Ansible Write-up

      Just a quick write-up of some stuff I was doing with Ansible today.

      This is all on a Fedora 23 machine.

      Using an ssh key makes all of this easier:

      ssh-keygen -t rsa
      

      Then:

      ssh-copy-id -i ~/.ssh/id_rsa.pub <user>@<ip address of remote server>
      

      Install Ansible:

      sudo dnf -y install ansible
      

      The first thing I did was make a new hosts file to clean things up. Move the default one out of the way:

      sudo mv /etc/ansible/hosts /etc/ansible/hosts.old
      

      Then make another one:

      sudo touch /etc/ansible/hosts
      

      The hosts file holds all of your host names and IP addresses. Make a couple of groups for your servers:

      [webservers]
      jhbcomputers.com 
      webserver.com
      
      [local]
      10.1.10.2
      10.1.10.5
      

      There are a couple of options I've used in my hosts file that may be helpful. My website is behind Cloudflare, so if I SSH to the domain name it doesn't go anywhere. You can set a host name and SSH port like this:

      [webservers]
      xxx.xxx.xxx.xxx ansible_ssh_port=<custom port> ansible_host=<domain>
      

      Once the hosts file is set up we can start running commands.

      ansible webservers -m ping
      

      returns:

      server1 | success >> {
          "changed": false, 
          "ping": "pong"
      }
      
      server2 | success >> {
          "changed": false, 
          "ping": "pong"
      }
      

      Another example would be getting uptime from all of your servers:

      ansible webservers -m command -a 'uptime'
      
      server1 | success | rc=0 >>
      01:34:21 up  5:17,  1 user,  load average: 0.00, 0.02, 0.05
      
      server2 | success | rc=0 >>
      01:34:23 up  5:15,  1 user,  load average: 0.00, 0.01, 0.05
      

      The -m argument tells Ansible which module to use, and the -a argument tells it what arguments to pass to the module.
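
      For instance, a quick sketch that checks disk space on the local group from the example hosts file above:

      ansible local -m command -a 'df -h'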

      You could run everything from ad hoc commands like this, but that could get old.

      Here's an example playbook I created to update a couple webservers:

      ---
      - hosts: webservers
        gather_facts: yes
        remote_user: john
        tasks:
         - name: Backup Drupal
           shell: chdir=/var/www/html/{{ansible_host}} drush archive-dump
      
         - name: Update
           yum: name=* 
                state=latest
                update_cache=yes
           become: yes
           become_method: sudo
      

      The playbook is stored in a .yml file. These always start with three dashes (---) at the top.

      There are a couple of things going on here. First, the playbook sees we are running this against the webservers group in our hosts file.

      gather_facts will grab a ton of info about the server and store it in variables. It will store things like the Linux distro, user directory, user ID, user GID, disks, amount of free space, SSH key, and a ton more.
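
      If you want to see everything it collects, you can dump the facts with the setup module ad hoc:

      ansible webservers -m setup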

      remote_user is the user the commands run as on the remote system.

      The tasks section holds the tasks to be completed. Mine is really simple, it just has a couple of tasks. You can also define handlers, which run when a task notifies them (for example, restarting a service after a config change), but I didn't need them for this.

      The first task that runs is named Backup Drupal. It runs these commands:

      cd /var/www/html/<sitename> 
      

      and then does

      drush archive-dump
      

      Each website is stored in a folder named after the domain. So {{ ansible_host }} grabs the host name from the hosts file (this is the ansible_host=<name> part) and places it in the command. Drush is a utility for Drupal that lets you run a ton of stuff from the CLI. archive-dump creates a backup of the web folder, does a MySQL dump of the database, and saves it all in the home folder of the user who ran the command. This way, if something happens after the system update in the second task, I can just run drush archive-restore and it will pull everything back in.
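
      The restore side would look something like this (a sketch; the archive path is wherever archive-dump saved it, by default under ~/drush-backups):

      drush archive-restore <path-to-archive>.tar.gz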

      The next task that runs is the system update task. It uses the yum module and updates (state=latest) all packages (name=*). I don't know if I need update_cache=yes, but it was in someone else's write-up so I used it. I may not need it; here's the Ansible doc on it:

      "Run the equivalent of apt-get update before the operation. Can be run as part of the package installation or as a separate step."

      The last two lines tell the task to run with sudo. Add -K (--ask-become-pass) when you run the playbook and Ansible will prompt you for the sudo password.

      To run this playbook you would type:

      ansible-playbook <name-of-playbook>.yml -K
      
      posted in IT Discussion ansible linux centos 7
    • RE: What Are You Doing Right Now

      @MattSpeller said:

      @RojoLoco http://community.spiceworks.com/topic/1409559-accidentally-re-initialized-raid10-array-for-about-10-seconds?source=homepage-feed

      Tobak Slovakian
      VP IT/CIO at CompanySomewhere, Somestate, United States. 34 years IT experience. A+, MCSE, Network+, Security+.

      There was another post from a guy a while ago asking for router recommendations. A lot of people recommended an EdgeRouter and a Ubiquiti AP. He made another post saying he couldn't get the SSID to work (something stupid like that) and then finally ended up saying he was sending it back and getting something else.

      I looked at his profile, 25 years in IT.......

      These people must be counting from when they first heard the word "computer"

      posted in Water Closet
    • RE: Ahh the lovely sound of a clicking Disk. . .

      If you put a magnet over it, it will keep the head from doing that. 🙂

      posted in IT Discussion
    • RE: What Are You Doing Right Now

      So is posting this question a requirement to be a member of SW?

      https://community.spiceworks.com/topic/1471813-raid-5-vs-raid-10-for-ssd-s-in-2016?source=start&pos=1

      posted in Water Closet
    • RE: Ahh the lovely sound of a clicking Disk. . .

      @art_of_shred said:

      Sounds like that would take care of both issues.

      1. no more clicking
      2. no need to mess with the old hard drive (data mysteriously vanished)

      I call it a win!

      I don't know what happened. I stuck the hard drive behind my 12" woofer so it was out of the way, and now I don't see any data at all.

      posted in IT Discussion
    • RE: What Are You Doing Right Now

      @scottalanmiller said:

      @johnhooks said:

      I don't think I can go on SW any more. I don't want to be a jerk, but it's like people over there are getting a prize for being completely ignorant to how things should work.

      Nearly every newbie is completely unaware of other conversations going on currently, too. So many of the posts that I see are completely opposite of any best practices and when you mention that they act shocked as if they had never heard of such a thing... when ten other threads and thousands of previous ones are talking about what a dumb idea that would be right at the same time. Everyone seems to just throw their opinion over the wall and run away, no back and forth, no learning.

      The issue is that the community isn't getting collectively smarter over time, but less informed. The more threads, the more references, the more examples that we get... the more solidified the best practices become, the less anyone seems to be aware of them. Years and years of "Why SAN doesn't make sense in the SMB" or "Why RAID 5 Died in 2009" (that was first published in 2007, so that's a decade of knowledge at this point, and as Robin pointed out in the article, it was old knowledge that he was presenting and nothing that storage people weren't all already aware of at that point) and people don't just make the mistakes and then ask about it; people who have been in the community for a while will post those things as reckless recommendations and then act as if they've never heard any reason why they don't make sense.

      In every thread about SANs someone says this exact line, "How do you do HA without a SAN?", yet has no idea how they were going to do HA with a SAN. Every. Damn. Time. Happened just twelve hours ago, like clockwork.

      The community is failing to improve. A few people do, of course, but most of those improve and then come here. The percentage of people providing good feedback is decreasing, while the number of "I didn't bother to do any research first, including just paying attention or looking at threads related to my needs as they happen" posts keeps growing.

      That whole virtual or not post is ridiculous. "I do 6 writes per minute to my database so it has to be physical. It takes 4 GB of RAM."

      It wouldn't be bad if they listened to reason. Not knowing something isn't a problem (if you're new), but not listening when other people try to help is ridiculous.

      posted in Water Closet
    • RE: Rocket Chat vs. Jabber

      @tonyshowoff said in Rocket Chat vs. Jabber:

      @dafyre said in Rocket Chat vs. Jabber:

      Is there any reason in particular that it has to work with XMPP? I've not read the OP again... But if you need Desktop / Mobile clients, RocketChat offers those as well.

      Yes, exactly what @scottalanmiller said. We've got a lot of communication, notification, and other things which use it. Our web cam and credit exchange system (the messaging aspects) and our chat system also use it, and while Rocket.Chat is more for internal use in our case, pushing/pulling information from these other services will be easier.

      I'm also going to be testing out a KVM orchestration system tonight that uses XMPP.

      posted in IT Discussion
    • RE: What Are You Doing Right Now

      Bentley just took 4 steps! 😄

      posted in Water Closet
    • RE: ZeroTier and Bind

      So, as with most things, it was my own doing. I actually did set an address for Bind in named.conf. I just needed to add the IP address to listen on and add the zone for recursion, and it's working now. Thanks!

      posted in IT Discussion
    • RE: Explainshell

      @tonyshowoff said in Explainshell:

      I'm not sure why I didn't think of this, given EXPLAIN in MySQL; it seems pretty obvious. It works better than "use the man pages" because when you build up tons of stuff and sprinkle syntactic sugar all over it, it can be slightly difficult even for experts to understand.

      Great idea!

      Ya, I hate when man pages don't have an example of the syntax they want. I've seen some bad enough that they just tell you what the utility is used for with no real explanation.

      posted in IT Discussion
    • Callback Provisioning with Ansible Tower

      One big advantage of Tower is the REST API. This means a host can make an API call to Tower and have Tower run a playbook against that host. One way I use it: we create a systemd timer and service in the %post section of the kickstart config. After the system reboots, it waits a predetermined time and then makes the API call to run the provisioning playbook, making sure the system is in compliance. This can also be done at predetermined intervals, like every 15 minutes.

      To enable this, all you have to do is turn on provisioning callbacks in your job template:

      [screenshot: the provisioning callback option in the job template]

      It gives you the URL and the key. Tower ships with a script for the API call, but you can just use curl also. Here's an example:

      curl --data "host_config_key=d13a7b6e08e84c7d8f412b9754400a00" https://tower.pa.jhbcomputers.com/api/v1/job_templates/26/callback/ -k
      

      This tells Tower to run job template 26, limited to the host making the call.
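
      Tying it back to the kickstart approach mentioned at the top, here's a minimal sketch of what the %post section could drop onto the host (the unit names, paths, and five-minute delay are all made up for illustration; use your own URL and key):

      # a one-shot service that makes the callback
      cat > /etc/systemd/system/tower-callback.service <<'EOF'
      [Unit]
      Description=Provisioning callback to Ansible Tower
      
      [Service]
      Type=oneshot
      ExecStart=/usr/bin/curl --data "host_config_key=<key>" https://<tower host>/api/v1/job_templates/<id>/callback/ -k
      EOF
      
      # a timer that fires once, shortly after boot
      cat > /etc/systemd/system/tower-callback.timer <<'EOF'
      [Unit]
      Description=Run the Tower callback shortly after boot
      
      [Timer]
      OnBootSec=5min
      
      [Install]
      WantedBy=timers.target
      EOF
      
      systemctl enable tower-callback.timer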

      Since the job is run from Tower, it's added to the system reporting and any job template options are applied also (like auto SCM updates).

      [screenshot: the callback job in Tower's jobs view]

      posted in IT Discussion ansible tower configuration management
    • RE: KVM Setup

      @dustinb3403 said in KVM Setup:

      XenServer doesn't have to add software to the hypervisor.

      Neither does KVM. IMO KVM has always been so much easier to use than XenServer. No weird proprietary formats. No strange matching UUIDs of disks to UUIDs of VMs. It's just super easy overall.

      posted in IT Discussion
    • RE: Convert a list of DNS names to IPs

      It's pretty simple with Bash and a list with one name per line. Just do

      while read -r dnsname; do
        nslookup "$dnsname"
      done < dnsnames.txt
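
      If you only want the addresses without nslookup's extra output, getent is an option too (a sketch; it resolves through NSS, so /etc/hosts entries show up as well):

      while read -r dnsname; do
        getent hosts "$dnsname" | awk '{print $1}'
      done < dnsnames.txt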
      
      posted in IT Discussion
    • RE: Linux: History

      you can just do

      gedit ~/.bash_history
      

      or if you like the cli

      cat ~/.bash_history
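
      There's also the history builtin, which includes the current session (those commands aren't written to ~/.bash_history until the shell exits):

      history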
      

      Obv only Bash if that's what you're using.

      posted in IT Discussion
    • Using Make for SysAdmin Work

      When most people hear about make they think of compiling source code. While that is often what it's used for, there are other good uses as well. Here's an example of using it with Terraform/Ansible to build your infrastructure. Make has some advantages over just using a shell script. One is the native concept of dependencies: you can easily define which targets are dependencies of other targets. Make also has some idempotence baked in (as long as everything isn't a phony target). Another advantage is that parallelism is built directly into make: you can run targets in parallel as long as they don't depend on each other.

      So first off here's the contents of the Makefile:

      .PHONY: clean download plan apply inventory roles destroy
      
      all: download plan apply inventory roles
      
      download:
              test -s terraform || wget https://releases.hashicorp.com/terraform/0.12.3/terraform_0.12.3_linux_amd64.zip -O terraform.zip
              test -s terraform || unzip terraform.zip
      
      plan: download
              cd $(env)/services && \
              ../../terraform init -backend-config="bucket=bucket-name"
              cd $(env)/services && \
              ../../terraform plan -out plan
      
      apply: plan
              cd $(env)/services && \
              ../../terraform apply -input=false plan
      
      destroy: download
              cd $(env)/services && \
              ../../terraform init -backend-config="bucket=bucket-name" && \
              ../../terraform destroy -auto-approve
      
      inventory: apply
              cd $(env)/services && \
              cp inventory ../../ansible/
      
      roles:
              git clean -fdx ansible/roles/
              ansible-galaxy install -r ansible/roles/requirements.yml
      
      clean:
              git clean -fd
              git clean -fx
              git reset --hard
      

      It's important to note that you could rename the download target to terraform and make it a file target instead of a phony one. That way you wouldn't need the test -s terraform statements: make checks whether a file named after the target exists before running it, and skips the target if the file is already there. This is where its built-in idempotence comes into play.

      If you look after each target's name you will see the name of another target. These are that target's dependencies. So plan depends on download, apply depends on plan, and so on. However, roles, clean, and download don't have any dependencies.

      Since the all target is first, just running make is the same as running make all, which includes all of the targets listed beside all. We can run them in parallel with make -j. If you don't give -j a number, make runs as many jobs in parallel as possible; if you do give it a number, it runs at most that many jobs at once.
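
      For example (env=dev just matches the environment directory used below):

      make -j env=dev     # run everything, as parallel as the dependency graph allows
      make -j2 env=dev    # same, but at most two jobs at once
      make roles          # run a single target (and only its own dependencies)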

      For this Makefile, if we run make -j env=dev, make will download and unzip Terraform while also downloading the Ansible roles we have defined, in parallel. Then it runs terraform init with the specific bucket we put in the Makefile. Next it runs terraform plan and writes the plan to a file named plan. After that target finishes, it runs terraform apply using the plan file we created. Once that target finishes, it copies the inventory file Terraform created into a directory for Ansible to use afterward.

      This workflow is really nice both for local development and in a pipeline. Now you don't have to edit your CI/CD pipeline stages directly; you can just edit the Makefile and keep only one or two stages in the pipeline. And if you're running locally, you can run each target independently and it will only run the dependencies that specific target needs.

      This specific Makefile also lets you destroy the infrastructure by running make destroy env=dev (or whatever the directory name for your Terraform environment is). And if you want to wipe out your local changes, you can run make clean and it will reset your local git repo to wherever HEAD is pointed.

      posted in IT Discussion make terraform ansible devops idempotent
    • RE: Senior IT Administrator in Gladstone, NJ

      Knowledge of server platforms such as: Windows Server 2010.

      Uh?

      posted in Job Postings
    • RE: Samba problem

      So I figured it out. Just like most every other problem I've had, it was something stupid I did. I initially started with a user "john" and set up permissions, but decided later to use "jhooks". I had accessed the shares with the "john" username first, and that's why some worked. I never added the jhooks user to Samba (must have forgotten, since some shares were working), and that's why I couldn't log into the other shares with that username.

      That will teach me to try to do this stuff on Friday night when I'm tired.

      posted in SAM-SD
    • 1 / 1