Use Netbox as backend for Oxidized
June 24
I had been keeping separate device databases for netbox and oxidized, so I thought I would integrate the two. The information on how to do it was a bit scattered, so I'm documenting it here.
First of all, netbox's REST API gives us everything we need to feed oxidized directly, without any external scripts. There are three parts that need to be built:
1. Come up with a query against the netbox api that yields the hosts you want oxidized to manage. My use case has stacked switches, and the netbox REST view for virtual chassis doesn't have a hostname, so I decided to do it in a more atomic way with a custom field on the device. That gives me a simple way to turn oxidized configuration management on and off for any device; for a switch stack I simply turn it on for the master switch element.
I also added a custom field for the DNS name. You may be able to use your device name instead depending on your setup.
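Before wiring up oxidized, you can sanity-check the query against the api yourself. A curl along these lines (the token variable and URL are placeholders, and the custom field name assumes the one described above) should return just the enrolled, active devices:
$ curl -s -H "Authorization: Token $NETBOX_TOKEN" \
    "https://your.netbox.url/api/dcim/devices/?cf_oxidized_enrolled=true&status=active&limit=0"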
2. The second part of the config is in oxidized. Fortunately oxidized has an http source type that can read the netbox api directly.
You'll need to generate an api token in netbox first. So navigate to Admin, API Tokens, click Add and create an appropriate key. You may want to create a user that just has read-only permissions to the device data for security.
Next we must configure oxidized for an http source. The relevant portion of the oxidized config file looks something like this (the custom field names are the ones described above; adjust them and the URL for your setup):
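source:
  default: http
  http:
    url: https://your.netbox.url/api/dcim/devices/?cf_oxidized_enrolled=true&status=active&limit=0
    scheme: https
    hosts_location: "results"
    map:
      name: custom_fields.dns_name
      ip: custom_fields.dns_name
      model: platform.name
    headers:
      Authorization: "Token your-netbox-api-token"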
It's important to note that when calling the netbox REST api you embed cf_oxidized_enrolled as a filter and also test that the device is active. The last URL parameter (limit=0) is needed if you have more than 50 devices, because the netbox api only returns up to 50 results by default.
The part below map: specifies which fields oxidized "maps" to its own fields. Here I use my custom field for both the name and ip address. I use the platform name as the model; that's a standard field in netbox. If you think you want to map different fields, remember that you can view the api data formats pretty easily by visiting https://your.netbox.url/api, drilling down to the dcim/devices section, and looking at the output. You can also test your filter that way. NOTE: the field you use for model must match a defined model in oxidized.
3. The final step is to create a trigger that will make oxidized refresh its device database when a device is added, removed or modified. We do that in netbox. Create a webhook (Operations, Webhooks, Add) and fill in the fields. The URL field will be your oxidized URL with /reload appended.
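The exact values depend on where oxidized-web is listening (the rest: setting in the oxidized config; 8888 is the usual port). The hostname below is a placeholder:
Name: oxidized-reload
URL: http://oxidized.example.com:8888/reload
HTTP method: GET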
To make sure we only trigger an oxidized config reload when we're modifying a relevant object, we can set an event rule (Operations, Event Rules, Add):
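The settings amount to something like this (the names are up to you):
Name: oxidized-device-change
Object types: DCIM > device
Event types: Object created, Object updated, Object deleted
Action type: Webhook
Action object: oxidized-reload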
Test everything and make sure you can add/delete a device and trigger a reload.
That's it!
Integrating Ansible with Hashicorp Vault
December 15
There are several moving parts of this configuration. We’re using a simple example here; your implementation may have different requirements.
First, vault should be installed and working in a production configuration. For these examples we have the root token in our possession. You may not need this for your deployment if you have the correct permissions already assigned to your local account.
To use these examples you will need the hvac python library and the community.hashi_vault collection installed. Install the hvac library with ‘pip install hvac’ and the collection with ‘ansible-galaxy collection install community.hashi_vault’.
You can use Hashicorp Vault to store all the sensitive variables you use with ansible. However, you still need to have a credential to authenticate ansible against vault. We will explore using a token or alternatively a username/password against an LDAP (Active Directory) backend for authentication.
When using vault from the command line you can use a token from an environment variable, or you can specify your username and type your password when prompted. Generally, you will only use this to write variables for ansible to read. You can also likely do it through the vault GUI if you have set that up.
When running ansible plays, the token or username and password can be set as environment variables manually or from your .profile or .bashrc if you wish. Obviously, it’s more secure to set them manually for your session and not store them on the server.
Let’s go through storing some key value pairs and setting up a policy to access them. We’ll then show how to retrieve them and use them in an ansible playbook.
Let’s put our server address and root token into the environment.
$ export VAULT_ADDR="https://vault.example.com:8200"
$ export VAULT_TOKEN="hvs.hrlxewcxyxyxyxyxyxy"
We want a token that is valid for up to a year and renewed every 30 days, so first we’ll raise the default max lease time for tokens.
$ vault write sys/auth/token/tune max_lease_ttl=8760h
Success! Data written to: sys/auth/token/tune
Let’s create a path for some vmware credentials; we’ll use the key-value storage version 2:
$ vault secrets enable -path=apps kv-v2
Success! Enabled the kv-v2 secrets engine at: apps/
Now we’ll put our vcenter username and password in there.
$ vault kv put apps/vmware vcenter_username="[email protected]"
$ vault kv patch apps/vmware vcenter_password="SomeVerySecretPassword"
Check to make sure you can read them okay.
$ vault kv get apps/vmware
== Secret Path ==
apps/data/vmware

======= Metadata =======
Key                Value
---                -----
created_time       2022-12-13T22:16:24.625488264Z
custom_metadata
deletion_time      n/a
destroyed          false
version            3

========== Data ==========
Key                 Value
---                 -----
vcenter_password    SomeVerySecretPassword
vcenter_username    [email protected]
If you only want one value, use the -field parameter to get it:
$ vault kv get -field vcenter_username apps/vmware
[email protected]
Now that we have a few values stored we need to create a policy to allow access to them. Make up a policy file called app-policy.hcl that looks like this:
path "apps/*"
{
capabilities = ["read"]}
Create a new policy and pull in the file:
$ vault policy write app-reader app-policy.hcl
Success! Uploaded policy: app-reader
Make a token and associate it with the policy:
$ vault token create -display-name app-reader -explicit-max-ttl 8760h -policy app-reader -ttl 720h -renewable
Key                  Value
---                  -----
token                hvs.CAESIFLCj9VhI2IHzKeTNtMOJGPVxyxyxyxy
token_accessor       RJb1xyxyxyxy
token_duration       720h
token_renewable      true
token_policies       ["app-reader" "default"]
identity_policies    []
policies             ["app-reader" "default"]
You can confirm the token can read the secrets by checking its capabilities on the path:
$ vault token capabilities hvs.CAESIFLCj9VhI2IHzKeTNtMOJGPVxyxyxyxy apps/data/vmware
read
Now that secrets are stored, we can read them with a playbook. In this example we will be accessing a VMware vCenter server. The non-sensitive variables are in a file called vars.yml that looks like this:
---
vcenter_hostname: "vcenter.example.com"
vcenter_datacenter: "EXAMPLE"
vcenter_cluster: "Cluster1"
vcenter_datastore: "SAN"
vcenter_validate_certs: false
vcenter_destination_folder: "EXAMPLE"
vcenter_content_library: "example"
vm_template: "linux-ubuntu-20.04lts-v22.11"
vm_state: "poweroff"
ansible_hashi_vault_url: 'https://vault.example.com:8200'
ansible_hashi_vault_auth_method: token
Notice the last two variables defined. They are used by the hashi_vault lookup from the community.hashi_vault collection.
The last thing to do before running a playbook is to store our token into the expected environment variable like this:
$ export ANSIBLE_HASHI_VAULT_TOKEN=hvs.CAESIFLCj9Vxyxyxyxyxy
We can put together a playbook that uses the credentials from vault. Here is one that powers up some VMs from our inventory:
---
- name: start inactive vms
  hosts: localhost
  become: false
  gather_facts: false
  collections:
    - community.vmware
  pre_tasks:
    - include_vars: vars.yml
  tasks:
    - name: get vcenter credentials from hashicorp vault
      set_fact:
        vcenter_username: "{{ lookup('hashi_vault', 'secret=apps/data/vmware:vcenter_username') }}"
        vcenter_password: "{{ lookup('hashi_vault', 'secret=apps/data/vmware:vcenter_password') }}"
    - name: power on
      vmware_guest_powerstate:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        name: "{{ item }}"
        validate_certs: "{{ vcenter_validate_certs }}"
        state: powered-on
      loop: "{{ lookup('inventory_hostnames', 'inactive:&ubuntu22', wantlist=True) }}"
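With the token exported, run it like any other playbook (the filename here is just an assumption about what you saved it as):
$ ansible-playbook start_inactive_vms.yml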
As an alternative, a username and password can be used for authentication. In this scenario we will use LDAP (Active Directory) to authenticate the user.
We use our root credential on vault to enable the ldap authentication mechanism on vault:
$ export VAULT_TOKEN=hvs.hrlxewxyxyxyxy
$ vault auth enable ldap
Now let’s configure ldap to talk to our domain controller. We’ve already built a user called ‘vault’ in active directory so we can bind with that user here. We’re not using any certificates for simplicity; in production it would be a better idea to use ldaps.
$ vault write auth/ldap/config \
url="ldap://dc01.ad.example.com" \
userattr="sAMAccountName" \
userdn="cn=Users,dc=ad,dc=example,dc=com" \
groupdn="cn=Users,dc=ad,dc=example,dc=com" \
groupfilter="(&(objectClass=group)(member={{.UserDN}}))" \
groupattr="memberOf" \
binddn="cn=vault,cn=users,dc=ad,dc=example,dc=com" \
bindpass='FNjRdTTzxyxyxy' \
starttls=false \
userfilter="({{.UserAttr}}={{.Username}})"
Now that we can authenticate to vault via ldap, we can use ldap groups to set user policy. Let’s reuse the policy we built previously and bind it to the vaultusers group:
$ vault write auth/ldap/groups/vaultusers policies=app-reader
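You can read the mapping back to confirm it took:
$ vault read auth/ldap/groups/vaultusers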
Let’s log in and test our access from the CLI. We’ll make sure we have our server location set in the environment first:
$ export VAULT_ADDR=https://vault.example.com:8200/
Now we log in with username/password:
$ vault login -method=ldap username=ansible
Password (will be hidden):
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key                    Value
---                    -----
token                  hvs.CAESICz0NC0UNUTW1nyxyxyxyxy
token_accessor         jXIisU1vaHRGRkxyxyxyxy
token_duration         768h
token_renewable        true
token_policies         ["app-reader" "default"]
identity_policies      []
policies               ["app-reader" "default"]
token_meta_username    ansible
Since ‘ansible’ is a member of the ‘vaultusers’ security group in AD, you can see that we have the “app-reader” policy applied (in addition to default). So let’s see if we can read our vcenter credentials:
$ vault kv get apps/vmware
== Secret Path ==
apps/data/vmware

======= Metadata =======
Key                Value
---                -----
created_time       2022-12-13T22:16:24.625488264Z
custom_metadata
deletion_time      n/a
destroyed          false
version            3

========== Data ==========
Key                 Value
---                 -----
vcenter_password    SomeVerySecretPassword
vcenter_username    [email protected]
Success!
To use username and password credentials in our playbook, we set our username and password in environment variables instead of a token:
$ export ANSIBLE_HASHI_VAULT_USERNAME="ansible"
$ export ANSIBLE_HASHI_VAULT_PASSWORD="YouWontEverGuessIt"
We also need to change the line in our vars.yml to specify ldap instead of token auth:
ansible_hashi_vault_auth_method: ldap
Now our playbook will run using username/password against ldap instead of requiring a token.
Update Unifi Dream Machine Pro certificates with automation
October 27
Since I like to use Letsencrypt for my certificates I wanted to have a method to deploy them and keep them up to date on my Unifi Dream Machine Pro. I did some reading and found that the certs are actually in two different places, one for the web GUI and one for the WIFI guest portal.
I used some shell scripting along with make, ansible and a few cron jobs to get it done.
The files that need to be maintained on the UDM Pro are these:
/mnt/data/unifi-os/unifi/data/keystore - this is a java keystore file
/mnt/data/unifi-os/unifi-core/config/unifi-core.crt - full chain certificate in PEM format
/mnt/data/unifi-os/unifi-core/config/unifi-core.key - key for the PEM cert
I have all my certificate renewals done on a single VM, which is much easier to manage than having each server do its own. So the first step was to copy the certificates to a location where my ansible user could read them after a renewal (they are only accessible by root otherwise). Fortunately, certbot has a hook that can be used to do this. I modified my certbot renewal file in /etc/letsencrypt/renewal/mydomain.com.conf and added a line for the hook:
[renewalparams]
authenticator = dns-rfc2136
account = 656f748e58bf73c472623f243ab7eda1
server = https://acme-v02.api.letsencrypt.org/directory
dns_rfc2136_credentials = /etc/letsencrypt/certbot-rfc2136-credentials
dns_cloudflare_propagation_seconds = 30
dns_rfc2136_propagation_seconds = 90
renew_hook = /usr/local/bin/copy_certs_to_ansible.sh
Of course I have my cron set up to check and run renewals once a day.
30 2 * * * /usr/bin/certbot renew >> /var/log/letsencrypt/renew.log
If the domain actually renews, certbot will execute the renew_hook, which is copy_certs_to_ansible.sh. Here is that script:
#!/bin/sh
#
# Copy the pem versions to ansible certs dir
#
/bin/cp -p /etc/letsencrypt/live/mydomain.com/* /home/ansible/certs/mydomain.com/
#
# Rebuild the java keystore if needed
#
cd /home/ansible/certs; make
#
# Set proper owner and group on all the files
#
/bin/chown ansible /home/ansible/certs/mydomain.com/*
/bin/chgrp ad_admins /home/ansible/certs/mydomain.com/*
You notice the "make" command in the middle. There is a Makefile in /home/ansible/certs/mydomain.com/ that manages the dependencies in order to create the java keystore file. We don't want to simply rebuild it and copy it every day because ansible will then restart the unifi-os service every time. Instead, we only want to regenerate the keystore when the certificate has actually changed. Here is the Makefile from /home/ansible/certs:
#!/usr/bin/make
#TZ="US/New_York"

all: mydomain-com-keystore

mydomain-com-keystore: mydomain.com/keystore

mydomain.com/keystore: mydomain.com/keystore.p12
	@/usr/bin/keytool -importkeystore -destkeystore mydomain.com/keystore -srckeystore mydomain.com/keystore.p12 -srcstoretype PKCS12 -srcstorepass aircontrolenterprise -deststorepass aircontrolenterprise -alias unifi -noprompt

mydomain.com/keystore.p12: mydomain.com/fullchain.pem mydomain.com/privkey.pem
	@/bin/openssl pkcs12 -export -in mydomain.com/fullchain.pem -inkey mydomain.com/privkey.pem -out mydomain.com/keystore.p12 -passout pass:aircontrolenterprise -name 'unifi'
And finally, the ansible playbook that keeps the certificates up to date. I have the UDM Pro defined in my ansible inventory like this (udmpro.yml):
---
all:
  children:
    udmpro:
      hosts:
        gw.mydomain.com:
          ansible_connection: ssh
          ansible_user: "root"
          ansible_ssh_pass: "yourrootpassword"
          ansible_ssh_private_key_file: "~/.ssh/id_rsa"
Here is the playbook that logs into the UDM Pro, makes sure the certificates are up to date, and restarts unifi-os if needed (udmpro_update.yml):
---
- hosts: udmpro
  gather_facts: no
  become: no
  tasks:
    - name: Copy *.mydomain.com certificate file
      ansible.builtin.copy:
        src: /home/ansible/certs/mydomain.com/fullchain.pem
        dest: /mnt/data/unifi-os/unifi-core/config/unifi-core.crt
        owner: root
        group: root
        mode: '0644'
        backup: true
      register: cert

    - name: Copy *.mydomain.com key file
      ansible.builtin.copy:
        src: /home/ansible/certs/mydomain.com/privkey.pem
        dest: /mnt/data/unifi-os/unifi-core/config/unifi-core.key
        owner: root
        group: root
        mode: '0644'
        backup: true
      register: key

    - name: Copy java keystore
      ansible.builtin.copy:
        src: /home/ansible/certs/mydomain.com/keystore
        dest: /mnt/data/unifi-os/unifi/data/keystore
        owner: 902
        group: 902
        mode: '0640'
        backup: true
      register: keystore

    - name: Restart unifi-os if needed
      command: unifi-os restart
      when: cert.changed or key.changed or keystore.changed
That playbook is called via cron once per day and makes sure the most recent letsencrypt certificate is installed on the UDM Pro.
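For reference, the cron entry looks something like this (the paths, log file, and schedule here are just examples):
15 3 * * * /usr/bin/ansible-playbook -i /home/ansible/udmpro.yml /home/ansible/udmpro_update.yml >> /var/log/ansible/udmpro.log 2>&1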
MacOS - change account to admin from command line
September 19
Had to do this today so thought I would share.....
dscl . -append /Groups/admin GroupMembership <username>
You need to be root or use sudo.
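You can read the group back to confirm the change took:
dscl . -read /Groups/admin GroupMembership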
Create a RAID 10 on Mac OS (Monterey)
July 31
I had four WD RED 4TB drives and wanted to get better performance out of them with a mirror but also wanted the ability to replace a drive if (when) one failed.
Indications from online searching said I could create either a RAID 10 or a RAID 0+1. I'm not sure if one is better than the other; I went with the 10 configuration since I'm already familiar with that on my TrueNAS.
Trying to do this from Disk Utility always failed with an error message saying the RAID couldn't be created. The only type of array I was able to create out of two mirrored pairs of drives was a concatenated array; that still gave errors but appeared to work. But that's not going to give me any performance benefit.
Instead, the solution is to do everything from the command line. So here we go....
By the way, my disks are in an external cabinet attached via Thunderbolt 3.
The four disks are disk2, disk5, disk6, and disk7 (use 'diskutil list' to find yours). Make sure you get the correct disks as they will be repartitioned and reformatted.
Let's create the first set of mirrors from disk2 and disk5. I named the set vdev0.
# diskutil createRAID mirror vdev0 JHFS+ disk2 disk5
Now we'll create the second set of mirrors from disk6 and disk7.
# diskutil createRAID mirror vdev1 JHFS+ disk6 disk7
Now do a 'diskutil list' and find the disk numbers of the newly created mirrors. Mine were disk8 and disk10.
Finally, create the striped set of mirrors. I named mine WDRED.
# diskutil createRAID stripe WDRED JHFS+ disk8 disk10
That's it. If you do a 'diskutil appleRAID list' you'll see the two mirrors and also the striped set, which should now be mounted.