SNMP server not reporting disk size

I was setting up snmpd on an Ubuntu box and noticed that it was reporting weird numbers for a couple of XFS filesystems I had set up for Minio.

An snmpwalk showed values like this:

HOST-RESOURCES-MIB::hrStorageType.51 = OID: HOST-RESOURCES-TYPES::hrStorageFixedDisk
HOST-RESOURCES-MIB::hrStorageType.52 = OID: HOST-RESOURCES-TYPES::hrStorageFixedDisk
...
HOST-RESOURCES-MIB::hrStorageDescr.51 = STRING: /minio/disk1
HOST-RESOURCES-MIB::hrStorageDescr.52 = STRING: /minio/disk2
...
HOST-RESOURCES-MIB::hrStorageAllocationUnits.51 = INTEGER: 0 Bytes
HOST-RESOURCES-MIB::hrStorageAllocationUnits.52 = INTEGER: 0 Bytes
...
HOST-RESOURCES-MIB::hrStorageSize.51 = INTEGER: 0
HOST-RESOURCES-MIB::hrStorageSize.52 = INTEGER: 0
...
HOST-RESOURCES-MIB::hrStorageUsed.51 = INTEGER: 0
HOST-RESOURCES-MIB::hrStorageUsed.52 = INTEGER: 0

So it was reporting the existence of the disks, but with all 0 values. The root ext4 filesystem showed up fine, so was it something to do with XFS?

Turns out the answer was no: it was the permissions of the /minio directory that the filesystems were mounted under. I figured this out when I noticed that df -h showed the disks when running as root:

/dev/sdb1        60G  461M   60G   1% /minio/disk1
/dev/sdc1        60G  461M   60G   1% /minio/disk2

But when running as non-root, such as the Debian-snmp user, df -h didn't show the disks at all.

Turns out I had been too strict with the permissions on /minio; I had originally set it to

drwx------   4 minio-user root       4096 Feb 14 14:57 minio

To my surprise, that prevented snmpd from being able to read the size and usage information for the mounts under that directory. Changing it to 0755 fixed the problem, and I just made sure the mountpoints themselves kept stricter permissions:

drwxr-x--- 3 minio-user root 24 Feb 27 13:08 disk1
drwxr-x--- 3 minio-user root 24 Feb 27 13:08 disk2
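
A quick way to check whether you've hit the same thing, and to verify the fix, is to compare what root and the snmpd user can see. A sketch, using the paths and the Debian-snmp user from my setup:

# Loosen the parent directory so non-root processes can traverse into the mounts
chmod 0755 /minio

# snmpd runs as Debian-snmp on Ubuntu; it should now be able to see the filesystems
sudo -u Debian-snmp df -h /minio/disk1 /minio/disk2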

Proxmox NIC pass-through and machine type

I picked up a little Topton fanless server a while back that comes with 4 x 2.5Gb Intel i226 NICs, and wanted to try running an OpenBSD VM on Proxmox with NIC pass-through. There are some fairly detailed instructions on servethehome.com, but one important thing I found they left out was the Proxmox "Machine" setting.

The default seems to be "i440fx", and with that default I was sort of seeing the NIC inside OpenBSD, but with dmesg entries like:

igc0 at pci0 dev 16 function 0 "Intel I226-V" rev 0x04: not enough msi-x vectors
igc0: unable to map msi-x vector 0

and the interface wouldn't actually show up in ifconfig.

After finally finding some hints on the web that the q35 machine type adds PCIe support, I gave that a try and now the igc0 interface on OpenBSD seems to be working!
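
For reference, the machine type can be changed in the VM's hardware settings in the web UI, or from the Proxmox CLI with something like this (a sketch, assuming the VM has ID 100):

# Switch VM 100 (example ID) from the default i440fx machine type to q35, which provides PCIe
qm set 100 --machine q35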

Automatically restarting Percona XtraDB cluster

I've been experimenting with Percona XtraDB Cluster, and found that by default it requires manual intervention to restart the cluster from an all-nodes-down state when the nodes were gracefully shut down. The docs talk about identifying which node has safe_to_bootstrap: 1 in its /var/lib/mysql/grastate.dat file, and on that node starting the mysql@bootstrap service instead of just plain mysql.
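
For the curious, grastate.dat on a node that was shut down cleanly looks something like this (the uuid and seqno values here are just placeholders):

# GALERA saved state
version: 2.1
uuid:    12345678-1234-1234-1234-123456789abc
seqno:   42
safe_to_bootstrap: 1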

Looking at a file and acting on what's found seems like something that could be automated, so here's my take for an Ubuntu 22.04 setup:

On each node (yay Ansible!) I added this script as /usr/local/sbin/choose-mysql-service.sh

#!/bin/bash

GRASTATE="/var/lib/mysql/grastate.dat"

service="mysql"

# Start a different service if grastate.dat is present
# with safe_to_bootstrap: 1
#
if [ -f "$GRASTATE" ]; then
    if grep --quiet "^safe_to_bootstrap: 1" "$GRASTATE"; then
        service="mysql@bootstrap"
    fi
fi

echo "Starting $service"
systemctl start "$service"

Then I added a one-shot systemd unit to execute at boot time, as /etc/systemd/system/choose-mysql-service.service

[Unit]
Description=Choose MySQL service
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/choose-mysql-service.sh
RemainAfterExit=true

[Install]
WantedBy=multi-user.target

Then I disabled the default mysql service and enabled my new unit with:

systemctl daemon-reload
systemctl disable mysql
systemctl enable choose-mysql-service
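
Before the next reboot, it doesn't hurt to sanity-check that the unit is enabled and that grastate.dat contains what you expect (paths as in the script above):

systemctl is-enabled choose-mysql-service
grep safe_to_bootstrap /var/lib/mysql/grastate.dat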

So now when the OS boots, instead of blindly trying to start mysql, it looks at grastate.dat: if it has safe_to_bootstrap: 1 it starts mysql@bootstrap, otherwise it falls back to the default of starting mysql.

I also shared this on the Percona Forum; look for feedback there.

mysqldump: Got error: 1049: Unknown database when using LOCK TABLES

Ran into a confusing error message today:

mysqldump: Got error: 1049: Unknown database 'mydb' when using LOCK TABLES

I did have a database named mydb, but had deleted it the other day. Later, when doing some scripted mysqldumps of other databases, I got that message. Why in the world was it trying to lock a database I knew was gone?

Turns out one of my other databases was referencing the deleted database in a VIEW. Hope that helps!
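
If you need to track down which view is the culprit, something like this query against information_schema should find it (with mydb being the dropped database):

mysql -e "SELECT TABLE_SCHEMA, TABLE_NAME FROM information_schema.VIEWS WHERE VIEW_DEFINITION LIKE '%mydb%'"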

UFW and LXC/LXD on Ubuntu 22.04

I recently set up a new Ubuntu server with LXC containers. At first it all went great, but later, when I enabled UFW, things got flaky. Looking at /var/log/syslog I saw UFW was blocking lots of traffic from inside the containers.

Also, when restarting a container, it wouldn't get one of the bridged 10.x.x.x IP addresses.

After Googling a bit, I found the magic commands in this discussion:

ufw allow in on lxdbr0
ufw route allow in on lxdbr0
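
Afterwards, a quick look at the UFW status should show the new allow rules for lxdbr0, which confirms they stuck:

ufw status verbose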

In hindsight, I think it would have been better to enable UFW before doing anything else with the new install; that way the problems would have been obvious right away, rather than turning into a "geez, it was working before" type of situation.

Dynamic loading of Vue CSS assets with Vite

I've been experimenting with switching from Webpack to Vite, because developing with Vite is really fast and you get some nice features like Hot Module Replacement (HMR): when you're editing a Vue component and save the file, you instantly see the results in your browser window without having to reload the page.

I've been using Vite with the Backend Integration instructions, and on a Laravel Lumen site one problem I had was that, while Vite worked great for development, when I did a production build the CSS from the Vue SFCs (Single File Components) wasn't being injected into the <head> the way it was in development mode. Basically my page would be there but totally unstyled.

I had another Vite site that worked totally fine with the production build (this one), and it took a while to figure out what the difference was.

Ultimately I realized the working site had, as part of its JS build, some code from Vite called preload-helper, which seems to be responsible for injecting stylesheets into the page. My other site wasn't including the preload-helper, and the difference turned out to be that the second site wasn't using the dynamic JS import() feature (because it was such a simple site).

Once I recoded my entrypoint to dynamically import my Vue 3 root component like this:

import { createApp } from 'vue'

import('./App.vue')
    .then(({ default: rootComponent }) => {
        const app = createApp(rootComponent);
        app.mount('#app');
    });

then the build included the preload-helper and everything worked as expected.

Without this type of loading, you would have to configure Vite to generate manifest.json and figure out which CSS file to reference in your page's <head>, similar to how you figure out which JS to load (because of doing Backend Integration). However, at the moment manifest.json doesn't include the name of the generated CSS file; I filed a GitHub issue about it.
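
For what it's worth, enabling the manifest is just a build option in vite.config.js. A minimal sketch; the rest of the config would be whatever your project already uses:

// vite.config.js
import { defineConfig } from 'vite'

export default defineConfig({
    build: {
        // emit manifest.json describing the hashed build output,
        // so the backend can work out which files to reference
        manifest: true,
    },
})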

Decrease snmpd logging level on Ubuntu 18.04

I recently updated some servers from Ubuntu 16.04 to 18.04, and found that the snmpd daemon was generating way too many log entries in /var/log/syslog - one for every SNMP query coming from our monitoring system.

In older Ubuntus I had edited /etc/default/snmpd to change the SNMPDOPTS line to have a different -Ls parameter, but it seems that on the new Ubuntu, the systemd service for snmpd doesn't use that defaults file at all. A comment on this serverfault question gave me a clue on how to fix it in systemd, so I thought I'd elaborate here.

If you run

systemctl cat snmpd.service

you'll see the current service file:

# /lib/systemd/system/snmpd.service
[Unit]
Description=Simple Network Management Protocol (SNMP) Daemon.
After=network.target
ConditionPathExists=/etc/snmp/snmpd.conf

[Service]
Environment="MIBSDIR=/usr/share/snmp/mibs:/usr/share/snmp/mibs/iana:/usr/share/snmp/mibs/ietf:/usr/share/mibs/site:/usr/share/snmp/mibs:/usr/share/mibs/iana:/usr/share/mibs/ietf:/usr/share/mibs/netsnmp"
Environment="MIBS="
Type=simple
ExecStartPre=/bin/mkdir -p /var/run/agentx
ExecStart=/usr/sbin/snmpd -Lsd -Lf /dev/null -u Debian-snmp -g Debian-snmp -I -smux,mteTrigger,mteTriggerConf -f
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target

I wanted to override the ExecStart line with something different. To do that, run

systemctl edit snmpd.service

This brings up your default editor with a blank file. I entered these new lines:

# Override default "-Lsd" parameter to "-LSwd" to decrease logging level
[Service]
ExecStart=
ExecStart=/usr/sbin/snmpd -LSwd -Lf /dev/null -u Debian-snmp -g Debian-snmp -I -smux,mteTrigger,mteTriggerConf -f

The first ExecStart= line is a bit odd; without it you get an error:

snmpd.service: Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. Refusing.

so the first line 'clears' the setting before processing your own version.

Save your file (systemctl edit stores it as /etc/systemd/system/snmpd.service.d/override.conf), and then run service snmpd restart for it to take effect. If you re-run systemctl cat snmpd.service you should now see:

# /lib/systemd/system/snmpd.service
[Unit]
Description=Simple Network Management Protocol (SNMP) Daemon.
After=network.target
ConditionPathExists=/etc/snmp/snmpd.conf

[Service]
Environment="MIBSDIR=/usr/share/snmp/mibs:/usr/share/snmp/mibs/iana:/usr/share/snmp/mibs/ietf:/usr/share/mibs/site:/usr/share/snmp/mibs:/usr/share/mibs/iana:/usr/share/mibs/ietf:/usr/share/mibs/netsnmp"
Environment="MIBS="
Type=simple
ExecStartPre=/bin/mkdir -p /var/run/agentx
ExecStart=/usr/sbin/snmpd -Lsd -Lf /dev/null -u Debian-snmp -g Debian-snmp -I -smux,mteTrigger,mteTriggerConf -f
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/snmpd.service.d/override.conf
# Override default "-Lsd" parameter to "-LSwd" to decrease logging level
[Service]
ExecStart=
ExecStart=/usr/sbin/snmpd -LSwd -Lf /dev/null -u Debian-snmp -g Debian-snmp -I -smux,mteTrigger,mteTriggerConf -f

Which is a combination of the default service along with your override.

If you copy your /etc/systemd/system/snmpd.service.d/override.conf file to other servers, you need to run

systemctl daemon-reload
service snmpd restart

to have it take effect.
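
A quick way to confirm the override is active on a server is to ask systemd which ExecStart it's actually using:

systemctl show -p ExecStart snmpd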

Building Sentry on FreeBSD

So I was hoping to install the on-premise version of Sentry 9.0.0 on my FreeBSD 11 box, but ran into a snag. I was using the Installation with Python instructions, but it failed to build the semaphore module (another user has an error dump available).
Unfortunately the official response is: We don’t support FreeBSD.

The key error seems to boil down to:

relocation R_X86_64_TPOFF32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC
…/…/src/libsodium/.libs/libsodium.a: could not read symbols: Bad value
cc: error: linker command failed with exit code 1 (use -v to see invocation)
*** [randombytes] Error code 1

So it's a libsodium error. Some more digging found that libsodium is a dependency of rust_sodium, which nicely has the ability to use your own copy of libsodium, with the stipulation that you're using the same version they are.

Looking at their rust_sodium/rust_sodium-sys/build.rs shows they're using libsodium 1.0.16, and a check of the FreeBSD security/libsodium Makefile shows the same version - so we're in business.

So after building and installing the libsodium port, I tried the Sentry install again with:

RUST_SODIUM_LIB_DIR=/usr/local/lib pip install -U sentry

and it worked! Overall the FreeBSD ports that need to be installed as build dependencies are:

databases/postgresql11-client
devel/py-virtualenv@py27
graphics/jpeg
lang/rust
security/libsodium
textproc/libxml2
textproc/libxslt

For additional runtime dependencies I've got:

databases/postgresql11-contrib  (Needed for citext extension)
databases/postgresql11-server
databases/redis
mail/exim
sysutils/py-supervisor

After a long build, I got this warning (in red):

redis-py-cluster 1.3.6 has requirement redis==2.10.6, but you'll have redis 2.10.5 which is incompatible.

Looked at https://github.com/getsentry/sentry/blob/9.0.0/requirements-base.txt and found sentry wants redis < 2.10.6, and is OK with redis-py-cluster >= 1.3.4

Looked at https://github.com/Grokzen/redis-py-cluster/blob/1.3.4/requirements.txt and found that version is OK with redis >= 2.10.2

So the fix seems to be to downgrade redis-py-cluster slightly, to a version that Sentry is OK with and that is itself OK with the version of redis Sentry wants:

pip install -U redis-py-cluster==1.3.4
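
A quick check that the versions now line up:

pip freeze | grep -i redis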

RancherOS/ISOLinux/Syslinux on FreeBSD bhyve

After messing with Docker on my laptop, I thought it would be interesting to set up a VM on my FreeBSD server to run RancherOS. I've been using vm-bhyve to manage the VMs and have been running Ubuntu without much problem, so I figured another Linux distro would be fine ... but I ended up opening a whole can of worms, and while I did get it running eventually, I learned more about grub and booting on bhyve than I really wanted to.
I thought I'd jot down some notes here for future reference.

To start with, bhyve is not a general hypervisor that can boot any PC-compatible disk or CD image you throw at it, the way something like KVM, VMware, or Parallels can. It doesn't start a VM in 16-bit mode and go through an old-school BIOS boot sequence where it reads a Master Boot Record and executes whatever's there. It knows how to load a FreeBSD kernel, and with grub2-bhyve it can boot disks and CDs that use Grub2, such as Ubuntu.
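
As an aside, that grub loader support comes from the sysutils/grub2-bhyve port, so if it isn't already present, something like this should pull it in:

pkg install grub2-bhyve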

Unfortunately, RancherOS doesn't use grub; instead it uses Syslinux/ISOLinux on its ISO images and hard-disk installations. When bhyve boots using the grub loader, it doesn't find any grub menu on the disk and just drops you into a grub command prompt:

GNU GRUB  version 2.00

Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.

grub>

Fortunately, the grub command line is like a mini-OS, with lots of abilities to look around the disks and, it turns out, to manually boot things like RancherOS.

The first command to run is:

set pager=1

so that future commands don't just scroll off the screen.

help

displays a list of commands; help <command> gives a short explanation. ls lets you start poking around, in this case giving:

(cd0) (cd0,msdos1) (host)

Now we're getting somewhere. Trying ls (cd0) gives

Device cd0: Filesystem type iso9660 - Label `RancherOS' - Last modification time 2018-09-19 03:09:12 Wednesday, UUID 2018-09-19-03-09-12-00 - Total size 176128 sectors

ls -l (cd0)/ gives

DIR          20180919030909 boot/
DIR          20180919030912 rancheros/

OK, a boot directory, getting closer. ls -l (cd0)/boot gives

170          20180919030909 global.cfg
66978212     20180919030909 initrd-v1.4.1
DIR          20180919030909 isolinux/
1373         20180919030909 linux-current.cfg
12734        20180919030909 rancher.png
5523216      20180919030909 vmlinuz-4.14.67-rancher2

There we go: isolinux, but no grub files, so no wonder it doesn't boot. After lots and lots of messing around learning grub, I was able to get an initial boot of the CD image from the grub> prompt with:

linux (cd0)/boot/vmlinuz-4.14.67-rancher2
initrd (cd0)/boot/initrd-v1.4.1
boot

And it started! After lots of Linux boot output I was rewarded with:

                ,        , ______                 _                 _____ _____TM
   ,------------|'------'| | ___ \               | |               /  _  /  ___|
  / .           '-'    |-  | |_/ /__ _ _ __   ___| |__   ___ _ __  | | | \ '--.
  \/|             |    |   |    // _' | '_ \ / __| '_ \ / _ \ '__' | | | |'--. \
    |   .________.'----'   | |\ \ (_| | | | | (__| | | |  __/ |    | \_/ /\__/ /
    |   |        |   |     \_| \_\__,_|_| |_|\___|_| |_|\___|_|     \___/\____/
    \___/        \___/     Linux 4.14.67-rancher2

    RancherOS #1 SMP Thu Sep 13 15:37:04 UTC 2018 rancher ttyS0
    docker-sys: 172.18.42.1 eth0: 10.66.0.48 lo: 127.0.0.1
rancher login:

Very cool, but what's the login? The userid is rancher, but there is no default password. According to the Rancher docs, the ISO image is supposed to auto-login. Now what?

After rebooting and getting back to the grub> prompt, and digging around more, I found that cat (cd0)/boot/global.cfg showed:

APPEND rancher.autologin=tty1 rancher.autologin=ttyS0 rancher.autologin=ttyS1 rancher.autologin=ttyS1 console=tty1 console=ttyS0 console=ttyS1 printk.devkmsg=on panic=10

Ah, Linux kernel parameters, including the autologin stuff. Applying them ended up being (again at the grub> prompt):

linux (cd0)/boot/vmlinuz-4.14.67-rancher2 rancher.autologin=tty1 rancher.autologin=ttyS0 rancher.autologin=ttyS1 rancher.autologin=ttyS1 console=tty1 console=ttyS0 console=ttyS1 printk.devkmsg=on panic=10
initrd (cd0)/boot/initrd-v1.4.1
boot

(That command line could probably be simplified: the banner shows our VM console is ttyS0, so we probably don't need the params relating to tty1 or ttyS1.) This time I got the cattle banner from above, and a beautiful:

Autologin default
[rancher@rancher ~]$

A simple sudo -s (not requiring a password) gives root access. At that point you can do whatever, including installing onto a hard disk.

To get a RancherOS hard-disk installation to boot, you have to go through similar steps with grub, exploring around the (hd0,1) disk to find the kernel, initrd, and kernel params. The grub commands for booting can then be saved permanently in the vm-bhyve config for this machine with grub_runX lines like:

grub_run0="linux (hd0,1)/boot/vmlinuz-4.14.67-rancher2 rancher.autologin=ttyS0 printk.devkmsg=on rancher.state.dev=LABEL=RANCHER_STATE rancher.state.wait panic=10 console=tty0"
grub_run1="initrd (hd0,1)/boot/initrd-v1.4.1"
grub_run2="boot"

So the full vm-bhyve config file looks like this (in case you're wondering: I hate it when people give snippets of code but don't show exactly where they should go):

loader="grub"
grub_run0="linux (hd0,1)/boot/vmlinuz-4.14.67-rancher2 rancher.autologin=ttyS0 printk.devkmsg=on rancher.state.dev=LABEL=RANCHER_STATE rancher.state.wait panic=10 console=tty0"
grub_run1="initrd (hd0,1)/boot/initrd-v1.4.1"
grub_run2="boot"
cpu=2
memory=2048M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0"
disk0_dev="sparse-zvol"
uuid="7f9fc9e5-c835-11e8-a327-e03f49af0c7d"
network0_mac="58:9c:fc:05:2a:04"

With that, my VM now boots without manual intervention, even though the virtual disk doesn't use grub.
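
At that point, starting it is just the usual vm-bhyve invocation (assuming the VM is named rancher):

# start the VM; "rancher" here is whatever name you gave it in vm-bhyve
vm start rancher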

Firefox Focus content-blocking of fonts almost drove me mad

I have a little personal webapp I use on my iPhone that relies on Bootstrap 3 glyphicons. After upgrading to iOS 11, my phone started displaying weird emojis instead of the icons. Other people's iPhones displayed everything fine, desktop browsers displayed everything fine; it seemed to be just my phone. WTF?

Even looking at the BS3 components sample page I'd see emojis, WTF!?

Tried switching to open-iconic fonts, same problem (different emojis though), WTF!!?

Finally found this comment on BS3's GitHub saying it was due to content-blocking. Turns out I had Firefox Focus installed, and probably during the iOS upgrade I also upgraded Focus, which must have coincidentally started blocking the web fonts at that time.

Disabling content-blocking fixed the problem, yay! Just as a reference, the place to go (at least in iOS 11) is:

Settings app, scroll down to "Safari", then to "Content Blockers", and under "Allow These Content Blockers:" disable "Firefox Focus".

Mozilla's Focus support page says:

Web fonts - fonts that are downloaded from the server (may slow down web pages). Web fonts are typefaces used to style the text on some web pages. Blocking Web fonts will alter the appearance of text on any pages where Web fonts are used, but all text will still display legibly.

Someone should tell them web fonts are used for more than just text, and that blocking them can make your icons illegible.

Fortunately, you don't have to give up Focus content-blocking completely. In the Focus app, there's a little gear icon in the upper-right that lets you enable/disable blocking of web fonts in a more fine-grained fashion while keeping the other blocking of ad trackers, etc. After turning only web-font blocking off and re-enabling content-blocking overall in the phone's Safari settings, I still have working icons in my little app.