Fun with ones and zeros
Barry's notes on computer software and hardware

SNMP server not reporting disk size
2024-02-29, Barry Pederson

<p>I was setting up snmpd on an Ubuntu box, and noticed that it was reporting weird numbers
for a couple of XFS filesystems I had set up for Minio.</p>
<p>An snmpwalk showed values like this:</p>
<pre><code>HOST-RESOURCES-MIB::hrStorageType.51 = OID: HOST-RESOURCES-TYPES::hrStorageFixedDisk
HOST-RESOURCES-MIB::hrStorageType.52 = OID: HOST-RESOURCES-TYPES::hrStorageFixedDisk
...
HOST-RESOURCES-MIB::hrStorageDescr.51 = STRING: /minio/disk1
HOST-RESOURCES-MIB::hrStorageDescr.52 = STRING: /minio/disk2
...
HOST-RESOURCES-MIB::hrStorageAllocationUnits.51 = INTEGER: 0 Bytes
HOST-RESOURCES-MIB::hrStorageAllocationUnits.52 = INTEGER: 0 Bytes
...
HOST-RESOURCES-MIB::hrStorageSize.51 = INTEGER: 0
HOST-RESOURCES-MIB::hrStorageSize.52 = INTEGER: 0
...
HOST-RESOURCES-MIB::hrStorageUsed.51 = INTEGER: 0
HOST-RESOURCES-MIB::hrStorageUsed.52 = INTEGER: 0</code></pre>
<p>So it was reporting the existence of the disks, but with all 0 values. The root ext4 filesystem showed up fine,
so was it something to do with XFS?</p>
<p>Turns out the answer was no: it was the permissions of the /minio directory
that the filesystems were mounted under. I figured this out when I noticed
that <code>df -h</code> showed the disks when I was running as root:</p>
<pre><code>/dev/sdb1 60G 461M 60G 1% /minio/disk1
/dev/sdc1 60G 461M 60G 1% /minio/disk2</code></pre>
<p>But when running as non-root, such as the Debian-snmp user, <code>df -h</code> didn't show the disks at all.</p>
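<p>A quick way to see the problem from snmpd's point of view (assuming the <code>Debian-snmp</code> user, which is what Ubuntu's snmpd package runs as; adjust if yours differs) is to compare <code>df</code> as root and as that user:</p>
<pre><code># as root, both disks show up
df -h /minio/disk1 /minio/disk2

# as the snmpd user, with /minio at 0700 this fails with a permission error
sudo -u Debian-snmp df -h /minio/disk1 /minio/disk2</code></pre>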
<p>Turns out I had been too strict with the permissions on <code>/minio</code>; I had originally set that to</p>
<pre><code>drwx------ 4 minio-user root 4096 Feb 14 14:57 minio</code></pre>
<p>But that, apparently and to my surprise, prevented snmpd from being able to read the size and usage
information for the mounts under that directory. Changing that to <code>0755</code> fixed the problem, and I just
made sure that the mountpoints themselves had stricter permissions:</p>
<pre><code>drwxr-x--- 3 minio-user root 24 Feb 27 13:08 disk1
drwxr-x--- 3 minio-user root 24 Feb 27 13:08 disk2</code></pre>
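<p>For reference, getting to that state is just a couple of chmods; this is a sketch, with 0750 matching the <code>drwxr-x---</code> shown above:</p>
<pre><code># /minio itself needs to be traversable by other users so snmpd can stat the mounts
chmod 0755 /minio

# the mountpoints themselves can stay more locked down
chmod 0750 /minio/disk1 /minio/disk2</code></pre>
<p>After that, an snmpwalk should report non-zero <code>hrStorageSize</code> and <code>hrStorageUsed</code> values for those mounts.</p>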
Proxmox NIC pass-through and machine type
2023-07-11, Barry Pederson

<p>I picked up a little Topton fanless server a while back that comes with 4 x 2.5Gb Intel i226 NICs, and
wanted to try running an OpenBSD VM on Proxmox with NIC pass-through. There are some fairly
detailed <a href="https://www.servethehome.com/how-to-pass-through-pcie-nics-with-proxmox-ve-on-intel-and-amd">instructions on servethehome.com</a>, but one important thing I found they left out was
the Proxmox "Machine" setting.</p>
<p>The default seems to be "i440fx", and with that default I was sort of seeing the NIC inside OpenBSD, but with dmesg
entries like:</p>
<pre><code>igc0 at pci0 dev 16 function 0 "Intel I226-V" rev 0x04: not enough msi-x vectors
igc0: unable to map msi-x vector 0</code></pre>
<p>and the interface wouldn't actually show up in <code>ifconfig</code>.</p>
<p>After finally finding some hints on the web that the <code>q35</code> machine type adds PCIe support, I gave that a try
and now the <code>igc0</code> interface on OpenBSD seems to be working!</p>
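<p>For anyone doing the same from the CLI, the machine type can also be changed with <code>qm</code>; this is just a sketch, where <code>100</code> is a made-up VM ID and the PCI address is whatever <code>lspci</code> shows for your NIC:</p>
<pre><code># switch the VM to the PCIe-capable q35 machine type
qm set 100 --machine q35

# pass the NIC through as a PCIe device
qm set 100 --hostpci0 0000:03:00.0,pcie=1</code></pre>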
Automatically restarting Percona XtraDB cluster
2023-02-01, Barry Pederson

<p>I've been experimenting with Percona XtraDB Cluster, and found that by default it requires manual intervention
to restart the cluster from an all-nodes-down state when the nodes were gracefully shut down. The docs talk about
identifying which node has <code>safe_to_bootstrap: 1</code> in its <code>/var/lib/mysql/grastate.dat</code> file, and on that node starting
the <code>mysql@bootstrap</code> service instead of just plain <code>mysql</code>.</p>
<p>Looking at a file and acting on what's found seems like something that could be automated, so here's my take for an
Ubuntu 22.04 setup:</p>
<p>On each node (yay Ansible!) I added this script as <code>/usr/local/sbin/choose-mysql-service.sh</code>:</p>
<pre><code>#!/bin/bash
GRASTATE="/var/lib/mysql/grastate.dat"
service="mysql"

# Start a different service if grastate.dat is present
# with safe_to_bootstrap: 1
#
if [ -f $GRASTATE ]; then
    if grep --quiet "^safe_to_bootstrap: 1" $GRASTATE; then
        service="mysql@bootstrap"
    fi
fi

echo "Starting $service"
systemctl start $service</code></pre>
<p>Then I added a one-shot systemd unit to execute at boot time, as <code>/etc/systemd/system/choose-mysql-service.service</code>:</p>
<pre><code>[Unit]
Description=Choose MySQL service
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/choose-mysql-service.sh
RemainAfterExit=true

[Install]
WantedBy=multi-user.target</code></pre>
<p>And then disabled the default <code>mysql</code> service and enabled my new unit with:</p>
<pre><code>systemctl daemon-reload
systemctl disable mysql
systemctl enable choose-mysql-service</code></pre>
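<p>One detail that's easy to miss: the script has to be executable, or the unit will fail at boot:</p>
<pre><code>chmod 755 /usr/local/sbin/choose-mysql-service.sh</code></pre>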
<p>So now when the OS boots, instead of just blindly trying to start <code>mysql</code>, it looks at <code>grastate.dat</code>, and if it has <code>safe_to_bootstrap: 1</code> it starts <code>mysql@bootstrap</code> instead; otherwise it falls back to the default of starting <code>mysql</code>.</p>
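<p>For context, <code>grastate.dat</code> on a cleanly shut-down node looks something like this (the values here are just an illustration); only one node in the cluster should end up with <code>safe_to_bootstrap: 1</code>:</p>
<pre><code># GALERA saved state
version: 2.1
uuid:    9acf4d34-0000-0000-0000-000000000000
seqno:   1234
safe_to_bootstrap: 1</code></pre>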
<p>I also shared this on the <a href="https://forums.percona.com/t/pxc-8-auto-restart-after-graceful-shutdown/19850">Percona Forum</a>; look for feedback there.</p>

mysqldump: Got error: 1049: Unknown database when using LOCK TABLES
2022-12-22, Barry Pederson

<p>Ran into a confusing error message today:</p>
<pre><code>mysqldump: Got error: 1049: Unknown database 'mydb' when using LOCK TABLES</code></pre>
<p>I did have a database named <code>mydb</code> and then deleted it the other day. Later when doing
some scripted mysqldumps of other databases, I got that message. Why in the world was it trying
to lock a database I knew to be gone?</p>
<p>Turns out one of my other databases was referencing the deleted db in a VIEW. Hope that helps.</p>
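<p>If you need to track down which view is the culprit, a query along these lines should find views that still mention the dropped database (<code>mydb</code> stands in for whatever was deleted):</p>
<pre><code>mysql -e "SELECT TABLE_SCHEMA, TABLE_NAME FROM information_schema.VIEWS WHERE VIEW_DEFINITION LIKE '%mydb%'"</code></pre>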
UFW and LXC/LXD on Ubuntu 22.04
2022-05-26, Barry Pederson

<p>I recently set up a new Ubuntu server with LXC containers. At first it all went great, but then
later when I enabled UFW, things got flaky. Looking at <code>/var/log/syslog</code> I saw UFW was blocking
lots of traffic from inside the containers.</p>
<p>Also when restarting a container, the container wouldn't get one of the bridged <code>10.x.x.x</code> IP addresses.</p>
<p>After Googling a bit, I found the magic commands on <a href="https://discuss.linuxcontainers.org/t/lxd-bridge-doesnt-work-with-ipv4-and-ufw-with-nftables/10034/16">this discussion</a>:</p>
<pre><code>ufw allow in on lxdbr0
ufw route allow in on lxdbr0</code></pre>
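<p>After adding those, the new rules should show up in:</p>
<pre><code>ufw status verbose</code></pre>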
<p>In hindsight, I think it would have been better to enable <code>ufw</code> <em>before</em> doing anything else with the new install;
that way the problems would have been more obvious right away, rather than it being a "geez, it was working before"
type situation.</p>