Building Sentry on FreeBSD

So I was hoping to install the on-premise version of Sentry 9.0.0 on my FreeBSD 11 box, but ran into a snag. I was following the Installation with Python instructions, but the build failed on the semaphore module (another user has an error dump available).
Unfortunately the official response is: We don’t support FreeBSD.

The key error seems to boil down to:

relocation R_X86_64_TPOFF32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC
…/…/src/libsodium/.libs/libsodium.a: could not read symbols: Bad value
cc: error: linker command failed with exit code 1 (use -v to see invocation)
*** [randombytes] Error code 1

So it's a libsodium error. Some more digging found that libsodium is a dependency of rust_sodium, which nicely has the ability to use your own copy of libsodium, with the stipulation that you're using the same version they are.

Looking at their rust_sodium/rust_sodium-sys/build.rs shows they're using libsodium 1.0.16, and a check of the FreeBSD security/libsodium Makefile shows the same version - so we're in business.
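(A quick sanity check of that, assuming the ports tree is at /usr/ports and you have the rust_sodium source handy, could be something like:)

# version the FreeBSD port provides
grep -E '^(PORTVERSION|DISTVERSION)' /usr/ports/security/libsodium/Makefile
# version rust_sodium-sys pins in its build script
grep -n '1\.0\.' rust_sodium/rust_sodium-sys/build.rs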

So after building and installing the libsodium port, I tried the Sentry install again with:

RUST_SODIUM_LIB_DIR=/usr/local/lib pip install -U sentry

and it worked! Overall the FreeBSD ports that need to be installed as build dependencies are:

databases/postgresql11-client
devel/py-virtualenv@py27
graphics/jpeg
lang/rust
security/libsodium
textproc/libxml2
textproc/libxslt
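If you're building those from a ports tree, a rough sketch would be something like this (untested as written - note that py-virtualenv is a flavored port, so its flavor has to be set explicitly):

#!/bin/sh
# Build-dependency ports (BATCH=yes accepts the default options)
for p in databases/postgresql11-client graphics/jpeg lang/rust \
         security/libsodium textproc/libxml2 textproc/libxslt; do
    make -C /usr/ports/$p install clean BATCH=yes
done
# py-virtualenv needs the Python 2.7 flavor
make -C /usr/ports/devel/py-virtualenv FLAVOR=py27 install clean BATCH=yes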

For additional runtime dependencies I've got:

databases/postgresql11-contrib  (Needed for citext extension)
databases/postgresql11-server
databases/redis
mail/exim
sysutils/py-supervisor

After a long build, I got this warning (in red):

redis-py-cluster 1.3.6 has requirement redis==2.10.6, but you'll have redis 2.10.5 which is incompatible.

Looking at https://github.com/getsentry/sentry/blob/9.0.0/requirements-base.txt, I found sentry wants redis < 2.10.6, and is OK with redis-py-cluster >= 1.3.4.

Looking at https://github.com/Grokzen/redis-py-cluster/blob/1.3.4/requirements.txt, I found that version is OK with redis >= 2.10.2.

So the fix seems to be to downgrade redis-py-cluster slightly, to a version that sentry is OK with and that in turn is OK with the version of redis sentry wants:

pip install -U redis-py-cluster==1.3.4
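(To double-check which versions actually ended up installed after that, something along these lines will show the resolved packages:)

pip freeze | grep -i redis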

RancherOS/ISOLinux/Syslinux on FreeBSD bhyve

After messing with Docker on my laptop, I thought it would be interesting to set up a VM on my FreeBSD server to run RancherOS. I've been using vm-bhyve to manage the VMs, and have been running Ubuntu without much problem, so I figured another Linux distro would be fine ... but it ended up opening a whole can of worms. While I did get it running eventually, I learned more about grub and booting on bhyve than I really wanted, so I thought I'd jot down some notes here for future reference.

To start with, bhyve is not a general hypervisor that can boot any PC-compatible disk or CD image you throw at it, the way something like KVM, VMware, or Parallels can. It doesn't start a VM in 16-bit mode and go through an old-school BIOS boot sequence where it reads a Master Boot Record and executes whatever's there. It knows how to load a FreeBSD kernel, and with grub2-bhyve it can boot disks and CDs that use Grub2 - such as Ubuntu.

Unfortunately, RancherOS doesn't use grub; instead it uses Syslinux/ISOLinux on its ISO images and hard-disk installations. When bhyve boots using the grub loader, it doesn't find any grub menu on the disk, and just drops you into a grub command prompt.

GNU GRUB  version 2.00

Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.

grub>

Fortunately, the grub command line is like a mini-OS, with lots of abilities to look around the disks, and it turns out you can use it to manually boot things like RancherOS.

The first command to run is:

set pager=1

so that future commands don't just scroll off the screen.

help

displays a list of commands, help <command> gives a short explanation. ls lets you start poking around, in this case giving:

(cd0) (cd0,msdos1) (host)

Now we're getting somewhere. Trying ls (cd0) gives

Device cd0: Filesystem type iso9660 - Label `RancherOS' - Last modification time 2018-09-19 03:09:12 Wednesday, UUID 2018-09-19-03-09-12-00 - Total size 176128 sectors

ls -l (cd0)/ gives

DIR          20180919030909 boot/
DIR          20180919030912 rancheros/

OK, a boot directory, getting closer. ls -l (cd0)/boot gives

170          20180919030909 global.cfg
66978212     20180919030909 initrd-v1.4.1
DIR          20180919030909 isolinux/
1373         20180919030909 linux-current.cfg
12734        20180919030909 rancher.png
5523216      20180919030909 vmlinuz-4.14.67-rancher2

There we go: isolinux, but no grub files - no wonder it doesn't boot. After lots and lots of messing around learning grub, I was able to get an initial boot of the CD image from the grub> prompt with:

linux (cd0)/boot/vmlinuz-4.14.67-rancher2
initrd (cd0)/boot/initrd-v1.4.1
boot

And it started! After lots of Linux boot output I was rewarded with:

                ,        , ______                 _                 _____ _____TM
   ,------------|'------'| | ___ \               | |               /  _  /  ___|
  / .           '-'    |-  | |_/ /__ _ _ __   ___| |__   ___ _ __  | | | \ '--.
  \/|             |    |   |    // _' | '_ \ / __| '_ \ / _ \ '__' | | | |'--. \
    |   .________.'----'   | |\ \ (_| | | | | (__| | | |  __/ |    | \_/ /\__/ /
    |   |        |   |     \_| \_\__,_|_| |_|\___|_| |_|\___|_|     \___/\____/
    \___/        \___/     Linux 4.14.67-rancher2

    RancherOS #1 SMP Thu Sep 13 15:37:04 UTC 2018 rancher ttyS0
    docker-sys: 172.18.42.1 eth0: 10.66.0.48 lo: 127.0.0.1
rancher login:

Very cool, but what's the login? Userid is rancher, but there is no default password. According to the rancher docs, the ISO image is supposed to auto-login. Now what?

After rebooting and getting back to the grub> prompt, and digging around more, I found that cat (cd0)/boot/global.cfg showed:

APPEND rancher.autologin=tty1 rancher.autologin=ttyS0 rancher.autologin=ttyS1 rancher.autologin=ttyS1 console=tty1 console=ttyS0 console=ttyS1 printk.devkmsg=on panic=10

Ah, Linux command parameters, including the autologin stuff. To apply them, it ended up being (again at the grub> prompt):

linux (cd0)/boot/vmlinuz-4.14.67-rancher2 rancher.autologin=tty1 rancher.autologin=ttyS0 rancher.autologin=ttyS1 rancher.autologin=ttyS1 console=tty1 console=ttyS0 console=ttyS1 printk.devkmsg=on panic=10
initrd (cd0)/boot/initrd-v1.4.1
boot

(That command line could probably be simplified, since we can see from the banner that our VM console is ttyS0, so we probably don't need the params relating to tty1 or ttyS1.) This time I got the cattle banner from above, and a beautiful:

Autologin default
[rancher@rancher ~]$

A simple sudo -s (not requiring a password) gives root access. At that point you can do whatever, including installing onto a hard disk.
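(I won't cover the hard-disk install itself here, but for reference RancherOS's installer is run roughly like this from the live environment - the cloud-config file and disk device below are just placeholders, and with a virtio-blk disk the device may show up as /dev/vda rather than /dev/sda:)

sudo ros install -c cloud-config.yml -d /dev/sda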

To get a RancherOS hard-disk installation to boot, you'd have to go through similar steps with grub, exploring around the (hd0,1) disk to find the kernel, initrd, and kernel params. The grub commands for booting can be saved permanently in the vm-bhyve config for this machine with grub_runX lines like:

grub_run0="linux (hd0,1)/boot/vmlinuz-4.14.67-rancher2 rancher.autologin=ttyS0 printk.devkmsg=on rancher.state.dev=LABEL=RANCHER_STATE rancher.state.wait panic=10 console=tty0"
grub_run1="initrd (hd0,1)/boot/initrd-v1.4.1"
grub_run2="boot"

So the full vm-bhyve config file looks like (in case you're wondering - I hate it when people give snippets of code but don't show where it should go exactly):

loader="grub"
grub_run0="linux (hd0,1)/boot/vmlinuz-4.14.67-rancher2 rancher.autologin=ttyS0 printk.devkmsg=on rancher.state.dev=LABEL=RANCHER_STATE rancher.state.wait panic=10 console=tty0"
grub_run1="initrd (hd0,1)/boot/initrd-v1.4.1"
grub_run2="boot"
cpu=2
memory=2048M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0"
disk0_dev="sparse-zvol"
uuid="7f9fc9e5-c835-11e8-a327-e03f49af0c7d"
network0_mac="58:9c:fc:05:2a:04"

With that, my VM now boots without manual intervention, even though the virtual disk doesn't use grub.

Mercurial escaped colors

After upgrading Mercurial to 4.2 on my FreeBSD 10.x boxes, there was a problem: the Mercurial color extension was now enabled, and suddenly things like hg status were showing output like

ESC[0;34;1mM ESC[0mESC[0;34;1mpf.confESC[0m

After lots of digging, I finally figured out it was caused by my PAGER environment variable being set to more, which is pretty outdated. I fixed it on the fly with export PAGER='less -X' and got nice colorized output. I made it permanent by editing ~/.profile and replacing the line that set PAGER with:

PAGER='less -X';    export PAGER
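(An alternative I didn't use, but which should also work since the pager handling moved into Mercurial core in 4.2, is to point Mercurial itself at a pager via ~/.hgrc:)

[pager]
pager = less -FRX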

IPv6 World Launch Day

I've been working on getting this website up and running under IPv6, and it turned out to be somewhat involved. First, I signed up with Hurricane Electric's tunnelbroker.net to get IPv6 connectivity, because my ISP doesn't offer it yet. I set up my own DNS servers running nsd, which was a bit of a learning curve, but in the long run I think it'll be better than working with goofy DNS managers like you'd find on registrar or hosting websites. NameCheap is now letting you set up IPv6 glue records right on their website (previously you had to file a support ticket), so that made things easier.

The only big glitch I ran into is that on FreeBSD, using simply

listen [::]:80;

to listen to both IPv4 and IPv6 didn't work. When trying that, I found that any request coming in as IPv4 would give weird 403 or 404 (I don't remember which) errors, where it seemed nginx just didn't know what virtual host to go to. Linux doesn't seem to have that problem. Ended up using separate listen statements, as in:

listen 80 default_server;
listen [::]:80 default_server ipv6only=on;

for the main site, but VERY IMPORTANTLY, the remaining sites could not have the ipv6only=on directive; they simply say

listen  80;
listen [::]:80;

(found that trick in this ServerFault page). This also has the advantage of showing proper IPv4 IP addresses in the logs, instead of IPv4-mapped IPv6 addresses such as ::ffff:11.22.33.44, so I ended up doing the same thing on a Linux box even though it handled dual-stack by default just fine.

For testing purposes, I also made aliases to force one protocol or the other, since when you use http://barryp.org/blog/ it's not obvious which you're using.
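(I haven't reproduced the aliases here, but the general idea is hostnames that only resolve over one protocol - for example, hypothetical nsd zone entries like these, with made-up documentation addresses, where an A-only name forces IPv4 and an AAAA-only name forces IPv6:)

ipv4    IN  A       192.0.2.10
ipv6    IN  AAAA    2001:db8::10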

RabbitMQ FreeBSD port

I was happy to see a FreeBSD port added for RabbitMQ, net/rabbitmq, although I found a couple of problems with it: it doesn't start automatically when your machine or jail boots, and when building the rabbitmq-erlang-client, it errors out with:

src/amqp_channel.erl:28: can't find include lib "rabbitmq_server/include/rabbit.hrl"
src/amqp_channel.erl:29: can't find include lib "rabbitmq_server/include/rabbit_framing.hrl"

I worked on the port a bit, and submitted a bug report and patch, ports/127033, that fixes these problems.

SSH in a FreeBSD jail

I've been running lots of FreeBSD jails on various servers I maintain, and one thing I've noticed is that using ssh or scp from inside a jail often results in the error: Host key verification failed. A little Google searching turns up this explanation, that the problem is caused when you jexec into the jail instead of logging in normally through SSH.

I often run the jails in a pretty minimal way and don't really want to run sshd in them, and fortunately the problem can be worked around somewhat. Apparently the Host key verification failed. error is caused when SSH is unable to show you this type of prompt:

The authenticity of host 'foobar.edu' can't be established.
DSA key fingerprint is 7c:ac:b0:da:be:3c:c2:00:00:00:00:ce:db:fb:49:77.
Are you sure you want to continue connecting (yes/no)?

when connecting to a host you haven't connected to before. All you have to do to get around this is manually add a line to the jail's ~/.ssh/known_hosts for the server you're trying to connect to, probably by copying one from a known_hosts on another box or outside the jail.
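(One convenient way to do that, if you're comfortable trusting the key you fetch - foobar.edu being just the example host from above - is ssh-keyscan; ideally verify the fingerprint against one obtained out-of-band:)

ssh-keyscan foobar.edu >> ~/.ssh/known_hosts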

Once past that, you may find that SSH is still unhappy in the jail if you don't have publickey authentication setup with the server you're trying to connect to, with an error like:

Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,password).

Fixing that is just a matter of generating or copying a private/public key pair into the jail's ~/.ssh directory, and putting the public key on the server you're connecting to.
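(Roughly something like this from inside the jail - the key type and hostname are only examples; if ssh-copy-id isn't available, just append the .pub file to the remote ~/.ssh/authorized_keys by hand:)

ssh-keygen -t rsa -f ~/.ssh/id_rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub user@foobar.edu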

Daemontools mishap

While working on my DiskCompare.com website I had a weird problem with daemontools. The Django process I had running under daemontools became unresponsive: it wouldn't shut down normally with svc -d, and after a kill -9 and restart it still wouldn't respond - it just seemed hung up or frozen. There were no messages in /var/log/messages, but running the service manually outside of daemontools worked fine. What the heck?

I had the service also setup to run multilog, and it turns out that I had mistakenly changed the ownership of the log directory it was supposed to be writing to. My guess is that it was the multilog process that was really hung up, since it couldn't write to the files it wanted to, and that my Django processes were then blocked because the pipe between them and multilog was full.

Simply correcting the permissions of the log directory cleared the jam and things took off again. So if you're seeing strange behavior with daemontools, that's one area to check out.
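(If you run into something similar, a couple of quick checks along these lines can narrow it down - the /service/mysite path is just a placeholder for wherever your service directory lives:)

svstat /service/mysite /service/mysite/log
ls -ld /service/mysite/log/main    # multilog has to be able to write here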

Apache in FreeBSD Jail Error

In one of my FreeBSD 6.2 jails running Apache, even though the server seemed to respond ok, I saw lots of these errors in the logfile:

[warn] (61)Connection refused: connect to listener on 0.0.0.0:443

Google searching found lots of other people asking about this, but I didn't really see any good answers. Others complained about the same thing on port 80:

[warn] (61)Connection refused: connect to listener on 0.0.0.0:80

I think the problem is just that Apache in a jail can't listen to :443 or 0.0.0.0:443 (or :80 or 0.0.0.0:80). If your jail has the IP 1.2.3.4 for example, then in httpd.conf, changing

Listen 80

to

Listen 1.2.3.4:80

and/or in extra/httpd-ssl.conf

Listen 443

to

Listen 1.2.3.4:443

seems to fix the problem.

Split tunnel between FreeBSD boxes using OpenVPN

At work there are machines I'd like to access from home using Windows networking (Samba servers mostly), but the catch is that the work firewall is blocking NetBIOS traffic (an awfully good idea). My home network uses a FreeBSD box for NAT (Network Address Translation). Here's a diagram of what we're talking about.

Diagram of NetBIOS being blocked

In the picture above, my home NAT box has the public IP 1.2.3.4, the internal home network is 10.0.0.0/24, and I'm trying to reach a work server 5.6.7.100 on the 5.6.7.0/24 network.

Fortunately, I have a FreeBSD box at work, with an address 5.6.7.8, and it's fairly easy to setup a simple OpenVPN tunnel between 1.2.3.4 and 5.6.7.8, and route NetBIOS traffic over that.

OpenVPN will make it appear as if the two machines have a point-to-point network connection, when in reality the traffic is passing encrypted over the public internet. We need to pull a couple of IP numbers out of our hat to use for the VPN endpoints - I'll use 192.168.88.1 for the home machine and 192.168.88.2 for the work machine.

Diagram of using OpenVPN

Setting up OpenVPN

On each box install the security/openvpn port. After that's done, on one machine, go to /usr/local/etc/openvpn and run:

openvpn --genkey --secret mykey

Copy the mykey file you just generated over to the other box's /usr/local/etc/openvpn directory. The two OpenVPN endpoints will use that shared key to authenticate each other.

On the 1.2.3.4 machine, create a /usr/local/etc/openvpn/openvpn.conf file containing:

remote 5.6.7.8
dev tun0
ifconfig 192.168.88.1 192.168.88.2
secret mykey

On the 5.6.7.8 machine, create a /usr/local/etc/openvpn/openvpn.conf file containing:

remote 1.2.3.4
dev tun0
ifconfig 192.168.88.2 192.168.88.1
secret mykey

(note that the ifconfig line swapped the IPs compared to the other machine's config)

Throw an openvpn_enable="YES" in each machine's /etc/rc.conf, and start the daemons: /usr/local/etc/rc.d/openvpn start

If necessary, allow OpenVPN traffic through your firewall, for the 1.2.3.4 box it might look something like:

pass in on $ext_if inet proto udp from 5.6.7.8 to $ext_if port 1194
pass on tun0

If this works, you should be able to sit at the 1.2.3.4 box and ping 192.168.88.2 and get a response. On the 5.6.7.8 box, running tcpdump -n -i tun0 should show the ICMP packets reaching the machine.

Routing specific traffic

I don't want to route all my traffic going to the 5.6.7.0/24 network through the VPN, just the NetBIOS stuff, so I'll set up a split tunnel. PF makes it pretty easy to redirect network traffic through the VPN; in fact, I ended up doing a double-NAT, one on each end of the tunnel.

Diagram of double-natting

So when the home workstation contacts the Samba server, the Samba server sees the traffic as coming from the 5.6.7.8 box, and the 5.6.7.8 box sees the traffic as coming from the home FreeBSD NAT machine. So interestingly, neither of the work machines needs to have any clue about the home network. The PF state tables take care of reversing everything when the Samba server responds.

On the home 1.2.3.4 machine, these lines are added in the appropriate places to /etc/pf.conf:

int_if="eth1"
internal_net="10.0.0.0/24"
work_net="5.6.7.0/24"
.
.
nat on tun0    from $internal_net to any -> (tun0)
.
.
pass in on $int_if route-to tun0 proto tcp from any to $work_net port {139, 445} flags S/SA modulate state

That last line is the key to the whole thing: it's responsible for diverting the traffic we want to go through the VPN instead of over the public internet. If you want to secure additional protocols, just add similar lines.
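For example, a hypothetical extra rule to also send Remote Desktop traffic to the work network through the tunnel would follow the same pattern:

pass in on $int_if route-to tun0 proto tcp from any to $work_net port 3389 flags S/SA modulate state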

The PF config on the work 5.6.7.8 machine is simpler, just

nat on $ext_if from 192.168.88.1 to any -> 5.6.7.8

to perform that second NATting, making the VPN traffic seem like it came from the work box.

Lastly, both machines need gateway_enable="YES" in /etc/rc.conf. A home NAT box probably already has that though.

There's a lot more that OpenVPN can do - we barely scratched the surface with the simple setup described above - so check the docs for more info.

More DHCP Failover

Earlier I wrote about DHCP failover, but there's another thing I thought I might mention that could be useful to others....

I had a problem in that one of my servers' CMOS clocks tends to be a bit off, maybe 90 seconds. When dhcpd starts up, it is unable to enter a normal failover state because of the time difference between it and the other dhcpd server.

I have

ntpdate_enable="YES"
ntpdate_flags="-b x.x.x.x"

in my /etc/rc.conf, along with running openntpd, but for some reason ntpdate wasn't setting the clock at boot time, and by the time openntpd got the clock tuned up, dhcpd had given up on trying to re-establish failover. Restarting dhcpd by hand later on always worked OK.

I think what was happening was that the network jack this server was plugged into wasn't coming alive quickly enough to be up and running when ntpdate tried to do its thing - something to do with the Cisco switch not having portfast enabled.

I don't have access to do anything about the switches, so I came up with the workaround of adding a simple script /usr/local/etc/rc.d/000.afterboot.sh to schedule a job to run a few minutes after the machine boots - to adjust the clock and restart dhcpd. It looks something like:

#!/bin/sh
at now + 5 minutes <<EOF
/etc/rc.d/ntpdate restart
/usr/local/etc/rc.d/isc-dhcpd restart
EOF

It's a bit of a kludge, but seems to do the trick.