• Running Unifi Controller on a Raspberry Pi

    I recently migrated my Unifi controller from a bhyve instance in Triton to an LXC container on a Raspberry Pi. I won't go into all the reasons why here, but suffice it to say that I had made some choices about my existing network that, while they made sense at the time, didn't really mesh with the way Unifi is intended to operate. I've been running a controller myself for over a year, and I already have a router and several spare Raspberry Pis lying around, so a Cloud Key or Dream Machine wasn't something I was willing to pay for just yet.

    Finding the right distro

    Shopping around for an operating system to run on the Raspberry Pi, I ended up choosing Ubuntu, since FreeBSD isn't supported for the Unifi controller. The main reason I chose Ubuntu is that it has a 64-bit arm64 build, while Raspbian and Alpine do not. Ubuntu also supports the WiFi on the rpi3 and rpi4, which I definitely wanted without having to fiddle with it. I actually ended up not using either of those, but more on that later. I'd also been using Ubuntu for my bhyve controller instance, so I figured getting it set up would be pretty straightforward.

    The Ubuntu images for Raspberry Pi have some really nice features. There are a number of files on the SD card that feed directly into cloud-init, which is something I'm quite accustomed to from using Ubuntu on Triton. This made configuring networking, including wifi, and my ssh keys a cinch.
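
    For the curious, here's roughly how those seed files can be populated before first boot. This is only a sketch: the mount point is a placeholder, the key and wifi credentials are examples, and on the Ubuntu images the FAT boot partition is (as far as I recall) labeled system-boot, with network-config taking netplan-style v2 contents directly.

    # placeholder mount point; adjust to wherever the SD card's boot partition lands
    BOOT=/media/$USER/system-boot

    # ssh keys via ordinary cloud-config
    cat > "$BOOT/user-data" <<'EOF'
    #cloud-config
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...example-key... you@example.com
    EOF

    # wifi via the netplan-style network-config
    cat > "$BOOT/network-config" <<'EOF'
    version: 2
    wifis:
      wlan0:
        dhcp4: true
        access-points:
          "My Home Network":
            password: "walt sent me"
    EOF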

    First Steps

    I ran into a couple of problems initially. First, Unifi's apt repo only has packages for armhf, not arm64. I figured, oh well, it's not like 64-bit is actually giving me much on a system with only 1GB of RAM, so I re-imaged the SD card with 32-bit ubuntu-18, loaded my network-config, booted it up, and started in again, only to run into my second issue: the Unifi controller doesn't run on Ubuntu 18 due to an issue with MongoDB. I could, maybe, have looked around for a ppa with an older version of mongo and apt-pinned it, but that seemed both fairly fragile in the long run and generally not ideal. I remembered that Ubuntu comes with LXD installed by default and decided to give it a try.

    Now, this was my first time using either lxc or lxd. I've used Docker, though never runc, but lxc containers feel more like SmartOS zones than single-process containers like Docker. I did a bit of reading to get a primer on lxc, and with my newfound knowledge I figured out that Ubuntu provides xenial armhf lxc images that support cloud-init (whereas images from other sources often don't). Bingo.

    Creating the container was super simple. Props to the lxc people.

    sudo lxc launch ubuntu:16.04 unifictl
    

    Networking Misadventure

    Having never used lxc before, getting the networking right took me a few tries. By default lxd wants to set up a bridge with a private network and configure IP masquerading, with dnsmasq providing DHCP for everything. I wanted my controller to have a direct network interface for L2 discoverability with Unifi devices. I spent far too much time trying to figure out what the recommended way to do this was. As near as I can tell, if you're not using the default you're basically on your own and you can do whatever you want. And coming from illumos with Crossbow, virtual networking on Linux…let's just say it leaves a lot to be desired.

    Ultimately I went with a bridge attached to the wired interface for the container (since I needed the controller on vlan 1 for managing devices), with wlan0 connecting to my WiFi network (which is on vlan 3). My modified netplan looked like this:

    # This file is generated from information provided by
    # the datasource.  Changes to it will not persist across an instance.
    # To disable cloud-init's network configuration capabilities, write a file
    # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
    # network: {config: disabled}
    network:
        version: 2
        ethernets:
            eth0:
                dhcp4: false
                accept-ra: no
        bridges:
            br0:
                dhcp4: false
                accept-ra: no
                interfaces: [eth0]
                addresses: [172.28.1.10/24]
        wifis:
            wlan0:
                dhcp4: true
                optional: true
                access-points:
                    "My Home Network":
                        password: "walt sent me"
    

    With that set, I needed to reconfigure LXD. Initially I did this by purging and re-installing the packages, but apparently all I needed to do was lxc network delete lxdbr0 to remove the lxd bridge I didn't want so that I could use my own. My final lxd preseed looks like this.

    # lxd init --preseed <<EOF
    config: {}
    networks: []
    storage_pools:
    - config: {}
      description: ""
      name: default
      driver: dir
    profiles:
    - config: {}
      description: ""
      devices:
        eth0:
          name: eth0
          nictype: bridged
          parent: br0
          type: nic
        root:
          path: /
          pool: default
          type: disk
      name: default
    cluster: null
    EOF
    

    This creates a non-clustered (because it's just one rpi), local-only lxd with a storage pool named default, using a plain directory on the filesystem. Other options are btrfs or lvm, neither of which I had set up, nor wanted to deal with configuring. For a Raspberry Pi where I'm probably only ever going to run one container, this is good enough. Maybe the next time I get around to it, zfs will be an option.
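
    To confirm the preseed took, lxd will happily show you what it set up; something along these lines lists the dir-backed pool and the profile with the br0 nic:

    lxc storage list
    lxc storage show default
    lxc profile show default
    lxc network list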

    Next up was setting a static IP, since I don't want the controller changing IPs on the devices and causing an issue with the inform IP. Nearly everything I found said to add a device with lxc config device ... and set raw.lxc values, but additional post-provision manual configuration seems absurd to me. There had to be a better way. This is again where LXD falls short, because there's absolutely no guidance here whatsoever, and the answer really is that if you're not using the default you're completely on your own. However, I did eventually find lxc/lxd#2534 where stgraber says:

    Though, note that the preferred way to do this is through your Linux distribution's own configuration mechanism rather than pre-configure things through raw.lxc.

    For Ubuntu, that'd be through some cloud-init configuration of some sort, that said, if raw.lxc works for you, that's fine too :)

    I suppose in hindsight it should have been obvious to me that I wasn't looking for how to configure container networking, I was looking for how to pass in cloud-init data. Coming from illumos, I'm used to the global zone configuring networking on behalf of zones and not allowing them permission to modify it.

    Since I needed an ubuntu-16 container to run the unifi controller, and the older version of cloud-init in xenial only supports version 1 cloud-config networking, the format was different from what I used to provision the rpi itself.

    #cloud-config
    network:
        version: 1
        config:
          - type: physical
            name: eth0
            subnets:
              - type: static
                ipv4: true
                address: 172.28.1.11
                netmask: 255.255.255.0
                gateway: 172.28.1.1
                control: auto
          - type: nameserver
            address: 8.8.8.8
    

    And finally, launching the container.

    lxc launch ubuntu:16.04 unifictl --config=user.network-config="$(cat network.yml)"
    

    As far as I can tell, there's no way to pass in a filename, hence the command substitution. Since YAML is a superset of JSON, you could also do a one-liner of all JSON. I don't know, choose whichever pain you'd prefer.
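
    If you also want to seed users or packages, the same trick should work for regular cloud-config by setting user.user-data alongside user.network-config. The file names here are just examples:

    # network.yml and cloud-config.yml are example file names
    lxc launch ubuntu:16.04 unifictl \
        --config=user.network-config="$(cat network.yml)" \
        --config=user.user-data="$(cat cloud-config.yml)"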

    At long last, getting the controller installed

    Getting into the running container is as easy as lxc exec unifictl bash, and you're root with a static IP. From here, there are a number of scripts and tutorials for setting up the unifi controller. That seemed like overkill, so I just did the following:

    # apt source
    echo 'deb http://www.ui.com/downloads/unifi/debian stable ubiquiti' > /etc/apt/sources.list.d/100-ubnt-unifi.list
    apt-key adv --keyserver keyserver.ubuntu.com --recv '06E85760C0A52C50'
    
    # install
    apt update && apt install openjdk-8-jre-headless unifi
    
    # Make sure mongo and unifi run
    systemctl enable mongodb
    systemctl enable unifi
    systemctl start mongodb
    systemctl start unifi
    

    At this point I've got unifi running in a container on vlan 1 where it can talk to all of my devices, and my wireless network is on vlan 3.
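
    As a quick sanity check from the Pi itself, something like this should show the container with its static IP and get a response from the controller (the -k is because the controller ships with a self-signed certificate; /status is, as far as I know, an unauthenticated health endpoint):

    lxc list unifictl
    curl -k https://172.28.1.11:8443/status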

    Finalizing the set up with a reverse proxy and SSL certificates

    I like to keep my networks isolated, so I added an nginx reverse proxy on the rpi itself (in what I would call the global zone, though Linux apparently doesn't have a name for it).

    Ubiquiti has documented the ports necessary to access the controller. Ports 8080 and 8880 are plain HTTP, while 8443 and 8843 are HTTPS. Port 6789 is for the mobile speed test and is plain TCP, not HTTP. STUN on port 3478 is only needed by Unifi devices, which are on VLAN 1, so it won't need to be proxied.

    Here's the nginx config that I used. Note that I've elided common settings such as logging and SSL. See https://ssl-config.mozilla.org to generate a suitable SSL configuration for your site, and always use Let's Encrypt if possible.

    # non-ssl ports
    server {
        listen      8080;
        listen      8880;
        listen [::]:8080;
        listen [::]:8880;
        server_name _;
    
        location / {
            proxy_pass_header server;
            proxy_pass_header date;
            proxy_set_header Host $host;
            proxy_set_header Forwarded "for=$remote_addr; proto=$scheme; by=$server_addr";
            proxy_set_header X-Forwarded-For "$remote_addr";
            proxy_pass http://172.28.1.11:$server_port;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    
    }
    
    # ssl ports
    server {
        listen      8443 ssl http2;
        listen      8843 ssl http2;
        listen [::]:8443 ssl http2;
        listen [::]:8843 ssl http2;
        server_name _;
    
        # SSL options go here. See https://ssl-config.mozilla.org
    
        location / {
            proxy_pass_header server;
            proxy_pass_header date;
            proxy_set_header Host $host;
            proxy_set_header Forwarded "for=$remote_addr; proto=$scheme; by=$server_addr";
            proxy_set_header X-Forwarded-For "$remote_addr";
            proxy_pass https://172.28.1.11:$server_port;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
    

    I put this in /etc/nginx/sites-available and symlinked it in sites-enabled, as is normal on Debian/Ubuntu.
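
    For completeness, that amounts to roughly the following, where unifi is just whatever you named the file:

    ln -s /etc/nginx/sites-available/unifi /etc/nginx/sites-enabled/unifi
    nginx -t && systemctl reload nginx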

    As I mentioned, the mobile speed test on port 6789 is not HTTP, so it needs to go outside of the http stanza. Both the sites-enabled and conf.d include directives are inside the http stanza, so the stream stanza needs to go directly in nginx.conf. Append this to the end.

    # Unifi controller mobile speed test
    stream {
        server {
            listen            [::]:6789;
            proxy_pass        172.28.1.11:6789;
            proxy_buffer_size 128k;
        }
    }
    

    I could also have created a container for this, and I may still do that when I have some time.

    The Last Hurdle

    There was one final issue after getting the controller set up. Everything seemed to work great, but since I was replacing a non-unifi switch with a unifi switch I had to do some reconfiguration of the network, including the wireless access points. In the past, whenever wireless was down for whatever reason (e.g., firmware updates) I could disconnect wifi on my phone and access my controller's IPv6 address over the cell network. This worked because my controller, being a bhyve instance, was wired. The Raspberry Pi, however, was connected to my main network over wifi (I didn't include IPv6 on vlan 1 since the unifi devices don't yet support it, or maybe they just don't support it without a USG), so if the wifi was down I couldn't access the controller remotely. Maybe that's something you can live with, but in my experience, when the wifi is down is precisely when I need to access the controller. I needed to change the Pi to use a wired connection for vlan 3 rather than connecting over wifi. To do this, I changed the switch port profile for the pi to All and changed the netplan to add a vlan interface.

    #cloud-config
    # This file is generated from information provided by
    # the datasource.  Changes to it will not persist across an instance.
    # To disable cloud-init's network configuration capabilities, write a file
    # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
    # network: {config: disabled}
    network:
        version: 2
        ethernets:
            eth0:
                dhcp4: false
                accept-ra: no
        bridges:
            br0:
                dhcp4: false
                accept-ra: no
                interfaces: [eth0]
                addresses: [172.28.1.10/24]
        vlans:
            vlan.3:
                id: 3
                link: br0
                dhcp4: true
                accept-ra: yes
    
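    After editing, netplan try gives you a chance to back out automatically if the change cuts you off, and netplan apply makes it stick without a reboot:

    # rolls back if you can't confirm connectivity
    sudo netplan try
    # or apply it directly
    sudo netplan apply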

    I now have the equivalent of a cloud key for the price of a spare raspberry pi I had lying around.

    Conclusion

    To summarize, here are the key components to reproduce this.

    1. Ubuntu 18 raspberry pi image. I used 32-bit, but if I were to do it again I'd try 64-bit first.
    2. Use only wired networking. I still don't know what will happen when I need to update the firmware on the switch. Juniper switches can still pass traffic while the switch control plane is rebooting. Here's hoping the unifi can do the same! Maybe it's unavoidable and I might as well just use wifi. We'll see.
    3. Create your own bridge to give the controller instance an interface directly on the network with no nat. Or, have fun with iptables.
    4. Ubuntu 16 armhf image (lxc launch ubuntu:16.04/armhf, if you're using arm64). You could also use Debian, which might let you use a release newer than ubuntu-16 without the mongo problem, but Xenial is LTS until 2021.

  • The CDDL is Not Incompatible With the GPL

    The CDDL is not incompatible with the GPL. Anybody who says otherwise has an agenda. I've heard all the arguments. They're all bullshit and FUD.

    I, of course, am not a lawyer. But I can read.

    This is written primarily to discuss the situation with the Linux kernel and ZFS.

    First, let's review. The Linux kernel is licensed under the GNU General Public License version 2 (GPL). The effective clause of the GPL is in section 2, as follows (emphasis added):

    These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it.

    Everyone agrees that this is the clause that covers combined works. Even the FSF cites this passage when discussing ZFS and Linux. There is some discussion about executable vs source in section 3, but that's a clarification of how this clause in section two affects binaries. The meat is here in section 2.

    The Linux kernel contains a LICENSES directory with guidance on various licenses, among those are MIT, BSD, etc. MIT and BSD licenses are among those considered "preferred" because they are "GPL compatible". This compatibility comes from the fact that these licenses permit relicensing. That is, the source code of a given module/file may be MIT or BSD, but the executable form is considered to be GPL licensed. Everything is ok. Everyone is ok with this.

    What happens then if I extract the Linux source code and find files with MIT, BSD, or other licenses? May I use those files under their stated license? Or am I restricted to using those files under the terms of the GPL just because I obtained the source from a GPL binary I previously obtained a copy of? This is, of course, silly.

    Casual perusal (i.e., using cscope) of a git clone, current as of this writing, shows there to be 1679 BSD licensed files and 2344 MIT licensed files in the Linux kernel tree. The argument that one must use these files under the terms of the GPL instead of their stated license, just because they were obtained as part of a bundle containing GPL licensed code is absurd in the highest degree. What would we say then? That a file originally authored by the FreeBSD project, is sometimes only covered by the BSD license and sometimes only covered by the GPL depending on whether you downloaded it from FreeBSD or from RedHat? The notion is absolutely ridiculous, and deserves to be ridiculed.

    Now, let's look at the CDDL. The CDDL section 3.5 states (emphasis added):

    You may distribute the Executable form of the Covered Software under the terms of this License or under the terms of a license of Your choice, which may contain terms different from this License, provided that You are in compliance with the terms of this License and that the license for the Executable form does not attempt to limit or alter the recipient's rights in the Source Code form from the rights set forth in this License.

    To reiterate, executable forms of CDDL source code can be under any license you want. So what happens when you compile and link modules of which some are GPL and some are CDDL? Obviously the resulting binary is licensed under the GPL, because the GPL requires it, and the CDDL allows it.

    What then of the obligations of the CDDL and the GPL? They both require that source code be made available. Even if the CDDL didn't require it, the CDDL licensed source files must be provided to comply with the terms of the GPL. In supplying the original source code you have complied with both licenses. And once those files are obtained, they may be reused, copied, modified, etc. under the terms of the CDDL, just as files licensed MIT or BSD may be used under the terms of their stated license. If this is not the case, and the source files must be licensed only under the GPL as the FSF claims, then the GPL cannot be compatible with any other license, and all files not marked as GPL licensed in the Linux kernel are in violation of the GPL. A veritable license roach motel.

    I hope we can put this foolish nonsense to rest. I have no idea why the FSF erroneously claims that CDDL code is incompatible with the GPL while also maintaining that MIT/BSD code is compatible (although I highly suspect it's because they fear the CDDL, or perhaps they fear Sun/Oracle and refuse to back down to save face). But it seems that the rest of the community goes along with it because they don't want to offend RMS or the FSF.

    While it's true that I am not a lawyer, that does not preclude me from being right.

  • Good bye, Carrie

    I can't believe she's gone.

    Princesses

  • Running Containers in Production, no really!

    Last week I presented on Triton at LOPSA LA and UUASC.

    I've got video this time!

    And slides, though most of the talk was live demos, so the slides leave a bit to be desired.

  • Creating ECDSA SSL Certificates in 3 Easy Steps

    I've previously written about creating SSL certificates. Times have changed, and ECC is the way of the future. Today I'm going to revisit that post with creating ECDSA SSL certificates as well as how to get your certificate signed by Let's Encrypt.

    Generating an ECDSA Key

    Since this information doesn't seem to be readily available in many places, I'm putting it here. This is the fast track to getting an ECDSA SSL certificate.

    openssl ecparam -out private.key -name prime256v1 -genkey
    
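    You can double-check that the key is well formed and on the expected curve before moving on:

    openssl ec -in private.key -noout -text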

    Generating the Certificate Signing Request

    Generating the csr is generally done interactively.

    openssl req -new -sha256 -key private.key -out server.csr
    

    Fill out the requested information. Use your two letter country code. Use the full name of your state. Locality means city. Organization Name and Organizational Unit Name seem rather self explanatory (they can be the same). Common name is the fully qualified domain name of the server or virtual server you are creating a certificate for. The rest you can leave blank.

    Non-interactive CSR generation

    You can avoid interactive csr creation by supplying the subject information. This will work fine as long as you're not using subjectAltNames.

    openssl req -new -sha256 -key private.key -out domain.com.csr \
        -subj "/C=US/ST=California/L=San Diego/O=Digital Elf/CN=digitalelf.net"
    

    Non-interactive CSR generation with subjectAltName

    Unfortunately, certificates with subjectAltName currently must be created with a config file. This is disappointing on many levels. You'll need the following minimum config.

    [req]
    distinguished_name = req_distinguished_name
    req_extensions = v3_req
    
    [req_distinguished_name]
    
    [v3_req]
    basicConstraints = CA:FALSE
    keyUsage = nonRepudiation, digitalSignature, keyEncipherment
    subjectAltName = @alt_names
    
    [alt_names]
    DNS.1 = digitalelf.net
    DNS.2 = www.digitalelf.net
    

    And then create the csr:

    openssl req -new -sha256 -key private.key -out domain.com.csr \
        -subj "/C=US/ST=California/  L=San Diego/O=Digital Elf/CN=digitalelf.net" \
        -config csr.cnf
    
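    It's worth verifying that the subjectAltName extension actually made it into the request before you submit it:

    openssl req -in domain.com.csr -noout -text | grep -A 1 'Subject Alternative Name'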

    Signing your certificate

    At this point you'll want your cert signed by a real Certificate Authority. I suggest Let's Encrypt because you can get certificates for free.

    The official client for Let's Encrypt is certbot. I've never used it.

    My preferred client is dehydrated because it doesn't need anything more than the base system, and works on SmartOS, FreeBSD, macOS (Darwin), and Linux. See the documentation on usage.

    I've also created make-cert which wraps dehydrated, pre-configures most options, but requires node.js if you don't already have a configured web server. I use this simply because it makes dehydrated easier to deploy.

    Using a traditional Certificate Authority

    If that doesn't work for you because you can't run the letsencrypt client on your web server, StartSSL is also free. If you don't want a free one, you should have no trouble finding one on your own. Whichever you pick, give them your server.csr file. They'll give you back a certificate.

    Self-Signed Certificate

    If you want a self-signed certificate instead, run this:

    openssl x509 -req -sha256 -days 365 -in server.csr -signkey private.key -out public.crt
    

    You can also create a self-signed ECDSA certificate in two steps.

    openssl ecparam -out www.example.com.key -name prime256v1 -genkey
    openssl req -new -days 365 -nodes -x509 \
        -subj "/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com" \
        -key www.example.com.key -out www.example.com.cert
    

  • Star Wars

    I have a different relationship with Star Wars than most people. Star Wars was originally released in theaters forty-seven days after I was born. The Empire Strikes Back was the first movie I saw in a cinema. I stood on the seat, transfixed by the screen from the crawl to the credits. Return of the Jedi was the first movie I remember seeing in theaters. I've seen A New Hope something on the order of two thousand times. Three times in my life I've watched either ANH or the entire trilogy at least once per day for more than a year. Then there's all the other times I've seen it outside of that. I've been known to win Star Wars Trivial Pursuit on a single turn. I can recite the dialog of the entire trilogy from memory. Star Wars was an anchor for me through a turbulent childhood.

    I'm not one of those crazies though. I'm not a collector. I have some Star Wars stuff, but it's not overwhelming. I've enjoyed the expanded universe, but it's not the same. The EU to me was, and still is I suppose, something like fanfic. A place to go to think about Star Wars when all of Star Wars had already been consumed. For over twenty years Star Wars was a constant in my life, before the dark times, before the prequels.

    I was very excited for The Phantom Menace. I saw it on opening day, the first showing of the day in San Diego. Afterward, less so. The prequels are horribly bad. I took comfort in not being alone in that opinion. But now there's a new expanse for Star Wars. Disney has made statements about producing one new Star Wars movie per year. And for better or for worse, Star Wars is no longer simply a trilogy.

    I also am a fan of Star Trek. I am possibly going through what many Star Trek fans went through in 1987. Having watched The Cage, Picard is much closer to Pike than Kirk is. The Next Generation is more the show that Gene Roddenberry wanted to create than the original series was. The architecture of TNG traces back to Gene's original design for Star Trek before the studios got involved. And Star Trek has now lived more without its creator than with. There is phenomenally good Trek (City on the Edge of Forever, The Measure of a Man, or The Inner Light) and there is bad Trek (most of DS9) and really bad Trek (Spock's Brain, seasons 2-4 of Enterprise). But there is a lot of Trek. There's almost 750 hours of Star Trek canon. There are approximately 12 hours (14 after this weekend) of Star Wars. I'm able to watch and rewatch Star Trek, enjoying the good episodes and lamenting or skipping the bad ones. I don't regard all of Star Trek canon as canon. Starting this week, I will be doing the same with Star Wars.

  • illumos: The State of Fully Modern Unix

    Last week I presented on illumos at LOPSA San Diego.

  • IPv6 the SmartOS Way

    Update: As of 20150917T235937Z full support for IPv6 has been added to vmadm with the added ips and gateways parameters. If you're using SmartDataCenter, these parameters won't (yet) be added automatically, so the following may be useful to you. But if you're using SmartOS, see the updated SmartOS IPv6 configuration wiki page.


    There have been a lot of requests for IPv6 support in SmartOS. I'm happy to say that there is now partial support for IPv6 in SmartOS, though it's not enabled by default and there may be some things you don't expect. This essay is specific to running stand-alone SmartOS systems on bare metal. This doesn't apply to running instances in the Joyent Cloud or for private cloud SDC.

    Update: I now have a project up on Github that fully automates enabling SLAAC IPv6 on SmartOS. It works for global and non-global zones and automatically identifies all interfaces available, regardless of the driver name.

    First, some definitions so we're all speaking the same language.

    • Compute Node (CN): A non-virtualized physical host.
    • Global Zone (GZ): The Operating System instance in control of all real hardware resources.
    • OS Zone: A SmartMachine zone using OS virtualization. This is the same thing as a Solaris zone.
    • KVM Zone: A zone running a KVM virtual machine using hardware emulation.
    • Compute Instance (CI): A SmartMachine zone or KVM virtual machine.
    • Smart Data Center (SDC): Joyent's Smart Data Center private cloud product. SDC backends the Joyent Cloud.

    There are two modes of networking with SmartOS. The default is for the global zone to control the address and routes. A static IP is assigned in the zone definition when it's created, along with a netmask and default gateway, and network access is restricted to the assigned IP to prevent tenants from causing shenanigans on your network. The other is to set the IP to DHCP, enable allow_ip_spoofing and be done with it. The former mode is preferred for public cloud providers (such as Joyent) and the latter may be preferred for private cloud providers (i.e., enterprises) or small deployments where all tenants are trusted. For example, at home where I have only a single CN and I'm the only operator, I just use DHCP and allow_ip_spoofing.

    By far the easiest way to permit IPv6 in a SmartOS zone is to have router-advertisements on your network and enable allow_ip_spoofing. As long as the CI has IPv6 enabled (see below for enabling IPv6 within the zone) you're done. But some don't want to abandon the protection that anti-spoofing provides.

    Whether you use static assignment or DHCP in SmartOS, the CI (and probably you too) doesn't care what the IP is. In fact, KVM zones with static IP configuration are configured for DHCP with the Global Zone acting as the DHCP server. If you have another DHCP server on your network it will never see the requests and they will not conflict. In SDC, entire networks are allocated to SDC. By default SDC itself will assign IPs to CIs. In the vast majority of cases it doesn't matter which IP a host has, just as long as it has one.

    Which brings us to IPv6. It's true that in SmartOS when a NIC is defined for a CI you can't define an IPv6 address in the ip field (in my testing this is because netmask is a required parameter for static address assignment, but there's no valid way to express an IPv6 netmask that is acceptable to vmadm). But like it or not, IPv4 is still a required part of our world. A host without some type of IPv4 network access will be extremely limited. There's also no ip6 field.

    But there doesn't need to be. Remembering that in almost all cases we don't care which IP so long as there is one, IPv6 can be enabled without allowing IP spoofing by adding IPv6 addresses to the allowed_ips property of the NIC. The most common method of IPv6 assignment is SLAAC. If you're using SLAAC then you neither want, nor need SmartOS handing out IPv6 addresses. The global and link-local addresses can be derived from the mac property of NIC of the CI. Add these to allowed_ips property of the NIC definition and the zone definition is fully configured for IPv6 (you don't need an IPv6 gateway definition because it will be picked up automatically by router-advertisements).

    Permitting IPv6 in a Zone

    Here's an example nic from a zone I have with IPv6 addresses allowed. Note that both the derived link-local and global addresses are permitted.

    [root@wasp ~]# vmadm get 94ff50ad-ac74-46ac-8b9d-c05ddf55f434 | json -a nics
    [
      {
        "interface": "net0",
        "mac": "72:9c:d5:34:47:59",
        "nic_tag": "external",
        "gateway": "198.51.100.1",
        "allowed_ips": [
          "fe80::709c:d5ff:fe34:4759",
          "2001:db8::709c:d5ff:fe34:4759"
        ],
        "ip": "198.51.100.37",
        "netmask": "255.255.0.0",
        "model": "virtio",
        "primary": true
      }
    ]
    

    In my workflow, I create zones with autoboot set to false, add the IPv6 addresses based on the mac assigned by vmadm, then enable autoboot and boot the zone. This is scripted of course, so it's a single atomic action.
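
    The gist of that script looks something like the sketch below: derive the EUI-64 suffix from the assigned mac, allow the link-local and global addresses, then boot. The 2001:db8::/64 prefix is a placeholder and error handling is omitted.

    #!/bin/bash
    # Sketch: allow the SLAAC-derived IPv6 addresses on a zone's first nic.
    uuid="$1"
    prefix="2001:db8::"    # placeholder; substitute your own /64

    mac=$(vmadm get "$uuid" | json nics | json -a mac | head -1)
    IFS=: read -r a b c d e f <<< "$mac"

    # EUI-64: flip the universal/local bit of the first octet, insert ff:fe
    a=$(printf '%02x' $(( 0x$a ^ 2 )))
    suffix="${a}${b}:${c}ff:fe${d}:${e}${f}"

    vmadm update "$uuid" <<EOF
    {"update_nics": [{"mac": "$mac", "allowed_ips": ["fe80::${suffix}", "${prefix}${suffix}"]}]}
    EOF

    vmadm update "$uuid" autoboot=true
    vmadm boot "$uuid"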

    Enabling IPv6 in a SmartMachine Instance

    Once the zone definition has the IPv6 address(es) allowed it needs to be enabled in the zone. For KVM images, most vended by Joyent will already have IPv6 enabled (even Ubuntu Certified images in Joyent Cloud will boot with link-local IPv6 addresses, though they will be mostly useless). For SmartOS instances you will need to enable it.

    In order to enable IPv6 in a SmartOS zone you need to enable ndp and use ipadm create-addr.

    svcadm enable ndp
    ipadm create-addr -t -T addrconf net0/v6
    

    Instead of doing this manually I've taken the extra step and created an SMF manifest for IPv6.

    I have a user-script that downloads this from github, saves it to /opt/custom/smf/ipv6.xml and restarts manifest-import. After the import is finished, IPv6 can be enabled with svcadm. Using the -r flag enables all dependencies (i.e., ndp) as well.

    svcadm enable -r site/ipv6
    

    Enabling the service is also done as part of the user-script.

    If you do actually want specific static IPv6 assignment, do everything I've described above. Then, in addition to that, use mdata-get sdc:nics to pull the NIC definition, extract the IPv6 addresses from allowed_ips, and explicitly assign them. I admit that for those who want explicit static addresses this is less than ideal, but with a little effort it can be scripted and made completely automatic.
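
    Something along these lines inside the zone would do it. Treat it as a sketch: it assumes the first nic and a /64 prefix, skips the link-local address (addrconf already handles that), and leans on the json lookup syntax to pull allowed_ips out of the metadata.

    #!/usr/bin/bash
    # Sketch: statically assign the allowed global IPv6 addresses from zone metadata.
    n=0
    for ip in $(mdata-get sdc:nics | json 0.allowed_ips | json -a); do
        case "$ip" in
            fe80:*) continue ;;   # link-local is handled by addrconf
            *:*) ipadm create-addr -T static -a "${ip}/64" "net0/v6static${n}"
                 n=$((n + 1)) ;;
        esac
    done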

  • A Primer on CFEngine 3.6 Autorun

    Update: For CFEngine 3.6.2.

    CFEngine recently released version 3.6, which makes deploying and using cfengine easier than ever before. The greatest improvement in 3.6, in my opinion, is by far the autorun feature.

    I'm going to demonstrate how to get a policy server set up with autorun properly configured.

    Installing CFEngine 3.6.2

    The first step is to install the cfengine package, which I'm not going to cover. But I will say that I recommend using an existing repository. Instructions on how to set this up are here. Or you can get binary packages here. If you're not using Linux (like myself) you can get binary packages from cfengineers.net. Or for SmartOS try my repository here (IPv6 only). If you're inclined to build from source I expect that you don't need my help with that.

    Having installed the cfengine package, the first thing to do is to generate keys. The keys may have already been generated for you, but running the command again won't harm anything.

    /var/cfengine/bin/cf-key
    

    Setting up Masterfiles and Enabling Autorun

    Next you'll need a copy of masterfiles. If you downloaded a binary community package from cfengine.com you'll find a copy in /var/cfengine/share/CoreBase/masterfiles.

    As of 3.6 the policy files have been decoupled from the core source code distribution, so if you're getting cfengine from somewhere else it may not come with CoreBase. In that case you'll want to get a copy of the masterfiles repository at the tip of the branch for your version of CFEngine (in this case, 3.6.2), not from the master branch where the main development happens. There's already development going on for 3.7 in master, so for consistency and repeatability grab an archive of 3.6.2. Going this route you also need a copy of the cfengine core source code (although you do not need to build it).

    curl -LC - -o masterfiles-3.6.2.tar.gz https://github.com/cfengine/masterfiles/archive/3.6.2.tar.gz
    curl -LC - -o core-3.6.2.tar.gz https://github.com/cfengine/core/archive/3.6.2.tar.gz
    tar zxf masterfiles-3.6.2.tar.gz
    tar zxf core-3.6.2.tar.gz
    

    You'll now have the main masterfiles distribution unpacked. This isn't something that you can just copy into place, you need to run make to install it.

    cd masterfiles-3.6.2
    ./autogen.sh --with-core=../core-3.6.2
    make install INSTALL=/opt/local/bin/install datadir=/var/cfengine/masterfiles
    

    Note: Here I've included the path to install. This is required for SmartOS. For other systems you can probably just run make install.

    At this point it's time to bootstrap the server to itself.

    /var/cfengine/bin/cf-agent -B <host_ip_address>
    

    You should get a message here saying that the host has been successfully bootstrapped and a report stating 'I'm a policy hub.'

    To enable autorun simply make the following change in def.cf.

    -      "services_autorun" expression => "!any";
    +      "services_autorun" expression => "any";
    

    Note: There's a bug in masterfiles-3.6.0, so make sure to use at least 3.6.2.

    Using Autorun

    With the default configuration, autorun will load any files in services/autorun/ and execute bundles tagged autorun. At this point you can see autorun working for yourself.

    /var/cfengine/bin/cf-agent -K -f update.cf
    /var/cfengine/bin/cf-agent -Kv
    

    Here I've enabled verbose mode. You can see in the verbose output that autorun is working.

    Now, like Han Solo, I've made a couple of special modifications myself. I also like to leave the default files in pristine condition as much as possible, which helps when upgrading. This is why I've made only a very few changes to the default policies. It also means that instead of using services/autorun.cf I'll create a new autorun entry point. This entry point is the only bundle executed by the default autorun.

    I've saved this to services/autorun/digitalelf.cf

    body file control
    {
       agent::
          inputs => { @(digitalelf_autorun.inputs) };
    }
    
    bundle agent digitalelf_autorun
    {
      meta:
          "tags" slist => { "autorun" };
    
      vars:
          "inputs" slist => findfiles("$(sys.masterdir)/services/autorun/*.cf");
          "bundle" slist => bundlesmatching(".*", "digitalelf");
    
      methods:
          "$(bundle)"
              usebundle => "$(bundle)",
              ifvarclass => "$(bundle)";
    
      reports:
        inform_mode::
          "digitalelf autorun is executing";
          "$(this.bundle): found bundle $(bundle) with tag 'digitalelf'";
    }
    

    This works exactly the same as autorun.cf, except that it looks for bundles matching digitalelf and only runs them if the bundle name matches a defined class. Also note that enabling inform_mode (i.e., cf-agent -I) will report which bundles have been discovered for automatic execution.

    For example I have the following services/autorun/any.cf.

    bundle agent any {
    
    meta:
    
        # You must uncomment this line to enable autorun.
        "tags" slist => { "digitalelf" };
    
    vars:
    
        linux::
            "local_bin_dir" string => "/usr/local/bin/";
    
        smartos::
            "local_bin_dir" string => "/opt/local/bin/";
    
    files:
    
        "/etc/motd"
            edit_line => insert_lines("Note: This host is managed by CFEngine."),
            handle => "declare_cfengine_in_motd",
            comment => "Make sure people know this host is managed by cfengine";
    
    reports:
    
        inform_mode::
            "Bundle $(this.bundle) is running via autorun.";
    }
    

    Since the tag is digitalelf it will be picked up by services/autorun/digitalelf.cf and because bundle name is any, it will match the class any in the methods promise, and therefore run. Again, enabling inform_mode (cf-agent -I) will report that this bundle is in fact being triggered.

    You can drop in bundles that match any existing hard class and it will automatically run. Want all linux or all debian hosts to have a particular configuration? There's a bundle for that.

    Extending Autorun

    You may already be familiar with my cfengine layout for dynamic bundlesequence and bundle layering. My existing dynamic bundlesequence is largely obsolete with autorun, but I still extensively use bundle stack layering. I've incorporated the classifications from bundle common classify directly into the classes: promises of services/autorun/digitalelf.cf. I can trigger bundles by discovered hard classes or with any user defined class created in bundle agent digitalelf_autorun. By using autorun bundles based on defined classes you can define classes from any source. Hostname (like I do), LDAP, DNS, from the filesystem, network API calls, etc.


  • Using 2048-bit DSA Keys With OpenSSH

    There's a long-running debate about which is better for SSH public key authentication, RSA or DSA keys, with "better" in this context meaning "harder to crack/spoof the identity of the user." This generally comes down in favor of RSA because ssh-keygen can create RSA keys up to 2048 bits, while the DSA keys it creates must be exactly 1024 bits.

    Here's how to use openssl to create 2048-bit DSA keys that can be used with OpenSSH.

    (umask 077 ; openssl dsaparam -genkey 2048 | openssl dsa -out ~/.ssh/id_dsa)
    ssh-keygen -y -f ~/.ssh/id_dsa > ~/.ssh/id_dsa.pub
    

    After this, add the contents of id_dsa.pub to ~/.ssh/authorized_keys on remote hosts and remove your RSA keys (if any). I'm not recommending either RSA or DSA keys. You need to make that choice yourself. But key length is no longer an issue. We can now go back to having this debate on the merits of the math.
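
    If your system ships ssh-copy-id, that step is a one-liner (otherwise appending the .pub file to authorized_keys over ssh works just as well):

    ssh-copy-id -i ~/.ssh/id_dsa.pub user@remote-host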
