• The CDDL is Not Incompatible With the GPL

    The CDDL is not incompatible with the GPL. Anybody who says otherwise has an agenda. I've heard all the arguments. They're all bullshit and FUD.

    I, of course, am not a lawyer. But I can read.

    This is written primarily to discuss the situation with the Linux kernel and ZFS.

    First, let's review. The Linux kernel is licensed under the GNU General Public License version 2 (GPL). The effective clause of the GPL is in section 2, as follows (emphasis added):

    These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it.

    Everyone agrees that this is the clause that covers combined works. Even the FSF cites this passage when discussing ZFS and Linux. There is some discussion about executable vs. source in section 3, but that's a clarification of how this clause in section 2 affects binaries. The meat is here in section 2.

    The Linux kernel contains a LICENSES directory with guidance on various licenses, among them MIT, BSD, etc. The MIT and BSD licenses are among those considered "preferred" because they are "GPL compatible". This compatibility comes from the fact that these licenses permit relicensing. That is, the source code of a given module/file may be MIT or BSD licensed, but the executable form is considered to be GPL licensed. Everything is ok. Everyone is ok with this.

    What happens then if I extract the Linux source code and find files with MIT, BSD, or other licenses? May I use those files under their stated license? Or am I restricted to using those files under the terms of the GPL just because I obtained the source from a GPL binary I previously obtained a copy of? This is, of course, silly.

    Casual perusal (i.e., using cscope) of a git clone, current as of this writing, shows there to be 1679 BSD licensed files and 2344 MIT licensed files in the Linux kernel tree. The argument that one must use these files under the terms of the GPL instead of their stated license, just because they were obtained as part of a bundle containing GPL licensed code, is absurd in the highest degree. What would we say then? That a file originally authored by the FreeBSD project is sometimes covered only by the BSD license and sometimes only by the GPL, depending on whether you downloaded it from FreeBSD or from RedHat? The notion is absolutely ridiculous, and deserves to be ridiculed.
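    If you want to count for yourself, newer kernel trees tag each file with an SPDX license identifier, so a rough tally is one grep away. This sketch builds a throwaway toy tree so it's self-contained; run the greps inside a real kernel checkout to reproduce the numbers (tag strings vary, so counts are approximate):

```shell
# Toy tree standing in for a kernel checkout
demo=$(mktemp -d)
printf '// SPDX-License-Identifier: MIT\n'          > "$demo/a.c"
printf '// SPDX-License-Identifier: BSD-3-Clause\n' > "$demo/b.c"
printf '// SPDX-License-Identifier: GPL-2.0\n'      > "$demo/c.c"
# Count files per license by their SPDX tag
grep -rl 'SPDX-License-Identifier: MIT' "$demo" | wc -l
grep -rl 'SPDX-License-Identifier: BSD' "$demo" | wc -l
```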

    Now, let's look at the CDDL. The CDDL section 3.5 states (emphasis added):

    You may distribute the Executable form of the Covered Software under the terms of this License or under the terms of a license of Your choice, which may contain terms different from this License, provided that You are in compliance with the terms of this License and that the license for the Executable form does not attempt to limit or alter the recipient's rights in the Source Code form from the rights set forth in this License.

    To reiterate, executable forms of CDDL source code can be under any license you want. So what happens when you compile and link modules of which some are GPL and some are CDDL? Obviously the resulting binary is licensed under the GPL, because the GPL requires it, and the CDDL allows it.

    What then of the obligations of the CDDL and the GPL? They both require source code to be made available. Even if the CDDL didn't require it, the CDDL licensed source files must be provided to comply with the terms of the GPL. In supplying the original source code you have complied with both licenses. And once those files are obtained, they may be reused, copied, modified, etc. under the terms of the CDDL, just as files licensed MIT or BSD may be used under the terms of their stated license. If this is not the case, and the source files must be licensed only under the GPL as the FSF claims, then the GPL cannot be compatible with any other license and all files not marked as licensed GPL in the Linux kernel are in violation of the GPL. A veritable license roach motel.

    I hope we can put this foolish nonsense to rest. I have no idea why the FSF erroneously claims that CDDL code is incompatible with the GPL while also maintaining that MIT/BSD code is compatible (although I highly suspect it's because they fear the CDDL, or perhaps they fear Sun/Oracle and refuse to back down to save face). But it seems that the rest of the community goes along with it because they don't want to offend RMS or the FSF.

    While it's true that I am not a lawyer, that does not preclude me from being right.

  • Good bye, Carrie

    I can't believe she's gone.


  • Running Containers in Production, no really!

    Last week I presented on Triton at LOPSA LA and UUASC.

    I've got video this time!

    And slides, though most of the talk was live demos, so the slides leave a bit to be desired.

  • Creating ECDSA SSL Certificates in 3 Easy Steps

    I've previously written about creating SSL certificates. Times have changed, and ECC is the way of the future. Today I'm going to revisit that post, covering how to create ECDSA SSL certificates as well as how to get your certificate signed by Let's Encrypt.

    Generating an ECDSA Key

    Since this information doesn't seem to be readily available many places, I'm putting it here. This is the fast track to getting an ECDSA SSL certificate.

    openssl ecparam -out private.key -name prime256v1 -genkey

    Generating the Certificate Signing Request

    Generating the CSR is generally done interactively.

    openssl req -new -sha256 -key private.key -out server.csr

    Fill out the requested information. Use your two letter country code. Use the full name of your state. Locality means city. Organization Name and Organizational Unit Name seem rather self explanatory (they can be the same). Common name is the fully qualified domain name of the server or virtual server you are creating a certificate for. The rest you can leave blank.
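    Before submitting the CSR anywhere, it's worth a quick sanity check with the same openssl tooling (a sketch using the filenames from above):

```shell
# Confirm the key really is on the P-256 curve
openssl ec -in private.key -noout -text | grep -E 'ASN1 OID|NIST CURVE'
# Confirm the CSR's self-signature verifies and show its subject
openssl req -in server.csr -noout -verify -subject
```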

    Non-interactive CSR generation

    You can avoid interactive CSR creation by supplying the subject information. This will work fine as long as you're not using subjectAltNames.

    openssl req -new -sha256 -key private.key -out domain.com.csr \
        -subj "/C=US/ST=California/L=San Diego/O=Digital Elf/CN=digitalelf.net"

    Non-interactive CSR generation with subjectAltName

    Unfortunately, certificates with subjectAltName currently must be generated with a config file. This is disappointing on many levels. You'll need the following minimal config.

    [req]
    distinguished_name = req_distinguished_name
    req_extensions = v3_req

    [req_distinguished_name]
    # empty; the subject is supplied with -subj

    [v3_req]
    basicConstraints = CA:FALSE
    keyUsage = nonRepudiation, digitalSignature, keyEncipherment
    subjectAltName = @alt_names

    [alt_names]
    DNS.1 = digitalelf.net
    DNS.2 = www.digitalelf.net

    And then create the csr:

    openssl req -new -sha256 -key private.key -out domain.com.csr \
        -subj "/C=US/ST=California/L=San Diego/O=Digital Elf/CN=digitalelf.net" \
        -config csr.cnf
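    It's worth confirming the names actually landed in the CSR, since a mis-wired config can quietly produce a CSR without them:

```shell
# Print the SAN extension from the CSR; you should see each DNS entry
openssl req -in domain.com.csr -noout -text |
    grep -A1 'Subject Alternative Name'
```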

    Signing your certificate

    At this point you may want your cert signed by a real Certificate Authority. I suggest Let's Encrypt because you can get certificates for free.

    The official client for Let's Encrypt is certbot. I've never used it.

    My preferred client is dehydrated because it doesn't need anything more than the base system, and works on SmartOS, FreeBSD, macOS (Darwin), and Linux. See the documentation on usage.

    I've also created make-cert which wraps dehydrated, pre-configures most options, but requires node.js if you don't already have a configured web server. I use this simply because it makes dehydrated easier to deploy.

    Using a traditional Certificate Authority

    If that doesn't work for you because you can't run the letsencrypt client on your web server, StartSSL is also free. If you don't want a free one, you should have no trouble finding one on your own. Whichever you pick, give them your server.csr file. They'll give you back a certificate.

    Self-Signed Certificate

    If you want a self signed certificate instead, run this:

    openssl x509 -req -sha256 -days 365 -in server.csr -signkey private.key -out public.crt

    You can also create a self-signed ECDSA certificate in two steps.

    openssl ecparam -out www.example.com.key -name prime256v1 -genkey
    openssl req -new -days 365 -nodes -x509 \
        -subj "/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com" \
        -key www.example.com.key -out www.example.com.cert

  • Star Wars

    I have a different relationship with Star Wars than most people. Star Wars was originally released in theaters forty-seven days after I was born. The Empire Strikes Back was the first movie I saw in a cinema. I stood on the seat, transfixed by the screen from the crawl to the credits. Return of the Jedi was the first movie I remember seeing in theaters. I've seen A New Hope something on the order of two thousand times. Three times in my life I've watched either ANH or the entire trilogy at least once per day for more than a year. Then there are all the other times I've seen it outside of that. I've been known to win Star Wars Trivial Pursuit on a single turn. I can recite the dialog of the entire trilogy from memory. Star Wars was an anchor for me through a turbulent childhood.

    I'm not one of those crazies though. I'm not a collector. I have some Star Wars stuff, but it's not overwhelming. I've enjoyed the expanded universe, but it's not the same. The EU to me was, and still is I suppose, something like fanfic. A place to go to think about Star Wars when all of Star Wars had already been consumed. For over twenty years Star Wars was a constant in my life, before the dark times, before the prequels.

    I was very excited for The Phantom Menace. I saw it on opening day, the first showing of the day in San Diego. Afterward, less so. The prequels are horribly bad. I took comfort in not being alone in that opinion. But now there's a new expanse for Star Wars. Disney has made statements about producing one new Star Wars movie per year. And for better or for worse, Star Wars is no longer simply a trilogy.

    I also am a fan of Star Trek. I am possibly going through what many Star Trek fans went through in 1987. Having watched The Cage, I see that Picard is much closer to Pike than to Kirk. The Next Generation is more the show that Gene Roddenberry wanted to create than the original series was. The architecture of TNG traces back to Gene's original design for Star Trek before the studios got involved. And Star Trek has now lived more without its creator than with. There is phenomenally good Trek (City on the Edge of Forever, The Measure of a Man, or The Inner Light) and there is bad Trek (most of DS9) and really bad Trek (Spock's Brain, seasons 2-4 of Enterprise). But there is a lot of Trek. There's almost 750 hours of Star Trek canon. There are approximately 12 hours (14 after this weekend) of Star Wars. I'm able to watch and rewatch Star Trek, enjoying the good episodes and lamenting or skipping the bad ones. I don't regard all of Star Trek canon as canon. Starting this week, I will be doing the same with Star Wars.

  • illumos: The State of Fully Modern Unix

    Last week I presented on illumos at LOPSA San Diego.

  • IPv6 the SmartOS Way

    Update: As of 20150917T235937Z full support for IPv6 has been added to vmadm with the added ips and gateways parameters. If you're using SmartDataCenter, these parameters won't (yet) be added automatically, so the following may be useful to you. But if you're using SmartOS, see the updated SmartOS IPv6 configuration wiki page.

    There have been a lot of requests for IPv6 support in SmartOS. I'm happy to say that there is now partial support for IPv6 in SmartOS, though it's not enabled by default and there may be some things you don't expect. This essay is specific to running stand-alone SmartOS systems on bare metal. This doesn't apply to running instances in the Joyent Cloud or for private cloud SDC.

    Update: I now have a project up on Github that fully automates enabling SLAAC IPv6 on SmartOS. It works for global and non-global zones and automatically identifies all interfaces available, regardless of the driver name.

    First, some definitions so we're all speaking the same language.

    • Compute Node (CN): A non-virtualized physical host.
    • Global Zone (GZ): The Operating System instance in control of all real hardware resources.
    • OS Zone: A SmartMachine zone using OS virtualization. This is the same thing as a Solaris zone.
    • KVM Zone: A zone running a KVM virtual machine using hardware emulation.
    • Compute Instance (CI): A SmartMachine zone or KVM virtual machine.
    • Smart Data Center (SDC): Joyent's Smart Data Center private cloud product. SDC is the backend of the Joyent Cloud.

    There are two modes of networking with SmartOS. The default is for the global zone to control the address and routes. A static IP is assigned in the zone definition when it's created, along with a netmask and default gateway, and network access is restricted to the assigned IP to prevent tenants from causing shenanigans on your network. The other is to set the IP to DHCP, enable allow_ip_spoofing, and be done with it. The former mode is preferred for public cloud providers (such as Joyent) and the latter may be preferred for private cloud providers (i.e., enterprises) or small deployments where all tenants are trusted. For example, at home where I have only a single CN and I'm the only operator, I just use DHCP and allow_ip_spoofing.

    By far the easiest way to permit IPv6 in a SmartOS zone is to have router-advertisements on your network and enable allow_ip_spoofing. As long as the CI has IPv6 enabled (see below for enabling IPv6 within the zone) you're done. But some don't want to abandon the protection that anti-spoofing provides.

    Whether you use static assignment or DHCP in SmartOS, the CI (and probably you too) doesn't care what the IP is. In fact, KVM zones with static IP configuration are configured for DHCP with the Global Zone acting as the DHCP server. If you have another DHCP server on your network it will never see the requests and they will not conflict. In SDC, entire networks are allocated to SDC. By default SDC itself will assign IPs to CIs. In the vast majority of cases it doesn't matter which IP a host has, just as long as it has one.

    Which brings us to IPv6. It's true that in SmartOS when a NIC is defined for a CI you can't define an IPv6 address in the ip field (in my testing this is because netmask is a required parameter for static address assignment, but there's no valid way to express an IPv6 netmask that is acceptable to vmadm). But like it or not, IPv4 is still a required part of our world. A host without some type of IPv4 network access will be extremely limited. There's also no ip6 field.

    But there doesn't need to be. Remembering that in almost all cases we don't care which IP so long as there is one, IPv6 can be enabled without allowing IP spoofing by adding IPv6 addresses to the allowed_ips property of the NIC. The most common method of IPv6 assignment is SLAAC. If you're using SLAAC then you neither want nor need SmartOS handing out IPv6 addresses. The global and link-local addresses can be derived from the mac property of the CI's NIC. Add these to the allowed_ips property of the NIC definition and the zone definition is fully configured for IPv6 (you don't need an IPv6 gateway definition because it will be picked up automatically by router-advertisements).
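    The derivation is mechanical (modified EUI-64: flip the universal/local bit of the first octet of the MAC and insert ff:fe in the middle), so it's easy to script. A minimal sketch using the example zone's MAC; substitute your network's prefix for fe80:: to get the global address:

```shell
#!/bin/sh
# Derive the EUI-64 link-local address from a NIC's mac property
mac="72:9c:d5:34:47:59"
set -- $(echo "$mac" | tr ':' ' ')
first=$(printf '%02x' $(( 0x$1 ^ 0x02 )))   # flip the universal/local bit
printf 'fe80::%s%s:%sff:fe%s:%s%s\n' "$first" "$2" "$3" "$4" "$5" "$6"
# -> fe80::709c:d5ff:fe34:4759
```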

    Permitting IPv6 in a Zone

    Here's an example nic from a zone I have with IPv6 addresses allowed. Note that both the derived link-local and global addresses are permitted.

    [root@wasp ~]# vmadm get 94ff50ad-ac74-46ac-8b9d-c05ddf55f434 | json -a nics
    {
      "interface": "net0",
      "mac": "72:9c:d5:34:47:59",
      "nic_tag": "external",
      "gateway": "",
      "allowed_ips": [
        "fe80::709c:d5ff:fe34:4759",
        "2001:db8::709c:d5ff:fe34:4759"
      ],
      "ip": "",
      "netmask": "",
      "model": "virtio",
      "primary": true
    }

    In my workflow, I create zones with autoboot set to false, add IPv6 addresses based on the MAC assigned by vmadm, then enable autoboot and boot the zone. This is scripted of course, so it's a single atomic action.

    Enabling IPv6 in a SmartMachine Instance

    Once the zone definition has the IPv6 address(es) allowed, IPv6 needs to be enabled in the zone. Most KVM images vended by Joyent will already have IPv6 enabled (even Ubuntu Certified images in Joyent Cloud will boot with link-local IPv6 addresses, though they will be mostly useless). For SmartOS instances you will need to enable it.

    In order to enable IPv6 in a SmartOS zone you need to enable ndp and use ipadm create-addr.

    svcadm enable ndp
    ipadm create-addr -t -T addrconf net0/v6

    Instead of doing this manually I've taken the extra step and created an SMF manifest for IPv6.

    I have a user-script that downloads this from github, saves it to /opt/custom/smf/ipv6.xml and restarts manifest-import. After the import is finished, IPv6 can be enabled with svcadm. Using the -r flag enables all dependencies (i.e., ndp) as well.

    svcadm enable -r site/ipv6

    Enabling the service is also done as part of the user-script.

    If you do actually want specific static IPv6 assignment, do everything I've described above. Then, in addition, use mdata-get sdc:nics to pull the NIC definition, extract the IPv6 addresses from allowed_ips, and explicitly assign them. I admit that for those who want explicit static addresses this is less than ideal, but with a little effort it can be scripted and made completely automatic.

  • A Primer on CFEngine 3.6 Autorun

    Update: For CFEngine 3.6.2.

    CFEngine recently released version 3.6, which makes deploying and using cfengine easier than ever before. The greatest improvement in 3.6, in my opinion, is by far the autorun feature.

    I'm going to demonstrate how to get a policy server set up with autorun properly configured.

    Installing CFEngine 3.6.2

    The first step is to install the cfengine package, which I'm not going to cover. But I will say that I recommend using an existing repository. Instructions on how to set this up are here. Or you can get binary packages here. If you're not using Linux (like myself) you can get binary packages from cfengineers.net. Or for SmartOS try my repository here (IPv6 only). If you're inclined to build from source I expect that you don't need my help with that.

    Having installed the cfengine package, the first thing to do is to generate keys. The keys may have already been generated for you, but running the command again won't harm anything.

    /var/cfengine/bin/cf-key
    Setting up Masterfiles and Enabling Autorun

    Next you'll need a copy of masterfiles. If you downloaded a binary community package from cfengine.com you'll find a copy in /var/cfengine/share/CoreBase/masterfiles.

    As of 3.6 the policy files have been decoupled from the core source code distribution, so if you're getting cfengine from somewhere else it may not come with CoreBase. In this case you'll want to get a copy of the masterfiles repository at the tip of the branch for your version of CFEngine (in this case, 3.6.2), not from the master branch where the main development happens. There's already development going on for 3.7 in master, so for consistency and repeatability grab an archive of 3.6.2. Going this route you also need a copy of the cfengine core source code (although you do not need to build it).

    curl -LC - -o masterfiles-3.6.2.tar.gz https://github.com/cfengine/masterfiles/archive/3.6.2.tar.gz
    curl -LC - -o core-3.6.2.tar.gz https://github.com/cfengine/core/archive/3.6.2.tar.gz
    tar zxf masterfiles-3.6.2.tar.gz
    tar zxf core-3.6.2.tar.gz

    You'll now have the main masterfiles distribution unpacked. This isn't something that you can just copy into place; you need to run make to install it.

    cd masterfiles-3.6.2
    ./autogen.sh --with-core=../core-3.6.2
    make install INSTALL=/opt/local/bin/install datadir=/var/cfengine/masterfiles

    Note: Here I've included the path to install. This is required for SmartOS. For other systems you can probably just run make install.

    At this point it's time to bootstrap the server to itself.

    /var/cfengine/bin/cf-agent -B <host_ip_address>

    You should get a message here saying that the host has been successfully bootstrapped and a report stating 'I'm a policy hub.'

    To enable autorun, simply make the following change in def.cf.

    -      "services_autorun" expression => "!any";
    +      "services_autorun" expression => "any";

    Note: There's a bug in masterfiles-3.6.0, so make sure to use at least 3.6.2.

    Using Autorun

    With the default configuration autorun will search files in services/autorun/ for bundles with the tag autorun and execute them. At this point you can see autorun working for yourself.

    /var/cfengine/bin/cf-agent -K -f update.cf
    /var/cfengine/bin/cf-agent -Kv

    Here I've enabled verbose mode. You can see in the verbose output that autorun is working.

    Now, like Han Solo, I've made a couple of special modifications myself. I also like to leave the default files in pristine condition, as much as possible. This helps when upgrading. This is why I've made very few changes to the default policies. It also means that instead of using services/autorun.cf I'll create a new autorun entry point. This entry point is the only bundle executed by the default autorun.

    I've saved this to services/autorun/digitalelf.cf

    body file control
    {
          inputs => { @(digitalelf_autorun.inputs) };
    }

    bundle agent digitalelf_autorun
    {
      meta:
          "tags" slist => { "autorun" };

      vars:
          "inputs" slist => findfiles("$(sys.masterdir)/services/autorun/*.cf");
          "bundle" slist => bundlesmatching(".*", "digitalelf");

      methods:
          "$(bundle)"
              usebundle => "$(bundle)",
              ifvarclass => "$(bundle)";

      reports:
        inform_mode::
          "digitalelf autorun is executing";
          "$(this.bundle): found bundle $(bundle) with tag 'digitalelf'";
    }

    This works exactly the same as autorun.cf, except that it looks for bundles matching digitalelf and only runs them if the bundle name matches a defined class. Also note that enabling inform_mode (i.e., cf-agent -I) will report which bundles have been discovered for automatic execution.

    For example I have the following services/autorun/any.cf.

    bundle agent any {
      meta:
          # You must uncomment this line to enable autorun.
          "tags" slist => { "digitalelf" };

      vars:
          # class guards reconstructed; adjust for your platforms
        !smartos::
          "local_bin_dir" string => "/usr/local/bin/";
        smartos::
          "local_bin_dir" string => "/opt/local/bin/";

      files:
          # promiser path inferred from the handle below
          "/etc/motd"
              edit_line => insert_lines("Note: This host is managed by CFEngine."),
              handle => "declare_cfengine_in_motd",
              comment => "Make sure people know this host is managed by cfengine";

      reports:
        inform_mode::
          "Bundle $(this.bundle) is running via autorun.";
    }

    Since the tag is digitalelf it will be picked up by services/autorun/digitalelf.cf, and because the bundle name is any, it will match the class any in the methods promise, and therefore run. Again, enabling inform_mode (cf-agent -I) will report that this bundle is in fact being triggered.

    You can drop in bundles that match any existing hard class and it will automatically run. Want all linux or all debian hosts to have a particular configuration? There's a bundle for that.

    Extending Autorun

    You may already be familiar with my cfengine layout for dynamic bundlesequence and bundle layering. My existing dynamic bundlesequence is largely obsolete with autorun, but I still extensively use bundle stack layering. I've incorporated the classifications from bundle common classify directly into the classes: promises of services/autorun/digitalelf.cf. I can trigger bundles by discovered hard classes or with any user defined class created in bundle agent digitalelf_autorun. By using autorun bundles based on defined classes you can define classes from any source: hostname (like I do), LDAP, DNS, the filesystem, network API calls, etc.

  • Using 2048-bit DSA Keys With OpenSSH

    There's a long running debate about which is better for SSH public key authentication, RSA or DSA keys. With "better" in this context meaning "harder to crack/spoof" the identity of the user. This generally comes down in favor of RSA because ssh-keygen can create RSA keys up to 2048 bits while DSA keys it creates must be exactly 1024 bits.

    Here's how to use openssl to create 2048-bit DSA keys that can be used with OpenSSH.

    (umask 077 ; openssl dsaparam -genkey 2048 | openssl dsa -out ~/.ssh/id_dsa)
    ssh-keygen -y -f ~/.ssh/id_dsa > ~/.ssh/id_dsa.pub

    After this, add the contents of id_dsa.pub to ~/.ssh/authorized_keys on remote hosts and remove your RSA keys (if any). I'm not recommending either RSA or DSA keys. You need to make that choice yourself. But key length is no longer an issue. We can now go back to having this debate on the merit of math.

  • How the NSA is breaking SSL

    This isn't a leak. I don't have any direct knowledge. But I have been around the block a few times. It's now widely known that the NSA is breaking most encryption on the Internet. What's not known is how.

    We also know that the Flame malware was signed by a rogue Microsoft certificate. That rogue Microsoft certificate was hashed with MD5, which is what allowed it to be impersonated.

    On my Ubuntu box I just ran an analysis of the Root CA certificates (from the ca-certificates package which itself comes from Mozilla). This certificate list is widely used by third-party programs as an authoritative list. But other distributors (e.g., Google, Apple, Microsoft) have a substantially similar list due to the need for SSL to work in all browsers. If any one vendor shipped a substantially different list then end users would merely perceive that browser as being broken and not use it.
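    If you want to reproduce a tally like this yourself, something along these lines works (a sketch; the default path is the Debian/Ubuntu ca-certificates layout, so pass a different directory on other systems):

```shell
#!/bin/sh
# Tally certificates in a directory by their signature algorithm
tally_sigalgs() {
    for c in "$1"/*.crt "$1"/*.pem; do
        [ -f "$c" ] || continue
        openssl x509 -in "$c" -noout -text 2>/dev/null |
            awk '/Signature Algorithm/ { print $3; exit }'
    done | sort | uniq -c | sort -rn
}
tally_sigalgs "${1:-/usr/share/ca-certificates/mozilla}"
```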

    Back to my analysis. Mozilla includes 20 Root CA certificates that use MD5 and 2 that use MD2. This is frightening. We already know that a Microsoft certificate with MD5 was used to distribute the Flame malware and it is all but proven that Flame was created and distributed by the U.S. government.

    The situation is clear. The NSA is in possession of one or more Root CA keys. It is only prudent to expect that the NSA has spoofed copies of all 22 CAs that use MD5 or MD2. It is also possible that they have exact copies (i.e., true keys, not spoofed) of other major U.S. based certificate authorities (I shudder to think of a world where a national security letter requests a Root CA key as being relevant to an investigation).

    The NSA would then use these keys to spoof SSL certificates in real time, creating Subjects identical to the target web site, becoming a completely invisible man-in-the-middle. This method would be impossible to detect for all but the most skilled users.

    Edit: Turns out I was right on the money.
    Edit April 2014: Heartbleed notwithstanding, I still firmly believe the NSA is actively executing MITM attacks using genuine or spoofed Root CA keys. Why let an IDS fingerprint you when you can engage in active and undetectable surveillance?