NixOS in the Cloud, step-by-step: part 2
Jan 11, 2021
In the previous article, we deployed a basic cloud server running NixOS on DigitalOcean and set up a declarative deployment definition for it using Morph.
In this part, we take it a step further. We will use both Morph and Terraform to spin up a network of 3 web servers sitting behind 2 load balancers, all managed by declarative configuration. We will also learn an approach that feeds Terraform state into the Morph network definition, making Morph automatically aware of changes to your infrastructure.
For the uninitiated, Terraform is a popular infrastructure-as-code tool that has become a de facto standard across the industry. We will use it to spin up multiple servers on DigitalOcean without having to point and click in the dashboard.
Preparation
Although we are starting from a blank state, I assume you have read the initial article of the series. You will need to utilize the knowledge you've already gained, as well as the resources created in your DigitalOcean account: the NixOS disk image and your uploaded SSH public key.
Let's create a directory and enter it:
$ mkdir part2 && cd part2
nix-shell
Then, let's create a shell.nix file with the tools we will need:
{ pkgs ? import <nixpkgs> { } }:
let
  myTerraform = pkgs.terraform.withPlugins (tp: [ tp.digitalocean ]);
  ter = pkgs.writeShellScriptBin "ter" ''
    terraform "$@" && terraform show -json > show.json
  '';
in
pkgs.mkShell {
  buildInputs = with pkgs; [ curl jq morph myTerraform ter ];
}
Notably, besides Terraform, Morph, and basic command line utilities, we add ter, a wrapper for the terraform command that also dumps the state from terraform show -json into the current directory. This will come in handy soon.
Enter the shell and test that all commands work:
$ nix-shell
$ curl --version | head -n 1
curl 7.72.0 (x86_64-pc-linux-gnu) libcurl/7.72.0 OpenSSL/1.1.1i zlib/1.2.11 libssh2/1.9.0 nghttp2/1.41.0
$ jq --version
jq-1.6
$ morph --version
1.5.0
$ terraform version | head -n 1
Terraform v0.14.3
$ ter version | head -n 1
Terraform v0.14.3
DigitalOcean API token
To use DigitalOcean programmatically, create a personal access token in the DigitalOcean control panel. Keep it in a safe place (e.g. your password manager), as it allows full access to your droplets and other account data.
Once the token is created, let's safely set it in our shell session and verify it:
$ read -s DIGITALOCEAN_TOKEN
<...enter token here and press Enter...>
$ export DIGITALOCEAN_TOKEN
$ echo "$DIGITALOCEAN_TOKEN"
7fba5cca53d8b6f9565037804a503029451495fac6708392cd6d3646f2e8fe8f
When using read -s, the token will not be echoed (i.e. shown on the screen, not even as asterisks), similar to how entering a password for sudo works.
DigitalOcean resource IDs
We need to use two existing resources on our DigitalOcean account: our SSH public key and the NixOS custom image. While we could import them to our Terraform state, or even use a tool like Terraformer to do it for us, let's stick with a manual approach for brevity.
Use curl and jq to query the DigitalOcean API and find out the IDs of the custom image and our SSH key respectively:
$ curl -s -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
'https://api.digitalocean.com/v2/images?private=true' \
| jq '.images[] | select(.name == "nixos.qcow2.gz") | .id'
75674995
$ curl -s -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
'https://api.digitalocean.com/v2/account/keys' \
| jq '.ssh_keys[0].id'
27010799
The second command assumes you only have one SSH public key set up in your DigitalOcean account. If that is not the case, you might need to inspect the full response from the API and pick the right key, for example by matching on its name field.
Creating our droplets
We are now ready to make a basic resource definition.
Let's create a main.tf
file:
provider "digitalocean" {}
resource "digitalocean_droplet" "backend" {
name = "backend${count.index + 1}"
region = "ams3"
size = "s-1vcpu-1gb"
image = 75674995
ssh_keys = [27010799]
count = 2
}
resource "digitalocean_droplet" "loadbalancer" {
name = "loadbalancer${count.index + 1}"
region = "ams3"
size = "s-1vcpu-1gb"
image = 75674995
ssh_keys = [27010799]
count = 2
}
This file defines the digitalocean provider and two digitalocean_droplet resources. Both the backend and loadbalancer resources will produce 2 servers each, with dynamically generated names: backend1, backend2, loadbalancer1, and loadbalancer2.
Neither DigitalOcean nor Terraform seems to enforce unique names for droplets, but unique names are a requirement for what we will do later, and a good idea in general.
The definition also utilizes the previously-acquired identifiers to specify that the droplet has to be spawned with our NixOS custom image and include our public SSH key for authentication.
The region and size keys are also required. If you do not live in Europe, you might be better off choosing a region close to you. We will not use any special features of particular regions, but we will use private networking, so the droplets all have to be in the same region. Examples of the possible values for size can be found on a dedicated webpage.
We also need to create a versions.tf file to define the version and full path for the digitalocean provider:
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "2.2.0"
    }
  }
}
Let's apply our changes, using the ter wrapper to call Terraform:
$ ter init # only required the first time, or after new modules or providers are added
$ ter apply
<...confirm this with 'yes'...>
In my experience, launching new droplets took about 30 to 40 seconds.
Note: be really careful and always read what ter apply is planning to do. With the great power of Terraform comes great potential for destruction. If, for example, you delete a resource definition from main.tf, or reduce the count of machines, the "obsolete" machines will get destroyed. Terraform will always ask you to confirm the changes, but you have to actually read through them!
By this point, you should have 4 cloud servers launched.
We can query the current state via commands like terraform show (or ter show). Feel free to play around and see what information Terraform provides.
$ ter show | head -n 10
# digitalocean_droplet.backend[0]:
resource "digitalocean_droplet" "backend" {
    backups              = false
    created_at           = "2020-12-29T19:48:34Z"
    disk                 = 25
    id                   = "223901656"
    image                = "75674995"
    ipv4_address         = "198.51.100.41"
    ipv4_address_private = "10.110.0.13"
    ipv6                 = false
Terraform allows us not only to create new servers, but also to modify the existing infrastructure, e.g. to increase the number of instances for a specific resource. Let's add one more backend server:
diff --git a/part2/main.tf b/part2/main.tf
index 3957178..1b9a65d 100644
--- a/part2/main.tf
+++ b/part2/main.tf
@@ -7,7 +7,7 @@ resource "digitalocean_droplet" "backend" {
   image    = 75674995
   ssh_keys = [27010799]
-  count    = 2
+  count    = 3
 }
Apply the changes once again:
$ ter apply
<...confirm this with 'yes'...>
If you run terraform show again, it should now show 5 machines in total. Terraform can also provide a JSON-based output of your state when passed the -json flag. Let's try that feature out and inspect our current infrastructure:
$ terraform show -json \
| jq '.values.root_module.resources[] | {name: .values.name, ip: .values.ipv4_address}'
{
  "name": "backend1",
  "ip": "198.51.100.41"
}
{
  "name": "backend2",
  "ip": "198.51.100.36"
}
{
  "name": "backend3",
  "ip": "203.0.113.74"
}
{
  "name": "loadbalancer1",
  "ip": "198.51.100.42"
}
{
  "name": "loadbalancer2",
  "ip": "203.0.113.122"
}
5 servers at our whim. I would call that a success!
As Morph uses SSH to deploy, it is subject to SSH's trust-on-first-use model. So, when we deploy our network for the first time, it will ask us to confirm the fingerprint of each machine. For our own convenience, let's fetch the SSH host keys for our servers in advance, so we can avoid the manual intervention later and have the deployment proceed automatically.
$ terraform show -json \
| jq -r '.values.root_module.resources[].values.ipv4_address' \
| xargs ssh-keyscan >> ~/.ssh/known_hosts
Using Terraform state from Nix
Terraform state is now the source of truth about the servers we have created. We would like to use the information provided by terraform show -json in our network definition and automatically deploy the relevant configuration to all of the servers that share a role.
By now, we should have a file called show.json in the current directory. It has been created by the ter wrapper we defined, and updated after each operation, such as ter apply. This file will help us greatly.
Nix has a builtin function called builtins.fromJSON, which can deserialize arbitrary JSON data into Nix values. Nix is also a proper functional programming language, which allows us to do the necessary manipulations with that data. Put all of this together, and we can read show.json, deserialize it, and transform it into a useful data structure which we can use in our network definition.
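As a quick illustration of builtins.fromJSON on its own (a minimal sketch with made-up data, not our real state):
builtins.fromJSON ''{ "name": "backend1", "ipv4_address": "198.51.100.41" }''
# => { ipv4_address = "198.51.100.41"; name = "backend1"; }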
To achieve this, I made a utility file called parsetf.nix:
{ pkgs ? import <nixpkgs> { }
, lib ? pkgs.lib
}:
let
  resourcesInModule = type: module:
    builtins.filter (r: r.type == type) module.resources ++
    lib.flatten (map (resourcesInModule type) (module.child_modules or [ ]));
  resourcesByType = type: resourcesInModule type payload.values.root_module;
  payload = builtins.fromJSON (builtins.readFile ./show.json);
in
{
  inherit resourcesByType;
}
It defines a function named resourcesByType. When this function is called like resourcesByType "digitalocean_droplet", it will return all the droplets we have set up with Terraform. It intentionally operates on a single resource type: as time goes on, you might also provision other resources on DigitalOcean, for example block storage or floating IPs, and you will be able to use the information about these resources as well, simply by changing the argument to this function. resourcesByType will even recurse into child modules, which will come in handy once you start using that feature.
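For example, if you later define block storage volumes in Terraform, the same helper works unchanged. A hypothetical sketch (there are no digitalocean_volume resources in our current setup):
volumes = resourcesByType "digitalocean_volume";
volumeNames = map (v: v.values.name) volumes;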
Let's see how it works by experimenting in the Nix REPL:
$ nix repl
Welcome to Nix version 2.3.9. Type :? for help.
nix-repl> :l parsetf.nix
Added 1 variables.
nix-repl> droplets = resourcesByType "digitalocean_droplet"
nix-repl> droplets
[ { ... } { ... } { ... } { ... } { ... } ]
nix-repl> (builtins.head droplets).name
"backend"
nix-repl> (builtins.head droplets).values.name
"backend1"
nix-repl> (builtins.head droplets).values.ipv4_address
"198.51.100.41"
First, we loaded everything that parsetf.nix defines into the global namespace. In our case, that's a single function called resourcesByType. When we call it with the proper resource type, we see that it returns a list of five objects, as we currently have 5 servers. Once we get the first element of the list via builtins.head, we can evaluate any of its properties.
Note that .name will always return the name we specified for the Terraform resource, which is the same for all the instances that this resource defines. Meanwhile, .values.name returns the name property defined inside the resource. This is the property that determines the droplet's actual hostname; as we have made it unique for each droplet, we will be able to use it to uniquely identify them.
Let's continue our REPL session:
nix-repl> balancers = (builtins.filter (d: d.name == "loadbalancer") droplets)
nix-repl> map (b: b.values.ipv4_address) balancers
[ "198.51.100.42" "203.0.113.122" ]
nix-repl> firstBalancer = builtins.head (builtins.filter (d: d.values.name == "loadbalancer1") balancers)
nix-repl> firstBalancer.values.name
"loadbalancer1"
nix-repl> firstBalancer.values.ipv4_address
"198.51.100.42"
As you can see, it is now possible to extract the needed information both for a single machine and for a class of machines. All we need are the map and filter operations, familiar from most other programming languages.
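For instance, combining them with builtins.listToAttrs (which we will meet again shortly) gives a hostname-to-IP mapping for every droplet in one line; a small sketch reusing the droplets list from the session above:
builtins.listToAttrs (map (d: { name = d.values.name; value = d.values.ipv4_address; }) droplets)
# => { backend1 = "198.51.100.41"; backend2 = "198.51.100.36"; backend3 = "203.0.113.74"; ... }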
Some might ask: why not just read the terraform.tfstate file directly? For one, there will be no such file if you are using a remote state backend, such as Terraform Cloud, which is a no-brainer when working in a team. The official documentation also says this:
State snapshots are stored in JSON format and new Terraform versions are generally backward compatible with state snapshots produced by earlier versions. However, the state format is subject to change in new Terraform versions, so if you build software that parses or modifies it directly you should expect to perform ongoing maintenance of that software as the state format evolves in new versions.
The output of terraform show -json, meanwhile, has a detailed specification.
Well, what about calling terraform show -json directly from Nix? That is not possible without jumping through some hoops, and that is possibly for the better. As I do not think it would be a good idea to let my expressions run arbitrary commands, I've gone with the wrapper approach. So far it has served me well: you just have to remember to always use ter rather than terraform for operations that might modify the state of your infrastructure.
Marrying Terraform and Morph
Now that we have the ability to use information about resources managed by Terraform in our Nix expressions, let's make a minimal network.nix:
# 1.
let
  resourcesByType = (import ./parsetf.nix { }).resourcesByType;
  droplets = resourcesByType "digitalocean_droplet";
  backends = builtins.filter (d: d.name == "backend") droplets;
  loadbalancers = builtins.filter (d: d.name == "loadbalancer") droplets;

  # 2.
  mkBackend = resource: { modulesPath, lib, name, ... }: {
    imports = lib.optional (builtins.pathExists ./do-userdata.nix) ./do-userdata.nix ++ [
      (modulesPath + "/virtualisation/digital-ocean-config.nix")
    ];
    deployment.targetHost = resource.values.ipv4_address;
    deployment.targetUser = "root";
    networking.hostName = resource.values.name;
    system.stateVersion = "21.11";
  };

  mkLoadBalancer = resource: { modulesPath, lib, name, ... }: {
    imports = lib.optional (builtins.pathExists ./do-userdata.nix) ./do-userdata.nix ++ [
      (modulesPath + "/virtualisation/digital-ocean-config.nix")
    ];
    deployment.targetHost = resource.values.ipv4_address;
    deployment.targetUser = "root";
    networking.hostName = resource.values.name;
    system.stateVersion = "21.11";
  };
in
# 3.
{
  network = {
    pkgs = import
      (builtins.fetchGit {
        name = "nixos-21.11-2021-12-19";
        url = "https://github.com/NixOS/nixpkgs";
        ref = "refs/heads/nixos-21.11";
        rev = "e6377ff35544226392b49fa2cf05590f9f0c4b43";
      })
      { };
  };
} // # 5.
# 4.
builtins.listToAttrs (map (r: { name = r.values.name; value = mkBackend r; }) backends) // # 5.
builtins.listToAttrs (map (r: { name = r.values.name; value = mkLoadBalancer r; }) loadbalancers)
This is quite a bit beefier than what we started with in the first article, so let's go over the functionality step-by-step:
(1.) We import parsetf.nix and use the resourcesByType function it defines to get a list of droplets in the Terraform state. We split it into two lists: one for the backends and another for the load balancers.
(2.) Then, we predefine two functions: mkBackend and mkLoadBalancer. They both take a single argument - the Terraform resource object - and return yet another function, which generates the NixOS configuration for a concrete machine. This way, we can generate similar configurations for all of the droplets that share the same role (backend or loadbalancer), while pulling the specifics, like IP address and hostname, from the resource parameter of the outer function. So far, these two configurations are identical, but we will soon introduce configuration specific to backends and load balancers.
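To make the shape concrete: applying such a function to a resource yields an ordinary NixOS module. A stripped-down sketch (mkHost is a hypothetical name, not part of the actual network.nix):
mkHost = resource: { name, ... }: {
  deployment.targetHost = resource.values.ipv4_address;
  networking.hostName = resource.values.name;
};
# mkHost someDroplet is then a module function that Morph can use as a host definition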
(3.) We introduce an attribute set that defines our network. Just like in part 1, we use the special network attribute to pin the version of nixpkgs that will be used across our machines.
(4.) Then, we use the lists of our resources - backends and loadbalancers. We iterate over the lists using map, and for each resource produce an attribute set of two attributes in the following form:
{
  name = "balancer1";
  value = { modulesPath, lib, name, ... }: { /* configuration options */ };
}
That is, name points to the hostname of the machine, while value points to the function that will generate the config for this specific machine. We also use builtins.listToAttrs. This function will take a list of attrsets with name-value pairs like this:
[
  {
    name = "balancer1";
    value = { modulesPath, lib, name, ... }: { /* configuration options */ };
  }
  {
    name = "balancer2";
    value = { modulesPath, lib, name, ... }: { /* configuration options */ };
  }
]
and turn it into a single attrset like
{
  balancer1 = { modulesPath, lib, name, ... }: { /* configuration options */ };
  balancer2 = { modulesPath, lib, name, ... }: { /* configuration options */ };
}
That's exactly the format that Morph expects!
(5.) By this point, we have several attrsets - one with the network key, another with a key for each of our backends and their respective configuration functions, and a similar attrset for the load balancers. Using the // operator, we merge these attrsets together.
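As a reminder, // performs a shallow merge of two attribute sets, with the right-hand side winning on conflicts:
{ a = 1; b = 2; } // { b = 3; c = 4; }
# => { a = 1; b = 3; c = 4; }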
If we could inspect the final value this expression produces, it would look something like this:
{
  network = {
    pkgs = import
      (builtins.fetchGit {
        name = "nixos-21.11-2021-12-19";
        url = "https://github.com/NixOS/nixpkgs";
        ref = "refs/heads/nixos-21.11";
        rev = "e6377ff35544226392b49fa2cf05590f9f0c4b43";
      })
      { };
  };
  backend1 = { modulesPath, lib, name, ... }: { /* configuration options */ };
  backend2 = { modulesPath, lib, name, ... }: { /* configuration options */ };
  backend3 = { modulesPath, lib, name, ... }: { /* configuration options */ };
  loadbalancer1 = { modulesPath, lib, name, ... }: { /* configuration options */ };
  loadbalancer2 = { modulesPath, lib, name, ... }: { /* configuration options */ };
}
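If you want to sanity-check this shape before deploying, you can list the attribute names in a Nix REPL from the project directory; it should yield one entry per host plus network (a quick sketch):
nix-repl> builtins.attrNames (import ./network.nix)
[ "backend1" "backend2" "backend3" "loadbalancer1" "loadbalancer2" "network" ]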
Now, let's try to deploy the network using Morph:
$ morph deploy network.nix switch
Selected 5/5 hosts (name filter:-0, limits:-0):
0: backend1 (secrets: 0, health checks: 0)
1: backend2 (secrets: 0, health checks: 0)
2: backend3 (secrets: 0, health checks: 0)
3: loadbalancer1 (secrets: 0, health checks: 0)
4: loadbalancer2 (secrets: 0, health checks: 0)
<...snip...>
** backend1
updating GRUB 2 menu...
activating the configuration...
setting up /etc...
reloading user units for root...
setting up tmpfiles
Running healthchecks on backend1 (198.51.100.41):
Health checks OK
Done: backend1
<...snip...>
Executing 'switch' on matched hosts:
** loadbalancer2
updating GRUB 2 menu...
activating the configuration...
setting up /etc...
reloading user units for root...
setting up tmpfiles
Running healthchecks on loadbalancer2 (203.0.113.122):
Health checks OK
Done: loadbalancer2
Morph deploys to all five hosts, one by one.
Deploying services to the network
Now that we're done with the Terraform <-> Nix boilerplate, we can start deploying nginx on our droplets. Let's configure nginx on the backends:
diff --git a/part2/network.nix b/part2/network.nix
index 6198eb1..9694bfb 100644
--- a/part2/network.nix
+++ b/part2/network.nix
@@ -13,6 +13,15 @@ let
     deployment.targetUser = "root";
     networking.hostName = resource.values.name;
     system.stateVersion = "21.11";
+
+    networking.firewall.allowedTCPPorts = [ 80 ];
+    services.nginx = {
+      enable = true;
+      virtualHosts.default = {
+        default = true;
+        locations."/".return = "200 \"Hello from ${name} at ${resource.values.ipv4_address}\"";
+      };
+    };
   };
   mkLoadBalancer = resource: { modulesPath, lib, name, ... }: {
Follow this up with a deployment, and check that nginx is working on all of the backends:
$ morph deploy network.nix switch
<...snip...>
$ curl 198.51.100.41
Hello from backend1 at 198.51.100.41
$ curl 198.51.100.36
Hello from backend2 at 198.51.100.36
$ curl 203.0.113.74
Hello from backend3 at 203.0.113.74
Now, let's configure the load balancer machines:
diff --git a/part2/network.nix b/part2/network.nix
index 9694bfb..5fa3925 100644
--- a/part2/network.nix
+++ b/part2/network.nix
@@ -32,8 +32,19 @@ let
     deployment.targetUser = "root";
     networking.hostName = resource.values.name;
     system.stateVersion = "21.11";
-  };
+    networking.firewall.allowedTCPPorts = [ 80 ];
+    services.nginx = {
+      enable = true;
+      upstreams.backend.servers = builtins.listToAttrs
+        (map (r: { name = r.values.ipv4_address_private; value = { }; })
+          backends);
+      virtualHosts.default = {
+        default = true;
+        locations."/".proxyPass = "http://backend";
+      };
+    };
+  };
 in
 {
   network = {
Here, we also configure nginx. However, instead of returning a static message, we define an upstream called backend. The upstreams.<name>.servers option expects an attribute set where the names are the server addresses and the values are options for the specific server (empty in our case). We use the ipv4_address_private attribute from the Terraform resources: by default, DigitalOcean puts all of the droplets created in a single region on their own internal network. We once again use map to traverse the list of backend resources and then turn it into an attribute set using listToAttrs. In the end, it results in this:
upstreams.backend.servers = {
  "198.51.100.41" = {};
  "198.51.100.36" = {};
  "203.0.113.74" = {};
};
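As an aside, the same attribute set could also be built with lib.genAttrs, which maps a list of names to an attrset; purely a matter of taste (a sketch, relying on lib from the module arguments):
upstreams.backend.servers = lib.genAttrs (map (r: r.values.ipv4_address_private) backends) (_: { });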
We then also define the default virtual host, which proxies all requests to the upstream. The address of http://backend indicates that nginx will pass the requests to one of the servers defined in the upstream called backend.
Let's deploy and try querying one of the load balancers:
$ morph deploy network.nix switch
<...snip...>
$ curl 198.51.100.42
Hello from backend3 at 203.0.113.74
$ curl 198.51.100.42
Hello from backend2 at 198.51.100.36
$ curl 198.51.100.42
Hello from backend1 at 198.51.100.41
$ curl 198.51.100.42
Hello from backend3 at 203.0.113.74
Our backends take turns serving the requests. Just what we wanted.
Now that the load balancers proxy the requests to the backends, there is no need to keep port 80 on the backends open to the entire world. We can easily change the configuration to only allow connections to this port on the internal network interface.
diff --git a/part2/network.nix b/part2/network.nix
index 5fa3925..d3f5cca 100644
--- a/part2/network.nix
+++ b/part2/network.nix
@@ -12,11 +12,11 @@ let
     deployment.targetHost = resource.values.ipv4_address;
     deployment.targetUser = "root";
     networking.hostName = resource.values.name;
     system.stateVersion = "21.11";
-    networking.firewall.allowedTCPPorts = [ 80 ];
+    networking.firewall.interfaces.ens4.allowedTCPPorts = [ 80 ];
     services.nginx = {
       enable = true;
       virtualHosts.default = {
         default = true;
         locations."/".return = "200 \"Hello from ${name} at ${resource.values.ipv4_address}\"";
DigitalOcean seems to always put the external network on interface ens3, while the internal interface is named ens4, but do check on your own just to be sure!
Once again, let's redeploy and see that our cluster still works after the changes:
$ morph deploy network.nix switch
<...snip...>
$ curl 198.51.100.42
Hello from backend2 at 198.51.100.36
By this point, we have a network of NixOS machines, managed by Terraform, with Morph using the Terraform state to deploy the relevant configuration to each machine according to its role.
Finishing touches
To cut down on the noise, we can also move the common properties into a separate NixOS module. Create a file named common.nix:
{ modulesPath, lib, ... }:
{
  imports = lib.optional (builtins.pathExists ./do-userdata.nix) ./do-userdata.nix ++ [
    (modulesPath + "/virtualisation/digital-ocean-config.nix")
  ];
  deployment.targetUser = "root";
  system.stateVersion = "21.11";
}
Then, remove these properties from the individual machine definitions and import the newly created module instead:
diff --git a/part2/network.nix b/part2/network.nix
index d3f5cca..1dc7801 100644
--- a/part2/network.nix
+++ b/part2/network.nix
@@ -5,14 +5,11 @@ let
   backends = builtins.filter (d: d.name == "backend") droplets;
   loadbalancers = builtins.filter (d: d.name == "loadbalancer") droplets;
-  mkBackend = resource: { modulesPath, lib, name, ... }: {
-    imports = lib.optional (builtins.pathExists ./do-userdata.nix) ./do-userdata.nix ++ [
-      (modulesPath + "/virtualisation/digital-ocean-config.nix")
-    ];
+  mkBackend = resource: { name, ... }: {
+    imports = [ ./common.nix ];
+
     deployment.targetHost = resource.values.ipv4_address;
-    deployment.targetUser = "root";
     networking.hostName = resource.values.name;
-    system.stateVersion = "21.11";
     networking.firewall.interfaces.ens4.allowedTCPPorts = [ 80 ];
     services.nginx = {
@@ -24,14 +21,11 @@ let
     };
   };
-  mkLoadBalancer = resource: { modulesPath, lib, name, ... }: {
-    imports = lib.optional (builtins.pathExists ./do-userdata.nix) ./do-userdata.nix ++ [
-      (modulesPath + "/virtualisation/digital-ocean-config.nix")
-    ];
+  mkLoadBalancer = resource: { name, ... }: {
+    imports = [ ./common.nix ];
+
     deployment.targetHost = resource.values.ipv4_address;
-    deployment.targetUser = "root";
     networking.hostName = resource.values.name;
-    system.stateVersion = "21.11";
     networking.firewall.allowedTCPPorts = [ 80 ];
     services.nginx = {
Again, you can find the full code in a GitHub repository.
That's all (for now), folks
Having done all that, we have a network of cloud servers on DigitalOcean, which we can both bootstrap and deploy services to, all according to configuration written in Terraform and Nix.
These two posts cover most of the tools and workflows I use to manage my personal infrastructure. While there are individual nuggets of Nix knowledge I have not shared yet, I do not think these warrant a full blog article right now.
One area that would be interesting to see explored is writing Terraform definitions using Nix. As Terraform supports pure JSON as an alternative to its own configuration language, and Nix can both parse and generate JSON, this does not seem impossible. However, I do not immediately see any inherent value or ergonomic improvements in this. Perhaps that shall remain a challenge for another time (or person).
With that said, I hope you learned something. Good luck on your NixOS adventures!
Alternative approaches
There are alternative approaches to managing your infrastructure and deployments using Nix.
For one, NixOps 1.7 is able to spawn DigitalOcean machines directly. However, as mentioned in the first part, NixOps is a moving target. For version 2.0, the external providers are being moved to their own repositories. There are also significant limitations, e.g.:
Note that we rely on a ssh key resource with the hard-coded name ssh-key. Providing your own key is not supported yet.
Using the DigitalOcean provider with Terraform will give you far more flexibility, allowing you to manage not only your machines, but domain records, attached storage, and other sorts of resources as well.
There is also a dedicated Terraform module for deploying NixOS machines, as described by a tutorial on nix.dev. To be completely honest, I was not aware of this until I had mostly written part 1, and have not tried it for myself. For my needs, a setup where each tool has its own set of responsibilities - Terraform for infrastructure management and Morph or NixOps for configuration management - works well.