For complete methodology, refer to Rafter et al. (2019). In summary:
Data Compilation: Nitrate d15N observations were compiled from studies dating from 1975 to 2018. This global ocean nitrate d15N database was interpolated using an ensemble of artificial neural networks (EANNs). For the compiled observed global ocean nitrate d15N data, see the related dataset: https://www.bco-dmo.org/dataset/768627
Building the neural network model: We use an ensemble of artificial neural networks (EANNs) to interpolate our global ocean nitrate d15N database, producing complete 3D maps of the data. An artificial neural network (ANN) is a machine learning approach that effectively identifies nonlinear relationships between a target variable (the isotopic dataset) and a set of input features (other available ocean datasets), allowing us to fill gaps in our sampling coverage of nitrate d15N.
Binning target variables (Step 1): We binned the nitrate d15N observations to the World Ocean Atlas 2009 (WOA09) grid, which has a 1-degree spatial resolution and 33 vertical depth layers (0-5500 m). When binning vertically, we use the depth layer whose value is closest to the observation's sampling depth (e.g., the first depth layer has a value of 0 m, the second 10 m, and the third 20 m, so nitrate isotopic data sampled between 0 and 5 m fall in the 0 m bin, data sampled between 5 and 15 m fall in the 10 m bin, and so on). An observation with a sampling depth that lies exactly at the midpoint between two depth layers is binned to the shallower layer. If more than one raw data point falls in a grid cell, we take the average of those points as the value for that cell. Certain whole ship tracks of nitrate d15N data were withheld from binning to be used as an independent validation set (see Step 4).
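As an illustration, a minimal Python sketch of this binning rule (the 33 depth levels follow the standard WOA09 grid; the variable names, the dictionary-based accumulation, and the simple floor-based 1-degree horizontal binning are our own illustrative choices, not the authors' code):

import numpy as np

# Standard 33 WOA09 depth levels (m)
woa09_depths = np.array([0, 10, 20, 30, 50, 75, 100, 125, 150, 200, 250, 300,
                         400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300,
                         1400, 1500, 1750, 2000, 2500, 3000, 3500, 4000, 4500,
                         5000, 5500], dtype=float)

def depth_layer_index(sample_depth):
    """Index of the WOA09 depth layer closest to sample_depth.
    A depth exactly midway between two layers goes to the shallower one
    (np.argmin returns the first, i.e. shallower, index on ties)."""
    return int(np.argmin(np.abs(woa09_depths - sample_depth)))

def bin_observations(lats, lons, depths, d15n):
    """Average all d15N observations falling in the same 1-degree x 1-degree
    x depth-layer cell. Returns {(ilat, ilon, idepth): mean d15N}."""
    sums, counts = {}, {}
    for lat, lon, z, val in zip(lats, lons, depths, d15n):
        key = (int(np.floor(lat)), int(np.floor(lon)), depth_layer_index(z))
        sums[key] = sums.get(key, 0.0) + val
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}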
Obtaining input features (Step 2): Our input dataset contains a set of climatological values for physical and biogeochemical ocean parameters that form a nonlinear relationship with the target data. We use six input features: objectively analyzed annual-mean fields of temperature, salinity, nitrate, oxygen, and phosphate taken from WOA09 (https://www.nodc.noaa.gov/OC5/WOA09/woa09data.html) at 1-degree resolution, plus chlorophyll. Daily chlorophyll data from MODIS Aqua for the period 1 January 2003 through 31 December 2012 are averaged and binned to the WOA09 grid (as described in Step 1) to produce an annual climatological field of chlorophyll values, which we then log-transform to reduce its dynamic range.
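A minimal sketch of how the six input features might be assembled into a feature matrix (the log base is not stated in the text, so log10 is an assumption; the function name and array layout are illustrative):

import numpy as np

def assemble_features(temperature, salinity, nitrate, oxygen, phosphate, chlorophyll):
    """Stack the six climatological fields into an (n_samples, 6) feature matrix.
    Each argument is a 1-D array of values at the grid cells with binned d15N.
    Chlorophyll is log-transformed to reduce its dynamic range (base 10 assumed)."""
    log_chl = np.log10(chlorophyll)
    return np.column_stack([temperature, salinity, nitrate,
                            oxygen, phosphate, log_chl])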
The choice of these specific input features was dictated by our desire to achieve the best possible R² value on our internal validation sets (Step 4). Additional inputs beyond those we included, such as latitude, longitude, silicate, euphotic depth, or sampling depth, either did not improve the R² value on the validation dataset or degraded it, indicating that they are not essential parameters for characterizing this system globally. By opting for the set of input features that yielded the best results for the global ocean, we may have overlooked combinations of inputs that perform better at regional scales. However, given the scarcity of d15N data in some regions, it is not possible to disentangle the effect of a specific combination of input features from the effect of the available d15N data (which may not be representative of the region's climatological state) on relative model performance in those regions.
Training the ANN (Step 3): The architecture of our ANN consists of a single hidden layer, containing 25 nodes, that connects the biological and physical input features (Step 2) to the target nitrate isotopic variable (Step 1). The hidden layer transforms the input features into new features, one per node, introducing nonlinearity via an activation function; these are then passed to the output layer to estimate the target variable. The number of nodes in this hidden layer, together with the number of input features, determines the number of adjustable weights (the free parameters) in the network. For complete information, refer to Rafter et al. (2019).
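For concreteness, a hedged sketch of this architecture using scikit-learn (only the single hidden layer with 25 nodes comes from the text; the choice of library, activation function, and solver are illustrative assumptions):

from sklearn.neural_network import MLPRegressor

# One hidden layer of 25 nodes mapping the six input features to nitrate d15N.
# Activation and solver are placeholder choices, not the authors' configuration.
ann = MLPRegressor(hidden_layer_sizes=(25,),
                   activation="tanh",
                   solver="lbfgs",
                   max_iter=2000)
# ann.fit(X_train, y_train)   # X_train: (n_samples, 6) features; y_train: binned d15N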
Validating the ANN (Step 4): To ensure good generalization of the trained ANN, we randomly withhold 10% of the d15N data as an internal validation set for each network. These data are never seen by the network during training, meaning they do not factor into the cost function, so they test the ANN's ability to generalize. This internal validation set acts as a gatekeeper that prevents poor models from being accepted into the ensemble of trained networks (see Step 5). A second, independent or 'external' validation set, composed of complete ship transects from the high- and low-latitude ocean, was omitted from binning in Step 1 and used to establish the performance of the entire ensemble. Our rationale for using complete ship transects is as follows. If we randomly chose 10% of observations for external validation, those observations would come from the same cruises as the wider dataset; despite being randomly selected, they would be highly correlated geographically with the training data. Contrast this with validating the EANN results against observations from whole research cruises in unique geographic regions, that is, areas where the model has not "learned" anything about nitrate. We therefore argue that observations from whole ship tracks provide a more difficult test of the model.
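A sketch of the internal validation step for a single network, assuming a scikit-learn-style model and the R² acceptance threshold given in Step 5 (the function name train_one_network and the use of train_test_split are illustrative, not the authors' implementation):

from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def train_one_network(X, y, make_ann, r2_threshold=0.81, seed=None):
    """Train one ANN with a random 10% internal validation holdout.
    Returns (model, r2) if the network clears the R² gate, otherwise None."""
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.10,
                                                      random_state=seed)
    model = make_ann()
    model.fit(X_train, y_train)
    r2 = r2_score(y_val, model.predict(X_val))
    return (model, r2) if r2 > r2_threshold else None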
Forming the Ensemble (Step 5): The ensemble is formed by repeating Steps 3 and 4 (each time using a different random 10% internal validation set) until we obtain 25 trained networks for the nitrate d15N dataset. A network is admitted into the ensemble if it yields an R² value greater than 0.81 on its internal validation set. For complete information, refer to Rafter et al. (2019).
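A sketch of the ensemble loop, reusing the hypothetical train_one_network helper from the Step 4 sketch; averaging the member predictions is an assumption about how the ensemble output is combined:

import numpy as np

def build_ensemble(X, y, make_ann, n_members=25):
    """Repeat Steps 3-4 (each attempt with a fresh random internal validation
    split) until 25 networks have passed the R² > 0.81 gate."""
    members, attempt = [], 0
    while len(members) < n_members:
        result = train_one_network(X, y, make_ann, seed=attempt)
        attempt += 1
        if result is not None:
            members.append(result[0])
    return members

def ensemble_predict(members, X_new):
    """Combine the ensemble members; taking the mean of the 25 predictions
    is an assumption about how the final interpolated field is produced."""
    return np.mean([m.predict(X_new) for m in members], axis=0)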