2.3.3. Land Cover Characteristics
The focal point of this paper is to determine the effects, if any, of using alternative land cover maps with SWAT. The two alternatives were five-year maps (multitemporal NLCD) and annual maps (an integration of disturbance products). SWAT models were fit to each watershed keeping all other inputs (DEM, climate, and soils, described below) and SWAT default parameters constant. A description of each land cover set follows. Throughout the paper, ‘five-year maps’ refers to land cover maps that were refreshed at roughly five-year intervals, whereas the annual maps were updated every year.
• Five-year land cover maps
All available (1992, 2001, 2006, and 2011) multitemporal NLCD products were used in this step. Our assumption here is that the 1992 product resembles the land cover conditions at the time our disturbance datasets start (i.e., 1990), and that the subsequent products (2001, 2006, and 2011) can then be used to describe changes in the landscape that occurred over the complete duration of our time series. No changes were made to these maps at this stage. Nevertheless, SWAT only ingests a single description of the land cover conditions for the entire duration of a regular simulation; in other words, it assumes by default that land cover does not change for however many years are included in the simulation. Even though SWAT users are able to utilize NLCD products during simulations without adjustment, we needed to establish a process to make both datasets (annual and five-year maps) directly comparable in order to guarantee that no advantage was given to one dataset over the other. The process to integrate the four available temporal NLCD scenarios into a single SWAT simulation is described directly after the description of the annual land cover.
• Annual land cover maps
A long-term annual (30 m spatial resolution) time series was created that showed the timing, extent, and type of disturbance across the NFS from 1990–2011. As a first step in the production of these maps, an automated forest change detection algorithm [32
] was run on an annual, cloud-free series of Landsat TM and ETM+ imagery. Technicians manually edited this map by adding, subtracting, and changing the shape of disturbance patches using several sources of reference data, including: multi-temporal composites of Landsat data transformed to one band per year with the Disturbance Index [33
]; a high-resolution time series of aerial imagery served through Google Earth; a spatial database of historical harvest activities (the National Forest System’s Forest Service Activity Tracking System, or FACTS); a national interagency database of fires over 405 hectares in size [34
]; and insect activity records obtained from aerial detection surveys [35
]. This process also allowed the technicians to label the type of disturbance (harvest, fire, bark beetle) for each affected patch.
• Fusion of a forested land cover baseline with disturbance maps
For SWAT models using both five-year and annual land cover maps, we utilized a product that describes the spatial distribution of forest type groups (FTG) as described in [21
]. It has been demonstrated [30
] that explicitly integrating the configuration of the forested landscape (i.e., which pixels are Ponderosa pine forests, which pixels are Douglas-fir forests, etc.) in SWAT provides much better results. The FTG geospatial dataset was used as the baseline for 1990 for both datasets. The five-year maps were spatially intersected with the FTG dataset, and the forest spatial domain from the FTG was maintained. Pixels from the five-year maps that did not exactly match the FTG spatial domain were assigned to an FTG class based on proximity. This process was repeated for all the five-year maps. In this context, the parameters for the NLCD forest classes were not used in SWAT; rather, the specific parameters for FTG classes were used. This cross-walking (NLCD to FTG) activity guaranteed that the five-year maps used the same SWAT parameters as the annual maps. In the case of the annual maps, and starting with the 1990 FTG map, each subsequent year was then updated to reflect the changes that occurred on the landscape due to the mapped disturbances. For instance, the land cover conditions for the very next year (1991) were determined to be the same FTG as in 1990 with the exception of the disturbed pixels, which were recorded as a new land cover class that we named ‘disturbed’. After another year, the classification was allowed to revert to ‘forest’ and the original FTG class. This is, in essence, the difference between the annual and five-year maps: the periodic use of the disturbed class. We had considered keeping disturbed pixels as disturbed in subsequent years but did not have the data to determine what the land cover condition was post-disturbance. This is a limitation of our analysis. Parameters for these forest classes, and for the disturbed class with regard to interception capacity, canopy density, and surface roughness, were obtained from the application of SWAT in snowmelt-driven catchments reported by [30].
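The annual update rule described above (an FTG baseline, pixels flagged as ‘disturbed’ only for the year of the event, then reverting to their original FTG class) can be sketched as follows. This is a minimal illustration under our reading of the procedure, not the production workflow; the class code and all names are hypothetical:

```python
import numpy as np

DISTURBED = 0  # hypothetical code for the 'disturbed' land cover class


def annual_landcover_series(ftg_1990, disturbance_by_year, years):
    """Build annual land cover maps from an FTG baseline and disturbance masks.

    ftg_1990            : 2-D array of FTG class codes (the 1990 baseline)
    disturbance_by_year : dict mapping year -> boolean mask of pixels
                          disturbed in that year (True = disturbed)
    years               : iterable of years, e.g. range(1990, 2012)
    Returns a dict mapping year -> land cover array.
    """
    maps = {}
    for year in years:
        lc = ftg_1990.copy()  # undisturbed pixels keep their FTG class
        mask = disturbance_by_year.get(year)
        if mask is not None:
            lc[mask] = DISTURBED  # flagged as disturbed for this year only
        maps[year] = lc  # next year the pixel reverts to its FTG class
    return maps
```

Copying from the 1990 baseline each year is what makes the reversion automatic: a pixel is only ever ‘disturbed’ in the year its mask fires.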
2.3.6. Integration of Land Cover Change in SWAT
With the implementation of SWAT version 2009, a new module was added to account for land use and land cover changes that take place over the modeling period. Before this module was added, SWAT assumed that the land cover in a given watershed did not change during the modeling period. This assumption is usually not true, especially for modeling periods that span several decades, such as the ones conducted in this study. A cloud-based software tool (SWAT LUU) can ingest multiple land cover geospatial datasets and produce the required inputs for the SWAT land cover change module. The multiple five-year (1992, 2001, 2006, and 2011) temporal datasets (modified as explained above through fusion with a forested land cover baseline) and the annual land cover maps (1990–2011) were uploaded to SWAT LUU [41
] in conjunction with their corresponding SWAT modeling projects to obtain the required inputs for simulation of land cover change. This process was conducted for each of the pilot watersheds (i.e., one project using the five-year maps, and another project using the annual maps). With each new land cover map that is available, SWAT LUU recalculates the proportion that each HRU (i.e., Ponderosa pine with an ‘x’ soil series, disturbed forest with a ‘y’ soil series, etc.) occupies in the watershed under study. With an updated description of the landscape, and with the changing climatic conditions, SWAT is then able to model the effects of land cover change. Because SWAT cannot use land cover classes not present in the first year of the simulation, a negligible amount of ‘disturbed’ land (five 30-m pixels per soil group) was added to the FTG map in 1990. This process was required for both annual and five-year maps.
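Conceptually, the HRU bookkeeping that SWAT LUU performs for each new map amounts to recomputing area fractions per (land cover, soil) combination. A minimal sketch of that calculation, assuming equal-area pixels (this is our illustration, not SWAT LUU’s actual code):

```python
from collections import Counter


def hru_fractions(landcover, soil):
    """Fraction of the watershed occupied by each (land cover, soil) HRU.

    landcover, soil : equal-length sequences of per-pixel class codes
                      taken from the same land cover map and soil map.
    Returns {(lc_class, soil_class): fraction}, with fractions summing to 1.
    """
    counts = Counter(zip(landcover, soil))  # pixel count per HRU
    total = sum(counts.values())
    return {hru: n / total for hru, n in counts.items()}
```

Rerunning this for each map in the series yields the per-year HRU proportions that drive the land cover change module.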
2.3.7. SWAT Simulations
For each pilot watershed, two simulations were set up in SWAT: one using the five-year map datasets, and the other using the annual land cover maps, which include the disturbance time series. All inputs (DEM, climate, soils) and the parameters influencing those inputs were kept the same for the two simulations, with the exception of land cover. Default parameters related to canopy structure, interception capacity, and surface roughness for the simulations using the annual maps were modified following [30
] for the forest classes (Lodgepole, Spruce-fir, and disturbed forest), and for the rangeland and barren classes. The Ponderosa and Douglas-fir forest classes were adjusted using a combination of the characteristics detailed by [30
], and an assessment of leaf area index time series [42
]. Tree growth default parameters for both simulations (five-year and annual land cover) were also modified. Preliminary runs of these models indicated that transpiration was much lower than expected. As pointed out by [45
], this was due to a default management operation in SWAT that kills and harvests the crops or forests every year. Such a forestry operation was not expected in our watersheds and was therefore removed, and the initial settings for tree growth for both types of simulations (five-year and annual) were modified accordingly.
2.3.8. Model Calibration
Once the initial models were fitted as described above, we conducted a calibration process so that the SWAT parameters used during model validation would be based on the specific local conditions of the 10 watersheds, and not on SWAT’s default parameter values. This is particularly important given that all of our watersheds’ hydrologic regimes are snowmelt dependent, while SWAT’s default parameters are specific to agricultural watersheds. SWAT’s parameters are process based and must be used within a sensible and realistic range that can change from watershed to watershed. The main goal of the calibration process is to determine which parameters have a predominant influence on the hydrology of the watershed, and to estimate a range of values for those parameters that better reflects the physical processes while at the same time reducing prediction uncertainty. An initial set of 26 parameters that have been previously documented [30
] as being significant in affecting the hydrology of this type of watershed was used. The list of parameters and their initial ranges of values is provided as supplementary material.
Model calibration was performed using the Sequential Uncertainty Fitting (SUFI2) semi-automated algorithm that is available within the SWAT Calibration and Uncertainty Programs SWAT-CUP software [46
]. In SUFI2, many iterations are performed over the selected subset of parameters and the initial range of values. As more iterations are applied, the “parameter ranges get smaller zooming in a region of the parameter space, which produces better results than the previous iteration” [47
]. The user can tune the uncalibrated model by minimizing the differences between observed and simulated flows. This is done by defining an objective function to optimize, which in this study was the Nash–Sutcliffe efficiency (NSE). The NSE is “a normalized statistic that determines the relative magnitude of the residual variance” [48
], and indicates how well the plot of observed versus simulated data fits the 1:1 line [49
]. A thorough description of calibration and validation concepts and applications can be found in [47
]. The calibration was conducted independently for each of the pilot watersheds; however, the initial set of parameters to be calibrated was the same for all watersheds. The calibration for each watershed was conducted as follows: (a) with the initial set of parameters and their initial values (Table S3), 750 simulations were run to obtain the results for the first iteration; (b) the calibration outputs (i.e., the global sensitivity analysis) of the iteration were reviewed to determine which parameters should not be included in the next iteration; we chose to leave out those parameters (usually one or two) with the highest p-values, which may be interpreted as not having a significant impact on changing the predicted flow values; (c) we then reviewed the new parameter range values after the iteration, and copied those values for the parameters that we decided to include in the next iteration; (d) a new iteration was run with the chosen subset of parameters and their new values using 750 simulations; (e) the process to select the new subset of parameters to be used in the next iteration was repeated, and the change in NSE was noted; (f) this process was repeated until there was no visible and noteworthy improvement in NSE from iteration to iteration; (g) the original SWAT model was modified accordingly using the final subset of parameters and their calibrated values. This is a computationally demanding process, and SWAT-CUP had to be used with an extension that allows parallel processing, thereby reducing the time required to run hundreds of simulations.
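The iterative loop in steps (a)–(g) can be sketched as follows. SWAT-CUP/SUFI2 performs the actual 750-run iterations, so `run_iteration` below is a hypothetical stand-in for one such iteration, and all names are ours, not SWAT-CUP’s API:

```python
def calibrate(params, run_iteration, tol=0.01, max_iters=10):
    """Iteratively shrink the calibrated parameter set, mimicking steps (a)-(g).

    params        : dict of name -> (low, high) initial ranges
    run_iteration : callable(params) -> (nse, p_values, new_ranges); a
                    stand-in for one SUFI2 iteration of 750 simulations
    Stops when the NSE gain between iterations drops below `tol`.
    """
    prev_nse = float("-inf")
    for _ in range(max_iters):
        nse, p_values, new_ranges = run_iteration(params)
        if nse - prev_nse < tol:
            break  # step (f): no noteworthy NSE improvement
        prev_nse = nse
        # steps (b)-(c): drop the least sensitive parameter (highest
        # p-value) and carry the narrowed ranges into the next iteration
        if len(params) > 1:
            worst = max(p_values, key=p_values.get)
            params = {k: new_ranges[k] for k in params if k != worst}
    return params, prev_nse
```

The returned dictionary corresponds to step (g): the final parameter subset and calibrated ranges applied back to the SWAT model.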
2.3.9. Model Validation
The accuracy of the SWAT calibrated models using the annual and five-year land cover was assessed independently. The NSE technique [48
], the percent bias (PBIAS) measure [50
], and the ratio of the root mean square error to the standard deviation of the observations (RSR) were calculated by comparing the simulated SWAT flows with the observed monthly statistics (monthly average of flow discharge) at each of the watershed outlets. These averages were obtained directly from the monthly statistics that have been calculated for each hydrometric station and that are available from the USGS website (https://waterdata.usgs.gov), Table 1 and Table S1. During SWAT’s model configuration, the spatial location of these outlets was defined using the same coordinates as USGS’s surface-water hydrometric stations. The performance of SWAT was estimated either for the entire disturbance time series (1991–2011) or only for the years in which data were available at each hydrometric station and overlapped our period of analysis. All stations had complete monthly availability, except Wat03, for which no records were measured for the months of December through March; thus, the performance reported for this watershed is based on only eight months of data per year. The characteristics of these stations are provided as supplementary material (Table S1).
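Where only daily gauge records are at hand, monthly averages of the kind used for comparison can be derived with a simple aggregation. A minimal sketch assuming a pandas Series of daily discharge (the function name is ours):

```python
import pandas as pd


def monthly_means(daily_flow):
    """Collapse a daily discharge series into monthly mean flows.

    daily_flow : pandas Series indexed by a DatetimeIndex, e.g. daily
                 discharge values downloaded for a USGS gauge.
    Returns a Series of monthly averages, comparable to monthly model output.
    """
    # "MS" groups by calendar month, labeled at the month start
    return daily_flow.resample("MS").mean()
```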
NSE has been recommended [49
] in the literature as the standard for computing hydrologic model performance. NSE is an excellent indicator of how well the simulated data mimic the observed data and ranges from −∞ to 1.0. Values < 0 are thought to indicate unacceptable performance, while values > 0 and < 1.0 indicate acceptable degrees of accuracy. If a model renders a value < 0, then the mean of the observed data is a better predictor than the simulated values [49
]. The PBIAS is usually recommended in the literature as it measures the average tendency of the simulated flows to be larger or smaller than the observed values. A PBIAS value of 0.0 is optimal, while positive values indicate model underestimation and negative values indicate a bias toward overestimation [50
]. RSR varies from an optimal value of 0, which would mean zero residual variation (in other words, a perfect prediction by the model), to a large positive value, which indicates very poor model performance. Thus, the lower the RSR, the lower the RMSE, and the better the flow predictions of the hydrologic model [47
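For reference, the three performance measures reduce to a few lines of code. This is a generic sketch of the standard formulas (with PBIAS signed so that positive values indicate underestimation, matching the convention above), not the SWAT-CUP implementation:

```python
import numpy as np


def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - residual variance / observed variance."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)


def pbias(obs, sim):
    """Percent bias: positive values indicate model underestimation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)


def rsr(obs, sim):
    """RMSE divided by the standard deviation of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    return rmse / obs.std()
```

A perfect simulation gives NSE = 1, PBIAS = 0, and RSR = 0; a simulation that uniformly underpredicts the observations yields a positive PBIAS.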
]. In order to see how well the calibrated models performed using independent data, we conducted the validation phase over a period not covered during the calibration phase. We conducted the calibration at any particular watershed using data from the decade 1991–2000, namely the years 1991–1995. We then conducted the validation by running the calibrated version of the model with the data that pertained to the period 2000–2010, namely the years 2001–2005. We considered that this would provide a much better idea of how well the calibrated models would perform under different temporal conditions.