USGS Water Resources Applications Software: LOADEST
This page contains a set of Frequently Asked Questions (FAQ) for the LOADEST software package. Portions of this FAQ were contributed by Tim Cohn, Dan Hippe, and Dave Lorenz.
Questions related to LOADEST are divided into the following categories:

Installation files for the LOADEST software may be downloaded from the Download section of the LOADEST Web site. Alternatively, the LOADEST installation files may be downloaded by ftp as described in the LOADEST documentation. After obtaining the installation files, LOADEST may be installed by following the instructions in Section 5.3 of the LOADEST documentation.
The LOADEST software is available for use, free-of-charge. Software and related materials (data and documentation) are made available by the U.S. Geological Survey (USGS) to be used in the public interest and the advancement of science. You may, without any fee or cost, use, copy, modify, or distribute this software, and any derivative works thereof, and its supporting documentation, subject to the USGS software User's Rights Notice.
Yes, the Windows/Intel version of LOADEST should run under most versions of Microsoft Windows. A 32-bit executable is provided that should run on both 32-bit and 64-bit systems. Installation files are available in the Download section of this Web site.
An executable version for Macintosh computers is not part of the official LOADEST release. It may be possible to compile LOADEST for the Macintosh after downloading the source code.
LOADEST users running Linux or Windows on personal computers can use the pre-compiled executables that are available in the Download section of this Web site; compilation for these operating systems/hardware platforms is not required.
Compilation is required when LOADEST is to be run on other operating systems or hardware platforms (the Macintosh, for example, as noted above) or when changes are made to the source code.
ESTIMATOR and LOADEST2 are separate software packages that were previously used within the U.S. Geological Survey for load estimation. In 2001, an effort was undertaken to merge these two software packages into a single, publicly available package. The result of this effort is LOADEST, a software package that includes all of the features of its predecessors. Additional features, such as the ability to develop user-defined models and data variables, have also been added. Additional information is available here (PDF, 30 KB). Of the three software packages, LOADEST is the only package that is fully documented and supported by the USGS. Use of ESTIMATOR and LOADEST2 by USGS personnel should be discontinued.
An interface-driven, S-PLUS version of LOADEST (S-LOADEST) is available for use by USGS personnel. Additional information on S-LOADEST may be obtained from Dave Lorenz.
Non-USGS users of LOADEST may want to consider LoadRunner, an interface developed at Yale University for determining load estimates at multiple sites. Note, however, that use of LoadRunner's "drop" and "use 1/2" options for dealing with censored data is not recommended. These options may lead to erroneous load estimates as discussed in Helsel and Cohn, 1988 (see LOADEST documentation for full citation). For additional information on this issue, see the publications of Dennis Helsel provided here ("Publications that discuss Detection Limits and Censored Data").
Another alternative for non-USGS users is the Pollutant Load Estimation System, an interface developed at Kangwon National University (Dr. KJ Lim).
The regression approach implemented within LOADEST is best suited for large, non-urban watersheds. Other applications are of course possible; the applicability of LOADEST to a given watershed and dataset must be assessed on a case-by-case basis. Tim Cohn notes that:
"My own experience is that it is very hard to get a reasonable model fit for small basins — and particularly urban ones with point sources"
In addition, Dan Hippe writes:
"In previous decades, most regression based load estimation models have been developed to provide unbiased estimates of mostly annual loads on larger (less flashy) rivers based on calibration data sets of rather infrequent water quality samples (often monthly routine samples and fewer targeted storm samples per year) and the mean daily discharge associated with the dates of the samples (Cohn and others, 1992). The use of mean daily discharges for calibration and estimation, was deemed appropriate in that the principle data product from stream gaging activities at the time was mean daily streamflow and our reporting of loads was often on an annual basis which render the estimates insensitive to the positive or negative bias possible during transient low or high streamflows.Currently, there are numerous agencies and institutions collecting dry and wet weather water quality in small watersheds with predominant land uses (often agricultural row crops or urbanized areas) for the purpose of (1) calibrating water quality models, (2) assessing best management practices, or (3) monitoring in support of the Total Maximum Daily Load (TMDL) process. In many instances, decision makers are interested in understanding loadings seasonally, during critical periods, or less often for operational purposes in real-time.
Often the sampling strategy at these sites involves collection of intensive wet-weather water-quality data using automated samplers. The resulting datasets facilitate our understanding of the temporal patterns in water quality and peak concentrations over storm hydrographs and are intuitively useful for calibrating load models, given the correspondence of both high discharge and often elevated constituent concentrations captured by such data. In such streams, it is not uncommon to observe (1) a two to sometimes three or four orders of magnitude change in water quality over a storm hydrograph, (2) concentration peaks that do not align with the peak streamflow, and (3) a hysteretic response of water quality to changing streamflows that is symptomatic of any number of interacting physical and chemical processes taking place within the watershed and channel.
Though we learn much from the data collected as part of this intensive wet-weather monitoring, these programs prove to be expensive, and they present challenges for the subsequent use of the high-frequency wet-weather data in calibrating regression-based load models developed for larger watersheds with a focus toward just an unbiased estimate of annual loads. Some research has been published on approaches for either subsetting the calibration data to obtain reasonable load estimates and ranges of estimated uncertainties based on regression models that use just mean daily discharges, or developing a cost-effective strategy for wet-weather data collection that provides a reasonable estimate of annual loads (Robertson and Roerisch, 1999; Robertson and Richards, 2000; Robertson, 2003; and Sprague, 2001).
A number of features have been incorporated into LOADEST that allow the user to (1) take advantage of the emergence of unit-value discharge data as the product of most stream gaging activities (solving the issue of using daily mean discharges for calibration and estimation of constituent loads on flashy streams) and (2) use continuous water-quality time series, such as specific conductance or turbidity, that better track the constituents of interest than streamflow itself (addressing the issue of hysteretic relations of many water-quality constituents relative to streamflow). Unit-value-based models that can be implemented in LOADEST should be compatible with the intensive wet-weather sampling schemes and should provide an improved model fit and decreased uncertainties in the estimated loads.
We anticipate much greater use of unit-value-based calibration and estimation techniques, and also use of surrogate relations, in these smaller, flashier streams. These approaches may also be appropriate to inform management decisions regarding loads and concentrations during shorter time periods and even in real time. There are, however, many settings where it is difficult to develop a complete, long-term time series of unit-value discharge data, and inevitably there are reliability issues with both streamflow and water-quality sensors that lead to data gaps that must be overcome in designing and implementing this approach. Work published by the USGS Kansas Water Science Center has compared and contrasted these various load computation approaches for a number of sites and constituents (Christensen and others, 2000, and Rasmussen and others, 2005)."
Perhaps. Dan Hippe writes:
"Consider, that often loads computational work that the USGS does and the load allocations done for a TMDL are NOT procedurally similar and therefore may not be easily reconciled. Loads estimation in the USGS generally involves estimating export of a mass of material per unit of time at a limited number of instruments sites within river networks. What many TMDL's are doing is assigning an average or hypothetical loading rate to all activities in a watershed and accumulating (or routing them) them within the stream network, with rather few observations or verification based on instream data. If done right, the work we do at a limited number of points should be a verification of how well the loadings are assigned and accumulated or routed through the network as part of a TMDL — however, that presumes they are looking at actual conditions and not a hypothetical average condition.
In most simplistic TMDL studies, point source discharges are generally assigned a loading using periodic water-quality (qw) measurements, often monthly samples (these can be incomplete, for example, doing ammonia but not nitrate, orthophosphate but not total phosphorus), and continuous volumetric measures, with constituent discharges computed by time-weighted averages. Depending on the system and its treatment capacity relative to inflows, this may be an accurate or low-end estimate of the actual discharge (if the term 'bypassing' comes up), or there may be a need to adjust the loadings for the unmeasured constituents.
NPS (nonpoint source) discharges are often based on something as simple as spreadsheet models with loading coefficients derived for average annual conditions that rarely coincide with actual conditions. Any processing of these constituents instream is done with additional hard-to-verify removal coefficients. It gets really strange when the drainage network is the major source; for example, urban runoff in itself contains rather little suspended sediment, but the erosive effect of peak discharges from impervious areas on ditches, intermittent streams, and the mainstem can be huge, and the resulting instream transport of sediment and sediment-associated constituents is often not accounted for in simple or complex TMDL models.
In typical USGS studies to estimate annual loads using regression techniques, it may be better to at least try to compute loads separately for the primary constituents, rather than total N or P, so that the regression model can pick up on various environmental signals as best it can. For example, you can often have river systems for which total nitrogen or total phosphorus concentrations vary little with runoff events or seasonally (with much of the variation remaining as noise in the model), while nitrate+nitrite and TKN may each have a very strong, but different, relation with season and flow; this may also be the case with dissolved and particulate phosphorus. Doing the computational work on the individual species may provide an improved model fit and reveal changes in load from the various nutrient species downstream that can relate to different sources or instream processes.
In comparing and contrasting the LOADEST and the consultant's TMDL work, there are a couple of simple checks and cross comparisons that can be done. Does the TMDL model get the annual or seasonal streamflow volumes/yields right? They should be very close; if they don't do this well, there is trouble.
You may not have this, but it is of interest whether the models correspond well on load estimation for one or more dissolved constituents that behave fairly conservatively, and also for a constituent that is derived primarily from point sources versus one derived primarily from nonpoint sources. This should be doable if they have accounted for the various sources of loadings.
Then look at how the models work on sediment or particle associated constituents (phosphorus, metals, bacteria, etc.). This is admittedly more difficult and we can expect substantial differences, but they should be brought out.
If all is well up to this point (or even if it isn't), consider evaluating how the models do for estimating loads during a range of wet and dry years. I've often taken the annual loads or yields for a number of years and plotted them against the percent of average annual runoff. What you might find is that the TMDL work will over- or under-predict for conditions that vary much from normal...."
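For readers who want to reproduce the last check Hippe describes, the sketch below plots annual loads against each year's runoff expressed as a percent of the long-term average annual runoff. It is an informal Python example, not part of LOADEST, and the annual runoff and load values shown are placeholders to be replaced with your own estimates (from LOADEST and from the TMDL model) for the same years.

    import matplotlib.pyplot as plt

    # Placeholder values for illustration only; substitute your own annual
    # runoff (e.g., inches) and annual load (e.g., tons) estimates.
    annual_runoff = {1998: 11.2, 1999: 8.4, 2000: 6.1, 2001: 14.0}
    annual_load = {1998: 520.0, 1999: 310.0, 2000: 190.0, 2001: 780.0}

    mean_runoff = sum(annual_runoff.values()) / len(annual_runoff)
    years = sorted(annual_runoff)
    pct_of_avg = [100.0 * annual_runoff[y] / mean_runoff for y in years]
    loads = [annual_load[y] for y in years]

    # Plot annual load against runoff as a percent of the long-term average.
    plt.scatter(pct_of_avg, loads)
    plt.xlabel("Runoff, in percent of average annual runoff")
    plt.ylabel("Annual load")
    plt.show()

Plotting the TMDL model's annual loads on the same axes shows whether it over- or under-predicts as conditions depart from normal.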
Yes. See Application 6 in the LOADEST documentation. Note that the load unit flag (ULFLAG) must be set to 1.
The value of the conversion factor will be dependent on the units of the concentration and streamflow data. In Application 6, the concentration data is in mg/L and the streamflow data is in ft3/sec. The units of raw load are therefore:
(mg/L) x (ft3/sec)
The conversion factor is used within LOADEST to convert the units of raw load to units of kg/day. For Application 6, EFLOW and CFLOW are set to:
1 / conversion-factor
where
conversion factor = (1 g/1000 mg)(1 kg/1000 g)(0.3048^3 m3/ft3)(1000 L/m3)(86,400 sec/day)
The value of EFLOW and CFLOW used in Application 6 (=1/conversion-factor) is thus 0.408735. Note that when EFLOW and CFLOW are multiplied by the conversion factor, the result is 1 (0.408735 * conversion factor = 1).
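As an informal check on this arithmetic (not part of LOADEST), the short Python sketch below reproduces the conversion factor and its reciprocal:

    # Raw load is in (mg/L)(ft3/sec); the factor below converts it to kg/day,
    # and its reciprocal is the EFLOW/CFLOW value used in Application 6.
    mg_to_kg = 1.0 / (1000.0 * 1000.0)   # (1 g/1000 mg)(1 kg/1000 g)
    ft3_to_L = 0.3048 ** 3 * 1000.0      # (0.3048^3 m3/ft3)(1000 L/m3)
    sec_to_day = 86400.0                 # (86,400 sec/day)

    conversion_factor = mg_to_kg * ft3_to_L * sec_to_day
    print(conversion_factor)             # approximately 2.4466
    print(1.0 / conversion_factor)       # approximately 0.408735 (EFLOW, CFLOW)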
Perhaps. See the LOADEST documentation (last paragraph of Section 3.3.2) and the FAQ entry immediately below this one.
This error occurs when LOADEST cannot find the control, header, calibration, or estimation file that it is looking for. All four of these files must reside in the folder/directory from which you are executing the LOADEST command. The control file must be named 'control.inp'; names for the remaining input files must correspond to the filenames specified in the control file. One potential problem is that some operating systems automatically add extensions such as '.txt' to user-specified filenames when saving text files. LOADEST users may therefore need to manually rename their files or disable the automated file naming capability.
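A short script can catch this problem before LOADEST is run. The Python sketch below is an informal example, not part of LOADEST; it assumes the control file lists the header, calibration, and estimation file names on separate non-comment lines (lines beginning with '#' are comments), and it flags cases where a '.txt' extension was silently appended:

    import os

    # The control file must be present in the directory LOADEST is run from.
    if not os.path.exists("control.inp"):
        raise SystemExit("control.inp was not found in the current directory")

    # Read the input file names listed in the control file (skip '#' comments).
    with open("control.inp") as f:
        names = [ln.strip() for ln in f if ln.strip() and not ln.startswith("#")]

    for name in names:
        if os.path.exists(name):
            print(name, ": found")
        elif os.path.exists(name + ".txt"):
            print(name, ": missing, but", name + ".txt", "exists -",
                  "the text editor may have appended a '.txt' extension")
        else:
            print(name, ": MISSING")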
All of the errors and problems noted above are symptomatic of extrapolation — a situation that arises when the calibration data set does not cover the range of conditions for which load estimates are needed. Two common examples are extrapolation with respect to time (estimating loads for time periods not covered by the calibration data) and extrapolation with respect to streamflow (estimating loads for streamflows outside the range of the calibration data); both are discussed in the entries below.
In many load estimation applications, some degree of extrapolation may be needed to meet the user's objectives. The amount of acceptable extrapolation must be assessed on a case-by-case basis. As noted by Tim Cohn:
"The calibration data set should more-or-less cover the range of conditions for which estimates are needed. If you extrapolate too far — in terms of leverage — you get large standard errors or the routines won't converge. Although the error message ('failure to converge') is cryptic, it indicates a fundamental problem with the data/model. (Perhaps it would be better if it said: 'Inadequate data to estimate loads given the specified model'). There are 'fixes' for this:
- Reduce the dimension of the model. I would consider eliminating the streamflow^2 and time^2 terms, and then get rid of additional variables as needed.
- Collect more data so that the calibration data more completely covers the range of discharges/times/seasons that you want to estimate for.
- Don't estimate loads for those days that have streamflows below the smallest streamflow in the calibration data set."
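One way to screen for this situation before calibrating is to compare the date and streamflow ranges of the calibration and estimation data sets, much as Part IIa of the constituent output file does. The Python sketch below is an informal example, not part of LOADEST; it assumes comment lines begin with '#', that data lines begin with a date (yyyymmdd), a time, and a streamflow column, and that the first non-comment line of the estimation file gives the number of observations per day. The file names are hypothetical, and the column positions may need to be adjusted to match your files.

    def dates_and_flows(path, skip_first_data_line=False):
        """Return (dates, flows) read from a calibration or estimation file."""
        dates, flows = [], []
        skip = skip_first_data_line
        with open(path) as f:
            for line in f:
                if line.startswith("#") or not line.strip():
                    continue                  # skip comments and blank lines
                if skip:                      # skip the observations-per-day line
                    skip = False
                    continue
                cols = line.split()
                dates.append(int(cols[0]))    # yyyymmdd
                flows.append(float(cols[2]))  # streamflow
        return dates, flows

    cal_dates, cal_flows = dates_and_flows("calib.inp")
    est_dates, est_flows = dates_and_flows("est.inp", skip_first_data_line=True)

    print("calibration dates:", min(cal_dates), "to", max(cal_dates))
    print("estimation dates: ", min(est_dates), "to", max(est_dates))
    print("calibration flows:", min(cal_flows), "to", max(cal_flows))
    print("estimation flows: ", min(est_flows), "to", max(est_flows))

    n_low = sum(q < min(cal_flows) for q in est_flows)
    print(n_low, "estimation days have streamflow below the calibration minimum")

If the estimation dates or flows extend well beyond the calibration ranges, the fixes listed above should be considered before running LOADEST.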
Below are several emails written in response to users with extrapolation problems.
Extrapolation with respect to Time:
"The basic problem is that your calibration and estimation data sets don't cover the same time periods (your estimation data being from 1973-2000, your calibration data being mainly from 2003). In some cases the fact that you're extrapolating between time periods would merely be frowned upon - in theory things would still 'work'. Indeed your input files produce reasonable results when models 1, 2, 4, & 6 are run. There are, however, major problems when you include the dtime term (models 3, 5, 7-9; note that model 4 is OK as it involves the sin & cos of dtime, but not dtime). This is because the dtime term represents a linear trend in time that is present in your calibration data set. So in your case, the loads apparently have some time trend over the calibration period (dec '02- 2003). but there's no reason to expect that trend to extend all the way back to 1973. "
"The basic problem is that its really not appropriate to extrapolate when the regression model includes terms based on decimal time. In your case the calibration data spans 2001-2003, and you're trying to estimate loads for 1968-2004. I haven't looked at your problem in detail, but for illustration purposes, consider a calibration data set that has a linear increase in load over time (e.g. model 3, with a positive coefficient dtime). As you go back in time (estimation data prior to the calibration period), this linear trend can actually make the load go negative (note that decimal time is centered, so that decimal times less than the center of calibration period are actually negative). To illustrate, I did run your data for the period of 1993 forward — I've attached the results. You'll see that the loads get pretty funky for the earlier time periods. If you go back even farther, you end up getting underflow/overflow errors in the load computations which slow the calculations way down. So, yes, I guess, you could call it a bug, but the thing actually runs, very slowly, as it produces meaningless results. (I suspect one could run your entire estimation file, and after enough time it would finish — I tried it here and after 4 days it was still running.)."
"I ran these & its the same problem as before — the time trends in the calibration data cause problems when you extrapolate to other time periods - so if you want to use any of the models involving dtime, its best to trim the estimation file so it just covers the same time period as the calibration data. If you need to extrapolate, then you'll need to use the appropriate models. Note that the files you sent actually work - it just runs very slowly when it generates garbage numbers (negative loads in many cases for the early data). Also note that model 9 probably isn't appropriate for the calibration data — the t & p-values for many of the model coefficients suggest that the coefficients aren't significant. (You may have not been able to see that if the program hadn't finished running — trim your estimation file to begin Dec 02 & you'll see what I mean)"
Extrapolation with respect to Streamflow:
"The problem is that more than half of the days you want estimates for have streamflow values that are less than the minimum streamflow value from the calibration data set (see Part IIa of the Constituent Output file - Streamflow Summary Stats). So you're extrapolating to lower flows that the regression model wasn't calibrated for. In general, you don't have the data necessary to estimate loads for the low flow months - June-Sept. (you may want to change LDOPT to 2, so you get the monthly load estimates). Note, however that if you still want to proceed, there are some options (lucky for you, I'm not a statistician, so I won't tell you to abandon ship — but there are likely some statistical arguments for scraping the analysis due to the extrapolation issue). As it turns out, the problem arises w/ all of the regression models that include a Q^2 term. The remaining models (1, 3, 4, & 7) all 'work'. So you could select one of those models; I ran them all manually (explicitly setting MODNO = X, where X=1, 3, 4, 7) and found model 7 to have the lowest AIC value (AIC is used to select the model when MODNO=0)"
"I ran various constituents and various models and found that most of the problems occurred when a model containing Q^2 was selected. I then happened to look at the summary stats on streamflow (Part IIa, Constituent Output file) and noticed that the estimation and calibration data sets have very different flow characteristics. The calibration data set has a minimum streamflow of >1 cfs, while most of the estimation data is for streamflow<1 cfs. So the problem appears to be the fact that you are extrapolating to low flows. To test this out I edited the estimation file again, this time eliminating all flows <0.18. After doing this, all constituents and all models work fine. The unfortunate effect of eliminating these low flows is that by doing so one neglects part of the load. In many cases the flows were so low (<0.1) that the amount of missing load may be negligible."
"what you're seeing is a fairly common problem with the AMLE approach (I probably shouldn't say its a problem w/ AMLE — its really a data problem that MLE forgives, but AMLE does not). In general, its a problem of extrapolation. It most often arises when people have a regression model w/ the dtime or dtime^2 terms, and they extrapolate w/ respect to time (e.g. in the most extreme case I've seen, 1 yr of calibration data and 30 years of flows for estimation). In your case its due to extrapolation w/ respect to streamflow. Model 9 includes a lnQ and lnQ^2 term that is causing the problem. For the example you sent, the first constituent 'works', whereas the second does not. As shown below, the amount of extrapolation in the first case is not as bad as the second. In the second case, there's almost an order of magnitude difference between the max calibration and estimation flows; the result is a1 and a2 coefficients that yield a negative sum, such that you're generating some very small loads - not what you'd expect for high flows."
First, note that the 'entire estimation period' refers to the time period spanned by the streamflow data in the estimation file. For example, if the estimation file contains 5 years of streamflow data, the mean load for the 'entire estimation period' will be the mean load for the 5 year time period. To obtain mean load estimates for individual years (or other specific time periods), the estimation file will thus need to be modified such that it contains streamflow data from a single year only (or other specific time period). For the example considered here, mean loads for each of the 5 years would be obtained by doing 5 separate LOADEST runs, where the estimation file for each run contains a single year of streamflow data (the calibration file for each of the 5 runs would be identical).
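The file splitting described above can be scripted. The Python sketch below is an informal example, not part of LOADEST, with a hypothetical estimation file name; it assumes '#' comment lines, a leading observations-per-day line, and data lines that begin with a yyyymmdd date. It writes one estimation file per calendar year, copying the comment lines and the observations-per-day line into each. Each yearly file is then paired with the original calibration and header files for its own LOADEST run, with the estimation file name in the control file updated accordingly.

    with open("est.inp") as f:
        lines = f.readlines()

    comments = [ln for ln in lines if ln.startswith("#")]
    body = [ln for ln in lines if not ln.startswith("#") and ln.strip()]
    nobspd, data = body[0], body[1:]          # observations per day, then data

    # Write one estimation file per calendar year found in the data lines.
    for year in sorted({ln.split()[0][:4] for ln in data}):
        with open("est_" + year + ".inp", "w") as out:
            out.writelines(comments)
            out.write(nobspd)
            out.writelines(ln for ln in data if ln.split()[0].startswith(year))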
Finally, note that mean load estimates for individual years (or other specific time periods) could also be obtained using the information contained in the 'Individual Load Files' (using a spreadsheet, for example). These estimates would not, however, include the statistical information (standard error of prediction and confidence intervals) provided by the method described above.