There are a number of assumptions made to complete the analysis that determines the 100-year flood. First, the extreme events observed in each year must be independent from year to year; in other words, the maximum river flow rate from 1984 cannot be significantly correlated with the observed flow rate in 1985, the 1985 value cannot be correlated with 1986, and so forth. Second, the observed extreme events must come from the same probability distribution function. Third, the probability distribution must relate to the largest storm (rainfall or river flow rate measurement) that occurs in any one year. Fourth, the probability distribution function must be stationary, meaning that its mean, standard deviation, and maximum/minimum values are not increasing or decreasing over time. This property is referred to as stationarity.

The first assumption is often but not always valid and should be tested on a case-by-case basis (a simple screening of the independence and stationarity assumptions is sketched in the code below). The second assumption is often valid if the extreme events are observed under similar climate conditions. For example, if the extreme events on record all come from late-summer thunderstorms (as is the case in the southwestern U.S.) or from snowpack melting (as is the case in the north-central U.S.), then this assumption should be valid. If, however, some extreme events come from thunderstorms, others from snowpack melting, and others from hurricanes, then this assumption is most likely not valid. The third assumption is a problem only when trying to forecast a low but still annual-maximum flow event (for example, an event smaller than a 2-year flood); since this is not typically a goal in extreme-event analysis or in civil engineering design, the situation rarely arises. The final assumption, stationarity, is difficult to test from data for a single site because of the large uncertainties in even the longest flood records (see next section). More broadly, substantial evidence of climate change strongly suggests that the probability distribution is also changing and that managing flood risks will become even more difficult in the future. The simplest implication of this is that not all of the historical data are, or can be, considered valid inputs into the extreme-event analysis.

When these assumptions are violated, an unknown amount of uncertainty is introduced into the reported value of what the 100-year flood means in terms of rainfall intensity or flood depth. When all of the inputs are known, the uncertainty can be expressed in the form of a confidence interval; for example, one might say there is a 95% chance that the 100-year flood is greater than X but less than Y.
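The first and fourth assumptions can be screened with standard statistics. The following is a minimal sketch, assuming an annual-maximum discharge series held in a NumPy array; the series values and the 0.05 significance level are illustrative, not from the article. It checks independence via the lag-1 serial correlation between successive annual maxima, and stationarity via Kendall's tau against time, the basis of the Mann-Kendall trend test.

```python
# Sketch: screening an annual-maximum flood series against the first and
# fourth assumptions (year-to-year independence and stationarity).
# `annual_max` holds one peak discharge per year (m^3/s); the values and
# the 0.05 significance level are illustrative assumptions.
import numpy as np
from scipy import stats

annual_max = np.array([612., 480., 905., 530., 744., 1102., 655.,
                       820., 497., 958., 701., 1230., 588., 860.])
years = np.arange(1984, 1984 + annual_max.size)

# Assumption 1: lag-1 serial correlation between successive annual maxima.
# A correlation near zero supports treating the years as independent.
r1, p_indep = stats.pearsonr(annual_max[:-1], annual_max[1:])
print(f"lag-1 correlation r = {r1:.2f} (p = {p_indep:.2f})")

# Assumption 4: stationarity, screened with Kendall's tau between the
# series and time (the statistic behind the Mann-Kendall trend test).
tau, p_trend = stats.kendalltau(years, annual_max)
print(f"Kendall tau vs. time = {tau:.2f} (p = {p_trend:.2f})")

if p_indep < 0.05 or p_trend < 0.05:
    print("Independence or stationarity is questionable for this record.")
```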
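Building on the same illustrative series, the next sketch shows one way the 100-year flood and a 95% confidence interval of the kind described above might be computed: fit a distribution to the annual maxima, take its 0.99 quantile (the discharge with a 1% annual exceedance probability), and bootstrap the fit to express sampling uncertainty. The Gumbel (EV1) distribution used here is an assumed choice for illustration; U.S. practice for riverine floods more often uses the log-Pearson Type III distribution.

```python
# Sketch: estimating the 100-year flood and a 95% confidence interval
# from an annual-maximum series. The Gumbel (EV1) fit is an assumption
# for illustration, not the only defensible choice.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
annual_max = np.array([612., 480., 905., 530., 744., 1102., 655.,
                       820., 497., 958., 701., 1230., 588., 860.])

# The 100-year flood has a 1% annual exceedance probability, i.e. it is
# the 0.99 quantile of the fitted annual-maximum distribution.
loc, scale = stats.gumbel_r.fit(annual_max)
q100 = stats.gumbel_r.ppf(0.99, loc=loc, scale=scale)

# Bootstrap: refit to resampled series so the sampling uncertainty can
# be reported as an interval ("95% chance the 100-year flood is greater
# than X but less than Y", in the article's terms).
boot = []
for _ in range(2000):
    sample = rng.choice(annual_max, size=annual_max.size, replace=True)
    b_loc, b_scale = stats.gumbel_r.fit(sample)
    boot.append(stats.gumbel_r.ppf(0.99, loc=b_loc, scale=b_scale))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Q100 ~ {q100:.0f} m^3/s, 95% CI [{lo:.0f}, {hi:.0f}]")
```

Note how wide the interval is for a short record; this is the large sampling uncertainty that makes single-site stationarity so hard to test.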
Direct statistical analysis to estimate the 100-year riverine flood is possible only at the relatively few locations where an annual series of maximum instantaneous flood discharges has been recorded. In the United States, as of 2014, taxpayers have supported such records for at least 60 years at fewer than 2,600 locations, for at least 90 years at fewer than 500, and for at least 120 years at only 11. For comparison, the total area of the nation is about 3,800,000 square miles (9,800,000 km²), so there are perhaps 3,000 stream reaches that drain watersheds of 1,000 square miles (2,600 km²) and 300,000 reaches that drain 10 square miles (26 km²). In urban areas, 100-year flood estimates are needed for watersheds as small as 1 square mile (2.6 km²). For reaches without sufficient data for direct analysis, 100-year flood estimates are derived from indirect statistical analysis of flood records at other locations in a hydrologically similar region, or from other hydrologic models (a sketch of the regional approach follows). Similarly, for coastal floods, tide-gauge data exist for only about 1,450 sites worldwide, of which only about 950 added information to the global data center between January 2010 and March 2016.
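The indirect, regional approach can be sketched as well. Assuming hypothetical drainage areas and at-site Q100 estimates for gauged sites in a hydrologically similar region, the example below fits a log-log power law, Q100 = a * A^b, a common simplified form of regional regression equations, and transfers it to an ungauged 1-square-mile urban watershed; operational regional equations typically use additional basin characteristics beyond drainage area.

```python
# Sketch of the indirect approach: a regional regression relating 100-year
# flood estimates at gauged sites to drainage area, transferred to an
# ungauged watershed. Site data and the power-law form are illustrative
# assumptions, not values from the article.
import numpy as np

# Drainage areas (mi^2) and at-site Q100 estimates (ft^3/s) for gauged
# sites in a hydrologically similar region (hypothetical values).
area = np.array([12., 48., 105., 310., 760., 1500.])
q100 = np.array([900., 2400., 4100., 9800., 18500., 31000.])

# Fit log10(Q100) = log10(a) + b * log10(A) by least squares.
b, log_a = np.polyfit(np.log10(area), np.log10(q100), 1)
a = 10 ** log_a

# Apply the fitted relation to an ungauged 1 mi^2 urban watershed.
ungauged_area = 1.0
print(f"Q100 = {a:.1f} * A^{b:.2f}")
print(f"Estimated Q100 at {ungauged_area} mi^2: "
      f"{a * ungauged_area ** b:.0f} ft^3/s")
```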
