I wasn’t sure if I should post this after writing it, due to my “newness” to weather and my weather dyslexia (when I say west I usually mean east, and when I say north I usually mean south… I think you get the idea!), so I will just add the disclaimer below… read on at your own peril!!!
*Disclaimer*
Below is a statement of my own learning; if anything contained in it is wrong, please feel free to correct me. It is my opinion and not necessarily established fact!
(Just wanted to add this first: for everyday weather, dynamical models are generally what is used, but other model types exist that work differently and are usually reserved for specific events….)
There are three types of models: statistical, dynamical, and a combination of both. Models based solely on historical weather data are statistical models; an example would be CLIPER, used for forecasting hurricanes. The models everyone is used to nowadays are the dynamical models (ECMWF, GFS, UKMET, etc.), and Hillybilly’s post explains them well. Statistical-dynamical models combine both to produce output, usually for specific weather events, namely tropical cyclones. (I should state that I have only ever run into the statistical and statistical-dynamical models in regard to TCs, but I am pretty sure they exist (in part) for other phenomena such as the MJO and ENSO.)
In regard to dynamical models, inter-model and intra-model comparison is very important for forecasting success. No single model run (individual member) can be relied upon for an accurate forecast, and neither should a single model; comparing models is of far greater use, and applying a system of weighting based on a given model’s previous performance is also pertinent. That doesn’t mean, though, that a single model’s ensemble won’t be correct in forecasting something, and even an individual member run may correctly forecast a situation. Statistical models (historical models, or climatic norms) can also be compared against dynamical model output to better gauge its accuracy, and this is often done in hindsight to determine a model’s "forecast skill" (its ability to get things right).
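To make the weighting idea concrete, here is a minimal sketch of it in Python. The model names, forecast values, and past-error numbers are all invented for illustration; real skill-based weighting schemes are far more involved.

```python
# Hypothetical sketch: combine model forecasts, weighting each model by
# its past performance. All names and numbers here are made up.

# Forecast rainfall (mm) from three models for the same place and time:
forecasts = {"ModelA": 40.0, "ModelB": 55.0, "ModelC": 48.0}

# Historical mean absolute error for each model (lower = better record):
past_error = {"ModelA": 5.0, "ModelB": 15.0, "ModelC": 8.0}

# Weight each model by the inverse of its past error, then normalise
# so the weights sum to one:
weights = {m: 1.0 / past_error[m] for m in forecasts}
total = sum(weights.values())
weighted_forecast = sum(forecasts[m] * weights[m] / total for m in forecasts)

# The blend leans towards ModelA, the historically best performer:
print(round(weighted_forecast, 1))  # → 45.1
```

Notice the weighted answer sits closer to ModelA’s 40 mm than a plain average would, simply because ModelA has the best track record.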
Statistical models "retrograde", meaning they take initial/recent conditions, compare them to historical data to find a likeness, and then project forward from the current conditions based on that previous observational data. (Or at least that’s the best way that I can explain it!)
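That look-backwards-then-project idea is sometimes called the analog method, and a toy version of it can be sketched in a few lines. All the pressure, temperature, and rainfall numbers below are invented purely for illustration.

```python
# Hypothetical sketch of the "analog" idea behind statistical models:
# find the past situation most like today's, and use what happened next.

# (pressure hPa, sea-surface temp C) observed on past days, paired with
# the rainfall (mm) that fell the following day:
history = [
    ((1008.0, 27.5), 12.0),
    ((1015.0, 25.0), 1.0),
    ((1002.0, 29.0), 45.0),
    ((1010.0, 26.0), 5.0),
]

current = (1003.0, 28.5)  # today's conditions

def distance(a, b):
    # Simple squared distance between two condition vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Pick the closest historical analog and project its outcome forward:
best_conditions, projected_rain = min(history, key=lambda h: distance(h[0], current))
print(projected_rain)  # → 45.0 (today most resembles the 1002 hPa / 29 C day)
```

A real statistical model like CLIPER uses regression over decades of data rather than a single nearest match, but the principle is the same: let past observations, not physics equations, do the forecasting.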
To compare models or single model runs we can use different methods; I will talk about two: consensus and ensemble. An ensemble is where the output from all individual members is recalculated alongside one another, creating a combined output. Ensembles are more accurate because they compile a series of computations on top of one another, leading to a more detailed output (when I say "detailed" I don’t mean higher resolution!). That being said, erroneous members can drag the forecast askew and degrade the output of the ensemble run. Multi-model ensembles build yet another layer on top, creating an even more detailed run comprising a far greater number of equations and thus an even more reliable forecast run. Of course, again, an erroneous member can drag the multi-model ensemble away from a correct forecast outcome. I have heard the term “superensemble” around before too, used again for TCs.
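Here is a small sketch of how one erroneous member drags an ensemble mean, and what happens when it is discarded. The member temperatures are invented for illustration.

```python
# Hypothetical sketch: an ensemble mean, and how one bad member drags it.
from statistics import mean

# Forecast max temperature (C) from ten ensemble members:
members = [31.0, 31.5, 30.8, 31.2, 31.1, 30.9, 31.3, 31.0, 31.4, 38.0]
#                                                erroneous member ^^^^

full_mean = mean(members)                 # dragged upward by the outlier
trimmed = [m for m in members if m < 35]  # forecaster discards the outlier
trimmed_mean = mean(trimmed)

print(round(full_mean, 2))     # → 31.82
print(round(trimmed_mean, 2))  # → 31.13
```

Nine members cluster near 31 C, yet a single 38 C member pulls the mean up by nearly 0.7 C, which is exactly why spotting and removing bad members matters.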
Consensus is more or less just stacking the output of several models on top of one another and using a median result as the forecast. An example would be a map comprising all the tracks forecast by models for a TC, with a line drawn down the centre. Again, certain models’ output could be disregarded, or given extra weighting, based on performance or the likelihood of the outcome portrayed.
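The "line down the centre" idea can be sketched as taking the median of each model’s forecast position at a given hour. The lat/lon values below are invented.

```python
# Hypothetical sketch of a simple consensus: the median of several models'
# forecast TC positions at one forecast hour. Positions are made up.
from statistics import median

# Forecast position (lat, lon) at +48h from five models:
tracks = [
    (-18.2, 150.1),
    (-18.5, 149.8),
    (-17.9, 150.4),
    (-18.3, 150.0),
    (-18.1, 150.2),
]

# Median latitude and median longitude give the consensus position:
consensus = (median(lat for lat, lon in tracks),
             median(lon for lat, lon in tracks))
print(consensus)  # → (-18.2, 150.1)
```

Using the median rather than the mean is one simple way to stop a single wild track from dragging the consensus position, much like discarding an erroneous ensemble member.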
All that being said, the ability to remove a single member from a model’s ensemble, or from a multi-model ensemble, based on good met knowledge is the best way to use models. But that is pretty much a privilege held only by top mets, and with good reason, because erroneous model output can be dangerous if portrayed in the wrong light (*refrains from pointing the finger but looks sideways at the media*)!!! Multi-model ensemble output is also usually limited to professional mets, though a fair amount of output is available on the net, and some would argue far too much model data is now available to the general public… but not me!
Crikey (if you’re still reading? And I don’t think we are quite in the right place for this convo), in relation to your post: the SOI is just one of many factors that [CAN] indicate above-average rainfall across Australia. Using just one indicator, though, is not good; just because an indicator is there doesn’t mean conditions will happen, i.e. high SOI = widespread flooding rain won’t always work! Also, it’s a broad indication: where the rain would actually fall depends on prevailing atmospheric conditions, and that would affect its impacts, so saying whether one year is worse or better than another is very complex IMO.
Beyond that I can’t really say much more, other than that every year is different and it all depends on where and how the rain falls. It wouldn’t be unreasonable to say, from the current indicators, that it will flood somewhere this wet season and that somewhere will have above-average rainfall; this could be widespread or relatively limited. Exactly where, though, and how widespread??? All depends on things beyond my current learning!