Some years ago the International Association of Professional Meteorologists issued a memo to their membership with full-face and profile photographs of me together with the warning:
Avoid this man at all costs. He is a weather groupie and once he knows you are a meteorologist you will have no peace.
Poor Frank Singleton missed the memo and therefore we were able to sneak up on him. He did try to escape, but Phyllis, being younger than both of us, was able to run him down for the tackle, enabling me to slip the handcuffs on, and so Frank is now captive here at Attainable Adventure Cruising World Headquarters, where we feed him in exchange for the meteorology knowledge that I’m going to share with you in this article.
But seriously, when Frank wrote to me after the publication of my last weather article with a very kind email saying that I had got it right, I used that contact as a springboard to ask the poor man a huge number of questions.
You can learn more about Frank over at his excellent web site, but the short version is that he is a retired professional meteorologist with decades of experience at the British Met Office and an offshore sailor—a perfect combination for our purposes.
You should also know that Frank is totally his own man who calls them like he sees them without fear or favour. He is even willing to disagree with me…just imagine!
Here are four great tips, derived from Frank’s shared wisdom, that can help us make more comfortable and safer voyages:
Hi John,
You have done a nice service introducing Frank to a wider audience. I have been in the Med and northern Europe for a decade now and found Frank’s web site and various writings early on. They were (and remain) a big help to understanding the complex systems that wander these regions and in deciding on a package of data sources that covers the bases without becoming data overload.
My best, Dick Stevenson
GFS and GRIB stand for what, please? I recognize the latter as a meteorological term but would like to know what they abbreviate.
richard in tampa bay (soon bound for the antilles for a while)
Hi Richard,
Sorry, I should have put that in.
GFS stands for Global Forecast System and is the worldwide weather model run by the US National Weather Service every six hours.
GRIB stands for GRIdded Binary and is the data format in which the output from the GFS is distributed. You can download a GRIB file, load it into a viewer on your computer, and view the output of the model in graphic form.
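To make that concrete, here is a minimal sketch of reading a GRIB file programmatically using the open-source pygrib library (one of several readers; viewers like zyGrib do the same thing graphically). It assumes you have already downloaded a GFS GRIB file to disk; the filename is just illustrative:

```python
# Minimal GRIB-reading sketch using pygrib (illustrative, not the only way).
import pygrib

grbs = pygrib.open('gfs.grb')        # a previously downloaded GFS GRIB file
for grb in grbs:
    print(grb)                       # one line per field: parameter, level, valid time

# Pull one field, e.g. the 10 m wind u-component, as an array plus lat/lon grids
grbs.rewind()
u10 = grbs.select(name='10 metre U wind component')[0]
data = u10.values
lats, lons = u10.latlons()
print(data.shape, float(data.min()), float(data.max()))
```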
Shouldn’t you add that, besides computing power, forecast accuracy depends critically on the quality of data used to initialize the model? (“garbage in, garbage out”) Frank’s note of Feb 26, 2012, in which he points out that a 0.1% error in pressure would result in a significant difference in wind speed in the English Channel, seems to illustrate how important good initial conditions are. In some places, such as high latitudes, surface data may be scarce. Or is satellite data — which ought to cover the globe equally well everywhere — all the models need?
Hi Philip,
As I understand it, you are absolutely right. Having said that, I think I’m right in saying that pretty much all of the initialization data is in the public domain too, and therefore there is really no “special sauce” there either. So it still comes down to how much computer power you can throw at it.
For example, I think I’m right in saying that the advantage the European model had over the GFS for a while was because they were initializing with more points; it was not that they had special access to said data, just that they had more computer horsepower.
I gather from Frank that much, or maybe all, of said advantage went away with the January 2015 upgrade to the GFS.
Hi Again Philip,
On the high latitude issue, I believe that most initialization data now comes from satellites and is therefore worldwide.
Frank, any thoughts on that?
Hi John
Data analysis and model initialization are critical to weather prediction. As has been said, GIGO rules. Ideally we would have accurate, precise data, all at the same time, on a regular 3-D grid with short grid lengths horizontally and vertically.
In reality we have a mix of data with variable accuracies, resolutions and times.
Land stations, ships and tethered buoys measure point-specific values at fixed times; resolutions are nowhere near small enough. Radiosondes make temperature, humidity and wind ascents at fixed times over land, with large data gaps, especially over the oceans. Drifting buoys provide point-specific values but report on an as-and-when basis. Aircraft provide point-specific values at various times. Low-earth-orbiting satellite microwave instruments provide areal temperature and humidity data with horizontal resolutions of around 50 km and at various times; vertical resolutions are far coarser than radiosondes and refer to substantial depths of the atmosphere. Infrared sounders measure absorption by CO2 through substantial depths of the atmosphere; these measurements are related to temperature, and horizontal resolutions can be down to a few kilometres. Satellites also provide surface wind data using scattering from the sea surface, and sea surface temperatures are measured using IR radiances.
Geostationary satellites provide wind data by tracking cloud or water vapour movement, but not at precise heights.
Initialization of models involves a form of 4-D analysis using all these data coupled with output from the last run of the weather prediction model. One analysis method (4DVar) is a genuine 4-D scheme. An older scheme (3DVar) is what I call pseudo 4-D. It does use all the same data as 4DVar but is not a true 4-D scheme.
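For anyone who wants to see the variational idea in miniature, here is a toy sketch (my construction, not an operational code) of the 3DVar principle: find the state that best compromises between the model background and the observations, each weighted by its error covariance. All the numbers are made up for illustration:

```python
# Toy 3DVar: blend a model background with one observation, weighting each
# by its error covariance. Operational schemes do this for ~1e9 variables;
# every number here is invented for illustration.
import numpy as np
from scipy.optimize import minimize

x_b = np.array([285.0, 287.0, 289.0])   # background temperatures (K) at 3 grid points
B = np.diag([1.0, 1.0, 1.0])            # background error covariance
y = np.array([288.5])                   # one observation (K)
R = np.diag([0.25])                     # observation error covariance
H = np.array([[0.0, 0.5, 0.5]])         # obs operator: interpolates between points 2 and 3

B_inv, R_inv = np.linalg.inv(B), np.linalg.inv(R)

def cost(x):
    """Misfit to the background plus misfit to the observation."""
    d_b = x - x_b
    d_o = H @ x - y
    return d_b @ B_inv @ d_b + d_o @ R_inv @ d_o

x_a = minimize(cost, x_b).x             # the analysis that starts the forecast run
print(x_a)                              # ~[285.0, 287.33, 289.33]: nudged toward the obs
```

4DVar extends the same cost function over a time window, so observations are compared with a short model run rather than a single instant.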
All this is a longwinded way of saying that all available data are, in principle, used. There are many mathematical problems that I do not understand. One obvious problem is how to weight data from relatively few “accurate” in situ instruments against a mass of remotely sensed data that are of low resolution, quite accurate in their own way, but measuring the atmosphere in very different ways.
Within the constraints of the available computer power, models (or modellers) tend to be ahead of what is possible using the data available. The data analysis schemes lag behind the demands of models and the availability of data. Increasing global model resolution has improved prediction, but I have to wonder how much more can be achieved without an as-yet-unforeseen improvement in satellite observing system resolutions.
There is plenty of good science, good mathematics and good technology. However, results are, and always will be, limited by the reality of the complexity and noise in the atmospheric system.
Hi Frank,
Really interesting, thank you. I was pretty hazy on the whole issue of how the models are initialized. I’m a lot clearer now.
I guess, based on that, we will let you go now :-).
One thing that interests, and surprises, me is how good the GFS is in the high latitudes, and I would assume it has got even better since 2011, when I was last in Greenland and Baffin Island. Pretty amazing really, when you consider (I assume) that there must be fewer initialization data points available in out-of-the-way, low-population areas.
John, remember that these are global models. It is not so much a question of how much information there is in northern latitudes but how much there is globally. A truism is that to be able to predict the weather somewhere, you have to know about weather everywhere.
Hi Frank,
That, and your other excellent comments, have really improved my understanding of that, and so much else. Thank you!
John,
Nice job, as always. When one of these weather service websites begins to show its predictions alongside what actually occurred, in an easily seen format, instead of immediately removing predictions as each hour passes, it will likely hook me. Forecasting has come a long way from when a commercial fisherman taught me, in the early ’80s, that the best way to evaluate a NOAA forecast was to add it up: 10-15 kt & 3-5 ft would likely be 25 kts & 8 ft seas! Now they are much more likely to over-call, as people don’t get as mad when conditions are lighter than predicted, but sure do when they are stronger.
Bruce, an advantage of using GRIB files via Saildocs, zyGrib, UGrib and the various tablet apps is that you can always compare a forecast with actuality, because the data are saved to your computer. ECMWF does make available its forecasts over the past few days. Some years ago I put up a page at http://weather.mailasail.com/Franks-Weather/Grib-Forecast-Examples showing some examples. I should find the time to put up some recent ones. I show two in my book: one was a good forecast and one a poor one.
I have been off-air for some while, cruising France and struggling with Win 10, so may be a little out of date.
First, UGrib is no more; the service has been terminated. Of course, if you have the viewer it can still be used with .grb files. A plus for the zyGrib viewer – my preferred option – is that it can use both .grb and .grb.bz2 files.
If you have a tablet, then Weather4D is worth looking at. It can generate an email to Saildocs, and the reply can then be used with other apps, e.g. iNavX. Weather4D has a useful facility letting you superimpose a meteogram on the chart; by moving the point around you can see the meteogram change. See http://weather.mailasail.com/w/uploads/Franks-Weather/weather4dmeteogram.png
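For anyone who has not used Saildocs: the request is just a plain email to query@saildocs.com, and the GRIB file comes back as an attachment. The general shape of the request body is something like the line below (check saildocs.com for the current syntax; the area, grid spacing, forecast times and parameters here are only an example):

```
send GFS:45N,35N,75W,60W|1,1|0,24,48,72|PRMSL,WIND
```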
Hi Frank,
Welcome back! Any man who has survived a Windows upgrade deserves a warm welcome. And thanks for the updates on UGrib and Weather4D.
Reminds me that I need to check out zyGrib.
Good piece John, Frank Singleton has helped guide my thinking a few times and offered some very good suggestions. I agree with your comments and his about the GFS. Through many years of practical experience on board, I have tried several different models depending on where we are located but I can say that the GFS is the most accurate, most of the time! I like his suggestion to watch for divergence in the model over time to potentially identify model trouble. That makes good sense and I will start to watch for that. Frank Singleton was the one to recommend Xygrib to me which displays the data very well. Thanks for the tips.
I think you mean zyGrib (www.zygrib.org)
Robert
i still believe the best forecast-via-charts source for the w atlantic basin 70n to 10n is the bermuda weather svc (weather.bm)…this has always provided results superior to anything else i have ever seen…their charts go out about five days not counting today, and they do all the heavy lifting…all we need do is behold their handiwork…richard in tampa bay
The Global Ensemble Forecast System (GEFS) is an example of an approach similar to the tip in #4. It is based on 21 separate forecasts from models in the GFS family. The most critical part of modeling, as you said in the article, is getting the initial conditions correct. The models themselves are pretty darned good these days. However, there will always be a difference in what was measured for the initial conditions and what was reality. That difference may or may not be meaningful in the forecast and does not necessarily mean the model is having trouble. It may be that the particular set of conditions is ripe for showing just how chaotic the real system is (e.g. butterfly wings causing storms). The GEFS is an attempt to quantify that uncertainty. I find it especially useful to look at outliers as possible worst case scenarios.
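To illustrate the kind of check an ensemble enables, here is a small sketch of computing ensemble spread and flagging outliers, with 21 made-up member forecasts of wind speed at one point standing in for real GEFS output:

```python
# Ensemble spread and outlier check (member values invented for illustration).
import numpy as np

members = np.array([18, 20, 19, 22, 21, 20, 23, 19, 18, 21, 20,
                    22, 24, 19, 21, 20, 35, 22, 21, 20, 19], dtype=float)  # kts

mean, spread = members.mean(), members.std()
print(f"ensemble mean {mean:.1f} kts, spread {spread:.1f} kts")

# Members more than 2 standard deviations from the mean: possible worst cases
outliers = members[np.abs(members - mean) > 2 * spread]
print("outlier members (kts):", outliers)   # here, just the lone 35 kt run
```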
A relevant quote: “all models are wrong, but some are useful” (George E. P. Box, statistician).
Robert
Hi Robert,
That makes sense. Love the quote.
Hi, John and Robert.
Operational forecast runs start with the best analysis possible. Analysis schemes are optimised on the basis of experience; that is, when changes are made to analysis schemes, they are tested against outcomes and only implemented when the models’ performance improves.
Operationally, having run the forecast, a diagnostic programme can be run to identify those areas of the analysis where small differences might significantly affect the outcome. For example, it might be that small differences over Northern Canada would impact on predictions for the UK. Small changes are then introduced into the analysis that are compatible with the original observational data. The model is then run many (up to around 50) times, but in a degraded mode, i.e. with longer grid lengths and time steps.
The idea of looking at outliers is a good one – assuming you have the bandwidth to view the relevant output and time to assess the results. As a sailor who does not always have the bandwidth, I have found the “Singleton” technique of looking for consistency/inconsistency between successive runs to be a practical tool.
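In case it helps to see it written down, here is that run-to-run check reduced to a sketch: take the wind field two successive runs predict for the same valid time and look at how much it changed (the arrays and the 5 kt threshold are invented for illustration):

```python
# Run-to-run consistency check: same valid time, two successive model runs.
import numpy as np

run_00z = np.array([[12., 14., 15.], [13., 16., 18.], [15., 18., 21.]])  # kts
run_06z = np.array([[12., 15., 17.], [14., 18., 22.], [16., 21., 27.]])  # kts

diff = run_06z - run_00z
rms = np.sqrt((diff ** 2).mean())
print(f"RMS run-to-run change {rms:.1f} kts, max {np.abs(diff).max():.0f} kts")

if np.abs(diff).max() > 5:   # the threshold is a judgment call, not a standard
    print("successive runs diverging -- treat the forecast with extra caution")
```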
Hi Robert,
You allude to what I think could/would be a valuable addition to forecasts: a probability assessment. Like you, in planning passages, I attempt to construct a worst-possible-conditions forecast: a “what if that front comes through a day early and we are still out there” type of question. Or, what are the outlier possibilities? It does not seem too much to ask forecasts to have some sort of probability index (some forecasts/nations – Norway? – already do something similar). I am sure they already think this way, even if it is not shared. An alternative, especially for longer range forecasts, might be to suggest 2 or 3 possibilities: one being the most likely, but others in the ball park.
My best, Dick Stevenson, s/v Alchemy
Hi Dick,
I agree on the desirability of being given information on the forecaster’s confidence in the forecast, as well as the likelihood of other things happening.
As I am wont to say in our high latitude course:
I write about this issue in this chapter. I’m also a huge fan of the US National Weather Service forecast discussion, in which the forecaster writes about all of the models used and his or her level of confidence in the final forecast.
Hi, John and Dick
As ever, I like to be controversial if only to put an alternative view.
From the point of view of the poor guy who has to decide what warnings and forecasts he has to issue, I can well see the value of knowing or having some objective idea of the odds. As a user, I am never sure how I personally would use the information.
For example, intending to make a 4 or 5 day Biscay crossing, I would probably not go if there were gales mentioned however small the probability. I know that a forecast gale F 8 can easily become a severe gale F 9. On the other hand, faced with a high probability of a gale F 8 for a 12-24 hour English Channel or Golfe de Lion crossing, I might well go after studying the forecast carefully. I would pay little heed to the probability. I would assume the worst and decide whether or not it would put us into danger. I have done both when F8s were expected.
Hi Frank,
That makes sense. On the other hand, what I, and I think Dick, were thinking about is situations where there is no mention of violent weather in the forecast, even though there is a significant risk thereof.
For example, as a Canadian resident I’m ashamed to admit that Environment Canada is absolutely terrible in this regard. They often make no mention of the possibility of gales or even storms in the marine forecast, even though a rank amateur like me can see said possibility in the models. This is so bad that we on “Morgan’s Cloud” have, over the years, developed a name for this: “an Environment Canada Creeping Gale Warning”.
The US met office is much better in this regard, particularly if one, as I do, reads the forecast discussion.
John. Clearly there is a difference in approach by Met Services. For sea (offshore) areas, the UK and France (where we spend most of our sailing season) have fairly strict rules about gale and strong wind warnings. For the UK, a gale warning MUST be issued if a gale is possible within the next 12 hours. Obviously, they try to issue warnings further ahead. The forecast might say “perhaps gale 8 at times,” or “locally.” A gale warning is only cancelled (or allowed to lapse after 24 hours) if the forecaster is sure that there are no winds reaching F8 in the sea area.
The French will include “Menace de grand frais” or “Risque de coup de vent” or some such wording. For Inshore (coastal) areas, both services will make special mention of winds of F6.
The downside of all this is that some users complain of over-forecasting of strong winds. Because of the limited number of words, it will always be that strong winds, poor visibility or thundery conditions seem to be given undue prominence. Our Shipping Forecast comprises about 27 areas covering 1M sq miles. The total text – preamble, general situation, gale warning summary, area forecasts – must be no more than 330 words: three minutes at the BBC reading speed of 110 wpm. That is a good rule for the NAVTEX version also. It is a good discipline, but we hear numerous complaints that the forecast was wrong. Usually, you find that the complainant is sitting in a sheltered bay or sailing in the lee of the land or some such.
In terms of probability, I, at least, am quite happy with words such as “may,” “locally,” “perhaps,” and “at times,” rather than some spurious attempt at giving a numerical probability. But that is a bone of contention and always leads to vigorous discussion. Put simply, I do not think it is useful to give numbers to something that is so imprecise. The same applies to the use of Beaufort forces for wind speeds. A Swedish sailor once tried to tell me that giving winds in m/sec was more scientific. I explained that weather was not precise and that giving ranges such as 10 to 15 implied unachievable precision. In the Adriatic we often heard forecasts of 5 – 15 (kts) becoming 6 – 16! Really! F 3 or 4, or F 2 – 4, would have been as good, would have used fewer words, and would have been clearer when heard over R/T. I have heard such nonsenses in forecasts generated directly from a computer.
Sorry, another hobby horse of mine but one which might spark some discussion.
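For reference, the Beaufort forces in those examples map to knots as follows (standard WMO boundaries). A tiny sketch makes the point that “F 2 – 4” already covers both of those 5 – 15 and 6 – 16 kt forecasts:

```python
# Beaufort force to knots (standard WMO boundaries).
BEAUFORT_KTS = {
    2: (4, 6), 3: (7, 10), 4: (11, 16), 5: (17, 21),
    6: (22, 27), 7: (28, 33), 8: (34, 40), 9: (41, 47),
}

def span(lo_force, hi_force):
    """Knot range covered by a span of Beaufort forces."""
    return BEAUFORT_KTS[lo_force][0], BEAUFORT_KTS[hi_force][1]

print(span(2, 4))   # (4, 16): one phrase covers both "5-15" and "6-16" kts
print(span(3, 4))   # (7, 16)
```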
Hi Frank,
Yes, I think that a lot of this is in met office policy, and I definitely prefer the British policy on this one.
I’m also totally with you on the dangers of using language in weather forecasting that implies a level of consistency in time and space that the underlying forecast simply doesn’t have. Again, I think the British use of the Beaufort scale and words like “perhaps” is the best way to convey the reality of forecast accuracy.
Perhaps this is not the place to address a need for a paradigm shift in meteorology, in which case I apologize beforehand, but on “forecast modelling” and “garbage in, garbage out” I would like to add this:
if meteorologists are looking at weather with only one eye (“it’s all thermal & Coriolis”) and do not include the driving force behind weather (“it’s all electrical”), forecast models will never reach satisfactory prediction levels. There will continue to be large discrepancies between prediction and reality, no matter how many millions and no matter how much CPU power is thrown at it.
Contrary to popular belief, our atmosphere and its workings are NOT fully understood.
Just one example:
it is not mentioned in mainstream science and as far as I know the models make no use of it (please correct me if I’m wrong), but there is a correlation between “Coronal Mass Ejection & Coronal Hole Stream impacts” and “tropical storm formation” (as well as earthquake and volcanic activity). These are electrical events. Incorporating that aspect in one’s view of the weather and climate is very enriching and can potentially save lives through better understanding and forecasting.
In addition to checking Weather Forecast updates, I also check Space Weather forecasts.
Don’t get me wrong: in my view, our current weather forecasts ARE very good, unless the earth is in geomagnetic upheaval and then all bets/charts are off the table…
We can measure what is going on in space pretty well and there are lots of geomagnetic models that do a pretty good job. In fact, there are models for just about every phenomenon we have witnessed on earth. But everything can’t be thrown into the calculations or the weather models will be running in real time. Some assumptions have to be made, such as rejecting certain mechanisms or oversimplifying a parameter. If it isn’t mentioned in mainstream science or incorporated into models, then a plurality of scientists have thought it not significant enough to warrant the extra computations. That doesn’t mean the effect isn’t real, just that most feel there isn’t compelling enough evidence of its significance to add it in. The fact that our models do a pretty good job forecasting most of the time is evidence that the assumptions made are decent. There will always be outliers though, so it pays to understand the goes-ins and the goes-outs.
You bring up a good point about understanding the models you use. As John pointed out, interpolating between model output points is not the same as calculating at a higher resolution. And if you are uncomfortable with certain parameters (e.g. CMEs) being left out of calculations, then you know to check other sources (e.g. space weather forecast) to adjust your confidence level. There is nothing that irritates a modeler faster than having someone use their models outside their range of validity and then claiming the model somehow failed.
That said, I think there are plenty of reasons to keep tabs on Space Weather. The vulnerability of spacecraft to space events (e.g. a CME) does make me pause since they provide much of the data required to get a reasonable prediction. There are lots of ways for Space Weather to indirectly affect forecasts.
Robert
Hi Axel and Robert,
Interesting, but way beyond my meteorological pay grade, so I will keep quiet!
Axel, all I would say to that is that the physical processes that drive the atmosphere are well understood. At the most basic level, Force = Mass × Acceleration. The problems lie in computing the forces. Clearly, as you say, forecasting these days is pretty good. The limits are defined by predictability.
This may be throwing a cat among the pigeons but here goes:
One thing that I am very curious about is ..
Given that we are experiencing rapid changes in weather due to climate change, global warming, etc., what sort of impact are these changes having on the scientific knowledge around which the models are built? Is the accuracy of the models being impacted in any way?
Regards
Patrick
That’s a very good question. We have the low level physics and processes of what is going on pretty well covered. What is really difficult is how the myriad ocean and atmospheric processes on the planet interact with each other. The earth system is big and complicated. Rapid climate change may change the dynamics and time scales of the various processes making it more difficult for the current models to keep up. They may have to start computing at higher resolutions, which will take much more computer time.
The good news is that weather modeling is a very active area of research and computing capabilities are growing exponentially, as well. The next generation of modelers, in my opinion, are doing a great job. More data is being collected than ever before and that’s what will help models be more accurate. They do have their work cut out for them though.
Robert
Hi Robert,
Thanks for fielding that. You seem to know a lot about this. What’s your background? Always great to have another person commenting that has specialized knowledge.
Hi John,
I’m a computational physicist by formal training and much of my professional life. Most of my modeling experience was related to spacecraft survivability (space weather), but I spent several years running and modifying climate models and oceanographic models, including university research. Then I went into law enforcement (Natural Resources), which really threw folks for a loop. I ended up running an electronics design and manufacturing firm for the past decade; however, I still keep up with climate and weather modeling since that’s where I cut my engineering teeth…and it’s still of interest to me. Bizarre progression, I know.
Btw, I’m definitely not a meteorologist and can’t even play one on TV. We depended on the real weather experts and climatologists to validate models.
Robert
Hi Robert,
Thanks for the fill in and great to have your insights, thank you again.
As you say, interesting career path, almost as bizarre as mine: mainframe computer technician, sailmaker, founder of a printing company, founder of a computer systems integrator, voyager, internet writer/publisher…
Thank you for shedding light on that.
Regards
Patrick
Robert, I doubt that climate change will make short period (days) prediction more difficult. The laws of physics will not change. That is not to say that weather will not become more volatile. Data quality is, I believe, a critical limiting factor. The other, as I have said, is predictability. I do not believe that a butterfly can create a storm, although chaos is clearly important. One large thunderstorm will not lead to a hurricane, but an easterly wave with several large storms may well do so.
Satellites are the only source of global data, but observing resolution is far from what we would really like. In the few seconds that a polar orbiter takes to do a horizon-to-horizon scan, the sub-satellite point has moved by tens of kilometres. Geostationary satellites are too far out to provide numerical data at high resolution. In time these deficiencies will, no doubt, be resolved.
I was just reading an article about the latest upgrades to the European ECMWF global forecast model ( http://arstechnica.com/science/2016/03/the-european-forecast-model-already-kicking-americas-butt-just-improved/ ), which touched on a few points relevant here:
– The newest ECMWF model is initialized and run at much higher resolution than before. It’s not just a matter of brute computing power, but how you structure the grid; there are funny games you can play with octahedral meshes that yield a much more efficient calculation of the air properties.
– The GFS and ECMWF still use different ways of transforming raw measurements into model initialization data. GFS has a major upgrade coming this May which will probably include a more sophisticated initialization system as well as a better grid structure.
– There is an *extremely* strong correlation between the accuracy of the GFS model and the accuracy of the ECMWF model. If one’s right, the other’s also right; similarly, if one’s prediction ends up being wrong, the other’s prediction is also wrong and by about the same relative amount.
That last note in particular really reinforces John & Frank’s 4th point: You’re far better off comparing several model runs with very slightly different initial conditions, than several models with the same initial conditions. The first case is a useful sensitivity analysis that gives you some idea of the probable range of error; the second case is just duplicating what you’ve already done.
Hi Matt,
Interesting, as always. Good to hear that the GFS will undergo another major upgrade in May. I didn’t know that.
Matt. In my opinion, the kick-ass article is a little unfair. The NCEP, like the UK Met Office, has to run to tight deadlines in order to provide short period – 1 to 2 day – forecasts for aviation and other routine users. That means that data cut-off is not long after the nominal start time for the forecasts. The main computer runs use 00 and 12 UTC data; subsidiary runs are at 06 and 18 UTC. ECMWF is, as the name says, a medium range forecast service. They have a main run starting from 12 UTC and a subsidiary run starting from 00 UTC. A consequence is that they can use later data cut-off times and spend longer on the data analysis/initialization. Over the past few years, verification statistics have shown that ECMWF comes out on top, followed by the UK, then the US and Japan. Of course, on any particular occasion, one or another will out-perform the rest.
Currently, the GFS uses a grid length of about 12 km; ECMWF uses about 15 or 16 km. ECMWF will shortly (if not already) be implementing an 8 km grid. In principle that should give better forecasts. My gut feeling is that the problems of data analysis and assimilation will militate against much significant improvement as far as we are concerned.
Here is a URL from Nature about the quiet revolution of numerical weather prediction, following a study published in September 2015 about forecast skill (the correlation between the forecasts and the verifying analysis):
http://www.nature.com/nature/journal/v525/n7567/fig_tab/nature14956_F1.html
Pretty amazing what happened over the last 25 years of modelling.
Hi Frank,
I love the truism that “to be able to predict the weather somewhere, you have to know about weather everywhere”. It feels like an attitude that would be of benefit to keep in mind when making decisions in many aspects of life beyond weather prediction.
My best, Dick Stevenson, s/v Alchemy
+1 on the recommendation for Frank’s book.
Pretty much all of the Reed’s ‘Handbook’ series are worth having on board: useful, compact and inexpensive aide-memoires, full of good information and tips.
Can any of the well informed weather experts weigh in on a concept I read about long ago: that long range or even mid range weather forecasting will remain a non-starter because of the chaotic nature of the weather system.
That is to say, beyond a short period even the weather doesn’t know what the weather will do.
Is this still a prevailing belief?
Hi Jeff,
While there is no question that chaos plays a part in limiting weather forecast accuracy, modern forecasts are near-perfect out to 72 hours (better than 95%, as I understand it) and amazingly accurate for as much as six or even eight days. The point being that what you read was probably written before the availability of supercomputers and really good models, a time when forecast accuracy was limited by the human brain’s inability to grasp and process the huge number of data points required for useful modelling. Or, to put it another way, it’s only chaotic if you can’t measure and model it.
Having said that, the atmosphere is still way more complex than the models, so occasionally a forecast will be radically wrong, a possibility we should always be aware of.
You can read more about the real world limitations of accuracy and how to manage that in the rest of this online book: https://www.morganscloud.com/category/weather/book-weather-analysis/
Jeff, to some extent it depends on what you mean by “forecasting.” Deterministic prediction may well only be possible up to around 2 weeks ahead. The limit, essentially, is the lifetimes of major weather features – mainly lows and fronts. Beyond that, prediction can only be probabilistic.
Long range forecasting has always seemed to me like the end of the rainbow: seemingly nearly in our grasp but always tantalisingly just out of reach. In recent years El Niño, La Niña, and the Atlantic, Pacific and other oscillations have become increasingly studied, partly because of the interest in climate change and partly because of interest in monthly, seasonal and longer period prediction. Most of the heat entering the atmosphere comes from the tropical oceans, so understanding how these large scale features affect weather is vital. But study of these large scale effects will only lead to very generalised, large scale predictions.
Sometimes it is asked why we think that it may be possible to predict on the centennial scale when we cannot predict the next three months. However, that is similar to comparing forecasts for several days and several hours ahead. Several days ahead it may be obvious that there will be showers. Several hours ahead it will be impossible to predict whether there will be a shower over Cowes at midday; it might well be impossible to say with any certainty whether or not there will be a shower over the Solent or the Isle of Wight.
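That two-week deterministic limit can be demonstrated in miniature with the classic Lorenz (1963) system: two runs whose starting points differ by one part in a million track each other for a while and then diverge completely. A small sketch, using the standard textbook parameters (nothing here is an operational model):

```python
# Sensitive dependence on initial conditions in the Lorenz 1963 system.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0, 40, 4001)
a = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0], t_eval=t_eval)
b = solve_ivp(lorenz, (0, 40), [1.000001, 1.0, 1.0], t_eval=t_eval)  # tiny perturbation

sep = np.linalg.norm(a.y - b.y, axis=0)
for t in (5, 15, 30):
    print(f"t={t:2d}: separation {sep[t * 100]:.5f}")
# The separation stays tiny at first, then grows until the two "forecasts"
# are as different as two random states of the system.
```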
Thanks John,
What I’ve wondered, on a cloudy day that was forecast to be sunny, is not whether the science was wrong or the model was “bad”, but rather whether the sometimes chaotic nature of the weather (along with its infinite starting conditions) is indifferent to the structured thinking embedded in any model. So maybe today’s forecasting skills are just about as good as they will ever get. Just wondering how the weather gurus see this.
Hi Jeff,
A common reason that a forecast sunny day ends up cloudy is a local effect, rather than anything intrinsically wrong with the forecast process.
You might want to buy Frank’s little pocket book that I featured in the post above. He has a great explanation of forecast accuracy in it.
John, Frank and All, I would just like to thank you for a most interesting, informative and enjoyable thread. It has been a pleasure to read. Alan
Very interesting read!
I’ve been surfing for 50+ years and watching the models for most of that. The only information we had in the beginning was the Sunday edition of the Miami Herald newspaper! They would show a picture of the isobars covering the eastern US. Now we get more information than I could have ever imagined.
My observations of the long range models (5+ days out) have produced interesting results. Generally the models tend to exaggerate the further out you go, but if the model runs hold or strengthen the predicted storm, watch out!
Speaking of accuracy, the resulting wave heights produced by a storm are precisely measured by a satellite called Jason-2. This validates the models’ ability to predict winds in the open ocean, and I’m impressed!
Very informative postings. Thanks, Sam
Hi Surfwatch,
That’s my experience too. On many occasions I have seen the GFS predict a Nor’easter 10 days out and then have very little variance over subsequent model runs right down to the storm hitting close to the same force and location as predicted 10 days before—extraordinary.
Excellent and informative article, thank you.
BTW, an updated edition of Frank’s excellent ‘Reeds Weather Handbook’ will be published on 18APR19 (UK) and 18JUN19 (US) and is available for pre-order from Amazon now: http://www.amazon.com/Reeds-Weather-Handbook-Frank-Singleton/dp/147296506X/
Hi Karl,
That’s good news. Thanks for the heads up.