Naked Science Forum
Non Life Sciences => Physics, Astronomy & Cosmology => Topic started by: evan_au on 06/12/2024 21:43:30
-
On today's podcast, researchers from Google DeepMind reported results from an AI doing medium-range weather forecasting (up to 2 weeks in advance).
- They reported huge reductions in energy consumption, number of CPUs and computation time when using a Google TPU (AI chip) compared to thousands of CPUs in a conventional supercomputer.
- Suggesting that weather forecasting would no longer need supercomputers
- The Google chip started with the current state of the world's weather as input.
Five years ago, I listened to this interview with the team at the European Centre for Medium-Range Weather Forecasts (ECMWF).
- My vague recollection was that something like 60% of their supercomputer usage was to produce a definition of the current state of the weather, and only 40% was predicting the future of the weather, out to 2 weeks
- There are many different sources of data on the current weather, and they don't always agree
- The data collection points are closer together over land than over sea or deserts
- And so it takes a huge amount of supercomputer analysis to integrate them all to define the current global weather
So while AI may well be able to reduce supercomputer usage by 40% for weather prediction, significant research is still needed if forecasters are to reduce the other 60% of supercomputer usage.
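As an aside on what that 60% is doing: the analysis step has to blend many disagreeing observations into one best estimate of the current state. Here is a toy Python sketch of the most basic version of that idea (inverse-variance weighting of two readings of the same quantity); the real system uses 4D-Var data assimilation over millions of grid points, so treat this purely as an illustration with invented numbers.

```python
import numpy as np

def combine_observations(values, variances):
    """Inverse-variance weighted average of observations of the same quantity.

    Each observation is weighted by 1/variance, so the more trusted
    instrument pulls the estimate towards itself. Returns the combined
    estimate and its (smaller) variance.
    """
    values = np.asarray(values, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    combined = np.sum(weights * values) / np.sum(weights)
    combined_variance = 1.0 / np.sum(weights)
    return combined, combined_variance

# Hypothetical example: a satellite retrieval and a weather balloon disagree
# about the temperature at one point (numbers invented for illustration).
satellite = (15.2, 2.0**2)   # value in deg C, error variance
balloon   = (13.8, 0.5**2)

estimate, var = combine_observations(
    [satellite[0], balloon[0]],
    [satellite[1], balloon[1]],
)
print(f"Combined estimate: {estimate:.2f} C, std dev {var**0.5:.2f} C")
```

The operational analysis does something conceptually similar, but simultaneously for every grid point and every variable, while also forcing the result to be consistent with the physics - which is where the supercomputer hours go.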
https://omegataupodcast.net/326-weather-forecasting-at-the-ecmwf/
-
I do not see how AI can do without a supercomputer. AI at the minute is a targeted learning programme (narrow AI) that would, I should think, only develop the same methods as existing programmes. We cannot think of any other way of forecasting the weather, so any narrow AI programme will, I should think, be along similar lines, requiring models of vast scale and intricacy. Maybe it could do it more efficiently, be more vigilant, have more imagination and detail, and become more accurate, but I think it would still need the supercomputer models. If general AI could come up with another method rather than the 'model' method, then we could get rid of the supercomputer. But we do not have general AI yet.
-
AI at the minute is a targeted learning programme (narrow AI) that would, I should think, only develop the same methods as existing programmes
Current supercomputer models are based on solving the Navier-Stokes equations, and they forecast the future weather from the current weather.
- There is a process of comparing the predictions with the actuals (when they happen, 2 weeks later), which feeds into manually optimizing future versions of the software
- For example, the effects of clouds are not obvious, and are not explicitly covered in the Navier-Stokes equations, so various cloud corrections are put into the model "by hand"
- Landform features are included as "boundary conditions" for the flow of air
- https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_equations
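To give a flavour of what "solving the equations numerically" means, here is a toy Python sketch of one-dimensional advection (just one term of the full equations) stepped forward on a grid by finite differences. Real models work in three dimensions with far more sophisticated schemes, plus the hand-tuned corrections mentioned above; the numbers here are invented.

```python
import numpy as np

# Toy 1-D advection: a warm "blob" of air carried along by a constant wind.
nx = 100                     # number of grid points
dx = 1.0                     # grid spacing (arbitrary units)
u = 1.0                      # constant wind speed
dt = 0.5                     # time step (chosen so u*dt/dx <= 1 for stability)
steps = 100

x = np.arange(nx) * dx
temperature = np.exp(-((x - 20.0) / 5.0) ** 2)   # initial warm blob near x = 20

for _ in range(steps):
    # First-order upwind finite difference for dT/dt = -u * dT/dx
    gradient = (temperature - np.roll(temperature, 1)) / dx
    temperature = temperature - u * dt * gradient

print("Blob maximum is now near x =", x[np.argmax(temperature)])
```

An operational model does this kind of grid arithmetic for wind, pressure, temperature and moisture over the whole globe, at every level of the atmosphere, every few simulated minutes - which is why it keeps thousands of CPUs busy.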
Typical AI models use comparison of predicted vs actual weather as the primary input, with minimal manual tweaking
- They do not explicitly insert the Navier-Stokes equations (which have no known general analytical solution, and are extremely expensive to solve numerically); instead the TPU calculates output signals from its artificial neurons, based on weighted inputs (a very different calculation).
- The designers may not even explicitly insert the landform
- They let the model work this out for itself, after observing its predictions vs the actual weather (from historical records)
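As a very rough picture of that "weighted inputs" calculation, here is a toy Python sketch of a single artificial neuron being nudged to shrink the gap between its prediction and the "actual" value. It is nothing like DeepMind's real architecture - the data and variable names are invented - but it shows learning from predicted-vs-actual comparisons rather than from explicit physics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented training data: each row is [today's temp, pressure anomaly, humidity],
# and the target is "tomorrow's temp". Purely synthetic, for illustration only.
inputs = rng.normal(size=(500, 3))
true_weights = np.array([0.8, -0.3, 0.1])
targets = inputs @ true_weights + 0.05 * rng.normal(size=500)

# One artificial "neuron": output = weighted sum of inputs, plus a bias.
weights = np.zeros(3)
bias = 0.0
learning_rate = 0.1

for epoch in range(200):
    predictions = inputs @ weights + bias          # the forecast
    errors = predictions - targets                 # predicted vs "actual"
    # Nudge the weights downhill on the mean squared error.
    weights -= learning_rate * (inputs.T @ errors) / len(errors)
    bias -= learning_rate * errors.mean()

print("Learned weights:", np.round(weights, 2))    # should approach [0.8, -0.3, 0.1]
```

The real systems do this with millions of weights, many layers, and decades of reanalysis data as the "actuals", which is why the training phase still needs a lot of parallel hardware even if the finished model runs cheaply.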
One thing is clear - they will definitely need parallel supercomputers for the training phase!
Another limitation of AI systems is that a lot of data is implicit, and it's not clear how to change the AI model for new situations
- For example, if a large volcanic eruption dumps a lot of sulphate particles into the atmosphere (as Mount Pinatubo did in 1991), it will change the atmospheric behavior. If sulphate reflectivity is not an explicit input variable to the AI model, with a range of examples in the training data, the AI model may not cope with a new eruption. This can be handled more directly in a numerical model.
- If there is a sudden change in ocean currents in the Atlantic, this will be outside the range of AI training data, and the results may be misleading (unless sea surface temperatures are part of the input data).
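One cheap safeguard (my own illustration, not something I know any forecasting team actually uses) is simply to flag when today's inputs fall outside the range the model was trained on, so a forecaster knows the output may be unreliable. A toy Python sketch with invented variable names and ranges:

```python
import numpy as np

def out_of_range_flags(current_inputs, training_min, training_max):
    """Return True for each input variable lying outside its training range."""
    current_inputs = np.asarray(current_inputs, dtype=float)
    return (current_inputs < training_min) | (current_inputs > training_max)

# Invented numbers: ranges seen in the training data for
# [stratospheric aerosol load, sea surface temperature].
training_min = np.array([0.0, -2.0])
training_max = np.array([0.1, 31.0])

# A Pinatubo-sized eruption pushes the aerosol input far above anything in training.
today = np.array([1.5, 28.0])
flags = out_of_range_flags(today, training_min, training_max)
print("Variables outside training range:", flags)   # [ True False]
```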
-
A colleague was involved in the design and construction of North Sea oil platforms. A crucial input to the building and location program was forecasting daily wave height. The Met Office forecasts were "polished" by local fishing skippers looking at satellite images, and their accuracy improved from "just useable" to "reliable".
When the ab initio analytic solution gets complicated, there's a lot to be said for neural processing and learning by experience.
-
From what I understood, weather forecasts go from 0.1 percent reliable a long time in the future to 99.9 percent just before the lightning bolt hits you, but never ever 100. It is done by running lots of different models and using past information about similar weather (a toy sketch of that ensemble idea is below). I am not sure how an AI is going to do this markedly differently; it would probably be a supercomputer itself. If the AI just takes a singular guess based on past learning, it would surely be no different from now and perhaps, if possible, less accurate.
https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/
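For what it's worth, the "lots of different models" part is exactly how those percentages are produced today: an ensemble of runs started from slightly perturbed initial conditions, with the forecast probability taken as the fraction of members showing the event. A toy Python sketch of that idea, with invented numbers rather than any met office's code:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy ensemble: 50 forecast runs, each started from slightly different
# initial conditions, each predicting rainfall (mm) at one place and time.
n_members = 50
rainfall_forecasts = np.clip(rng.normal(loc=3.0, scale=2.5, size=n_members), 0, None)

# Forecast "probability of rain" = fraction of ensemble members predicting
# more than a threshold amount.
threshold_mm = 1.0
probability_of_rain = np.mean(rainfall_forecasts > threshold_mm)

print(f"{int(probability_of_rain * 100)}% of members forecast more than {threshold_mm} mm of rain")
```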