A recent report by Wired reveals that while many weather prediction efforts are supported by intricate computer algorithms, humans are still doing a good amount of the legwork. We currently rely on the GOES-16 and -17 satellites (the latest geostationary weather satellites) as well as the Global Forecast System (GFS) and European Centre for Medium-Range Weather Forecasts (ECMWF) models to forecast the weather with more precision than ever before. But despite several decades of computerized forecast development, human-powered (or at least human-enhanced) predictions assisted by these technologies remain more accessible and more accurate than their fully automated AI counterparts.
As in plenty of other verticals, a fully automated meteorological future faces many obstacles. Weather prediction produced solely by AI would require massive amounts of computing power, such as that supplied by exascale computers, a category of supercomputer capable of processing 10^18 calculations per second. Three exascale computers are currently under development in the US, with the first (the Aurora supercomputer at Argonne National Laboratory) slated to go live this year, but meteorology isn’t the only research area in line to experience Aurora’s power. Accurate weather forecasting is also threatened by the inevitable full deployment of 5G. Forecasting relies partly on monitoring the 23.8GHz signals emitted by water vapor, and radio interference from nearby 5G transmissions could impair vital satellites’ ability to observe those signals.
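To give a sense of scale, here is a quick back-of-the-envelope sketch in Python; the grid size, time-step count, and per-cell operation count are purely illustrative assumptions, not figures from the article or any real forecast model:

```python
# Back-of-the-envelope estimate: how long might one global model run take
# on an exascale machine? All workload numbers below are illustrative
# assumptions, not figures from the article or any real model.

EXASCALE_OPS_PER_SEC = 1e18      # definition of exascale: 10^18 calculations per second

grid_points = 1e9                # assumed number of grid cells in a global model
time_steps = 10_000              # assumed number of steps to cover a multi-day forecast
ops_per_point_per_step = 1e4     # assumed floating-point operations per cell per step

total_ops = grid_points * time_steps * ops_per_point_per_step
seconds = total_ops / EXASCALE_OPS_PER_SEC

print(f"Total operations: {total_ops:.1e}")
# Ideal compute time only; ignores memory bandwidth, I/O, and communication limits.
print(f"Ideal runtime at exascale: {seconds:.2f} seconds")
```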
One proposed solution is to deploy more 5G equipment in the lower-performance but longer-range C-band. The 24GHz signals used for mmWave 5G sit directly adjacent to the 23.8GHz water-vapor band and offer higher performance but weaker range, as discussed in this PCMag story from 2019; 5G signals in the 3GHz–7GHz range, by contrast, sit far from that band and will not interfere with future weather forecasting.
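A rough sketch of the spectral-proximity issue helps show why the 24GHz mmWave band is the concern and the 3GHz–7GHz range is not; the one-gigahertz "too close" threshold below is an illustrative assumption, not a regulatory or engineering figure:

```python
# Rough sketch of the spectral-proximity concern: compare candidate 5G bands
# to the 23.8GHz water-vapor emission line that weather satellites observe.
# The guard-band threshold is an illustrative assumption only.

WATER_VAPOR_GHZ = 23.8          # passive sensing frequency mentioned in the article
GUARD_BAND_GHZ = 1.0            # assumed minimum separation for this illustration

candidate_5g_bands = {
    "mmWave (24GHz)": (24.25, 27.5),   # approximate band edges
    "C-band (3-7GHz)": (3.0, 7.0),
}

for name, (low, high) in candidate_5g_bands.items():
    # Distance from the band edge nearest to the water-vapor line
    separation = min(abs(low - WATER_VAPOR_GHZ), abs(high - WATER_VAPOR_GHZ))
    risky = separation < GUARD_BAND_GHZ
    verdict = "interference risk" if risky else "well clear"
    print(f"{name}: {separation:.2f} GHz from 23.8 GHz -> {verdict}")
```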
In the meantime, computer-generated forecasts lack the nuance necessary to effectively prepare for disaster. While algorithmic models are generally more accurate and efficient than humans at predicting mild weather, humans more consistently produce accurate predictions of severe weather (which is arguably the more important kind to get right). An analysis of two decades of human, GFS, and North American Mesoscale Forecast System (NAM) predictions showed that humans beat those two widely used models in the “bad weather” category anywhere from 20 to 40 percent of the time. In other cases, humans were able to add value to automated guidance, using the algorithm’s predictions as a foundation for more detailed forecasting.
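For a sense of how such a comparison is scored, here is a minimal verification sketch in Python; the forecast records are made-up examples, not real human, GFS, or NAM data:

```python
# Minimal forecast-verification sketch: for each event, compare the human
# forecaster's error against the model's error and count how often the human
# "wins." The records below are made-up examples, not real verification data.

from dataclasses import dataclass

@dataclass
class ForecastRecord:
    observed_precip_in: float   # what actually fell
    human_precip_in: float      # human forecaster's prediction
    model_precip_in: float      # model guidance (e.g., GFS- or NAM-style output)
    severe: bool                # was this classified as a "bad weather" event?

records = [
    ForecastRecord(2.1, 1.8, 0.9, severe=True),
    ForecastRecord(0.1, 0.2, 0.1, severe=False),
    ForecastRecord(3.4, 3.0, 3.6, severe=True),
    ForecastRecord(1.6, 1.5, 0.7, severe=True),
]

severe_cases = [r for r in records if r.severe]
human_wins = sum(
    abs(r.human_precip_in - r.observed_precip_in)
    < abs(r.model_precip_in - r.observed_precip_in)
    for r in severe_cases
)
share = 100 * human_wins / len(severe_cases)
print(f"Human beat the model on {human_wins}/{len(severe_cases)} severe-weather cases ({share:.0f}%)")
```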
None of this is to say that automated forecasts aren’t valuable. Instead, today’s meteorology students are taught to fight complacency by learning to defend their predictions using real-time and historical data. “There’s an old adage that ‘all models are wrong, some are useful,’” meteorologist Shawn Milrad, an instructor at Embry-Riddle Aeronautical University, told Wired. “Even if it’s a great forecast it’s going to be slightly wrong. It’s how you can add value to that model.”
Now Read:
- Multimodal AI Modeling is the Future, But It’s Also Pandora’s Box
- NASA Explains Why Webb Doesn’t Have Any External Cameras
- Verizon and AT&T Cave to FAA, Will Delay 5G Rollout Again