Over the past decade, artificial intelligence and machine learning have emerged as major hotbeds of research, driven by advances in GPU computing, software algorithms, and specialized hardware design. New data suggests that at least some of the algorithmic improvements of the past decade may have been smaller than previously thought.
Researchers working to validate long-term improvements in various AI algorithms have found multiple cases in which modest updates to old solutions allowed them to match the newer approaches that had supposedly superseded them. The team compared 81 different pruning algorithms released over a ten-year period and found no clear, unambiguous evidence of improvement across that span.
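For context, "pruning" refers to removing weights from a trained network to shrink it without, ideally, hurting accuracy. Here is a minimal sketch of the classic magnitude-pruning idea in Python, included purely as an illustration of the concept; it is not any specific method from the 81 the team evaluated:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights in a layer.

    Illustrative only: generic magnitude pruning, not one of the
    specific methods from the survey discussed above.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold     # keep only the largest weights
    return weights * mask

# Prune 90 percent of a random weight matrix
w = np.random.randn(256, 256)
w_pruned = magnitude_prune(w, sparsity=0.9)
print(f"Nonzero fraction: {np.count_nonzero(w_pruned) / w_pruned.size:.3f}")
```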
According to David Blalock, a computer science graduate student at MIT who worked on the project, after fifty papers “it became clear it wasn’t obvious what state of the art even was.” Blalock’s advisor, Dr. John Guttag, expressed surprise at the news and told Science, “It’s the old saw, right? If you can’t measure something, it’s hard to make it better.”
Problems like this, incidentally, are exactly why the MLPerf initiative is so important. We need objective tests scientists can use for valid cross-comparison of models and hardware performance.
What the researchers found, specifically, is that in certain cases, older and simpler algorithms could keep up with newer approaches once the old methods were tweaked to improve their performance. In one case, a comparison of seven neural net-based media recommendation algorithms demonstrated that six of them were worse than older, simpler, non-neural algorithms. A Cornell comparison of image retrieval algorithms found that performance hasn't budged since 2006 once the old methods were updated.
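To give a sense of what "older, simpler, non-neural" can mean in the recommendation context, here's a sketch of a popularity baseline, the kind of deliberately simple method those comparisons found surprisingly hard to beat. The user-by-item interaction matrix is made up for illustration:

```python
import numpy as np

def popularity_recommender(interactions, top_n=10):
    """Recommend the most-interacted-with items to every user.

    A deliberately simple, non-neural baseline; the data layout
    (user x item binary interaction matrix) is assumed for illustration.
    """
    item_counts = interactions.sum(axis=0)        # total interactions per item
    return np.argsort(item_counts)[::-1][:top_n]  # most popular items first

# Toy data: 5 users x 6 items, 1 = user interacted with item
interactions = np.array([
    [1, 0, 1, 0, 0, 1],
    [1, 1, 0, 0, 0, 1],
    [0, 1, 1, 0, 0, 1],
    [1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
])
print(popularity_recommender(interactions, top_n=3))  # item indices, most popular first
```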
There are a few things I want to stress here. First, many AI gains haven't been illusory, like the improvements to AI video upscalers or the noted advances in cameras and computer vision. GPUs are far better at AI calculations than they were in 2009, and the specialized accelerators and AI-specific AVX-512 instructions of 2020 didn't exist back then, either.
But we aren’t talking about whether hardware has gotten bigger or better at executing AI algorithms. We’re talking about the underlying algorithms themselves and how much complexity is useful in an AI model. I’ve actually been learning something about this topic directly; my colleague David Cardinal and I have been working on some AI-related projects in connection with the work I’ve done on the DS9 Upscale Project. Fundamental improvements to algorithms are difficult, and many researchers aren’t incentivized to fully test whether a new method is actually better than an old one; after all, it looks better to invent an all-new way of doing something than to tune something someone else created.
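To make that incentive point concrete: a fair comparison means giving the old baseline the same tuning budget as the new method before declaring a winner. Here's a minimal sketch of that discipline using scikit-learn, with made-up models and parameter grids standing in for "old" and "new":

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Hypothetical example: tune BOTH the old baseline and the new model
# with the same search budget, rather than tuning only the new one.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "old_simple_baseline": GridSearchCV(
        LogisticRegression(max_iter=1000),
        {"C": [0.01, 0.1, 1.0, 10.0]},
        cv=3,
    ),
    "new_fancier_model": GridSearchCV(
        MLPClassifier(max_iter=500, random_state=0),
        {"hidden_layer_sizes": [(32,), (64, 64)], "alpha": [1e-4, 1e-2]},
        cv=3,
    ),
}

for name, search in candidates.items():
    search.fit(X_train, y_train)
    print(f"{name}: test accuracy = {search.score(X_test, y_test):.3f}")
```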
Of course, it’s not as simple as saying that newer models haven’t contributed anything useful to the field, either. If a researcher discovers optimizations that improve performance on a new model and those optimizations are also found to work for an old model, that doesn’t mean the new model was irrelevant. Building the new model is how those optimizations were discovered in the first place.
This pattern is what Gartner refers to as a hype cycle. AI has definitely been subject to one, and given how central the technology is to what we’re seeing from companies like Nvidia, Google, Facebook, Microsoft, and Intel these days, it’s going to be a topic of discussion well into the future. In AI’s case, we’ve seen real breakthroughs on various topics, like teaching computers how to play games effectively, and a whole lot of self-driving vehicle research. Mainstream consumer applications, for now, remain fairly niche.
I wouldn’t read this paper as evidence that AI is nothing but hot air, but I’d definitely take claims about it conquering the universe and replacing us at the top of the food chain with a grain of salt. True advances in the field — at least in terms of the fundamental underlying principles — may be harder to come by than some have hoped.
Now Read:
- Level Up: Nvidia’s GameGAN AI Creates Pac-Man Without an Underlying Game Engine
- Microsoft Built One of the Most Powerful Supercomputers in the World to Develop Human-Like AI
- Nvidia Unveils Its First Ampere-Based GPU, Raises Bar for Data Center AI