This is a very short blog post.
Over the past few weeks, for various reasons, I have been carefully reading many articles and books on the history of artificial intelligence. Overall, I learned a great deal, including many interesting stories about the field's pioneers and their conflicts and disputes. But beyond the familiar observation that "wherever there are people, there are rivalries," I came away with another feeling.
I have studied machine learning and deep learning for many years and have trained plenty of models myself. When deep learning became a hot direction, I suddenly found everyone discussing models I had never seen before: convolutional neural networks, long short-term memory networks, and later astonishing ideas like generative adversarial networks. I used to think I was simply too dull to have come up with such clever ideas myself.
But after reading the history of artificial intelligence, I realized I was wrong. Convolutional neural networks, long short-term memory networks, reinforcement learning, even the minimax objective at the core of generative adversarial networks: all of these were proposed at some point during the field's seventy-year development. They simply attracted little attention at the time, for various reasons. Once computing power and data became sufficient, scientists rediscovered these methods and combined them with the latest techniques for building and training models, giving the old "ideas" new life. The idea of reinforcement learning even predates the term "artificial intelligence" itself.
For me personally, this is a small but real gain. For our country's research, though, it suggests that simply chasing hot topics is not the right path. We need to lay solid foundations and persist across many directions; only then do we have a chance of succeeding in the future. Merely chasing hotspots amounts to a superficial understanding, as if these breakthroughs were nothing more than a top scientist's momentary flash of inspiration.
It was partly because neural networks were so heavily criticized in the United States that research on them shifted to Canada, which is why the University of Toronto and the University of Montreal became the holy lands of deep learning. Hinton and his colleagues persisted through the years when neural network models were dismissed, and that persistence led to their brilliant achievements today. Seen that way, it is only natural that the breakthroughs came from them.