What are the prerequisites for learning Artificial Intelligence?
With all the hype around deep learning nowadays, people are missing many great ideas that were proposed, and even developed to a significant degree, in the past. The results that neural networks show on highly specialized tasks like playing Go or Atari games are incredibly misleading. They make people think that with some additional simple trick the same algorithms could do everything else as well.
They can’t.
There were many failed attempts to achieve machine reasoning in the past, made by very famous people like McCarthy, Winograd, and Minsky. Learning about their experiments and the conclusions drawn from them is the main prerequisite for understanding what really needs to be improved and where a possible solution might lie.
All that doesn’t mean skipping the math, but math isn’t really the main thing here. You will find out what you lack when you start reading von Neumann, Turing, Rosenblatt, and Shannon. Those guys weren’t blinded by $100 million acquisitions in the news, and their scientific work was far more related to the actual problem than anything being praised in the media today. It’s worth noting that their papers are supplemented with a decent amount of philosophical discussion, and they try to give clear and unambiguous definitions, which makes their work much easier to understand.
Many paths were tested in that era, and many of them are forgotten now. Being well informed about the foundations of the AI field makes one a much more desirable collaborator than, say, yet another DL enthusiast.
Also, avoid the software at first. When you make yourself dependent on a particular tool from the beginning, it shapes your worldview, and you tend to miss things you would otherwise notice. Conducting experiments is very good, but knowing what you are doing and why is more important.
Papers from the 50s are a good start, but I wouldn’t limit myself to them; I’d basically sift through all the highly cited papers by CS pioneers.
