Course: Deep Learning
- Teacher(s): Eran Raviv
- Research field: Data Science
- Dates: Period 4 - Feb 28, 2022 to Apr 22, 2022
- Course type: Core
- Program year: First
- Credits: 4
Course description
The course covers theoretical and practical aspects of deep learning, state-of-the-art architectures, and application examples:
- Introduction to Deep Learning (theory and practice)
- Deep Learning components (gradient-descent optimization, loss functions, avoiding overfitting, introducing asymmetry); see the first sketch after this list
- Feed-forward neural networks
- Transfer learning (pre-trained image-classification models, pre-trained embeddings, examples of pre-trained models for images and text such as GloVe, Word2Vec and VGG16, bottleneck features and their use); see the second sketch after this list
- Convolutional neural networks
- Embeddings
- Recurrent neural networks
- Long short-term memory (LSTM) units
- Gated recurrent units
- Reinforcement learning
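A minimal sketch of the components item above, assuming TensorFlow/Keras (the course does not prescribe a framework, and the data, layer sizes and hyperparameters here are illustrative rather than course material). It wires a loss function, a gradient-descent optimizer and dropout into a small feed-forward network:

    # Minimal feed-forward network: loss function, gradient descent, dropout.
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    # Toy data: 1,000 samples with 20 features and a binary label.
    rng = np.random.default_rng(seed=0)
    X = rng.normal(size=(1000, 20)).astype("float32")
    y = (X[:, 0] + X[:, 1] > 0).astype("float32")

    model = keras.Sequential([
        keras.Input(shape=(20,)),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),  # regularization against overfitting (Srivastava et al., 2014)
        layers.Dense(1, activation="sigmoid"),
    ])

    # Binary cross-entropy loss, minimized by stochastic gradient descent.
    model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])

    # The held-out validation split makes overfitting visible during training.
    model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)

Swapping SGD for another optimizer, or varying the dropout rate, is a quick way to see how these components interact.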
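A second sketch, for the transfer-learning item, again assuming TensorFlow/Keras and its bundled ImageNet weights for VGG16 (the random images and the tiny classifier are placeholders for illustration only): images are pushed once through a frozen pre-trained convolutional base, and only a small classifier is trained on the resulting bottleneck features.

    # Transfer learning via bottleneck features from a pre-trained VGG16.
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras.applications import VGG16

    # Pre-trained ImageNet weights, without the fully connected "top".
    base = VGG16(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(224, 224, 3))
    base.trainable = False  # freeze the convolutional base

    # Placeholder data: 10 random RGB "images" with binary labels.
    images = np.random.rand(10, 224, 224, 3).astype("float32")
    labels = np.random.randint(0, 2, size=(10,)).astype("float32")

    # Bottleneck features: the frozen base's pooled activations, shape (10, 512).
    features = base.predict(images)

    # Only this small classifier is trained, which is cheap and data-efficient.
    clf = keras.Sequential([
        keras.Input(shape=(512,)),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    clf.compile(optimizer="adam", loss="binary_crossentropy")
    clf.fit(features, labels, epochs=3, batch_size=5)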
Course literature
The following list of recommended readings (presented in alphabetical order) is considered essential for your learning experience. These readings are also part of the exam material; changes in the reading list will be communicated on CANVAS.
Books:
- Goodfellow, I., Bengio, Y. and Courville, A., 2016. Deep learning. MIT press.
- Patterson, J. and Gibson, A., 2017. Deep learning: A practitioner's approach. O'Reilly Media.
Selected papers, including:
- Heaton, J.B., Polson, N.G. and Witte, J.H., 2017. Deep learning for finance: deep portfolios. Applied Stochastic Models in Business and Industry, 33(1), pp.3-12.
- Huang, G., Liu, Z., Van der Maaten, L. and Weinberger, K.Q., 2017. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.4700-4708.
- Lee, H., Pham, P., Largman, Y. and Ng, A.Y., 2009. Unsupervised feature learning for audio classification using convolutional deep belief networks. In Advances in Neural Information Processing Systems, pp.1096-1104.
- Levy, O. and Goldberg, Y., 2014. Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems, pp.2177-2185.
- Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. and Salakhutdinov, R., 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1), pp.1929-1958.
- Xing, F.Z., Cambria, E. and Welsch, R.E., 2018. Natural language based financial forecasting: a survey. Artificial Intelligence Review, 50(1), pp.49-73.
Lecture notes available on CANVAS.