
Deep Learning

  • Teacher(s)
    Eran Raviv
  • Research field
    Data Science
  • Dates
    Period 4 - Feb 28, 2022 to Apr 22, 2022

Course description

This course covers theoretical and practical aspects of deep learning, state-of-the-art architectures, and application examples. Topics include:

  1. Introduction to Deep Learning (theory and practice)
  2. Deep Learning components (gradient descent methods, loss functions, avoiding overfitting, introducing asymmetry)
  3. Feed-forward neural networks
  4. Transfer learning (pre-trained image-classification models, pre-trained embeddings, examples of pre-trained models for images and text such as GloVe embeddings, Word2Vec, and VGG16, and bottleneck features and their use)
  5. Convolutional neural networks
  6. Embeddings
  7. Recurrent neural networks
  8. Long short-term memory (LSTM) units
  9. Gated recurrent units (GRUs)
  10. Reinforcement learning
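As a minimal illustration of the first few topics (gradient descent, a loss function, and a feed-forward network), the sketch below trains a tiny one-hidden-layer network on the XOR problem in NumPy. The architecture, hyperparameters, and dataset are illustrative choices, not taken from the course materials.

```python
import numpy as np

# XOR inputs and targets: a classic example a linear model cannot fit,
# but a small feed-forward network with one hidden layer can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 1.0, (2, 8))   # input -> 8 tanh hidden units
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))   # hidden -> sigmoid output
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Gradient of the binary cross-entropy loss w.r.t. the output logit
    # simplifies to (p - y) when the output activation is a sigmoid.
    d_out = (p - y) / len(X)

    # Backward pass (chain rule through the tanh hidden layer)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Plain (full-batch) gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

preds = (p > 0.5).astype(float)
print(preds.ravel())  # typically learns XOR
```

The same forward/backward structure underlies the deeper architectures in the course; frameworks such as those used in the practical sessions automate the backward pass.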

Course literature

The following recommended readings (listed in alphabetical order) are considered essential for your learning experience. These texts are also part of the exam material. Changes to the reading list will be communicated on CANVAS.


  • Goodfellow, I., Bengio, Y. and Courville, A., 2016. Deep learning. MIT press.
  • Patterson, J. and Gibson, A., 2017. Deep learning: A practitioner's approach.

Selected papers, including:

  • Heaton, J.B., Polson, N.G. and Witte, J.H., 2017. Deep learning for finance: deep portfolios. Applied Stochastic Models in Business and Industry, 33(1), pp.3-12.
  • Huang, G., Liu, Z., Van Der Maaten, L. and Weinberger, K.Q., 2017. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.4700-4708.
  • Lee, H., Pham, P., Largman, Y. and Ng, A.Y., 2009. Unsupervised feature learning for audio classification using convolutional deep belief networks. In Advances in Neural Information Processing Systems, pp.1096-1104.
  • Levy, O. and Goldberg, Y., 2014. Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems, pp.2177-2185.
  • Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. and Salakhutdinov, R., 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1), pp.1929-1958.
  • Xing, F.Z., Cambria, E. and Welsch, R.E., 2018. Natural language based financial forecasting: a survey. Artificial Intelligence Review, 50(1), pp.49-73.

Lecture notes available on CANVAS.