Artificial Intelligence and Deep Learning Models for Actuarial Applications

Lecture slides from UNSW’s ACTL3143 & ACTL5111 courses

Author

Dr Patrick Laub

Overview

These are the lecture slides from my recent “Artificial Intelligence and Deep Learning Models for Actuarial Applications” courses (coded ACTL3143 & ACTL5111) at UNSW. They show which topics I covered in these courses. The slides are not intended for learning deep learning from scratch; for that, you need to attend the lectures & complete the assessment.

List of Topics Covered

Lecture 1: Python

  • default arguments
  • dictionaries
  • f-strings
  • function definitions
  • Google Colaboratory
  • help
  • list
  • pip install ...
  • range
  • slicing
  • tuple
  • type
  • whitespace indentation
  • zero-indexing
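
As a quick illustration (not course material), here is a minimal Python sketch touching several of these items: zero-indexing, slicing, f-strings, dictionaries, tuples and default arguments.

```python
def describe(claims, label="claims"):      # default argument
    first, *rest = claims                  # unpacking a list
    return f"{label}: first={first}, rest={rest}"  # f-string

claims = [1200.0, 850.5, 430.0]            # a list
print(claims[0])                           # zero-indexing -> 1200.0
print(claims[1:3])                         # slicing -> [850.5, 430.0]
print(describe(claims))

counts = {"motor": 3, "home": 1}           # a dictionary
policy = ("P123", 2024)                    # a tuple
print(type(counts), type(policy))          # <class 'dict'> <class 'tuple'>
```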

Lecture 2: Deep Learning

  • activations, activation function
  • artificial neural network
  • biases (in neurons)
  • callbacks
  • cost/loss function
  • deep/shallow network, network depth
  • dense or fully-connected layer
  • early stopping
  • epoch
  • feed-forward neural network
  • Keras, TensorFlow, PyTorch
  • matplotlib, seaborn
  • neural network architecture
  • overfitting
  • perceptron
  • ReLU activation
  • representation learning
  • training/validation/test split
  • universal approximation theorem
  • weights (in a neuron)
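
To make a few of these terms concrete, here is a minimal Keras sketch (with made-up stand-in data, not the course's example): a shallow feed-forward network of dense layers with ReLU activations, trained with an early-stopping callback.

```python
import numpy as np
from tensorflow import keras

# Stand-in data (in practice, a training/validation split of real data).
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(800, 5)), rng.normal(size=(800, 1))
X_val, y_val = rng.normal(size=(200, 5)), rng.normal(size=(200, 1))

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu"),  # dense (fully-connected) layer
    keras.layers.Dense(32, activation="relu"),  # a second layer adds depth
    keras.layers.Dense(1),                      # output neuron (weights + bias)
])
model.compile(optimizer="adam", loss="mse")     # mean squared error loss

es = keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)
model.fit(X_train, y_train, epochs=100,
          validation_data=(X_val, y_val), callbacks=[es])
```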

Tutorial 2: Forward Pass

  • batches, batch size
  • forward pass of network
  • gradient-based learning
  • learning rate
  • stochastic (mini-batch) gradient descent
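
A minimal NumPy sketch of these mechanics, with illustrative names and random data: one forward pass of a single linear layer over one mini-batch, then one gradient-based update of the weights.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))         # one mini-batch of 64 observations
y = rng.normal(size=(64, 1))

W = rng.normal(size=(3, 1))          # weights
b = np.zeros(1)                      # bias
lr = 0.01                            # learning rate

y_hat = X @ W + b                    # forward pass of the network
loss = np.mean((y_hat - y) ** 2)     # squared-error loss

grad = 2 * (y_hat - y) / len(X)      # gradient of the loss w.r.t. y_hat
W -= lr * (X.T @ grad)               # gradient-based learning step
b -= lr * grad.sum(axis=0)
```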

Lecture 3: Tabular Data

Categorical Variables

  • entity embeddings
  • Input layer
  • Keras functional API
  • nominal variables
  • ordinal variables
  • Reshape layer
  • skip connection
  • wide & deep network

Classification

  • accuracy
  • confusion matrix
  • cross-entropy loss
  • metrics
  • sigmoid activation
  • softmax activation
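
A minimal binary-classification sketch in Keras (a sketch only, not the lecture's model): a sigmoid output with cross-entropy loss and accuracy as the reported metric. With more than two classes, the output layer would use a softmax activation and categorical cross-entropy instead.

```python
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),   # probability of class 1
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",          # cross-entropy loss
              metrics=["accuracy"])
```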

Lecture 4: Computer Vision

  • AlexNet, GoogLeNet, Inception
  • channels
  • computer vision
  • convolutional layer & CNN
  • error analysis
  • fine-tuning
  • filter/kernel
  • flatten layer
  • ImageNet
  • max pooling
  • MNIST
  • stride
  • tensor (rank, dimension)
  • transfer learning
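
As an illustration of the basic building blocks (the layer sizes are arbitrary, not a lecture model), a minimal convolutional network for MNIST-sized greyscale images:

```python
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),               # 28x28 image, one channel
    keras.layers.Conv2D(16, kernel_size=3,        # convolutional layer:
                        strides=1,                # 16 filters/kernels, stride 1
                        activation="relu"),
    keras.layers.MaxPooling2D(pool_size=2),       # max pooling
    keras.layers.Flatten(),                       # flatten layer
    keras.layers.Dense(10, activation="softmax"), # one output per digit
])
```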

Tutorial 4: Backpropagation

  • backpropagation
  • partial derivatives
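
A minimal worked example of the idea, with arbitrary numbers: backpropagation is the chain rule applied to partial derivatives, working backwards through the computation.

```python
x, w = 3.0, 2.0
z = w * x              # forward pass: z = 6
y = z ** 2             # y = 36

dy_dz = 2 * z          # partial derivative of y w.r.t. z: 12
dz_dw = x              # partial derivative of z w.r.t. w: 3
dy_dw = dy_dz * dz_dw  # chain rule: dy/dw = 12 * 3 = 36
```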

Lecture 5: Natural Language Processing

  • bag of words
  • lemmatization
  • one-hot embedding
  • stop words
  • vocabulary
  • word embeddings/vectors
  • word2vec
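
A minimal bag-of-words sketch using scikit-learn's CountVectorizer (one common tool, not necessarily the one used in lectures): stop words are dropped and the remaining vocabulary indexes the count vectors.

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the claim was rejected", "the claim was approved quickly"]
vec = CountVectorizer(stop_words="english")   # drop common stop words
counts = vec.fit_transform(docs)

print(vec.get_feature_names_out())  # the learned vocabulary
print(counts.toarray())             # one count vector per document
```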

Lecture 6: Uncertainty Quantification

  • aleatoric and epistemic uncertainty
  • Bayesian neural network
  • deep ensembles
  • dropout
  • ensemble model
  • CANN
  • GLM
  • MDN
  • mixture distribution
  • Monte Carlo dropout
  • posterior sampling
  • proper scoring rule
  • uncertainty quantification
  • variational approximation
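
A minimal Monte Carlo dropout sketch (stand-in model and data, left untrained for brevity): keeping dropout active at prediction time and repeating the forward pass gives a spread of predictions whose variability estimates the epistemic uncertainty.

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dropout(0.5),                 # dropout stays on below
    keras.layers.Dense(1),
])
X_test = np.random.normal(size=(5, 3))         # stand-in data

# training=True keeps dropout active, so each call samples a different
# sub-network; the spread of the predictions quantifies uncertainty.
preds = np.stack([model(X_test, training=True).numpy() for _ in range(100)])
mean, std = preds.mean(axis=0), preds.std(axis=0)
```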

Lecture 7: Recurrent Networks and Time Series

  • GRU
  • LSTM
  • recurrent neural networks
  • SimpleRNN
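
A minimal recurrent-network sketch for sequences of 12 time steps with one feature each (illustrative sizes only); SimpleRNN can be swapped for LSTM or GRU.

```python
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(12, 1)),        # (time steps, features per step)
    keras.layers.SimpleRNN(16),        # or keras.layers.LSTM / keras.layers.GRU
    keras.layers.Dense(1),             # e.g. a one-step-ahead forecast
])
```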

Lecture 8: Generative Networks

  • autoencoder (variational)
  • beam search
  • bias
  • ChatGPT (& RLHF)
  • DeepDream
  • generative adversarial networks
  • greedy sampling
  • Hugging Face
  • language model
  • latent space
  • neural style transfer
  • softmax temperature
  • stochastic sampling
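
A minimal sketch of softmax temperature for stochastic sampling from a language model's next-token scores (the logits here are made up):

```python
import numpy as np

def sample(logits, temperature=1.0):
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(len(logits), p=probs)

logits = np.array([2.0, 1.0, 0.1])
# Low temperature approaches greedy sampling (always the top token);
# high temperature flattens the distribution towards uniform randomness.
print(sample(logits, temperature=0.1))
print(sample(logits, temperature=2.0))
```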

Lecture 9: Interpretability

  • global interpretability
  • Grad-CAM
  • inherent interpretability
  • LIME
  • local interpretability
  • permutation importance
  • post-hoc interpretability
  • SHAP values
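
A minimal permutation-importance sketch using scikit-learn (one common post-hoc, global interpretability method; the model and data are stand-ins): shuffling an important feature should degrade the model's score much more than shuffling an irrelevant one.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2 * X[:, 0] + rng.normal(scale=0.1, size=200)  # only feature 0 matters

model = RandomForestRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)   # feature 0 should dominate
```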

Contributors

  • Tian (Eric) Dong
  • Michael Jacinto
  • Hang Nguyen
  • Marcus Lautier
  • Gayani Thalagoda