List of posts
- Random thoughts on ChatGPT
- Custom training loops with PyTorch
- Applications of autoencoders
- The joy of not googling: Short-to-long stick ratio in broken rods
- The expectation-maximization algorithm - Part 1
- Acquisition functions in Bayesian optimization
- Bayesian optimization for hyperparameter tuning
- Longest substring with non-repeating characters
- Decision Trees: Gini index vs entropy
- An introduction to Gaussian Processes
- Alternating direction method of multipliers and Robust PCA
- Principal component analysis: limitations and how to overcome them
- Why is the normal distribution so ubiquitous?
- A list of machine-learning questions for interviews
- The encoder-decoder model as a dimensionality reduction technique
- Python decorators and tf.function
- Probabilistic regression with TensorFlow
- How to implement a Naive Bayes classifier with TensorFlow
- Trainable probability distributions with TensorFlow
- Custom training loops and subclassing with TensorFlow
- Computing improper integrals with Cauchy's residue theorem
- A gentle introduction to kernel density estimation
- An almost one-liner to construct the Mandelbrot set with Mathematica
- Have you ever heard of anyone who became ill with COVID-19?
- Principal component analysis with a Lagrange multiplier
- GitHub analytics with Mathematica
- Benford's law and COVID-19 conspiracy theories
- Bayesian connection to LASSO and ridge regression
- How to sum a divergent series with Padé approximation
- A simple example of perturbation theory
- How to convert DICOM transfer syntax
- Normality tests in statistics
- Common misconceptions about exponential growth
- Bayes' theorem and likelihood ratios for diagnostic tests
- Dual spaces, dual vectors and dual basis
- The birthday paradox, factorial approximation and Laplace's method
- How to derive the Riemann curvature tensor
- Norms and machine learning
- What does it really mean for a machine to learn mathematics?
- Arcsin transformation gone wrong
- The meaning of the curl operator
- Energy considerations for training deep neural networks
- Gradient descent
- Adversarial attacks on neural networks