Get back to the basics, you fool! Learn how to write Clean Code for your career. This is by far the best book I've read, even though this list is about Deep Learning.
Learn how to be professional as a coder and how to interact with your manager. This is important for any coding career.
Even halfway through the book, it contains satisfying math content on how to think about actual deep learning.
The audio version is nice to listen to while commuting. This book is motivating about reverse-engineering the mind and thinking about how to code AI.
This book covers many of the core concepts behind neural networks and deep learning.
Some books listed here are less directly related to deep learning but are still relevant to this list.
arXiv browser with TF/IDF features.
An awesome list for Neuraxle, an ML framework for coding clean, production-level ML pipelines.
This is a hub similar to Hacker News, but specific to data science.
This may be how I discovered ML - interesting trends appear on that site well before they become a big deal.
This is a Korean search engine - best used with Google Translate, ironically. Surprisingly, deep learning search results and comprehensible advanced math content sometimes show up more easily there than on Google search.
The most richly dense and accelerated course on the topic of Deep Learning & Recurrent Neural Networks (scroll to the end).
Good intermediate-to-advanced-level course covering high-level deep learning concepts; I found it helps with getting creative once the basics are acquired.
New series of 5 Deep Learning courses by Andrew Ng, now with Python rather than Matlab/Octave, leading to a specialization certificate.
I created this richly dense course on Deep Learning and Recurrent Neural Networks.
This is a class given by Philippe Giguère, Professor at Université Laval. I found its rare visualization of the multi-head attention mechanism especially awesome; it can be contemplated on slide 28 of week 13's class.
Renowned entry-level online class with certificate. Taught by: Andrew Ng, Associate Professor, Stanford University; Chief Scientist, Baidu; Chairman and Co-founder, Coursera.
Interesting class for acquiring basic knowledge of machine learning applied to trading, along with some AI and finance concepts. I especially liked the section on Q-Learning.
Interesting class about neural networks, available online for free by Hugo Larochelle, though so far I have only watched a few of the videos.
Nice animations for rotation and rotation interpolation with Quaternions, a mathematical object for handling 3D rotations.
Convergence methods in physics engines, applied to interaction design.
Picturing backprop, mathematically.
The unfolding of RNN graphs is explained properly, and potential problems with gradient descent algorithms are exposed.
Understanding bias and variance in the predictions of a neural net and how to address those problems.
Simple Python demo on signal processing.
Okay, I already listed Andrew Ng's Coursera class above, but this video in particular is quite pertinent as an introduction, and it defines the gradient descent algorithm.
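To keep that definition handy, here is a minimal sketch of plain gradient descent on a toy quadratic loss - the loss function, learning rate, and step count are made-up placeholders for illustration, not anything from the course itself.

```python
# Toy loss J(theta) = (theta - 3)^2, with gradient dJ/dtheta = 2 * (theta - 3).
def loss(theta):
    return (theta - 3.0) ** 2

def grad(theta):
    return 2.0 * (theta - 3.0)

theta = 0.0          # initial parameter value
learning_rate = 0.1  # step size (eta), chosen arbitrarily here
for _ in range(50):
    theta -= learning_rate * grad(theta)  # theta <- theta - eta * dJ/dtheta

print(round(theta, 4), round(loss(theta), 8))  # theta ends up very close to 3.0
```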
Visualize how different optimizers interact with a saddle point.
Visualize how different optimizers interact with an almost flat landscape.
How to adjust the learning rate of a neural network.
A follow-up to the previous video, now adding intuition.
Animations dealing with complex numbers and wave equations.
RNN as an optimizer: introducing the L2L optimizer, a meta-neural network.
A new look at Fourier analysis.
Overview of how the backpropagation algorithm works.
A visual proof that neural nets can compute any function.
Appearance of the incredible SELU activation function.
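For reference, the SELU from that paper is a scaled ELU with two fixed constants chosen so that activations self-normalize; below is a small NumPy sketch (the constants are the published ones, but treat this snippet as an illustration rather than the paper's reference implementation).

```python
import numpy as np

# SELU from "Self-Normalizing Neural Networks" (Klambauer et al., 2017):
#   selu(x) = lambda * x                    if x > 0
#   selu(x) = lambda * alpha * (exp(x) - 1) if x <= 0
ALPHA = 1.6732632423543772
LAMBDA = 1.0507009873554805

def selu(x):
    x = np.asarray(x, dtype=np.float64)
    return LAMBDA * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

print(selu([-2.0, 0.0, 2.0]))  # negative inputs saturate toward -LAMBDA * ALPHA
```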
A good explanation of overfitting and how to address that problem.
Wikipedia page that lists some of the known window functions - note that the Hann-Poisson window is especially interesting for greedy hill-climbing algorithms (like gradient descent, for example).
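As a small aside, here is a sketch of the Hann-Poisson window itself, following the usual textbook definition (the decay parameter alpha and the window length are arbitrary choices for illustration); its lack of side lobes is what reportedly makes it friendly to hill-climbing methods.

```python
import numpy as np

def hann_poisson(n_points, alpha=2.0):
    """Hann-Poisson window: a Hann window multiplied by a Poisson (exponential) decay.
    alpha controls how fast the exponential part decays; 2.0 is an arbitrary choice here."""
    n = np.arange(n_points)
    N = n_points - 1
    hann = 0.5 * (1.0 - np.cos(2.0 * np.pi * n / N))      # classic Hann shape
    poisson = np.exp(-alpha * np.abs(N - 2.0 * n) / N)    # exponential decay from the center
    return hann * poisson

window = hann_poisson(64)  # 64-point window, e.g. to taper a signal before an FFT
```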
Exposing backprop's caveats and the importance of being aware of them while training models.
Incredibly fast distributed training of a CNN.
Let RNNs decide how long they compute. I would love to see how well it combines with Neural Turing Machines. Interesting interactive visualizations on the subject can be found here.
(AIAYN) - Introducing multi-head self-attention neural networks with positional encoding to do sentence-level NLP without any RNN or CNN - this paper is a must-read (also see this explanation and this visualization of the paper).
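To make the core mechanism concrete, here is a hedged NumPy sketch of single-head scaled dot-product attention, the building block that the paper's multi-head attention runs several times in parallel; the shapes and toy data are assumptions for illustration only.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, the AIAYN building block.
    Q, K: (seq_len, d_k) arrays; V: (seq_len, d_v) array."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # how much each query attends to each key
    scores -= scores.max(axis=-1, keepdims=True)        # numerical stability for the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over the keys
    return weights @ V                                  # weighted sum of the values

# Toy self-attention: queries, keys and values all come from the same 4-token sequence.
x = np.random.randn(4, 8)
out = scaled_dot_product_attention(x, x, x)  # shape (4, 8)
```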
Batch normalization (BN): normalize a layer's outputs using statistics computed over the entire batch, then apply a trainable linear rescaling and shifting.
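As a hedged illustration of that description, here is a minimal NumPy sketch of the training-time BN forward pass; epsilon and the toy batch are assumptions, and gamma/beta are the trainable rescale and shift.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Training-time batch normalization over the batch axis (axis 0)."""
    mean = x.mean(axis=0)                     # per-feature mean over the whole batch
    var = x.var(axis=0)                       # per-feature variance over the whole batch
    x_hat = (x - mean) / np.sqrt(var + eps)   # normalize to zero mean, unit variance
    return gamma * x_hat + beta               # trainable linear rescaling and shifting

batch = np.random.randn(32, 10)               # 32 examples, 10 features (toy data)
out = batch_norm_forward(batch, gamma=np.ones(10), beta=np.zeros(10))
```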
Better classification with RNNs, using bidirectional scanning on the time axis.
You_Again's summary/overview of deep learning, mostly about RNNs.
Very deep residual layers with batch normalization layers - a.k.a. "how to overfit any vision dataset with too many layers and make any vision model work properly at recognition given enough data".
This new neural network architecture, named DenseNet, won the Best Paper Award at CVPR 2017 and improves on state-of-the-art performance on the CIFAR-10, CIFAR-100 and SVHN datasets.
Exploring different approaches to attention mechanisms.
Basically, residual connections can be better than stacked RNNs in the presented case of sentiment analysis.
Nice recursive models using word-level LSTMs on top of a character-level CNN using an overkill amount of GPU power.
ELU activation function for CIFAR vision tasks.
GoogLeNet: appearance of "Inception" layers/modules; the idea is to parallelize conv layers into many mini-convolutions of different sizes with "same" padding, concatenated on the depth axis.
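Below is a hedged Keras-style sketch of one such parallel module; the filter counts and input shape are arbitrary choices for illustration, not GoogLeNet's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def inception_like_module(x, filters=32):
    """Toy Inception-style block: parallel convolutions of different sizes, all with
    'same' padding, concatenated on the depth (channel) axis. Filter counts are illustrative."""
    branch1 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    branch3 = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    branch5 = layers.Conv2D(filters, 5, padding="same", activation="relu")(x)
    pooled = layers.MaxPooling2D(pool_size=3, strides=1, padding="same")(x)
    pooled = layers.Conv2D(filters, 1, padding="same", activation="relu")(pooled)
    return layers.Concatenate(axis=-1)([branch1, branch3, branch5, pooled])

inputs = tf.keras.Input(shape=(64, 64, 3))
model = tf.keras.Model(inputs, inception_like_module(inputs))
```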
In 2016: stacked residual LSTMs with attention mechanisms on encoder/decoder are the best for NMT (Neural Machine Translation).
Highway networks: gated skip connections, a precursor to residual connections.
Improvements on differentiable memory based on NTMs: now it is the Differentiable Neural Computer (DNC).
AlexNet, 2012 ILSVRC, breakthrough of the ReLU activation function.
For improving GoogLeNet with residual connections.
3D-GANs for 3D model generation and fun 3D furniture arithmetics from embeddings (think like word2vec word arithmetics with 3D furniture representations).
Two networks combined into one seq2seq (sequence-to-sequence) Encoder-Decoder architecture: an RNN Encoder-Decoder with 1000 hidden units, trained with the Adadelta optimizer.
This yields intuition about the boundaries of what works for NMT within a seq2seq problem formulation.
Interesting way of doing one-shot learning with low-data by using an attention mechanism and a query to compare an image to other images for classification.
Classify a new example from a list of other examples (without definitive categories), with little data per classification task but lots of data across many similar classification tasks - it seems better than siamese networks. To sum up: with Matching Networks, you can optimize directly for a cosine similarity between examples (like a self-attention product would match), which is passed to the softmax directly. I guess that Matching Networks could probably also be used with negative-sampling softmax training.
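Here is a hedged NumPy sketch of that cosine-similarity-into-softmax step; the embeddings are random placeholders standing in for whatever learned encoder the full model would use.

```python
import numpy as np

def matching_predict(query_emb, support_embs, support_labels, n_classes):
    """Matching-Networks-style prediction: cosine similarity between the query and each
    support example, softmaxed into attention weights, then summed per label."""
    def l2_normalize(v):
        return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-8)

    sims = l2_normalize(support_embs) @ l2_normalize(query_emb)  # cosine similarities
    attention = np.exp(sims - sims.max())
    attention /= attention.sum()                                 # softmax over the support set
    one_hot = np.eye(n_classes)[support_labels]                  # (n_support, n_classes)
    return attention @ one_hot                                   # class probabilities for the query

support = np.random.randn(5, 16)        # 5 support examples with 16-dim placeholder embeddings
labels = np.array([0, 1, 2, 1, 0])      # their episode-local labels
query = np.random.randn(16)
print(matching_predict(query, support, labels, n_classes=3))
```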
Interesting overview of the subject of NMT, I mostly read part 8 about RNNs with attention as a refresher.
Attention mechanism for LSTMs! Mostly, the figures and formulas and their explanations turned out to be useful to me. I gave a talk on that paper here.
Outstanding for letting a neural network learn an algorithm with seemingly good generalization over long time dependencies. Sequence recall problem.
Nice for photoshop-like "content aware fill" to fill missing patches in images.
Replace word embeddings with word projections in your deep neural networks, which doesn't require a pre-extracted dictionary nor storing embedding matrices (see the sketch after the SGNN entry below).
Use a distance metric in the loss to determine which class an object belongs to, given only a few examples.
This paper is the sequel to the ProjectionNet just above. The SGNN builds on the ProjectionNet, and the optimizations are detailed more in-depth (also see my attempt to reproduce the paper in code and watch the talks' recordings).
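As a very rough approximation of the ProjectionNet/SGNN idea above (not the papers' exact projection functions), one can hash character n-grams of each word into a fixed-size feature vector so that no dictionary or embedding matrix is ever stored; scikit-learn's HashingVectorizer gives a quick way to sketch this.

```python
from sklearn.feature_extraction.text import HashingVectorizer

# Rough stand-in for word projections: hash character n-grams into a fixed-size vector.
# Only an approximation for intuition; the papers define their own
# locality-sensitive-hashing projection functions.
projector = HashingVectorizer(
    analyzer="char_wb",      # character n-grams taken within word boundaries
    ngram_range=(1, 3),      # character unigrams to trigrams
    n_features=512,          # fixed projection size, chosen arbitrarily here
    norm=None,
    alternate_sign=False,
)

features = projector.transform(["hello", "helo", "world"])  # sparse (3, 512) matrix
# Misspellings share many character n-grams, so their projections stay close,
# with no dictionary lookup and no stored embedding matrix.
```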
4 stacked LSTM cells with a hidden size of 1000, with reversed input sentences and beam search, on the WMT’14 English-to-French dataset.
LSTMs' attention mechanisms on CNN feature maps do wonders.
A very interesting and creative work about textual question answering - what a breakthrough; there is a lot to build on from that.
Merging the ideas of the U-Net and the DenseNet, this new neural network is especially good for huge image segmentation datasets.
The U-Net is an encoder-decoder CNN that also has skip-connections, good for image segmentation at a per-pixel level.
Interesting idea of stacking multiple 3x3 conv+ReLU layers before pooling to get a bigger effective receptive field with fewer parameters. There is also a nice table for "ConvNet Configuration".
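As a back-of-the-envelope check on the parameter savings (assuming C input and C output channels and ignoring biases): two stacked 3x3 convolutions cover a 5x5 receptive field with 2 x 3 x 3 x C x C = 18C² weights versus 25C² for a single 5x5 convolution, and three stacked 3x3 layers cover 7x7 with 27C² weights versus 49C², with extra ReLU non-linearities in between as a bonus.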
For the "deconvnet layer".
Epic raw voice/music generation with new architectures based on dilated causal convolutions to capture longer audio contexts.
Awesome for the use of "local contrast normalization".
Parsey McParseface's birth, a neural syntax tree parser.
Interesting for its visual animations; it serves as a nice example-driven intro to attention mechanisms.
Grow decision trees and visualize them, infer the hidden logic behind data.
Clever trick to estimate an optimal learning rate prior to any full training run.
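A hedged sketch of that trick as I understand it: exponentially sweep the learning rate over a short run, record the loss at each step, and pick a rate somewhat below the point where the loss starts to diverge. The toy model (a linear regression on random data), the sweep bounds, and the step count below are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10))               # toy regression data, purely for illustration
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=256)

w = np.zeros(10)
min_lr, max_lr, n_steps = 1e-5, 1.0, 100     # sweep bounds chosen arbitrarily
lrs, losses = [], []
for step in range(n_steps):
    lr = min_lr * (max_lr / min_lr) ** (step / (n_steps - 1))   # exponential learning-rate sweep
    batch = rng.integers(0, len(X), size=32)
    pred = X[batch] @ w
    losses.append(float(np.mean((pred - y[batch]) ** 2)))
    lrs.append(lr)
    w -= lr * X[batch].T @ (pred - y[batch]) / len(batch)       # one SGD step at this rate

# Plot losses against lrs (log scale) and pick a learning rate a bit below
# where the loss curve stops improving and starts to blow up.
```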
Author of Keras - has interesting Twitter posts and innovative ideas.
Learn to slay hyperparameter spaces automatically rather than by hand.
Very interesting CNN architecture (e.g., the inception-style convolutional layers are promising and efficient in terms of reducing the number of parameters).
SOTA across many NLP tasks from unsupervised pretraining on a huge corpus.
Easily manage huge files in your private Git projects.
A fresh look at how neurons map information.
Thought-provoking article about the future of the brain and brain-computer interfaces.
All hail NLP's ImageNet moment.
List of mid- to long-term futuristic predictions made by Ray Kurzweil.
Awesome for doing clustering on audio - post by an intern at Spotify.
The SOLID principles applied to Machine Learning.
Good for understanding the "Attention Is All You Need" (AIAYN) paper.
François Chollet's thoughts on the future of deep learning.
Understand the different approaches used for NLP's ImageNet moment.
Also good for understanding the "Attention Is All You Need" (AIAYN) paper.
Focus on clear business objectives, avoid pivots of algorithms unless you have really clean code, and be able to know when what you coded is "good enough".
MUST-READ post by Andrej Karpathy - this is what motivated me to learn RNNs; it demonstrates what they can achieve in the most basic form of NLP.
Not only are the SOLID principles needed for writing clean code, but the lesser-known REP, CCP, CRP, ADP, SDP and SAP principles are also very important for developing large software that must be bundled into separate packages.
Explains the LSTM cells' inner workings; plus, it has interesting links in its conclusion.
Realistic talking machines: perfect voice generation.
Data is not to be overlooked, and communication between teams and data scientists is important to integrate solutions properly.
An awesome list of public datasets.
Many interesting neural network architectures are implemented by the Korean developer Taehoon Kim, a.k.a. carpedm20.
Neural Turing Machine TensorFlow implementation.
Learn the right design patterns to use for doing Machine Learning the right way, through practice.
This could be used for a chatbot.
Transfer learning tutorial in TensorFlow for vision from high-level embeddings of a pretrained CNN, AlexNet 2012.
Improvements on the previous project.
Auto (meta) optimizing a neural net (and its architecture) on the CIFAR-100 dataset.
Keras is another interesting deep learning framework like TensorFlow; it is mostly high-level.
Huge free English speech dataset with balanced genders and speakers, that seems to be of high quality.
Tutorial of mine on using LSTMs on time series for classification.
GitHub is full of nice code samples & projects.
Neuraxle is a Machine Learning (ML) library for building neat pipelines, providing the right abstractions to ease research, development, and deployment of your ML applications.
The best framework for structuring and deploying your machine learning projects; it is also compatible with most frameworks (e.g., Scikit-Learn, TensorFlow, PyTorch, Keras, and so forth).
Another Python framework to benchmark your sentence representations on many datasets (NLP tasks).
With this, you can use words in your deep learning models without training or loading embeddings.
A Python framework to benchmark your sentence representations on many datasets (NLP tasks).
Tutorial of mine on how to predict temporal sequences of numbers - that may be multichannel.
TensorFlow wrapper à la scikit-learn.
Smooth patch merger for semantic segmentation with a U-Net.
Question answering dataset that can be explored online, and a list of models performing well on that dataset.
The best-known deep learning framework, both high-level and low-level, while staying flexible.
TONS of datasets for ML.
A talk for a reading group on attention mechanisms (Paper: Neural Machine Translation by Jointly Learning to Align and Translate).
Yet another YouTube playlist I composed, this time about various CS topics.
A list of videos about deep learning that I found interesting or useful, this is a mix of a bit of everything.
Andrew Ng interviews Geoffrey Hinton, who talks about his research and breakthroughs, and gives advice for students.
A primer on how to structure your Machine Learning projects when using Jupyter Notebooks.
A YouTube playlist I composed about the DFT/FFT, the STFT and the Laplace transform - I was mad that my software engineering bachelor's degree didn't include signal processing classes (except a bit in the quantum physics class).
Siraj has entertaining, fast-paced video tutorials about deep learning.
Properly generalizes how tensors work; just watching a few of the videos already helps a lot with grasping the concepts.
Interesting and shallow overview of some research papers, for example about WaveNet or Neural Style Transfer.