Find my full blog here
Find my highlighted posts below!
In this tutorial, we will classify BBC news articles into their appropriate categories using 1-D convolutional layers instead of an RNN or LSTM.
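To make the idea concrete, here is a minimal sketch of the core operation a 1-D convolutional layer performs over a sentence: the text becomes a sequence of embedding vectors, and a filter of fixed width slides along the sequence axis. All vectors and sizes below are illustrative, not taken from the tutorial.

```python
# Sketch of a "valid" 1-D convolution over a text sequence.
# sequence: list of token-embedding vectors; kernel: k weight vectors.

def conv1d(sequence, kernel):
    """Slide the kernel along the sequence, one activation per position."""
    k = len(kernel)
    outputs = []
    for i in range(len(sequence) - k + 1):
        # Dot product of the kernel with a window of k embeddings.
        act = sum(
            w * x
            for wvec, xvec in zip(kernel, sequence[i:i + k])
            for w, x in zip(wvec, xvec)
        )
        outputs.append(act)
    return outputs

# Toy example: 5 tokens with 2-dim embeddings, filter width 3,
# so we get 5 - 3 + 1 = 3 activations.
seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0], [1.0, 0.0]]
ker = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(conv1d(seq, ker))  # [4.0, 1.0, 2.0]
```

A real Keras `Conv1D` layer applies many such filters in parallel and learns their weights, but the sliding dot product above is the whole trick.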
In this tutorial, we are going to train a Transformer model for abstractive summarization by training it on news articles and their summaries.
In this tutorial, we use metric learning to search for the nearest neighbours (that is, the most similar images) to a given image in the CIFAR-10 dataset.
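The retrieval step behind this tutorial can be sketched in a few lines: once a trained network maps images to embedding vectors, "similar images" are simply the nearest neighbours of the query embedding. The embeddings below are made up for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest(query, gallery):
    """Index of the gallery embedding most similar to the query."""
    return max(range(len(gallery)), key=lambda i: cosine(query, gallery[i]))

# Pretend these are embeddings produced by the trained network.
gallery = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
print(nearest([0.7, 0.7], gallery))
```

Metric learning is what makes the embedding space meaningful: the loss pulls embeddings of same-class images together, so this simple nearest-neighbour lookup returns visually similar images.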
In this tutorial, we will train an Autoencoder to denoise MNIST images, reproducing clean pictures of the handwritten digits.
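The key trick in a denoising autoencoder is how the training pairs are built: the input is a corrupted image, but the reconstruction target is the original clean image. A minimal sketch, with illustrative pixel values and noise level:

```python
import random

def add_noise(image, noise_factor=0.5, rng=random.Random(0)):
    """Corrupt pixel values in [0, 1] with Gaussian noise, then clip back."""
    return [min(1.0, max(0.0, p + noise_factor * rng.gauss(0, 1)))
            for p in image]

clean = [0.0, 0.2, 0.8, 1.0]   # a tiny stand-in for a flattened MNIST image
noisy = add_noise(clean)

# Training pairs are (noisy, clean): the network sees the noisy image
# and is penalized for any difference from the clean original.
print(noisy)
```

Because the target is the clean image, the network has no choice but to learn to undo the corruption, which is exactly what makes it useful on noisy inputs later.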
In this tutorial, we will train a CycleGAN model to translate photos of horses into zebras, and back again into horses. The model we train is the very model introduced in the CycleGAN paper. CycleGANs are great because they learn this translation from unpaired data, without supervision.
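The idea that makes unpaired training possible is cycle consistency: translating horse to zebra and back should return (approximately) the original image. A minimal sketch of that loss, where `G` and `F` are stand-in functions rather than the real generator networks:

```python
def cycle_loss(x, G, F):
    """Mean absolute error between x and F(G(x)) (the L1 cycle loss)."""
    recon = F(G(x))
    return sum(abs(a - b) for a, b in zip(x, recon)) / len(x)

# Toy "generators": G pretends to shift pixels toward zebra stripes,
# F pretends to shift them back. Here F exactly inverts G.
G = lambda img: [v + 0.5 for v in img]
F = lambda img: [v - 0.5 for v in img]

x = [0.1, 0.4, 0.9]            # a stand-in for a horse photo
print(cycle_loss(x, G, F))     # near 0.0, since F undoes G
```

During training the real generators are pushed toward this behaviour: the cycle loss is added to the adversarial losses so that translations stay faithful to the input image.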
Build a Question Answering System using a pre-trained BERT model and tokenizer, with the first matching Wikipedia article as context.
Build and train an Encoder-Decoder style network for Image Captioning, using Transfer Learning from the ResNet50 model.
Train your first Neural Network for Machine Translation, using an Encoder-Decoder style LSTM to translate German phrases into English. Inspired by Google's GNMT model used in Google Translate.
In this tutorial, we are going to build a neural network with LSTM layers using Keras and train it on Alice's Adventures in Wonderland by Lewis Carroll, available through Project Gutenberg.
Sneak peek into the methodology behind Nvidia's DLSS! In this tutorial, we build an SR-CNN model that learns an end-to-end mapping from low-resolution to high-resolution images. As a result, we can use it to improve the quality of low-resolution images.
"Is it actually possible for an AI to break an encrypted message?" Take a look at the Learning Parity with Noise (LPN) problem, coupled with a tutorial that helps us understand how an AI can break encryption.
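The LPN problem itself is easy to state in code: each sample is a random bit vector paired with its inner product with a secret key (mod 2), where the label is occasionally flipped by noise. The dimensions and noise rate below are illustrative.

```python
import random

def lpn_sample(secret, noise_rate, rng):
    """One LPN sample: (a, <a, secret> mod 2, possibly flipped by noise)."""
    a = [rng.randint(0, 1) for _ in range(len(secret))]
    clean = sum(ai * si for ai, si in zip(a, secret)) % 2
    flip = 1 if rng.random() < noise_rate else 0  # Bernoulli noise bit
    return a, clean ^ flip

rng = random.Random(42)
secret = [1, 0, 1, 1]                 # the hidden key an attacker wants
samples = [lpn_sample(secret, 0.1, rng) for _ in range(5)]
for a, label in samples:
    print(a, label)
```

Without noise this is trivially solvable by Gaussian elimination; the noise is what makes recovering the secret hard, and what makes it an interesting target for a learning model.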
Deep Learning models are data-hungry and require a lot of training time, which only grows as the number of layers increases. Using the RMSProp optimizer, train a CNN model on the CIFAR-10 dataset using Keras on Kaggle!
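RMSProp's update rule fits in a few lines: each parameter's step is scaled by a running root-mean-square of its recent gradients. A sketch on a 1-D quadratic loss, with the common default hyperparameters rather than the tutorial's exact Keras configuration:

```python
def rmsprop_step(w, grad, cache, lr=0.01, rho=0.9, eps=1e-7):
    """One RMSProp update: divide the step by a running RMS of gradients."""
    cache = rho * cache + (1 - rho) * grad ** 2
    w = w - lr * grad / (cache ** 0.5 + eps)
    return w, cache

# Minimize f(w) = w^2 starting from w = 5.
w, cache = 5.0, 0.0
for _ in range(100):
    grad = 2 * w               # gradient of f(w) = w^2
    w, cache = rmsprop_step(w, grad, cache)
print(w)                       # much closer to the minimum at 0
```

The per-parameter scaling is why RMSProp trains deep networks well without hand-tuning a learning-rate schedule: parameters with consistently large gradients take smaller effective steps, and vice versa.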
Train your first multimodal learning model with Images and Text modalities using Keras and Kaggle Notebooks.