-

What Is Early Stopping?
Table Of Contents:
What Is Early Stopping?
Example To Understand – Classification Use Case.
Understand The EarlyStopping() Method.

(1) What Is Early Stopping?
Let's say you are training a neural network model. You need to specify how many epochs to train it for; the term "epoch" refers to a single complete pass of the training dataset through the neural network. How would you know how many epochs are enough to train your model well? You could say, "I will train my model for 1,000 epochs and see the result." But
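A minimal, self-contained sketch of the EarlyStopping() method the table of contents points to; the toy data, layer sizes, and patience value are invented for illustration, not taken from the article:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import EarlyStopping

# Toy data, invented for illustration only.
X_train = np.random.rand(500, 10)
y_train = (X_train.sum(axis=1) > 5).astype("float32")

model = Sequential([Dense(16, activation="relu", input_shape=(10,)),
                    Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy")

early_stop = EarlyStopping(
    monitor="val_loss",          # watch validation loss after each epoch
    patience=5,                  # tolerate 5 stagnant epochs before stopping
    restore_best_weights=True,   # roll back to the best epoch's weights
)

# Ask for a generous epoch budget; the callback cuts training short
# once validation loss stops improving, so we never have to guess.
model.fit(X_train, y_train, validation_split=0.2,
          epochs=1000, callbacks=[early_stop])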
-

Australian Rain Prediction.
Predicting Next Day Rain In Australia
Table Of Contents:
What Is The Business Use Case?
Python Implementation.

(1) What Is The Business Use Case?
Predicting next-day rain using a dataset containing 10 years of daily weather observations from different locations across Australia.

(2) Python Implementation.
(1) Importing Required Libraries

import matplotlib.pyplot as plt
import seaborn as sns
import datetime
from sklearn import preprocessing
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from keras.layers import Dense, BatchNormalization, Dropout, LSTM
from keras.models import Sequential
from keras.utils import to_categorical
from keras.optimizers import Adam
from tensorflow.keras import regularizers
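A condensed sketch of the kind of pipeline these imports suggest; the weatherAUS.csv file name, the RainTomorrow target column, and all hyperparameters are assumptions for illustration, not the article's exact code:

import pandas as pd
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import Adam

df = pd.read_csv("weatherAUS.csv")                    # assumed file name
df = df.dropna(subset=["RainTomorrow"])               # drop rows with a missing target
y = LabelEncoder().fit_transform(df["RainTomorrow"])  # Yes/No -> 1/0

# For brevity, keep only the numeric columns as features.
num = df.select_dtypes("number")
X = StandardScaler().fit_transform(num.fillna(num.median()))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = Sequential([
    Dense(32, activation="relu", input_shape=(X_train.shape[1],)),
    Dropout(0.2),
    Dense(1, activation="sigmoid"),                   # probability of rain tomorrow
])
model.compile(optimizer=Adam(learning_rate=1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, validation_split=0.2, epochs=10, batch_size=64)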
-

Customer Churn Prediction In The Banking Sector.
Table Of Contents:
What Is The Business Use Case?
List Of Independent Variables.
Importing Necessary Libraries For Artificial Neural Network.
Importing Dataset.
Generating Matrix Of Features (X).
Generating Dependent Variable Vector (Y).
Encoding Categorical Variable Gender.
Encoding Categorical Variable Country.
Splitting Dataset Into Training And Testing Dataset.
Performing Feature Scaling.
Initializing Artificial Neural Network.
Creating Hidden Layers.
Creating Output Layer.
Compiling Artificial Neural Network.
Fitting Artificial Neural Network.
Predicting Result For Single Point Observation.
Saving Created Neural Network.

(1) What Is The Business Use Case?
Business Use Case: Customer Churn Prediction In The Banking Sector.
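A condensed sketch of the steps this table of contents lists, assuming the commonly used Churn_Modelling.csv layout (Country in the second feature column, Gender in the third, and the churn flag as the last column); the file name, column positions, and hyperparameters are assumptions, not the article's exact code:

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import LabelEncoder, OneHotEncoder, StandardScaler
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense

df = pd.read_csv("Churn_Modelling.csv")      # assumed file name
X = df.iloc[:, 3:-1].values                  # matrix of features
y = df.iloc[:, -1].values                    # dependent variable vector (churn flag)

X[:, 2] = LabelEncoder().fit_transform(X[:, 2])          # encode Gender
ct = ColumnTransformer([("geo", OneHotEncoder(), [1])],  # encode Country
                       remainder="passthrough")
X = ct.fit_transform(X)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
sc = StandardScaler()                        # feature scaling
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

ann = Sequential()                           # initialize the ANN
ann.add(Dense(6, activation="relu"))         # hidden layers
ann.add(Dense(6, activation="relu"))
ann.add(Dense(1, activation="sigmoid"))      # output layer
ann.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
ann.fit(X_train, y_train, batch_size=32, epochs=10)
ann.save("churn_ann.h5")                     # saving the created network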
-

Data Science Use Cases List
Table Of Contents:
Customer Churn Prediction In Banking Sector.
-

Self Attention In Transformers.
Table Of Contents:
Motivation To Study Self Attention.
Problem With Word Embedding.
What Is Contextual Word Embedding?
How Does Self-Attention Work?
How To Get The Contextual Word Embeddings?
Advantages Of The Above First-Principles Approach.
Introducing Learnable Parameters In The Model.

(1) Motivation To Study Self Attention.
By 2024, we all know that a technology called 'GenAI' has penetrated the market. With this technology, we can automatically create new images, videos, and text from scratch. At the center of GenAI technology is the Transformer, and at the center of the Transformer is self-attention. Hence
-

What Is Self Attention?
Table Of Contents:
What Is The Most Important Thing In NLP Applications?
Problem With Word2Vec Model.
The Problem Of Average Meaning.
What Is Self Attention?

(1) What Is The Most Important Thing In NLP Applications?
Before understanding the self-attention mechanism, we must understand the most important thing in any NLP application: how do you convert words into numbers? Our computers don't understand words; they only understand numbers. Hence, researchers first worked in this direction, converting words into vectors. We got some basic techniques like One-Hot Encoding, Bag
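A toy sketch of the first of those basic techniques, one-hot encoding; the three-word vocabulary is invented for illustration:

import numpy as np

vocab = ["rain", "sun", "cloud"]
index = {word: i for i, word in enumerate(vocab)}

def one_hot(word):
    # A vector of zeros with a single 1 at the word's vocabulary position.
    vec = np.zeros(len(vocab))
    vec[index[word]] = 1.0
    return vec

print(one_hot("sun"))   # [0. 1. 0.]

Note that every pair of one-hot vectors is equally distant, so the encoding carries no notion of meaning; that limitation is what motivated denser embeddings like Word2Vec.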
-

Introduction To Transformers!
Table Of Contents:
What Is A Transformer?
History Of Transformers.
Impact Of Transformers In NLP.
Democratizing AI.
Multimodal Capability Of Transformers.
Acceleration Of GenAI.
Unification Of Deep Learning.
Why Were Transformers Created?
Neural Machine Translation By Jointly Learning To Align & Translate.
Attention Is All You Need.
The Timeline Of Transformers.
The Advantages Of Transformers.
Real-World Applications Of Transformers.
Disadvantages Of Transformers.
The Future Of Transformers.

(1) What Is A Transformer?
The Transformer is basically a neural network architecture. In deep learning, we have already studied the ANN, CNN & RNN. ANN works for cross-sectional data, CNN
-

Luong Attention!
Luong's Attention!
Table Of Contents:
What Is Luong's Attention?
Key Features Of Luong's Attention Model.
Advantages Of Luong's Attention Model.
Architecture Of Luong's Attention Model.
Why Do We Take The Current Hidden State Output Of The Decoder In Luong's Attention Model?
Difference Between Luong's Attention & Bahdanau's Attention.

(1) What Is Luong's Attention?
Luong's attention is another type of attention mechanism, introduced in the paper "Effective Approaches to Attention-based Neural Machine Translation" by Minh-Thang Luong, Hieu Pham, and Christopher D. Manning in 2015. Luong's attention mechanism is also designed for encoder-decoder models, similar to Bahdanau's
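A minimal numpy sketch of Luong-style (multiplicative) attention scoring as usually formulated from the 2015 paper; the shapes and values are invented for illustration:

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

h_t = np.random.randn(4)       # current decoder hidden state (what Luong uses)
H_s = np.random.randn(5, 4)    # five encoder hidden states

scores = H_s @ h_t             # "dot" score: score(h_t, h_s) = h_t . h_s
alpha = softmax(scores)        # attention weights over the source positions
context = alpha @ H_s          # weighted sum of encoder states

# The "general" variant inserts a learned matrix W_a: score = h_t . (W_a @ h_s).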
-

Bahdanau Attention Vs Luong Attention!
Bahdanau Attention!
Table Of Contents:
What Is Attention Mechanism?
What Is Bahdanau's Attention?
Architecture Of Bahdanau's Attention.

(1) What Is Attention Mechanism?
An attention mechanism is a neural network component used in various deep learning models, particularly in the field of natural language processing (NLP) and sequence-to-sequence tasks. It was introduced in the paper "Neural Machine Translation by Jointly Learning to Align and Translate" by Bahdanau et al. in 2014, and it later became the foundation of the Transformer in "Attention Is All You Need" (Vaswani et al., 2017). The attention mechanism allows a model to focus on the most relevant parts of the input when generating an output, rather than treating the entire input sequence equally. This is particularly useful when the
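For contrast with the multiplicative Luong sketch above, a minimal numpy sketch of Bahdanau-style (additive) scoring, which scores against the previous decoder state through a small feed-forward layer; the random matrices stand in for learned parameters:

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

d = 4
W_a = np.random.randn(d, d)    # applied to the decoder state (learned in practice)
U_a = np.random.randn(d, d)    # applied to each encoder state (learned in practice)
v_a = np.random.randn(d)       # projects the tanh output to a scalar score

s_prev = np.random.randn(d)    # previous decoder hidden state (what Bahdanau uses)
H_s = np.random.randn(5, d)    # five encoder hidden states

# score(s_prev, h_j) = v_a . tanh(W_a s_prev + U_a h_j)
scores = np.tanh(s_prev @ W_a.T + H_s @ U_a.T) @ v_a
alpha = softmax(scores)        # attention weights over the source positions
context = alpha @ H_s          # weighted sum of encoder states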
-

What Is Attention Mechanism?
Table Of Contents:
Problem With Encoder & Decoder Architecture.
Solution For Encoder & Decoder Architecture.
Math Behind Attention Mechanism.
Improvements Due To Attention Mechanism.

(1) Problem With Encoder & Decoder Architecture.
Problem With Encoder: The main idea behind the encoder-decoder architecture is that the encoder summarizes the entire input text into a single vector, and from that vector the decoder must produce the sentence in a different language. Consider the example below: your task is to read the entire sentence first, keep all the words in mind, and translate it into Hindi without seeing the sentence
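In symbols (a standard formulation, not quoted from this article), the plain encoder-decoder hands the decoder one fixed context vector, while attention recomputes a context for every output step:

c = h_T  (the encoder's final hidden state is the only context)

c_t = \sum_{i=1}^{T} \alpha_{ti} h_i, \qquad
\alpha_{ti} = \frac{\exp(e_{ti})}{\sum_{j=1}^{T} \exp(e_{tj})}, \qquad
e_{ti} = \mathrm{score}(s_{t-1}, h_i)

Here h_i are the encoder hidden states, s_{t-1} is the previous decoder state, and the weights \alpha_{ti} let the decoder look back at the whole sentence instead of relying on one compressed summary.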
