• AWS – How To Deploy ML Model Using SageMaker Endpoint?

    AWS – How To Deploy ML Model Using SageMaker Endpoint Using A Prebuilt Container? Table Of Contents: Setup AWS & Install Dependencies. Train & Save The Model. Create A Docker Container. Push The Docker Image To Amazon ECR. Deploy The Model To A SageMaker Endpoint. Make Predictions Using The Endpoint. Clean Up The Resources. (1) Setup AWS & Install Dependencies. AWS Dependencies: Ensure you have the following in place: an AWS account with SageMaker and ECR permissions; Docker installed (docker --version); the AWS CLI configured (aws configure); and Boto3 and the SageMaker SDK installed. Python Libraries: Python libraries to build the model: pip install boto3
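The deployment flow outlined above (model → endpoint config → endpoint) can be sketched with boto3. This is a minimal illustration only: the account ID, image URI, S3 path, role ARN, and all names below are hypothetical placeholders, not values from the article. The actual AWS calls are shown commented out, since they require configured credentials.

```python
# Sketch of the SageMaker endpoint deployment flow with boto3.
# All ARNs, URIs, and names here are placeholder assumptions.
import json

REGION = "us-east-1"                      # assumed region
ENDPOINT_NAME = "demo-sklearn-endpoint"   # hypothetical endpoint name

# Parameters for create_model: the container image pushed to ECR
# and the S3 location of the trained model artifact.
model_params = {
    "ModelName": "demo-sklearn-model",
    "PrimaryContainer": {
        "Image": f"123456789012.dkr.ecr.{REGION}.amazonaws.com/demo-sklearn:latest",
        "ModelDataUrl": "s3://demo-bucket/model/model.tar.gz",
    },
    "ExecutionRoleArn": "arn:aws:iam::123456789012:role/SageMakerExecutionRole",
}

# Parameters for create_endpoint_config: instance type and count.
endpoint_config_params = {
    "EndpointConfigName": "demo-sklearn-config",
    "ProductionVariants": [
        {
            "VariantName": "AllTraffic",
            "ModelName": model_params["ModelName"],
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
        }
    ],
}

# With AWS credentials configured, the actual calls would be:
#   import boto3
#   sm = boto3.client("sagemaker", region_name=REGION)
#   sm.create_model(**model_params)
#   sm.create_endpoint_config(**endpoint_config_params)
#   sm.create_endpoint(EndpointName=ENDPOINT_NAME,
#                      EndpointConfigName="demo-sklearn-config")

print(json.dumps(endpoint_config_params["ProductionVariants"][0], indent=2))
```

Once the endpoint is InService, predictions go through the separate sagemaker-runtime client's invoke_endpoint call, and cleanup deletes the endpoint, endpoint config, and model in that order.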

    Read More

  • Transformers – Encoder Architecture

    Transformers – Encoder Architecture Table Of Contents: What Is The Encoder In A Transformer? Internal Workings Of The Encoder Module. How The Encoder Module Works, With An Example. Why Do We Add The Original Input Back In The Encoder Module? (1) What Is The Encoder In A Transformer? In a Transformer model, the Encoder is responsible for processing input data (like a sentence) and transforming it into a meaningful contextual representation that can be used by the Decoder (in tasks like translation) or directly for classification. Encoding is necessary because it: Transforms words into a numerical format (embeddings). Allows self-attention to analyze relationships between words. Adds
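The two encoder ideas mentioned above (self-attention over the embeddings, then adding the original input back) can be sketched in a few lines of numpy. This is an illustrative single-head sketch, not the article's implementation: the shapes, random weights, and function names are assumptions, and layer normalization and the feed-forward sub-layer are omitted for brevity.

```python
# Minimal numpy sketch of one encoder sub-layer: scaled dot-product
# self-attention followed by the residual ("add the original input") step.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    # x: (seq_len, d_model); a single attention head for clarity.
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])  # scaled dot-product scores
    return softmax(scores) @ v               # attention-weighted values

rng = np.random.default_rng(0)
seq_len, d = 4, 8                  # 4 tokens, model dimension 8 (assumed)
x = rng.normal(size=(seq_len, d))  # token embeddings (plus positional info)
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))

attn_out = self_attention(x, wq, wk, wv)
# Residual connection: adding x back preserves the original token
# information and gives gradients a direct skip path during training.
encoder_out = x + attn_out
print(encoder_out.shape)  # (4, 8)
```

The residual addition is why each encoder sub-layer keeps the same (seq_len, d_model) shape: the attention output must be addable element-wise to its own input.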

    Read More