I recently published a post explaining the core concepts of deploying an ML model as a serverless inference service using AWS Lambda, Amazon API Gateway, and the AWS CDK.

For some use cases, you and your ML team may need to implement a more complex inference workflow, where predictions come from multiple models and are orchestrated as a directed acyclic graph (DAG). On AWS, Step Functions Synchronous Express Workflows let you easily build that orchestration layer for your real-time inference services.
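To illustrate, here is a minimal sketch of calling such a workflow from a client with boto3, the AWS SDK for Python. The state machine ARN and the `features` payload shape are hypothetical, standing in for whatever input your workflow's first state expects.

```python
import json


def build_inference_input(features):
    """Serialize the feature payload for the workflow's first state."""
    return json.dumps({"features": features})


def invoke_workflow(state_machine_arn, features):
    """Run the express workflow synchronously and return the prediction.

    Requires AWS credentials; boto3 is imported lazily so the payload
    helper above stays usable without an AWS environment.
    """
    import boto3

    sfn = boto3.client("stepfunctions")
    response = sfn.start_sync_execution(
        stateMachineArn=state_machine_arn,
        input=build_inference_input(features),
    )
    # Synchronous express executions return the final state output directly.
    return json.loads(response["output"])
```

Unlike standard workflows, a synchronous express execution blocks until the DAG completes, so the caller gets the aggregated prediction in a single request/response cycle.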

Photo by Jon Tyson on Unsplash

In this post, I will show how you can create a multi-model serverless inference service with AWS Lambda, Step…

Amazon SageMaker Studio is a fully integrated IDE unifying the tools needed for ML development. With Studio you can write code, track experiments, visualize data, and perform debugging and monitoring in a JupyterLab-based interface. SageMaker manages the creation of the underlying instances and resources, so you can get started quickly on your ML project.


When you create or launch a Notebook, an Interactive Shell, or a Terminal based on a SageMaker Image, the resources run as Apps on Amazon EC2 instances. You incur cost for those instances, and you must shut the Apps down to stop the metering.
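As a sketch of that shutdown step, the snippet below lists a user's Studio Apps with boto3 and deletes the ones still running. The function and helper names are illustrative; only the boto3 calls (`list_apps`, `delete_app`) are real SageMaker APIs.

```python
def is_active(status):
    """Apps in these states still incur instance cost."""
    return status in ("InService", "Pending")


def shut_down_user_apps(domain_id, user_profile_name):
    """Delete every running Studio App for a user so metering stops."""
    import boto3  # imported lazily; requires AWS credentials to run

    sm = boto3.client("sagemaker")
    deleted = []
    paginator = sm.get_paginator("list_apps")
    for page in paginator.paginate(
        DomainIdEquals=domain_id,
        UserProfileNameEquals=user_profile_name,
    ):
        for app in page["Apps"]:
            if is_active(app["Status"]):
                sm.delete_app(
                    DomainId=domain_id,
                    UserProfileName=user_profile_name,
                    AppType=app["AppType"],
                    AppName=app["AppName"],
                )
                deleted.append(app["AppName"])
    return deleted
```

You could schedule a function like this to run outside working hours to avoid paying for idle Apps.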

Currently, although you can install…

R is one of the most popular languages used in data science. It is open source and has many packages for statistical analysis, data wrangling, visualization, and machine learning.

After training an R model, you and your ML team might explore ways to deploy it as an inference service. AWS offers many options for this, so you can adapt the deployment scenario to your needs. Among those, adopting a serverless architecture allows you to build a scalable R inference service while freeing your team from infrastructure management.
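From the client's point of view, such a service is just an HTTPS endpoint fronted by API Gateway. Here is a minimal sketch, using only the Python standard library, of posting features to it; the endpoint URL and payload shape are hypothetical.

```python
import json
from urllib import request


def build_request(endpoint_url, payload):
    """Prepare a POST request carrying the model input as JSON."""
    body = json.dumps(payload).encode("utf-8")
    return request.Request(
        endpoint_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def predict(endpoint_url, payload):
    """Call the API Gateway endpoint fronting the R Lambda function."""
    with request.urlopen(build_request(endpoint_url, payload)) as resp:
        return json.loads(resp.read())
```

The client is language-agnostic: it neither knows nor cares that the model behind the endpoint runs in R.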

Lambda + R + CDK = ❤

In this post I will show how you can create a serverless R…

Jupyter Notebooks provide useful environments to interactively explore and experiment during an ML project. However, having helped many teams deliver ML solutions for large enterprises on AWS, I have often seen a point in the project where data scientists and ML engineers need a full-fledged cloud-based IDE offering better code-completion and debugging capabilities for containers running in SageMaker.

In this post, I will show how you can install and run the Theia IDE on a SageMaker Notebook Instance using a Lifecycle Configuration.
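As a sketch of the mechanism, a Lifecycle Configuration is just a base64-encoded shell script that SageMaker runs on the instance; the snippet below registers one with boto3. The configuration name and the placeholder script are hypothetical, and the actual Theia install steps are the subject of the post.

```python
import base64

# Placeholder on-start script; the real Theia install commands go here.
ON_START_SCRIPT = """#!/bin/bash
set -e
echo "install and launch Theia here"
"""


def encode_script(script):
    """Lifecycle Configuration content must be base64-encoded."""
    return base64.b64encode(script.encode("utf-8")).decode("utf-8")


def create_lifecycle_config(name, script):
    """Register a script to run every time the notebook instance starts."""
    import boto3  # imported lazily; requires AWS credentials to run

    sm = boto3.client("sagemaker")
    return sm.create_notebook_instance_lifecycle_config(
        NotebookInstanceLifecycleConfigName=name,
        OnStart=[{"Content": encode_script(script)}],
    )
```

You then attach the configuration to a Notebook Instance at creation time, and the script runs on every start, so the IDE is reinstalled even after the instance is stopped and restarted.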

SageMaker + Theia = ❤

Amazon SageMaker is a fully managed service bringing together a broad set of capabilities to help…

Sofian Hamiti

ML Specialist Solutions Architect
