AWS Leverages Containers to Automate ML Model Deployments

Amazon Web Services (AWS) uses containers to make it easier to deploy machine learning models built with the Amazon SageMaker Studio Notebook tool in production environments.

Rather than requiring data scientists to set up, configure, and manage a continuous integration/continuous delivery (CI/CD) pipeline to automate deployments, AWS makes the case for using containers that allow them to pick a notebook and create a job that runs in a production environment.

Amazon SageMaker Studio Notebook achieves this by taking a snapshot of the entire notebook and then packaging its dependencies in a container. The resulting job can be scheduled to run and, upon completion, automatically releases the infrastructure used to run it.
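Mehrotra did not walk through the underlying mechanics, but the core idea, executing a saved copy of a notebook headlessly with parameters injected at run time, can be illustrated with the open source papermill library. This is a minimal sketch, not SageMaker's actual implementation; the notebook paths and parameters are hypothetical.

```python
# Minimal sketch of headless notebook execution using papermill.
# Paths and parameters are hypothetical; SageMaker's managed notebook
# jobs handle the snapshotting, container packaging, scheduling, and
# infrastructure teardown automatically.
import papermill as pm

pm.execute_notebook(
    "train_model.ipynb",             # snapshot of the notebook to run
    "output/train_model_run.ipynb",  # executed copy, with cell outputs
    parameters={"epochs": 10},       # values injected into the run
)
```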

Ankur Mehrotra, general manager for Amazon SageMaker, says the goal is to reduce the amount of time needed to move a notebook into production from the weeks currently required to a few hours.

The Amazon SageMaker platform is a managed service that AWS provides to simplify the development of machine learning models that infuse artificial intelligence (AI) capabilities into applications. The platform covers everything from data preparation and governance to deployment. The models created by Amazon SageMaker are invoked via a standard set of application programming interfaces (APIs) that the managed service automatically creates, Mehrotra noted.
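For illustration, here is a minimal sketch of calling such an endpoint with the AWS SDK for Python (boto3). The endpoint name and payload format are hypothetical placeholders; the invoke_endpoint call itself is the standard SageMaker runtime API.

```python
# Sketch: invoking a deployed SageMaker model via the runtime API.
# The endpoint name and payload shape are hypothetical.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="my-model-endpoint",  # placeholder name
    ContentType="application/json",
    Body=json.dumps({"features": [1.2, 3.4, 5.6]}),
)
prediction = json.loads(response["Body"].read())
print(prediction)
```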

This approach also makes it easy for data science teams to update or replace models without disrupting application development workflows, he says.
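One way this works in practice: the endpoint that applications call stays fixed, and a new endpoint configuration pointing at the retrained model is swapped in behind it. A sketch with boto3, using hypothetical resource names:

```python
# Sketch: replacing the model behind a stable endpoint. Applications
# keep calling the same endpoint name while SageMaker shifts traffic
# to the new configuration. Resource names are placeholders.
import boto3

sm = boto3.client("sagemaker")

sm.update_endpoint(
    EndpointName="my-model-endpoint",        # unchanged, callers unaffected
    EndpointConfigName="my-model-config-v2"  # references the retrained model
)
```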

The degree to which organizations choose to rely on a managed service to build AI models will naturally vary. However, given the chronic lack of AI expertise, it makes more sense for organizations to use a platform that automates many of the manual tasks of building and deploying these models, Mehrotra says.

Most of these models are stored in a repository provided by AWS, but there is a way to integrate Amazon SageMaker with a Git repository if an organization decides to standardize on a single repository for both its ML models and software artifacts, Mehrotra noted.
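As one illustration of that integration, SageMaker exposes a create_code_repository call in boto3 for associating an external Git repository. The repository URL and names below are hypothetical.

```python
# Sketch: associating an external Git repository with SageMaker.
# Repository URL and names are hypothetical placeholders.
import boto3

sm = boto3.client("sagemaker")

sm.create_code_repository(
    CodeRepositoryName="ml-models-repo",
    GitConfig={
        "RepositoryUrl": "https://github.com/example-org/ml-models.git",
        "Branch": "main",
    },
)
```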

There is little doubt that most applications will soon be infused with some sort of ML model. The challenge is bridging the divide that currently exists between most DevOps and data science teams. ML models are subject to drift over time as new data is collected, so organizations must develop machine learning operations (MLOps) best practices to manage updates or replace models entirely when necessary. Of course, those updates must be aligned with updates to any application in which an ML model is embedded.
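Drift detection itself can be as simple as comparing the distribution a model was trained on against recently collected data. The following is a generic illustration, not a SageMaker-specific API, using a two-sample Kolmogorov-Smirnov test on synthetic data; the significance threshold is an assumption.

```python
# Generic drift-detection sketch: compare a feature's training
# distribution against recent production data with a two-sample
# Kolmogorov-Smirnov test. Data and threshold are synthetic/assumed.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
recent_feature = rng.normal(loc=0.3, scale=1.0, size=5_000)  # shifted

result = ks_2samp(training_feature, recent_feature)
if result.pvalue < 0.01:
    print(f"Drift detected (KS statistic = {result.statistic:.3f}); retraining may be needed.")
else:
    print("No significant drift detected.")
```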

It’s not clear how much MLOps and DevOps workflows may eventually converge, but it’s obvious that the rate at which ML models are being created is starting to accelerate. In addition to automated development processes, many of the latest generations of models require less data to build. As a result, depending on the use case, ML models of various sizes are being deployed in production environments more frequently.
