Running inference as a serverless cloud function

When deploying your model to production, you have to take care of configuring and maintaining the runtime environment, scaling, monitoring, and more – tasks that have more to do with DevOps than with ML. In some contexts, you can achieve your goals far more simply by adopting a "serverless" approach. We'll take a look at the cloud services AWS Lambda, Google Cloud Functions, and Azure Functions and show how they enable running ML inference.
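To give a flavor of the approach, here is a minimal sketch of an AWS Lambda handler that serves predictions from a pre-trained model. The model file name, the "features" input format, and the use of a pickled scikit-learn model are illustrative assumptions, not specifics from the talk:

import json
import pickle

# Load the model once per container, outside the handler, so that
# warm invocations reuse it instead of deserializing on every request.
# "model.pkl" is a hypothetical pre-trained model bundled with the function.
with open("model.pkl", "rb") as f:
    MODEL = pickle.load(f)

def handler(event, context):
    """AWS Lambda entry point, invoked e.g. via API Gateway.

    Expects a JSON request body like {"features": [1.0, 2.0, 3.0]}.
    """
    body = json.loads(event["body"])
    prediction = MODEL.predict([body["features"]])
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": prediction.tolist()}),
    }

Because the platform spins containers up and down on demand, anything loaded at module level is shared across warm invocations – which is why the model is loaded outside the handler rather than inside it.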

Date
2019-03-04
Time
19:00 - 19:45
Conference / Event
Berlin Machine Learning Group
Venue
Betahaus, Berlin

Slides

