Serverless deployment of ML inference models

When you’re trying to find use cases for serverless applications, machine learning inference might not immediately — if at all — come to mind. Why? Because trained ML models are essentially a big bundle of state (ouch!), and most frameworks come with a large pile of dependencies (double ouch!).

We’ll investigate working examples of deep learning for image and text classification using TensorFlow, PyTorch and spaCy on AWS, Google Cloud and Azure. We will use these examples to discuss working with dependencies, handling global state, and deployment/packaging options in serverless environments. Furthermore, we will ask whether these constraints can actually improve architectural goals such as separation of concerns in machine learning code. Finally, I will summarize whether serverless is a good fit for these use cases.
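
To give a flavor of the global-state handling discussed above: in serverless runtimes such as AWS Lambda, module-level state survives across invocations of the same warm container, so an expensive model load should happen once, outside the handler. The sketch below uses illustrative names (`load_model`, `handler`) and a placeholder in place of a real framework call:

```python
# Sketch of the "load once, reuse on warm invocations" pattern.
# load_model() stands in for an expensive framework call such as
# tf.keras.models.load_model() or torch.load().

LOAD_COUNT = 0  # instrumented here only to demonstrate the single load

def load_model():
    """Placeholder for an expensive model load at cold start."""
    global LOAD_COUNT
    LOAD_COUNT += 1
    return {"classes": ["cat", "dog"]}

# Module scope: executed once per container cold start,
# then reused by every subsequent invocation.
MODEL = load_model()

def handler(event, context=None):
    # Inference uses the cached model instead of reloading per request.
    label = MODEL["classes"][event.get("index", 0)]
    return {"statusCode": 200, "label": label}
```

Repeated calls to `handler` in the same process reuse the cached `MODEL`; a new container (cold start) pays the load cost again.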

Don’t be afraid: machine learning models serve only as the subject of discussion in this talk. No in-depth knowledge of these technologies is required for this session; everything you need to follow the discussion will be covered in the talk itself.

Date
2019-04-02
Time
15:05 - 15:55
Conference / Event
microXchg 2019
Venue
Kalkscheune, Berlin
