Running inference as a serverless cloud function
When deploying your model into production, you have to take care of configuring and maintaining the runtime environment, scaling, monitoring, and more – tasks that are more closely related to DevOps than to ML. In some contexts, you can achieve your goals in a much simpler way by adopting a “serverless” approach. We’ll take a look at the cloud services “AWS Lambda”, “Google Cloud Functions”, and “Azure Functions” and show how they enable running ML inference.
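To give a taste of how little glue code this approach requires, here is a minimal sketch of an AWS Lambda inference handler in Python. The model file name, request shape, and API Gateway trigger are illustrative assumptions, not details from the talk:

```python
import json
import pickle

# Illustrative assumption: a pickled scikit-learn model shipped inside the
# deployment package as "model.pkl". Loading it at module scope means it is
# deserialized once per container and reused across warm invocations.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

def handler(event, context):
    # Assumes invocation via an API Gateway proxy integration, which passes
    # the request payload as a JSON string in event["body"].
    payload = json.loads(event["body"])
    prediction = model.predict([payload["features"]])
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": prediction.tolist()}),
    }
```

Loading the model at module scope amortizes the deserialization cost across warm invocations, which is the main latency lever in this kind of setup.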
- Date: 2019-03-04
- Time: 19:00 - 19:45
- Conference / Event: Berlin Machine Learning Group
- Venue: Betahaus, Berlin