Talk

Running inference as a serverless cloud function

When deploying your model into production, you have to take care of configuring and maintaining the runtime environment, scaling, monitoring, and more – tasks that are closer to DevOps than to ML. In some contexts, you can achieve your goals in a much simpler way by taking a "serverless" approach. We'll take a look at the cloud services AWS Lambda, Google Cloud Functions, and Azure Functions and show how they enable running ML inference.
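As a rough illustration of the idea, a minimal sketch of an inference handler in the style of AWS Lambda's Python runtime is shown below. The tiny hand-coded linear model and its weights are placeholders invented for this sketch; a real deployment would load a serialized model from the deployment package or from object storage. Loading the model at module scope means it happens once per cold start and is reused across invocations of a warm function instance.

```python
import json

# Placeholder model: in practice these weights would come from a trained,
# serialized model bundled with the function or fetched from storage.
# Module-level initialization runs once per cold start.
WEIGHTS = [0.4, -1.2, 0.7]
BIAS = 0.1

def predict(features):
    """Score a feature vector with the (placeholder) linear model."""
    return sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS

def handler(event, context):
    """Lambda-style entry point: expects {"features": [...]} as a JSON body."""
    body = json.loads(event["body"])
    score = predict(body["features"])
    return {
        "statusCode": 200,
        "body": json.dumps({"score": score}),
    }
```

The `handler(event, context)` signature matches what the AWS Lambda Python runtime invokes; Google Cloud Functions and Azure Functions use slightly different entry-point conventions, but the pattern of one-time initialization plus a per-request handler carries over.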

Date
04.03.2019
Time
19:00 - 19:45
Conference / Event
Berlin Machine Learning Group
Venue
Betahaus, Berlin