Talk

Running inference as a serverless cloud function

When deploying your model into production, you have to take care of configuring and maintaining the runtime environment, scaling, monitoring, and more – all tasks that are more related to DevOps than to ML. In some contexts, you can achieve your goals in a much simpler way by adopting a “serverless” approach. We’ll take a look at the cloud services “AWS Lambda”, “Google Cloud Functions”, and “Azure Functions” and show how they enable running ML inference.
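
As a minimal sketch of the idea (assuming a scikit-learn model pickled as "model.pkl" and bundled with the deployment package, and an AWS Lambda function invoked through API Gateway – both assumptions for illustration, not details from the talk):

    import json
    import pickle

    # Load the model at module level, outside the handler: it is then
    # deserialized only on a cold start, and warm invocations reuse it.
    with open("model.pkl", "rb") as f:
        MODEL = pickle.load(f)

    def handler(event, context):
        # With API Gateway proxy integration, the HTTP payload arrives
        # as a JSON string in event["body"].
        body = json.loads(event["body"])

        # Hypothetical request shape: {"features": [5.1, 3.5, 1.4, 0.2]}
        prediction = MODEL.predict([body["features"]])

        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"prediction": prediction.tolist()}),
        }

The same pattern carries over to Google Cloud Functions and Azure Functions: the handler signatures differ, but the idea of keeping the loaded model in module scope across warm invocations is the same.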

Date
2019-03-04
Time
19:00 - 19:45
Conference / Event
Berlin Machine Learning Group
Venue
Betahaus, Berlin

Download Slides