Deploy Multiple ML Models on a Single Endpoint Using Multi-model Endpoints on Amazon SageMaker

  • Published Jan 15, 2025

Comments • 8

  • @OPopoola • 3 years ago • +7

    Thanks for this amazing demo. My question is: what if you have models that are not related? All the models you deployed are similar in function, with similar input data. Can you deploy models that differ in function and input parameters to the same multi-model endpoint?

  • @radoslawwlodarczyk8699 • 4 years ago • +1

    Thanks for a really nice review. You did not mention other related services, so I would like to ask: is there an additional orchestrator that would take care of registering/deregistering models and of model versioning? A champion-challenger scenario would also be interesting, where two models are invoked at once and the differences in their behaviour are logged.

  • @WatchUniverseW • 3 years ago

    The notebook shown at 10:50 seems to have a mistake when splitting X_val further.

  • @2107mann • 4 years ago

    Thank you so much, this helped a lot.

  • @bharathjc4700 • 4 years ago

    Great post! Can you please drop a link to access the notebooks?

  • @shuaitang2982 • 4 years ago

    Would you be able to share the notebook and a link to the dataset, so that it's easier to follow?

  • @harrisonkane8584 • 4 years ago

    I've followed the steps here, but when I pass new data to the model to get an inference with the invoke_endpoint() method, the response object is just an empty byte string. I'm a bit confused, because I can tell it's finding the model artifact in S3, but nothing is getting returned. Has anyone else encountered this, or does anyone know how to solve it?
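
    For context on the question above, here is a minimal sketch of invoking a single model on a SageMaker multi-model endpoint with boto3 and decoding the response body. The endpoint name, model artifact name, and payload are placeholders, not taken from the video, and the helper function is purely illustrative:

    ```python
    def invoke_model(runtime_client, endpoint_name, target_model, payload):
        """Invoke one model on a multi-model endpoint and decode the body.

        runtime_client is a boto3 "sagemaker-runtime" client; target_model
        is the relative path of the model artifact under the endpoint's
        S3 model prefix (e.g. "model-a.tar.gz").
        """
        response = runtime_client.invoke_endpoint(
            EndpointName=endpoint_name,
            TargetModel=target_model,
            ContentType="text/csv",  # must match what the serving container expects
            Body=payload,
        )
        # An empty byte string here usually means the model container produced
        # no output (often a ContentType/serialization mismatch); the
        # endpoint's CloudWatch logs are the place to look in that case.
        return response["Body"].read().decode("utf-8")
    ```

    Usage would look like `invoke_model(boto3.client("sagemaker-runtime"), "my-endpoint", "model-a.tar.gz", "1.0,2.0,3.0")`, where all four arguments are assumptions for illustration.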