Distributed TensorFlow (TensorFlow Dev Summit 2017)

  • Published Jul 5, 2024
  • TensorFlow gives you the flexibility to scale up to hundreds of GPUs, train models with a huge number of parameters, and customize every last detail of the training process. In this talk, Derek Murray gives you a bottom-up introduction to Distributed TensorFlow, showing all the tools available for harnessing this power.
    Further reading:
    - tensorflow.org/extend/archite...
    - tensorflow.org/how_tos/distri...
    - tensorflow.org/tutorials/esti...
    Visit the TensorFlow website for all session recordings: goo.gl/bsYmza
    Subscribe to the Google Developers channel at goo.gl/mQyv5L
    event: TensorFlow Dev Summit 2017
  • Science & Technology
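The talk covers scaling training across many machines; in TensorFlow 1.x the starting point is a cluster definition mapping job names ("ps", "worker") to task addresses. A minimal sketch with placeholder hostnames, shown as the plain Python dict that would be handed to `tf.train.ClusterSpec` in real code:

```python
# Sketch of a TF 1.x-style cluster definition: two parameter-server (ps)
# tasks and two worker tasks. Hostnames and ports are placeholder
# assumptions; in real code this dict is passed to tf.train.ClusterSpec.
cluster = {
    "ps":     ["ps0.example.com:2222", "ps1.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
}

# Each task is addressed by a device string of the form /job:<name>/task:<i>,
# which is how ops and variables get pinned to specific tasks.
device_names = [
    "/job:%s/task:%d" % (job, i)
    for job, hosts in sorted(cluster.items())
    for i in range(len(hosts))
]
print(device_names)
# → ['/job:ps/task:0', '/job:ps/task:1', '/job:worker/task:0', '/job:worker/task:1']
```

In the real API, each process would then start a `tf.train.Server(cluster, job_name=..., task_index=...)` for its own task.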

Comments • 15

  • @AmilaManoj
    @AmilaManoj 5 years ago +2

    Contents
    Objectives: 3:41
    Intro: 4:00
    Distbelief inspiration: 5:51
    Replication: 7:55
    In-graph replication: 8:21
    Between-graph replication: 9:54
    Variable placement: 11:17
    Device placement summary: 14:39
    Sessions and servers: 15:14
    Fault tolerance: 18:51
    High-level APIs: 25:08

    • @Kajahzao
      @Kajahzao 5 years ago

      Hi Google, can't you fix this TOC in the comments? Thank you!

  • @user-sj9lc7vr6e
    @user-sj9lc7vr6e 6 years ago

    Really cool, great work.

  • @raghkripa4666
    @raghkripa4666 7 years ago +2

    Nice talk. Any pointers to the presentation charts?

  • @redfishleo6578
    @redfishleo6578 6 years ago

    10:38
    Is there any difference in the smaller graphs between these two tasks?
    Isn't the subgraph (output = ... / loss = ...) the same?
    Or how is it transformed into two (or maybe more) subgraphs?

  • @tina3829
    @tina3829 6 years ago

    Thanks! Great talk.

  • @kimbring2727
    @kimbring2727 3 years ago

    Nice presentation!

  • @ryonakamura6055
    @ryonakamura6055 7 years ago

    A clear explanation of multi-node, multi-GPU training with TensorFlow. I recommend watching it together with Akiba-san's ChainerMN explanation.

  • @thesawatdatta
    @thesawatdatta 5 years ago

    What if I have images as input data for between-graph training on multiple nodes? Do I need to put the image database on each of those workers? Please guide me.

  • @jugsma6676
    @jugsma6676 6 years ago

    A question: does Distributed TensorFlow use a round-robin algorithm?
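    For context: TensorFlow 1.x's `tf.train.replica_device_setter` does, by default, assign variables to parameter-server tasks in round-robin order. A minimal pure-Python sketch of that placement strategy (the `round_robin_placer` helper is hypothetical, not a TensorFlow API):

    ```python
    # Sketch of round-robin variable placement across PS tasks, in the
    # spirit of tf.train.replica_device_setter's default strategy.
    # round_robin_placer is a hypothetical illustration, not a TF API.
    from itertools import cycle

    def round_robin_placer(num_ps_tasks):
        """Return a function mapping each successive variable to a PS device string."""
        tasks = cycle(range(num_ps_tasks))
        def place(_variable_name):
            return "/job:ps/task:%d" % next(tasks)
        return place

    place = round_robin_placer(2)
    devices = [place(v) for v in ["weights_1", "biases_1", "weights_2"]]
    print(devices)
    # → ['/job:ps/task:0', '/job:ps/task:1', '/job:ps/task:0']
    ```

    The real API also accepts other strategies (e.g. a load-balancing strategy that weighs variables by size) via its `ps_strategy` argument.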

  • @Fanchiotti
    @Fanchiotti 6 years ago

    Thank you sir.

  • @utkarsh_dubey
    @utkarsh_dubey 7 years ago +1

    Awesome stuff :)
    Wisdom of Mycroft Holmes (Mark Gatiss)

  • @xdxn2010
    @xdxn2010 3 years ago

    22:10, how can the chief worker restore the failed PS tasks?

  • @chefboyrdee1
    @chefboyrdee1 6 years ago

    Just saying, dist-keras uses Spark too... sorry, just wanted to leave that here.

  • @user-en4th3yz4j
    @user-en4th3yz4j 6 years ago

    The Chinese translation has problems.