Is this only for on-device inference, or will on-device training also be supported? Relatedly, if you are binding tightly to TensorFlow, would you please pull in the AMD implementation of TensorFlow acceleration (ROCm) so that developers with GPUs from different vendors can easily perform model training?
Ok 👍