Links:
www.patreon.com/CompactAI
github.com/myshell-ai/OpenVoice
Can we run it on a CPU-based system?
Yes. Set the device variable to "cpu" in the code, and in se_extractor.py replace
model = WhisperModel(model_size, device="cuda", compute_type="float16")
with
model = WhisperModel(model_size, device="cpu", compute_type="float32")
I have pointed out some issues with this model in the video; there are certainly other, better options you can try.
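If you would rather not hard-code the device, here is a rough sketch (assuming se_extractor.py loads the model via faster_whisper and that torch is already installed, as elsewhere in the repo) that falls back to CPU automatically when CUDA is not available:

import torch
from faster_whisper import WhisperModel

model_size = "medium"  # keep whatever size se_extractor.py already uses

# Use the GPU with float16 when CUDA is available; otherwise fall back
# to CPU with float32, since float16 is generally not usable on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
compute_type = "float16" if device == "cuda" else "float32"

model = WhisperModel(model_size, device=device, compute_type=compute_type)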
@compactai Thanks man. It worked.