tortoise-tts/tortoise/api_fast.py attempts to deserialize object on a CUDA device but torch.cuda.is_available() is False #855

@kentontroy

Description

BACKGROUND:

The __init__() method of the TextToSpeech class correctly contains:

    if torch.backends.mps.is_available():
        self.device = torch.device('mps')
    ...

The nightly build of PyTorch recognizes the mps device on my Mac M3 Ultra.

PROBLEM:

When running:

time python do_tts.py \
    --output_path results \
    --preset ultra_fast \
    --voice test \
    --text "Frank Herbert is the best science fiction author ever"

The following error occurs:

"tortoise-tts/tortoise/api_fast.py attempts to deserialize object on a CUDA device but torch.cuda.is_available() is False"

RESOLUTION THAT WORKED:

To resolve this, I changed the hifidecoder load inside TextToSpeech.__init__ in tortoise-tts/tortoise/api_fast.py to pass map_location:

    def __init__(self, autoregressive_batch_size=None, models_dir=MODELS_DIR,
                 enable_redaction=True, kv_cache=False, use_deepspeed=False, half=False, device=None,
                 tokenizer_vocab_file=None, tokenizer_basic=False):
        ...
        hifi_model = torch.load(get_model_path('hifidecoder.pth'), map_location=self.device)

The original torch.load call does not specify map_location, so a checkpoint whose tensors were saved on a CUDA device fails to deserialize on a machine where torch.cuda.is_available() is False.
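To illustrate why this works, here is a minimal, self-contained sketch of the pattern. The pick_device helper is a hypothetical name (not from the tortoise-tts codebase) that mirrors the device-selection logic quoted above; the round-trip save/load shows that map_location remaps storages to the local device instead of whatever device the checkpoint was saved on.

```python
import os
import tempfile

import torch


def pick_device() -> torch.device:
    """Hypothetical helper: prefer CUDA, then Apple-silicon MPS, then CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")


device = pick_device()

# Round-trip a tiny checkpoint. With map_location set, torch.load remaps
# every storage to `device`, so a checkpoint written on a CUDA box still
# loads on a CPU- or MPS-only machine.
path = os.path.join(tempfile.mkdtemp(), "demo.pth")
torch.save({"w": torch.zeros(3)}, path)
state = torch.load(path, map_location=device)
print(state["w"].device.type)  # matches the locally available device
```

Passing map_location="cpu" (or a device object, as here) is the documented way to load CUDA-saved checkpoints on CUDA-less hardware.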

Please confirm or suggest a better way.

Thanks - very nice project!
kentontroy
