If I run

python tortoise/do_tts.py --text "I'm going to speak this" --voice random --preset fast

I get an error. Here is the full output:
C:\Users\giggl>python tortoise/do_tts.py --text "I'm going to speak this" --voice random --preset fast
GPT2InferenceModel has generative capabilities, as prepare_inputs_for_generation is explicitly defined. However, it doesn't directly inherit from GenerationMixin. From 👉v4.50👈 onwards, PreTrainedModel will NOT inherit from GenerationMixin, and this model will lose the ability to call generate and other related functions.
- If you're using trust_remote_code=True, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
- If you are the owner of the model architecture code, please modify your model class such that it inherits from GenerationMixin (after PreTrainedModel, otherwise you'll get an exception).
- If you are not the owner of the model architecture class, please contact the model code owner to update it.
D:\Python\Lib\site-packages\torch\nn\utils\weight_norm.py:143: FutureWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.
WeightNorm.apply(module, name, dim)
Generating autoregressive samples..
0%| | 0/24 [00:00<?, ?it/s]
Traceback (most recent call last):
File "C:\Users\giggl\tortoise\do_tts.py", line 41, in <module>
gen, dbg_state = tts.tts_with_preset(args.text, k=args.candidates, voice_samples=voice_samples, conditioning_latents=conditioning_latents,
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
preset=args.preset, use_deterministic_seed=args.seed, return_deterministic_state=True, cvvp_amount=args.cvvp_amount)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Python\Lib\site-packages\api.py", line 332, in tts_with_preset
return self.tts(text, **settings)
~~~~~~~~^^^^^^^^^^^^^^^^^^
File "D:\Python\Lib\site-packages\api.py", line 416, in tts
codes = autoregressive.inference_speech(auto_conditioning, text_tokens,
do_sample=True,
...<5 lines>...
max_generate_length=max_mel_tokens,
**hf_generate_kwargs)
File "D:\Python\Lib\site-packages\tortoise\models\autoregressive.py", line 560, in inference_speech
gen = self.inference_model.generate(inputs, bos_token_id=self.start_mel_token, pad_token_id=self.stop_mel_token, eos_token_id=self.stop_mel_token,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Python\Lib\site-packages\torch\nn\modules\module.py", line 1940, in __getattr__
raise AttributeError(
f"'{type(self).__name__}' object has no attribute '{name}'"
)
AttributeError: 'GPT2InferenceModel' object has no attribute 'generate'
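For context on what I think is happening: the deprecation warning at the top seems to explain the crash. From transformers v4.50 onwards, PreTrainedModel no longer inherits from GenerationMixin, so tortoise's GPT2InferenceModel (which only subclasses the pretrained base) loses its .generate method, and the lookup falls through to torch's __getattr__ and raises. If that reading is right, the fix the warning suggests would be to add GenerationMixin to the class bases in tortoise/models/autoregressive.py, listed after the pretrained base class. I haven't verified this against the tortoise source; the stub classes below just demonstrate the mechanism with stand-ins (PreTrainedModelStub and GenerationMixinStub are my own placeholder names, not real transformers classes):

```python
# Stand-in for GPT2PreTrainedModel: provides no generate() of its own.
class PreTrainedModelStub:
    pass

# Stand-in for transformers.GenerationMixin: supplies generate().
class GenerationMixinStub:
    def generate(self):
        return "generated"

# Without the mixin, generate() is missing -- the AttributeError above.
class BrokenInferenceModel(PreTrainedModelStub):
    pass

# With the mixin added after the pretrained base (as the warning advises),
# Python's MRO finds generate() on the mixin.
class FixedInferenceModel(PreTrainedModelStub, GenerationMixinStub):
    pass

print(hasattr(BrokenInferenceModel(), "generate"))  # False
print(FixedInferenceModel().generate())             # "generated"
```

Alternatively, if editing the installed package isn't an option, pinning transformers to a pre-4.50 release (e.g. pip install "transformers<4.50") might avoid the removal entirely, though I haven't tested that either.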