Thank you for your contribution. I encountered the following error when training with toy data:
TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]
From what I found online, the following causes are commonly suggested:
- The tokenizer's maximum length is not set;
- There are blank lines in the jsonl file;
- A newer version of the transformers library is incompatible;
- There are NaN values in the data.
However, I tried the fixes for all four of these causes, and the error is still raised. I would like to know why. Thank you very much!
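In case it helps narrow things down, here is a minimal sketch of a pre-tokenization sanity check. It assumes a jsonl file where each record has a `"text"` field (adjust `text_key` for your schema; `clean_jsonl` is just a hypothetical helper name). It drops the records that typically trigger this TypeError: blank lines, missing or `null` fields, NaN values, and non-string values.

```python
import json
import math

def clean_jsonl(in_path, out_path, text_key="text"):
    """Drop blank lines and records whose text field is missing,
    None, or NaN -- the usual triggers for this TypeError -- and
    coerce remaining values to str, since the tokenizer rejects
    anything that is not a string (or a pair of strings)."""
    kept, dropped = 0, 0
    with open(in_path, encoding="utf-8") as fin, \
         open(out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            line = line.strip()
            if not line:  # blank line in the jsonl
                dropped += 1
                continue
            record = json.loads(line)
            value = record.get(text_key)
            # None covers both a missing key and an explicit JSON null;
            # json.loads parses a bare NaN token to float('nan')
            if value is None or (isinstance(value, float) and math.isnan(value)):
                dropped += 1
                continue
            record[text_key] = str(value)  # e.g. an int field would also fail
            fout.write(json.dumps(record, ensure_ascii=False) + "\n")
            kept += 1
    return kept, dropped
```

If `dropped` comes back nonzero, one of the four listed causes is still present in the data; if it is zero and the error persists, the problem is more likely in how the examples are batched or passed to the tokenizer than in the file itself.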