Limit detection for GPT-3/4
under review
MindMac
Just a quality-of-life improvement, and maybe a good suggestion/hint for new users.
If the conversation context grows too large for the current GPT model, MindMac currently surfaces the error: "This model's maximum context length is XXXX tokens. However, your messages resulted in XXXX tokens. Please reduce the length of the messages."
It would be nice to point out that the user can switch the current conversation to a model with a larger context window: either via a button that automatically selects the next fitting GPT model (by calculating the token count of the current conversation), or simply as a hint in the error message. A sketch of how that selection could work follows below.
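For illustration, here is a minimal sketch of the "next fitting model" selection. The model names, context limits, and token count are placeholder assumptions for the example, not MindMac's actual code:

```swift
// Hypothetical model list with context limits (illustrative values,
// not MindMac's internal data), ordered by context size.
struct GPTModel {
    let name: String
    let maxContextTokens: Int
}

let models = [
    GPTModel(name: "gpt-3.5-turbo", maxContextTokens: 4_096),
    GPTModel(name: "gpt-4", maxContextTokens: 8_192),
    GPTModel(name: "gpt-3.5-turbo-16k", maxContextTokens: 16_384),
    GPTModel(name: "gpt-4-32k", maxContextTokens: 32_768),
]

/// Returns the smallest model whose context window still fits the
/// conversation, or nil if no available model is large enough.
func nextFittingModel(forTokenCount tokens: Int) -> GPTModel? {
    models
        .filter { $0.maxContextTokens >= tokens }
        .min { $0.maxContextTokens < $1.maxContextTokens }
}

// Example: a 9,000-token conversation overflows gpt-4 (8,192)
// and would be routed to gpt-3.5-turbo-16k.
if let suggestion = nextFittingModel(forTokenCount: 9_000) {
    print("Switch this conversation to \(suggestion.name)?")
} else {
    print("Conversation is too long for every available model; please shorten it.")
}
```

The same lookup could drive either variant of the suggestion: the button would switch to the returned model directly, while the hint would just mention its name in the error message.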
Suggested by
Sascha