Describe the feature you'd like to request
Allow selection of the model in the AI assistant front end, for example GPT-4, GPT-3.5, or Llama-70b, or allow calling them by an alias (as Poe does), e.g. @chatgpt4.
The latter could be extended to other parts of Nextcloud, such as Talk, to call a model as a bot.
Describe the solution you'd like
I use LiteLLM to proxy requests to a variety of LLM services. This could be integrated into Nextcloud instead of LocalAI, which gave me a lot of hassle.
Even when using a single provider, the OpenAI API spec lets you specify the model per request.
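For illustration, a minimal sketch of what this looks like from the client side, assuming a LiteLLM proxy running locally and exposing an OpenAI-compatible endpoint. The URL, API key, and model aliases here are placeholders, not a confirmed Nextcloud integration:

```python
# Minimal sketch, assuming a LiteLLM proxy at localhost:4000 that speaks
# the OpenAI API. "gpt-4" and "llama-70b" are example aliases configured
# on the proxy side; swap in whatever your proxy actually defines.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # assumed LiteLLM proxy address
    api_key="sk-anything",             # the proxy can enforce its own keys
)

# The same call shape works for every backend; only "model" changes,
# which is exactly what a model drop-down in the Assistant would set.
for model in ("gpt-4", "llama-70b"):
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Summarize this paragraph ..."}],
    )
    print(model, "->", response.choices[0].message.content)
```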
Some models perform better at certain tasks or have different cost implications.
Providing a drop-down in the Assistant would let the user pick the model for the task.
This could also tie into the Nextcloud permissions system, so that different models are restricted to different user groups and different tasks.
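A hypothetical sketch of such a mapping; the group names, task names, and helper function are invented for illustration and do not correspond to any existing Nextcloud API:

```python
# Hypothetical sketch: restrict model aliases by Nextcloud group and task.
ALLOWED_MODELS = {
    # (group, task) -> model aliases that group may use for that task;
    # "any" stands for all tasks.
    ("admin", "any"): {"gpt-4", "gpt-3.5", "llama-70b"},
    ("staff", "summarize"): {"gpt-3.5", "llama-70b"},
    ("guests", "summarize"): {"llama-70b"},
}

def models_for(groups: list[str], task: str) -> set[str]:
    """Union of models permitted to any of the user's groups for a task."""
    allowed: set[str] = set()
    for group in groups:
        allowed |= ALLOWED_MODELS.get((group, task), set())
        allowed |= ALLOWED_MODELS.get((group, "any"), set())
    return allowed

# e.g. models_for(["staff"], "summarize") -> {"gpt-3.5", "llama-70b"}
```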
If the available models are aliased in a drop-down and callable as on Poe, there could be a "Callable" option where multiple models are invoked in the dialogue with @ aliases. Bracketing responses could allow nesting, where one model processes the results of another (sketched below).
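A rough sketch of how @ aliases with bracketed nesting could be parsed; the syntax, the ask_model() helper, and the aliases are all invented for illustration:

```python
# Hypothetical sketch of @alias dispatch with bracketed nesting, e.g.
#   "@gpt4 improve this: (@llama70b draft a haiku about spring)"
# Innermost bracketed calls resolve first and their output is spliced
# into the outer prompt, so one model processes another's result.
import re

ALIAS_PATTERN = re.compile(r"\(@(\w+)\s+([^()]*)\)")

def ask_model(alias: str, prompt: str) -> str:
    # Placeholder: a real implementation would route this to the
    # proxy with whatever model the alias maps to.
    return f"<{alias} answer to: {prompt}>"

def resolve(message: str) -> str:
    # Replace the innermost (@alias prompt) group until none remain.
    while (match := ALIAS_PATTERN.search(message)):
        inner = ask_model(match.group(1), match.group(2))
        message = message[:match.start()] + inner + message[match.end():]
    # A leading @alias routes whatever remains to that model.
    if (top := re.match(r"@(\w+)\s+(.*)", message, re.DOTALL)):
        return ask_model(top.group(1), top.group(2))
    return message

print(resolve("@gpt4 improve this: (@llama70b draft a haiku about spring)"))
```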
This could also allow "custom GPTs" (different stored prompts) to be stored in the Assistant. In a sense, the "headline" and "summary" buttons already are these. If they were a user-definable stored set with their own aliases, they could be combined with the model calls.
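Another hypothetical sketch, reusing ask_model() from above: stored prompts as named templates that can be paired with any model alias at call time:

```python
# Hypothetical sketch: user-definable stored prompts with their own
# aliases, combinable with a model alias when invoked.
STORED_PROMPTS = {
    "headline": "Write one punchy headline for the following text:\n",
    "summary": "Summarize the following text in three sentences:\n",
}

def run_stored_prompt(prompt_alias: str, model_alias: str, text: str) -> str:
    # Prepend the stored template, then dispatch to the chosen model.
    return ask_model(model_alias, STORED_PROMPTS[prompt_alias] + text)

# e.g. run_stored_prompt("summary", "llama70b", article_text)
```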
The benefit of the above, together with proxying calls across AI services (using something like LiteLLM), is that you effectively get an abstraction layer across AI services, to which permissions, cost management, etc. can be applied.
Describe alternatives you've considered
I can do the above outside Nextcloud using LiteLLM and PrivateGPT.
However, this creates yet another user environment and loses the Nextcloud integration.