Feature description
We currently abstract LLMs through ragna.core.Assistant. While this allows users to implement arbitrary assistants, it makes it unnecessarily hard to use LLMs for other tasks in the RAG pipeline. For example, preprocessing a prompt would require a custom implementation of potentially the same LLM used for answering the prompt.
Thus, I propose we implement an Llm base class and add builtin components for all assistants we currently have, probably under the ragna.llms namespace.
I'm not sure yet whether ragna.core.Assistant should subclass the new Llm base class or whether we should use composition instead. Open to both.
I'm also not 100% sure what the interface should look like. I would like to see a small survey of other tools in the ecosystem to compare.
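To make the composition idea concrete, here is a minimal sketch. All names are placeholders I made up for illustration (Llm.complete, EchoLlm), not the actual ragna API, and the interface question above is still open:

```python
import abc


class Llm(abc.ABC):
    """Hypothetical base class exposing raw LLM access."""

    @abc.abstractmethod
    def complete(self, prompt: str) -> str:
        """Send a single prompt to the model and return its completion."""


class EchoLlm(Llm):
    """Toy stand-in for a builtin LLM that would live under ragna.llms."""

    def complete(self, prompt: str) -> str:
        return f"completion for: {prompt}"


class Assistant:
    """Composition variant: the assistant wraps an Llm rather than subclassing it."""

    def __init__(self, llm: Llm) -> None:
        self._llm = llm

    def preprocess(self, prompt: str) -> str:
        # The same Llm instance can be reused for auxiliary tasks,
        # e.g. rewriting the prompt before answering it.
        return self._llm.complete(f"Rephrase concisely: {prompt}")

    def answer(self, prompt: str) -> str:
        return self._llm.complete(self.preprocess(prompt))


assistant = Assistant(EchoLlm())
print(assistant.answer("What is RAG?"))
```

With composition, a single Llm instance can back both preprocessing and answering, which is exactly the duplication this proposal is trying to avoid; the subclassing variant would instead have Assistant inherit complete directly.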
Value and/or benefit
Less duplication for users who want to build a more complex pipeline.
Anything else?
No response