“The Assistants API allows you to build AI assistants within your own applications. An Assistant has instructions and can leverage models, tools, and knowledge to respond to user queries” — OpenAI.
Sounds great, so we’re going to take a look at how we can use the new API to do data analysis on local files.
The Assistants API represents an alternative approach to at least some uses of Retrieval Augmented Generation (RAG). So has RAG just been a stopgap measure, a temporary solution to the drawbacks of the current generation of LLMs? After all, LlamaIndex’s Jerry Liu has said that RAG is just a hack (albeit a powerful one).
Here are three specific problems inherent to LLMs that RAG currently addresses and that the Assistants API will also tackle:
- LLMs are out of date. It takes a lot of time and money to train a model, so the data it was trained on could be a couple of years old.
- LLMs don’t know about your data. It’s quite unlikely that your data was part of the training set for an LLM.
- LLMs hallucinate. Sometimes they’ll give entirely plausible responses that are completely false.
By providing the LLM with data that is relevant to your application, you can reduce these problems.
For example, if you want the LLM to produce Streamlit code, you could give it data from the latest documentation to enable it to use new features of the framework. Or, if you want to run an analysis on some specific data, then clearly giving it that data is essential. And, finally, by providing relevant data to the LLM, you increase the chance of it giving a suitable response and thus reduce the possibility of it simply making things up.
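To make the retrieval idea concrete, here is a minimal sketch of how RAG injects relevant data into a prompt. This is a toy example: a simple keyword-overlap retriever stands in for a real embedding-based search, and the documents and query are invented for illustration.

```python
import re

def tokens(text):
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, docs, k=1):
    """Return the k documents sharing the most words with the query."""
    q = tokens(query)
    return sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

def build_prompt(query, docs, k=1):
    """Prepend the retrieved context to the user's question."""
    context = "\n".join(retrieve(query, docs, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Streamlit's st.toast displays a short transient notification.",
    "Pandas DataFrames support vectorised arithmetic operations.",
]
print(build_prompt("How do I show a toast in Streamlit?", docs))
```

The LLM then answers from the supplied context rather than from its (possibly stale) training data, which is what mitigates all three problems above.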
While RAG has been used to mitigate these issues, they are now also addressed by the new Assistants API. The RAG approach makes…