The AWS Generative AI Workshop
Had an AI workshop today, where we went through some of the generative AI services AWS offers and how they could be used. It was reasonably high level, yet I still got something out of it.
What was striking was just how much of integrating these foundation models (something like an LLM that was pre-trained on the web) involved natural language. Like if you’re building a chatbot to have a certain personality, you’d start each context with something like:
You are a friendly life coach who is trying to be helpful. If you don’t know the answer to a question, you are to say “I don’t know.” (Question)
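Translated into actual API calls, I imagine it looks roughly like this. A minimal sketch against Amazon Bedrock, assuming Anthropic’s Claude v2 model; the model ID, region, and persona text here are my own stand-ins, and the request body format differs per model provider:

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# The persona preamble, prepended to every question
persona = (
    "You are a friendly life coach who is trying to be helpful. "
    "If you don't know the answer to a question, say \"I don't know.\""
)
question = "How do I stay motivated?"

# Claude v2's text-completion format: persona and question share one prompt
body = json.dumps({
    "prompt": f"\n\nHuman: {persona}\n\n{question}\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",
    contentType="application/json",
    body=body,
)
print(json.loads(response["body"].read())["completion"])
```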
This would extend to domain knowledge. Now, you could fine-tune a foundation model with your own data set, but an easier, albeit slightly less efficient, way would be to hand-craft a bunch of question-and-answer pairs and feed them straight into the prompt.
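Something like this, I’d imagine (the Q&A pairs here are made-up placeholders):

```python
# Hand-crafted Q&A pairs standing in for actual domain knowledge
qa_pairs = [
    ("What are your opening hours?", "We're open 9am-5pm, Monday to Friday."),
    ("Do you ship internationally?", "No, we only ship within Australia."),
]

def build_prompt(question: str) -> str:
    # Each example pair teaches the model the expected tone and facts
    examples = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in qa_pairs)
    return f"Answer using the examples below.\n\n{examples}\n\nQ: {question}\nA:"

print(build_prompt("Can I order from New Zealand?"))
```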
This may extend to agents as well (code that the model interacts with). We didn’t cover agents to a significant degree, but after looking at some of the marketing materials, it seems to me that much of the integration is instructing the model to put parameters within XML tags (so that the much “dumber” agent code can parse them out), and telling it how to interpret the structured response.
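On the agent side, I picture something like this (the `<invoke>` tag names and the `get_weather` tool are hypothetical, just to show the parsing):

```python
import xml.etree.ElementTree as ET

# The prompt would tell the model to wrap tool parameters in XML, e.g.:
instruction = (
    "To look up the weather, reply with the parameters inside XML tags, "
    "like <invoke><tool>get_weather</tool><city>Sydney</city></invoke>"
)

# Pretend this came back from the model
model_output = "<invoke><tool>get_weather</tool><city>Sydney</city></invoke>"

# The "dumber" agent code just parses the tags out
root = ET.fromstring(model_output)
tool = root.findtext("tool")  # "get_weather"
city = root.findtext("city")  # "Sydney"
print(tool, city)
```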
A lot of boilerplate, written in natural language, in the prompt just to deal with passing information around. I didn’t expect that.
Nevertheless, it was pretty interesting. And although I haven’t got the drive to look into this much further, I would like to learn more about how one might hook up external data sources and agents (something that involves vector databases available to the model and doesn’t require fine-tuning; I’m not sure how to represent these “facts” so that they’re usable by the model, or even if that’s a thing).
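If I had to guess at the shape of it, it’d be something like this sketch: “facts” stored as embedding vectors, the nearest ones to the question fetched and pasted into the prompt. The `embed()` function here is a toy stand-in for a real embedding model (e.g. Amazon Titan Embeddings), and the facts are made up, so take it as a guess rather than how it actually works:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy stand-in for a real embedding model: hash words into a
    # fixed-size vector so the example runs end to end
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec

facts = ["The office is in Brisbane.", "Support hours are 9am-5pm."]
index = [(fact, embed(fact)) for fact in facts]  # the "vector database"

def retrieve(question: str, k: int = 1) -> list[str]:
    q = embed(question)
    # Rank stored facts by cosine similarity to the question
    scored = sorted(
        index,
        key=lambda f: -np.dot(q, f[1]) / (np.linalg.norm(q) * np.linalg.norm(f[1])),
    )
    return [fact for fact, _ in scored[:k]]

# Retrieved facts get pasted into the prompt, no fine-tuning needed
context = "\n".join(retrieve("Where is the office?"))
print(f"Use these facts:\n{context}\n\nQuestion: Where is the office?")
```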