Step 5: Add HTTP route to handle incoming chat messages

To complete your app backend, you will use cosine similarity on the stored vector embeddings to find the products that most closely match a shopper's query. These products, along with a prompt, will be passed into LangChain, which will then use an OpenAI model to respond to the shopper's question. You will also include product information so the response can display the products LangChain recommends and link to their store pages. Finally, you will stream the response from LangChain to the shopper's chat window, which lets you show the shopper that the chatbot is typing while the response is being generated.

To start, install the LangChain and zod npm packages. The zod package is used to provide a parser to LangChain for reliably extracting structured data from the LLM response.

- Open the Gadget command palette using Cmd + P or Ctrl + P.
- Enter > in the palette to allow you to run yarn commands.
- Enter yarn add langchain zod to install the LangChain client and zod.

The temperature parameter is set to 0 to ensure that the response from OpenAI is deterministic. The streaming parameter is set to true so that the response from OpenAI is streamed in as it is generated, rather than waiting for the entire response to be generated before returning it. The chain is defined using the model, prompt, and parser.

LangChain model selection: the OpenAI model and LLMChain are used in this tutorial as an example of how to use chains and prompt templates with LangChain. LangChain has a variety of different models, including chat-specific models, that might be worth investigating for your app.

Now that LangChain is set up, you can define the route parameters. The route accepts a message from the shopper as part of the request body, and this message is passed into the prompt template. To define the message parameter, use the schema option in the route module's options object; the schema option defines the JSON schema for the request body. Define the parameter at the bottom of routes/POST-chat.js.

You are now ready to invoke the chain and stream the response back to the shopper. The chain's call method is used to invoke it, and takes two parameters: an object containing the input variables for the prompt and an array of callback handlers. You define a new Readable stream and immediately send it as a response using await reply.send(stream); once you have done this, any additional data pushed to the stream is sent back to the route caller. Finally, chain.call() is invoked to generate a response to the shopper's question, taking the message and productString as input for the shopper's question and the products whose descriptions most closely match it, respectively. An additional callback is defined using handleLLMNewToken to stream the response from LangChain to the shopper; this callback is invoked every time a new token is generated by LangChain. The ConsoleCallbackHandler outputs the response from LangChain to the Gadget Logs, so you can see the exact input and output from LangChain.
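Below is a minimal sketch of the LangChain setup described above, placed at the top of routes/POST-chat.js. It is illustrative rather than the tutorial's exact code: the import paths follow the pre-0.1 `langchain` package layout, the `OPENAI_API_KEY` environment variable, the zod field names (`answer`, `productIds`), and the prompt text are all assumptions introduced here.

```js
// routes/POST-chat.js (sketch) -- import paths follow the pre-0.1 "langchain"
// package layout and may differ in newer releases
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";
import { StructuredOutputParser } from "langchain/output_parsers";
import { z } from "zod";

// zod schema describing the structured fields expected back from the LLM;
// the field names here are placeholders, not the tutorial's exact schema
const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    answer: z.string().describe("answer to the shopper's question"),
    productIds: z.array(z.string()).describe("ids of recommended products"),
  })
);

// prompt template fed with the shopper's message and the matched product
// descriptions, plus the parser's format instructions (illustrative wording)
const prompt = new PromptTemplate({
  template:
    "Answer the shopper's question using only the products listed.\n" +
    "{format_instructions}\nProducts: {productString}\nQuestion: {message}",
  inputVariables: ["message", "productString"],
  partialVariables: { format_instructions: parser.getFormatInstructions() },
});

// temperature 0 keeps the response deterministic; streaming: true emits
// tokens as they are generated instead of waiting for the full completion
const model = new OpenAI({
  temperature: 0,
  streaming: true,
  openAIApiKey: process.env.OPENAI_API_KEY, // assumed env var name
});

// the chain ties together the model, prompt, and parser
const chain = new LLMChain({ llm: model, prompt, outputParser: parser });
```

Injecting the parser's format instructions as a partial variable means every prompt sent to the model ends with instructions to reply in JSON matching the zod schema, which is what lets the structured fields be extracted reliably.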
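And a hedged sketch of the route handler and its options object, continuing the same hypothetical routes/POST-chat.js file. It assumes Gadget's Fastify-backed route signature ({ request, reply, api, logger, connections }) and that route options are attached to the exported handler; the exact options convention may differ by Gadget framework version, and the productString placeholder stands in for the cosine-similarity lookup built in the earlier steps.

```js
// ...continuing routes/POST-chat.js (chain comes from the setup above)
import { Readable } from "stream";
import { ConsoleCallbackHandler } from "langchain/callbacks";

const route = async ({ request, reply, api, logger, connections }) => {
  // create a live Readable stream and send it immediately; anything pushed
  // to it afterwards is forwarded to the caller as it arrives
  const stream = new Readable({ read() {} });
  await reply.send(stream);

  // placeholder: in the full route this holds the descriptions of the products
  // whose stored embeddings are closest to the shopper's question
  const productString = "";

  // invoke the chain with the prompt's input variables and callback handlers
  await chain.call({ message: request.body.message, productString }, [
    // logs LangChain's exact inputs and outputs to the Gadget Logs
    new ConsoleCallbackHandler(),
    {
      // called every time LangChain generates a new token; push it to the shopper
      handleLLMNewToken(token) {
        stream.push(token);
      },
    },
  ]);

  // close the stream once the full response has been generated
  stream.push(null);
};

// JSON schema for the request body: the route accepts a single string "message"
// (check Gadget's route docs for the options convention in your framework version)
route.options = {
  schema: {
    body: {
      type: "object",
      properties: {
        message: { type: "string" },
      },
      required: ["message"],
    },
  },
};

export default route;
```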