I am working on an API endpoint using the new Route Handler feature in Next.js 13. The endpoint uses LangChain and streams the response directly to the frontend. When I instantiate the OpenAI wrapper class, I pass the streaming parameter and define a callback. The callback receives the stream in chunks (tokens), which are sent to the frontend to display the AI's response as it is generated.
I previously implemented this successfully with a traditional (Pages Router) API route, using the snippet below:
import { OpenAI } from "langchain/llms/openai";

export default async function handler(req, res) {
  const chat = new OpenAI({
    modelName: "gpt-3.5-turbo",
    streaming: true,
    callbacks: [
      {
        handleLLMNewToken(token) {
          res.write(token);
        },
      },
    ],
  });

  await chat.call("Write me a song about sparkling water.");
  res.end();
}
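For context, the frontend consumes this stream roughly as in the simplified sketch below. The /api/chat path is a placeholder for my actual route, and the real component updates React state instead of logging:

// Client-side sketch (hypothetical /api/chat endpoint): read the streamed
// response body and handle tokens as they arrive.
async function readStream() {
  const res = await fetch("/api/chat");
  const reader = res.body.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    // Each chunk contains one or more tokens written by handleLLMNewToken
    console.log(decoder.decode(value));
  }
}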
Now I want to adapt the API route above to the new Route Handler implementation, but so far I have not been able to get it working. I have tried several approaches without success. For instance:
import { NextResponse } from "next/server";
import { OpenAI } from "langchain/llms/openai";

export const dynamic = "force-dynamic";
export const revalidate = true;

export async function GET(req, res) {
  const chat = new OpenAI({
    modelName: "gpt-3.5-turbo",
    streaming: true,
    callbacks: [
      {
        handleLLMNewToken(token) {
          // res.write(token);
          return new NextResponse.json(token);
        },
      },
    ],
  });

  await chat.call("Write me a song about sparkling water.");
}
It seems there is no straightforward way to write the tokens to the Route Handler's response while they are still being streamed in.
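My suspicion is that, instead of writing to res, a Route Handler has to return a Response whose body is a ReadableStream, with each token enqueued from the callback. The untested sketch below is roughly what I have in mind (the model name and prompt are carried over from above, and error handling is omitted); I am not sure whether this is the idiomatic approach, which is why I am asking:

import { OpenAI } from "langchain/llms/openai";

export const dynamic = "force-dynamic";

export async function GET() {
  const encoder = new TextEncoder();

  const stream = new ReadableStream({
    async start(controller) {
      const chat = new OpenAI({
        modelName: "gpt-3.5-turbo",
        streaming: true,
        callbacks: [
          {
            handleLLMNewToken(token) {
              // Push each token into the stream as it arrives
              controller.enqueue(encoder.encode(token));
            },
          },
        ],
      });

      await chat.call("Write me a song about sparkling water.");
      controller.close();
    },
  });

  // Return the stream as the response body so the client can read it incrementally
  return new Response(stream, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}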
Any guidance on this would be greatly appreciated.