Whenever I attempt to fetch data from the URL below, I get an error. The URL and port are correct, because opening the same URL in a browser returns the expected response. However, calling it from my API route makes the fetch fail every time.
The error message reads:
TypeError: fetch failed
at Object.fetch (node:internal/deps/undici/undici:14062:11)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
{ cause: Error: connect ECONNREFUSED ::1:7071
      at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1487:16)
      at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17) {
    errno: -4078,
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '::1',
    port: 7071 } }
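For what it's worth, the refused address is `::1`, the IPv6 loopback, so it looks like Node (which since v17 no longer reorders DNS results to prefer IPv4) resolves `localhost` to IPv6 while the local Functions host is presumably listening only on IPv4. A small workaround I have been experimenting with, where `toIPv4Loopback` is my own helper name rather than anything from a library:

```javascript
// Hypothetical helper (my own name, not a library API): rewrite
// "localhost" to the IPv4 loopback, in case Node resolves localhost
// to ::1 while the local Functions host only listens on 127.0.0.1.
function toIPv4Loopback(urlString) {
  const url = new URL(urlString);
  if (url.hostname === "localhost") {
    url.hostname = "127.0.0.1";
  }
  return url.toString();
}

console.log(toIPv4Loopback("http://localhost:7071/api/getChatGPTSuggestion"));
// "http://127.0.0.1:7071/api/getChatGPTSuggestion"
```

Any URL that does not use `localhost` passes through unchanged, so it is safe to wrap the fetch target unconditionally.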
Within the API route:
export async function GET(request: Request) {
  try {
    // Connect to the Microsoft Azure Functions endpoint
    const response = await fetch(
      `${process.env.VERCEL_URL || "http://localhost:7071"}/api/getChatGPTSuggestion`,
      {
        cache: "no-store",
      }
    );
    const textData = await response.text();
    return new Response(JSON.stringify(textData.trim()), {
      status: 200,
    });
  } catch (error) {
    console.log("error inside get route", error);
    if (error instanceof Error) {
      return new Response(error.message, { status: 500 });
    }
    return new Response("Internal Server Error", { status: 500 });
  }
}
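Possibly a second, unrelated bug I noticed while writing this up: as far as I know, Vercel's `VERCEL_URL` environment variable contains only the deployment hostname (e.g. `my-app.vercel.app`), without the `https://` scheme, so the template string above would also build an invalid URL once deployed. A sketch of the fallback I am considering, where `getBaseUrl` is my own helper name:

```javascript
// VERCEL_URL holds only the hostname (e.g. "my-app.vercel.app"), not a
// full URL, so prepend the scheme before using it as a fetch base.
function getBaseUrl(vercelUrl) {
  return vercelUrl ? `https://${vercelUrl}` : "http://localhost:7071";
}

console.log(getBaseUrl(undefined));           // "http://localhost:7071"
console.log(getBaseUrl("my-app.vercel.app")); // "https://my-app.vercel.app"
```

In the route itself this would be called as `getBaseUrl(process.env.VERCEL_URL)` before appending the `/api/getChatGPTSuggestion` path.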
Regarding the cloud function:
const { app } = require('@azure/functions')
const openai = require('../../lib/openai')

app.http('getChatGPTSuggestion', {
  methods: ['GET'],
  authLevel: 'anonymous',
  handler: async (request, context) => {
    const response = await openai.createCompletion({
      model: 'text-davinci-003',
      prompt: '...',
      max_tokens: 100,
      temperature: 0.8,
    })
    context.log(`Http function processed request for url "${request.url}"`)
    const responseText = response.data.choices[0].text
    return {
      body: responseText,
    }
  },
})
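To narrow things down, I can hit the function directly from a terminal over each loopback address (assuming the default local port 7071 from the error above):

```shell
# Does the Functions host answer on the IPv4 loopback?
curl http://127.0.0.1:7071/api/getChatGPTSuggestion

# Does it answer on the IPv6 loopback the error points at?
# (-g stops curl from treating the [] in the IPv6 literal as a glob)
curl -g "http://[::1]:7071/api/getChatGPTSuggestion"
```

If the first command succeeds and the second is refused, that would confirm the host is IPv4-only and the problem is purely how `localhost` resolves inside the API route.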