Generating OpenAI GPT3 completions

Generate GPT text completions using OpenAI and Supabase Edge Functions.

OpenAI provides a completions API that allows you to use their generative GPT models in your own applications.

OpenAI's API is intended to be called from the server side. Supabase offers Edge Functions to make it easy to interact with third-party APIs like OpenAI.

Set up Supabase project

If you haven't already, install the Supabase CLI and initialize your project:


supabase init

Create edge function

Scaffold a new edge function called openai by running:


supabase functions new openai

A new edge function will now exist under ./supabase/functions/openai/index.ts.

We'll design the function to take your user's query (via POST request) and forward it to OpenAI's API.

index.ts

import OpenAI from 'https://deno.land/x/openai@v4.24.0/mod.ts'

Deno.serve(async (req) => {
  const { query } = await req.json()
  const apiKey = Deno.env.get('OPENAI_API_KEY')
  const openai = new OpenAI({
    apiKey: apiKey,
  })

  // Documentation here: https://github.com/openai/openai-node
  const chatCompletion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: query }],
    // Choose model from here: https://platform.openai.com/docs/models
    model: 'gpt-3.5-turbo',
    stream: false,
  })

  const reply = chatCompletion.choices[0].message.content

  return new Response(reply, {
    headers: { 'Content-Type': 'text/plain' },
  })
})

Note that we set stream to false, which waits for the entire response to complete before returning. If you wish to stream GPT's response back to your client word by word, set stream to true.
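
If you want to try streaming, here is a minimal sketch of what the handler body could look like instead, assuming the same openai client and query set up in index.ts above. Each chunk's text delta is forwarded to the client as it arrives:

const stream = await openai.chat.completions.create({
  messages: [{ role: 'user', content: query }],
  model: 'gpt-3.5-turbo',
  stream: true,
})

// Forward each chunk's text delta to the client as plain text.
const body = new ReadableStream({
  async start(controller) {
    const encoder = new TextEncoder()
    for await (const chunk of stream) {
      controller.enqueue(encoder.encode(chunk.choices[0]?.delta?.content ?? ''))
    }
    controller.close()
  },
})

return new Response(body, {
  headers: { 'Content-Type': 'text/plain' },
})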

Create OpenAI key

You may have noticed that we read OPENAI_API_KEY from the environment and pass it to the OpenAI client, which sends it to OpenAI in the Authorization header. To generate this key, go to https://platform.openai.com/account/api-keys and create a new secret key.

After getting the key, copy it into a new file called .env.local in your ./supabase folder:


OPENAI_API_KEY=your-key-here

Run locally

Serve the edge function locally by running:


supabase functions serve --env-file ./supabase/.env.local --no-verify-jwt

Notice how we pass in the .env.local file so the function can read your OPENAI_API_KEY.

Use cURL or Postman to make a POST request to http://localhost:54321/functions/v1/openai.


curl -i --location --request POST http://localhost:54321/functions/v1/openai \
  --header 'Content-Type: application/json' \
  --data '{"query":"What is Supabase?"}'

You should see a GPT response come back from OpenAI!
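
For reference, here is a quick sketch of making the same request from TypeScript with fetch (the URL and port are the local supabase functions serve defaults used above):

const res = await fetch('http://localhost:54321/functions/v1/openai', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: 'What is Supabase?' }),
})

// The function responds with the completion as plain text.
console.log(await res.text())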

Deploy

Deploy your function to the cloud by running:


supabase functions deploy --no-verify-jwt openai
supabase secrets set --env-file ./supabase/.env.local
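
Once deployed, you can call the function from your app with supabase-js. Here is a minimal sketch, assuming a client created with your own project URL and anon key (the values below are placeholders):

import { createClient } from '@supabase/supabase-js'

// Placeholders: use your project's URL and anon key from the dashboard's API settings.
const supabase = createClient('https://your-project-ref.supabase.co', 'your-anon-key')

const { data, error } = await supabase.functions.invoke('openai', {
  body: { query: 'What is Supabase?' },
})

if (error) console.error(error)
else console.log(data) // The plain-text reply from GPT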

Go deeper

If you're interested in learning how to use this to build your own ChatGPT, read the blog post and check out the video: