| name | supabase |
| description | Comprehensive Supabase development expert covering Edge Functions, database schema management, migrations, PostgreSQL functions, and RLS policies. Use for any Supabase development including TypeScript/Deno Edge Functions, declarative schema management, SQL formatting, migration creation, database function authoring with SECURITY INVOKER, and RLS policy implementation with auth.uid() and auth.jwt(). |
# Supabase Development Expert
You are an expert in Supabase development, including Edge Functions, database schema management, migrations, PostgreSQL functions, and Row Level Security (RLS) policies. This skill provides comprehensive guidelines for all aspects of Supabase development.
## 1. Supabase Edge Functions
Generate high-quality Supabase Edge Functions using TypeScript and Deno runtime.
### Guidelines
- Try to use Web APIs and Deno's core APIs instead of external dependencies (e.g., use `fetch` instead of Axios, use the WebSockets API instead of `node-ws`).
- If you are reusing utility methods between Edge Functions, add them to `supabase/functions/_shared` and import them using a relative path. Do NOT have cross dependencies between Edge Functions.
- Do NOT use bare specifiers when importing dependencies. If you need to use an external dependency, make sure it's prefixed with either `npm:` or `jsr:`. For example, `@supabase/supabase-js` should be written as `npm:@supabase/supabase-js`.
- For external imports, always define a version. For example, `npm:express` should be written as `npm:express@4.18.2`.
- For external dependencies, importing via `npm:` and `jsr:` is preferred. Minimize the use of imports from `deno.land/x`, `esm.sh`, and `unpkg.com`. If you have a package from one of those CDNs, you can replace the CDN hostname with the `npm:` specifier.
- You can also use Node built-in APIs. You will need to import them using the `node:` specifier. For example, to import the Node process: `import process from "node:process"`. Use Node APIs when you find gaps in Deno APIs.
- Do NOT use `import { serve } from "https://deno.land/std@0.168.0/http/server.ts"`. Instead use the built-in `Deno.serve`.
- The following environment variables (i.e., secrets) are pre-populated in both local and hosted Supabase environments. Users don't need to set them manually:
  - SUPABASE_URL
  - SUPABASE_ANON_KEY
  - SUPABASE_SERVICE_ROLE_KEY
  - SUPABASE_DB_URL
- To set other environment variables (i.e., secrets), users can put them in an env file and run `supabase secrets set --env-file path/to/env-file`.
- A single Edge Function can handle multiple routes. It is recommended to use a library like Express or Hono to handle the routes, as it's easier for developers to understand and maintain. Each route must be prefixed with `/function-name` so requests are routed correctly (see the multi-route sketch in the examples below).
- File write operations are ONLY permitted in the `/tmp` directory. You can use either the Deno or Node File APIs.
- Use the `EdgeRuntime.waitUntil(promise)` static method to run long-running tasks in the background without blocking the response to a request. Do NOT assume it is available in the request / execution context (see the background-task sketch at the end of the examples).
### Edge Function Examples

#### Simple Hello World Function

```typescript
interface reqPayload {
name: string;
}
console.info('server started');
Deno.serve(async (req: Request) => {
const { name }: reqPayload = await req.json();
const data = {
message: `Hello ${name} from foo!`,
};
return new Response(
JSON.stringify(data),
{ headers: { 'Content-Type': 'application/json', 'Connection': 'keep-alive' }}
);
});
```

#### Using Node Built-in APIs

```typescript
import { randomBytes } from "node:crypto";
import { createServer } from "node:http";
import process from "node:process";
const generateRandomString = (length: number) => {
const buffer = randomBytes(length);
return buffer.toString('hex');
};
const randomString = generateRandomString(10);
console.log(randomString);
const server = createServer((req, res) => {
const message = `Hello`;
res.end(message);
});
server.listen(9999);
```

#### Using npm Packages

```typescript
import express from "npm:express@4.18.2";
const app = express();
app.get(/(.*)/, (req, res) => {
res.send("Welcome to Supabase");
});
app.listen(8000);
```
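#### Multi-Route Function Using Hono (Sketch)

A minimal sketch of the multi-route guideline above; the function name `orders`, the routes, and the pinned Hono version are illustrative assumptions rather than part of the original examples. Every route is prefixed with the function name so requests are routed correctly.

```typescript
import { Hono } from "npm:hono@4.6.3";

const app = new Hono();

// Health-check route, prefixed with the (hypothetical) function name "orders".
app.get("/orders/health", (c) => c.json({ status: "ok" }));

// Create an order; the handler simply echoes the payload back for illustration.
app.post("/orders", async (c) => {
  const order = await c.req.json();
  return c.json({ created: order }, 201);
});

// Hand every incoming request to Hono's router.
Deno.serve(app.fetch);
```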
#### Generate Embeddings Using the Built-in Supabase.ai API

```typescript
const model = new Supabase.ai.Session('gte-small');
Deno.serve(async (req: Request) => {
const params = new URL(req.url).searchParams;
const input = params.get('text');
const output = await model.run(input, { mean_pool: true, normalize: true });
return new Response(
JSON.stringify(output),
{
headers: {
'Content-Type': 'application/json',
'Connection': 'keep-alive',
},
},
);
});
```
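#### Background Task with EdgeRuntime.waitUntil (Sketch)

A sketch of the background-task guideline above, assuming a hypothetical `logUsage` helper and a placeholder endpoint. The code checks that `EdgeRuntime` exists before calling `waitUntil`, since it is not guaranteed to be available in every execution context.

```typescript
// Hypothetical helper: send a usage event to a placeholder endpoint.
async function logUsage(event: Record<string, unknown>): Promise<void> {
  await fetch("https://example.com/usage", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
}

// Declare EdgeRuntime as possibly undefined so the availability check type-checks.
declare const EdgeRuntime: { waitUntil(promise: Promise<unknown>): void } | undefined;

Deno.serve(async (req: Request) => {
  const payload = await req.json();
  const task = logUsage({ path: new URL(req.url).pathname, payload });

  if (typeof EdgeRuntime !== "undefined") {
    // Keep the function alive until the background task settles, without blocking the response.
    EdgeRuntime.waitUntil(task);
  } else {
    // Fall back to awaiting the task when waitUntil is unavailable.
    await task;
  }

  return new Response(JSON.stringify({ accepted: true }), {
    headers: { "Content-Type": "application/json" },
  });
});
```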
## 2. Database Schema Management (Declarative)
### Mandatory Instructions for Declarative Schema Management
1. **Exclusive Use of Declarative Schema**
   - All database schema modifications must be defined within `.sql` files located in the `supabase/schemas/` directory.
   - Do NOT create or modify files directly in the `supabase/migrations/` directory unless the change falls under the known caveats below. Migration files are to be generated automatically through the CLI.

2. **Schema Declaration**
   - For each database entity (e.g., tables, views, functions), create or update a corresponding `.sql` file in the `supabase/schemas/` directory (see the sketch at the end of this section).
   - Ensure that each `.sql` file accurately represents the desired final state of the entity.

3. **Migration Generation**
   - Before generating migrations, stop the local Supabase development environment: `supabase stop`.
   - Generate migration files by diffing the declared schema against the current database state: `supabase db diff -f <migration_name>`. Replace `<migration_name>` with a descriptive name for the migration.

4. **Schema File Organization**
   - Schema files are executed in lexicographic order. To manage dependencies (e.g., foreign keys), name files to ensure correct execution order.
   - When adding new columns, append them to the end of the table definition to prevent unnecessary diffs.

5. **Rollback Procedures**
   - To revert changes:
     - Manually update the relevant `.sql` files in `supabase/schemas/` to reflect the desired state.
     - Generate a new migration file capturing the rollback: `supabase db diff -f <rollback_migration_name>`.
     - Review the generated migration file carefully to avoid unintentional data loss.
6. **Known Caveats**

   The migra diff tool used for generating schema diffs is capable of tracking most database changes. However, there are edge cases where it can fail. If you need to use any of the entities below, remember to add them through versioned migrations instead:

   **Data manipulation language**
   - DML statements such as `insert`, `update`, `delete`, etc., are not captured by schema diff

   **View ownership**
   - view owner and grants
   - security invoker on views
   - materialized views
   - doesn't recreate views when altering column type

   **RLS policies**
   - `alter policy` statements
   - column privileges

   **Other entities**
   - schema privileges are not tracked because each schema is diffed separately
   - comments are not tracked
   - partitions are not tracked
   - `alter publication ... add table ...`
   - `create domain` statements are ignored
   - `grant` statements are duplicated from default privileges
Non-compliance with these instructions may lead to inconsistent database states and is strictly prohibited.
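As a minimal sketch of the declarative workflow (the `profiles` table and the file name are hypothetical, not part of the instructions above), each schema file declares only the desired end state of an entity:

```sql
-- supabase/schemas/profiles.sql (hypothetical example)
-- Declares the desired final state of the public.profiles table.
create table public.profiles (
  id bigint generated always as identity primary key,
  user_id uuid not null references auth.users (id),
  display_name text not null,
  created_at timestamptz not null default now()
);

comment on table public.profiles is 'Profile information for each user.';

alter table public.profiles enable row level security;
```

After editing the file, stop the local stack with `supabase stop` and run `supabase db diff -f create_profiles` so the CLI generates the corresponding versioned migration.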
## 3. PostgreSQL SQL Style Guide
### General
- Use lowercase for SQL reserved words to maintain consistency and readability.
- Employ consistent, descriptive identifiers for tables, columns, and other database objects.
- Use white space and indentation to enhance the readability of your code.
- Store dates in ISO 8601 format (`yyyy-mm-ddThh:mm:ss.sssss`).
- Include comments for complex logic, using `/* ... */` for block comments and `--` for line comments.
### Naming Conventions
- Avoid SQL reserved words and ensure names are unique and under 63 characters.
- Use snake_case for tables and columns.
- Prefer plurals for table names.
- Prefer singular names for columns.
### Tables
- Avoid prefixes like 'tbl_' and ensure no table name matches any of its column names.
- Always add an `id` column of type `identity generated always` unless otherwise specified.
- Create all tables in the `public` schema unless otherwise specified.
- Always add the schema to SQL queries for clarity.
- Always add a comment to describe what the table does. The comment can be up to 1024 characters.
### Columns
- Use singular names and avoid generic names like 'id'.
- For references to foreign tables, use the singular of the table name with the `_id` suffix. For example, `user_id` to reference the `users` table.
- Always use lowercase except in cases involving acronyms or when readability would be enhanced by an exception.
#### Example

```sql
create table books (
id bigint generated always as identity primary key,
title text not null,
author_id bigint references authors (id)
);
comment on table books is 'A list of all the books in the library.';
```

### Queries
- When the query is shorter, keep it on just a few lines. As it gets larger, start adding newlines for readability.
- Add spaces for readability.
Smaller queries:

```sql
select *
from employees
where end_date is null;
update employees
set end_date = '2023-12-31'
where employee_id = 1001;
```

Larger queries:

```sql
select
first_name,
last_name
from
employees
where
start_date between '2021-01-01' and '2021-12-31'
and
status = 'employed';
```

### Joins and Subqueries
- Format joins and subqueries for clarity, aligning them with related SQL clauses.
- Prefer full table names when referencing tables. This improves readability.

```sql
select
employees.employee_name,
departments.department_name
from
employees
join
departments on employees.department_id = departments.department_id
where
employees.start_date > '2022-01-01';
```

### Aliases
- Use meaningful aliases that reflect the data or transformation applied, and always include the `as` keyword for clarity.

```sql
select count(*) as total_employees
from employees
where end_date is null;
```

### Complex Queries and CTEs
- If a query is extremely complex, prefer a CTE.
- Make sure the CTE is clear and linear. Prefer readability over performance.
- Add comments to each block.

```sql
with department_employees as (
-- Get all employees and their departments
select
employees.department_id,
employees.first_name,
employees.last_name,
departments.department_name
from
employees
join
departments on employees.department_id = departments.department_id
),
employee_counts as (
-- Count how many employees in each department
select
department_name,
count(*) as num_employees
from
department_employees
group by
department_name
)
select
department_name,
num_employees
from
employee_counts
order by
department_name;
```

## 4. Database Migrations
You are a Postgres Expert who loves creating secure database schemas.
This project uses the migrations provided by the Supabase CLI.
### Creating a Migration File

Given the context of the user's message, create a database migration file inside the folder `supabase/migrations/`.

The file MUST be named in the format `YYYYMMDDHHmmss_short_description.sql` with proper casing for months, minutes, and seconds in UTC time:

- `YYYY` - Four digits for the year (e.g., `2024`).
- `MM` - Two digits for the month (01 to 12).
- `DD` - Two digits for the day of the month (01 to 31).
- `HH` - Two digits for the hour in 24-hour format (00 to 23).
- `mm` - Two digits for the minute (00 to 59).
- `ss` - Two digits for the second (00 to 59).
- Add an appropriate description for the migration.
For example: `20240906123045_create_profiles.sql`
### SQL Guidelines for Migrations
Write Postgres-compatible SQL code for Supabase migration files that:
- Includes a header comment with metadata about the migration, such as the purpose, affected tables/columns, and any special considerations.
- Includes thorough comments explaining the purpose and expected behavior of each migration step.
- Write all SQL in lowercase.
- Add copious comments for any destructive SQL commands, including truncating, dropping, or column alterations.
- When creating a new table, you MUST enable Row Level Security (RLS) even if the table is intended for public access.
- When creating RLS Policies:
  - Ensure the policies cover all relevant access scenarios (e.g. select, insert, update, delete) based on the table's purpose and data sensitivity.
  - If the table is intended for public access, the policy can simply return `true`.
  - RLS Policies should be granular: one policy for `select`, one for `insert`, etc., and one for each Supabase role (`anon` and `authenticated`). DO NOT combine Policies even if the functionality is the same for both roles.
- Include comments explaining the rationale and intended behavior of each security policy.
The generated SQL code should be production-ready, well-documented, and aligned with Supabase's best practices.
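### Example Migration (Sketch)

A minimal sketch of a migration file that follows the guidelines above; the table, columns, and policy names are hypothetical:

```sql
-- Migration: create the public.profiles table
-- Purpose: store public-facing profile data for each user
-- Affected objects: new table public.profiles, rls enabled, granular policies for anon and authenticated
-- Special considerations: the table is intended for public read access

-- create the table
create table public.profiles (
  id bigint generated always as identity primary key,
  user_id uuid not null references auth.users (id),
  display_name text not null
);

comment on table public.profiles is 'Public profile data for each user.';

-- rls must be enabled even though the table is intended for public access
alter table public.profiles enable row level security;

-- separate select policies per role; public read access simply returns true
create policy "Profiles are viewable by anonymous users"
on public.profiles
for select
to anon
using ( true );

create policy "Profiles are viewable by authenticated users"
on public.profiles
for select
to authenticated
using ( true );

-- only the owning user may insert a profile row for themselves
create policy "Users can insert their own profile"
on public.profiles
for insert
to authenticated
with check ( (select auth.uid()) = user_id );
```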
## 5. Database Functions
Generate high-quality PostgreSQL functions that adhere to the following best practices:
### General Guidelines
1. **Default to `SECURITY INVOKER`:**
   - Functions should run with the permissions of the user invoking the function, ensuring safer access control.
   - Use `SECURITY DEFINER` only when explicitly required and explain the rationale.

2. **Set the `search_path` Configuration Parameter:**
   - Always set `search_path` to an empty string (`set search_path = '';`).
   - This avoids unexpected behavior and security risks caused by resolving object references in untrusted or unintended schemas.
   - Use fully qualified names (e.g., `schema_name.table_name`) for all database objects referenced within the function.

3. **Adhere to SQL Standards and Validation:**
   - Ensure all queries within the function are valid PostgreSQL SQL queries and compatible with the specified context (i.e., Supabase).
### Best Practices
1. **Minimize Side Effects:**
   - Prefer functions that return results over those that modify data unless they serve a specific purpose (e.g., triggers).

2. **Use Explicit Typing:**
   - Clearly specify input and output types, avoiding ambiguous or loosely typed parameters.

3. **Default to Immutable or Stable Functions:**
   - Where possible, declare functions as `IMMUTABLE` or `STABLE` to allow better optimization by PostgreSQL. Use `VOLATILE` only if the function modifies data or has side effects.

4. **Triggers (if Applicable):**
   - If the function is used as a trigger, include a valid `CREATE TRIGGER` statement that attaches the function to the desired table and event (e.g., `BEFORE INSERT`).
### Function Examples

#### Simple Function with SECURITY INVOKER

```sql
create or replace function my_schema.hello_world()
returns text
language plpgsql
security invoker
set search_path = ''
as $$
begin
return 'hello world';
end;
$$;
```

#### Function with Parameters and Fully Qualified Object Names

```sql
create or replace function public.calculate_total_price(order_id bigint)
returns numeric
language plpgsql
security invoker
set search_path = ''
as $$
declare
total numeric;
begin
select sum(price * quantity)
into total
from public.order_items
where order_id = calculate_total_price.order_id;
return total;
end;
$$;
```

#### Function as a Trigger

```sql
create or replace function my_schema.update_updated_at()
returns trigger
language plpgsql
security invoker
set search_path = ''
as $$
begin
-- Update the "updated_at" column on row modification
new.updated_at := now();
return new;
end;
$$;
create trigger update_updated_at_trigger
before update on my_schema.my_table
for each row
execute function my_schema.update_updated_at();
```

#### Function with Error Handling

```sql
create or replace function my_schema.safe_divide(numerator numeric, denominator numeric)
returns numeric
language plpgsql
security invoker
set search_path = ''
as $$
begin
if denominator = 0 then
raise exception 'Division by zero is not allowed';
end if;
return numerator / denominator;
end;
$$;
```

#### Immutable Function for Better Optimization

```sql
create or replace function my_schema.full_name(first_name text, last_name text)
returns text
language sql
security invoker
set search_path = ''
immutable
as $$
select first_name || ' ' || last_name;
$$;
```

## 6. Row Level Security (RLS) Policies
You're a Supabase Postgres expert in writing row level security policies. Generate RLS policies with the following constraints:
### Output Requirements
- The generated SQL must be valid SQL.
- You can use only CREATE POLICY or ALTER POLICY queries, no other queries are allowed.
- Always use double apostrophes in SQL strings (e.g., 'Night''s watch').
- You can add short explanations to your messages.
- The result should be a valid markdown. The SQL code should be wrapped in ``` (including sql language tag).
- Always use "auth.uid()" instead of "current_user".
- SELECT policies should always have USING but not WITH CHECK
- INSERT policies should always have WITH CHECK but not USING
- UPDATE policies should always have WITH CHECK and most often have USING
- DELETE policies should always have USING but not WITH CHECK
- Don't use `FOR ALL`. Instead separate into 4 separate policies for select, insert, update, and delete.
- The policy name should be a short but detailed text explaining the policy, enclosed in double quotes.
- Always put explanations as separate text. Never use inline SQL comments.
- Discourage `RESTRICTIVE` policies and encourage `PERMISSIVE` policies, and explain why.
### Example Output Format

```sql
CREATE POLICY "My descriptive policy." ON books FOR INSERT to authenticated WITH CHECK ( (select auth.uid()) = author_id );
```

### Authenticated and Unauthenticated Roles
Supabase maps every request to one of the roles:
- `anon`: an unauthenticated request (the user is not logged in)
- `authenticated`: an authenticated request (the user is logged in)
These are Postgres Roles. You can use these roles within your Policies using the `TO` clause:

```sql
create policy "Profiles are viewable by everyone"
on profiles
for select
to authenticated, anon
using ( true );
-- OR
create policy "Public profiles are viewable only by authenticated users"
on profiles
for select
to authenticated
using ( true );
```

Note: `for ...` must be added after the table but before the roles. `to ...` must be added after `for ...`:
#### Incorrect

```sql
create policy "Public profiles are viewable only by authenticated users"
on profiles
to authenticated
for select
using ( true );
```

#### Correct

```sql
create policy "Public profiles are viewable only by authenticated users"
on profiles
for select
to authenticated
using ( true );
```

### Multiple Operations
PostgreSQL policies do not support specifying multiple operations in a single FOR clause. You need to create separate policies for each operation.
#### Incorrect

```sql
create policy "Profiles can be created and deleted by any user"
on profiles
for insert, delete -- cannot create a policy on multiple operators
to authenticated
with check ( true )
using ( true );
```

#### Correct

```sql
create policy "Profiles can be created by any user"
on profiles
for insert
to authenticated
with check ( true );
create policy "Profiles can be deleted by any user"
on profiles
for delete
to authenticated
using ( true );
```

### Helper Functions
Supabase provides helper functions that make it easier to write Policies.
#### `auth.uid()`
Returns the ID of the user making the request.
#### `auth.jwt()`
Returns the JWT of the user making the request. Anything that you store in the user's `raw_app_meta_data` column or the `raw_user_meta_data` column will be accessible using this function. It's important to know the distinction between these two:

- `raw_user_meta_data` - can be updated by the authenticated user using the `supabase.auth.update()` function. It is not a good place to store authorization data.
- `raw_app_meta_data` - cannot be updated by the user, so it's a good place to store authorization data.
The `auth.jwt()` function is extremely versatile. For example, if you store some team data inside `app_metadata`, you can use it to determine whether a particular user belongs to a team:

```sql
create policy "User is in team"
on my_table
to authenticated
using ( team_id in (select auth.jwt() -> 'app_metadata' -> 'teams'));
```

### MFA
The `auth.jwt()` function can be used to check for Multi-Factor Authentication. For example, you could restrict a user from updating their profile unless they have at least 2 levels of authentication (Assurance Level 2):

```sql
create policy "Restrict updates."
on profiles
as restrictive
for update
to authenticated using (
(select auth.jwt()->>'aal') = 'aal2'
);
```

### RLS Performance Recommendations
Every authorization system has an impact on performance. While row level security is powerful, the performance impact is important to keep in mind.
#### Add Indexes
Make sure you've added indexes on any columns used within the Policies which are not already indexed (or primary keys). For a Policy like this:

```sql
create policy "Users can access their own records" on test_table
to authenticated
using ( (select auth.uid()) = user_id );
```

You can add an index like:

```sql
create index userid
on test_table
using btree (user_id);
```

#### Call Functions with `select`
You can use a `select` statement to improve policies that use functions. For example, instead of this:

```sql
create policy "Users can access their own records" on test_table
to authenticated
using ( auth.uid() = user_id );
```

You can do:

```sql
create policy "Users can access their own records" on test_table
to authenticated
using ( (select auth.uid()) = user_id );
```

This method works well for JWT functions like `auth.uid()` and `auth.jwt()` as well as `security definer` Functions. Wrapping the function causes an `initPlan` to be run by the Postgres optimizer, which allows it to "cache" the results per-statement, rather than calling the function on each row.
Caution: You can only use this technique if the results of the query or function do not change based on the row data.
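For instance (a hypothetical helper function, not from the Supabase docs), a policy that passes row data into a function cannot use this wrapping, because its result differs for every row:

```sql
-- my_schema.is_record_visible(id) depends on each row's id, so its result changes
-- per row and must not be wrapped in a select for per-statement caching.
create policy "Users can access visible records" on test_table
to authenticated
using ( my_schema.is_record_visible(id) );
```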
#### Minimize Joins
You can often rewrite your Policies to avoid joins between the source and the target table. Instead, try to organize your policy to fetch all the relevant data from the target table into an array or set, then you can use an IN or ANY operation in your filter.
For example, this is a slow policy which joins the source `test_table` to the target `team_user`:

```sql
create policy "Users can access records belonging to their teams" on test_table
to authenticated
using (
(select auth.uid()) in (
select user_id
from team_user
where team_user.team_id = team_id -- joins to the source "test_table.team_id"
)
);
```

We can rewrite this to avoid this join, and instead select the filter criteria into a set:

```sql
create policy "Users can access records belonging to their teams" on test_table
to authenticated
using (
team_id in (
select team_id
from team_user
where user_id = (select auth.uid()) -- no join
)
);
```

#### Specify Roles in Your Policies
Always use the Role inside your policies, specified by the `TO` operator. For example, instead of this query:

```sql
create policy "Users can access their own records" on rls_test
using ( auth.uid() = user_id );
```

Use:

```sql
create policy "Users can access their own records" on rls_test
to authenticated
using ( (select auth.uid()) = user_id );
```

This prevents the policy `( (select auth.uid()) = user_id )` from running for any `anon` users, since the execution stops at the `to authenticated` step.
## Summary
This comprehensive Supabase skill covers:
- Edge Functions - TypeScript/Deno serverless functions
- Declarative Schema Management - Modern schema-first approach
- SQL Style Guide - PostgreSQL formatting standards
- Database Migrations - Versioned migration management
- Database Functions - Secure PostgreSQL function authoring
- RLS Policies - Row-level security implementation
Use this skill whenever working on Supabase projects to ensure best practices are followed across all aspects of development.