Stop Coding Prompts: Building a "Self-Correcting" AI Agent with n8n & Supabase

The first-principles approach to fixing AI hallucinations.


Most developers try to fix AI errors by writing longer, more complex system prompts. They add rules like "Do not use Wix for SaaS apps" or "Always recommend Bubble for marketplaces."

This approach is fundamentally flawed. As the rule set grows, the prompt becomes unmanageable, token costs spike, and the model's attention dilutes, leading to "instruction drift."

πŸ’‘ The Superior Alternative: Dynamic Few-Shot Prompting (RAG for Examples)
Instead of hard-coding rules, we store hundreds of "correct decisions" in a vector database and dynamically inject only the 3-4 most relevant examples into the prompt at runtime.

This guide walks through building an automated pipeline using n8n and Supabase where you can "teach" the AI simply by adding a row to a spreadsheet.

The Architecture

We need three components to decouple the logic (Prompt) from the knowledge (Examples):

  • The Brain (OpenAI): To generate embeddings and final answers.
  • The Memory (Supabase): A vector database storing "Problem β†’ Solution" pairs.
  • The Nervous System (n8n): Orchestrating the flow of data between the user, the DB, and the LLM.

Why This Works

LLMs are pattern matchers, not logic engines. If you show the model how you solved a similar problem in the past, it mimics the pattern. In practice this is far more reliable than trying to describe the same logic as abstract rules in English.
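
To make the contrast concrete, here is a minimal sketch in plain TypeScript. The type, the example text, and the retrieveSimilarExamples helper are illustrative placeholders for the vector search we build in the phases below, not part of the actual workflow.

type StoredExample = {
  content: string; // the stored "User Requirement"
  metadata: { recommendation: string; explanation: string };
};

// Placeholder for the vector search wired up in Phases 1-4.
declare function retrieveSimilarExamples(query: string, k: number): Promise<StoredExample[]>;

// Approach 1: an ever-growing rule list baked into the system prompt.
const ruleBasedPrompt = `You are a website recommendation expert.
Rule 1: Do not use Wix for SaaS apps.
Rule 2: Always recommend Bubble for marketplaces.
(...dozens more rules, all sent on every request, all competing for attention)`;

// Approach 2: dynamic few-shot prompting - only the stored decisions most
// similar to the current request are injected at runtime.
async function buildPrompt(userRequirements: string): Promise<string> {
  const examples = await retrieveSimilarExamples(userRequirements, 4);
  const precedents = examples
    .map(e => `Requirement: ${e.content}\nRecommendation: ${e.metadata.recommendation}\nWhy: ${e.metadata.explanation}`)
    .join("\n\n");
  return `You are a website recommendation expert.
Here is how we handled similar requests in the past:

${precedents}

User Request: ${userRequirements}`;
}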


Phase 1: The Vector Store (Supabase)

We need a database that understands the "semantic meaning" of a requirement, not just keyword matching.

  1. Create a Project at Supabase.
  2. Navigate to the SQL Editor and run the following setup script.

This enables the pgvector extension and creates a similarity search function.

⚠️ Note We are using 1024 dimensions here, so the embedding model must be configured to output 1024-dimensional vectors (e.g., text-embedding-3-large with its dimensions parameter set to 1024). The column definition and the model output must match exactly.
-- 1. Enable Vector Extension
create extension if not exists vector;

-- 2. Create the Storage Table
create table examples (
  id uuid primary key default gen_random_uuid(),
  content text,       -- The "User Requirement" (what we search against)
  metadata jsonb,     -- The full context (Recommendation + Explanation)
  embedding vector(1024) -- MUST match your embedding model dimensions
);

-- 3. Create the Similarity Search Function
create or replace function relevant_examples (
  query_embedding vector(1024),
  match_threshold float,
  match_count int
)
returns table (
  id uuid,
  content text,
  metadata jsonb,
  similarity float
)
language plpgsql
as $$
begin
  return query
  select
    examples.id,
    examples.content,
    examples.metadata,
    1 - (examples.embedding <=> query_embedding) as similarity
  from examples
  where 1 - (examples.embedding <=> query_embedding) > match_threshold
  order by examples.embedding <=> query_embedding
  limit match_count;
end;
$$;
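
Once the function exists, you can sanity-check it outside n8n. Below is a minimal sketch using the supabase-js and openai SDKs; the project URL, environment variable names, and threshold value are placeholders, and n8n's Supabase Vector Store node performs the equivalent RPC call for you.

import { createClient } from "@supabase/supabase-js";
import OpenAI from "openai";

const supabase = createClient("https://YOUR_PROJECT.supabase.co", process.env.SUPABASE_SERVICE_KEY!);
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function findPrecedents(userRequirements: string) {
  // Embed the query with the same model and dimension count used at indexing time.
  const res = await openai.embeddings.create({
    model: "text-embedding-3-large",
    input: userRequirements,
    dimensions: 1024, // must match vector(1024) in the table
  });

  // Call the SQL function above via Supabase's RPC endpoint.
  const { data, error } = await supabase.rpc("relevant_examples", {
    query_embedding: res.data[0].embedding,
    match_threshold: 0.3, // placeholder; tune against your own data
    match_count: 4,
  });
  if (error) throw error;
  return data; // [{ id, content, metadata, similarity }, ...]
}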

Phase 2: The Ground Truth (n8n Data Tables)

Instead of managing vectors directly, we use n8n's internal Data Tables as our CMS (Content Management System).

  1. In n8n, create a Data Table named Website Recommendation Examples.
  2. Add three string columns:
    • requirements (The input scenario)
    • recommendation (The correct output)
    • explanation (The "Chain of Thought")

Seed Data: Add 10-15 distinct examples.

  • Row 1: "Simple portfolio for a photographer" β†’ "Squarespace"
  • Row 2: "Multi-vendor marketplace like Airbnb" β†’ "Bubble or Sharetribe"
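
Each row should carry its own reasoning so the retrieved example doubles as a chain of thought. A fully populated row (values are illustrative) might look like this:

// One complete Data Table row - the explanation is what the model later imitates.
const seedRow = {
  requirements: "Multi-vendor marketplace like Airbnb with payments and host profiles",
  recommendation: "Bubble or Sharetribe",
  explanation:
    "Marketplaces need user-generated listings, split payments, and custom workflows; " +
    "static site builders cannot model two-sided transactions.",
};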

Phase 3: The Learning Loop (Sync Workflow)

We need a workflow that watches the n8n table and updates the Supabase vector store. This ensures that when you add a new "lesson," the AI learns it automatically.

  1. Trigger: Schedule (Every Day)
  2. Get Rows (n8n Table): Fetch all rows where updatedAt is recent (e.g., last 48 hours).
  3. Loop over Items: Process one example at a time.
  4. Delete Old Version (Supabase):
    Crucial step: before adding the updated example, delete any existing row with the same source ID to prevent duplicates (see the sketch after this list).
  5. Add Document (Supabase Vector Store):
    • Model: text-embedding-3-large
    • Dimensions: 1024
    • Map Data:
      • Page Content β†’ requirements (This is what we embed).
      • Metadata β†’ Map the full object (id, recommendation, explanation).
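
If you ever need to replicate steps 4-5 outside the built-in nodes, or just want to see what they do, a minimal sketch with supabase-js and the OpenAI SDK looks like this. Table and field names follow the setup above; syncExample, the row type, and the placeholder credentials are assumptions for illustration.

import { createClient } from "@supabase/supabase-js";
import OpenAI from "openai";

const supabase = createClient("https://YOUR_PROJECT.supabase.co", process.env.SUPABASE_SERVICE_KEY!);
const openai = new OpenAI();

type TableRow = { id: string; requirements: string; recommendation: string; explanation: string };

async function syncExample(row: TableRow) {
  // Step 4: delete any previously synced copy of this row (the crucial step),
  // matching on the source row id we store inside metadata.
  await supabase.from("examples").delete().eq("metadata->>id", row.id);

  // Embed only the requirement text - this is what we search against.
  const res = await openai.embeddings.create({
    model: "text-embedding-3-large",
    input: row.requirements,
    dimensions: 1024, // must match vector(1024)
  });

  // Step 5: insert the fresh version with the full context in metadata.
  const { error } = await supabase.from("examples").insert({
    content: row.requirements,
    metadata: { id: row.id, recommendation: row.recommendation, explanation: row.explanation },
    embedding: res.data[0].embedding,
  });
  if (error) throw error;
}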

Phase 4: The Thinking Loop (Inference Workflow)

This is the live agent that responds to users.

  1. Trigger: Chat or Webhook (Input: user_requirements)
  2. Retrieve (Supabase Vector Store):
    • Operation: Get Many (Retrieve Ranked Documents).
    • Query: {{ $trigger.user_requirements }}
    • Limit: 4
    • Function Name: relevant_examples
  3. Format Context (Code Node): Take the 4 JSON results and flatten them into a single string of precedents (a sketch of this node follows the system prompt below).
  4. LLM Chain (OpenAI GPT-4):

System Prompt:

You are a website recommendation expert.
To help you answer the user, here are examples of how we handled similar requests in the past:

{{ $json.formatted_examples }}

Analyze the user's request based on these precedents:
User Request: {{ $trigger.user_requirements }}
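
The Format Context node from step 3 can be a few lines of JavaScript in an n8n Code node. This sketch assumes each retrieved item exposes pageContent and metadata, possibly nested under a document key; the exact output shape varies between n8n versions, so check the node's actual output and adjust the field access.

// n8n Code node: flatten the retrieved documents into one prompt-ready string.
const docs = $input.all();

const formatted_examples = docs
  .map((item, i) => {
    const doc = item.json.document ?? item.json; // tolerate either output shape
    const meta = doc.metadata ?? {};
    return `Example ${i + 1}
Requirement: ${doc.pageContent}
Recommendation: ${meta.recommendation}
Explanation: ${meta.explanation}`;
  })
  .join("\n\n");

return [{ json: { formatted_examples } }];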

The "Teaching" Paradigm

The beauty of this system is how you handle failure.

If the user asks for a "Complex SaaS App" and the AI incorrectly recommends "Wix":

βœ“ Do NOT edit the system prompt.
βœ“ Do NOT write code.

Action: Go to your n8n Data Table. Add a new row:
  • Requirement: "Building a complex SaaS app with auth and database."
  • Recommendation: "Bubble or Custom Code."
  • Explanation: "Wix is restricted to static content and simple ecommerce; SaaS logic requires a dedicated backend."

The next time the sync runs, this vector is added. When a user asks a similar question, the system will retrieve this specific correction and get the answer right.


Critical Analysis

While powerful, this "Managed Memory" approach has trade-offs:


Latency

The extra hop to Supabase and the embedding generation adds ~500ms-1s to the response time.

Contradictions

⚠️ Data Hygiene Is Critical
If your examples table contains contradictory advice (e.g., Row 1 says "Use Webflow for blogs" and Row 10 says "Never use Webflow"), retrieval can surface both precedents at once and the model's answers become inconsistent.

Dimension Mismatch

A common error is selecting text-embedding-3-small (1536 dimensions) in n8n while the database column is defined as vector(1024). The model's output dimensions and the column definition must match exactly, or inserts and similarity searches will fail.

Conclusion

This architecture moves you from an "AI Consumer" to an "AI Architect," building systems that improve over time through data curation rather than endless prompt tweaking.

By treating AI corrections as data entries rather than code changes, you create a sustainable, scalable approach to managing AI behavior that any team member can contribute toβ€”no programming required.

Ready to Build Your Own?

Get started with n8n and Supabase to create self-improving AI agents.

Get in Touch