We’re excited to share our newest case study — one that’s a little different from the rest. This time, we’re the client.
We built an AI-powered assistant for the Step 7 Consulting website using n8n, Supabase, OpenAI, and Firecrawl — turning years of published content into a conversational knowledge base that gives visitors instant, accurate answers 24×7.
The Challenge
The Step 7 website is more than a marketing site. It’s a deep knowledge base — case studies, service descriptions, process documentation, and capability content published across dozens of pages. The challenge was making that knowledge instantly accessible to visitors without requiring them to navigate, search, or read through multiple pages to find what they need.
We needed an assistant that could:
- Answer questions accurately using only our published content — no hallucination, no outside knowledge
- Handle both natural language queries (“how do you help with workflow automation?”) and exact keyword queries (“do you work with Otter.ai?”)
- Remember context across a conversation so follow-up questions work naturally
- Update automatically whenever website content changed
- Enforce strict confidentiality rules around client names, pricing, and personnel
No off-the-shelf AI assistant or chatbot could meet all of these requirements, so we built the system from scratch, from the chat UI to the backend search logic.
Our Solution
We designed a three-workflow RAG (Retrieval-Augmented Generation) architecture in n8n — treating the platform not as a simple automation tool, but as an engineering platform capable of supporting a production-grade AI system.
At the core of the solution is a continuous content pipeline that:
- Detects page changes on the WordPress site in real time via webhooks
- Automatically scrapes, cleans, and re-ingests updated content into Supabase
- Generates OpenAI vector embeddings per content chunk for semantic search
- Extracts keywords as structured tags to power exact keyword matching
- Retrieves the most relevant content using a custom three-signal hybrid search function on every visitor query
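As a hedged illustration of the ingestion side of this pipeline, the chunking step might look something like the sketch below. The chunk size and overlap values are illustrative assumptions, not Step 7's production settings:

```python
def chunk_text(text: str, max_words: int = 200, overlap: int = 40) -> list[str]:
    """Split cleaned page text into overlapping word-window chunks.

    The overlap keeps ideas that span a chunk boundary retrievable
    from at least one chunk. Values here are illustrative only.
    """
    words = text.split()
    chunks, start = [], 0
    while start < len(words):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
        start += max_words - overlap  # slide window, keeping `overlap` words
    return chunks
```

In a pipeline like the one described, each chunk would then be sent to an embeddings endpoint and stored in Supabase alongside its extracted keyword tags.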
Why This Architecture Matters
Most RAG implementations use two search signals: vector similarity, sometimes supplemented by full-text search. We added a third, tag-based exact matching, the kind of lookup you would expect when searching for a specific part number.
When a prospect types a specific tool name, acronym, or product name like "Salesforce", "n8n", or "Otter.ai", vector search often fails. The query is too short and too specific for semantic similarity to work reliably. Full-text search helps but struggles with product names that don't stem well. Tag matching solves this definitively: keywords defined per page act as guaranteed retrieval triggers, regardless of how the query is phrased.
The three signals are scored together using Reciprocal Rank Fusion (RRF), with tag matches carrying a weighted boost that ensures exact keyword hits always surface at the top.
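The fusion step can be sketched as follows. The weight value and the RRF smoothing constant `k` below are illustrative assumptions, not the production configuration:

```python
from collections import defaultdict

def rrf_fuse(ranked_lists: dict, weights: dict, k: int = 60) -> list:
    """Combine several ranked result lists with Reciprocal Rank Fusion.

    ranked_lists: signal name -> list of doc ids, best result first.
    weights: signal name -> multiplier (e.g. to boost tag matches).
    k: the standard RRF smoothing constant.
    """
    scores = defaultdict(float)
    for signal, docs in ranked_lists.items():
        w = weights.get(signal, 1.0)
        for rank, doc_id in enumerate(docs, start=1):
            scores[doc_id] += w / (k + rank)  # RRF contribution, weighted
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical query where only the tag signal finds the exact match "d":
results = rrf_fuse(
    {
        "vector":   ["a", "b", "c"],
        "fulltext": ["b", "c", "d"],
        "tags":     ["d"],           # exact keyword hit
    },
    weights={"tags": 3.0},           # boost ensures tag hits surface first
)
```

With the boost applied, the tag-matched document outranks results that appear in two of the three signal lists, which is the behavior described above: exact keyword hits always surface at the top.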
This approach is directly applicable to any business with a content-rich knowledge base — internal documentation, product catalogs, support libraries, or part number databases where exact match retrieval is critical.
The Results
- Visitors get instant, accurate answers grounded exclusively in Step 7 content
- Exact keyword retrieval works reliably for tool names, service categories, and proper nouns
- Content stays automatically current — page updates trigger re-ingestion with no manual intervention
- Session memory makes follow-up questions work naturally across a conversation
- Full auditability via n8n execution history
This is what practical AI implementation looks like: applied where it creates real value, built with engineering discipline, and designed to run reliably without ongoing maintenance.
Why This Matters for Organizations
Many organizations sit on years of valuable content — website pages, documentation, product information, support articles — but lack the infrastructure to make it conversational and instantly accessible.
This case study demonstrates how Step 7:
- Designs RAG architectures using n8n, vector databases, and LLMs
- Solves real retrieval problems that vector search alone can’t handle
- Applies AI where it creates measurable value — not for its own sake
- Builds systems that stay current automatically, without ongoing manual effort
You can read the full case study here: https://step7consulting.com/work/case-study-n8n-rag-ai-assistant-knowledge-base/
Ready to make your knowledge base conversational?
If your organization has a content-rich website, internal documentation, a product catalog, or a support library, you already have the raw material for an AI-powered assistant. We can design and build a RAG system that makes that knowledge instantly accessible — without hallucination, without manual updates, and without replacing your existing content infrastructure.
Schedule a Consultation to talk through your use case.
