

Imagine an Express service that indexes your docs, streams precise answers with citations, and plugs into CometChat so teams can summon it mid-conversation without losing context.

What You’ll Build

  • A Vercel AI SDK agent that joins conversations as a documentation expert.
  • A repeatable ingestion pipeline that writes sources into knowledge/<namespace>.
  • Retrieval and answer generation that stay grounded in stored snippets and cite sources.
  • A streaming /agent endpoint that converts Vercel AI SDK chunks into CometChat events via @cometchat/vercel-adapter.

Prerequisites

  • Node.js 18 or newer.
  • OpenAI API key available locally (e.g., .env with OPENAI_API_KEY).
  • A CometChat app with access to the AI Agents dashboard.
  • curl or an API client to call the Express endpoints.


How it works

This example builds a retrieval-augmented assistant around the Vercel AI SDK:
  • Ingest - POST /api/tools/ingest accepts URLs, markdown, plain text, or file uploads. Content is converted to markdown, deduplicated by hash, and stored under knowledge/<namespace>. Limits enforce 6 MB per file and 200 kB per text snippet.
  • Store - lib/knowledge/storage.js resolves the knowledge root (override with KNOWLEDGE_DIR), enforces namespace patterns, and exposes helpers for listing and reading documents.
  • Retrieve - lib/knowledge/retrieve.js tokenizes the query, ranks markdown files lexically, and returns excerpts plus filenames for citations. Requests fall back to the default namespace unless another is supplied.
  • Answer - lib/knowledge/agent.js wires the docsRetriever tool into an Experimental_Agent, forcing a retrieval call before every response and appending a “Sources:” footer. routes/agent.js wraps streamText and uses @cometchat/vercel-adapter to stream Server-Sent Events (SSE) that CometChat consumes.

Setup

1. Clone & install - Clone the repo, then run npm install inside vercel-knowledge-agent/agent.
2. Configure environment - Create .env with OPENAI_API_KEY. Optional knobs: PORT (default 3000), OPENAI_MODEL, TEMPERATURE, and KNOWLEDGE_DIR.
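A minimal .env sketch for local development. OPENAI_API_KEY is the only required value; the model id shown is a placeholder (this guide does not document a default), so substitute any model your key supports.

```shell
# Required: authenticates calls to the OpenAI API
OPENAI_API_KEY=sk-...your-key...
# Optional: Express listen port (default 3000)
PORT=3000
# Optional: model id (placeholder shown; use any model your key supports)
OPENAI_MODEL=gpt-4o-mini
# Optional: sampling temperature passed through routes/agent.js
TEMPERATURE=0.2
# Optional: override the knowledge root directory
KNOWLEDGE_DIR=./knowledge
```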
3. Start the server - Launch with npm start. The Express app loads environment variables via bin/www and exposes APIs at http://localhost:3000.
4. Ingest knowledge - Call POST /api/tools/ingest with a namespace, sources array, and/or multipart uploads. Responses report saved, skipped, and per-source errors.
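A sketch of a JSON ingest call. The endpoint and the namespace/sources fields come from this guide; the per-source field names ("type", "url", "text") are illustrative assumptions, so check the README's examples for the exact shape in your checkout.

```shell
# JSON body for POST /api/tools/ingest; per-source field names are assumed.
BODY='{
  "namespace": "default",
  "sources": [
    { "type": "url", "url": "https://example.com/docs/getting-started" },
    { "type": "text", "text": "# FAQ\nShort answers live here." }
  ]
}'
curl -s -X POST http://localhost:3000/api/tools/ingest \
  -H 'Content-Type: application/json' \
  -d "$BODY"
```

The response should report saved and skipped counts; re-sending the same body should skip everything thanks to hash-based deduplication.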
5. Search docs - Use POST /api/tools/searchDocs to verify retrieval scoring before enabling the agent.
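A quick retrieval check, assuming the route accepts the namespace and query fields it is described as validating:

```shell
# Query the lexical retriever directly before wiring up the agent.
BODY='{ "namespace": "default", "query": "where are ingested documents stored" }'
curl -s -X POST http://localhost:3000/api/tools/searchDocs \
  -H 'Content-Type: application/json' \
  -d "$BODY"
```

Confirm the returned excerpts and filenames match the documents you ingested.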
6. Chat with the agent - Send messages to POST /api/agents/knowledge/generate. Include toolParams.namespace to target a specific knowledge folder.
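A non-streaming chat sketch. The OpenAI-style messages array (role/content objects) is an assumed shape; adjust it to the README if your checkout differs.

```shell
# Ask the agent a question scoped to the "default" namespace.
BODY='{
  "messages": [
    { "role": "user", "content": "How do I override the knowledge root?" }
  ],
  "toolParams": { "namespace": "default" }
}'
curl -s -X POST http://localhost:3000/api/agents/knowledge/generate \
  -H 'Content-Type: application/json' \
  -d "$BODY"
```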
7. Stream via CometChat - Point CometChat at the /agent endpoint for SSE streaming once you are ready to integrate.
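To watch the stream yourself before handing the URL to CometChat, a curl sketch (the Accept header and messages shape are assumptions; the README's APIs section has the canonical example):

```shell
# -N disables curl's output buffering so SSE chunks print as they arrive.
BODY='{ "messages": [ { "role": "user", "content": "Summarize the setup steps." } ] }'
curl -N -s -X POST http://localhost:3000/agent \
  -H 'Content-Type: application/json' \
  -H 'Accept: text/event-stream' \
  -d "$BODY"
```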

Project Structure

Key paths referenced throughout this guide:
  • agent/bin/www - loads .env and starts the Express server.
  • agent/routes/agent.js - streaming /agent endpoint (SSE via @cometchat/vercel-adapter).
  • agent/routes/knowledge.js - ingest, search, and generate routes.
  • agent/lib/knowledge/agent.js - Experimental_Agent setup and the docsRetriever tool.
  • agent/lib/knowledge/ingest.js, retrieve.js, storage.js - ingestion, lexical retrieval, and filesystem helpers.
  • knowledge/<namespace>/ - markdown documents saved by ingestion.

Step 1 - Configure the Knowledge Agent

agent/lib/knowledge/agent.js (view in repo):
  • Registers a docsRetriever tool that defaults to the active namespace and caps results at 20.
  • Hard-requires OPENAI_API_KEY before creating the Experimental_Agent.
  • Sets the system prompt so every reply triggers retrieval, stays grounded, and cites sources (for example, Sources: getting-started.md).
  • Honors OPENAI_MODEL and TEMPERATURE through the Express layer (routes/agent.js), so you can tune behaviour without code changes.

Step 2 - Expose Knowledge APIs

agent/routes/knowledge.js (view in repo):
  • POST /api/tools/ingest handles JSON or multipart payloads, converts PDFs and HTML to markdown, deduplicates by content hash, and reports detailed counts.
  • POST /api/tools/searchDocs validates namespace plus query, returning ranked excerpts and warnings for unreadable files.
  • POST /api/agents/knowledge/generate sanitizes messages, composes a chat prompt, and returns the agent’s grounded answer plus tool call traces.
Supporting modules:
  • ingest.js - normalization, dedupe, PDF parsing, slug management.
  • retrieve.js - lexical scoring, excerpt creation, namespace fallbacks.
  • storage.js - namespace validation and filesystem helpers (override the root with KNOWLEDGE_DIR).

Step 3 - Run the Server Locally

Expected base URL: http://localhost:3000
1. Install dependencies - Run npm install inside agent/ (already covered in Setup if you completed it once).
2. Start Express - Run npm start. Logs appear on stdout; bin/www loads .env before binding the port.
3. Ingest docs - POST to /api/tools/ingest using the JSON or multipart examples from the README.
4. Query search - POST to /api/tools/searchDocs and confirm the excerpts and filenames look correct.
5. Ask the agent - POST to /api/agents/knowledge/generate with a messages array. Include toolParams.namespace to target non-default folders.
6. Stream responses - For CometChat testing, POST the same payload to /agent and consume the SSE stream (a curl example is in the README under APIs).

Key endpoints:
  • POST /api/tools/ingest - add docs into knowledge/<namespace>
  • POST /api/tools/searchDocs - retrieve ranked snippets plus citations
  • POST /api/agents/knowledge/generate - non-streaming chat responses
  • POST /agent - SSE stream compatible with @cometchat/vercel-adapter

Step 4 - Deploy the API

  • Keep /api/agents/knowledge/generate and /agent reachable over HTTPS.
  • Store secrets (OPENAI_API_KEY, optional OPENAI_MODEL, TEMPERATURE, KNOWLEDGE_DIR) in your hosting provider’s secret manager.
  • Re-run ingestion whenever docs change; the dedupe logic skips unchanged content.

Step 5 - Configure in CometChat

1. Open Dashboard - Sign in at app.cometchat.com.
2. Navigate - Choose your app → AI Agents.
3. Add agent - Set Provider=Vercel AI SDK, choose an Agent ID (for example, knowledge), and paste the public /agent URL.
4. Headers (optional) - If your Express service expects auth headers, add them as JSON under Headers.
5. Enable - Save and ensure the toggle shows Enabled.
The server auto-imports additional tools provided by CometChat, so you can layer chat actions on top of the docs retriever without changing backend code.

Step 6 - Customize in UI Kit Builder

1. Open variant - From AI Agents, select your new agent to open the UI Kit Builder.
2. Customize & Deploy - Choose Customize and Deploy.
3. Adjust settings - Tune theme, layout, and behaviour, and verify the Vercel Knowledge agent is attached to the variant.
4. Preview - Use the live preview to test retrieval answers and any frontend actions.

Step 7 - Integrate

Once your agent is connected, embed it wherever users need doc answers:
  • Widget - Widget Builder
  • React - React UI Kit (pre-built UI components)
The knowledge agent you configured above is part of the exported configuration - no extra glue code required.

Step 8 - Test Your Setup

1. Agent answers with citations - POST to /api/agents/knowledge/generate and confirm the response ends with a Sources: footer.
2. SSE works with CometChat - Stream the same payload to /agent; events should include threadId, runId, and partial deltas.
3. Namespaces respond correctly - Switch toolParams.namespace and verify documents load from the matching folder.
4. Dashboard shows agent enabled - In CometChat → AI Agents, ensure your Vercel agent toggle remains ON.
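Check 1 can be scripted as a smoke test. This sketch assumes the generate route returns the answer text in its response body; the message content is arbitrary.

```shell
# Minimal grounding check: every reply should carry a "Sources:" footer
# naming the cited files (e.g., "Sources: getting-started.md").
RESPONSE=$(curl -s -X POST http://localhost:3000/api/agents/knowledge/generate \
  -H 'Content-Type: application/json' \
  -d '{ "messages": [ { "role": "user", "content": "ping" } ] }')
case "$RESPONSE" in
  *"Sources:"*) echo "citations present" ;;
  *) echo "missing Sources footer - check the agent system prompt" ;;
esac
```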