SHARC

Semantic Code Search
Built for AI Agents

State-of-the-art code embeddings + hybrid search. Drop-in OpenAI-compatible. MCP-ready.

Works with your favorite AI tools

Anthropic
Cursor
Windsurf
Cline
OpenCode

See the difference

Traditional file-by-file exploration vs. semantic code understanding: the same question, answered with dramatically different efficiency.

Classic
> How does chat history persistence work in this codebase?

- Search (**/*.ts)
  - Found 64 files
- Search (**/*.tsx)
  - Found 96 files
- Search (chat.*history|saveChat|database)
  - Found 29 files
- Read (lib/db/schema.ts)
  - Read 174 lines
- Read (lib/db/queries.ts)
  - Read 594 lines
- Read (app/(chat)/api/history/route.ts)
  - Read 47 lines
- Read (components/sidebar-history.tsx)
  - Read 370 lines

------------------------------------------------------

Chat history uses Drizzle ORM with PostgreSQL. The schema defines User, Chat,
Message_v2, and Vote_v2 tables...
VS
Sharc MCP
> How does chat history persistence work?

- sharc - search_code (query: "chat history persistence", limit: 3) (MCP)
  - Found 3 results for query: "chat history persistence"

  1. Code snippet (typescript) [ai-chatbot]
     Location: lib/db/queries.ts:83-105
     Score: 0.9847
     ... +22 lines (ctrl+o to expand)

  2. Code snippet (typescript) [ai-chatbot]
     Location: lib/db/queries.ts:157-180
     Score: 0.9623
     ... +18 lines (ctrl+o to expand)

  3. Code snippet (typescript) [ai-chatbot]
     Location: app/(chat)/api/chat/route.ts:162-173
     Score: 0.9418
     ... +8 lines (ctrl+o to expand)

------------------------------------------------------

Chat persistence uses Drizzle ORM with saveChat() for creation and
getChatsByUserId() for retrieval with cursor-based pagination.
10x fewer tool calls
33x less code to read
15x faster results
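The similarity scores in the transcript above (0.9847, 0.9623, ...) are the essence of semantic search: each snippet and the query are embedded as vectors, and snippets are ranked by how close their vectors are. A minimal sketch of cosine-similarity ranking — the vectors and file names here are made up for illustration, not SHARC's actual embeddings:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy 3-dimensional "embeddings" (real models use hundreds of dimensions).
query = [0.9, 0.1, 0.3]  # embedding of "chat history persistence"
snippets = {
    "lib/db/queries.ts:83-105": [0.88, 0.15, 0.28],   # close to the query
    "components/footer.tsx:1-20": [0.10, 0.90, 0.20],  # unrelated code
}

# Rank snippets by similarity to the query, highest first.
ranked = sorted(snippets, key=lambda k: cosine(query, snippets[k]), reverse=True)
```

Hybrid search layers lexical matching on top of this, and a reranker re-scores the top candidates, but vector similarity is what lets "chat history persistence" find `saveChat()` without any keyword overlap.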

Product Roadmap

Where we've been and where we're headed

Q2 2025
Complete

Research & Experimentation

Basic embeddings and semantic search exploration

Internal evaluation loops, dataset curation, and early retrieval baselines.
Q3 2025
Complete

Core Models

SHARC embedding model, reranking & MCP prototype

Iterated on training recipes, reranker calibration, and MCP tool design.
Q4 2025
Current

Public Launch

MCP tool + Inference API goes live

Docs, onboarding, rate limits, and production telemetry.
2026+
Planned

Cloud Vector DB

Launch of SHARC-hosted vector database

Managed ingestion, auth, backups, and multi-tenant isolation.
Developer Setup

Get started in minutes

OpenAI SDK compatible - just change the base URL. Or add the MCP server to your client config:

{
  "mcpServers": {
    "sharc": {
      "command": "npx",
      "args": ["-y", "@sharc/mcp"],
      "env": {
        "SHARC_API_KEY": "sk_..."
      }
    }
  }
}

Then just ask: "Index this codebase and search for auth logic"
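For direct API use, an OpenAI-compatible endpoint means any OpenAI client works once the base URL is changed. A stdlib-only sketch of what such a request looks like on the wire — the base URL, model name, and key below are placeholders, not documented SHARC values:

```python
import json
import urllib.request

def build_embedding_request(base_url: str, api_key: str, texts: list[str]) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style POST /embeddings request."""
    payload = json.dumps({
        "model": "sharc-embed",  # placeholder model name
        "input": texts,
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/embeddings",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder URL and key; urllib.request.urlopen(req) would send it.
req = build_embedding_request(
    "https://api.sharc.example/v1", "sk_...", ["def saveChat(chat) { ... }"]
)
```

An OpenAI SDK client does the same thing by pointing its `base_url` at the server and calling `embeddings.create`; nothing else in existing code needs to change.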

SHARC
Embeddings API
Code Reranking
MCP Server

Questions about SHARC?

We'd love to hear from you. Reach out anytime for docs, onboarding, or integration guidance.