Open-source MCP server · multi-repo AI context
GPC shares project context across multiple Git repositories and serves bounded retrieval to Codex, Claude, Copilot and Gemini through one local MCP endpoint.
Chunks land in Postgres, vectors in Qdrant, embeddings from Ollama, and a consolidated Graphify knowledge graph in Neo4j. The canvas behind this text is the real graph of this repository — 627 nodes, 1250 edges, 49 communities — not decoration.
Live graph · generated by graphify on this repository
Why it matters
Most assistants forget the project the moment a chat ends. GPC keeps an indexed, retrievable view of your repositories outside the chat — so the next session starts with context, not silence.
Indexing pipeline
Four jobs, four local services. Each stage below activates as the pipeline processes a chunk on your machine — no round trip to any remote API.
gpc init scans files, chunks them, writes rows to Postgres. Git hooks rerun on commit / merge / checkout.
chunk#471 · gpc/mcp_server.py · 312 tokens
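The chunking step above can be sketched in a few lines. This is an illustrative sketch, not GPC's actual implementation: the function name, the line-aligned splitting and the `len // 4` token estimate are all assumptions; the content hash is the piece that lets later runs skip unchanged chunks.

```python
import hashlib

def chunk_file(text: str, max_tokens: int = 400) -> list[dict]:
    """Split a file into line-aligned chunks under a rough token budget.

    Token counts are approximated as len(line) // 4; a real pipeline
    would likely use a proper tokenizer.
    """
    chunks, current, current_tokens = [], [], 0

    def flush():
        body = "".join(current)
        chunks.append({
            "content": body,
            "tokens": current_tokens,
            # Content hash lets a later run skip re-embedding unchanged chunks.
            "hash": hashlib.sha256(body.encode()).hexdigest(),
        })

    for line in text.splitlines(keepends=True):
        line_tokens = max(1, len(line) // 4)
        if current and current_tokens + line_tokens > max_tokens:
            flush()
            current, current_tokens = [], 0
        current.append(line)
        current_tokens += line_tokens
    if current:
        flush()
    return chunks
```

Each dict maps naturally onto one Postgres row: content, token count, and the hash the git hooks compare on the next run.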
Ollama generates a vector for each chunk with nomic-embed-text. No code is sent to any remote embedding API.
[0.12, -0.83, 0.41, 0.07, …] · dim=768
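Calling a local Ollama daemon for an embedding needs no SDK; its `/api/embeddings` endpoint takes a model and a prompt and returns the vector. A minimal stdlib sketch, assuming Ollama's default port and the `nomic-embed-text` model named above (the helper names are illustrative):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # Ollama's default port

def embed_payload(text: str, model: str = "nomic-embed-text") -> bytes:
    # Request body for Ollama's /api/embeddings endpoint.
    return json.dumps({"model": model, "prompt": text}).encode()

def embed(text: str) -> list[float]:
    """Embed one chunk locally via Ollama.

    nomic-embed-text returns a 768-dim vector; requires a running
    Ollama daemon -- nothing leaves the machine.
    """
    req = urllib.request.Request(
        OLLAMA_URL,
        data=embed_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]
```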
Postgres holds canonical state — projects, files, chunks, hashes, runs. Qdrant holds the vectors that point back to it.
upsert OK · qdrant://global-project-context
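One way vectors can "point back" to canonical rows is a deterministic point ID: Qdrant accepts UUIDs as point IDs, so deriving one from the chunk's Postgres identity makes upserts idempotent. A sketch of that idea — the namespace string and key format are hypothetical, not GPC's actual scheme:

```python
import uuid

# Stable namespace so every run maps the same chunk to the same point
# (hypothetical scheme; the real one may differ).
GPC_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_URL, "global-project-context")

def point_id(project_slug: str, file_path: str, chunk_hash: str) -> str:
    """Deterministic Qdrant point ID derived from a chunk's canonical identity.

    Re-embedding an unchanged chunk overwrites the same point instead of
    creating a duplicate, and the payload can carry the Postgres row id
    for hydration at query time.
    """
    return str(uuid.uuid5(GPC_NAMESPACE, f"{project_slug}/{file_path}#{chunk_hash}"))
```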
AI clients call the MCP server. It embeds the query, searches Qdrant, hydrates chunks from Postgres, returns a bounded context block.
gpc.search → 4 hits · 1,387 tokens
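The "bounded" part of the retrieval is the interesting constraint: hits are packed into the response only while they fit a token budget. A minimal sketch of that assembly step, assuming hits arrive sorted by similarity with `path`, `content` and `tokens` fields (field names are illustrative, not GPC's schema):

```python
def bounded_context(hits: list[dict], budget_tokens: int = 2000) -> tuple[str, int]:
    """Assemble a context block from scored search hits without
    exceeding a token budget.

    Hits are assumed sorted by descending similarity; oversized hits are
    skipped so smaller, still-relevant ones can use the remaining budget.
    """
    parts, used = [], 0
    for hit in hits:
        if used + hit["tokens"] > budget_tokens:
            continue  # would overflow the budget; try the next hit
        parts.append(f"# {hit['path']}\n{hit['content']}")
        used += hit["tokens"]
    return "\n\n".join(parts), used
```

The caller (any MCP client) gets a single block plus the exact token cost, which is what the `gpc.search → 4 hits · 1,387 tokens` readout above reports.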
Live MCP demo
Pick a question. The widget shows the exact MCP call, the streamed response, and the tokens saved versus pasting the relevant files into the prompt.
gpc token-savings on this repository
Observability
Every MCP call is logged to Postgres. Search, context and token savings calls also emit compact token-economy samples that Grafana turns into an operational dashboard.
docker compose --profile observability up -d grafana
Explore the graph
Click a node to inspect it. The graph below is the real structural map of this codebase: calls, contains, references, rationales. EXTRACTED edges are solid; INFERRED are dashed.
Top communities
Graphify + Neo4j
Graphify is a separate indexer that extracts the relationship graph of a repository — what calls what, what depends on what. GPC consolidates those graphs in Neo4j by project_slug and repo_slug, so a multi-repo product becomes one navigable system map.
Most AI tools read files. Graphify maps relationships. Together they answer questions like which worker calls which service or which modules touch the same database tables, instead of just returning matching text.
Real example
To you, these folders are one product. To most AI tools they look like unrelated repositories — unless you re-explain the connection every prompt.
GPC indexes them under one project slug. When the assistant asks about checkout, it can pull the web component, the API route, the payment worker and the admin status screen in one bounded retrieval, plus the Graphify edges that connect them.
API, workers, web, admin and mobile live in separate folders.
Postgres + Qdrant store chunks and vectors. Neo4j holds the consolidated Graphify graph.
Codex, Claude, Copilot and Gemini query the same project through the GPC MCP server.
Frequently asked
Short answers to the questions developers ask most when evaluating GPC.
How does GPC give multiple repositories shared context?
Point GPC at each repository on your machine and it indexes them under a single project slug in Postgres + Qdrant. When any MCP-capable assistant — Codex, Claude, Copilot, Gemini — calls gpc.search, it receives bounded chunks from every repo that belongs to the project, plus the Graphify edges that connect them in Neo4j.
What is MCP?
MCP is an open protocol for giving AI assistants access to external tools and data. GPC runs as a local MCP server that exposes tools like gpc.search, gpc.graph_neighbors and gpc.graph_path. Any MCP-aware client — Codex, Claude Code, Cursor, Continue — can connect and query the same indexed project.
Does my code leave my machine?
No. Indexing, embedding (via Ollama's nomic-embed-text) and retrieval all run on your machine. The AI coding tool that calls the MCP server still sends the retrieved chunks to its own model — but the choice of chunks, and the raw code itself, never leave your box as part of the indexing pipeline.
Does the index stay in sync with my code?
Git hooks reindex incrementally after every commit, merge and checkout. Only the chunks whose content hash changed are re-embedded — so branch switches take seconds, not minutes, and retrieval always reflects the current code.
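The hash-diff behind that incremental reindex fits in one function. A sketch of the idea, assuming both runs keep a map from a chunk key (e.g. `path#index`) to a sha256 of the chunk body; names are illustrative, not GPC's exact code:

```python
import hashlib

def digest(body: str) -> str:
    """sha256 of a chunk body, the identity the git hooks compare."""
    return hashlib.sha256(body.encode()).hexdigest()

def stale_chunks(previous: dict[str, str], current: dict[str, str]) -> set[str]:
    """Chunk keys that need re-embedding after a commit.

    A key is stale if it is new or its content hash changed; everything
    else skips the embedding step entirely, which is why a branch switch
    costs seconds rather than a full re-embed.
    """
    return {
        key for key, chunk_hash in current.items()
        if previous.get(key) != chunk_hash
    }
```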
How is this different from my editor's built-in workspace context?
Those features are tool-specific and scoped to the open workspace. GPC is tool-agnostic, runs outside the editor, and unifies context across many repositories under one project. It also exposes a structural graph (Graphify) that lets an AI answer questions no text search can — which worker calls which service, which modules share a database table.
What does the graph add over vector search?
Vector search returns text that looks similar to the query. Graphify models relationships — calls, contains, references, rationale_for — with EXTRACTED and INFERRED confidence tiers. gpc.graph_neighbors and gpc.graph_path let the AI traverse structure rather than just matching strings.
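The traversal gpc.graph_neighbors performs in Neo4j can be illustrated over a toy in-memory edge list in the shape Graphify produces — (source, relation, target, tier). The node names below are invented for the example; only the relation names and tiers come from the text above:

```python
from collections import deque

# Toy edges; tier is EXTRACTED (solid in the graph view) or INFERRED (dashed).
EDGES = [
    ("checkout_worker", "calls", "payment_service", "EXTRACTED"),
    ("payment_service", "references", "orders_table", "EXTRACTED"),
    ("admin_panel", "references", "orders_table", "INFERRED"),
]

def neighbors(node: str, depth: int = 1,
              tiers: frozenset = frozenset({"EXTRACTED", "INFERRED"})):
    """Breadth-first neighbor expansion up to `depth` hops, filtered by
    confidence tier -- the shape of a gpc.graph_neighbors call, sketched
    in Python instead of Cypher."""
    seen, frontier, out = {node}, deque([(node, 0)]), []
    while frontier:
        current, d = frontier.popleft()
        if d == depth:
            continue  # hop limit reached along this path
        for src, rel, dst, tier in EDGES:
            if src == current and tier in tiers and dst not in seen:
                seen.add(dst)
                out.append((current, rel, dst))
                frontier.append((dst, d + 1))
    return out
```

Restricting `tiers` to EXTRACTED answers "which worker calls which service" from observed structure only; including INFERRED widens the map to relationships the indexer deduced.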
How much does retrieval save in tokens?
Measured on this repository with gpc token-savings: 25,737 tokens for the full indexed corpus vs. 1,387 tokens for a focused retrieval — a 94.6% reduction. Real savings depend on project size and query specificity, but the shape is consistent: pay only for the chunks Qdrant returns.
How do I get started?
Clone the repo, run the installer (it brings up Postgres, Qdrant, Ollama and Neo4j via Docker), then gpc init in each repository you want indexed. The installer also wires the MCP config into Codex, Claude Code, Cursor and any other MCP client it detects.
Try it