Global Project Context

Open-source MCP server · multi-repo AI context

Your codebase, indexed. Every AI tool, connected.

GPC shares project context across multiple Git repositories and serves bounded retrieval to Codex, Claude, Copilot and Gemini through one local MCP endpoint.

Chunks land in Postgres, vectors in Qdrant, embeddings come from Ollama, and the consolidated Graphify knowledge graph lives in Neo4j. The canvas behind this text is the real graph of this repository — 627 nodes, 1,250 edges, 49 communities — not decoration.

Live graph · generated by graphify on this repository

Why it matters

AI coding tools stop relearning your codebase every session.

Most assistants forget the project the moment a chat ends. GPC keeps an indexed, retrievable view of your repositories outside the chat — so the next session starts with context, not silence.

  • Less pasting · Stop dropping files and architecture explanations into every prompt.
  • One index, many tools · Codex, Claude, Copilot and Gemini all read the same indexed project through MCP.
  • Survives sessions and branches · Git hooks reindex after every commit, merge and checkout, so retrieval reflects the current code.
  • Stays on your machine · Embeddings run locally with Ollama. Project text never leaves the box.

Indexing pipeline

From repository to retrievable context, fully local.

Four stages, four local services. Each stage below activates as the pipeline processes a chunk on your machine — no round trip to any remote API.

01 · Index

gpc init scans files, chunks them, writes rows to Postgres. Git hooks rerun on commit / merge / checkout.

chunk#471 · gpc/mcp_server.py · 312 tokens
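
In practice step 01 is a single command per repository; a minimal sketch, with the path illustrative:

  cd ~/code/commerce-api
  gpc init    # scan, chunk, write rows to Postgres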

02 · Embed

Ollama generates a vector for each chunk with nomic-embed-text. No code is sent to any remote embedding API.

[0.12, -0.83, 0.41, 0.07, …] · dim=768
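
A minimal sketch of this step, assuming a default local Ollama instance on port 11434 (the chunk text is illustrative):

  import requests

  # One embedding call against local Ollama; nothing leaves the machine.
  resp = requests.post(
      "http://localhost:11434/api/embeddings",
      json={"model": "nomic-embed-text", "prompt": "def search(query): ..."},
      timeout=30,
  )
  vector = resp.json()["embedding"]
  assert len(vector) == 768  # nomic-embed-text emits 768-dim vectors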

03 · Store

Postgres holds canonical state — projects, files, chunks, hashes, runs. Qdrant holds the vectors that point back to it.

upsert OK · qdrant://global-project-context
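
A hedged sketch of the upsert using the qdrant-client Python package, reusing the vector from the embed sketch above; the point id and payload fields are illustrative stand-ins, not GPC's actual schema:

  from qdrant_client import QdrantClient
  from qdrant_client.models import PointStruct

  client = QdrantClient(url="http://localhost:6333")
  client.upsert(
      collection_name="global-project-context",
      points=[
          PointStruct(
              id=471,         # mirrors the canonical chunk row in Postgres
              vector=vector,  # the 768-dim embedding from step 02
              payload={"chunk_id": 471, "file": "gpc/mcp_server.py"},
          )
      ],
  )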

04 · Retrieve

AI clients call the MCP server. It embeds the query, searches Qdrant, hydrates chunks from Postgres, returns a bounded context block.

gpc.search → 4 hits · 1,387 tokens
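
A sketch of that retrieval path end to end, against the same local services as above; the chunks table name is an assumption, not GPC's actual schema:

  import psycopg2
  import requests
  from qdrant_client import QdrantClient

  client = QdrantClient(url="http://localhost:6333")

  def embed(text: str) -> list[float]:
      # Same local Ollama call as in step 02.
      resp = requests.post(
          "http://localhost:11434/api/embeddings",
          json={"model": "nomic-embed-text", "prompt": text},
          timeout=30,
      )
      return resp.json()["embedding"]

  def retrieve(query: str, limit: int = 4) -> list[str]:
      # Embed the query, search Qdrant for the nearest chunk vectors.
      hits = client.search(
          collection_name="global-project-context",
          query_vector=embed(query),
          limit=limit,
      )
      ids = [hit.payload["chunk_id"] for hit in hits]
      # Hydrate canonical chunk text from Postgres (table name assumed).
      with psycopg2.connect("dbname=gpc") as conn, conn.cursor() as cur:
          cur.execute("SELECT content FROM chunks WHERE id = ANY(%s)", (ids,))
          return [row[0] for row in cur.fetchall()]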

Live MCP demo

This is what your AI tool sees through GPC.

Pick a question. The widget shows the exact MCP call, the streamed response, and the tokens saved versus pasting the relevant files into the prompt.

Request · gpc.search
Response · streamed from the MCP server

Without GPC · full indexed corpus, or repeated file pasting per session
With GPC · only the chunks Qdrant returns for the query
Saved · measured with gpc token-savings on this repository

Observability

Token economy you can audit in Grafana.

Every MCP call is logged to Postgres. Search, context and token savings calls also emit compact token-economy samples that Grafana turns into an operational dashboard.

MCP call · gpc.context · query, project, repo, duration, result count
Postgres samples · indexed, retrieved and saved token counts
Grafana dashboard · calls, errors, savings, repos and slow queries
GPC Token Economy · last 24h
Saved tokens · 24.3k
Average saving · 94.6%
MCP calls · 128
docker compose --profile observability up -d grafana
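
One panel behind that dashboard could be driven by a Postgres query in this spirit; the table and column names are assumptions, not GPC's actual schema:

  SELECT date_trunc('hour', created_at) AS time,
         sum(saved_tokens)              AS saved_tokens
  FROM token_economy_samples
  WHERE created_at > now() - interval '24 hours'
  GROUP BY 1
  ORDER BY 1;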

Explore the graph

Beyond text search — the relationships your AI never sees.

Click a node to inspect it. The graph below is the real structural map of this codebase: calls, contains, references, rationales. EXTRACTED edges are solid; INFERRED are dashed.
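
The same traversal is available outside the canvas. A hedged sketch with the official Neo4j Python driver, listing one node's neighbors and each edge's confidence tier; the name and tier property names are assumptions about the Graphify schema:

  from neo4j import GraphDatabase

  driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "neo4j"))
  with driver.session() as session:
      # Neighbors of one node, with the EXTRACTED/INFERRED tier per edge.
      for rec in session.run(
          "MATCH (n {name: $name})-[r]-(m) "
          "RETURN type(r) AS rel, r.tier AS tier, m.name AS neighbor",
          name="mcp_server",
      ):
          print(rec["rel"], rec["tier"], rec["neighbor"])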

Top communities

Graphify + Neo4j

One product, many repositories, one indexed map.

Project · commerce
Repos · api · workers · web
Neo4j · consolidated by slug

Graphify is a separate indexer that extracts the relationship graph of a repository — what calls what, what depends on what. GPC consolidates those graphs in Neo4j by project_slug and repo_slug, so a multi-repo product becomes one navigable system map.
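
Because every consolidated node carries those slugs, a single query can cross repository boundaries. A sketch in the same spirit as above; the CALLS relationship type and the slug property names are assumptions:

  from neo4j import GraphDatabase

  driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "neo4j"))
  with driver.session() as session:
      # Calls that leave one repo and land in another, within one project.
      for rec in session.run(
          "MATCH (a)-[:CALLS]->(b) "
          "WHERE a.project_slug = $p AND a.repo_slug <> b.repo_slug "
          "RETURN a.name AS src, b.name AS dst LIMIT 5",
          p="commerce",
      ):
          print(rec["src"], "->", rec["dst"])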

Most AI tools read files. Graphify maps relationships. Together they answer questions like which worker calls which service or which modules touch the same database tables, instead of just returning matching text.

Real example

One product, many repositories, one indexed view.

commerce-api · orders, auth, payments
commerce-workers · queues, email, ERP sync
commerce-web · checkout, catalog, account
commerce-admin · operations and support
commerce-mobile · mobile customer journey

To you, these folders are one product. To most AI tools they look like unrelated repositories — unless you re-explain the connection in every prompt.

GPC indexes them under one project slug. When the assistant asks about checkout, it can pull the web component, the API route, the payment worker and the admin status screen in one bounded retrieval, plus the Graphify edges that connect them.

Your machine · Many Git repositories

API, workers, web, admin and mobile live in separate folders.

GPC · One indexed project

Postgres + Qdrant store chunks and vectors. Neo4j holds the consolidated Graphify graph.

AI tools · Same MCP endpoint

Codex, Claude, Copilot and Gemini query the same project through the GPC MCP server.

Frequently asked

Multi-repo AI context, MCP, local embeddings — answered.

Short answers to the questions developers ask most when evaluating GPC.

How do you share context between multiple Git repositories for AI coding tools?

Point GPC at each repository on your machine and it indexes them under a single project slug in Postgres + Qdrant. When any MCP-capable assistant — Codex, Claude, Copilot, Gemini — calls gpc.search, it receives bounded chunks from every repo that belongs to the project, plus the Graphify edges that connect them in Neo4j.

What is the Model Context Protocol (MCP) and how does GPC use it?

MCP is an open protocol for giving AI assistants access to external tools and data. GPC runs as a local MCP server that exposes tools like gpc.search, gpc.graph_neighbors and gpc.graph_path. Any MCP-aware client — Codex, Claude Code, Cursor, Continue — can connect and query the same indexed project.

Does GPC send my source code to any external service?

No. Indexing, embedding (via Ollama's nomic-embed-text) and retrieval all run on your machine. The AI coding tool that calls the MCP server still sends the chunks it retrieves to its own model — but chunk selection happens locally, and no code ever leaves your box as part of the indexing pipeline.

How does GPC stay in sync with code changes?

Git hooks reindex incrementally after every commit, merge and checkout. Only the chunks whose content hash changed are re-embedded — so branch switches take seconds, not minutes, and retrieval always reflects the current code.
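
The wiring is ordinary Git hooks. A sketch of a post-commit hook in that spirit, where gpc reindex is a stand-in for whatever command GPC actually installs:

  #!/bin/sh
  # .git/hooks/post-commit: re-embed only chunks whose content hash changed
  gpc reindex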

How is GPC different from Copilot's workspace context or Cursor's indexing?

Those are tool-specific and scoped to the open workspace. GPC is tool-agnostic, runs outside the editor, and unifies context across many repositories under one project. It also exposes a structural graph (Graphify) that lets an AI answer questions no text search can — which worker calls which service, which modules share a database table.

What does the Graphify knowledge graph add on top of vector search?

Vector search returns text that looks similar to the query. Graphify models relationships — calls, contains, references, rationale_for — with EXTRACTED and INFERRED confidence tiers. gpc.graph_neighbors and gpc.graph_path let the AI traverse structure rather than just matching strings.

How many tokens does bounded retrieval actually save?

Measured on this repository with gpc token-savings: 25,737 tokens for the full indexed corpus vs. 1,387 tokens for a focused retrieval — a 94.6% reduction. Real savings depend on project size and query specificity, but the shape is consistent: pay only for the chunks Qdrant returns.
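
The quoted figure falls out of those two counts directly:

  >>> round((1 - 1387 / 25737) * 100, 1)
  94.6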

What does installing and running GPC look like?

Clone the repo, run the installer (it brings up Postgres, Qdrant, Ollama and Neo4j via Docker), then gpc init in each repository you want indexed. The installer also wires the MCP config into Codex, Claude Code, Cursor and any other MCP client it detects.
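
A sketch of that flow, with the repository URL and installer name as placeholders rather than the real ones:

  git clone https://github.com/<org>/global-project-context
  cd global-project-context && ./install.sh    # Postgres, Qdrant, Ollama, Neo4j via Docker
  cd ~/code/my-repo && gpc init                # index the first repository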

Try it

Persistent context as infrastructure, not chat history.