Parsing maps the repository into typed nodes and edges. Potpie then runs inference on each node in that graph, generating a vector embedding that captures what that node does in context.
A vector embedding encodes the semantic meaning of a node. Similar functions cluster close in vector space regardless of naming, so agents can locate relevant code by concept.
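To make this concrete, here is a minimal sketch of what an embedded node might look like. The `CodeNode` shape and the cosine-similarity helper are illustrative, not Potpie's actual schema:

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class CodeNode:
    """Illustrative node shape: structural identity plus a semantic address."""
    node_id: str    # e.g. "src/auth.py::verify_token"
    node_type: str  # e.g. "function", "class", "file"
    source: str     # the node's code, the input to inference
    embedding: np.ndarray | None = None  # filled in by inference

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Nodes that do similar things score near 1.0, regardless of naming."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```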
How inference runs
Node processing
Inference processes each node individually through the configured LLM provider.
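A hedged sketch of that per-node step, with a hypothetical `EmbeddingProvider` protocol standing in for whatever provider is configured, reusing the `CodeNode` sketch above:

```python
from typing import Protocol

import numpy as np

class EmbeddingProvider(Protocol):
    """Hypothetical interface for the configured LLM provider."""
    def embed(self, text: str) -> np.ndarray: ...

def infer_node(node: CodeNode, provider: EmbeddingProvider) -> CodeNode:
    # Embed the node's source with a little structural context, so the
    # vector captures what the node does where it sits, not just raw text.
    text = f"{node.node_type} {node.node_id}\n{node.source}"
    node.embedding = provider.embed(text)
    return node
```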
Caching
Potpie caches results. Nodes unchanged since the last parse skip re-embedding. Only new or modified code runs through inference on subsequent parses.
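One plausible way to implement this, assuming the cache is keyed on a content hash of the node's source (Potpie may key it differently):

```python
import hashlib

def content_hash(node: CodeNode) -> str:
    """Fingerprint the node's source; unchanged code keeps the same hash."""
    return hashlib.sha256(node.source.encode("utf-8")).hexdigest()

def infer_with_cache(
    node: CodeNode,
    provider: EmbeddingProvider,
    cache: dict[str, np.ndarray],  # hash -> embedding from earlier parses
) -> CodeNode:
    key = content_hash(node)
    if key in cache:
        node.embedding = cache[key]  # unchanged since the last parse: reuse
    else:
        infer_node(node, provider)   # new or modified: run inference
        cache[key] = node.embedding
    return node
```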
Storage
Potpie stores embeddings alongside the structural nodes and edges in the knowledge graph. Every component carries both a structural position and a semantic address.
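An illustrative in-memory version of that layout, with plain dicts standing in for whatever store Potpie actually uses; the point is that `nodes` carries the semantic vectors while `edges` carries the structure:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeGraph:
    """Illustrative store: structural edges and semantic vectors side by side."""
    nodes: dict[str, CodeNode] = field(default_factory=dict)
    # node_id -> outgoing structural edges (calls, imports, containment, ...)
    edges: dict[str, list[str]] = field(default_factory=dict)

    def add_node(self, node: CodeNode) -> None:
        self.nodes[node.node_id] = node

    def add_edge(self, src: str, dst: str) -> None:
        self.edges.setdefault(src, []).append(dst)
```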
What this enables
Once every node carries an embedding, agents query the graph by concept. A query for "authentication handlers" resolves to the nodes whose embeddings sit closest to that concept, regardless of function naming conventions.
Structural edges from parsing give agents the paths to traverse. Embeddings give agents the starting points.
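Putting the two together, here is a sketch of how an agent might use the graph from the illustrations above: semantic search to pick the starting points, then an edge walk to gather surrounding context. The function names and the two-hop depth are assumptions for the example:

```python
def semantic_search(
    graph: KnowledgeGraph,
    query: str,
    provider: EmbeddingProvider,
    k: int = 5,
) -> list[CodeNode]:
    """Embeddings give the starting points: nodes nearest the query concept."""
    q = provider.embed(query)
    scored = [
        (cosine_similarity(q, n.embedding), n)
        for n in graph.nodes.values()
        if n.embedding is not None
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [node for _, node in scored[:k]]

def expand(graph: KnowledgeGraph, start: CodeNode, depth: int = 2) -> set[str]:
    """Structural edges give the paths: walk outward from a semantic hit."""
    seen = {start.node_id}
    frontier = [start.node_id]
    for _ in range(depth):
        frontier = [
            dst
            for src in frontier
            for dst in graph.edges.get(src, [])
            if dst not in seen
        ]
        seen.update(frontier)
    return seen

# Given a populated graph and provider from the sketches above:
hits = semantic_search(graph, "authentication handlers", provider)
context = expand(graph, hits[0]) if hits else set()
```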