Backend is solid. Today was about building the API layer and the visual interface - making the skill graph something you can actually see and interact with.
Claude handled the API controllers, form requests, and Wayfinder route generation. Codex helped with the initial planning and scoping of the frontend approach. My job was mostly pointing and saying "no, more like this" and "make the nodes look less boring."
The API
Built a complete set of endpoints for managing graphs and nodes:
- Create, update, delete graphs
- Create, update, delete nodes within a graph
- List all graphs for a user
- List all nodes (with their connections) in a graph
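The endpoints above follow a standard nested-resource shape. As a sketch, that surface can be modelled as a tiny typed route helper; the paths here are invented for illustration (the real routes come from Wayfinder's generated definitions):

```typescript
// Hypothetical route helpers for the graph/node endpoints described above.
// Paths are illustrative, not the app's actual API.
const routes = {
  listGraphs: () => `/api/graphs`,
  createGraph: () => `/api/graphs`,
  updateGraph: (g: number) => `/api/graphs/${g}`,
  deleteGraph: (g: number) => `/api/graphs/${g}`,
  listNodes: (g: number) => `/api/graphs/${g}/nodes`,
  createNode: (g: number) => `/api/graphs/${g}/nodes`,
  updateNode: (g: number, n: number) => `/api/graphs/${g}/nodes/${n}`,
  deleteNode: (g: number, n: number) => `/api/graphs/${g}/nodes/${n}`,
};
```

Nesting nodes under their graph keeps authorisation simple: every node request names the graph it belongs to.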
Every node creation automatically triggers three things behind the scenes: a version snapshot is saved, wikilinks in the body are parsed into edges, and an embedding is generated for semantic search. One action, three side effects - all handled transparently.
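The wikilink-to-edge step is the easiest of the three to picture. A minimal sketch, assuming the common `[[Title]]` link syntax (the actual parser and edge model may differ):

```typescript
// Sketch: pull [[wikilinks]] out of a node body so each unique link
// can become an edge to the node with that title.
// The [[Title]] syntax is an assumption about the link format.
function parseWikilinks(body: string): string[] {
  const links: string[] = [];
  const pattern = /\[\[([^\[\]]+)\]\]/g;
  let match: RegExpExecArray | null;
  while ((match = pattern.exec(body)) !== null) {
    links.push(match[1].trim());
  }
  // De-duplicate: mentioning a node twice still means one edge
  return [...new Set(links)];
}
```

So a body like `"See [[Docker Compose]] and [[DNS Resolution]]"` yields two edge targets, and re-parsing on every update keeps the edge set in sync with the text.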
The visual graph
This is where it gets fun. Built an interactive graph visualisation where you can actually see your knowledge map.
Imagine a constellation map: each star is a knowledge node, and the lines between them are connections. Different node types get different colours and icons, so you can spot patterns at a glance. Skills are one colour, personal claims another, workflows another.
You can:
- Pan and zoom around the map like Google Maps
- Drag nodes to rearrange the layout
- Click a node to read it, edit it, or delete it
- Filter by type using a colour-coded legend (e.g., "show me only skills" or "hide references")
- Search to find specific nodes
- Use a minimap in the corner for navigation in large graphs
The layout is force-directed - nodes naturally spread out and cluster based on their connections. Heavily connected nodes pull each other closer, loosely connected ones drift apart. It looks organic, like a real neural network.
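The core of a force-directed layout is just two forces per iteration: everything repels everything, and edges act as springs. A toy single-step sketch (constants are illustrative, not the tuned values a real renderer would use):

```typescript
// One iteration of a toy force-directed layout: pairwise repulsion
// plus spring attraction along edges. Constants are illustrative.
type Pt = { x: number; y: number };

function layoutStep(
  pos: Pt[],
  edges: [number, number][],
  repulsion = 1000,
  spring = 0.05,
  restLength = 50,
): Pt[] {
  const force: Pt[] = pos.map(() => ({ x: 0, y: 0 }));
  // Every pair of nodes pushes apart (inverse-square style repulsion)
  for (let i = 0; i < pos.length; i++) {
    for (let j = i + 1; j < pos.length; j++) {
      const dx = pos[i].x - pos[j].x;
      const dy = pos[i].y - pos[j].y;
      const d2 = dx * dx + dy * dy || 1;
      const d = Math.sqrt(d2);
      const f = repulsion / d2;
      force[i].x += (dx / d) * f; force[i].y += (dy / d) * f;
      force[j].x -= (dx / d) * f; force[j].y -= (dy / d) * f;
    }
  }
  // Connected nodes pull together like springs toward a rest length
  for (const [a, b] of edges) {
    const dx = pos[b].x - pos[a].x;
    const dy = pos[b].y - pos[a].y;
    const d = Math.sqrt(dx * dx + dy * dy) || 1;
    const f = spring * (d - restLength);
    force[a].x += (dx / d) * f; force[a].y += (dy / d) * f;
    force[b].x -= (dx / d) * f; force[b].y -= (dy / d) * f;
  }
  return pos.map((p, i) => ({ x: p.x + force[i].x, y: p.y + force[i].y }));
}
```

Run enough iterations and the system settles: unconnected nodes drift apart, connected clusters tighten, which is exactly the organic look described above.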
How it all comes together
Here is the full journey when you send a message to Iris:
- You type "How do I set up Docker networking?"
- The system embeds your question and scores every node in your default graph against it (this is what the per-node embeddings are for)
- The top matches (maybe "Docker Compose", "Container Networking", "Service Discovery") become seed nodes
- From those seeds, it follows connections one or two hops out, grabbing related nodes like "Load Balancing" and "DNS Resolution"
- All collected nodes are formatted into sections: your personal preferences first, then relevant skills, then supporting references
- This formatted knowledge packet is added to Iris's instructions
- Iris responds, naturally incorporating your specific knowledge, preferences, and workflows
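The steps above can be sketched as one function: seed on score, expand by hops, then order the sections. All names here are invented, and scores are assumed precomputed (e.g. embedding similarity against the question):

```typescript
// Hedged sketch of the retrieval flow: top-K seeds, hop expansion,
// then section ordering. Names and type labels are assumptions.
type KNode = { id: number; type: string; title: string; score: number };

function retrieve(
  nodes: KNode[],
  neighbours: Map<number, number[]>, // adjacency built from wikilink edges
  topK = 3,
  hops = 2,
): KNode[] {
  // 1. Seeds: the top-K scoring nodes
  const seeds = [...nodes].sort((a, b) => b.score - a.score).slice(0, topK);
  // 2. Expand: follow connections up to `hops` hops out
  const byId = new Map(nodes.map((n) => [n.id, n]));
  const collected = new Set(seeds.map((s) => s.id));
  let frontier = seeds.map((s) => s.id);
  for (let h = 0; h < hops; h++) {
    const next: number[] = [];
    for (const id of frontier) {
      for (const nb of neighbours.get(id) ?? []) {
        if (!collected.has(nb)) { collected.add(nb); next.push(nb); }
      }
    }
    frontier = next;
  }
  // 3. Order: personal preferences first, then skills, then references
  const rank: Record<string, number> = { personal: 0, skill: 1, reference: 2 };
  return [...collected]
    .map((id) => byId.get(id)!)
    .sort((a, b) => (rank[a.type] ?? 9) - (rank[b.type] ?? 9));
}
```

The hop expansion is what pulls in "Load Balancing" even when it never scored highly on its own: it rides in on its connection to a seed.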
The whole process takes milliseconds. Results are cached for 10 minutes so repeated similar questions are instant. Every retrieval is logged for analysis.
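The 10-minute cache is conceptually just a map with expiry timestamps. A minimal sketch, assuming a simple key-per-question scheme (the real cache key and eviction policy are not described here):

```typescript
// Minimal TTL cache sketch for the 10-minute retrieval cache.
// Key scheme and expiry handling here are assumptions.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();
  constructor(private ttlMs: number) {}

  get(key: string, now = Date.now()): V | undefined {
    const hit = this.store.get(key);
    if (!hit) return undefined;
    if (now >= hit.expiresAt) { this.store.delete(key); return undefined; }
    return hit.value;
  }

  set(key: string, value: V, now = Date.now()): void {
    this.store.set(key, { value, expiresAt: now + this.ttlMs });
  }
}
```

Expiring lazily on read keeps it simple: a stale entry just means one cache miss and a fresh retrieval.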
It is like having a research assistant who reads your entire knowledge base before every conversation, except she does it in under a second and only brings back what is relevant.