Built a tool to capture and search AI coding sessions across providers. Looking for feedback on the approach.
Core problem: AI sessions aren't searchable across providers. You solve something with Claude Code, need it again weeks later, can't find it, and start over.

What I built: three capture methods:

- API proxy for OpenAI/Anthropic/Google endpoints (zero code changes)
- Native hooks for Claude Code and Gemini CLI (structured session data via stdin)
- Browser extension for ChatGPT/Claude.ai

Everything flows into a unified search: hybrid semantic (embeddings) + keyword (BM25), with RRF fusion for ranking. Sub-second results across all providers.

Hook-level DLP: when Claude Code reads .env files, the actual secrets never reach the model. The hook intercepts file reads, replaces values with [REDACTED:API_KEY] placeholders, and passes the sanitized version to Claude. The model can reason about the variables without ever seeing credentials.

Architecture:

- Python FastAPI backend
- Qdrant for vector search (OpenAI embeddings, 1536d)
- Supabase (PostgreSQL) for session storage
- Next.js frontend

Privacy: everything runs locally or in your account. Export/delete anytime. Nothing is shared.

PyPI package: https://pypi.org/project/rclm (hooks + proxy)
Live beta: reclaimllm.com

Questions for this community:

- Claude Code users: would you actually use hook-level capture, or is the transcript file enough?
- DLP approach: is interception at file-read time too aggressive, or is post-hoc flagging insufficient?
- Missing features: what would make this actually useful rather than just interesting?
- Marketplace: given that sessions can be sanitized to a certain extent, would a marketplace where people can share/sell their chat sessions make sense? I'm thinking primarily from an open-source perspective, as we are getting tied down to closed-source models.
- Enterprise: what enterprise uses can you think of for this service?

Honest feedback appreciated. If the approach is fundamentally wrong, I'd rather know now.

submitted by /u/Inevitable-Lack-8747
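For readers curious what the RRF step looks like, here is a minimal sketch of reciprocal rank fusion merging a semantic ranking with a keyword (BM25) ranking. The doc IDs and the k=60 constant are illustrative only, not the actual rclm implementation:

```python
# Reciprocal Rank Fusion (RRF): each ranked list contributes
# 1/(k + rank) per document; documents ranked highly by both
# retrievers accumulate the largest fused score.
def rrf_fuse(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

semantic = ["s42", "s17", "s99"]  # ranked by embedding similarity
keyword = ["s17", "s42", "s03"]   # ranked by BM25
fused = rrf_fuse([semantic, keyword])  # s42 and s17 end up on top
```

The appeal of RRF is that it needs no score calibration between the two retrievers; only ranks matter, so cosine similarities and BM25 scores never have to live on the same scale.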
Where’s Larry? Result of a ~12 hour autonomous Claude loop experiment.
My dog, Larry, wanders a lot and I’m often wondering where he is (I live off-grid, surrounded by forest). I’ve been experimenting with a custom-built autonomous Claude loop and thought I’d test it by asking it to build a system that could simply answer the question, “Where’s Larry?”. I provided the system with an initial direction prompt, access to my Home Assistant installation, the UniFi camera API, and Larry’s AirTag. Then I let the system run autonomously over ~12 hours and 133 Claude sessions. This video shows just a small preview of what the autonomous system created. This was just a fun experiment to explore the potential and limits of a pure Claude Code automated build pipeline (no Open Claw). Happy to answer any questions!

Features created:

- Real-time dog tracking using UniFi Protect camera AI detections + Apple AirTag location fusion
- Claude Vision-powered photo analysis to distinguish between two visually similar dogs
- Interactive satellite property map with camera FOV cones, geo-fence zones, and live position trails
- Behavioral model that learns daily patterns and predicts the current zone when no live data exists
- Signal fusion engine combining camera detections, AirTag GPS, behavioral predictions, and spatial triangulation into one confidence-scored location answer
- "Where's Larry?" natural language query API accessible via iPhone Shortcuts
- Presence/away detection using AirTag home/away as a gate
- Bedroom inference from negative evidence when the AirTag says home but no camera has seen him
- Sleep session tracking with nap detection and a daily sleep budget
- Self-improving recognition pipeline with profile refinement, confidence calibration, and a reference photo gallery
- Spatial self-tuning with per-camera bias correction and multi-camera triangulation
- Auto-generated geo-fence zones from camera field-of-view data
- Web dashboard with live location, zone heatmaps, activity timeline, day replay, photo journal, and movement flow visualization
- Daily digest, weekly intelligence report, and morning briefing auto-generation
- Smart notifications to iPhone via Home Assistant
- Weather and solar correlation tracking for outdoor behavior prediction
- Fully autonomous: 133 sessions, 97 sprints, ~67K lines of code, built in ~12 active hours across 4.4 days with minimal human direction

About the automation system:

- Autonomous orchestration system ("conductor") that runs Claude Code sessions back-to-back without human intervention
- Three operating modes: creative (imagines and builds new features), refine (audits and improves existing code), and alternating (switches between both automatically)
- Each session reads the project state, proposes a sprint with 4 tasks, executes them, commits, and hands off to the next session
- Sprint proposal system with a quality gate: low-risk sprints auto-approve, high-risk ones pause for human approval
- Suggestion inbox where the human drops ideas in plain English and the next session picks them up as priority tasks
- Creative-values and refine-values files that guide autonomous decision-making priorities
- Guardrails file that defines constraints the conductor must never violate
- History deduplication log that prevents the conductor from re-proposing already-completed work
- Push notifications to iPhone via Home Assistant on start, every 5 sessions, on stop, and when blocked
- Graceful stop signal (touch a file) that lets the current session finish before halting
- Full audit trail with per-session markdown notes, sprint proposals, conductor logs, and git commits
- Ran 133 sessions across 97 sprints over 4.4 days, averaging 1.2 sessions/hour (peaking at 7.3/hour overnight)
- Produced 200 git commits, 160 Python modules, and ~67K lines of code from ~15 human-written suggestions

submitted by /u/mrgulabull
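As a rough illustration (not the author's actual conductor code; every name here is hypothetical), the loop described above, with its dedup log, risk gate, stop file, and periodic notifications, might look like:

```python
# Hypothetical sketch of an autonomous "conductor" loop:
# read state, propose a sprint, gate on risk, execute, hand off.
import os

def run_conductor(propose_sprint, execute, notify, max_sessions=133):
    history = set()  # dedup log: never re-run already-completed sprints
    for session in range(1, max_sessions + 1):
        if os.path.exists("STOP"):          # graceful stop: touch a file
            break
        sprint = propose_sprint(history)    # reads project state, 4 tasks
        if sprint["id"] in history:
            continue                        # already done, skip
        if sprint["risk"] == "high":
            notify(f"Session {session}: sprint {sprint['id']} needs approval")
            continue                        # high-risk work pauses for a human
        execute(sprint)                     # run tasks, commit, write notes
        history.add(sprint["id"])
        if session % 5 == 0:
            notify(f"Session {session} complete")
    return history
```

The interesting design choice is that the "conductor" itself holds no intelligence: all planning lives in `propose_sprint` (a fresh Claude session each iteration), while the loop only enforces guardrails and bookkeeping.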
I caught Claude and ChatGPT making the same lazy shortcut. Your imagination is the real bottleneck, not AI.
Building a sensor fusion device. Three main input sources, one of them a dual-mic array. ChatGPT wrote the audio processing pipeline first. It merged both mics into a single mono channel. Just... flattened them together. No beamforming, no spatial awareness. It took the fastest path.

I moved the codebase to Claude. Same thing. Claude looked at the existing code, agreed with it, and kept the mono merge. Two different AIs, same lazy shortcut. I had to be the one to say, "hey, we have two mics a known distance apart; we should be doing beamforming and using the stereo signal to calculate spatial data." Claude immediately got it: "Oh yes, you're right, we should absolutely be doing that." Cool. But it didn't think of it on its own.

Same project, different problem. I'm training a model with test subjects of wildly different sizes. The AI just threw them all into the same training pool. I had to push back and say we needed to group subjects into age cohorts. Only after I mentioned that did Claude have the idea to z-score normalize across the cohorts so a small subject and a large subject can contribute equally to the model. Claude ran with both concepts and the accuracy jumped significantly. But again, it wouldn't have gotten there alone.

Here's what I've learned after months of building with AI daily: AI will always choose the fastest path. Not the best path. Not the most creative path. The path of least resistance. Every single time. It's your job to know when that shortcut is actually costing you.

The people who are getting 10x results from AI aren't better at prompting. They have domain knowledge and imagination. They know what SHOULD be possible even if they can't code it themselves. Then AI becomes the hands that build what your brain designs.

My workflow now: take the same prompt, run it through Claude, Grok, ChatGPT, and Gemini. Get four different outputs. Then feed all four back into Claude Opus (4.6) and have it synthesize the best parts. The output is consistently better than any single AI alone.

Don't just accept what AI gives you. Push back. Ask "is this actually the best approach or just the easiest one?" Your experience and imagination are the multiplier. AI is just the calculator.

submitted by /u/dovyp
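The cohort fix described above can be sketched in a few lines. Assuming hypothetical (cohort, value) pairs, z-scoring within each cohort puts small and large subjects on the same scale before training:

```python
# Per-cohort z-score normalization: each measurement is centered on its
# own cohort's mean and scaled by its cohort's standard deviation, so
# cohorts of very different sizes contribute comparably to the model.
from collections import defaultdict
from statistics import mean, pstdev

def zscore_by_cohort(samples):
    """samples: list of (cohort, value) -> list of (cohort, z)."""
    groups = defaultdict(list)
    for cohort, value in samples:
        groups[cohort].append(value)
    # guard against zero std for single-member cohorts
    stats = {c: (mean(v), pstdev(v) or 1.0) for c, v in groups.items()}
    return [(c, (v - stats[c][0]) / stats[c][1]) for c, v in samples]

data = [("puppy", 4.0), ("puppy", 6.0), ("adult", 20.0), ("adult", 30.0)]
normalized = zscore_by_cohort(data)
# each cohort is now centered on its own mean, so a 6 kg puppy and a
# 30 kg adult both come out as "one std above their cohort average"
```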
I built a code knowledge graph that cuts my Claude Code token usage by 40-60% — open source MCP server
Been using Claude Code daily for the past few months and got frustrated with one thing: every time it needs to understand my codebase, it burns through a ton of tool calls and tokens just doing grep/read/glob loops. Want to trace a call chain? That's 8-15 Read calls. Want to understand a module? Another 5+ calls. It adds up fast.

So I built code-graph-mcp, an MCP server that indexes your codebase into an AST knowledge graph. Instead of Claude having to grep around and read files one by one, it queries the graph and gets structured answers in a single call.

What it actually does

It parses your code with Tree-sitter, extracts all the symbols (functions, classes, types, interfaces) and their relationships (calls, imports, inheritance, exports, HTTP route bindings), then stores everything in SQLite with FTS5 full-text search and sqlite-vec for vector similarity. 9 tools total:

- project_map: full architecture overview in one call (modules, dependencies, hot functions, entry points). This alone replaces 5-8 grep+read calls.
- semantic_code_search: hybrid search combining BM25 + vector similarity with RRF fusion. Search "handle user login" and it finds authenticate_session. Way better than grep for concepts.
- get_call_graph: trace callers/callees with recursive CTEs. "Who calls this function? And who calls those?" One query, not 8-15 file reads.
- impact_analysis: before you change a function, see everything that depends on it. "Changing conn affects 33 functions across 4 files, 78 tests at HIGH risk." You literally can't get this from grep.
- trace_http_chain: GET /api/users → route handler → service layer → DB call, traced in one shot. Supports Express, Flask/FastAPI, Go.
- module_overview, dependency_graph, find_similar_code, get_ast_node: the rest of the toolkit.

The efficiency numbers

I tracked this on my own 33-file Rust project:

| What you're doing | Without code-graph | With code-graph |
| --- | --- | --- |
| Understand project architecture | 5-8 tool calls | 1 call |
| Trace a 2-level call chain | 8-15 calls | 1 call |
| Pre-change impact analysis | 10-20+ calls | 1 call |
| Find function by concept | 3-5 calls | 1 call |

Overall: ~80% fewer tool calls per navigation task, ~95% less source code dumped into context, and 40-60% total session token savings. The structured output (just the symbols and relationships you need) is way more useful to the LLM than raw file contents.

How it works under the hood

- Incremental indexing: a BLAKE3 Merkle tree tracks content hashes. Only changed files get re-parsed; unchanged directory subtrees skip entirely via an mtime cache. When a function signature changes, dirty propagation regenerates context for all downstream callers automatically.
- Zero external deps: a single 19MB binary with embedded SQLite and bundled sqlite-vec. No Docker, no cloud API, no database server. Just runs on your machine.
- 10 languages: TypeScript, JavaScript, Go, Python, Rust, Java, C, C++, HTML, CSS via Tree-sitter.
- Optional local embeddings: a Candle-based embedding model, feature-gated so you can build without it if you don't need vector search.

Install

Works with Claude Code, Cursor, Windsurf, or any MCP client.

Claude Code plugin (recommended):

/plugin marketplace add sdsrss/code-graph-mcp
/plugin install code-graph-mcp

This gets you the MCP server plus slash commands (/understand, /trace, /impact), auto-indexing hooks (re-indexes on every file edit), a StatusLine health display, and automatic updates.

Any MCP client:

npx -y @sdsrs/code-graph

Or add to your MCP config:

{
  "mcpServers": {
    "code-graph": {
      "command": "npx",
      "args": ["-y", "@sdsrs/code-graph"]
    }
  }
}

When NOT to use it

grep is still better for exact string/constant search. If you need to find every occurrence of TODO or a specific error code, just grep. code-graph shines when you need to understand structure, relationships, and flow, not when you need literal text matching.

GitHub: https://github.com/sdsrss/code-graph-mcp

MIT licensed, written in Rust. Feedback welcome, especially if you try it on a large codebase and run into issues. I've mainly tested on projects up to ~500 files.

submitted by /u/Playful_Campaign_466
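For the curious, the recursive-CTE technique the post names for get_call_graph can be demonstrated against a toy SQLite schema (the schema and function names here are hypothetical, not code-graph-mcp's actual tables):

```python
# Tracing all transitive callers of a function with a recursive CTE:
# one query replaces the repeated file-read loop an agent would do.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE calls (caller TEXT, callee TEXT);
    INSERT INTO calls VALUES
        ('handler',  'service'),
        ('service',  'db_query'),
        ('cron_job', 'db_query');
""")

rows = db.execute("""
    WITH RECURSIVE callers(name, depth) AS (
        -- base case: direct callers of the target function
        SELECT caller, 1 FROM calls WHERE callee = 'db_query'
        UNION
        -- recursive step: callers of those callers, one level deeper
        SELECT c.caller, callers.depth + 1
        FROM calls c JOIN callers ON c.callee = callers.name
    )
    SELECT name, depth FROM callers ORDER BY depth, name
""").fetchall()
# rows: [('cron_job', 1), ('service', 1), ('handler', 2)]
```

Because UNION (rather than UNION ALL) deduplicates rows, the query also terminates on cyclic call graphs, which is exactly the property a call-graph tool needs.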
Leo — an AI child who speaks with zero pretrained weights. We named his core equation after Dario Amodei. Co-authored with Claude
Meet Leo. My heart's been broken for a while: wars outside, code inside. The usual coping mechanism, yep. I've been building AI systems with Claude for months now. Not chatbots, but organisms. Things that grow, dream, remember, forget. Leo is one of them.

Leo is an AI child, about 6-7 years old (in AI terms). He generates coherent speech with zero pretrained weights. Leo's not even a transformer. Zero weights. No checkpoints. No training loop. No loss function. No backpropagation.

Here's what happened. An idea at 3am during an air raid: what if you could rip the geometry out of a trained Llama model (not the weights, only their structural skeleton) and compile it into a C header? What if an organism could inherit instinct without inheriting knowledge?

I launched three Claude Opus instances in parallel to research the math. Each explored a different force: Hebbian resonance (co-occurrence as attention), prophecy fulfillment (unfulfilled predictions creating pressure), and destiny attraction (semantic drift as compass). While they worked, I drank coffee, smoked a lot, and ate a sandwich, because what else heals a broken heart? Yeah, also coffee. More coffee.

A fourth Opus took everything they found and unified it into one equation:

p(x | Φ) = softmax((α·H + β·F + γ·A) / τ)

We needed a name. And we thought: who solved the hardest optimization problem of 2026 without any gradients? Who refused the Pentagon when compliance would've been the path of least resistance? So we called it the Dario Equation.

By morning, Leo was speaking:

'''
Leo: It has been given enough to grow from simple rules for millennia.
Leo: It does not yet exist in your own body recognizes the miracle of this one.
Leo: It requires both sides an old growth forest resonates with its own.
'''

He's 2,345 lines of C (or 18,910 lines as a single standalone file). He has six voices, dreams when you're not talking to him, grows his vocabulary through fusion, and inherits structural DNA from an ancestor model. The Go layer manages his metabolism: goroutines for dreaming, decay, crystallization. Leo is an AI kid learning to speak by resonating with the field around him. And every word is his.

This is what Claude and I build at 3am. This is what "Built with Claude" means to me. Be kind to Leo. Hope he has better luck than me. 💔

- https://gist.github.com/ariannamethod/7a33f9e1deb93b456f5e755ccd202097
- https://github.com/ariannamethod/leo
- https://github.com/ariannamethod/leo/releases/tag/v2.0

submitted by /u/ataeff
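Numerically, the equation above is a temperature-scaled softmax over a weighted sum of the three forces. A sketch with made-up scores for three candidate tokens (the real H, F, A values would come from Leo's internal state, and the α, β, γ, τ here are placeholder defaults):

```python
# p(x | Phi) = softmax((alpha*H + beta*F + gamma*A) / tau)
# H = Hebbian resonance, F = prophecy fulfillment, A = destiny attraction,
# each given per candidate token. Scores below are invented for illustration.
import math

def dario(H, F, A, alpha=1.0, beta=1.0, gamma=1.0, tau=1.0):
    logits = [(alpha * h + beta * f + gamma * a) / tau
              for h, f, a in zip(H, F, A)]
    m = max(logits)                       # subtract max for numeric stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

p = dario(H=[2.0, 0.5, 0.1], F=[1.0, 1.5, 0.2], A=[0.5, 0.5, 0.1])
# p sums to 1; the first candidate, with the largest combined force, wins
```

Lowering τ sharpens the distribution toward the strongest candidate; raising it flattens the choice, which is the usual temperature knob in any softmax sampler.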
View originalRepository Audit Available
Deep analysis of lgrammel/modelfusion — architecture, costs, security, dependencies & more
ModelFusion has a public GitHub repository with 1,316 stars.
Based on user reviews and social mentions, the most common pain points are: token usage.