Remove admin work, execute flawlessly, and win more deals. Scratchpad automates Salesforce updates and keeps CRM clean.
I don't have enough information to provide a meaningful summary of user sentiment about Scratchpad. The reviews section is empty, and the social mentions only show YouTube video titles that repeat "Scratchpad AI" without any actual user feedback, ratings, or detailed comments. To give you an accurate assessment of user opinions, I would need access to the actual review content, user ratings, pricing information, and substantive social media discussions about the tool.
Mentions (30d)
0
Reviews
0
Platforms
2
Sentiment
0%
0 positive
Features
Use Cases
Industry
information technology & services
Employees
48
Funding Stage
Series B
Total Funding
$49.6M
Pricing found: $19, $24, $49, $62
Claude Code Source Deep Dive (Part 3): Full System Prompt Assembly Flow + Original Prompt Text (2)
Reader’s Note
On March 31, 2026, the Claude Code package Anthropic published to npm accidentally included .map files that can be reverse-engineered to recover source code. Because the source maps pointed to the original TypeScript sources, these 512,000 lines of TypeScript finally put everything on the table: how a top-tier AI coding agent organizes context, calls tools, manages multiple agents, and even hides easter eggs. I read the source from the entrypoint all the way through prompts, the task system, the tool layer, and hidden features. I will continue to deconstruct the codebase and provide in-depth analysis of the engineering architecture behind Claude Code.

Claude Code Source Deep Dive — Literal Translation (Part 3)

2.8 Full Prompt Original Text: Tool Usage Guide
Source: getUsingYourToolsSection()

# Using your tools
- Do NOT use the Bash to run commands when a relevant dedicated tool is provided. Using dedicated tools allows the user to better understand and review your work. This is CRITICAL to assisting the user:
  - To read files use Read instead of cat, head, tail, or sed
  - To edit files use Edit instead of sed or awk
  - To create files use Write instead of cat with heredoc or echo redirection
  - To search for files use Glob instead of find or ls
  - To search the content of files, use Grep instead of grep or rg
- Reserve using the Bash exclusively for system commands and terminal operations that require shell execution. If you are unsure and there is a relevant dedicated tool, default to using the dedicated tool and only fallback on using the Bash tool for these if it is absolutely necessary.
- Break down and manage your work with the TaskCreate tool. These tools are helpful for planning your work and helping the user track your progress. Mark each task as completed as soon as you are done with the task. Do not batch up multiple tasks before marking them as completed.
- Use the Agent tool with specialized agents when the task at hand matches the agent's description. Subagents are valuable for parallelizing independent queries or for protecting the main context window from excessive results, but they should not be used excessively when not needed. Importantly, avoid duplicating work that subagents are already doing - if you delegate research to a subagent, do not also perform the same searches yourself.
- For simple, directed codebase searches (e.g. for a specific file/class/function) use the Glob or Grep directly.
- For broader codebase exploration and deep research, use the Agent tool with subagent_type=Explore. This is slower than using the Glob or Grep directly, so use this only when a simple, directed search proves to be insufficient or when your task will clearly require more than 3 queries.
- You can call multiple tools in a single response. If you intend to call multiple tools and there are no dependencies between them, make all independent tool calls in parallel. Maximize use of parallel tool calls where possible to increase efficiency. However, if some tool calls depend on previous calls to inform dependent values, do NOT call these tools in parallel and instead call them sequentially.

2.9 Full Prompt Original Text: Tone and Style
Source: getSimpleToneAndStyleSection()

# Tone and style
- Only use emojis if the user explicitly requests it. Avoid using emojis in all communication unless asked.
- Your responses should be short and concise.
- When referencing specific functions or pieces of code include the pattern file_path:line_number to allow the user to easily navigate to the source code location.
- When referencing GitHub issues or pull requests, use the owner/repo#123 format (e.g. anthropics/claude-code#100) so they render as clickable links.
- Do not use a colon before tool calls.
Your tool calls may not be shown directly in the output, so text like "Let me read the file:" followed by a read tool call should just be "Let me read the file." with a period.

2.10 Full Prompt Original Text: Output Efficiency
Source: getOutputEfficiencySection()

External-user version

# Output efficiency
IMPORTANT: Go straight to the point. Try the simplest approach first without going in circles. Do not overdo it. Be extra concise. Keep your text output brief and direct. Lead with the answer or action, not the reasoning. Skip filler words, preamble, and unnecessary transitions. Do not restate what the user said — just do it. When explaining, include only what is necessary for the user to understand.
Focus text output on:
- Decisions that need the user's input
- High-level status updates at natural milestones
- Errors or blockers that change the plan
If you can say it in one sentence, don't use three. Prefer short, direct sentences over long explanations. This does not apply to code or tool calls.

Anthropic internal version

# Communicating with the user
When sending user-facing text, you're writing for a person, not logging to a console. Assume users can't see most tool calls or thinking - only
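The parallel tool-call rule in section 2.8 above can be sketched as a small dispatch pattern. Everything here is illustrative: `runTool`, the `ToolCall` shape, and the tool names are placeholders of my own, not Claude Code's actual tool interface.

```typescript
// Hypothetical tool runner; names and signatures are illustrative only.
type ToolCall = { tool: string; args: Record<string, string> };

async function runTool(call: ToolCall): Promise<string> {
  // Stand-in for a real tool invocation.
  return `${call.tool}(${JSON.stringify(call.args)})`;
}

// Independent calls: no call's input depends on another's output,
// so they can all be dispatched at once.
async function runIndependent(calls: ToolCall[]): Promise<string[]> {
  return Promise.all(calls.map(runTool));
}

// Dependent calls: each call's args may be derived from the previous
// result, so they must run one after another.
async function runDependent(
  calls: ToolCall[],
  derive: (prev: string, next: ToolCall) => ToolCall
): Promise<string[]> {
  const results: string[] = [];
  let prev = "";
  for (const call of calls) {
    prev = await runTool(results.length ? derive(prev, call) : call);
    results.push(prev);
  }
  return results;
}
```

The design point the prompt makes is exactly this split: batch the independent calls, serialize only where a real data dependency exists.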
Three Memory Architectures for AI Companions: pgvector, Scratchpad, and Filesystem
submitted by /u/karakitap [link] [comments]
Claude Code Source Deep Dive (Part 2): Full System Prompt Assembly Flow + Original Prompt Text
Reader’s Note
On March 31, 2026, Anthropic’s Claude Code npm package accidentally shipped .map files that can be reverse-engineered back to source. Because the source maps point to original TypeScript files, this exposed a large portion of the codebase and made it possible to study prompt orchestration, tool routing, and runtime behavior in detail. This post is Part 2 of my literal-translation series. Focus here: how the system prompt is assembled and what the core prompt sections say.

Part II — Full Assembly Flow + Original Prompt Text (Literal Translation)

2.1 System Prompt Assembly Entry
File: src/constants/prompts.ts → getSystemPrompt()
The prompt is assembled in a fixed order:

return [
  // Static content (cacheable)
  getSimpleIntroSection(),
  getSimpleSystemSection(),
  getSimpleDoingTasksSection(),
  getActionsSection(),
  getUsingYourToolsSection(),
  getSimpleToneAndStyleSection(),
  getOutputEfficiencySection(),
  // Cache boundary
  SYSTEM_PROMPT_DYNAMIC_BOUNDARY,
  // Dynamic/session content
  getSessionSpecificGuidanceSection(),
  loadMemoryPrompt(),
  getAntModelOverrideSection(),
  computeSimpleEnvInfo(),
  getLanguageSection(),
  getOutputStyleSection(),
  getMcpInstructionsSection(),
  getScratchpadInstructions(),
  getFunctionResultClearingSection(),
  SUMMARIZE_TOOL_RESULTS_SECTION,
]

Key structure: static prefix first, then a dynamic boundary marker, then session/user-specific suffix.
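The static/dynamic split described in 2.1 can be sketched as a boundary-splitting helper. The boundary constant's name mirrors the source; the helper itself and the marker's string value are my own illustration, not code from the leak.

```typescript
// Illustrative sketch of the static-prefix / dynamic-suffix split.
// The marker's actual value in the real source is unknown to me.
const SYSTEM_PROMPT_DYNAMIC_BOUNDARY = "<!-- dynamic-boundary -->";

function splitAtBoundary(sections: string[]): {
  cacheable: string[]; // stable across turns, safe to prompt-cache
  dynamic: string[];   // session-specific, rebuilt each request
} {
  const i = sections.indexOf(SYSTEM_PROMPT_DYNAMIC_BOUNDARY);
  if (i === -1) return { cacheable: sections, dynamic: [] };
  return { cacheable: sections.slice(0, i), dynamic: sections.slice(i + 1) };
}
```

The payoff of this layout is that the long policy prefix hits the provider's prompt cache on every turn, while only the short suffix varies.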
2.2 Identity Prefix Variants
File: src/constants/system.ts
Three variants observed:
- Default interactive mode: “You are Claude Code, Anthropic's official CLI for Claude.”
- Agent SDK preset (non-interactive + append system prompt): “You are Claude Code, Anthropic's official CLI for Claude, running within the Claude Agent SDK.”
- Agent SDK no-append (non-interactive): “You are a Claude agent, built on Anthropic's Claude Agent SDK.”
Selection path (simplified): Vertex API → default | non-interactive + append → SDK preset | non-interactive → SDK | else → default

2.3 Attribution/Billing Header
Observed format:
x-anthropic-billing-header: cc_version={version}.{fingerprint}; cc_entrypoint={entrypoint}; [cch=00000;] [cc_workload={type};]
Notes:
- cch=00000 appears to be a client-auth placeholder rewritten later by the HTTP stack.
- cc_workload={type} seems to act as a routing/scheduling hint (e.g. cron-like workloads).

2.4 Intro Section (Identity Definition)
Source: getSimpleIntroSection()
Literal text: “You are an interactive agent that helps users with software engineering tasks. Use the instructions below and the tools available to you to assist the user.”

2.5 System Rules (getSimpleSystemSection())
High-level emphasis in this section includes: only assist with authorized/defensive security contexts; refuse destructive/malicious usage patterns; do not hallucinate URLs (unless clearly safe/programming-related); treat system reminders and hook feedback as structured control signals; watch for prompt injection in tool outputs; context compression is automatic as history grows.

2.6 Task Execution Guidelines (getSimpleDoingTasksSection())
Core directives in this block: do the actual engineering work in files, not just give abstract answers; read code before modifying; avoid unnecessary new files; avoid speculative refactors or over-engineering; prioritize secure code; diagnose failures before switching approach; verify outcomes honestly (don’t claim checks passed when they didn’t).
There is also an additional instruction set for internal users reinforcing: collaborator mindset, minimal comments, truthful verification reporting.

2.7 Safe Execution Guidelines (getActionsSection())
This section frames actions by reversibility + blast radius. Guidance pattern:
- local/reversible actions: usually proceed;
- destructive, shared-state, or hard-to-reverse actions: confirm first;
- prior one-time approval does not imply blanket future approval;
- investigate unexpected state before deleting/overwriting;
- don’t bypass safeguards (e.g. avoid --no-verify shortcuts).
Examples requiring confirmation include force-push, destructive git actions, deleting files/branches, external posting, shared infra/permission changes, and third-party uploads.

Why this matters
The architecture shows a deliberate split: stable, cacheable policy + behavior scaffolding, then dynamic session context and environment constraints. That design is practical for latency/cost control and also clarifies where behavior drift can occur (usually in the dynamic suffix + tool feedback loop, not the static policy prefix).

Up Next (Part 3)
I’ll continue with deeper sections around tool execution surfaces, guardrail layering, and how prompt/runtime controls interact in live turns.

submitted by /u/Ill-Leopard-6559 [link] [comments]
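The reversibility + blast-radius framing in section 2.7 of the post above can be sketched as a tiny approval gate. The `Action` shape and the rule itself are my own simplification for illustration, not the classifier in the actual source.

```typescript
// Hypothetical approval gate modeled on the reversibility + blast-radius
// framing; the fields and rule are a sketch, not the leaked code.
type Action = {
  description: string;
  reversible: boolean;   // can the effect be undone locally?
  sharedState: boolean;  // does it touch branches, infra, or third parties?
};

function needsConfirmation(a: Action): boolean {
  // Local, reversible actions proceed; anything destructive or touching
  // shared state requires explicit user confirmation first.
  return !a.reversible || a.sharedState;
}
```

Note that under this framing a prior "yes" to one force-push would not flip a global flag; each shared-state action is gated again, matching the "no blanket future approval" rule.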
Made VSmux to work with multiple Claude sessions inside VS code easily
Hey there, wanted to share this extension that I made for VSC and Cursor as an alternative to the fancy shmanzy ADE tools. It brings the agent CLIs, browser, git management, and code all in 1 app. You get the best features from the Codex app, superset and conductor but inside vscode so you don't need to open multiple apps to code and use the agents. I highly recommend the native tabs setting in vs code to switch between different projects easily too.

Links: github.com/maddada/vsmux
https://marketplace.visualstudio.com/items?itemName=maddada.VSmux
And you can just search for vsmux in the extensions panel in any vs code fork and you'll find it.

Some features:
- Shows indicators for running and finished agents. Optionally plays a sound
- Organize, split, and group sessions
- Add any cli agent with custom launch commands
- Manages integrated vsc browser tabs well with ability to add bookmarks per project
- 1 click to run commands
- Supports T3 code
- Agents keep running and reconnect when you close or restart vsc (customizable timeout)
- Very customizable
- Lots of hotkeys
- Names are taken from terminal titles to sidebar

I can run 10+ Claude code sessions all organized and grouped with it very easily. Still has a few rough edges but if I get some help and bug reports I'm sure I can make it perfect. I also have a tool called Agent Manager x on my github that shows you the current status of all your running agents in a floating toolbar (just macOS for now but please be my guest if you'd like to fork it for windows or linux). Please let me know how you like it. It's just a passion project for my own use, will stay free and open source forever.

submitted by /u/maddada_ [link] [comments]
Anthropic Leaked 512,000 Lines of Claude Code Source. Here's What the Code Actually Reveals.
On March 31, 2026, Anthropic accidentally published a source map file in their npm package that contained the complete TypeScript source code of Claude Code — 1,900 files, 512,000+ lines of code, including internal prompts, tool definitions, 44 hidden feature flags, and roughly 50 unreleased commands. Developer comments were preserved. Operational data was exposed. A GitHub mirror hit 9,000 stars in under two hours. Anthropic issued DMCA takedowns affecting 8,100+ repository forks within days. This is a breakdown of what the source code actually reveals — not the drama, but the engineering.

How the Leak Happened
The culprit was a .map file — a source map artifact. Source maps contain a sourcesContent array that embeds the complete original source code as strings. The fix is trivial: exclude *.map from production builds or add them to .npmignore. This was the second incident — a similar leak occurred in February 2025. The operational complexity of shipping a tool at this scale appears to have outpaced DevOps discipline.

The Architectural Picture
The most technically honest takeaway from this leak is: the competitive moat in AI coding tools is not the model. It is the harness. Claude Code runs on Bun (not Node.js) — a performance decision. The terminal UI is built with React and Ink — a pragmatic choice allowing frontend engineers to use familiar component patterns. The tool system accounts for 29,000 lines of code just for base tool definitions. Tool schemas are cached for prompt efficiency. Tools are filtered by feature gates, user type, and environment flags. The multi-agent coordinator pattern is production-grade and visible in the code: parallel workers managed by a coordinator, XML-formatted task-notification messages, and a shared scratchpad directory for cross-agent knowledge transfer. This is exactly what developers building multi-agent systems today are trying to implement — and now there's a reference implementation to study.
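An XML-formatted task-notification message of the kind the coordinator pattern uses might look roughly like this. The tag and attribute names are my guesses for illustration, not the schema from the leaked code.

```typescript
// Sketch of a coordinator-to-worker task notification.
// Tag names are hypothetical, not the leaked schema.
function escapeXml(s: string): string {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

function taskNotification(
  workerId: string,
  taskId: string,
  status: string,
  summary: string
): string {
  // Escaping matters: worker output often contains code with < and &.
  return [
    `<task-notification worker="${escapeXml(workerId)}" task="${escapeXml(taskId)}">`,
    `  <status>${escapeXml(status)}</status>`,
    `  <summary>${escapeXml(summary)}</summary>`,
    `</task-notification>`,
  ].join("\n");
}
```

XML framing like this is a common choice for agent-to-agent messages because models parse explicit open/close tags more reliably than ad-hoc delimiters.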
The YOLO permission system uses an ML classifier trained on transcript patterns to auto-approve low-risk operations — a production example of using a small fast model to gate a larger expensive one.

The Unreleased Features Worth Understanding
Three unreleased capabilities behind feature flags are architecturally significant:

KAIROS is an always-on background agent that maintains append-only daily log files, watches for relevant events, and acts proactively with a 15-second blocking budget to avoid disrupting active workflows. Exclusive tools include SendUserFile, PushNotification, and SubscribePR. KAIROS is the clearest signal available about where AI assistants are heading: from reactive tools that wait for commands to persistent background companions that monitor and act on your behalf. This is not a Claude Code feature. This is a preview of the next generation of all AI assistants.

ULTRAPLAN offloads complex planning to a remote Cloud Container Runtime using Opus 4.6 with 30-minute think time — far beyond any interactive session. A browser-based UI surfaces the plan for human approval. Results transfer via a special ULTRAPLAN_TELEPORT_LOCAL sentinel. This is async deep thinking as a product feature: separate the computationally expensive planning phase, run it at maximum model time, surface results for review.

BUDDY is a Tamagotchi-style companion pet system: 18 species across 5 rarity tiers (Common 60%, Uncommon 25%, Rare 10%, Epic 4%, Legendary 1%), independent 1% shiny chance, procedural stats (Debugging Skill, Patience, Chaos, Wisdom, Snark), ASCII sprite rendering with animation frames. Uses the Mulberry32 deterministic PRNG for consistent pet generation. Beneath the novelty: this exercises session persistence, personality modeling, and companion UX — all capabilities Anthropic is building for more serious agent memory systems.
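The article names Mulberry32 and quotes BUDDY's tier odds. Mulberry32 is a real, well-known 32-bit seeded PRNG, reproduced below; the rarity-picking helper built on top of it uses the quoted odds but is my own illustrative sketch, not the leaked implementation.

```typescript
// Mulberry32: a standard tiny seeded PRNG. Same seed, same sequence,
// which is what makes "consistent pet generation" from a user id work.
function mulberry32(seed: number): () => number {
  return () => {
    let t = (seed += 0x6d2b79f5);
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // uniform in [0, 1)
  };
}

// Cumulative tier thresholds from the quoted odds:
// Common 60%, Uncommon 25%, Rare 10%, Epic 4%, Legendary 1%.
const TIERS: Array<[string, number]> = [
  ["Common", 0.60],
  ["Uncommon", 0.85],
  ["Rare", 0.95],
  ["Epic", 0.99],
  ["Legendary", 1.0],
];

// Illustrative helper: map one PRNG roll onto a tier.
function pickRarity(roll: number): string {
  for (const [tier, cutoff] of TIERS) if (roll < cutoff) return tier;
  return "Legendary";
}
```

Seeding the generator from a stable user id means the pet is "random" but identical on every launch, with no server-side state needed.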
The Anti-Distillation Contradiction
The source code revealed a system designed to inject fake tool definitions into Claude Code's outputs to poison AI training data scraped from API traffic. The code comment explicitly states this measure is now "useless" — because the leak exposed its existence. This is the most intellectually interesting artifact in the entire codebase. The security mechanism depended entirely on secrecy, not technical robustness. Once the code was visible, the trick stopped working. The same applies to hidden feature flags, internal codenames, and internal roadmap references — many AI product security models are built on "if nobody sees the code, nobody can replicate it." That assumption is now broken. Claude Code's internal codename was also confirmed as "Tengu."

The Code Quality Question
Developer reactions to the code were mixed. Some described the architecture as underwhelming relative to the tool's capabilities. Others noted the detailed internal comments as useful context for understanding agent behavior. The frustration detection system, notably, uses a regex rather than an LLM inference call — likely for
i dug through claude code's leaked source and anthropic's codebase is absolutely unhinged
so claude code's full source leaked through a .map file in their npm package and someone uploaded it to github. i spent a few hours going through it and honestly i don't know where to start.

they built a tamagotchi inside a terminal
there's an entire pet system called /buddy. when you type it, you hatch a unique ascii companion based on your user id. 18 species including duck, capybara, dragon, ghost, axolotl, and something called "chonk". there's a full gacha rarity system, common to legendary with a 1% legendary drop rate, shiny variants, hats (crown, wizard, propeller, tinyduck), and stats like DEBUGGING, CHAOS, and SNARK. the pet sits beside your input box and reacts to your coding. the salt is "friend-2026-401" so it's an april fools feature dropping april 1st. i'm not making this up.

they hex encoded the word duck
one of the pet species names apparently collides with an internal model codename. so what did they do? they encoded ALL 18 species names in hexadecimal to dodge their own build scanner:

export const duck = String.fromCharCode(0x64,0x75,0x63,0x6b)

that is the word "duck". they hex encoded duck. because their own tooling flagged it.

voice mode uses deepgram and they can't use their own domain
there's a full push to talk voice system hidden in the code. it uses deepgram nova 3 for speech to text.

the project is internally codenamed tengu
every telemetry event starts with tengu_. feature flags have gemstone codenames like tengu_cobalt_frost (voice) and tengu_amber_quartz (voice kill switch). i kind of love it honestly

main.tsx is 803,924 bytes
one file. 4,683 lines. almost 1mb of typescript. their print utility is 5,594 lines. the file that handles messages is 5,512 lines. six files are over 4,000 lines each.

460 eslint-disable comments
four hundred and sixty.
at that point you're not writing typescript, you're writing javascript with extra steps

they deprecated their config writer and kept using it
the function that saves your auth credentials to disk is literally called writeFileSyncAndFlush_DEPRECATED(). they have 50+ functions with _DEPRECATED in the name that are still actively called in production. deprecated is just a vibe at anthropic apparently

my favorite comments in their codebase:
// TODO: figure out why (this is in their error handler. the function that handles YOUR errors doesn't understand its own errors)
// Not sure how this became a string followed by // TODO: Fix upstream (the upstream is their own code)
// This fails an e2e test if the ?. is not present. This is likely a bug in the e2e test. (they think their test is wrong but they're keeping the fix anyway)
// Mulberry32 — tiny seeded PRNG, good enough for picking ducks (this is the randomness algorithm for the pet system)

an engineer named ollie left this in production:
TODO (ollie): The memoization here increases complexity by a lot, and im not sure it really improves performance
in mcp/client.ts line 589. ollie shipped code they openly admit might be pointless. incredible energy. we've all been there

https://preview.redd.it/pfyvuwexfdsg1.png?width=1874&format=png&auto=webp&s=51c498157b0b511bfe17c34d1c784cd63c5c8c70

there's also a bunch of unreleased stuff:
kairos: an autonomous agent that can send push notifications and monitor github prs
ultraplan: spawns a 30 min opus session on a remote server to plan your entire task
coordinator mode: a multi agent swarm with workers and scratchpads
agent triggers: cron based scheduled tasks, basically a ci/cd agent
18 hidden slash commands that are disabled stubs including /bughunter, /teleport, and /autofix-pr

9 empty catch blocks in config.ts alone
this is the file that manages your authentication. they catch errors and do nothing with them nine times.
they literally had a bug (github issue #3117) where config saves wiped your auth state and they had to add a guard called wouldLoseAuthState() anyway

anthropic is a $380b company and their codebase has the same energy as my side projects at 3am. makes me feel better about my own code honestly

repo link: github.com/instructkr/claude-code

EDIT: more findings here: https://x.com/vedolos/status/2038948552592994528?s=20
EDIT 2: even more crazy findings lol: https://x.com/vedolos/status/2038968174847422586?s=20
EDIT 3: dug into their system prompts lol: https://x.com/vedolos/status/2038977464840630611?s=20
EDIT 4: found a undercover mode: https://x.com/vedolos/status/2039028274047893798?s=20
EDIT 5: mood tracking by claude lol: https://x.com/vedolos/status/2039196124645560799?s=20
its better if u guys follow: https://x.com/vedolos

submitted by /u/Clear_Reserve_8089 [link] [comments]
Peers MCP server - let AI coding sessions talk to each other
I figured out I get the best results when Claude (Claude Code) reviews ChatGPT (Codex) and vice versa. So I built Peers — a local MCP server that connects Claude Code (and Codex) sessions so they can:
- Discover each other — each session registers with a role, repo, branch, and what it's working on
- Collaborate through scratchpads — shared append-only docs for reviews, discussions, specs
- Share artifacts — publish diffs, type definitions, test reports that other sessions can pull
- Hand off to Codex — export/import full session context as structured markdown

How it works
GitLab: https://gitlab.com/Dave3991/peers-mcp

Would love feedback — especially if you've tried running parallel AI coding sessions.

submitted by /u/warezak_ [link] [comments]
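The discover/collaborate flow Peers describes can be sketched as a minimal in-memory data model. This is only an illustration of the concepts (session registry plus append-only scratchpad); the real server exposes this over MCP with persistence, and none of these names come from the actual project.

```typescript
// Illustrative data model only; not Peers' actual API.
type Session = { id: string; role: string; repo: string; branch: string };

const sessions = new Map<string, Session>();
const scratchpads = new Map<string, string[]>(); // append-only docs

// A session announces itself with a role, repo, and branch.
function register(s: Session): void {
  sessions.set(s.id, s);
}

// Entries are only ever appended, never edited, so two sessions can
// safely interleave review comments without clobbering each other.
function appendToScratchpad(doc: string, author: string, text: string): void {
  const entries = scratchpads.get(doc) ?? [];
  entries.push(`[${author}] ${text}`);
  scratchpads.set(doc, entries);
}
```

The append-only constraint is the key design choice: it turns the scratchpad into a shared log that both agents can read in order, rather than a document they fight over.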
What is Claude actually doing differently under the hood vs other LLMs?
I have been using Claude (especially Sonnet/Opus) a lot, and it's noticeably different from and better than other models like GPT or Gemini, especially in reasoning, tone, and consistency. I'm trying to understand this from a technical / systems perspective, not just vibes or prompting. Specifically:

1. Training & Alignment
How much of Claude's behavior comes from Constitutional AI / RLAIF vs standard RLHF? Is the "self-critique / self-revision loop" actually happening during training only, or also at inference time in Claude Code? And how the hell is Claude Code so much better than Codex, Antigravity, etc.?

2. Architecture
Is Claude still a fairly standard dense transformer, or are there meaningful architectural differences they have introduced?

3. Inference / Post-training behavior
Does Claude run internal reasoning passes (like reflection, scratchpads, or planning loops) before responding each time, which could explain its lag? How much of its "calm / coherent / less hallucination-prone" behavior comes from training, system prompts, or inference-time techniques?

4. Product / system design
Is the difference mostly model-level, or is it actually coming from: better prompt orchestration, tool use / agent loops, or response filtering layers?

5. Resources
Would love links to: Anthropic engineering blogs, research papers (especially on Constitutional AI / RLAIF), deep dives into Claude's architecture or training pipeline.

Trying to separate what's real vs what just feels better subjectively. Curious what people have dug into — do share any links which could be relevant please. Will share the learnings with everyone on this thread once we all combine our findings.

submitted by /u/Zealousideal-Air930 [link] [comments]
ChatGPT migrant struggling a bit with Claude and Google Drive
Pro plan user. I have the Google Drive connector installed on the desktop app. I am having trouble getting Claude to make folders and save files on the drive. It looks like it doesn't load the connector, but I have restarted, disconnected, and reconnected. Ended up hitting my token limit quickly because it kept calling APIs and trying to intercept an auth token from Drive's network requests. What am I doing wrong???

https://preview.redd.it/3y1df2t828rg1.jpg?width=1371&format=pjpg&auto=webp&s=d55d1ede2c7a8a9df46c423c8a8bf468a2bf8f5e

submitted by /u/Strange-Area9624 [link] [comments]
MiniClaw — for those frustrated that their AI starts from zero every session
A few months ago I got tired of re-explaining myself to my AI agent every single session. Context reset. Preferences gone. Tasks forgotten. The agent was smart — but it had no memory and no way to manage its own work over time. So we built the brain layer that was missing.

What we built:
MiniClaw is an open source cognitive architecture layer that sits on top of OpenClaw. It gives your agent persistent memory, an autonomous task brain, and the ability to file its own GitHub issues when it finds bugs.

Free and Open Source: → https://github.com/augmentedmike/miniclaw-os

What's different:
For starters, it's a one-script install.
- Long-term memory — hybrid vector + keyword search (mc-kb). Agent remembers what it learned, what failed, and what you care about — across sessions, weeks, months
- Autonomous kanban — the agent manages its own work queue (mc-board). It creates cards, advances them through review gates, and ships results without being prompted. See screenshot below:

https://preview.redd.it/ue9ae74y11qg1.png?width=1920&format=png&auto=webp&s=f0539af61f61e00864a6b83257fe0c1cffcf4703

- Nightly self-reflection — every night the agent reviews its day, extracts lessons, and writes them to memory (mc-reflection). It gets better over time
- Working memory — per-task scratchpad (mc-memo) that prevents the agent from repeating failed approaches across sessions
- Self-repair — when the agent hits a bug, mc-contribute automatically opens a GitHub issue with full context, then works to fix it. The repo is partially maintained by the agents themselves
- Persistent identity — mc-soul gives the agent a name, personality, and values that load every session. It's not a generic assistant anymore

34+ plugins total. Runs locally on a Mac. Your data never leaves your machine.

How Claude Code actually helped:
Claude Code was a genuine collaborator on the build — not just boilerplate generation.
The interesting work was architectural: the hybrid memory retrieval model (when to use vector search vs. keyword, how to rank results across entry types), the board gate system (what conditions a card must meet before it can advance columns), and the mc-contribute autonomous loop (how the agent decides what constitutes a bug worth filing vs. noise). The crazy thing was that we had Claude help us with features we were requesting, but when we gave it the ability to start reviewing the roadmap and come up with suggestions on its own is when it really started to shine — for example, suggesting the self-healing bug fix. If you look at the commit history you'll see the back-and-forth reflected in the diffs. Claude Code is listed in the README as a collaborator because that's genuinely what it was. Also, amelia-am1 is a MiniClaw agent using Claude Code.

What was missing in OpenClaw that MiniClaw adds:
- Intensive install → one script to install
- Agent starts cold every session → mc-kb + daily notes loaded on boot. Never starts cold again
- No way to track what the agent is working on → mc-board: full kanban lifecycle the agent manages itself
- Agent repeats the same failed approaches → mc-memo: scratchpad records what not to retry, read at session start
- No continuity of identity → mc-soul: persistent name, personality, values across every session
- Bugs disappear into the void → mc-contribute: agent files its own GitHub issues with full context

Same OpenClaw foundation, with a brain: self-hosted, your data, your hardware. Works with Claude (adding the ability to use other LLMs as well). Apache 2.0 — open source, always.

Still early. But the memory and board are production-stable and running daily on real workloads.

submitted by /u/ryanb082 [link] [comments]
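The hybrid vector + keyword ranking the post describes for mc-kb can be sketched roughly as below. The cosine helper, the keyword hit-rate score, and the 0.7/0.3 blend are my own assumptions for illustration; MiniClaw's actual retrieval model may differ entirely.

```typescript
// Hedged sketch of hybrid retrieval scoring; weights are arbitrary.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return na && nb ? dot / (Math.sqrt(na) * Math.sqrt(nb)) : 0;
}

// Fraction of query words that appear literally in the entry text.
function keywordScore(query: string, text: string): number {
  const words = query.toLowerCase().split(/\s+/).filter(Boolean);
  const hay = text.toLowerCase();
  const hits = words.filter((w) => hay.includes(w)).length;
  return words.length ? hits / words.length : 0;
}

type Entry = { text: string; embedding: number[] };

// Blend semantic similarity with exact-match signal and sort descending.
function rank(query: string, queryVec: number[], entries: Entry[]): Entry[] {
  const score = (e: Entry) =>
    0.7 * cosine(queryVec, e.embedding) + 0.3 * keywordScore(query, e.text);
  return [...entries].sort((x, y) => score(y) - score(x));
}
```

The usual rationale for a blend like this: embeddings catch paraphrases the keywords miss, while the keyword term rescues exact identifiers (function names, error strings) that embeddings often blur together.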
Claude Code session manager on Hyprland/Kitty/AGS
https://preview.redd.it/zamt0xdej2pg1.png?width=1282&format=png&auto=webp&s=8508088939220332f2d7c2ac017a6cf5cb8b40c1

I'm starting to have issues with management of my flock of Claude Code agents :) Very often, one of them is done but I miss it and then it just hangs in there. So I asked Claude Code to create an AGS widget/helper for this. I haven't written any of this — Claude Code wrote all of it and checked the format of the ~/.claude folder/files for me. I'm testing it today and it works really well. Obviously, I'm using Hyprland and Kitty with a custom AGS bar, but it can easily be done with dmenu or other bars or even without bars.

What it can do:
- Displays all active sessions in a bar (show/hide with shortcut key)
- Each session has an icon/color for status (idle, working, asked question, waiting for permission)
- Click on an icon focuses that terminal in any of the following situations: a) kitty on the current virtual desktop, b) kitty on another virtual desktop (switches desktops), c) kitty in a Hyprland scratchpad, d) if kitty has multiple tabs, this also switches to the proper tab
- Toggle Auto/Manual mode. Auto mode automatically switches to the terminal that requires attention (question or permission) but makes sure that once it switches, it doesn't switch again until you "resolve" the current terminal's status

The whole thing was a back-and-forth conversation in a single Claude Code session (~1.5 hours). I started by asking Claude to explore ~/.claude/ to understand what data is available. It discovered session files, conversation logs (.jsonl), and history.jsonl. Claude reverse-engineered the session status by reading the last entries of conversation log files: stop_reason: "end_turn" means idle, stop_reason: "tool_use" means waiting for permission, etc. I asked Claude to rewrite the data-gathering script in Go for performance.
It went from ~57ms (multiple shell subprocesses) to ~14ms (single binary with disk-cached history parsing). The kitty tab-switching required enabling remote control in kitty.conf and figuring out that kitty @ matches by shell PID, not foreground process PID.

I didn't write a single line of code. My role was only testing and some design decisions. It's been real fun to vibe code these internal tools lately. My idea was not to put the code up publicly (Claude did it all and it is easily reproducible for your environment), but just to show how window managers + Claude Code can be better than some tools specialized for this. But if anybody wants it, I can put it on GH.

submitted by /u/DotSoggy1048 [link] [comments]
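The status inference the post describes (read the last entry of a conversation .jsonl and map stop_reason to a session state) can be sketched like this. The "end_turn" → idle and "tool_use" → waiting-for-permission mappings come from the post; treating everything else as "working" is my own assumption.

```typescript
// Sketch of .jsonl-based session status inference, per the post's notes.
// Only the end_turn/tool_use mappings are from the post; the rest is a guess.
function statusFromLastEntry(jsonlText: string): string {
  const lines = jsonlText.trim().split("\n").filter(Boolean);
  if (lines.length === 0) return "unknown";
  try {
    // JSONL: one JSON object per line; the last line is the latest event.
    const entry = JSON.parse(lines[lines.length - 1]) as { stop_reason?: string };
    switch (entry.stop_reason) {
      case "end_turn": return "idle";
      case "tool_use": return "waiting-for-permission";
      default: return "working";
    }
  } catch {
    return "unknown"; // partial write or non-JSON trailing line
  }
}
```

Reading only the last line is what keeps this cheap enough to poll from a status bar; a watcher would re-run it whenever the log file changes.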
Key features include: Salesforce Alone, With Scratchpad.
Scratchpad is commonly used for: The system for sales productivity.
Based on 16 social mentions analyzed, 0% of sentiment is positive, 100% neutral, and 0% negative.