I cannot provide a meaningful summary about user opinions on "Fig" based on the provided content. The social mentions you've shared appear to be unrelated posts about KDE Plasma releases, political topics, and various news articles, with no apparent connection to a software tool called "Fig." Additionally, no user reviews were provided in the reviews section. To give you an accurate summary of user sentiment about Fig, I would need actual reviews and social mentions that specifically discuss the Fig software tool.
Mentions (30d)
20
Reviews
0
Platforms
7
GitHub Stars
25,133
5,499 forks
Industry
information technology & services
Employees
11
Funding Stage
Merger / Acquisition
Total Funding
$2.3M
GitHub followers
863
GitHub repos
42
GitHub stars
25,133
npm packages
13
KDE Plasma 6.4 released
The KDE community today announced the latest release: **[Plasma 6.4](https://kde.org/announcements/plasma/6/6.4.0/)**. This fresh release improves on nearly every front, with progress in accessibility, color rendering, tablet support, window management, and more.

Plasma already offered virtual desktops and customizable tiles to help organize your windows and activities, and now it lets you choose a different configuration of tiles on each virtual desktop. The Wayland session brings new accessibility features: you can now move the pointer using your keyboard’s number pad keys, or use a three-finger touchpad pinch gesture to zoom in or out.

Plasma’s file transfer notification now shows a speed graph, giving you a more visual idea of how fast the transfer is going and how long it will take to complete. When any application is in full-screen mode, Plasma will now enter Do Not Disturb mode and show only urgent notifications; when you exit full-screen mode, you’ll see a summary of any notifications you missed. When an application tries to access the microphone and finds it muted, a notification will pop up. A new feature in the Application Launcher widget places a green "New!" tag next to newly installed apps, so you can easily find where something you just installed lives in the menu.

The Display and Monitor page in System Settings comes with a brand-new HDR calibration wizard, and support for Extended Dynamic Range (a different kind of HDR) and the P010 video color format has been added. System Monitor now supports usage monitoring for AMD and Intel graphics cards, and it can even show GPU usage on a per-process basis. Spectacle, the built-in app for taking screenshots and screen recordings, has a much-improved design and more streamlined functionality. The background of the desktop or window now darkens when an authentication dialog shows up, helping you locate and focus on the window asking for your password.
There’s a brand-new Animations page in System Settings that groups all the settings for purely visual animated effects into one place, making them easier to find and configure. Aurorae is a newly added SVG vector-graphics theme engine for KWin window decorations. You can read more about these and many other features in the [Plasma 6.4 announcement](https://kde.org/announcements/plasma/6/6.4.0/) and the [complete changelog](https://kde.org/announcements/changelogs/plasma/6/6.3.5-6.4.0/).
I built agent-revamp-skills — open source agent skills for stack migrations (js→ts, CRA→Vite, Express→Fastify and more)
Been using Claude Code heavily and noticed a gap — great skill libraries exist for greenfield development but nothing structured for migrations and revamps. So I built agent-revamp-skills.

**What it is**

A collection of SKILL.md files that give AI coding agents a structured workflow for stack migrations. Not just "here's how to use TypeScript" but the full process: Audit → Strategize → Prepare → Migrate → Validate → Cut Over.

**8 complete skills across 5 phases**

- Phase 1 — Audit → codebase-audit (produces a structured AUDIT.md artifact)
- Phase 2 — Strategize → migration-strategy (signal-based decision table: big bang vs strangler fig vs parallel run) → risk-assessment (3×3 likelihood × impact matrix, hard blocker at score 9)
- Phase 3 — Prepare → test-coverage-baseline (must pass before any migration starts)
- Phase 4 — Migrate → js-to-typescript → cra-to-vite → express-to-fastify
- Phase 5 — Validate → behavioral-equivalence (shadow mode, snapshots, fuzzing)

**What makes each skill different from a tutorial**

Every SKILL.md has:

- Coexistence strategy (how old + new live together during transition)
- Equivalence tests (how to prove nothing broke)
- Rollback triggers (specific conditions, not vague)
- Binary done-criteria checklist

Works with: Claude Code, Cursor, Windsurf — anything that reads Markdown instruction files.

GitHub: agent-revamp-skills

What migrations are you doing repeatedly that should be a skill here?

submitted by /u/Particular_Cut3340
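The 3×3 likelihood × impact matrix behind the risk-assessment skill can be sketched in a few lines. Only the score-9 hard blocker comes from the post; the other band labels and thresholds here are illustrative assumptions:

```python
def risk_score(likelihood: int, impact: int) -> tuple[int, str]:
    """Score a migration risk on a 3x3 likelihood x impact matrix.

    Inputs run 1 (low) to 3 (high). A score of 9 is a hard blocker,
    as the skill specifies; the other bands are assumed labels.
    """
    if not (1 <= likelihood <= 3 and 1 <= impact <= 3):
        raise ValueError("likelihood and impact must be 1, 2, or 3")
    score = likelihood * impact
    if score == 9:
        band = "hard blocker"   # stated in the post
    elif score >= 6:
        band = "high"           # assumed banding
    elif score >= 3:
        band = "medium"
    else:
        band = "low"
    return score, band
```

A likely (3) but low-impact (1) risk scores 3 and lands in a middle band, while a certain, severe risk scores 9 and blocks the migration outright.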
CrowdStrike, Cisco and Palo Alto Networks all shipped agentic SOC tools at RSAC 2026 — the agent behavioral baseline gap survived all three
CrowdStrike CEO George Kurtz highlighted in his RSA Conference 2026 keynote that the fastest recorded adversary breakout time has dropped to 27 seconds. The average is now 29 minutes, down from 48 minutes in 2024. That is how much time defenders have before a threat spreads. CrowdStrike sensors now detect more than 1,800 distinct AI applications running on enterprise endpoints, representing nearly 160 million unique application instances. Every one generates detection events, identity events, and data access logs flowing into SIEM systems architected for human-speed workflows. Cisco found that 85% of surveyed enterprise customers have AI agent pilots underway, but only 5% have moved agents into production, according to Cisco President and Chief Product Officer Jeetu Patel in his RSAC blog post. That 80-point gap exists because security teams cannot answer the basic questions agents force: which agents are running, what they are authorized to do, and who is accountable when one goes wrong. “The number one threat is security complexity. But we’re running towards that direction in AI as well,” Etay Maor, VP of Threat Intelligence at Cato Networks, told VentureBeat at RSAC 2026. Maor has attended the conference for 16 consecutive years. “We’re going with multiple point solutions for AI. And now you’re creating the next wave of security complexity.”

**Agents look identical to humans in your logs**

In most default logging configurations, agent-initiated activity looks identical to human-initiated activity in security logs. “It looks indistinguishable if an agent runs Louis’s web browser versus if Louis runs his browser,” Elia Zaitsev, CTO of CrowdStrike, told VentureBeat in an exclusive interview at RSAC 2026. Distinguishing the two requires walking the process tree. “I can actually walk up that process tree and say, this Chrome process was launched by Louis from the desktop. This Chrome process was launched from Louis’s Claude Cowork or ChatGPT application. Thus, it’s agentically con
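The process-tree walk Zaitsev describes is straightforward to sketch. A toy parent map stands in for real endpoint telemetry here; the agent-host names and data shapes are illustrative assumptions, not CrowdStrike's actual schema:

```python
def launched_by_agent(pid, parent_of, name_of,
                      agent_hosts=frozenset({"claude", "chatgpt"})):
    """Walk up the process tree from `pid`; return True if any ancestor
    process name matches a known agent host application."""
    seen = set()
    while pid in parent_of and pid not in seen:
        seen.add(pid)               # guard against cycles in bad data
        pid = parent_of[pid]
        if name_of.get(pid, "").lower() in agent_hosts:
            return True
    return False

# Toy tree: chrome (300) was spawned by an agent app (200); chrome (301)
# was spawned from the desktop shell (100). init (1) is the root.
parent_of = {300: 200, 200: 1, 301: 100, 100: 1}
name_of = {1: "init", 100: "explorer", 200: "chatgpt", 300: "chrome", 301: "chrome"}

print(launched_by_agent(300, parent_of, name_of))  # True  — agent-launched browser
print(launched_by_agent(301, parent_of, name_of))  # False — desktop-launched browser
```

On a real endpoint the parent map would come from EDR telemetry rather than a hand-built dictionary, but the ancestry check is the same.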
Claude Code's source code appears to have leaked: here's what we know
Anthropic appears to have accidentally revealed the inner workings of one of its most popular and lucrative AI products, the agentic AI harness Claude Code, to the public. A 59.8 MB JavaScript source map file (.map), intended for internal debugging, was inadvertently included in version 2.1.88 of the @anthropic-ai/claude-code package on the public npm registry, pushed live earlier this morning. By 4:23 am ET, Chaofan Shou (@Fried_rice), an intern at Solayer Labs, had broadcast the discovery on X (formerly Twitter). The post, which included a direct download link to a hosted archive, acted as a digital flare. Within hours, the ~512,000-line TypeScript codebase was mirrored across GitHub and analyzed by thousands of developers. For Anthropic, a company currently riding a meteoric rise with a reported $19 billion annualized revenue run-rate as of March 2026, the leak is more than a security lapse; it is a strategic hemorrhage of intellectual property.

The timing is particularly critical given the commercial velocity of the product. Market data indicates that Claude Code alone has achieved an annualized recurring revenue (ARR) of $2.5 billion, a figure that has more than doubled since the beginning of the year. With enterprise adoption accounting for 80% of its revenue, the leak provides competitors—from established giants to nimble rivals like Cursor—a literal blueprint for how to build a high-agency, reliable, and commercially viable AI agent. Anthropic confirmed the leak in a spokesperson’s emailed statement to VentureBeat, which reads: “Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again.”

**The anatomy of agentic memory**

The most significant takeaway for competitors lies in how Anthropic solved "context entropy"—the tendency for AI agents to
Show HN: Coasts – Containerized Hosts for Agents
Hi HN - We've been working on Coasts (“containerized hosts”) to make it so you can run multiple localhost instances, and multiple docker-compose runtimes, across git worktrees on the same computer. Here’s a demo: https://www.youtube.com/watch?v=yRiySdGQZZA. There are also videos in our docs that give a good conceptual overview: https://coasts.dev/docs/learn-coasts-videos.

Agents can make code changes in different worktrees in isolation, but it's hard for them to test their changes without multiple localhost runtimes that are isolated and scoped to those worktrees as well. You can do it up to a point with port hacking tricks, but it becomes impractical when you have a complex docker-compose with many services and multiple volumes.

We started playing with Codex and Conductor in the beginning of this year and had to come up with a bunch of hacky workarounds to give the agents access to isolated runtimes. After bastardizing our own docker-compose setup, we came up with Coasts as a way for agents to have their own runtimes without having to change your original docker-compose.

A containerized host (from now on we’ll just say “coast” for short) is a representation of your project's runtime, like a devcontainer but without the IDE stuff—it’s just focused on the runtime. You create a Coastfile at your project root and usually point to your project's docker-compose from there. When you run `coast build` next to the Coastfile you will get a build (essentially a docker image) that can be used to spin up multiple Docker-in-Docker runtimes of your project.

Once you have a coast running, you can then do things like assign it to a worktree, with `coast assign dev-1 -w worktree-1`. The coast will then point at the worktree-1 root.

Under the hood, the host project root and any external worktree directories are Docker-bind-mounted into the container at creation time, but the /workspace dir, where we run the services of the coast from, is a separate Linux bind mount that we create inside the running container. When switching worktrees we basically just do `umount -l /workspace`, `mount --bind <path_to_worktree_root>`, `mount --make-rshared /workspace` inside of the running coast. The rshared flag sets up mount propagation so that when we remount /workspace, the change flows down to the inner Docker daemon's containers.

The main idea is that the agents can continue to work host-side but then run exec commands against a specific coast instance if they need to test runtime changes or access runtime logs. This makes it so that we are harness agnostic and create interoperability around any agent or agent harness that runs host-side.

Each coast comes with its own set of dynamic ports: you define the ports you wish to expose back to the host machine in the Coastfile. You're also able to "checkout" a coast. When you do that, socat binds the canonical ports of your coast (e.g. web 3000, db 5432) to the host machine. This is useful if you have hard-coded ports in your project or need to do something like test webhooks.

In your Coastfile you point to all the locations on your host machine where you store your worktrees for your project (e.g. ~/.codex/worktrees). When an agent runs `coast lookup` from a host-side worktree directory, it is able to find the name of the coast instance it is running on, so it can do things like call `coast exec dev-1 make tests`. If your agent needs to do things like test with Playwright, it can do so host-side by using the dynamic port of your frontend.

You can also configure volume topologies, omit services and volumes that your agent doesn't need, as well as share certain services host-side so you don't add overhead to each coast instance. You can also define strategies for how each service should behave after a worktree assignment change (e.g. none, hot, restart, rebuild). This helps you optimize switching worktrees so you don't have to do a whole docker-compose down and up cycle every time.

We'd love to answer any questions and get your feedback!
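The worktree switch described above reduces to three mount operations. A sketch that only assembles that command sequence (the helper name is made up, and actually running these requires root inside the container):

```python
def worktree_switch_commands(worktree_root: str) -> list[list[str]]:
    """Build the umount/bind/rshared sequence the post describes for
    repointing /workspace at a new worktree inside a running coast."""
    return [
        # Lazily detach the old bind mount so busy files don't block the switch.
        ["umount", "-l", "/workspace"],
        # Bind the new worktree root over /workspace.
        ["mount", "--bind", worktree_root, "/workspace"],
        # Propagate the remount down to the inner Docker daemon's containers.
        ["mount", "--make-rshared", "/workspace"],
    ]
```

Each entry could then be handed to `subprocess.run` inside the running container; `--make-rshared` is what lets the remount flow into the inner Docker daemon's mount namespace.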
Sora is shutting down. OpenAI's 'backup' is a full data export. I built SoraVault (free, open source)
Update: SoraVault 2.0 is now available - saves Sora v1 images, v2 videos, liked content and drafts all within Sora2!

Update 2: Chrome plugin released, available on GitHub.

I started using Sora when it first launched. Image generation always fascinated me. The whole process, not just the outputs. Testing new prompts, iterating on ideas, checking what others were creating on the worldwide feed, then putting my own spin on it. Some images hit a nerve and got 1,000+ likes. It was addictive.

Then last week, Sam announced Sora is done. OK. He said they'd share "details on preserving your work" soon. I waited. Two days ago, the "details" arrived: request a full ChatGPT data export. One link, valid for 24 hours, containing everything from 3 years of ChatGPT history. Dig through the dump yourself to find your Sora images. No prompts attached. No original quality. That's their "preserve your work" solution. No thanks.

So I built SoraVault. It's a Tampermonkey script that pulls your full Sora library before it's gone:

- Downloads Sora v2 videos (Profile and Draft) in full resolution
- Downloads all Sora v1 images in original quality (the actual renders from OpenAI's servers, not compressed thumbnails)
- Saves every prompt as a matching .txt sidecar file so you keep the creative thinking behind each piece, not just the files
- Smart filters: keyword, aspect ratio, quality, date range, operation type (generate/extend/edit)
- Parallel downloads (up to 5). 500 files in under 10 minutes.
- File System Access API: pick one folder, done. No "Save As" popup for every file.

The images are one thing. But losing the prompts, the iterations, the weird ideas that actually worked, the learning from hundreds of attempts. That's what I wasn't willing to let go.

How it works technically: API interception (raw JSON responses between sora.chatgpt.com and OpenAI's servers), not a DOM scrape.
This is why it pulls original resolution files and complete metadata, not whatever thumbnails are currently rendered.

How to get it:

- GitHub (free, full source): https://github.com/charyou/SoraVault/
- Demo video (1 min): https://www.youtube.com/watch?v=0eFteRew5mI
- A standalone desktop app (Mac/Win/Linux, no browser needed) is coming next week.
- This only works while Sora's servers are live. Once they pull the plug, the data is gone.

Happy to answer questions.

Edit: I have a working prototype of a standalone desktop app (no Tampermonkey, no browser extension). If that's something people want, I'll push the release this week. Any interest? :)

Update: SoraVault 2.0 is now live! https://github.com/charyou/SoraVault/

I just pushed a massive update that moves the tool to an API-driven architecture. Major updates in 2.0:

- No more scrolling: it now fetches Sora 1 and 2 content simultaneously in the background.
- ❤️ Backup "Liked" content from other creators.
- 🔗 JSON saved with raw JSON metadata (including valid REMIX Chain Download URLs!)
- 📂 Auto-sorting into 6 dedicated subfolders.
- MUCH faster scans
- Many more fixes and UI updates.

Edit 2: Chrome / Edge plugin is coming soon!

submitted by /u/charju_
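The prompt-sidecar idea is easy to reproduce outside the script: write the media bytes, then a .txt with the same stem. A minimal sketch (this helper is hypothetical, not SoraVault's actual code):

```python
from pathlib import Path

def save_with_sidecar(folder: Path, filename: str, media: bytes, prompt: str) -> Path:
    """Write a downloaded render plus a matching .txt prompt sidecar,
    so the creative context survives alongside the file."""
    folder.mkdir(parents=True, exist_ok=True)
    media_path = folder / filename
    media_path.write_bytes(media)
    # with_suffix swaps the extension: render_001.png -> render_001.txt
    media_path.with_suffix(".txt").write_text(prompt, encoding="utf-8")
    return media_path
```

Pairing the prompt with the file by stem is what lets later tooling (search, dedup, re-generation) find the text that produced each image.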
OpenAI’s Frontier puts AI agents in a fight SaaS can’t afford to lose
When OpenAI launched Frontier in February, the announcement was described as a platform for enterprise AI agents. What it actually signalled was a challenge to the revenue architecture underpinning the software industry. Frontier is designed to act as a semantic layer in an organisation’s existing systems, connecting data warehouses, CRM platforms, ticketing tools, and internal […] The post OpenAI’s Frontier puts AI agents in a fight SaaS can’t afford to lose appeared first on AI News.
Fixing AI failure: Three changes enterprises should make now
Recent reports about AI project failure rates have raised uncomfortable questions for organizations investing heavily in AI. Much of the discussion has focused on technical factors like model accuracy and data quality, but after watching dozens of AI initiatives launch, I’ve noticed that the biggest opportunities for improvement are often cultural, not technical. Internal projects that struggle tend to share common issues. For example, engineering teams build models that product managers don’t know how to use. Data scientists build prototypes that operations teams struggle to maintain. And AI applications sit unused because the people they were built for weren't involved in deciding what “useful” really meant.

In contrast, organizations that achieve meaningful value with AI have figured out how to create the right kind of collaboration across departments and establish shared accountability for outcomes. The technology matters, but organizational readiness matters just as much. Here are three practices I’ve observed that address the cultural and organizational barriers that can impede AI success.

**Expand AI literacy beyond engineering**

When only engineers understand how an AI system works and what it’s capable of, collaboration breaks down. Product managers can't evaluate trade-offs they don't understand. Designers can't create interfaces for capabilities they can't articulate. Analysts can't validate outputs they can't interpret. The solution isn't making everyone a data scientist. It's helping each role understand how AI applies to their specific work. Product managers need to grasp what kinds of generated content, predictions or recommendations are realistic given available data. Designers need to understand what the AI can actually do so they can design features users will find useful. Analysts need to know which AI outputs require human validation versus which can be trusted. When teams share this working vocabulary, AI stops being something that happens in
feat: Sandbox integration test — real binary lifecycle + stress testing (#37)
## Summary

Implements comprehensive GitHub Actions sandbox testing workflow that validates real daemon binary lifecycle, catching deployment bugs that in-process tests cannot detect.

## Changes

- **Complete Sandbox Workflow**: Tests actual `pi-daemon` binary in CI environment
- **Comprehensive Coverage**: Smoke tests, concurrency, stress testing, crash recovery
- **Real-world Validation**: PID files, port binding, signal handling, memory behavior
- **Future-Ready**: Enhancement issues created for persistence, supervisor, scheduler testing

## Test Phases Implemented

### 🔍 Phase 1: Smoke Testing

- **Binary Startup**: Release build starts as real daemon process
- **Endpoint Validation**: Health, status, agent CRUD, webchat, OpenAI API
- **PID Management**: daemon.json creation, tracking, cleanup verification
- **Basic Functionality**: All core features work in real deployment scenario

### ⚡ Phase 2: Concurrency & Load Testing

- **HTTP Load**: 50 concurrent requests to `/api/status` endpoint
- **Agent Stress**: 20 concurrent agent registrations with verification
- **WebSocket Load**: 5 concurrent WebSocket connections within per-IP limits
- **Memory Monitoring**: RSS usage tracking with 200MB warning threshold

### 💪 Phase 3: Stress & Recovery Testing

- **Sustained Load**: 30-second continuous request generation with memory growth monitoring
- **Crash Recovery**: Kill -9 simulation → restart verification → full functionality restored
- **Memory Validation**: Growth monitoring with warnings if >50MB increase during load

### 🛑 Phase 4: Graceful Shutdown Testing

- **API Shutdown**: `POST /api/shutdown` endpoint triggers graceful exit
- **Process Cleanup**: PID file removal, port release verification
- **CLI Validation**: Commands handle daemon state correctly when stopped

## Critical Gaps Addressed

| What In-Process Tests Miss | Real Deployment Bug Example | Sandbox Test Coverage |
|---------------------------|----------------------------|-----------------------|
| Binary actually starts | Compiles but panics on launch | ✅ Real daemon startup |
| PID file lifecycle | Written but not cleaned up | ✅ File creation/removal |
| Port binding issues | Works on random ports, fails on 4200 | ✅ Standard port binding |
| Signal handling | Ctrl+C cleanup, SIGTERM shutdown | ✅ Kill signals + cleanup |
| Concurrent behavior | Race conditions under load | ✅ 50+ concurrent operations |
| Memory leaks | Only visible after sustained use | ✅ Memory growth monitoring |
| Config from disk | Tests use in-memory config | ✅ Real TOML file loading |
| WebSocket limits | Per-IP connection enforcement | ✅ Connection limit testing |

## Future Enhancements Created

### Issue #77: P2.6 Persistence Testing (Phase 2+)

- Data survival across restarts (agents, sessions, usage)
- Database integrity after ungraceful shutdown
- **Blocked by:** #13 (SQLite memory substrate)

### Issue #78: P3.4 Supervisor Stress Testing (Phase 3+)

- Heartbeat timeout detection under load
- Auto-restart functionality validation
- **Blocked by:** #17 (Supervisor implementation)

### Issue #79: P3.5 Scheduler Validation (Phase 3+)

- Cron job execution timing accuracy
- Job management under concurrent load
- **Blocked by:** #16 (Cron scheduler engine)

## Workflow Configuration

### Trigger Conditions

- **Pull Requests** to main branch
- **Path Filter**: Only when `crates/**`, `Cargo.toml`, `Cargo.lock` change
- **Skip**: Documentation-only changes (no unnecessary CI overhead)

### Environment Setup

- **Ubuntu Latest**: Standard CI environment
- **Release Build**: Tests production binary (optimized, no debug symbols)
- **Dependencies**: jq for JSON parsing, websocat for WebSocket testing
- **Timeout**: 10 minutes prevents hung processes from blocking CI

### Error Handling & Reporting

- **Actionable Errors**: Clear failure messages with context
- **Resource Monitoring**: Memory usage warnings and alerts
- **Cleanup**: Guaranteed daemon process cleanup even on test failures
- **Debugging**: Process PID tracking and status validation

## Test Execution Flow

```bash
# 1. Build release binary
cargo build --release

# 2. Start daemon in background
./target/release/pi-daemon start --foreground &

# 3. Wait for health endpoint (30s timeout)
curl -sf http://127.0.0.1:4200/api/health

# 4. Run comprehensive test suite
# - API endpoint validation
# - Agent CRUD lifecycle
# - Webchat content verification
# - OpenAI compatibility testing
# - Concurrent load testing
# - Memory usage monitoring
# - Crash recovery simulation
# - Graceful shutdown validation

# 5. Cleanup and summary
pkill pi-daemon && rm daemon.json
```

## Benefits

- ✅ **Deployment Confidence**: Catches real-world integration issues
- ✅ **Performance Validation**: Memory and concurrency behavior under load
- ✅ **Recovery Testing**: Ensures robustness against crashes and restarts
- ✅ **Signal Handling**: Validates production process management
- ✅ **Resource Management**: Prevents port confli
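The 30-second health-endpoint wait in the execution flow generalizes to a small retry loop. A sketch in Python (an assumed stand-in for the `curl` polling step, not part of the actual workflow):

```python
import time
import urllib.request

def wait_for_health(url: str, timeout_s: float = 30.0, interval_s: float = 1.0) -> bool:
    """Poll a health endpoint until it returns HTTP 200 or the deadline passes."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # daemon not listening yet; retry after a short sleep
        time.sleep(interval_s)
    return False
```

Returning a boolean rather than raising keeps the caller free to decide whether a startup timeout should fail the whole CI job or just skip dependent phases.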
feat: Quota-aware trial scheduling based on /usage subscription limits
## Problem

The performance harness burns significant tokens per trial (300K-1.8M each). With subscription-based Claude Code, daily and weekly quotas are real constraints. Currently there is zero awareness of quota state when recommending or executing benchmark runs — the orchestrator will happily suggest 7 trials that consume 5M+ tokens without knowing the user is at 80% of their daily limit with no reset for 5 days.

## Desired Behavior

The harness (and the orchestrator recommending runs) should be **quota-aware**:

1. **Before recommending trials**, estimate the token cost based on historical averages per task
2. **Query `/usage`** (or equivalent subscription API) to get:
   - Current daily usage vs daily limit
   - Current weekly usage vs weekly limit
   - Reset timing (when does daily reset? when does weekly reset?)
3. **Apply scheduling intelligence**:
   - If daily resets tomorrow and we're at 60% → run all trials now
   - If weekly doesn't reset for 5 days and we're at 70% → suggest deferring or running fewer trials
   - If we're at 90% daily but resets in 2 hours → suggest waiting
4. **Surface quota impact** in run recommendations:

   ```
   Proposed: 7 trials, est. ~5M tokens
   Current quota: 12M/20M daily (60%), resets in 14h
   Weekly: 45M/100M (45%), resets in 3d
   Recommendation: Safe to proceed — daily headroom sufficient
   ```

## Implementation Considerations

- `/usage` data structure needs investigation — what does the subscription API expose?
- Historical token-per-task averages can be computed from existing results
- The 70% threshold is a starting point — should be configurable
- Weekly quota is the binding constraint when reset is far away
- Daily quota matters more when reset is imminent
- The orchestrator (not just the runner CLI) should have this awareness — it's the one suggesting "let's run 18 trials"

## Not In Scope (first pass)

- Automatic pause/resume of running trials (too complex for v1)
- Real-time token tracking during a trial (stream parsing is post-hoc)
- Multi-user quota coordination

## Acceptance Criteria

- [ ] Runner CLI shows estimated token cost before execution
- [ ] Quota state queried and displayed before trial recommendations
- [ ] Scheduling advice based on quota headroom + reset timing
- [ ] Configurable threshold (default 70% of remaining quota)
- [ ] Works with current subscription model (daily + weekly limits)
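The scheduling rules above condense into a tiny decision function. The 70% threshold comes from the issue itself; the "reset imminent" cutoff and the plain-number inputs are assumptions, since the `/usage` data structure is still to be investigated:

```python
def quota_advice(used: float, limit: float, est_cost: float,
                 hours_to_reset: float, threshold: float = 0.70) -> str:
    """Recommend whether to run trials now, wait for a quota reset,
    or defer, based on estimated cost vs. remaining headroom."""
    remaining = limit - used
    if est_cost <= threshold * remaining:
        return "proceed"                 # headroom sufficient
    if hours_to_reset <= 6:              # assumed "reset imminent" cutoff
        return "wait for reset"
    return "defer or run fewer trials"

# Matches the issue's example: ~5M tokens proposed, 12M/20M daily used,
# reset in 14h → 8M remaining, 70% of that is 5.6M, so the run fits.
print(quota_advice(12e6, 20e6, 5e6, hours_to_reset=14))  # proceed
```

Running the same check against both the daily and the weekly quota and taking the most conservative answer would capture the "weekly is binding when reset is far away" consideration.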
Add example usage for Costory billing datasources in documentation
- Included Terraform example for Anthropic billing datasource. - Added Terraform example for Cursor billing datasource. - Defined required variables and provider configuration for both datasources.
chore(deps): update updates-patch-minor
> ℹ️ **Note** > > This PR body was truncated due to platform limits. This PR contains the following updates: | Package | Update | Change | |---|---|---| | 1password/connect-sync | patch | `1.8.1` → `1.8.2` | | [alpine/openclaw](https://openclaw.ai) ([source](https://redirect.github.com/openclaw/openclaw)) | minor | `2026.2.22` → `2026.3.8` | | [cloudflare/cloudflared](https://redirect.github.com/cloudflare/cloudflared) | minor | `2026.2.0` → `2026.3.0` | | kerberos/agent | patch | `v3.6.12` → `v3.6.15` | | [searxng/searxng](https://searxng.org) ([source](https://redirect.github.com/searxng/searxng)) | patch | `2026.3.8-a563127a2` → `2026.3.9-d4954a064` | --- > [!WARNING] > Some dependencies could not be looked up. Check the [Dependency Dashboard](../issues/304) for more information. --- ### Release Notes <details> <summary>openclaw/openclaw (alpine/openclaw)</summary> ### [`v2026.3.8`](https://redirect.github.com/openclaw/openclaw/blob/HEAD/CHANGELOG.md#202638) [Compare Source](https://redirect.github.com/openclaw/openclaw/compare/v2026.3.7...v2026.3.8) ##### Changes - CLI/backup: add `openclaw backup create` and `openclaw backup verify` for local state archives, including `--only-config`, `--no-include-workspace`, manifest/payload validation, and backup guidance in destructive flows. ([#​40163](https://redirect.github.com/openclaw/openclaw/issues/40163)) thanks [@​shichangs](https://redirect.github.com/shichangs). - macOS/onboarding: add a remote gateway token field for remote mode, preserve existing non-plaintext `gateway.remote.token` config values until explicitly replaced, and warn when the loaded token shape cannot be used directly from the macOS app. ([#​40187](https://redirect.github.com/openclaw/openclaw/issues/40187), supersedes [#​34614](https://redirect.github.com/openclaw/openclaw/issues/34614)) Thanks [@​cgdusek](https://redirect.github.com/cgdusek). 
- Talk mode: add top-level `talk.silenceTimeoutMs` config so Talk waits a configurable amount of silence before auto-sending the current transcript, while keeping each platform's existing default pause window when unset. ([#​39607](https://redirect.github.com/openclaw/openclaw/issues/39607)) Thanks [@​danodoesdesign](https://redirect.github.com/danodoesdesign). Fixes [#​17147](https://redirect.github.com/openclaw/openclaw/issues/17147). - TUI: infer the active agent from the current workspace when launched inside a configured agent workspace, while preserving explicit `agent:` session targets. ([#​39591](https://redirect.github.com/openclaw/openclaw/issues/39591)) thanks [@​arceus77-7](https://redirect.github.com/arceus77-7). - Tools/Brave web search: add opt-in `tools.web.search.brave.mode: "llm-context"` so `web_search` can call Brave's LLM Context endpoint and return extracted grounding snippets with source metadata, plus config/docs/test coverage. ([#​33383](https://redirect.github.com/openclaw/openclaw/issues/33383)) Thanks [@​thirumaleshp](https://redirect.github.com/thirumaleshp). - CLI/install: include the short git commit hash in `openclaw --version` output when metadata is available, and keep installer version checks compatible with the decorated format. ([#​39712](https://redirect.github.com/openclaw/openclaw/issues/39712)) thanks [@​sourman](https://redirect.github.com/sourman). - CLI/backup: improve archive naming for date sorting, add config-only backup mode, and harden backup planning, publication, and verification edge cases. ([#​40163](https://redirect.github.com/openclaw/openclaw/issues/40163)) Thanks [@​gumadeiras](https://redirect.github.com/gumadeiras). - ACP/Provenance: add optional ACP ingress provenance metadata and visible receipt injection (`openclaw acp --provenance off|meta|meta+receipt`) so OpenClaw agents can retain and report ACP-origin context with session trace IDs. 
([#​40473](https://redirect.github.com/openclaw/openclaw/issues/40473)) thanks [@​mbelinky](https://redirect.github.com/mbelinky). - Tools/web search: alphabetize provider ordering across runtime selection, onboarding/configure pickers, and config metadata, so provider lists stay neutral and multi-key auto-detect now prefers Grok before Kimi. ([#​40259](https://redirect.github.com/openclaw/openclaw/issues/40259)) thanks [@​kesku](https://redirect.github.com/kesku). - Docs/Web search: restore $5/month free-credit details, replace defunct "Data for Search"/"Data for AI" plan names with current "Search" plan, and note legacy subscription validity in Brave setup docs. Follows up on [#​26860](https://redirect.github.com/openclaw/openclaw/issues/26860). ([#​40111](https://redirect.github.com/openclaw/openclaw/issues/40111)) Thanks [@​remusao](https://redirect.github.com/remusao). - Extensions/ACPX tests: move the shared runtime fixture helper from `src/runtime-internals/` to `src/test-utils/` so the test-only he
# [AI Agent] Master Tracking: Complete AI Agent Implementation
## 🎯 Goal

Track the complete implementation of autonomous Python AI Agent for CoffeeOrderSystem.

---

## 📋 Implementation Steps

### Phase 1: Infrastructure (Week 1)

- [ ] **Step 1:** Setup Python AI Agent Service infrastructure #133
  - Python service with FastAPI
  - Docker integration
  - Basic health checks
  - Makefile commands

### Phase 2: AI Integration (Week 1-2)

- [ ] **Step 2:** Implement Cognee integration with semantic code search #134
  - CogneeService with RAG search
  - Architecture context gathering
  - Entity file discovery
  - Integration tests
- [ ] **Step 3:** Implement PlannerAgent with LangChain #135
  - TaskPlan models
  - LangChain planning chain
  - Prompt templates
  - Plan generation and posting

### Phase 3: Code Generation (Week 2-3)

- [ ] **Step 4:** Implement DevAgent for code generation #136
  - GitHub file operations
  - Code generation prompts
  - Create/modify/delete operations
  - Branch management
- [ ] **Step 5:** Implement PRAgent and workflow orchestration #137
  - PR creation with rich descriptions
  - Workflow orchestrator
  - Background task processing
  - Complete end-to-end flow

### Phase 4: Advanced Features (Week 3-4)

- [ ] **Step 6:** Add learning/memory system
  - Store successful patterns in Cognee
  - Learn from PR reviews
  - Avoid failed patterns
  - Improve over time
- [ ] **Step 7:** Add GitHub webhook listener
  - Auto-trigger on issue label
  - Real-time processing
  - Queue management
  - Concurrent task handling

---

## 🎯 Success Criteria

### MVP (Minimum Viable Product)

- ✅ Agent creates plans for issues
- ✅ Agent generates compilable code
- ✅ Agent creates PRs with descriptions
- ✅ Works for simple tasks (add field, update config)
- ✅ Error handling with GitHub notifications

### Production Ready

- ☐ Handles complex multi-layer changes
- ☐ Learns from successful PRs
- ☐ Automatic triggering via webhooks
- ☐ Rate limiting and queue management
- ☐ Comprehensive test coverage (>80%)
- ☐ Monitoring and metrics

---

## 📊 Architecture Overview

```
┌────────────────────────────────────┐
│ GitHub Issues (labeled: ai-agent)  │
└───────────────┬────────────────────┘
                │
                ↓ (webhook or manual trigger)
┌────────────────────────────────────┐
│       WorkflowOrchestrator         │
└───────────────┬────────────────────┘
       ┌────────┼─────────┐
       ↓        ↓         ↓
 PlannerAgent  DevAgent  PRAgent
   │     │      │    │      │
   ↓     ↓      ↓    ↓      ↓
 Cognee  LLM   LLM GitHub GitHub
 (RAG) (GPT-4)      (API)  (API)
```

### Component Responsibilities

**PlannerAgent:**
- Analyzes GitHub issue
- Searches codebase via Cognee
- Creates structured TaskPlan
- Posts plan as issue comment

**DevAgent:**
- Generates code via LLM
- Creates/modifies/deletes files
- Commits to feature branch
- Preserves code style

**PRAgent:**
- Creates pull request
- Writes comprehensive PR description
- Links to original issue
- Adds testing checklist

**WorkflowOrchestrator:**
- Coordinates all agents
- Handles errors
- Posts progress updates
- Manages background execution

---

## 📦 Tech Stack

### Core
- **Python 3.11+** - Agent runtime
- **FastAPI** - Web framework
- **LangChain** - LLM orchestration
- **OpenAI GPT-4** - Code generation
- **PyGithub** - GitHub API client

### AI/ML
- **Cognee** - RAG and semantic search
- **OpenAI Embeddings** - Vector search
- **LangChain Chains** - Prompt management

### Infrastructure
- **Docker** - Containerization
- **PostgreSQL** - Shared with .NET API
- **uvicorn** - ASGI server

---

## 📝 Usage Example

### 1. Create Issue

```markdown
Title: Add loyalty points to Customer
Labels: ai-agent, enhancement

Description:
Add LoyaltyPoints field (int, default 0) to Customer entity.

Requirements:
- Update Domain/Entities/Customer.cs
- Update Application/DTOs/CustomerDto.cs
- Create EF Core migration
- Add unit tests
```

### 2. Trigger Agent

```bash
make agent-process
# Enter issue number: 138
```

### 3. Monitor Progress

Issue comments show:

```
🤖 AI Agent Started
Phase 1/3: Analyzing task...

🤖 Execution Plan
Summary: Add LoyaltyPoints field to Customer entity
Steps:
1. MODIFY Domain/Entities/Customer.cs
2. MODIFY Application/DTOs/CustomerDto.cs
3. CREATE Migration file
4. CREATE Test file

🛠️ Phase 2/3: Generating code (4 files)...
✅ Step 1/4: modify Customer.cs
✅ Step 2/4: modify CustomerDto.cs
...

📦 Phase 3/3: Creating pull request...

🤖 Pull Request Created: #139
Branch: ai-agent/issue-138
Ready for review!
```

### 4. Review PR

PR includes:
- Closes #138
- Comprehensive description
- File changes summary
- Testing checklist
- Risk assessment

### 5. Merge

Agent learns from successful merge for future tasks.

---

## 🧪 Testing Strategy

### Unit Tests
- Agent logic (plan parsing, code generation)
- Service mocks (GitHub, Cognee)
- P
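The planner → dev → PR flow the issue describes can be sketched as a thin orchestrator that drives three agent interfaces and records progress updates. Everything below is a hypothetical stand-in inferred from the issue text (class names, method signatures, the stub agents), not the actual CoffeeOrderSystem code:

```python
from dataclasses import dataclass, field


@dataclass
class TaskPlan:
    summary: str
    steps: list[str] = field(default_factory=list)


class WorkflowOrchestrator:
    """Coordinates planner -> dev -> PR agents for one labeled issue.

    Progress updates (posted as issue comments in the real system)
    are collected into `progress` here for simplicity.
    """

    def __init__(self, planner, dev, pr):
        self.planner, self.dev, self.pr = planner, dev, pr
        self.progress: list[str] = []

    def process_issue(self, issue_number: int) -> str:
        self.progress.append("Phase 1/3: Analyzing task...")
        plan = self.planner.plan(issue_number)

        self.progress.append(
            f"Phase 2/3: Generating code ({len(plan.steps)} files)..."
        )
        branch = f"ai-agent/issue-{issue_number}"
        for i, step in enumerate(plan.steps, 1):
            self.dev.apply(step, branch)  # generate + commit one change
            self.progress.append(f"Step {i}/{len(plan.steps)}: {step}")

        self.progress.append("Phase 3/3: Creating pull request...")
        return self.pr.open(branch, issue_number, plan)


# Stub agents standing in for the LLM-backed implementations.
class StubPlanner:
    def plan(self, issue_number):
        return TaskPlan(
            "Add LoyaltyPoints",
            ["modify Customer.cs", "modify CustomerDto.cs"],
        )


class StubDev:
    def apply(self, step, branch):
        pass  # real DevAgent would call the LLM and commit via PyGithub


class StubPR:
    def open(self, branch, issue_number, plan):
        return f"PR for #{issue_number} from {branch}"


orch = WorkflowOrchestrator(StubPlanner(), StubDev(), StubPR())
result = orch.process_issue(138)
assert result == "PR for #138 from ai-agent/issue-138"
assert orch.progress[0] == "Phase 1/3: Analyzing task..."
```

Keeping the orchestrator ignorant of how each agent works (it only sees `plan`, `apply`, `open`) is what makes the error handling and background execution responsibilities listed above easy to bolt on.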
# Weekly Rules Review: 2026-03-09
## Weekly Rules Documentation Review - 2026-03-09

### Overall Health Assessment

The rules documentation is in **good shape overall**. All 11 rule files are relevant, and the vast majority of file paths and code patterns they reference still exist in the codebase. Two files need minor updates, and one has a moderate accuracy issue.

---

### Audit Results

#### AGENTS.md

**Status**: Keep

**Reasoning**: Core agent guide is accurate and well-structured. The rules index table, project setup instructions, pre-commit checks, and general guidance are all current. References to `npm run ts`, `tsgo`, TanStack Router, and Base UI are correct.

---

#### rules/electron-ipc.md

**Status**: Keep

**Reasoning**: High-value, comprehensive guide. All referenced file paths exist (`src/ipc/contracts/core.ts`, `src/ipc/types/*.ts`, `src/ipc/handlers/base.ts`, `src/lib/queryKeys.ts`). The `pendingStreamChatIds` pattern still exists in `useStreamChat.ts`. The `writeSettings` shallow merge warning is still relevant (confirmed by recent fix in commit ef4ec84 preventing stale settings reads).

---

#### rules/local-agent-tools.md

**Status**: Keep

**Reasoning**: Concise and accurate. `modifiesState` flag is actively used across many tool files. `buildAgentToolSet` exists in `tool_definitions.ts`. `handleLocalAgentStream` exists in `local_agent_handler.ts` with `readOnly`/`planModeOnly` guards confirmed. `todo_persistence.ts` exists. `fs.promises` guidance remains relevant for the Electron main process.

---

#### rules/e2e-testing.md

**Status**: Needs Update

**Reasoning**: Mostly accurate and high-value, but has two inaccuracies in helper method references.

**Issues Found**:
- Lines 64-75: References `po.clearChatInput()` and `po.openChatHistoryMenu()` as methods on PageObject directly, but they actually live on the `chatActions` sub-component: `po.chatActions.clearChatInput()` and `po.chatActions.openChatHistoryMenu()`. This contradicts the sub-component pattern documented earlier in the same file (lines 29-43).

**Suggested Changes**:
- Update the Lexical editor section examples to use `po.chatActions.clearChatInput()` and `po.chatActions.openChatHistoryMenu()` instead of `po.clearChatInput()` and `po.openChatHistoryMenu()`

---

#### rules/git-workflow.md

**Status**: Keep

**Reasoning**: Comprehensive and high-value. Contains many hard-won learnings about fork workflows, `gh pr create` edge cases, GitHub API workarounds, and rebase conflict resolution patterns. The `GITHUB_TOKEN` workflow chaining limitation and the `--input` pattern for special characters are particularly valuable. Recent commits (51fc07e - GitHub App tokens) confirm this area is actively evolving.

**Note**: This is the longest rules file (123 lines). Some of the very specific rebase conflict tips (React component wrapper conflicts, refactoring conflicts) may be overly situational, but they're low-cost to keep.

---

#### rules/base-ui-components.md

**Status**: Keep

**Reasoning**: All referenced component files exist (`context-menu.tsx`, `tooltip.tsx`, `accordion.tsx`). `@base-ui/react` is in the project dependencies. The TooltipTrigger `render` prop guidance and the Accordion API differences from Radix are high-value patterns that prevent common mistakes.

---

#### rules/database-drizzle.md

**Status**: Keep

**Reasoning**: Short and high-value. `drizzle/meta/_journal.json` exists. Migration conflict resolution guidance is important for rebase workflows.

---

#### rules/typescript-strict-mode.md

**Status**: Keep

**Reasoning**: All references verified. `tsconfig.app.json` confirms ES2020 target with `lib: ["ES2020"]`. The `tsgo` installation note (Go binary, not npm package) and `response.json()` returning `unknown` are valuable gotchas. Node.js >= 24 requirement is noted.

---

#### rules/openai-reasoning-models.md

**Status**: Needs Update

**Reasoning**: The core concept is still valid - orphaned reasoning parts are still filtered in `src/ipc/utils/ai_messages_utils.ts`. However, the specific function name referenced is outdated.

**Issues Found**:
- References `filterOrphanedReasoningParts()` as a named function, but this logic was refactored into the `cleanMessage()` function (inline filtering within that function). The named export no longer exists.

**Suggested Changes**:
- Update the reference from `filterOrphanedReasoningParts()` to describe the filtering logic within `cleanMessage()` in `src/ipc/utils/ai_messages_utils.ts`

---

#### rules/adding-settings.md

**Status**: Keep

**Reasoning**: All file paths verified: `UserSettingsSchema` in `src/lib/schemas.ts`, `DEFAULT_SETTINGS` in `src/main/settings.ts`, `SETTING_IDS` in `src/lib/settingsSearchIndex.ts`, `AutoApproveSwitch.tsx` as template. Recent commit d6ab829 (add max tool call steps setting) confirms this pattern is actively used.

---

#### rules/chat-message-indicators.md

**Status**: Keep

**Reasoning**: Short (12 lines), low token cost. `dyad-status` tag implementation confirmed in
# feat: production observability — pluggable metrics, cost tracking, budget alerts
Addresses issue #16

## What

Full observability stack for prism-pipe:

### Metrics Engine
- **MetricsManager** — central hub with counter/histogram/gauge, namespace prefixing, event name remapping
- **Scoped emitters** — per-request `ctx.metrics` with merged tags
- **Zero overhead** when `enabled: false` — all methods are no-ops

### Exporters
- **Prometheus** pull (`/metrics` endpoint) with proper text exposition format
- **OTLP** push (HTTP/JSON to OpenTelemetry Collector)
- **StatsD** push (UDP, zero deps)
- **Console** (dev mode)
- Pluggable via `MetricsExporter` interface

### Cost Tracking
- **PricingDB** with built-in prices for OpenAI, Anthropic, Google, Meta models
- Prefix matching + alias resolution (e.g. `gpt-4o-2024-08-06` → `gpt-4o`)
- Flat-rate handling (tokens tracked, cost = $0)
- `X-Prism-Cost-USD` + `X-Prism-Cost-Breakdown` response headers
- Per-key/provider/model aggregation (daily + monthly)

### Budget Alerts
- Configurable daily/monthly thresholds
- Alert at configurable percentages (e.g. 80%, 90%, 100%)
- Hard enforcement: `BudgetError` → 403 when exceeded
- Alert handler system (log, webhook, custom)

### Config
- New `metrics`, `cost`, `budget` sections in config schema (all default to disabled)

### Tests
- 19 new tests: Prometheus output format, namespace remapping, counter accumulation, cost calculation accuracy, flat-rate tracking, budget alert firing, hard limit 403 enforcement
# Weekly Report: Mar 2 -- Mar 9, 2026

## Quick Stats

| Metric | Count |
|--------|-------|
| Merged PRs | 47 |
| Open PRs | 24 (11 draft) |
| Open issues | 61 |
| New issues this week | 33 |
| Issues closed this week | 6 |
| CI runs on main | 30 |

## Highlights

An exceptionally active week with 47 merged PRs. Key themes:

- **Realm migration**: Keycloak master-to-kagenti realm migration landed (#764), with follow-up fixes (#851, #863)
- **Platform hardening**: Podman support (#861), Docker Hub rate limit fixes (#844), PostgreSQL mount fix (#852)
- **CI/CD improvements**: OpenSSF Scorecard 7.1->8+ (#807), stale workflow permissions (#859), HyperShift cluster auto-cleanup (#854)
- **New capabilities**: CLI/TUI for Kagenti (#835), Istio trace export to OTel (#795), RHOAI 3.x integration (#809)
- **Dependency updates**: 8 Dependabot PRs (Docker actions major bumps, CodeQL, Trivy)
- **Authorization epic**: 7 new issues (#787-#794) laying out a comprehensive authorization and policy framework
- **Agent sandbox epic**: New epic (#820) for platform-owned sandboxed agent runtime

## Issue Analysis

### Epics (active initiatives)

| # | Title | Owner | Status |
|---|-------|-------|--------|
| #862 | AgentRuntime CR — CR-triggered injection | @cwiklik | New, design phase |
| #820 | Platform-Owned Sandboxed Agent Runtime | @Ladas | Active, PR #758 in progress |
| #828 | Migrate installer from Ansible/Helm to Operator | @pdettori | New, planning |
| #787 | Authorization, Policies, and Access Management | @mrsabath | New, 6 sub-issues filed |
| #841 | Org-wide orchestration: CI, tests, security | @Ladas | Active, PRs #866-#868 open |
| #767 | Migrate from Keycloak master realm | @mrsabath | Mostly done (#764 merged), close candidate |
| #619 | Tracing observability PoC | @evaline-ju | Active (#795 merged) |
| #621 | OpenSSF Scorecard to 10/10 | @Ladas | Active (#807 merged, now 8+) |
| #523 | Refactor APIs for Compositional Architecture | @pdettori | Active, PR #770 open |
| #518 | OpenShift AI deployment issues | @Ladas | Active (#809 merged) |
| #309 | Full Coverage E2E Testing | @cooktheryan | Ongoing |
| #440 | Multi-Team Deployment on RHOAI | @Ladas | Ongoing |
| #439 | Namespace-Based Token Usage Quotas | @Ladas | Ongoing |
| #614 | Feedback review community meeting | @Ladas | Stale (>30d no update) |
| #623 | Identify Emerging Agentic Deployment Patterns | @kellyaa | Stale |
| #612 | Agent Attestation Framework | @mrsabath | Stale, PR #613 still draft |

### Security-Adjacent Issues

| # | Title | Status | Recommendation |
|---|-------|--------|----------------|
| #822 | Keycloak configmap should be secret | Open | High priority — credentials in configmap |
| #106 | Replace hardcoded secret with SPIRE identity | Open | Long-standing, PR #769 in draft |
| #333 | SPIFFE ID missing checks | Open | Stale, needs triage |
| #267 | Replace hard-coded Client Secret File path | Open | Good first issue, needs assignee |

### Bug Reports

| # | Title | Still affects main? | PR exists? | Recommendation |
|---|-------|---------------------|------------|----------------|
| #856 | Warnings during Kagenti install | Likely yes | No | Triage — install warnings |
| #855 | Can't checkout source on Windows | Yes (skill naming) | PR #869 | In progress |
| #829 | Deleting A2A agent doesn't delete HTTPRoute | Likely yes | No | Needs fix |
| #826 | No way to log out of Kagenti | Yes | No | UX bug, needs fix |
| #825 | Build failures lead to stuck state | Likely yes | No | Needs investigation |
| #738 | UI drops spire label on 2nd deploy | Likely yes | No | Stale (>30d) |
| #486 | Installer issues (Postgres/Phoenix) | Partially (#852 fixed PG) | Partial | Re-verify Phoenix |
| #781 | kagenti-deps fails on OCP 4.19 | Unknown | No | Stale, needs triage |
| #606 | Unsupported Helm version | Unknown | No | Stale, needs triage |
| #655 | Duplicated resources between repos | Unknown | No | Stale, needs triage |

### Issues Closed This Week (good velocity)

| # | Title | Fix PR |
|---|-------|--------|
| #833 | UI login fails after realm migration | #834 |
| #831 | --preload fails when images cached | #832 |
| #819 | Remove deprecated Component CRD refs | #818 |
| #813 | Import env vars references bad URL | #821 |
| #810 | Import env vars silently fails on dup | #821 |
| #804 | OAuth secret job SSL error on OCP | #805 |

### Feature Requests

| # | Title | Priority | Recommendation |
|---|-------|----------|----------------|
| #858 | Use new URL for fetching Agent Cards | Medium | Good first issue |
| #836 | AuthBridge sidecar opt-out controls in UI | Medium | Tied to #862 epic |
| #824 | Help text for UI fields | Low | Good UX improvement |
| #823 | Examples as suggestions in UI | Low | Nice-to-have |
| #817 | Auto-add issues/PRs to project board | Medium | PR #870 open |
| #814 | Mechanism to update agent via K8s | Medium | Operator feature |
| #786 | Register MCP servers from UI | Medium | UI feature |
| #783 | Agent card signing/verifica
Fig has a public GitHub repository with 25,133 stars.
Based on user reviews and social mentions, the most common pain points are: usage monitoring, API costs, token cost, and AI agents.
Based on 75 social mentions analyzed, 0% of sentiment is positive, 100% neutral, and 0% negative.