Figure is the first-of-its-kind AI robotics company bringing a general purpose humanoid to life.
I cannot provide a meaningful summary about "Figure" software based on the provided content. The social mentions you've shared appear to be about various unrelated topics (KDE Plasma releases, political articles, etc.) and don't contain any reviews or discussions about a software tool called "Figure." Additionally, no user reviews were provided. To summarize user sentiment about Figure, I would need actual reviews and social mentions that specifically discuss this software tool.
Mentions (30d)
7
Reviews
0
Platforms
6
Sentiment
0%
0 positive
Industry
machinery
Employees
370
Funding Stage
Series C
Total Funding
$1.9B
KDE Plasma 6.4 released
The KDE community today announced the latest release: **[Plasma 6.4](https://kde.org/announcements/plasma/6/6.4.0/)**. This fresh new release improves on nearly every front, with progress in accessibility, color rendering, tablet support, window management, and more.

Plasma already offered virtual desktops and customizable tiles to help organize your windows and activities, and now it lets you choose a different configuration of tiles on each virtual desktop. The Wayland session brings new accessibility features: you can now move the pointer using your keyboard’s number pad keys, or use a three-finger touchpad pinch gesture to zoom in or out.

Notifications have been refined as well. Plasma’s file transfer notification now shows a speed graph, giving you a more visual idea of how fast the transfer is going and how long it will take to complete. When any application is in full screen mode, Plasma will now enter Do Not Disturb mode and only show urgent notifications; when you exit full screen mode, you’ll see a summary of any notifications you missed. And when an application tries to access the microphone and finds it muted, a notification will pop up.

A new feature in the Application Launcher widget places a green “New!” tag next to newly installed apps, so you can easily find where something you just installed lives in the menu. The Display and Monitor page in System Settings comes with a brand-new HDR calibration wizard, and support for Extended Dynamic Range (a different kind of HDR) and the P010 video color format has been added. System Monitor now supports usage monitoring for AMD and Intel graphics cards, and can even show GPU usage on a per-process basis. Spectacle, the built-in app for taking screenshots and screen recordings, has a much-improved design and more streamlined functionality. The background of the desktop or window now darkens when an authentication dialog shows up, helping you locate and focus on the window asking for your password.
There’s a brand-new Animations page in System Settings that groups all the settings for purely visual animated effects into one place, making them easier to find and configure. Aurorae is a newly added SVG vector-graphics theme engine for KWin window decorations. You can read more about these and many other features in the [Plasma 6.4 announcement](https://kde.org/announcements/plasma/6/6.4.0/) and the [complete changelog](https://kde.org/announcements/changelogs/plasma/6/6.3.5-6.4.0/).
Claude Code's source code appears to have leaked: here's what we know
Anthropic appears to have accidentally revealed the inner workings of one of its most popular and lucrative AI products, the agentic AI harness Claude Code, to the public. A 59.8 MB JavaScript source map file (.map), intended for internal debugging, was inadvertently included in version 2.1.88 of the @anthropic-ai/claude-code package on the public npm registry, pushed live earlier this morning.

By 4:23 am ET, Chaofan Shou (@Fried_rice), an intern at Solayer Labs, had broadcast the discovery on X (formerly Twitter). The post, which included a direct download link to a hosted archive, acted as a digital flare. Within hours, the ~512,000-line TypeScript codebase was mirrored across GitHub and analyzed by thousands of developers. For Anthropic, a company currently riding a meteoric rise with a reported $19 billion annualized revenue run-rate as of March 2026, the leak is more than a security lapse; it is a strategic hemorrhage of intellectual property.

The timing is particularly critical given the commercial velocity of the product. Market data indicates that Claude Code alone has achieved an annualized recurring revenue (ARR) of $2.5 billion, a figure that has more than doubled since the beginning of the year. With enterprise adoption accounting for 80% of its revenue, the leak provides competitors—from established giants to nimble rivals like Cursor—a literal blueprint for how to build a high-agency, reliable, and commercially viable AI agent.

Anthropic confirmed the leak in a spokesperson’s e-mailed statement to VentureBeat, which reads: “Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again.”

The anatomy of agentic memory

The most significant takeaway for competitors lies in how Anthropic solved "context entropy"—the tendency for AI agents to
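Source maps are why shipping a `.map` file can amount to shipping the code itself: a v3 source map may carry a `sourcesContent` array embedding the original source files verbatim alongside the `sources` paths. A minimal sketch of recovering sources from a map (the map below is a synthetic stand-in, not Anthropic's actual file):

```python
import json

# Synthetic source map for illustration only; real maps are generated by
# bundlers like esbuild/webpack with "sourcesContent" enabled.
fake_map = json.dumps({
    "version": 3,
    "sources": ["src/agent.ts"],
    "sourcesContent": ["export const greeting = 'hello';"],
    "mappings": "AAAA",
})

def extract_sources(source_map_json: str) -> dict[str, str]:
    """Recover original files from a source map, when sourcesContent is present."""
    m = json.loads(source_map_json)
    return dict(zip(m.get("sources", []), m.get("sourcesContent") or []))
```

If `sourcesContent` is omitted, a map only leaks file paths and symbol positions; with it included, as apparently happened here, the full original files are recoverable.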
Show HN: Coasts – Containerized Hosts for Agents
Hi HN - We've been working on Coasts (“containerized hosts”) to make it possible to run multiple localhost instances, and multiple docker-compose runtimes, across git worktrees on the same computer. Here’s a demo: https://www.youtube.com/watch?v=yRiySdGQZZA. There are also videos in our docs that give a good conceptual overview: https://coasts.dev/docs/learn-coasts-videos.

Agents can make code changes in different worktrees in isolation, but it's hard for them to test their changes without multiple localhost runtimes that are isolated and scoped to those worktrees as well. You can do it up to a point with port-hacking tricks, but it becomes impractical when you have a complex docker-compose with many services and multiple volumes.

We started playing with Codex and Conductor at the beginning of this year and had to come up with a bunch of hacky workarounds to give the agents access to isolated runtimes. After bastardizing our own docker-compose setup, we came up with Coasts as a way for agents to have their own runtimes without having to change your original docker-compose.

A containerized host (from now on we’ll just say “coast” for short) is a representation of your project's runtime, like a devcontainer but without the IDE stuff—it’s just focused on the runtime. You create a Coastfile at your project root and usually point to your project's docker-compose from there. When you run `coast build` next to the Coastfile you get a build (essentially a Docker image) that can be used to spin up multiple Docker-in-Docker runtimes of your project.

Once you have a coast running, you can then do things like assign it to a worktree with `coast assign dev-1 -w worktree-1`. The coast will then point at the worktree-1 root.

Under the hood, the host project root and any external worktree directories are Docker-bind-mounted into the container at creation time, but the /workspace dir, where we run the services of the coast from, is a separate Linux bind mount that we create inside the running container. When switching worktrees we basically just do `umount -l /workspace`, `mount --bind <path_to_worktree_root> /workspace`, `mount --make-rshared /workspace` inside of the running coast. The rshared flag sets up mount propagation so that when we remount /workspace, the change flows down to the inner Docker daemon's containers.

The main idea is that the agents can continue to work host-side but then run exec commands against a specific coast instance if they need to test runtime changes or access runtime logs. This makes us harness-agnostic and creates interoperability with any agent or agent harness that runs host-side.

Each coast comes with its own set of dynamic ports: you define the ports you wish to expose back to the host machine in the Coastfile. You're also able to "checkout" a coast. When you do that, socat binds the canonical ports of your coast (e.g. web 3000, db 5432) to the host machine. This is useful if you have hard-coded ports in your project or need to do something like test webhooks.

In your Coastfile you point to all the locations on your host machine where you store your worktrees for your project (e.g. ~/.codex/worktrees). When an agent runs `coast lookup` from a host-side worktree directory, it can find the name of the coast instance it is running on, so it can do things like call `coast exec dev-1 make tests`. If your agent needs to do things like test with Playwright, it can do that host-side by using the dynamic port of your frontend.

You can also configure volume topologies, omit services and volumes that your agent doesn't need, and share certain services host-side so you don't add overhead to each coast instance. You can also define strategies for how each service should behave after a worktree assignment change (e.g. none, hot, restart, rebuild). This helps you optimize switching worktrees so you don't have to do a whole docker-compose down/up cycle every time.

We'd love to answer any questions and get your feedback!
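The worktree-switch sequence described above can be sketched as data. The helper below is hypothetical (not part of the `coast` CLI); it just builds the three mount commands the post describes, which a coast would execute inside the running container:

```python
def worktree_switch_commands(worktree_root: str) -> list[list[str]]:
    """Build the mount commands for re-pointing /workspace at a new worktree,
    per the post's description. Function name and shape are illustrative."""
    return [
        # Lazily detach the old bind without disturbing processes holding it open.
        ["umount", "-l", "/workspace"],
        # Bind the new worktree root over /workspace.
        ["mount", "--bind", worktree_root, "/workspace"],
        # Mark the mount rshared so the remount propagates to the inner
        # Docker daemon's containers.
        ["mount", "--make-rshared", "/workspace"],
    ]

# Inside the container these would run via subprocess.run(cmd, check=True);
# they need mount privileges, which is one reason the switch happens inside
# the coast rather than on the host.
```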
Fixing AI failure: Three changes enterprises should make now
Recent reports about AI project failure rates have raised uncomfortable questions for organizations investing heavily in AI. Much of the discussion has focused on technical factors like model accuracy and data quality, but after watching dozens of AI initiatives launch, I’ve noticed that the biggest opportunities for improvement are often cultural, not technical.

Internal projects that struggle tend to share common issues. For example, engineering teams build models that product managers don’t know how to use. Data scientists build prototypes that operations teams struggle to maintain. And AI applications sit unused because the people they were built for weren't involved in deciding what “useful” really meant.

In contrast, organizations that achieve meaningful value with AI have figured out how to create the right kind of collaboration across departments, and established shared accountability for outcomes. The technology matters, but the organizational readiness matters just as much. Here are three practices I’ve observed that address the cultural and organizational barriers that can impede AI success.

Expand AI literacy beyond engineering

When only engineers understand how an AI system works and what it’s capable of, collaboration breaks down. Product managers can't evaluate trade-offs they don't understand. Designers can't create interfaces for capabilities they can't articulate. Analysts can't validate outputs they can't interpret.

The solution isn't making everyone a data scientist. It's helping each role understand how AI applies to their specific work. Product managers need to grasp what kinds of generated content, predictions or recommendations are realistic given available data. Designers need to understand what the AI can actually do so they can design features users will find useful. Analysts need to know which AI outputs require human validation versus which can be trusted. When teams share this working vocabulary, AI stops being something that happens in
View originalchore(deps): update updates-patch-minor
> ℹ️ **Note**
>
> This PR body was truncated due to platform limits.

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| 1password/connect-sync | patch | `1.8.1` → `1.8.2` |
| [alpine/openclaw](https://openclaw.ai) ([source](https://redirect.github.com/openclaw/openclaw)) | minor | `2026.2.22` → `2026.3.8` |
| [cloudflare/cloudflared](https://redirect.github.com/cloudflare/cloudflared) | minor | `2026.2.0` → `2026.3.0` |
| kerberos/agent | patch | `v3.6.12` → `v3.6.15` |
| [searxng/searxng](https://searxng.org) ([source](https://redirect.github.com/searxng/searxng)) | patch | `2026.3.8-a563127a2` → `2026.3.9-d4954a064` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the [Dependency Dashboard](../issues/304) for more information.

---

### Release Notes

<details>
<summary>openclaw/openclaw (alpine/openclaw)</summary>

### [`v2026.3.8`](https://redirect.github.com/openclaw/openclaw/blob/HEAD/CHANGELOG.md#202638)

[Compare Source](https://redirect.github.com/openclaw/openclaw/compare/v2026.3.7...v2026.3.8)

##### Changes

- CLI/backup: add `openclaw backup create` and `openclaw backup verify` for local state archives, including `--only-config`, `--no-include-workspace`, manifest/payload validation, and backup guidance in destructive flows. ([#​40163](https://redirect.github.com/openclaw/openclaw/issues/40163)) thanks [@​shichangs](https://redirect.github.com/shichangs).
- macOS/onboarding: add a remote gateway token field for remote mode, preserve existing non-plaintext `gateway.remote.token` config values until explicitly replaced, and warn when the loaded token shape cannot be used directly from the macOS app. ([#​40187](https://redirect.github.com/openclaw/openclaw/issues/40187), supersedes [#​34614](https://redirect.github.com/openclaw/openclaw/issues/34614)) Thanks [@​cgdusek](https://redirect.github.com/cgdusek).
- Talk mode: add top-level `talk.silenceTimeoutMs` config so Talk waits a configurable amount of silence before auto-sending the current transcript, while keeping each platform's existing default pause window when unset. ([#​39607](https://redirect.github.com/openclaw/openclaw/issues/39607)) Thanks [@​danodoesdesign](https://redirect.github.com/danodoesdesign). Fixes [#​17147](https://redirect.github.com/openclaw/openclaw/issues/17147).
- TUI: infer the active agent from the current workspace when launched inside a configured agent workspace, while preserving explicit `agent:` session targets. ([#​39591](https://redirect.github.com/openclaw/openclaw/issues/39591)) thanks [@​arceus77-7](https://redirect.github.com/arceus77-7).
- Tools/Brave web search: add opt-in `tools.web.search.brave.mode: "llm-context"` so `web_search` can call Brave's LLM Context endpoint and return extracted grounding snippets with source metadata, plus config/docs/test coverage. ([#​33383](https://redirect.github.com/openclaw/openclaw/issues/33383)) Thanks [@​thirumaleshp](https://redirect.github.com/thirumaleshp).
- CLI/install: include the short git commit hash in `openclaw --version` output when metadata is available, and keep installer version checks compatible with the decorated format. ([#​39712](https://redirect.github.com/openclaw/openclaw/issues/39712)) thanks [@​sourman](https://redirect.github.com/sourman).
- CLI/backup: improve archive naming for date sorting, add config-only backup mode, and harden backup planning, publication, and verification edge cases. ([#​40163](https://redirect.github.com/openclaw/openclaw/issues/40163)) Thanks [@​gumadeiras](https://redirect.github.com/gumadeiras).
- ACP/Provenance: add optional ACP ingress provenance metadata and visible receipt injection (`openclaw acp --provenance off|meta|meta+receipt`) so OpenClaw agents can retain and report ACP-origin context with session trace IDs. ([#​40473](https://redirect.github.com/openclaw/openclaw/issues/40473)) thanks [@​mbelinky](https://redirect.github.com/mbelinky).
- Tools/web search: alphabetize provider ordering across runtime selection, onboarding/configure pickers, and config metadata, so provider lists stay neutral and multi-key auto-detect now prefers Grok before Kimi. ([#​40259](https://redirect.github.com/openclaw/openclaw/issues/40259)) thanks [@​kesku](https://redirect.github.com/kesku).
- Docs/Web search: restore $5/month free-credit details, replace defunct "Data for Search"/"Data for AI" plan names with current "Search" plan, and note legacy subscription validity in Brave setup docs. Follows up on [#​26860](https://redirect.github.com/openclaw/openclaw/issues/26860). ([#​40111](https://redirect.github.com/openclaw/openclaw/issues/40111)) Thanks [@​remusao](https://redirect.github.com/remusao).
- Extensions/ACPX tests: move the shared runtime fixture helper from `src/runtime-internals/` to `src/test-utils/` so the test-only he
[Future] Pre-generate and cache common sentences via S3 + CloudFront
## Purpose

For frequently used or pre-defined learning content, pre-generate audio files and serve them via CDN for instant playback.

## Background

- Some sentences in `words.json` are static learning content
- These can be pre-generated once and cached permanently
- CDN delivery is faster and cheaper than Lambda invocation

## Task

1. Create a batch script to generate audio for all sentences in words.json
2. Upload generated WAV files to S3
3. Configure CloudFront distribution for low-latency delivery
4. Update iOS app to check CDN first, fall back to Lambda API

## Implementation Notes

- S3 bucket: `voicevox-audio-cache-{env}`
- File naming: `{hash(text + speakerID)}.wav`
- CloudFront: edge caching with long TTL
- iOS fallback: CDN → Lambda API → error handling

## Cost Estimate

- S3 storage: ~$0.02/GB/month
- CloudFront: ~$0.085/GB transferred
- Total: minimal for typical usage (<$5/month)

## Acceptance Criteria

- [ ] Pre-generated audio available for seed vocabulary
- [ ] iOS app fetches from CDN with Lambda fallback
- [ ] New sentences generated by OpenAI fall back to Lambda correctly

## Priority

Low - consider after user-generated content patterns are understood.
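The `{hash(text + speakerID)}.wav` naming can be sketched as a small helper. The issue doesn't pin a hash function or input encoding, so sha256 and the `:` separator below are assumptions (a separator keeps the text and speaker ID from running together ambiguously):

```python
import hashlib

def audio_cache_key(text: str, speaker_id: int) -> str:
    """Deterministic object key for a pre-generated sentence, following the
    issue's {hash(text + speakerID)}.wav scheme. sha256 and the ":" separator
    are assumptions, not decisions recorded in the issue."""
    digest = hashlib.sha256(f"{text}:{speaker_id}".encode("utf-8")).hexdigest()
    return f"{digest}.wav"
```

Whatever hash is chosen, the batch script, the S3 uploader, and the iOS client must all compute it identically, since the client derives the CDN URL from the same text and speaker ID before falling back to the Lambda API.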
Refactor a-help skill to use RAG-backed retrieval instead of monolithic prompt
## Context

The `/a-help` built-in skill is a 48KB monolithic prompt (`src/anteroom/cli/default_skills/a-help.yaml`) that embeds the entire Anteroom reference — config tables, tool descriptions, CLI commands, environment variables, and more. It's at 95% of the 50KB skill prompt limit and growing with every feature.

This is unsustainable. Every new config field, tool, or CLI command requires squeezing more into an already-bloated prompt that gets injected in full on every `/a-help` invocation.

Additionally, users shouldn't need to remember to type `/a-help` — when someone asks "how do I configure tools?" in a normal conversation, the AI should automatically recognize this as an Anteroom question and invoke the help skill.

## Proposal

Two changes:

1. **Slim `a-help`** from a monolithic 48KB inline reference to a ~10-15KB strategy prompt with a curated docs index and explicit `read_file` fallback. The AI reads the specific docs page it needs on demand instead of receiving everything upfront.
2. **Improve auto-invocation reliability** by broadening the `a-help` skill description so the LLM recognizes natural Anteroom questions as matching the skill. This is skill-specific prompt engineering — no changes to shared infrastructure (`invoke_skill` tool, `<available_skills>` catalog instruction, or system prompt builders).

This is **not** a RAG integration. The skill stays a pure prompt template — no new fields, no retrieval pipeline changes.

### Benefits

- Removes the 50KB ceiling — docs can grow freely
- Reduces token cost per `/a-help` invocation (~10K tokens instead of ~12K)
- Docs pages stay the single source of truth — no more maintaining parallel content in `a-help.yaml` and `docs/`
- New features auto-appear in `/a-help` when their docs pages are written and indexed
- Users get help without needing to know about `/a-help` — natural questions trigger it automatically
- Zero infrastructure change — works today with no code modifications

### Future: Skill-Scoped RAG Retrieval

A separate future issue will explore proper RAG-backed skills with:

- Retrieval scoping by source IDs / corpus
- Non-user-visible storage for built-in docs
- Update semantics for bundled docs
- `rag_enabled` skill field and CLI/web parity

That's new RAG infrastructure, not a refactor of `a-help`.

## Acceptance Criteria

### Slim a-help (Phase 1 — done in PR #850)

- [x] `a-help.yaml` is under 15KB (hard budget — leaves room for growth)
- [x] Strategy section tells the AI to check inline quick-ref first, then `read_file` specific docs pages
- [x] Curated docs index maps question categories to specific file paths
- [x] Inline quick-reference retained for the most common questions (~80% coverage): config layers, tool tiers, approval modes, skill format, CLI commands
- [x] Less common reference (full config field tables, env var lists, detailed architecture) moved to docs-only — accessed via `read_file`
- [x] Links to #843 content (`docs/cli/porting-from-claude-code.md`, `docs/cli/skill-examples.md`) included in the docs index
- [x] Existing skill-loader tests pass — `a-help` still loads as a valid skill

### Auto-invocation (Phase 2)

- [ ] `a-help` skill description broadened to trigger auto-invocation for Anteroom questions (description-only change, no shared code)
- [ ] Natural questions like "how do I configure tools?" trigger `invoke_skill(name="a-help")` without explicit `/a-help`
- [ ] Manual verification: ask Anteroom questions without `/a-help` prefix — AI uses the skill automatically
- [ ] If description-only approach proves insufficient, open a separate issue for changing the generic `<available_skills>` instruction in `repl.py:1431-1437` and `chat.py:495-501` with broader eval coverage

## Related Issues

- #843 — porting docs and skill examples; `a-help` will link to these new pages
- Future: skill-scoped RAG retrieval (not yet filed)
- Future (if needed): tune generic `<available_skills>` matching language in `repl.py` and `chat.py`

## Parity

**Parity exception**: Built-in skill content change only (YAML `description` field). Both interfaces read the same `a-help.yaml` via the shared skill registry. The `<available_skills>` catalog in both `repl.py` and `chat.py` renders the description identically. No changes to shared prompt builders or runtime behavior.

---

## Implementation Plan

### Summary

Slim the `a-help` built-in skill from 48KB to under 15KB by replacing inline reference tables with a curated docs index and `read_file` fallback strategy. Then broaden the skill's `description` field to improve auto-invocation for natural Anteroom questions.

### Phase 1: Slim a-help (done — PR #850)

| File | Change |
|------|--------|
| `src/anteroom/cli/default_skills/a-help.yaml` | Restructure: keep strategy + high-frequency quick-ref + curated docs index; remove low-frequency inline tables |
| `tests/unit/test_skills.py` | Size budget assertion (< 15KB) |

### Phase 2: Auto-invocation (skill-specific, no shared code c
upstream(agents): port 5 conflicting commits (P0,P1) — v2026.3.7→v2026.3.8
## Task

Apply the changes from the following 5 upstream commits to this fork semantically. These commits cannot be cherry-picked directly (they conflict); understand the intent of each change and apply an equivalent change by hand.

### Upstream version range

- **Source**: openclaw/openclaw v2026.3.7 → v2026.3.8
- **Module**: `agents`
- **Priority**: P0,P1

### Commits to port

#### Commit 1: `e8775cda932f` (P1)

**Description**: fix(agents): re-expose configured tools under restrictive profiles

**Files involved**: `src/agents/pi-tools.policy.test.ts,src/agents/pi-tools.policy.ts,src/plugins/config-state.test.ts,src/plugins/config-state.ts`

<details>
<summary>View upstream diff</summary>

```diff
diff --git a/src/agents/pi-tools.policy.test.ts b/src/agents/pi-tools.policy.test.ts
index 4b7a16b4d..0cdc572c4 100644
--- a/src/agents/pi-tools.policy.test.ts
+++ b/src/agents/pi-tools.policy.test.ts
@@ -3,6 +3,7 @@ import type { OpenClawConfig } from "../config/config.js";
 import {
   filterToolsByPolicy,
   isToolAllowedByPolicyName,
+  resolveEffectiveToolPolicy,
   resolveSubagentToolPolicy,
 } from "./pi-tools.policy.js";
 import { createStubTool } from "./test-helpers/pi-tool-stubs.js";
@@ -176,3 +177,59 @@ describe("resolveSubagentToolPolicy depth awareness", () => {
     expect(isToolAllowedByPolicyName("sessions_spawn", policy)).toBe(false);
   });
 });
+
+describe("resolveEffectiveToolPolicy", () => {
+  it("implicitly re-exposes exec and process when tools.exec is configured", () => {
+    const cfg = {
+      tools: {
+        profile: "messaging",
+        exec: { host: "sandbox" },
+      },
+    } as OpenClawConfig;
+    const result = resolveEffectiveToolPolicy({ config: cfg });
+    expect(result.profileAlsoAllow).toEqual(["exec", "process"]);
+  });
+
+  it("implicitly re-exposes read, write, and edit when tools.fs is configured", () => {
+    const cfg = {
+      tools: {
+        profile: "messaging",
+        fs: { workspaceOnly: false },
+      },
+    } as OpenClawConfig;
+    const result = resolveEffectiveToolPolicy({ config: cfg });
+    expect(result.profileAlsoAllow).toEqual(["read", "write", "edit"]);
+  });
+
+  it("merges explicit alsoAllow with implicit tool-section exposure", () => {
+    const cfg = {
+      tools: {
+        profile: "messaging",
+        alsoAllow: ["web_search"],
+        exec: { host: "sandbox" },
+      },
+    } as OpenClawConfig;
+    const result = resolveEffectiveToolPolicy({ config: cfg });
+    expect(result.profileAlsoAllow).toEqual(["web_search", "exec", "process"]);
+  });
+
+  it("uses agent tool sections when resolving implicit exposure", () => {
+    const cfg = {
+      tools: {
+        profile: "messaging",
+      },
+      agents: {
+        list: [
+          {
+            id: "coder",
+            tools: {
+              fs: { workspaceOnly: true },
+            },
+          },
+        ],
+      },
+    } as OpenClawConfig;
+    const result = resolveEffectiveToolPolicy({ config: cfg, agentId: "coder" });
+    expect(result.profileAlsoAllow).toEqual(["read", "write", "edit"]);
+  });
+});
diff --git a/src/agents/pi-tools.policy.ts b/src/agents/pi-tools.policy.ts
index db9a36755..61d037dd9 100644
--- a/src/agents/pi-tools.policy.ts
+++ b/src/agents/pi-tools.policy.ts
@@ -2,6 +2,7 @@ import { getChannelDock } from "../channels/dock.js";
 import { DEFAULT_SUBAGENT_MAX_SPAWN_DEPTH } from "../config/agent-limits.js";
 import type { OpenClawConfig } from "../config/config.js";
 import { resolveChannelGroupToolsPolicy } from "../config/group-policy.js";
+import type { AgentToolsConfig } from "../config/types.tools.js";
 import { normalizeAgentId } from "../routing/session-key.js";
 import { resolveThreadParentSessionKey } from "../sessions/session-key-utils.js";
 import { normalizeMessageChannel } from "../utils/message-channel.js";
@@ -196,6 +197,37 @@ function resolveProviderToolPolicy(params: {
   return undefined;
 }
 
+function resolveExplicitProfileAlsoAllow(tools?: OpenClawConfig["tools"]): string[] | undefined {
+  return Array.isArray(tools?.alsoAllow) ? tools.alsoAllow : undefined;
+}
+
+function hasExplicitToolSection(section: unknown): boolean {
+  return section !== undefined && section !== null;
+}
+
+function resolveImplicitProfileAlsoAllow(params: {
+  globalTools?: OpenClawConfig["tools"];
+  agentTools?: AgentToolsConfig;
+}): string[] | undefined {
+  const implicit = new Set<string>();
+  if (
+    hasExplicitToolSection(params.agentTools?.exec) ||
+    hasExplicitToolSection(params.globalTools?.exec)
+  ) {
+    implicit.add("exec");
+    implicit.add("process");
+  }
+  if (
+    hasExplicitToolSection(params.agentTools?.fs) ||
+    hasExplicitToolSection(params.globalTools?.fs)
+  ) {
+    implicit.add("read");
+    implicit.add("write");
+    implicit.add("edit");
+  }
+  return implicit.size > 0 ? Array.from(implicit) : undefined;
+}
+
 export function resolveEffectiveToolPolicy(params: {
   config?: OpenClawConfig;
   sessionKey?: string;
@@ -226,6 +258,15 @@ export function resolveEffectiveToolPolicy(params: {
     modelProvider: params.modelProvider,
     modelId: params.modelId,
   });
+  const explicitProfileAl
```

</details>
Six Days of War, 10 Rationales
On the third day of the war in Iran, Defense Secretary Pete Hegseth [called](https://www.war.gov/News/Transcripts/Transcript/Article/4418959/secretary-of-war-pete-hegseth-and-chairman-of-the-joint-chiefs-of-staff-gen-dan/) Operation Epic Fury the “most-precise aerial operation in history.” A difficult claim to fact-check. More difficult still has been parsing statements from the White House and the Pentagon to figure out, with any exactitude, why we are at war in the first place. So far, the Trump administration has offered at least 10 separate rationales in just six days. Let’s start shortly after the first missiles launched early Saturday morning. In an eight-minute [address](https://www.youtube.com/watch?v=o-E7DIctrzo) posted soon after to his social-media platform, President Trump outlined a few explanations. The reason for war, he said, is to eliminate “imminent threats” from the Iranian regime—threats that “directly endanger the United States, our troops, our bases overseas, and our allies throughout the world.” (Let’s call this Rationale No. 1: the imminent threat.) Also, he said, the objective is to ensure that the regime “can never have a nuclear weapon.” (Rationale No. 2: no nukes.) Also, he added, the objective is to “ensure that the region’s terrorist proxies can no longer destabilize the region or the world.” (Rationale No. 3: halt the militias.) These goals are not incompatible, of course, and all involve degrading Iran’s ability to project force beyond its borders. But just as he appeared to be wrapping up, Trump floated a major new reason: laying the groundwork for the Iranian people to “seize control of your destiny, and to unleash the prosperous and glorious future that is close within your reach.” In other words, “Take over your government.” (Rationale No. 4: regime change.) 
A couple hours later, Trump said his attention was steadfastly on this last explanation—securing the liberty of the Iranian people from the country’s 47-year theocratic regime. “All I want is freedom for the people,” he told [The Washington Post](https://www.washingtonpost.com/national-security/2026/02/28/trump-iran-war-regime-change-freedom/) just after 4 a.m. About half an hour later, another justification was evidently on the commander in chief’s mind: “Iran tried to interfere in 2020, 2024 elections to stop Trump, and now faces renewed war with United States,” he [wrote](https://truthsocial.com/@realDonaldTrump/posts/116147572522796874) on Truth Social. The post included a link to a story in a right-wing media outlet purporting to show Iranian election interference. (That seemed enough to constitute Rationale No. 5: election interference, before the sun had even risen over Mar-a-Lago.) Later on Saturday, Trump revisited his second and third rationales for the strikes in an interview with [Axios](https://www.axios.com/2026/02/28/trump-iran-war-israel-off-ramps). He cited the failure of negotiations (led by his son-in-law Jared Kushner and the real-estate developer turned Middle East envoy Steve Witkoff) to reach a deal to end Iran’s nuclear ambitions. And he also spoke about his realization, while writing his speech the day before the bombing started, that Iran had a history of violence in the region: “I saw that every month they did something bad, blew something up or killed someone.” By Saturday afternoon, though, the president was ready to unveil his most ambitious rationale yet. As reports filtered in about the assassination of Supreme Leader Ali Khamenei, Trump [took to social media](https://truthsocial.com/@realDonaldTrump/posts/116150413051904167) again to declare that the operation would continue “as long as necessary to achieve our objective of PEACE THROUGHOUT THE MIDDLE EAST AND, INDEED, THE WORLD!” (Rationale No. 
6: world peace, an appropriately grand finale for launch day.) On Sunday morning, Trump was back to Rationale No. 2, preventing Iran from developing nuclear weapons—with little time to spare, apparently. “If we didn’t do that, they would have had a nuclear weapon within two weeks,” the president [told](https://x.com/JacquiHeinrich/status/2028127909093798201) Fox News, citing a time frame he had not included in his initial remarks. The same morning, the president [told](https://www.nbcnews.com/politics/donald-trump/trump-casualties-us-military-operation-iran-khamenei-rcna261212) NBC News that the reason for the launch was simple: “They weren’t willing to say they will not have a nuclear weapon.” (For context: The White House had [announced](https://www.whitehouse.gov/articles/2025/06/irans-nuclear-facilities-have-been-obliterated-and-suggestions-otherwise-are-fake-news/) last June that Iranian nuclear facilities had been obliterated and “suggestions otherwise are fake news.” An analysis of satellite images by The New York Times last month [showed](https://www.nytimes.com/2026/02/06/world/middleeast/iran-missile-nuclear-repairs.html) repairs at key missile sites began shortly after those
Thanks to Trump's Iran War, US LNG Giants Could See $20 Billion in Monthly Windfall Profits
 From [declaring](https://www.commondreams.org/news/trump-energy-emergency-threat) an energy emergency and [ditching](https://www.commondreams.org/news/trump-withdraws-global-treaties) global climate initiatives to abducting the Venezuelan leader to [seize control](https://www.commondreams.org/news/venezuela-oil-sale-trump-donor) of the country's nationalized oil industry, President Donald Trump has taken various actions to serve his fossil fuel [donors](https://www.commondreams.org/news/big-oil-donations-trump) since returning to power last year. Now, his and Israel's war on Iran could soon lead to US liquefied natural gas giants pocketing tens of billions in windfall profits. "The Persian Gulf has some of the world's largest oil and gas producers," Oil Change International research co-director Lorne Stockman [explained](https://oilchange.org/blogs/trumps-war-on-iran-as-people-are-killed-big-oils-windfall-will-deepen-our-energy-affordability-crisis/) in a Tuesday blog post, "and a large proportion of that production, around 20% of global petroleum, must pass through a relatively narrow corridor controlled by Iran to reach global markets: the Strait of Hormuz," between the Persian Gulf and the Gulf of Oman. Stockman—whose advocacy group works to expose the costs of fossil fuels and facilitate a just transition to clean energy—noted that "crude oil, refined petroleum products, and liquefied natural gas (LNG) traverse the strait in [vast quantities every day](https://www.csis.org/analysis/how-war-iran-could-disrupt-energy-exports-strait-hormuz). But not since Saturday. With missiles, fighter jets, and drones circling, shipping has ground to a halt, and Iran [reportedly](https://www.aljazeera.com/news/2026/3/2/iran-says-will-attack-any-ship-trying-to-pass-through-strait-of-hormuz) threatened to close the strait by force on Monday." 
> As the conflict in the Persian Gulf continues, fossil fuel companies are preparing for record-breaking profits while billions of people face soaring energy bills and "energy poverty." We’re tired of a world where our energy system fuels war and destroys our climate. oilchange.org/blogs/trumps...
> — 350.org ([@350.org](https://bsky.app/profile/did:plc:dpuhovqi4tl6gdjpnqj5peay?ref_src=embed)) [March 4, 2026 at 4:43 AM](https://bsky.app/profile/did:plc:dpuhovqi4tl6gdjpnqj5peay/post/3mg7yn5ltxc2h?ref_src=embed)

Based on ship-tracking data from MarineTraffic, *Reuters* [estimated](https://www.reuters.com/business/energy/hormuz-shutdown-worsens-after-us-hits-iranian-warship-tankers-stranded-fifth-day-2026-03-04/) Wednesday that "at least 200 ships, including oil and liquefied natural gas tankers as well as cargo ships, remained at anchor in open waters off the coast of major Gulf producers including Iraq, Saudi Arabia, and Qatar," and "hundreds of other vessels remained outside Hormuz unable to reach ports." Stockman warned that "depending on how long the violence and its atrocious human toll continues—Trump said it [may take weeks](https://www.nytimes.com/2026/03/01/us/politics/trump-iran-war-interview.html) until his undefined objectives are achieved—this will have huge implications for energy markets. Oil and gas companies may achieve huge windfall profits in a year that previously looked far less lucrative for them, and billions of people could see their energy bills soar." 
Since Trump and Israeli Prime Minister Benjamin Netanyahu launched "Operation Epic Fury" on Saturday, over 1,000 people had been killed as of Wednesday, [according to](https://www.dropsitenews.com/p/iran-death-toll-1000-trump-kurds-iran-overthrow-lebanon-hezbollah-israel) the Iranian government, and oil prices have [surged](https://www.commondreams.org/news/iran-war-gas-prices)—highlighting how, as Greenpeace International executive director Mads Christensen [put it](https://www.commondreams.org/news/trump-iran-war-oil) earlier this week, "as long as our world runs on oil and gas, our peace, security and our pockets will always be at the mercy of geopolitics." Qatar exports about 20% of the global LNG supply, second only to the United States. All of that LNG goes through the Strait of Hormuz. An Iranian drone attack on Monday targeted Qatari LNG facilities, leading state-owned QatarEnergy to declare force majeure on exports. Two unnamed sources [told](https://www.reuters.com/business/energy/qatarenergy-declares-force-majeure-lng-shipments-2026-03-04/) *Reuters* that QatarEnergy "will fully shut down gas liquefaction on Wednesday," and "it may take at least a month to return to normal production volumes." The Qatari shutdown is expected to boost the US LNG industry, wh
Arizona’s water is drying up. That’s not stopping the data center rush.
It’s no secret that Arizona is worried about its water. The [Colorado River is drying up](https://grist.org/politics/colorado-river-deal-trump-burgum/), [in part due to climate change](https://www.youtube.com/watch?v=AzpYHXgfbbI), and groundwater aquifers are running dry. Some of the state’s biggest industries are suffering as a result: Many farmers have been forced to rip up their cotton and alfalfa fields, and some home developers have been blocked from building new subdivisions. A state with hydrologic woes of this magnitude would seem an unlikely place to attract new factory-scale industries, which often have substantial water appetites themselves, but over the past year that’s exactly what’s happened. So-called hyperscaler tech companies like Microsoft and Meta have swarmed in to build the data centers fuelling the artificial-intelligence boom, and the Taiwan Semiconductor Manufacturing Company has spent billions of dollars on a factory complex outside Phoenix. This [rapid](https://www.reuters.com/sustainability/climate-energy/desert-storm-can-data-centres-slake-their-insatiable-thirst-water--ecmii-2025-12-17/) [development](https://fortune.com/2024/04/08/tsmc-water-usage-phoenix-chips-act-commerce-department-semiconductor-manufacturing/) has [triggered](https://www.azcentral.com/story/money/business/tech/2024/11/04/phoenix-provides-water-to-a-new-chipmaker-any-cause-for-worry/75917812007/?gnt-cfr=1&gca-cat=p&gca-uir=true&gca-epti=z1104xxe1104xxv004275d--47--b--47--&gca-ft=198&gca-ds=sophi) [fears](https://www.apmresearchlab.org/10x/data-centers-resource) that the industry will suck up the finite water supplies available to residents of Phoenix and Tucson. So far, however, these predictions have not come true. Even though Arizona will soon be home to nearly 200 data centers and chip factories, these facilities have not yet caused a major bump in the state’s water consumption. 
The companies’ precise effects on water supply are hard to discern due to their own secrecy about their water usage, but the aggregate picture suggests they have found ways to minimize their impact, whether through new cooling technologies or by recycling water on-site. And despite [local](https://news.azpm.org/s/102502-marana-data-center-vote-sparks-backlash-three-residents-launch-council-runs/) [backlash](https://www.theguardian.com/us-news/2025/oct/15/tucson-arizona-ai-data-center-project-blue), water experts and many local officials appear to have largely made their peace with the industry’s arrival — and with the Phoenix region’s emergence as one of the nation’s largest AI infrastructure clusters. “There’s not a hair-on-fire context right now,” said Sarah Porter, a fellow at Arizona State University’s Kyl Center for Water Policy. “We just don’t see it.” Arizona is home to [more than 150 data centers](https://www.datacentermap.com/usa/arizona/), according to an analysis from the Data Center Map, an industry resource. Each of these buildings contains thousands of servers that need to stay cool in the desert heat as they process computational queries. This cooling can be done with air conditioners, but it’s more efficient to surround them with pipes full of cold water, or to use evaporating mists to draw out hot air. Cooling systems like these *can* consume a huge amount of water, but [no one knows](https://www.azcentral.com/story/money/business/tech/2026/02/04/arizona-data-centers-water-power-use/88054536007/?gnt-cfr=1&gca-cat=p&gca-uir=true&gca-epti=z119875p003550c003550e1185xxv119875d--55--b--55--&gca-ft=206&gca-ds=sophi) how much they *are* consuming. Independent estimates suggest that an average data center can use anywhere from [50,000](https://www.eenews.net/articles/states-push-to-end-secrecy-over-data-center-water-use/) to [5 million](https://www.eesi.org/articles/view/data-centers-and-water-consumption) gallons of water per day. 
An [analysis](https://www.ceres.org/resources/reports/drained-by-data-the-cumulative-impact-of-data-centers-on-regional-water-stress) from the sustainability advocacy organization Ceres estimated that the data centers active in Phoenix last summer used around 385 million gallons of water per year. Ceres projected that the metro area’s data center water consumption could grow tenfold to around 3.8 billion gallons per year. But even that worst-case scenario would make data center usage equivalent to just around 1 percent of total [residential water consumption](https://www.azwater.gov/adwr-data-dashboards) in the Phoenix area — and less than half a percent of the region’s total 2024 water usage. (A comparison with agricultural usage is even more stark: Agriculture uses [more than 70 percent](https://environment.arizona.edu/news/where-does-our-water-come) of the state’s water, and still accounts for around 35 percent of water consumption even in the Phoenix metro, the state’s most urban region.) Furthermore, there’s some evidence that Ceres’ estimates may be too high. State data show that
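As a quick back-of-envelope check on the Ceres figures above (the inputs are the numbers the article cites; the implied residential total in the last line is derived from the "around 1 percent" claim, not reported directly):

```python
# Sanity-check the Ceres estimates quoted in the article.
current_use = 385e6                 # gallons/year, Phoenix data centers last summer
projected_use = current_use * 10    # Ceres' tenfold growth projection

print(f"projected: {projected_use / 1e9:.2f} billion gallons/year")  # ~3.85

# If ~3.8B gallons/year is about 1 percent of residential use, the
# implied residential total for the Phoenix area is roughly:
implied_residential = projected_use / 0.01
print(f"implied residential use: ~{implied_residential / 1e9:.0f} billion gallons/year")
```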
Is Flock just a poor, US-centric copy of the globally active Genetec?
I've read all of Genetec's [customer stories](https://www.genetec.com/customer-stories/search) (the PDFs), and although I recognize that these are, at least in part, Genetec marketing material, they do contain insightful information about the implementation of surveillance systems from the perspective of a diverse palette of organisations. This palette primarily consists of universities, school districts, ports, critical infrastructure providers, business-to-business companies, health care providers, real estate developers, gambling companies, (sports) venues, cities, public transportation services, airports, retailers, and foremost police departments. What most have in common is the increasing scale at which they operate, which sets in motion a search for IT solutions able to scale alongside organisational growth in a cost-effective way. This entails the centralisation of (previously "siloed") systems and departments, the automation of (previously time-consuming, or outright unmanageable) tasks, and proactive 'Data-Driven Decision-Making (DDDM)', unlocking operational efficiencies and granular control over vast operations. This is where Genetec introduces itself, primarily through [its partners](https://www.genetec.com/partners/partner-integration-hub?keywords) (including hardware manufacturers, software solutions companies, system integrators, consultancy firms, etc.), often during an organisation's call for tender or 'Request For Proposal (RFP)'; alternatively, it is recommended by other Genetec customers (including by law enforcement to "community" partners, primarily businesses). The most recognizable partners in this consortium-like construction include Axis Communications, Sony Corporation, Hanwha Vision, Bosch, NVIDIA, ASSA ABLOY, Intel, Pelco, Canon, Dell Technologies, HID Global, FLIR Systems, Global Parking Solutions, and Seagate Technology. 
Alongside the Genetec-certified [hardware](https://www.genetec.com/supported-device-list) and software integrations (with partners' offerings actively co-marketed to customers), Genetec also allows for custom integrations through its 'Software Development Kits (SDKs)' and 'Application Programming Interfaces (APIs)'. So instead of single-vendor lock-in, organisations are effectively subject to multi-vendor lock-in (unless spending resources on custom integrations is more cost-effective). Genetec's primary focus lies on its extensive suite of (specialized) software applications, deployed on an on-site server; on multiple (distributed) on-site servers (possibly federated, allowing a centralized view over multiple implementations); in the "cloud" (i.e. someone else's server) as an '... as a Service' solution; or a combination of the aforementioned (providing "cloud" flexibility). When using multiple applications, Genetec's 'Security Center' can unify them all, meaning operators aren't required to switch between applications. And considering applications aren't limited to just camera surveillance, but also include intrusion detection (intrusion panels, line-crossing cameras, panic switches, etc.), access control (electronic locks; access control readers (pin, card, tag, mobile, and/or biometric); door control modules; etc.), communication (intercoms, 'Public Address (PA)' systems, emergency stations, etc.), and ALPR (ALPR boom gates, gateless setups (license plate as a credential), enforcement vehicles, etc.), this allows for centralization of these systems (unless prohibited by strict IT policies). 
All of these technologies combined primarily serve to save on resources, protect assets, prevent losses, ensure operational continuity, and resolve disputes over parking tickets, insurance claims (as a result of damages suffered or caused on premises, potentially increasing premiums), or even legal allegations ("increase the number of early guilty pleas"); all of this, of course, under the guise of safety. Whether it be organisations individually or "community" initiatives (often spearheaded by businesses, while citizens are left to follow), most circle back to the previously outlined, financially grounded motives. Resources include staff, whose function might become more versatile or entirely obsolete (through efficiency gains), and might depend on events reported by analytics (growing queues, areas requiring clean-up, crowd bottlenecks, etc.); meaning they too are subject to this system, from onboarding ("minimise the time that elapses before they make a productive contribution") and throughout their career ("employee theft", "employee attendance", "agents' activities, collectively or individually", etc.). Previously, some organisations utilized analog cameras (each with its own recorder), in which a looping tape would periodically overwrite previous recordings, minimizing retention periods physically; this possibly caused quality degradation, sometimes to such a degree that footage could no longer serve as legal evidence (which, too, is privacy-friendly).
February 26, 2026
It appears the State of the Union was the marker for the White House to launch directly into campaign mode. Much of that mode centers on trying to defang Trump’s weaknesses with attacks on Democrats. And since the 2024 campaign brought us the insistence from the Trump campaign, including Trump and then–vice presidential candidate J.D. Vance, that “they’re eating the dogs…they’re eating the cats,” it’s reasonable to assume the next several months are going to be a morass of lies and disinformation. Trump announced in his State of the Union that he was declaring a “war on fraud to be led by our great Vice President J.D. Vance” and said that “members of the Somali community have pillaged an estimated $19 billion from the American taxpayer…in actuality, the number is much higher than that. And California, Massachusetts, Maine and many other states are even worse.” He added: “And we’re able to find enough of that fraud, we will actually have a balanced budget overnight.” This, in part, seemed designed to reverse victim and offender by suggesting that rather than Trump’s being the perpetrator of extraordinary frauds and corruption in cryptocurrency, for example—he was, after all, found guilty on 34 charges of business fraud in 2024—immigrants are to blame for fraud. As Kirsten Swanson and Ryan Raiche of KSTP in Minneapolis explain, members of Minnesota’s Somali community, 95% of whom are U.S. citizens, pay about $67 million in taxes annually and have an estimated $8 billion impact on the community. While some have indeed been charged and convicted of fraud over the past five years, the accusation of $19 billion in fraud is just a number thrown out without evidence by “then-Assistant U.S. Attorney Joe Thompson,” who estimated in December 2025 that “‘half or more’ of $18 billion in Medicaid reimbursements from 14 high-risk programs could be fraudulent.” Yesterday Vance and Dr. 
Mehmet Oz, who oversees Medicaid, the federal healthcare program for low-income households, announced the administration is withholding $259 million in Medicaid funds from Minnesota, claiming the state has not done enough to protect taxpayers from fraud. It is illegal for the executive branch to withhold funds appropriated by Congress, and a federal judge has blocked a similar freeze on $10 billion in childcare funding for Illinois, California, Colorado, Minnesota, and New York while the case is in court. Nonetheless, Minnesota representative Tom Emmer, who is part of the Republican leadership in the House, approved the attack on his constituents, posting: “The war on fraud has begun. And Somali fraudsters in my home state are about to find out.” Minnesota governor Tim Walz, a Democrat, posted: “This has nothing to do with fraud…. This is a campaign of retribution. Trump is weaponizing the entirety of the federal government to punish blue states like Minnesota. These cuts will be devastating for veterans, families with young kids, folks with disabilities, and working people across our state.” While Walz is almost certainly correct that this is a campaign of retribution, the administration is also salting into the media an explanation for the sudden depletion of the trust funds that are used to pay Medicare and Social Security. In March 2025, the nonpartisan Congressional Budget Office (CBO) estimated the trust fund that pays for Medicare A would be solvent until 2052. On Monday, it updated its projections, saying the funds will run out in 2040. The CBO also expects the Social Security trust fund to run dry a year earlier than previously expected, by the end of 2031. 
As Nick Lichtenberg of *Fortune* wrote, policy changes by the Republicans under Trump, especially the tax cuts in the budget reconciliation bill the Republicans call the “One Big Beautiful Bill Act” have “drastically shortened the financial life spans of both Medicare and Social Security, accelerating their paths toward insolvency.” Between Trump’s statement that if the administration finds enough fraud it can balance the budget overnight, and the subsequent insistence that cuts to Medicaid are necessary because of that fraud, it sure looks like the administration is trying to distract attention from the CBO’s report that Trump’s tax cuts have cut the solvency of Social Security and Medicare by more than a decade. Instead, they are hoping to convince voters that immigrants are at fault. Similarly, in an oldie but a goodie, Republicans today hauled former secretary of state Hillary Clinton before the House Oversight and Government Reform Committee to testify by video about her knowledge of the investigations into sex traffickers Jeffrey Epstein and Ghislaine Maxwell. In a scathing opening statement, Clinton noted that while committee chair James Comer (R-KY) subpoenaed eight law enforcement officials who were directly involved in that investigation, only one appeared before the committee. The rest simply submitted brief statements saying they had no information. Clinton al
Playwright MCP vs OpenBrowser MCP: would you expect this token gap?
Spent the last few days going deep on MCP server options for browser automation, and the token usage difference between Playwright MCP and OpenBrowser is something my team genuinely did not see coming until we started measuring it ourselves.

The official Microsoft Playwright MCP server earns its reputation and I want to be clear about that, but the part nobody warned us about was that every single interaction ships a full accessibility tree back to the model no matter how small the action is. We opened a moderately complex page during one of our test runs and hit five-figure token counts in a single step without even trying. Scale that up to a ten-step agent workflow and your API costs are already bleeding before your agent has done anything meaningful.

OpenBrowser takes a completely different approach where the model writes Python that runs inside a persistent browser session and only fetches the exact data it needs at that moment rather than pushing the full DOM back
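To make the gap concrete, here is a rough sketch of the cost difference; the payload strings and the ~4 characters/token heuristic are illustrative assumptions, not measurements from either tool:

```python
# Back-of-envelope comparison: shipping a full accessibility-tree snapshot to
# the model on every step vs. returning only the targeted value. The payloads
# below are made up for illustration; real snapshots vary widely by page.

def estimate_tokens(text: str) -> int:
    """Crude token estimate using the common ~4 characters/token rule."""
    return max(1, len(text) // 4)

# Hypothetical payloads for a single "read the cart total" step.
full_snapshot = "- region 'Cart'\n" + "\n".join(
    f"  - listitem 'Product {i}' text '$9.99' button 'Remove'"
    for i in range(500)  # a moderately complex page
)
targeted_result = "$4,995.00"  # what a scoped query might return instead

per_step_full = estimate_tokens(full_snapshot)
per_step_targeted = estimate_tokens(targeted_result)
steps = 10  # the ten-step agent workflow mentioned above

print(f"full snapshot per step:  ~{per_step_full:,} tokens")
print(f"targeted query per step: ~{per_step_targeted:,} tokens")
print(f"gap over {steps} steps: ~{(per_step_full - per_step_targeted) * steps:,} tokens")
```

Even with these toy numbers the per-step gap is several orders of magnitude, which is why the difference only shows up once you start measuring multi-step workflows.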
The real state of Trump’s America: Social misery, dictatorship, war—and an upsurge of class struggle
https://redlib.catsarch.com/r/stupidpol/comments/1rdlawo/the_real_state_of_trumps_america_social_misery/ > US President Donald Trump will deliver the annual State of the Union address Tuesday night before a joint session of Congress. Were it honestly titled, the speech would be called “The State of the Rich”—a demagogic exercise in self-congratulation, burying the catastrophic reality of American life beneath a mountain of lies. > > The speech is being delivered in the year that the United States is to celebrate the 250th anniversary of the Declaration of Independence. On July 4, 2026, the nation will commemorate the issuing of the document that proclaimed “all men are created equal” and asserted the inalienable rights of “Life, Liberty and the pursuit of Happiness.” The founders fought a revolution against King George III. > > Two hundred and fifty years later, Trump’s America offers its own answer to the Declaration’s ideals: a government of the billionaires, by the billionaires and for the billionaires, presiding over armed raids on immigrant communities, a network of concentration camps spreading across the country, children torn from their parents, and the gunning down of unarmed civilians by federal agents on the streets of Minneapolis. > > Hanging over the proceedings is the stench of Jeffrey Epstein. The Justice Department’s release of nearly 3 million pages has shaken the upper tiers of corporate America, exposing connections of billionaires, chief executives and senior political figures to a convicted child sex trafficker. The files, as WSWS Chairman David North recently wrote, “reveal the social physiognomy of a degenerate ruling class and oligarchical society in an advanced state of decomposition. Their offenses are rank; they smell to heaven.” > > Trump himself is deeply implicated. Attorney General Pam Bondi responded to questions about the ongoing cover-up by boasting that the Dow had hit 50,000. 
FBI documents reveal that billionaire Leslie Wexner was identified as a co-conspirator in 2019, yet no charges have been brought. Emails show Epstein comparing the girls he trafficked to “shrimp—you throw away the head and keep the body.” This is the ruling class that governs America. > > Consider the facts that will be buried beneath Trump’s demagogy. The national debt stands at $38.4 trillion, or $113,000 for every person in the country, and is increasing at $8 billion per day. The Congressional Budget Office (CBO) projects the deficit at $1.9 trillion this year, growing to $3.1 trillion by 2036, with debt reaching 120 percent of GDP, which is higher than at any point in the nation’s history. > > The dollar fell more than 9 percent in 2025, its worst annual performance since 2017, and remains in bear market territory as BRICS nations increase local currency trade settlements from 35 to 50 percent. Cumulative price increases since 2020 have been devastating: food up more than 25 percent, housing costs still climbing at 3 percent annually, natural gas up nearly 10 percent. > > The level of social inequality is historically unprecedented. The top 1 percent of households now control 31.7 percent of all wealth, the highest share since the Fed began tracking such data in 1989. They hold $55 trillion, nearly as much as the bottom 90 percent combined. The combined wealth of 935 American billionaires surged to $8.1 trillion at the end of 2025. > > ... > > But the most sinister dimension of the real state of the union is the deployment of armed federal agents against the population. In December 2025, the administration launched Operation Metro Surge, deploying 3,000 masked ICE and Border Patrol agents into the Twin Cities, Minnesota, in what the DHS itself called the largest immigration operation in American history. On January 7, ICE shot and killed Renée Good, a 37-year-old American citizen and mother of three. 
On January 24, Border Patrol agents shot and killed Alex Pretti, a 37-year-old ICU nurse, firing as many as 10 rounds into him as he lay on the ground. > > The administration has built a network of detention camps across the country, holding nearly 66,000 people—a 75 percent increase, the highest level in history. Congress has allocated $45 billion for still more facilities, with capacity projected at 135,000. Children are separated from their parents at a record rate. A tent camp at Fort Bliss, Texas, holds 5,000 people on the same ground that served as an internment camp for Japanese Americans during World War II. 2025 was the deadliest year for ICE detention on record. > > ... > > This is the real state of Trump’s America in the 250th year since the Declaration of Independence: a society riven by class antagonism on a scale not seen since the Gilded Age, ruled by a criminal oligarchy that has dispensed with even the pretense of democratic governance, lurching toward war abroad and dictatorship at home. > > But there is another side to the equation, and it is the decisive one. The same crisis dri