I don't see any reviews or social mentions specifically about a software tool called "Ada" in the provided content. The social mentions cover various other tools and topics including Logi Options Plus, Claude AI, streaming platforms, and other software, but none appear to be discussing Ada specifically. Without relevant user feedback about Ada, I cannot provide a meaningful summary of what users think about this tool. Could you provide reviews and mentions that specifically reference Ada?
| Metric | Value |
|---|---|
| Mentions (30d) | 15 |
| Reviews | 0 |
| Platforms | 7 |
| Sentiment | 0% (0 positive) |
| Field | Value |
|---|---|
| Industry | information technology & services |
| Employees | 430 |
| Funding Stage | Series C |
| Total Funding | $190.6M |
Mouser: An open source alternative to Logi-Plus mouse software
I discovered this project because, all of a sudden, the Logi Options Plus software updater started consuming 40–60% of the CPU on my Intel MacBook Pro until I killed the process (which, of course, restarts). My searches led me to a Reddit discussion where other people reported the same issue.

I'm a minor contributor to this project, but it aims to reduce or eliminate the need for Logitech's proprietary software and telemetry. We could use help if other people are interested.

Please check out the GitHub link for more detailed motivations (eliminating telemetry) as part of this project: https://github.com/TomBadash/MouseControl
Midjourney engineer debuts new vibe coded, open source standard Pretext to revolutionize web design
For three decades, the web has existed in a state of architectural denial. It is a platform originally conceived to share static physics papers, yet it is now tasked with rendering the most complex, interactive, and generative interfaces humanity has ever conceived. At the heart of this tension lies a single, invisible, and prohibitively expensive operation known as "layout reflow." Whenever a developer needs to know the height of a paragraph or the position of a line to build a modern interface, they must ask the browser’s Document Object Model (DOM), the standard by which developers can create and modify webpages. In response, the browser often has to recalculate the geometry of the entire page — a process akin to a city being forced to redraw its entire map every time a resident opens their front door.

Last Friday, March 27, 2026, Cheng Lou — a prominent software engineer whose work on React, ReScript, and Midjourney has defined much of the modern frontend landscape — announced on the social network X that he had "crawled through depths of hell" to release an open source (MIT License) solution: Pretext, which he coded using AI vibe coding tools and models like OpenAI's Codex and Anthropic's Claude. It is a 15KB, zero-dependency TypeScript library that allows for multiline text measurement and layout entirely in "userland," bypassing the DOM and its performance bottlenecks.

Without getting too technical, in short, Lou's Pretext turns text blocks on the web into fully dynamic, interactive and responsive spaces, able to adapt and smoothly move around any other object on a webpage, preserving letter order and spaces between words and lines, even when a user clicks and drags other objects to intersect with the text, or resizes their browser window dramatically. Ironically, it's difficult with mere text alone to convey how significant Lou's latest release is for the entire web going forward. Fortunately, other third-party developers whipped up quick demos with Pretext.
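To make the "layout in userland" idea concrete, here is a minimal, hypothetical sketch in Python (not Pretext's actual TypeScript API): a greedy line-breaker that computes line boxes from a supplied width-measuring function instead of querying a DOM. The fixed-width `char_width` metric below is a stand-in assumption; real engines (and presumably Pretext) use per-glyph font metrics, which is the hard part.

```python
# Illustrative sketch: multiline text layout computed entirely in "userland",
# i.e. without asking a DOM for geometry. A measure() callback supplies text
# widths; the greedy breaker packs words into lines under max_width.
from typing import Callable, List

def layout_text(text: str, max_width: float,
                measure: Callable[[str], float]) -> List[str]:
    """Greedy word wrap: pack words into lines without exceeding max_width."""
    lines: List[str] = []
    current = ""
    for word in text.split():
        candidate = f"{current} {word}" if current else word
        if measure(candidate) <= max_width or not current:
            current = candidate  # fits (or line is empty: take the word anyway)
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

# Stand-in metric: every character is 8px wide (an assumption for this demo).
char_width = lambda s: len(s) * 8.0

print(layout_text("the quick brown fox jumps over the lazy dog", 120, char_width))
# → ['the quick brown', 'fox jumps over', 'the lazy dog']
```

Because no browser object is consulted, the layout can be recomputed per animation frame without triggering a page-wide reflow, which is the performance property the article describes.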
Show HN: Forkrun – NUMA-aware shell parallelizer (50×–400× faster than parallel)
forkrun is the culmination of a 10-year-long journey focused on "how to make shell parallelization fast". What started as a standard "fork jobs in a loop" has turned into a lock-free, CAS-retry-loop-free, SIMD-accelerated, self-tuning, NUMA-aware shell-based stream parallelization engine that is (mostly) a drop-in replacement for xargs -P and GNU parallel.

On my 14-core/28-thread i9-7940x, forkrun achieves:

* 200,000+ batch dispatches/sec (vs ~500 for GNU Parallel)
* ~95–99% CPU utilization across all 28 logical cores, even when the workload is non-existent (bash no-ops / `:`) (vs ~6% for GNU Parallel). These benchmarks are intentionally worst-case (near-zero work per task) because they measure the capability of the parallelization framework itself, not how much work an external tool can do.
* Typically 50×–400× faster on real high-frequency, low-latency workloads (vs GNU Parallel)

A few of the techniques that make this possible:

* Born-local NUMA: stdin is splice()'d into a shared memfd, then pages are placed on the target NUMA node via set_mempolicy(MPOL_BIND) before any worker touches them, making the memfd NUMA-spliced. Each NUMA node only claims work that is *already* born-local on its node. Stealing from other nodes is permitted under some conditions when no local work exists.
* SIMD scanning: per-node indexers/scanners use AVX2/NEON to find line boundaries (delimiters) at speeds approaching memory bandwidth, and publish byte offsets and line counts into per-node lock-free rings.
* Lock-free claiming: workers claim batches with a single atomic_fetch_add — no locks, no CAS retry loops; contention is reduced to a single atomic on one cache line.
* Memory management: a background thread uses fallocate(PUNCH_HOLE) to reclaim space without breaking the logical offset system.

…and that’s just the surface.
The implementation uses many additional systems-level techniques (phase-aware tail handling, adaptive batching, early-flush detection, etc.) to eliminate overhead, increase throughput and reduce latency at every stage.

In its fastest (-b) mode (fixed-size batches, minimal processing), it can exceed 1B lines/sec.

forkrun ships as a single bash file with an embedded, self-extracting C extension — no Perl, no Python, no install, full native support for parallelizing arbitrary shell functions. The binary is built in public GitHub Actions so you can trace it back to CI (see the GitHub "Blame" on the line containing the base64 embeddings). Trying it is literally two commands:

```
. frun.bash
frun shell_func_or_cmd < inputs
```

For benchmarking scripts and results, see the BENCHMARKS dir in the GitHub repo. For an architecture deep-dive, see the DOCS dir in the GitHub repo.

Happy to answer questions.
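The "lock-free claiming" technique described above (one atomic fetch-add per batch claim, no locks, no CAS retry loops) can be illustrated with a small model. This is not forkrun's code (forkrun is bash plus an embedded C extension using real `atomic_fetch_add`); it is a hypothetical Python sketch where CPython's `itertools.count`, whose increment is atomic under the GIL, stands in for the shared atomic counter.

```python
# Sketch of single-atomic batch claiming: each worker claims the next batch
# index with one shared-counter increment, so no batch is ever claimed twice
# and no lock is held on the claim path.
import itertools
import threading

BATCH = 100
TOTAL = 1000
next_batch = itertools.count()   # shared monotonic counter ("atomic_fetch_add")
claimed = []                     # results list; its lock is NOT on the claim path
results_lock = threading.Lock()

def worker() -> None:
    while True:
        b = next(next_batch)     # the single atomic claim
        start = b * BATCH
        if start >= TOTAL:
            return               # no work left past the end of the input
        with results_lock:
            claimed.append((start, min(start + BATCH, TOTAL)))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every batch is claimed exactly once, regardless of thread interleaving.
assert sorted(claimed) == [(i, i + BATCH) for i in range(0, TOTAL, BATCH)]
print(f"{len(claimed)} batches claimed, no duplicates")
```

The point of the pattern is that contention collapses to one counter on one cache line: workers never spin retrying a compare-and-swap, they just advance the counter and own whatever index they received.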
How I built an LLM-powered B2B matching engine for China's small commodity export market
I've been building in the Canada-China B2B trade space for a while, and the biggest friction I kept...
Launch HN: Captain (YC W26) – Automated RAG for Files
Hi HN, we’re Lewis and Edgar, building Captain to simplify unstructured data search (https://runcaptain.com). Captain automates the building and maintenance of file-based RAG pipelines. It indexes cloud storage like S3 and GCS, plus SaaS sources like Google Drive. There’s a quick walkthrough at https://youtu.be/EIQkwAsIPmc.

We also put up a demo site called “Ask PG’s Essays” which lets you ask/search the corpus of pg’s essays, to get a feel for how it works: https://pg.runcaptain.com. The RAG part of this took Captain about 3 minutes to set up.

Here are some sample prompts to get a feel for the experience:

- “When do we do things that don't scale? When should we be more cautious?” (https://pg.runcaptain.com/?q=When%20do%20we%20do%20things%20that%20don't%20scale%3F%20When%20should%20we%20be%20more%20cautious%3F)
- “Give me some advice, I'm fundraising” (https://pg.runcaptain.com/?q=Give%20me%20some%20advice%2C%20I'm%20fundraising)
- “What are the biggest advantages of Lisp” (https://pg.runcaptain.com/?q=what%20are%20the%20biggest%20advantages%20of%20Lisp)

A good production RAG pipeline takes substantial effort to build, especially for file workloads. You have to handle ETL or text extraction, chunking, embedding, storage, search, re-ranking, inference, and often compliance and observability – all while optimizing for latency and reliability. It’s a lot to manage. grep works well in some cases, but for agents, semantic search provides significantly higher performance.
Cursor uses both and reports 6.5%–23.5% accuracy gains from vector search over grep (https://cursor.com/blog/semsearch).

We’ve spent the past four years scaling RAG pipelines for companies, and Edgar’s work at Purdue’s NLP lab directly informed our chunking techniques. In conversations with dozens of engineers, we repeatedly saw DIY pipelines produce inconsistent results, even after weeks of tuning. Many teams lacked clarity on which retrieval strategies best fit their data.

We realized that a system to provision storage and embeddings, handle indexing, and continuously update pipelines to reflect the latest search techniques could remove the need for every team to rebuild RAG themselves. That idea became Captain.

In practice, one API call indexes URLs, cloud storage buckets, directories, or individual files. Under the hood, we’re converting everything to Markdown. For this, we’ve had good results with Gemini 3 Pro for images, Reducto for complex documents, and Extend for basic OCR. For embedding models, ‘gemini-embedding-001’ performed reasonably well at first, but we later switched to the Contextualized Embeddings from ‘voyage-context-3’. It produced more relevant results than even the newer Voyage 4 models because its chunk embeddings are encoded with awareness of the surrounding document context. We then applied Voyage’s ‘rerank-2.5’ as second-stage re-ranking, reducing 50 initial chunks to a final top 15 (configurable in Captain’s API). Dense embeddings are just half the picture, and full-text search with RRF completes our hybrid retrieval. In the Captain API, these techniques are exposed through a single /query endpoint. Access controls can be configured via metadata filters, and page-number citations are returned automatically.

The stack is constantly changing, but the Captain API creates a standard interface for this.
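Reciprocal Rank Fusion (RRF), the technique named above for merging dense-embedding results with full-text results, is simple enough to sketch. The document IDs and ranked lists below are made up for illustration; this is the standard RRF formula, not Captain's internal code.

```python
# Reciprocal Rank Fusion: each ranked list contributes 1/(k + rank) to a
# document's score, so documents ranked highly by several retrievers win.
from collections import defaultdict

def rrf(rankings: list, k: int = 60) -> list:
    """Fuse several ranked lists of doc IDs into one fused ranking."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.__getitem__, reverse=True)

dense = ["doc_a", "doc_c", "doc_b"]      # hypothetical vector-search order
fulltext = ["doc_b", "doc_a", "doc_d"]   # hypothetical keyword-search order
print(rrf([dense, fulltext]))
# → ['doc_a', 'doc_b', 'doc_c', 'doc_d']
```

`doc_a` wins because it ranks well in both lists; `k` (conventionally 60) damps the advantage of a single top rank, which is why RRF is robust without score calibration between the two retrievers.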
You can try Captain free for one month and build your own pipelines at https://runcaptain.com. We’re looking for candid feedback, especially anything that can make it more useful, and look forward to your comments!
One Tool Calling Interface for OpenAI, Claude, and Gemini
llm-api-adapter is an open‑source Python library designed to simplify working with multiple LLM...
Bump ruby_llm from 1.2.0 to 1.13.2
Bumps [ruby_llm](https://github.com/crmne/ruby_llm) from 1.2.0 to 1.13.2.

Release notes (sourced from [ruby_llm's releases](https://github.com/crmne/ruby_llm/releases)):

**1.13.2: Patch Fixes for Schema + Streaming**

A small patch release with three fixes.

- Fix: schema names are always OpenAI-compatible. Schema names now always produce a valid `response_format.json_schema.name` for OpenAI: namespaced names like `MyApp::Schema` are sanitized, and blank names now safely fall back to `response`. Fixes [#654](https://redirect.github.com/crmne/ruby_llm/issues/654).
- Fix: streaming ignores non-Hash SSE payloads. Streaming handlers now skip non-Hash JSON payloads (like `true`) before calling provider chunk builders, preventing intermittent crashes in Anthropic streaming. Fixes [#656](https://redirect.github.com/crmne/ruby_llm/issues/656).
- Fix: models.dev `created_at` date handling. Improved handling for missing models.dev dates when populating `created_at` metadata.

Installation:

```ruby
gem "ruby_llm", "1.13.2"
```

Upgrading from 1.13.1:

```bash
bundle update ruby_llm
```

Merged PRs:

- Fix missing models.dev date handling for created_at metadata by [@afurm](https://github.com/afurm) in [crmne/ruby_llm#652](https://redirect.github.com/crmne/ruby_llm/pull/652)
- [BUG] Fix schema name sanitization for OpenAI API compatibility by [@alexey-hunter-io](https://github.com/alexey-hunter-io) in [crmne/ruby_llm#655](https://redirect.github.com/crmne/ruby_llm/pull/655)
- Fix Anthropic streaming crash on non-hash SSE payloads by [@crmne](https://github.com/crmne) in [crmne/ruby_llm#657](https://redirect.github.com/crmne/ruby_llm/pull/657)

Full changelog: https://github.com/crmne/ruby_llm/compare/1.13.1...1.13.2

**1.13.1: Quick Fixes**

A small patch release with three fixes.

- Fix: schema + tool calls no longer crash.

... (truncated)

Commits:

- [`0950693`](https://github.com/crmne/ruby_llm/commit/0950693544457b840cee7f7c89e69248f5f32de6) Updated cassettes
- [`b6e62a6`](https://github.com/crmne/ruby_llm/commit/b6e62a6df45e6445a91d41ff71440deec1ea8d88) Bump to 1.13.2
- [`67c4148`](https://github.com/crmne/ruby_llm/commit/67c41488c3fba7af4eea5777c799c3e4789bb38e) Fix Anthropic streaming crash on non-hash SSE payloads ([#657](https://redirect.github.com/crmne/ruby_llm/issues/657))
- [`afe7d04`](https://github.com/crmne/ruby_llm/commit/afe7d046eb102c42137ee1aac338df861bca8637) [BUG] Fix schema name sanitization for OpenAI API compatibility ([#655](https://redirect.github.com/crmne/ruby_llm/issues/655))
- [`fe8994d`](https://github.com/crmne/ruby_llm/commit/fe8994d56640d56112d1afa82d95014a2df967d6) Updated models
- [`48bf6f6`](https://github.com/crmne/ruby_llm/commit/48bf6f6c98a15ed53501bdabc36cc9fc740bec30) Updated models
- [`d940984`](https://github.com/crmne/ruby_llm/commit/d940984d0b3cf63f108a7e0df52db82238ff8549) Fix missing models.dev date handling for created_at metadata ([#652](https://redirect.github.com/crmne/ruby_llm/issues/652))
- [`97c0546`](https://github.com/crmne/ruby_llm/commit/97c0546d8e09cfd51943ae612d9598ab11e81885) Bump to 1.13.1
- [`1094945`](https://github.com/crmne/ruby_llm/commit/10949459d1243cc2a7c1ebef3a8ce9ac9691bb71) Populate Gemini cached token usage
- [`beec837`](https://github.com/crmne/ruby_llm/commit/beec83752c7697465b0f6f9e50b8bdfbc25353d1) Fix schema JSON parsing for intermediate tool-call responses ([#650](https://redirect.github.com/crmne/ruby_llm/issues/650))
- Additional commits viewable in the [compare view](https://github.com/crmne/ruby_llm/compare/1.2.0...1.13.2)
chore(deps): update updates-patch-minor
> ℹ️ **Note**
>
> This PR body was truncated due to platform limits.

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| 1password/connect-sync | patch | `1.8.1` → `1.8.2` |
| [alpine/openclaw](https://openclaw.ai) ([source](https://redirect.github.com/openclaw/openclaw)) | minor | `2026.2.22` → `2026.3.8` |
| [cloudflare/cloudflared](https://redirect.github.com/cloudflare/cloudflared) | minor | `2026.2.0` → `2026.3.0` |
| kerberos/agent | patch | `v3.6.12` → `v3.6.15` |
| [searxng/searxng](https://searxng.org) ([source](https://redirect.github.com/searxng/searxng)) | patch | `2026.3.8-a563127a2` → `2026.3.9-d4954a064` |

> [!WARNING]
> Some dependencies could not be looked up. Check the [Dependency Dashboard](../issues/304) for more information.

### Release Notes

openclaw/openclaw (alpine/openclaw): [v2026.3.8](https://redirect.github.com/openclaw/openclaw/blob/HEAD/CHANGELOG.md#202638) ([compare source](https://redirect.github.com/openclaw/openclaw/compare/v2026.3.7...v2026.3.8))

Changes:

- CLI/backup: add `openclaw backup create` and `openclaw backup verify` for local state archives, including `--only-config`, `--no-include-workspace`, manifest/payload validation, and backup guidance in destructive flows. ([#40163](https://redirect.github.com/openclaw/openclaw/issues/40163)) Thanks [@shichangs](https://redirect.github.com/shichangs).
- macOS/onboarding: add a remote gateway token field for remote mode, preserve existing non-plaintext `gateway.remote.token` config values until explicitly replaced, and warn when the loaded token shape cannot be used directly from the macOS app. ([#40187](https://redirect.github.com/openclaw/openclaw/issues/40187), supersedes [#34614](https://redirect.github.com/openclaw/openclaw/issues/34614)) Thanks [@cgdusek](https://redirect.github.com/cgdusek).
- Talk mode: add top-level `talk.silenceTimeoutMs` config so Talk waits a configurable amount of silence before auto-sending the current transcript, while keeping each platform's existing default pause window when unset. ([#39607](https://redirect.github.com/openclaw/openclaw/issues/39607)) Thanks [@danodoesdesign](https://redirect.github.com/danodoesdesign). Fixes [#17147](https://redirect.github.com/openclaw/openclaw/issues/17147).
- TUI: infer the active agent from the current workspace when launched inside a configured agent workspace, while preserving explicit `agent:` session targets. ([#39591](https://redirect.github.com/openclaw/openclaw/issues/39591)) Thanks [@arceus77-7](https://redirect.github.com/arceus77-7).
- Tools/Brave web search: add opt-in `tools.web.search.brave.mode: "llm-context"` so `web_search` can call Brave's LLM Context endpoint and return extracted grounding snippets with source metadata, plus config/docs/test coverage. ([#33383](https://redirect.github.com/openclaw/openclaw/issues/33383)) Thanks [@thirumaleshp](https://redirect.github.com/thirumaleshp).
- CLI/install: include the short git commit hash in `openclaw --version` output when metadata is available, and keep installer version checks compatible with the decorated format. ([#39712](https://redirect.github.com/openclaw/openclaw/issues/39712)) Thanks [@sourman](https://redirect.github.com/sourman).
- CLI/backup: improve archive naming for date sorting, add config-only backup mode, and harden backup planning, publication, and verification edge cases. ([#40163](https://redirect.github.com/openclaw/openclaw/issues/40163)) Thanks [@gumadeiras](https://redirect.github.com/gumadeiras).
- ACP/Provenance: add optional ACP ingress provenance metadata and visible receipt injection (`openclaw acp --provenance off|meta|meta+receipt`) so OpenClaw agents can retain and report ACP-origin context with session trace IDs. ([#40473](https://redirect.github.com/openclaw/openclaw/issues/40473)) Thanks [@mbelinky](https://redirect.github.com/mbelinky).
- Tools/web search: alphabetize provider ordering across runtime selection, onboarding/configure pickers, and config metadata, so provider lists stay neutral and multi-key auto-detect now prefers Grok before Kimi. ([#40259](https://redirect.github.com/openclaw/openclaw/issues/40259)) Thanks [@kesku](https://redirect.github.com/kesku).
- Docs/Web search: restore $5/month free-credit details, replace defunct "Data for Search"/"Data for AI" plan names with current "Search" plan, and note legacy subscription validity in Brave setup docs. Follows up on [#26860](https://redirect.github.com/openclaw/openclaw/issues/26860). ([#40111](https://redirect.github.com/openclaw/openclaw/issues/40111)) Thanks [@remusao](https://redirect.github.com/remusao).
- Extensions/ACPX tests: move the shared runtime fixture helper from `src/runtime-internals/` to `src/test-utils/` so the test-only he
We Need to Stop Listening to Tony Blair Once and for All
It might feel like months, but we’re just over a week into the US and Israel’s illegal assault on Iran, and there’s no end in sight. What is in sight, though, is [the apocalyptic vision of Tehran ablaze](https://time.com/7383099/iran-news-oil-strikes-tehran/), wreathed in thick smoke as black oil-soaked rain falls on its inhabitants. That’s the result of Israeli strikes on several oil storage depots in the city, reportedly sending burning petroleum running through gutters while [geysers of flaming gas exploded](https://www.telegraph.co.uk/world-news/2026/03/08/rivers-of-fire-in-tehran-after-oil-depots-blown-up/) from the streets. A nightmare? For most of us, yes. But for former British prime minister Tony Blair it’s apparently a dream. One that he might have liked the entire British public to be non-consensually forced into realising for him. And [not for the first time](https://www.bbc.co.uk/news/uk-politics-36701854). Were my hands bloodied with the [deaths of up to a million people](https://www.abc.net.au/news/2008-01-31/million-iraqis-dead-since-invasion-study/1028878), I’d probably think twice before giving my opinion on yet another illegal US adventure in the Middle East. Not our Tone, though. On Sunday [the papers reported](https://www.dailymail.co.uk/news/article-15623903/Tony-Blair-rebukes-Keir-Starmer-not-backing-Trump-Iran.html) that the man who told George W. Bush in the months before the disastrous Iraq war, “[I will be with you, whatever](https://www.theguardian.com/uk-news/2016/jul/06/with-you-whatever-tony-blair-letters-george-w-bush-chilcot)”, is still singing the same old tune. “We should,” Blair [told a private Jewish News event](https://www.independent.co.uk/news/uk/home-news/blair-starmer-trump-war-iran-labour-b2934207.html) on Friday night, “have backed America from the very beginning”.
That was a direct criticism of current prime minister Keir Starmer, who, [to a chorus of warmonger criticism](https://www.bbc.co.uk/news/articles/c05v28eqjyvo), initially refused the US and Israel access to British military infrastructure to launch its war on Iran. But it’s not like we’ve stayed completely out of the mess: our bases are now free for use by US jets for “defensive” actions – whatever that means – with American bombers [already touching down](https://www.theguardian.com/world/2026/mar/07/us-bomber-lands-in-uk-after-warning-of-surge-in-strikes-on-iran). Now, nobody was ever supposed to know that a former Labour prime minister so openly rubbished the current one in public. That’s because the event was conducted under Chatham House rules. In short, that means what’s said in the room can be made public, but not who said it. In long, it means elites are emboldened to express their heart’s true desires without any threat of accountability. We can’t know what was in Tony Blair’s heart when he mourned the fact that the UK was not more involved in blasting a hole straight through the security of the hundreds of millions who live in the Middle East. Nor can we tell for sure, as global oil prices [surge above $100 a barrel](https://www.bbc.co.uk/news/articles/c79542n0grwo) for the first time since the Russian invasion of Ukraine, how little the lives of Brits, long blighted by a cost of living crisis, matter to him. We can, though, look at his record. And what that shows – in my opinion – is a tendency, previously expressed via his businesses and [nowadays his Tony Blair Institute](https://www.ft.com/content/bcf1f1f5-a38f-4078-98f8-ab1ff7378895), to see fatal discord as fiscal opportunity. 
[Autocracy](https://www.theguardian.com/politics/2023/aug/12/tony-blair-institute-continued-taking-money-from-saudi-arabia-after-khashoggi), [oligarchy](https://www.theguardian.com/world/2022/jan/06/how-tony-blair-advised-former-kazakh-ruler-after-2011-uprising), [calamity](https://www.theguardian.com/politics/2025/jul/07/tony-blair-thinktank-worked-with-project-developing-trump-riviera-gaza-plan)? Roll up, roll up: the Blair pitch project is in town, and it has some consultancy to sell. Now, none of that is a crime. But you might think it indicates a conflict when wading into affairs of state. Blair is alleged to have form here too: in 2014, a number of former ambassadors and MPs [called for his resignation](https://www.theguardian.com/politics/2014/jun/27/tony-blair-conflict-interests-middle-east) as Middle East peace envoy for the Quartet (made up of the United Nations, the US, the EU and Russia). They claimed he was ineffective, while others noted [the growth of his business interests in the region](https://www.independent.co.uk/voices/tony-blair-uae-middle-east-envoy-qatar-israel-palestine-foreign-office-a7894641.html). Blair’s [financial arrangements](https://www.telegraph.co.uk/news/poli
Weekly Report Mar 2 -- Mar 9, 2026
# Weekly Report: Mar 2 -- Mar 9, 2026

## Quick Stats

| Metric | Count |
|--------|-------|
| Merged PRs | 47 |
| Open PRs | 24 (11 draft) |
| Open issues | 61 |
| New issues this week | 33 |
| Issues closed this week | 6 |
| CI runs on main | 30 |

## Highlights

An exceptionally active week with 47 merged PRs. Key themes:

- **Realm migration**: Keycloak master-to-kagenti realm migration landed (#764), with follow-up fixes (#851, #863)
- **Platform hardening**: Podman support (#861), Docker Hub rate limit fixes (#844), PostgreSQL mount fix (#852)
- **CI/CD improvements**: OpenSSF Scorecard 7.1->8+ (#807), stale workflow permissions (#859), HyperShift cluster auto-cleanup (#854)
- **New capabilities**: CLI/TUI for Kagenti (#835), Istio trace export to OTel (#795), RHOAI 3.x integration (#809)
- **Dependency updates**: 8 Dependabot PRs (Docker actions major bumps, CodeQL, Trivy)
- **Authorization epic**: 7 new issues (#787-#794) laying out a comprehensive authorization and policy framework
- **Agent sandbox epic**: New epic (#820) for platform-owned sandboxed agent runtime

## Issue Analysis

### Epics (active initiatives)

| # | Title | Owner | Status |
|---|-------|-------|--------|
| #862 | AgentRuntime CR — CR-triggered injection | @cwiklik | New, design phase |
| #820 | Platform-Owned Sandboxed Agent Runtime | @Ladas | Active, PR #758 in progress |
| #828 | Migrate installer from Ansible/Helm to Operator | @pdettori | New, planning |
| #787 | Authorization, Policies, and Access Management | @mrsabath | New, 6 sub-issues filed |
| #841 | Org-wide orchestration: CI, tests, security | @Ladas | Active, PRs #866-#868 open |
| #767 | Migrate from Keycloak master realm | @mrsabath | Mostly done (#764 merged), close candidate |
| #619 | Tracing observability PoC | @evaline-ju | Active (#795 merged) |
| #621 | OpenSSF Scorecard to 10/10 | @Ladas | Active (#807 merged, now 8+) |
| #523 | Refactor APIs for Compositional Architecture | @pdettori | Active, PR #770 open |
| #518 | OpenShift AI deployment issues | @Ladas | Active (#809 merged) |
| #309 | Full Coverage E2E Testing | @cooktheryan | Ongoing |
| #440 | Multi-Team Deployment on RHOAI | @Ladas | Ongoing |
| #439 | Namespace-Based Token Usage Quotas | @Ladas | Ongoing |
| #614 | Feedback review community meeting | @Ladas | Stale (>30d no update) |
| #623 | Identify Emerging Agentic Deployment Patterns | @kellyaa | Stale |
| #612 | Agent Attestation Framework | @mrsabath | Stale, PR #613 still draft |

### Security-Adjacent Issues

| # | Title | Status | Recommendation |
|---|-------|--------|----------------|
| #822 | Keycloak configmap should be secret | Open | High priority — credentials in configmap |
| #106 | Replace hardcoded secret with SPIRE identity | Open | Long-standing, PR #769 in draft |
| #333 | SPIFFE ID missing checks | Open | Stale, needs triage |
| #267 | Replace hard-coded Client Secret File path | Open | Good first issue, needs assignee |

### Bug Reports

| # | Title | Still affects main? | PR exists? | Recommendation |
|---|-------|---------------------|------------|----------------|
| #856 | Warnings during Kagenti install | Likely yes | No | Triage — install warnings |
| #855 | Can't checkout source on Windows | Yes (skill naming) | PR #869 | In progress |
| #829 | Deleting A2A agent doesn't delete HTTPRoute | Likely yes | No | Needs fix |
| #826 | No way to log out of Kagenti | Yes | No | UX bug, needs fix |
| #825 | Build failures lead to stuck state | Likely yes | No | Needs investigation |
| #738 | UI drops spire label on 2nd deploy | Likely yes | No | Stale (>30d) |
| #486 | Installer issues (Postgres/Phoenix) | Partially (#852 fixed PG) | Partial | Re-verify Phoenix |
| #781 | kagenti-deps fails on OCP 4.19 | Unknown | No | Stale, needs triage |
| #606 | Unsupported Helm version | Unknown | No | Stale, needs triage |
| #655 | Duplicated resources between repos | Unknown | No | Stale, needs triage |

### Issues Closed This Week (good velocity)

| # | Title | Fix PR |
|---|-------|--------|
| #833 | UI login fails after realm migration | #834 |
| #831 | --preload fails when images cached | #832 |
| #819 | Remove deprecated Component CRD refs | #818 |
| #813 | Import env vars references bad URL | #821 |
| #810 | Import env vars silently fails on dup | #821 |
| #804 | OAuth secret job SSL error on OCP | #805 |

### Feature Requests

| # | Title | Priority | Recommendation |
|---|-------|----------|----------------|
| #858 | Use new URL for fetching Agent Cards | Medium | Good first issue |
| #836 | AuthBridge sidecar opt-out controls in UI | Medium | Tied to #862 epic |
| #824 | Help text for UI fields | Low | Good UX improvement |
| #823 | Examples as suggestions in UI | Low | Nice-to-have |
| #817 | Auto-add issues/PRs to project board | Medium | PR #870 open |
| #814 | Mechanism to update agent via K8s | Medium | Operator feature |
| #786 | Register MCP servers from UI | Medium | UI feature |
| #783 | Agent card signing/verifica
Track token usage per provider response
## Summary

Quorum doesn't capture token usage metadata from provider responses. Session JSON stores response text but not the `usage` headers that every provider returns (input tokens, output tokens, total).

## Why

- **Cost visibility** — users (and us) can't see how much a deliberation actually costs
- **Billing foundation** — if Quorum becomes a billable service, we need real usage data, not estimates
- **Provider comparison** — helps evaluate which providers are token-efficient vs verbose

## Proposed

1. Capture the `usage` field from each provider's API response (most return `prompt_tokens`, `completion_tokens`, `total_tokens`)
2. Store it in the session JSON alongside each response
3. Add a summary at the end of deliberation (total tokens by provider, estimated cost)
4. Expose via CLI: `quorum usage <session-id>` or similar

## Estimates (current 3-provider council)

- ~97K tokens total per deliberation (~32K per provider)
- Without tracking, we're guessing based on text length

## Notes

Provider-specific quirks to handle:

- OpenAI/Codex: `usage.prompt_tokens` + `usage.completion_tokens`
- Anthropic: `usage.input_tokens` + `usage.output_tokens`
- Kimi: follows OpenAI format
- Some streaming responses return usage only in the final chunk
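A minimal sketch of the normalization step, using only the provider field names listed in the notes above — the function name and the output schema are assumptions for illustration, not Quorum's actual code:

```python
def normalize_usage(provider: str, usage: dict) -> dict:
    """Map provider-specific `usage` payloads to one common shape
    before storing them in the session JSON."""
    if provider in ("openai", "codex", "kimi"):  # Kimi follows the OpenAI format
        prompt = usage.get("prompt_tokens", 0)
        completion = usage.get("completion_tokens", 0)
        return {
            "input_tokens": prompt,
            "output_tokens": completion,
            "total_tokens": usage.get("total_tokens", prompt + completion),
        }
    if provider == "anthropic":
        inp = usage.get("input_tokens", 0)
        out = usage.get("output_tokens", 0)
        return {"input_tokens": inp, "output_tokens": out, "total_tokens": inp + out}
    # Unknown provider: keep the raw payload so nothing is silently dropped.
    return {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0, "raw": usage}
```

For streaming responses, the caller would apply this only to the final chunk, since (as noted above) some providers report usage only there.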
Fintech Daily Digest — Monday, Mar 09, 2026
# TOP 3 STORIES

1. **X taps William Shatner to give out invites to its payments service, X Money** [Source: Fintech News | TechCrunch](https://techcrunch.com/2026/03/04/x-taps-william-shatner-to-give-out-invites-to-its-payments-service-x-money/)

   X has launched a unique marketing campaign for its payments service, X Money, by partnering with William Shatner to give out invites to 42 users who donated to his charity. This campaign aims to create buzz around X Money's beta launch.

   **What this means for Stripe:** This marketing strategy could influence how Stripe approaches its own marketing efforts for new product launches, potentially incorporating more creative and charitable initiatives. Stripe's Connect product could be particularly relevant in facilitating such campaigns.

   **Content angle:** A blog post exploring innovative marketing strategies for fintech products, highlighting the role of charity and celebrity endorsements, could be an interesting response from Stripe's content marketing team.

2. **Stripe wants to turn your AI costs into a profit center** [Source: Fintech News | TechCrunch](https://techcrunch.com/2026/03/02/stripe-wants-to-turn-your-ai-costs-into-a-profit-center/)

   Stripe has released a preview aimed at helping AI companies track, pass through, and profit from underlying AI model fees. This move positions Stripe as a key player in the AI economy, enabling businesses to monetize their AI investments more effectively.

   **What this means for Stripe:** By facilitating the monetization of AI costs, Stripe strengthens its position in the payments infrastructure for the internet, making its platform more appealing to AI-driven businesses. This could particularly impact Stripe's Revenue Recognition and Billing products.

   **Content angle:** A case study or whitepaper on how Stripe's solutions can help AI companies turn their costs into revenue streams could provide valuable insights for potential clients.

3. **Plaid valued at $8B in employee share sale** [Source: Fintech News | TechCrunch](https://techcrunch.com/2026/02/26/plaid-valued-at-8b-in-employee-share-sale/)

   Plaid, a fintech company specializing in account linking and payment processing, has seen its valuation increase to $8 billion through an employee share sale. This significant valuation underscores the growing importance of fintech infrastructure companies.

   **What this means for Stripe:** As a major player in the fintech infrastructure space, Stripe should consider the implications of Plaid's valuation on its own valuation and competitive positioning. Stripe's products like Payments and Connect might see increased demand as the fintech space grows.

   **Content angle:** Stripe could publish a thought leadership piece on the evolving fintech landscape, discussing how valuations like Plaid's reflect the sector's growth and the role of infrastructure providers in facilitating this expansion.

# NEWS BY TRACK

## _Advancing Developer Craft_

- **Kast raises $80 million** [Source: Finextra Research Headlines](https://www.finextra.com/newsarticle/47408/stablecoin-startup-kast-raises-80-million?utm_medium=rssfinextra&utm_source=finextrafeed)

  Kast, a stablecoin startup, has secured $80 million in funding, indicating growing interest in stablecoin technology.

  **Stripe relevance:** Stripe's Issuing and Treasury products could be relevant for stablecoin startups like Kast.

  **Content angle:** A developer tutorial on integrating stablecoin payments using Stripe's API could be a useful resource.

## _Designing Adaptive Revenue Models_

- **Papa John’s Thinks the Next Great Pizza Topping Is Software** [Source: PYMNTS.com](https://www.pymnts.com/restaurant-technology/2026/papa-johns-thinks-the-next-great-pizza-topping-is-software/)

  Papa John's is focusing on technology and digital capabilities to compete and grow, highlighting the importance of adaptive revenue models in the restaurant industry.

  **Stripe relevance:** Stripe's Billing and Revenue Recognition products can help businesses like Papa John's manage complex revenue models.

  **Content angle:** A blog post on how restaurants can leverage technology and adaptive pricing strategies to boost revenue could feature Stripe as a solution provider.

## _Charting the Future of Payments_

- **Real-Time Payments Reach a Turning Point in North America** [Source: PYMNTS.com](https://www.pymnts.com/real-time-payments/2026/real-time-payments-reach-a-turning-point-in-north-america/)

  Real-time payments in North America are transitioning from expansion to execution, with each country following a distinct strategic path.

  **Stripe relevance:** Stripe's Payments product is well-positioned to support the growth of real-time payments.

  **Content angle:** An in-depth analysis of the current state and future of real-time payments in North America, highlighting Stripe's role, could be a valuable resource for businesses.

## _Optimizing the Economics of Risk_

- **OpenAI fires employee for using confidential info on prediction
Add LLM token usage tracking to LoggingCallbacks
## Summary

This proposal introduces per-call and per-turn LLM token usage tracking in the `LoggingCallbacks` class to provide visibility into prompt, response, reasoning, and cached token consumption. The objective is to enable better cost monitoring, prompt optimization, and operational insights while maintaining a lightweight implementation that integrates seamlessly with the existing OpenTelemetry → Cloud Logging pipeline.

## Motivation

Currently, `agent-foundation` does not expose metrics related to token consumption for LLM calls. Without this visibility, it becomes difficult to:

- Monitor API usage and operational costs in production
- Identify expensive queries or agents
- Optimize prompts to reduce token consumption
- Implement usage tracking or budget alerts

This feature is inspired by token tracking approaches used in other agent frameworks but adopts a lightweight, log-based design that requires no additional dependencies.

## Proposed Approach

The implementation enhances the `LoggingCallbacks` lifecycle to capture and log token usage metadata returned by LLM responses. Key additions include:

- Initialize per-turn token counters during the agent lifecycle
- Capture the model name from the LLM request
- Extract token usage metadata from the LLM response
- Log per-call token consumption
- Aggregate token usage across all LLM calls within a single user turn

Token usage data will be captured from `usage_metadata` fields such as:

- `prompt_token_count`
- `total_token_count`
- `thoughts_token_count`
- `cached_content_token_count`

## Expected Output

### Per LLM Call

```
Token usage: {
  agent_name: "my_agent",
  model: "gemini-2.5-flash",
  prompt_tokens: 167,
  response_tokens: 10,
  total_tokens: 211,
  thoughts_tokens: 34,
  cached_tokens: 0
}
```

## Design Considerations

- **Lightweight implementation** using structured logging instead of database storage
- **No additional dependencies** required
- **Non-intrusive design**, ensuring existing agent behavior remains unchanged
- **Graceful handling** of missing `usage_metadata` fields

## Acceptance Criteria

- Token usage logged for every LLM call
- Turn-level token totals logged per user query
- Missing usage metadata handled safely
- No new dependencies introduced
- Existing test coverage maintained
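To illustrate the aggregation logic above — the class and hook names here are assumptions for the sketch, not agent-foundation's actual `LoggingCallbacks` API; only the `usage_metadata` field names come from the proposal:

```python
class TokenUsageTracker:
    """Sketch: accumulate per-call `usage_metadata` into per-turn totals,
    handling missing metadata gracefully."""

    FIELDS = (
        "prompt_token_count",
        "total_token_count",
        "thoughts_token_count",
        "cached_content_token_count",
    )

    def __init__(self):
        self.turn_totals = {f: 0 for f in self.FIELDS}

    def on_llm_response(self, usage_metadata=None):
        """Record one LLM call; absent/partial metadata counts as zero."""
        usage_metadata = usage_metadata or {}
        call = {f: usage_metadata.get(f, 0) for f in self.FIELDS}
        for field, count in call.items():
            self.turn_totals[field] += count
        return call  # in practice, emitted as a structured log entry

    def on_turn_end(self):
        """Return the per-turn aggregate and reset counters for the next turn."""
        totals = self.turn_totals
        self.turn_totals = {f: 0 for f in self.FIELDS}
        return totals  # in practice, logged as the turn-level summary
```

Because the tracker only reads dictionary fields and logs, it adds no dependencies and leaves agent behavior unchanged, matching the design considerations above.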
X Users Find Their Real Names Are Being Googled in Israel After Using X Verification Software “Au10tix” Alan Macleod On January 30, the Department of Justice released its latest tranche of 3.5 million documents relating to Jeffrey Epstein. Years of emails, texts, and images were suddenly in the public domain. Epstein, a serial rapist, masterminded a global human trafficking and sexual abuse network, and could count princes, professors, and politicians among his closest friends and accomplices. MintPress News has been at the forefront of covering the Epstein saga, revealing his extremely close links to American and Israeli intelligence groups – a discovery that perhaps sheds light on why it took so long for the world’s most notorious pedophile to face accountability for his crimes. Many of the DOJ files have been heavily redacted in order to protect Epstein’s powerful clients. Still, they have exposed a massive elite nexus revolving around the New York billionaire, implicating presidents, diplomats, and plutocrats in his crimes, and imply that Epstein was significantly more powerful than first thought, shaping modern politics in ways never previously understood. With shocking new details emerging on a near-hourly basis, here are ten Epstein- related stories that have flown relatively under the radar. The Israeli Government Installed Surveillance Cameras at Epstein’s New York Apartment The Israeli government installed and maintained a hi-tech surveillance system at Epstein’s Manhattan apartment complex, including a network of alarms and cameras, emails show. Starting in 2016, the director of protective service at the Israeli mission to the United Nations controlled guests’ access to the Manhattan residence, and even performed background checks on prospective cleaners and other Epstein employees. Former Israeli prime minister Ehud Barak admitted visiting the apartment up to 100 times, and stayed there for long periods of time. 
While Barak’s security may have been a concern, Epstein is known to have housed underage girls at the apartment, and many of his worst sexual crimes and most sordid parties were held there, raising questions as to what sort of images and data the Israeli government had access to. Epstein Plotted War With Iran Ehud Barak became one of Epstein’s closest associates, staying for extended periods of time at the billionaire’s residences. The pair would email, text, call, and meet constantly. A search for “Ehud Barak” elicits more than 3500 results in the latest file dump alone. The pair would talk politics, and shared a vision of the United States attacking Iran. In 2013, with negotiations between the International Atomic Energy Agency and Iran stalling, Epstein emailed Barak stating, in typically poor spelling and grammar: “hopefully somone suggests getting authorization now for Iran. the congress woudl do it.” Epstein would get his wish in 2025, when his close associate Donald Trump began bombing the country. Noam Chomsky Considered Epstein His “Best Friend” Epstein arranged a meeting between Barak and renowned leftist academic (and vehement critic of the U.S. and Israel) Noam Chomsky. An unlikely friendship between the notorious pedophile and star professor blossomed, with the pair regularly meeting up at each other’s houses for dinner. Chomsky flew on Epstein’s “Lolita Express” jet to attend a dinner with Woody Allen in New York. He also expressed his desire to visit Little St. James Island, Epstein’s notorious Caribbean hideaway, and the center of his trafficking operation. Chomsky considered Epstein his “best friend” according to an email sent by his wife, Valeria. 
The usually curt and matter-of-fact academic signed off his emails to Epstein with unexpectedly flowery language, such as “Like real friendship, deep and sincere and everlasting from both of us, Noam and Valeria.” Chomsky strongly supported Epstein until his dying day in a Manhattan prison cell, taking it upon himself to act as his unofficial crisis manager, describing his accusers as “publicity seekers or cranks of all sorts,” and denouncing the media as a “culture of gossip-mongers” destroying his stellar character. “Ive watched the horrible way you are being treated in the press and public,” he wrote, advising Epstein on tactics to fight the supposed smears against him. For a full rundown of the Chomsky-Epstein relationship, see the MintPress News investigation: “The Chomsky-Epstein Files: Unravelling a Web of Connections Between a Star Leftist Academic and a Notorious Pedophile.” Steve Bannon Developed a Plan to Help Epstein “Crush the Pedo Narrative” A second public figure running defense for Epstein was Steve Bannon. In public, the far-right strategist claimed that he was working on a documentary exposing Epstein. In private messaging, however, Bannon, like Chomsky, was advising Epstein on how best to repair his image. Just weeks before Epstein’s arrest and subsequent death, Bannon was messaging him, devising a complex media strategy
Based on user reviews and social mentions, the most common themes are token usage, token cost, and large language models (LLMs).
Based on 57 social mentions analyzed, sentiment is 0% positive, 100% neutral, and 0% negative.