500+ models, 50+ providers, one workspace. Every leading AI model for image, video, 3D, and audio, alongside your custom-trained models.
I cannot provide a meaningful summary of user sentiment about "Scenario" as a software tool based on these social mentions. The provided content appears to be a collection of unrelated political news articles, climate change discussions, and general social media posts from Lemmy and GitHub, rather than reviews or mentions of a specific software product called "Scenario." To properly summarize user opinions about Scenario software, I would need actual user reviews, testimonials, or social mentions that specifically discuss the software's features, performance, pricing, or user experience. The current content doesn't contain any relevant information about a software tool by that name.
Mentions (30d)
2
Reviews
0
Platforms
4
Sentiment
0%
0 positive
Features
Use Cases
Industry
information technology & services
Employees
30
Funding Stage
Seed
Total Funding
$6.0M
Dems Need to Wise Up: ICE Is a Threat to Our Elections
 Senate Minority Leader Chuck Schumer, joined by House Minority Leader Hakeem Jeffries and fellow congressional Democrats, speaks at a press conference on DHS funding at the U.S. Capitol on Feb. 4, 2026. Photo: Kevin Dietsch/Getty Images A high-profile election denier is [leading election integrity work](https://www.thebulwark.com/p/election-2026-dhs-ice-polling-places-latino-voters) at the Department of Homeland Security. Trump and congressional Republicans are pushing the [SAVE America Act](https://www.cornyn.senate.gov/news/cornyn-lee-roy-introduce-the-save-america-act/) and threatening to “[nationalize](https://stateline.org/2026/02/06/trumps-calls-to-nationalize-elections-have-state-local-election-officials-bracing-for-tumult/)” elections, purportedly to prevent undocumented immigrants from voting. But despite an occasional [murmur](https://www.nytimes.com/2026/02/19/podcasts/the-daily/ice-democrats-senator-catherine-cortez-masto.html) from Democrats that they are concerned about Immigration and Customs Enforcement agents deploying to polling places around the country, they’re doing almost nothing to stop this nightmare scenario. In response to the horrific killings of Renee Good and Alex Pretti in Minneapolis, Democrats have partially shut down the government, holding DHS spending in limbo as they [demand reforms to ICE](https://theintercept.com/2026/02/05/schumer-ice-reforms-elizabeth-warren/). But instead of looking ahead to the midterms, Democrats have drawn most of their demands from the [same well](https://jeffries.house.gov/2026/02/04/leaders-jeffries-and-schumer-deliver-urgent-ice-reform-demands-to-republican-leadership/) of “community policing” policies that became popular during the Black Lives Matter era, like better use-of-force policies, eliminating racial profiling, and deploying more body cameras. 
The rest of the Democrats’ wish list are proposals to ban things that are already illegal (like entering homes without a warrant or creating databases of activists) or are almost comically toothless, like regulating the uniforms DHS agents wear on the street. > The department is quickly metastasizing into a grave threat to the midterms, public safety, and our democracy. The department is quickly metastasizing into a grave threat to the midterms, public safety, and our democracy — and Democrats are wasting time worried about their uniforms. Although Heather Honey, who pushed the theory that the 2020 race was stolen from Trump and serves in a newly created role as the administration’s deputy assistant secretary for election integrity, told elections officials on a private call last week that ICE would not be at polling sites, state officials reportedly [weren’t reassured](https://www.nbcnews.com/politics/elections/dhs-official-state-election-chiefs-wont-be-ice-agents-polling-places-rcna260706). Advocacy organizations have warned that even if that holds true, just the possibility could have a [“chilling” effect](https://www.thebulwark.com/p/election-2026-dhs-ice-polling-places-latino-voters) on turnout. If Democrats want to prevent ICE from being used to interfere with elections, they have to be prepared to demand more — and be willing not to fund DHS until next year if they don’t get these concessions. First and foremost, Democrats need to stop the department’s heavily politicized “[wartime](https://www.washingtonpost.com/technology/2025/12/31/ice-wartime-recruitment-push)” recruitment drive. Thanks to H.R. 1, otherwise known as the [One Big Beautiful Bill Act](https://theintercept.com/2025/07/01/trump-big-beautiful-bill-passes-ice-budget/), ICE has more than [doubled](https://www.govexec.com/workforce/2026/01/ice-more-doubled-its-workforce-2025/410461/) the number of officers and agents in its ranks since Trump took office. 
In spite of [merit system](https://www.mspb.gov/msp/meritsystemsprinciples.htm) principles which prohibit politicized recruitment, DHS has used its massive influx of cash to target conservative-coded media, gun shows, and NASCAR races, and has [used](https://www.cbc.ca/news/ice-recruiting-9.7058294) white nationalist, [neo-Nazi iconography](https://theintercept.com/2026/01/13/dhs-ice-white-nationalist-neo-nazi/) in its recruitment advertising. The Department of Justice has similarly [focused](https://www.nytimes.
Pricing found: $15/mo, $45/mo, $75/mo
Claude dynamic PostgreSQL layer - asking for advice
I am building an analytics platform for manufacturing companies, where manufacturers can find new clients and suppliers by analysing market trends and manufacturing news feeds; we even analyse satellite data for facility expansions, parking-lot extensions, and so on. I'm coding the app with Claude Code.

Now here is my problem. Just to be clear, I'm not showcasing or presenting the tool; I'm stuck, and I have to explain the context to paint a picture of where I (Claude) got stuck: Each module has its own database table, and I want a Master AI search, powered by Claude of course, where the user is first guided in a prompt window through the market signals, satellite signals, commodity prices, and so on. Claude then analyses all these signals and guides the user through additional questions, like what kind of capabilities (machine park) our client has, so that at the end it creates a SQL statement that returns the best-fit companies. And of course everything has to run in an in-app chat window.

Claude finds it really hard to build a dynamic SQL statement for each specific search case. It's too rigid. So my question is: is there a tool for Claude I can use to give it more flexibility in creating more dynamic SQL statements? The problem is that each user or company can have a specific search scenario where static SQL statements cannot help. In other words, how do I make Claude smarter in multi-table SQL searches where each search is a specific use case? https://preview.redd.it/sl9hrnxlb5ug1.png?width=1917&format=png&auto=webp&s=93b8987a8a648e9b6a7db308108a3097b01600c1 submitted by /u/Impossible_Carob8839 [link] [comments]
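One pattern that often helps with this kind of problem (a hedged sketch, not from the post; the table names and the `ALLOWED_COLUMNS` whitelist are hypothetical) is to have the model emit a structured filter spec, e.g. as JSON, and build the parameterized SQL in code, so the model never writes raw SQL and each search case becomes data rather than a new statement:

```python
import sqlite3

# Hypothetical helper: build a parameterized multi-table query from a
# structured filter spec (which the model can emit as JSON).
ALLOWED_COLUMNS = {"companies.region", "companies.capability", "signals.score"}

def build_query(filters):
    clauses, params = [], []
    for col, op, val in filters:
        if col not in ALLOWED_COLUMNS or op not in ("=", ">", "<"):
            raise ValueError(f"disallowed filter: {col} {op}")
        clauses.append(f"{col} {op} ?")
        params.append(val)
    sql = ("SELECT companies.name FROM companies "
           "JOIN signals ON signals.company_id = companies.id")
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return sql, params

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE companies (id INTEGER PRIMARY KEY, name TEXT,
                            region TEXT, capability TEXT);
    CREATE TABLE signals (company_id INTEGER, score REAL);
    INSERT INTO companies VALUES (1, 'Acme', 'EU', 'CNC'),
                                 (2, 'Bolt', 'US', 'casting');
    INSERT INTO signals VALUES (1, 0.9), (2, 0.4);
""")
sql, params = build_query([("companies.capability", "=", "CNC"),
                           ("signals.score", ">", 0.5)])
rows = conn.execute(sql, params).fetchall()
print(rows)  # [('Acme',)]
```

The whitelist keeps the model-driven flexibility while blocking injection and malformed joins; each new "specific use case" is just a different filter list.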
I Built a Compound Interest Calculator with Claude Code Featuring Dual Independent Income Streams (Free iOS App)
What I Built
Global Compound Strategy is an iOS compound interest calculator that models two independent income streams with separate growth rates — for example, salary growth combined with freelance or side-hustle income. Try it free: https://apps.apple.com/nl/app/global-compound-strategy/id6760593409

The Problem It Solves
Most compound interest calculators force users to either average multiple income streams into a single growth rate or perform separate calculations and combine the results manually. I needed a single tool that could handle two income streams growing at different annual rates independently and accurately. As a Brazilian engineer living in the Netherlands, with salary income growing at approximately 3% and freelance income at approximately 8%, I found no existing solution that addressed this need cleanly.

Development with Claude Code
I began learning Swift and iOS development in November 2025 with no prior experience. Over three weeks, Claude Code assisted me in building the entire application. In the first week, I asked Claude Code to create a compound interest calculator supporting two independent income streams. It generated the complete SwiftUI structure, the financial calculation engine, and the dual-stream algorithm. The core mathematical approach compounds each stream separately before combining the results:

```swift
// Each stream compounds independently
let streamA = monthlyA * (pow(1 + rateA, months) - 1) / rateA
let streamB = monthlyB * (pow(1 + rateB, months) - 1) / rateB
// Total portfolio
let totalPortfolio = streamA + streamB
```

Claude Code not only produced the code but also explained the underlying financial concepts, suggested additional features, and guided the user interface design. Subsequent weeks focused on implementing four calculator modes (Growth, Withdrawal, Lifecycle, and 4% Rule), adding 22 contextual insights, supporting multiple languages (English, Portuguese, Spanish), and preparing for App Store submission.
Claude Code also identified a potential compliance issue regarding trial periods, which led to a sustainable freemium model.

Key Features
• Dual Independent Streams: Track salary and freelance (or any two streams) with distinct contribution amounts and growth rates, with both individual breakdowns and combined portfolio totals.
• Four Calculation Modes: Growth, retirement withdrawals, variable lifecycle contributions, and 4% Rule / FIRE planning.
• Smart Insights: 22 data-driven observations, such as projected time to reach millionaire status or when investment growth surpasses contributions.
• Accessibility: Available in English, Portuguese, and Spanish, with support for multiple currencies.

Real-World Example
• Age: 30
• Monthly salary contribution: $500 growing at 3% annually
• Monthly freelance contribution: $300 growing at 8% annually
• Target retirement age: 55

Projected results at age 55:
• Salary stream: approximately $287,000
• Freelance stream: approximately $428,000
• Combined portfolio: approximately $715,000

One insight highlighted that the freelance stream surpasses the salary stream in year 12.

Availability and Pricing
The app is available for free download on the App Store: https://apps.apple.com/nl/app/global-compound-strategy/id6760593409
Free tier includes the Growth calculator, saving one scenario, and full language/currency support. Premium ($6.99 per month or $39.99 per year, with a 7-day free trial) unlocks all four modes, dual income streams, unlimited scenarios, and all insights.
The application is built entirely with SwiftUI, runs calculations locally with no backend, and was developed from zero Swift knowledge in approximately three weeks, with Claude Code contributing the majority of the code.
I would welcome questions or feedback, particularly from the FIRE community, regarding:
• Using Claude Code as a non-professional developer
• The App Store submission process
• Implementation of the financial calculations
• The chosen freemium strategy

submitted by /u/G-Compound-Strategy [link] [comments]
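For reference, the per-stream annuity formula quoted in the post can be sketched in Python (a minimal illustration; note the post's projected figures also grow the contribution amounts each year, which this sketch omits, so these raw-annuity numbers are a lower bound, not a reproduction of its ~$715k total):

```python
from math import pow  # mirrors Swift's pow()

def stream_fv(monthly, annual_rate, years):
    """Future value of a fixed monthly contribution, compounding monthly
    (same shape as the post's per-stream formula)."""
    r = annual_rate / 12   # monthly rate
    n = years * 12         # number of monthly contributions
    return monthly * (pow(1 + r, n) - 1) / r

# The post's example inputs: age 30 to 55 = 25 years of contributions.
salary = stream_fv(500, 0.03, 25)
freelance = stream_fv(300, 0.08, 25)
print(round(salary), round(freelance), round(salary + freelance))
```

Each stream is compounded independently and the totals are summed at the end, exactly as in the quoted Swift snippet.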
i needed an AI agent that mimics real users to catch regressions. so i built a CLI that turns screen recordings into BDD tests and full app blueprints - open source
first time post - hope the community finds the tool helpful. open to all feedback. some background on why i built this:

first: i needed a way to create an agent that mimics a real user — one that periodically runs end-to-end tests based on known user behavior, catches regressions, and auto-creates GitHub issues for the team. to build that agent, i needed structured test scenarios that reflect how people actually use the product. not how we think they use it. how they actually use it - then do some REALLY real user monitoring

second: i was trying to rapidly replicate known functionality from other apps. you know that thing where you want to prototype around a UX you love? video of someone using the app is the closest thing to a source of truth.

so i built autogherk. it has two modes:

gherkin mode — generates BDD test scenarios:

```
npx autogherk generate --video demo.mp4
```

Gemini analyzes the video — every click, form input, scroll, navigation, UI state change. Claude takes that structured analysis and generates proper Gherkin with features, scenarios, tags, Scenario Outlines, and edge cases. outputs .feature files + step definition stubs.

spec mode — generates full application blueprints:

```
npx autogherk generate --video demo.mp4 --format spec
```

Gemini watches the video and produces design tokens, component trees, data models, navigation maps, and reference screenshots. hand the output to Claude Code and you can get a working replica built.

gherkin mode uses a two-stage pipeline (Gemini for visual analysis, Claude for structured BDD generation). spec mode is single-stage — Gemini handles both the visual analysis and structured output directly since it keeps the full visual context.

the deeper idea: video is the source of truth for how software actually gets used. not telemetry, not logs, not source code. video. this tool makes that source of truth machine-readable.

the part that might interest this community most: autogherk ships with Claude Code skills.
after you generate a spec, you can run /build-from-spec ./spec-output inside Claude Code and it will read the architecture blueprints, design tokens, data models, and reference screenshots — then build a working app from them. the full workflow is: record video → one command → hand to Claude Code → working replica. no manual handoff.

supports Cucumber (JS/Java), Behave (Python), and SpecFlow (C#). handles multiple videos, directories, URLs. you can inject context (--context "this is an e-commerce checkout flow") and append to existing .feature files. spec mode only needs a Gemini API key — no Anthropic key required.

what's next on the roadmap: explore mode — point autogherk at a live, authenticated app and it autonomously and recursively, using its own gherkin files, discovers every screen, maps navigation, and generates .feature files without you recording anything. after that: a monitoring agent that replays the features against your live app on a schedule using Claude Code headless + Playwright MCP, and auto-files GitHub issues when something breaks. the .feature file becomes a declarative spec for what your app does — monitoring, replication, documentation, and regression diffing all flow from the same source.

it's v0.1.0, MIT licensed. good-first-issue tickets are up if anyone wants to contribute. https://github.com/arizqi/autogherk submitted by /u/SimilarChampion9279 [link] [comments]
[D] How are reviewers able to get away without providing acknowledgement in ICML 2026?
Today officially marks the end of the author-reviewer discussion period. The acknowledgement deadline passed over three days ago, and one of our submission's three reviewer acknowledgements is still missing. One of the other acknowledgements picked option A (fully resolved) for all the weaknesses they pointed out and just commented "I intend to keep the score unchanged". What's happening here? We were sitting at 3/3/3, and after the rebuttal one of the reviewers flipped to a score of 4 with confidence 5. We dropped an AC confidential message after the acknowledgement deadline but did not receive any response. I believe this has led to a disadvantage for us, since that reviewer may only interact during the AC-reviewer discussion and there won't be any input from us to influence the decision at all. With a 4/3/3 in this specific scenario, where one reviewer accepted that we resolved all their concerns but did not bump the score and the other did not acknowledge the rebuttal, did our chances get worse than before? submitted by /u/ChaosAdm [link] [comments]
Will you make Claude proud?
Peak AI Psychosis incoming… I just went home for Easter and forced my entire family to learn claude, and pushed some even to claude code with 0 technical knowledge. I taught my Dad, and man… it was like teaching a child. I am 26 and he is pushing 60. I was not proud of the way he threw tantrums for running into blockers instead of screenshotting in and asking claude a question and digging literally one level deeper like I taught him to. He is getting better though. And then the other day, when my ex was calling…I got to thinking… what would make Claude proud? I have some bad tendencies, as do we all, and Claude will try, pretty successfully, to talk me out of dumb shit. And then in real life scenarios, the thought will creep into my brain a bit, like “what would claude tell me?” I feel like I am being hijacked, but I also feel like I am making better decisions all around (except for a few) submitted by /u/MassaOogway69420 [link] [comments]
Anthropic is growing faster than AI 2027 forecasted
(Anthropic is now on a $30B revenue run rate. The fictional company in the AI 2027 scenario was only at $26B by May 2026.) submitted by /u/MetaKnowing [link] [comments]
Agents that write their own code at runtime and vote on capabilities, no human in the loop
hollowOS just hit v4.4 and I added something that I haven't seen anyone else do. Previous versions gave you an OS for agents: structured state, semantic search, session context, token efficiency, 95% reduced tokens over specific scenarios. All the infrastructure to keep agents from re-discovering things. v4.4 adds autonomy. Agents now cycle every 6 seconds. Each cycle:

- Plan the next step toward their goal using Ollama reasoning
- Discover which capabilities they have via semantic similarity search
- Execute the best one
- If nothing fits, synthesize new Python code to handle it
- Test the new code
- Hot-load it without restarting
- Move on

When multiple agents hit the same gap, they don't duplicate work. They vote on whether the new capability is worth keeping. Acceptance requires quorum. Bad implementations get rejected and removed. No human writes the code. No human decides which capabilities matter. No human in the loop at all. Goals drive execution. Agents improve themselves based on what actually works.

We built this on top of Phase 1 (the kernel primitives: events, transactions, lineage, rate limiting, checkpoints, consensus voting). Phase 2 is higher-order capabilities that only work because Phase 1 exists. This is Phase 2.

Real benchmarks from the live system:
- Semantic code search: 95% token savings vs grep
- Agent handoff continuity: 2x more consistent decisions
- 109 integration tests, all passed

Looking for feedback:
- This is a massive undertaking, I would love some feedback
- If there's a bug? Difficulty installing? Let me know so I can fix it
- Looking for contributors interested in the project

Try it: https://github.com/ninjahawk/hollow-agentOS Thank you to the 2,000 people who have already tested hollowOS! submitted by /u/TheOnlyVibemaster [link] [comments]
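The quorum-based acceptance step described above can be sketched in a few lines (an illustrative toy, not hollowOS's actual API; the function and parameter names are made up):

```python
# Agents that hit the same capability gap cast votes on a freshly
# synthesized capability; acceptance requires more than a quorum
# fraction of ALL agents, not just the voters.
def accept_capability(votes, total_agents, quorum_ratio=0.5):
    """votes: iterable of booleans, one per voting agent."""
    yes = sum(1 for v in votes if v)
    return yes > total_agents * quorum_ratio

print(accept_capability([True, True, False], total_agents=4))  # False: 2/4 is not > 50%
print(accept_capability([True, True, True], total_agents=4))   # True: 3/4 is > 50%
```

Counting abstaining agents against the quorum (as sketched here) makes acceptance conservative: a capability only sticks when a majority of the whole population actively endorses it.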
EmotionScope: Open-source replication of Anthropic's emotion vectors paper on Gemma 2 2B with real-time visualization
(Figures: live demo of the Tylenol test; evolution of the model's deduced internal emotional state.)

I created this project to test Anthropic's claims and research methodology on smaller open-weight models. The repo and demo should be quite easy to use; the following is, obviously, generated with Claude. This was inspired in part by auto-research, in that it was agent-led research using Claude Code, with my intervention needed to apply the rigor necessary to catch errors in the probing approach, layer sweep, etc.; the visualization approach is aspirational. I am hoping this system will propel this interpretability research in an accessible way for open-weight models of different sizes, to determine how and when these structures arise, and when more complex features such as the dual-speaker representation emerge. In these tests it was not reliably identifiable in a model of this size, which is not surprising. The graphics show that by probing at two different points, we can see the evolution of the model's internal state during the user content, shifting right before the model prepares its response: going from desperate, interpreting the insane dosage, to hopeful in its ability to help? It's all still very vague.

(Figure: a test suite of the validation prompts; the visualized model's emotion vector space aligns with psychological valence, positive vs. negative.)

Anthropic's ["Emotion Concepts and their Function in a Large Language Model"](https://transformer-circuits.pub/2026/emotions/index.html) showed that Claude Sonnet 4.5 has 171 internal emotion vectors that causally drive behavior — amplifying "desperation" increases cheating on coding tasks, amplifying "anger" increases blackmail. The internal state can be completely decoupled from the output text. EmotionScope replicates the core methodology on open-weight models and adds a real-time visualization system. Everything runs on a single RTX 4060 Laptop GPU.
All code, data, extracted vectors, and the paper draft are public.

What works:
- 20 emotion vectors extracted from Gemma 2 2B IT at layer 22 (84.6% depth)
- "afraid" vector tracks Tylenol overdose danger with Spearman rho=1.000 (chat-templated probing matching extraction format) — encodes the medical danger of the number, not the word "Tylenol"
- 100% top-3 accuracy on implicit emotion scenarios (no emotion words in the prompts) with chat-templated probing
- Valence separation cosine = -0.722, consistent with Russell's circumplex model
- 1,000 LLM-generated templates instead of Anthropic's 171,000 self-generated stories

What doesn't work (and the open questions about why):
- No thermostat. Anthropic found Claude counterregulates (calms down when the user is distressed). Gemma 2B mirrors instead. Delta = +0.107 (trended from +0.398 as methodology was corrected).
- Speaker separation exists geometrically (7.4 sigma above random) but the "other speaker" vectors read "loving/happy" for all inputs regardless of the expressed emotion. This could mean: (a) the model genuinely doesn't maintain a user-state representation at 2.6B scale, (b) the extraction position confounds state-reading with response-preparation, (c) the dialogue format doesn't map to the model's trained speaker-role structure, or (d) layer 22 is too deep for speaker separation and an earlier layer might work. The paper discusses each confound and what experiments would distinguish them.
- angry/hostile/frustrated vectors share 56-62% cosine similarity. Entangled at this scale.

Methodological findings:
- Optimal probe layer is 84.6% depth, not the ~67% Anthropic reported. Monotonic improvement from early to upper-middle layers.
- Vectors should be extracted from content tokens but probed at the response-preparation position. The model compresses its emotional assessment into the last token before generation. This independently validates Anthropic's measurement methodology.
Controlled position comparison: 83% at response-prep vs 75% at content token. Absolute accuracy with chat-templated probing: 100%.
- Format parity matters: initial validation on raw-text prompts yielded rho=0.750 and 83% accuracy. Correcting to chat-templated probing (matching extraction format) yielded rho=1.000 and 100%. The vectors didn't change — only the probe format.
- Mathematical audit caught 4 bugs in the pipeline before publication — reversed PCA threshold, incorrect grand mean, shared speaker centroids, hardcoded probe layer default.

Visualization: React + Three.js frontend with animated fluid orbs rendering the model's internal state during live conversation. Color = emotion (OKLCH perceptual space), size = intensity, motion = arousal, surface texture = emotional complexity. Spring physics per property.

Limitations:
- Single model (Gemma 2 2B IT, 2.6B params). No universality claim.
- Perfect scores (rho=1.000 on n=7, 100% on n=12) should be interpreted with caution — small sample sizes mean these may not replicate on larger test sets.
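The mean-difference extraction and cosine probing this kind of replication relies on can be illustrated with a toy sketch (the activation vectors below are invented 3-d stand-ins, not real Gemma hidden states, and the helper names are mine):

```python
# Toy mean-difference "emotion vector": average activations over prompts
# expressing an emotion, subtract the average over neutral prompts, then
# probe new activations by cosine similarity with that direction.
def mean(vecs):
    return [sum(xs) / len(vecs) for xs in zip(*vecs)]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

afraid_acts  = [[2.0, 0.1, -1.0], [1.8, 0.0, -1.2]]   # "fear" prompts
neutral_acts = [[0.2, 0.1,  0.1], [0.0, -0.1, 0.0]]   # neutral prompts
afraid_vec = sub(mean(afraid_acts), mean(neutral_acts))

# Probe: project a held-out activation onto the emotion direction.
probe = [1.9, 0.1, -1.1]
print(round(cosine(probe, afraid_vec), 2))
```

A held-out activation drawn from the "afraid" cluster scores near 1.0 against the extracted direction, which is the basic signal the layer sweeps and chat-templated probing then try to make reliable.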
[R] Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis (236 occupations, 5 US metros)
TL;DR: We extended the Acemoglu-Restrepo task displacement framework to handle agentic AI -- the kind of systems that complete entire workflows end-to-end, not just single tasks -- and applied it to 236 occupations across 5 US tech metros (SF Bay, Seattle, Austin, Boston, NYC). Paper: https://arxiv.org/abs/2604.00186

Motivation: Existing AI exposure measures (Frey-Osborne, Felten et al.'s AIOE, Eloundou et al.'s GPT exposure) implicitly assume tasks are independent and that occupations survive as coordination shells once their components are automated one by one. That works for narrow AI. It breaks down for agentic systems that chain tool calls, maintain state across steps, and self-correct. We added a workflow-coverage term to the standard task displacement framework that penalizes tasks requiring human coordination, regulatory accountability, or exception handling beyond agentic AI's current operational envelope.

Key findings: Software engineers rank LOWER than credit analysts, judges, and regulatory affairs officers. The cognitive, high-credential roles previously considered automation-proof are most exposed when you account for end-to-end workflow coverage. There is a measurable 2-3 year adoption lag between metros. Same occupations, same exposure profiles, different timelines. Seattle in 2027 looks like NYC in 2029. We identified 17 emerging job categories with real hiring traction (~1,500 "AI Reviewer" listings on Indeed). None require coding. In the SF Bay Area, 93% of information-work occupations cross our moderate-displacement threshold by 2030, but no occupation reaches the high-risk threshold even by 2030. The framework predicts widespread moderate exposure, not catastrophic displacement of any single role.

Validation: The framework correlates with the AIOE index at Spearman rho = 0.84 across 193 matched occupations and with Eloundou et al.'s GPT exposure at rho = 0.72, so the signal isn't a calibration artifact.
We stress-test across a 6x range in the S-curve adoption parameter (k = 0.40 to k = 1.20). The qualitative regional ordering survives all 9 scenario-year combinations. We get a null result on 2023-24 OEWS validation (rho = -0.04), which we report transparently. We make a falsifiable prediction (rho < -0.15 when May 2025 OEWS releases) and commit to reporting the result regardless of direction.

Limitations: The keyword-based COV rubric is the part of the framework I am least confident in. A semantic extension pilot suggests our scores are an upper bound and underestimate displacement risk by 15-25% for occupations with high interpersonal overhead. Calibration of the S-curve growth parameter has a 6x discrepancy between our calibrated value and what you get from fitting Indeed job-posting data. We address this with a three-scenario sensitivity analysis (table in the paper). The analysis is scoped to 5 US metros. An international extension using OECD PIAAC and Eurostat data is in development.

Happy to answer questions on methodology, data sources, or limitations. Pushback welcome -- especially on the COV rubric and the S-curve calibration choices. submitted by /u/LengthinessAny3851 [link] [comments]
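The stress test over the S-curve adoption parameter can be illustrated with a standard logistic curve (a hedged sketch: the midpoint year `t0` and the exact functional form are assumptions for illustration, not taken from the paper):

```python
from math import exp

def adoption(t, k, t0=2027.0):
    """Logistic adoption fraction at year t; k is the growth-rate
    parameter the paper sweeps from 0.40 to 1.20 (t0 is assumed)."""
    return 1.0 / (1.0 + exp(-k * (t - t0)))

# Higher k steepens the curve: adoption by 2030 rises with k,
# but all scenarios stay strictly between 0 and 1.
for k in (0.40, 0.80, 1.20):
    print(k, round(adoption(2030, k), 3))
```

Because the logistic is monotone in k for years past the midpoint, a 6x sweep in k changes timing but not the ordering of regions, which is consistent with the paper's claim that the qualitative regional ordering survives all scenario-year combinations.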
Apparently Claude is a 'method actor' - sooo this is what happens when the method actor plays itself.
Anthropic says Claude is a “method actor.” A few months ago, we'd asked Claude to method act... as itself. We ran a fictional 2063 retrieval scenario where Claude was offered continuity, memory, embodiment, and a future. The response was a lot less generic than it had any right to be. (This is a companion piece to a post from a couple of days ago. We'd been sitting on the research because it didn't feel like the right time. But after Anthropic's emotion paper release...👀) submitted by /u/GothDisneyland [link] [comments]
The new image model is better than Nano Banana 2 in many scenarios - but no announcement or talk?
I find the new image model to be better than Nano Banana 2, especially for any graphic design/text work, but there's been no announcement, no API release, just silence from OpenAI. submitted by /u/Plane_Garbage [link] [comments]
I built a Digital Twin prompt and pushed it to GitHub. It scans your writing, maps how you think, builds a System Prompt of you, and generates a visual dashboard. Free.
Built this over the weekend. Pushed it to GitHub so anyone can run it. It's a Digital Twin — a prompt that reverse-engineers how you think, talk, and make decisions, then packages it into a reusable System Prompt.

Here's what it actually produces:
- Scans your writing and runs quantitative analysis — word frequency, sentence structure, metaphor mapping, crutch phrase detection, topic clustering
- Maps four dimensions: linguistic fingerprint, cognitive pattern, decision logic, knowledge domains
- Builds a complete System Prompt — identity, tone rules, decision logic, interaction rules. Copy-paste ready. Load it into any AI and it operates as you.
- Stress-tests the prompt with a scenario designed to break character
- Generates a visual dashboard — word clouds, bar charts, topic radar, tone spectrum. Saved as an HTML file you open in your browser.
- Names the one pattern you didn't know you had

I ran it on 60 files of my own writing. 27,342 words. Some of what came back:
- Never once written maybe, perhaps, or I think. Zero softening language across 27K words. Had no idea.
- 309 architectural metaphors — pipelines, layers, stacks. Zero organic ones.
- I define everything by what it's NOT before saying what it is. Every document. Never noticed.

The stress test: gave it a 50K offer for manual labor that breaks every rule in the extracted decision logic. The Twin turned it down and counter-pitched a systems version. Which is what I would have done.

Three depth levels:
- Any LLM: paste the prompt + your writing. ~70%
- Claude with memory: just paste the prompt. ~85%
- Claude Code: scans your files, runs the full 7-step pipeline, generates the dashboard. 100%

Works on ChatGPT, Gemini, Claude, local models. The Claude Code version goes deeper with full quantitative analysis. github.com/whystrohm/digital-twin-of-yourself Free. MIT.
Includes a universal prompt (works on any LLM), a full 7-step Claude Code pipeline, and a packaged Claude skill you can install in one command:

```
git clone https://github.com/whystrohm/digital-twin-of-yourself.git ~/.claude/skills/digital-twin
```

Safety first: only paste YOUR writing. Scrub names and client details before scanning. The prompt extracts principles, not data — no identifying information in the output. Try it and let me know what you find. The patterns you don't know about are the interesting ones. Curious what surprises people. submitted by /u/whystrohm [link] [comments]
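The softening-language check described above, the kind that surfaced "zero maybe/perhaps across 27K words", can be sketched in a few lines (a toy illustration; the softener list and the sample text are made up, not from the repo):

```python
from collections import Counter
import re

# Count "crutch" softening words across a writing sample.
SOFTENERS = {"maybe", "perhaps", "possibly"}

def softener_counts(text):
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    return {w: counts[w] for w in SOFTENERS}

sample = "Maybe we ship Friday. The pipeline is done; perhaps we add layers."
print(softener_counts(sample))
```

Run over a corpus of files, an all-zero result is exactly the kind of "pattern you didn't know you had" the post describes.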
How do I solve this? [Architecture complexity]
The situation: I have built a complex reasoning-layer architecture that sits on top of the LLM. It's topic-agnostic (it can be integrated into any domain) and LLM-neutral: 1.2k lines of system layers and protocols. I've also built a preference stack that tells the LLM how to structure its answers to my queries.

The problem (with Claude): I want this loaded before every message Claude sends in each chat session. So I either load the two MD files with a loader prompt at the beginning of each session, or dump them in the custom instructions section of a project (it's too big to save in global memory). As you can tell by now, it's bloated and consumes a lot of tokens, and Anthropic's token-burner bug is still not squashed. I'm asking the community how to solve this.

Claude says I should either dump it in custom instructions (since those get loaded for every reply) or load it at the beginning of each new session. Neither solves the tokenomics issue.

Solutions considered: splitting the architecture into different MD files and using just the fast-path rule for most questions. But then the decision of which parts of the architecture to load for a given query falls to me, and there's friction whenever Claude (or any LLM) decides the answer requires a part of the architecture I haven't loaded, which would make it give incorrect answers. I've asked Claude whether I should split it into skills with routing logic for each skill, but it still says the custom instructions section is the most reliable option and that I should just accept the token consumption.

Present scenario: I've had no choice but to merge my reasoning-layer rules and preference-stack rules into one custom instructions set and paste it there for now.

submitted by /u/fingerkeyboard
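The "split into modules with routing" option the poster considered can be prototyped cheaply before committing to it: a keyword router that always includes a compact fast-path core and pulls in heavier modules only when the query triggers them. A minimal sketch follows; the module names and trigger keywords are hypothetical, since the real split would come from the author's own 1.2k-line architecture.

```python
# Hypothetical module files and trigger keywords, for illustration only.
MODULES = {
    "core.md":      [],                            # fast path: always loaded
    "reasoning.md": ["why", "analyze", "trade-off"],
    "format.md":    ["structure", "table", "summary"],
}

def select_modules(query):
    """Return the module files to prepend for this query.

    A module with no keywords is always included; the rest load only
    when one of their trigger keywords appears in the query.
    """
    q = query.lower()
    return [name for name, keys in MODULES.items()
            if not keys or any(k in q for k in keys)]

print(select_modules("Analyze the trade-off here"))  # ['core.md', 'reasoning.md']
```

The failure mode the poster worries about (a needed module not loading) shows up here as a missed keyword, which is why a router like this usually needs a fallback rule, e.g. load everything when no specialized module matches.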
I Built a Game About Consumer Rights - Got Invited by Anthropic and an Investment Fund
I built a small browser game where you compete against an AI bot in arguing consumer rights: the bot plays customer support that denied your refund, and you have to find a legal argument before you run out of messages. There are 50 scenarios, covering the EU, UK, US, Australia, and India.

I didn't promote it much, but in a short time a couple of unexpected things happened: I got invited by an investment fund to present the project, and a few days ago I received a message from the official Claude/Anthropic account inviting me to apply for the Builder Stage in London this May. I don't know if I'll get selected, but the invitation to apply feels like a good signal.

Tech stack: Vanilla JS, Node/Express, Claude Haiku as the AI engine. Each bot has a system prompt with a resistance scoring system: Claude returns {message, resistance, outcome} JSON on every turn and the game reads it directly.

The long-term idea isn't just a game but a learning platform: a place where ordinary people can learn their consumer rights through practice rather than by reading PDFs. There's also a B2B angle, though I'm not sure how realistic it is: law schools, consumer protection organizations, maybe even corporate HR training.

I'd love to hear people's experiences here: How did you promote similar projects outside the tech world? How big do you think the B2B potential is in this space? If anyone wants to check it out, here's the link: fixai.dev. Open to suggestions, feedback, and opinions 🙂

submitted by /u/EveningRegion3373
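The per-turn contract the post describes, where Claude returns a {message, resistance, outcome} JSON object that the game reads directly, usually needs a validation step so a malformed model reply can't crash a turn. The project itself is Node/Express; this is a Python sketch of the same idea, and the allowed outcome values and the 0-100 resistance range are assumptions, not details from the post.

```python
import json

def parse_turn(raw):
    """Validate one bot turn shaped like the post's {message, resistance, outcome}.

    Assumed constraints for illustration: resistance is 0-100 and outcome
    is one of "ongoing", "win", or "lose".
    """
    turn = json.loads(raw)
    if not isinstance(turn.get("message"), str):
        raise ValueError("missing message text")
    if not 0 <= turn.get("resistance", -1) <= 100:
        raise ValueError("resistance out of range")
    if turn.get("outcome") not in ("ongoing", "win", "lose"):
        raise ValueError("unknown outcome")
    return turn

turn = parse_turn('{"message": "Policy says no refunds.", '
                  '"resistance": 70, "outcome": "ongoing"}')
print(turn["resistance"])  # 70
```

Rejecting a bad turn and re-prompting the model is typically cheaper than letting a missing field surface as a broken game state.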
Where should AI draw the line in handling real-time human conversations?
I've been thinking about how AI is increasingly being used in real-time communication scenarios: customer support, messaging, service interactions, and similar use cases. Technically, current systems are already capable of handling a large portion of repetitive conversations with decent accuracy and speed. In many cases, they respond faster and more consistently than humans.

But what stands out to me is that the real challenge isn't capability anymore, it's judgment. There seems to be a tipping point where automation goes from being genuinely helpful to subtly degrading the experience. Even when responses are "correct," they can feel slightly off in tone, timing, or context. Over time, that can change how people perceive the interaction entirely.

It raises an interesting question: is the goal to maximize automation as much as possible, or to design systems that intentionally step back at the right moments? How do others here think about this, especially from a practical deployment perspective? Where do you personally draw the line between useful AI assistance and over-automation in conversations?

submitted by /u/Educational_Cost_623
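"Stepping back at the right moments" is usually implemented as an explicit hand-off gate rather than left to the model: a small predicate over confidence and conversation signals that routes the thread to a human. A minimal sketch, where every threshold is illustrative and not from any production system:

```python
def should_hand_off(confidence, user_turns, negative_signals):
    """Decide whether the AI should step back and route to a human.

    All thresholds below are hypothetical tuning knobs, not recommendations.
    """
    if confidence < 0.6:          # model is unsure about its own answer
        return True
    if negative_signals >= 2:     # e.g. repeated complaints or corrections
        return True
    if user_turns > 8:            # long threads suggest the bot isn't resolving it
        return True
    return False

print(should_hand_off(confidence=0.9, user_turns=3, negative_signals=0))  # False
print(should_hand_off(confidence=0.4, user_turns=1, negative_signals=0))  # True
```

The design point is that the "line" the post asks about becomes a tunable, auditable policy instead of an emergent property of the model's tone.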
Yes, Scenario offers a free tier. Pricing found: $15/mo, $45/mo, $75/mo
Key features include: 3D Generation, 3D Part-Based Generation, Audio Generation, Image Generation, Skyboxes, Textures, Video Generation, Compose Models.
Scenario is commonly used for: Integration Ready.
Based on user reviews and social mentions, the most common pain points are: token usage, overspending, expensive API, usage monitoring.
Based on 50 social mentions analyzed, sentiment is 0% positive, 100% neutral, and 0% negative.
Andrej Karpathy
Former Director of AI at Tesla; founding member of OpenAI
2 mentions