Based on the social mentions provided, I cannot find any specific reviews or discussions about a software tool called "Persona." The social mentions cover various topics including FTC actions against dating apps, AI/ML tools like Oxyde and OpenClaw, political content, and other unrelated software products, but none appear to be discussing a product specifically named "Persona." Without relevant user feedback about the Persona software tool, I cannot provide a meaningful summary of user sentiment, strengths, complaints, or pricing opinions.
Mentions (30d): 8 (1 this week)
Reviews: 0
Platforms: 7
Sentiment: 0% (0 positive)
Industry: information technology & services
Employees: 590
Funding Stage: Series D
Total Funding: $417.5M
FTC action against Match and OkCupid for deceiving users, sharing personal data
https://www.reuters.com/world/match-group-settles-us-ftc-claims-it-illegally-shared-okcupid-user-data-2026-03-30/
Anyone else in a non-dev role accidentally become the AI tooling person for their team?
I’m in corp finance at a midsize company, and I’ve spent the last couple months going deep on Claude Code, Cowork, Claude Desktop, skills, agents, MCPs, 3rd party tools, patterns, context and harness engineering, etc etc. It’s been genuinely exciting. Haven’t learned this much or seen such opportunity since learning what a pivot table was or how to use power query. It’s also made me feel like I live in a collapsing ontology markdown sea where every object has 3 names, 5 overlapping use cases, and one doc page that contradicts the other 4. And everything is definitely a graph and subsequently definitely not a graph in a loop. Speak up other non-dev folks!

- Multiple hats: How do you separate builder mode from user mode when you’re the same person doing both?
- Agentic capability overlap: skills vs MCPs vs agents vs software? I.e. skills can hold knowledge and execute scripts; MCPs retrieve knowledge from elsewhere and execute scripts themselves. Python frameworks seem easily accessible for an all-in-one department solution. But then you own it. Hell, MCPs can be apps now. They can play piano too.
- Why does it feel so hard to bridge the major agent frameworks and the agents SDK (where all the hype is) to the Claude Code or Desktop runtime experience? Every concept is applicable within the runtime and on top of it.
- When do you put business logic in Claude things vs shared traditional workspaces? Any opinions on collab and governing tools and business logic with teammates?
- Anyone else confused and disappointed to find that Cowork has nothing to do with helping your coworkers and is just an agent SDK instance with a nice GUI to make non-dev people feel nice and safe?
- And to that end, anyone actually deploying team-empowering, multi-surface automation across Claude Code / Cowork / Desktop / Excel / PowerPoint / SharePoint, or mostly just building personal productivity tools?
- If you’re the only builder on a small team, are you bringing people along or just translating all this back to them yourself?

Also very curious about practical setup:

- repo/worktree/projects for non-dev and dev work?
- monorepo vs separate repos, especially across personas
- How much of this ends up being markdown/config vs actual code?

Would love to hear from people doing this for real, especially outside engineering. And maybe simultaneously would love to hear devs point out any obvious unlocks. Thanks!

submitted by /u/S_F_A
My Claude.md file
This is my Claude.md file; it is the same information for Gemini.md, as I use Claude Max and Gemini Ultra.

# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

**Atlas UX** is a full-stack AI receptionist platform for trade businesses (plumbers, salons, HVAC). Lucy answers calls 24/7, books appointments, sends SMS confirmations, and notifies via Slack — for $99/mo. It runs as a web SPA and Electron desktop app, deployed on AWS Lightsail. The project is in Beta with built-in approval workflows and safety guardrails.

## Commands

### Frontend (root directory)

```bash
npm run dev              # Vite dev server at localhost:5173
npm run build            # Production build to ./dist
npm run preview          # Preview production build
npm run electron:dev     # Run Electron desktop app
npm run electron:build   # Build Electron app
```

### Backend (cd backend/)

```bash
npm run dev            # tsx watch mode (auto-recompile)
npm run build          # tsc compile to ./dist
npm run start          # Start Fastify server (port 8787)
npm run worker:engine  # Run AI orchestration loop
npm run worker:email   # Run email sender worker
```

### Database

```bash
docker-compose -f backend/docker-compose.yml up  # Local PostgreSQL 16
npx prisma migrate dev   # Run migrations
npx prisma studio        # DB GUI
npx prisma db seed       # Seed database
```

### Knowledge Base

```bash
cd backend && npm run kb:ingest-agents  # Ingest agent docs
cd backend && npm run kb:chunk-docs     # Chunk KB documents
```

## Architecture

### Directory Structure

- `src/` — React 18 frontend (Vite + TypeScript + Tailwind CSS)
  - `components/` — Feature components (40+, often 10–70KB each)
  - `pages/` — Public-facing pages (Landing, Blog, Privacy, Terms, Store)
  - `lib/` — Client utilities (`api.ts`, `activeTenant.tsx` context)
  - `core/` — Client-side domain logic (agents, audit, exec, SGL)
  - `config/` — Email maps, AI personality config
  - `routes.ts` — All app routes (HashRouter-based)
- `backend/src/` — Fastify 5 + TypeScript backend
  - `routes/` — 30+ route files, all mounted under `/v1`
  - `core/engine/` — Main AI orchestration engine
  - `plugins/` — Fastify plugins: `authPlugin`, `tenantPlugin`, `auditPlugin`, `csrfPlugin`, `tenantRateLimit`
  - `domain/` — Business domain logic (audit, content, ledger)
  - `services/` — Service layer (`elevenlabs.ts`, `credentialResolver.ts`, etc.)
  - `tools/` — Tool integrations (Outlook, Slack)
  - `workers/` — `engineLoop.ts` (ticks every 5s), `emailSender.ts`
  - `jobs/` — Database-backed job queue
  - `lib/encryption.ts` — AES-256-GCM encryption for stored credentials
  - `lib/webSearch.ts` — Multi-provider web search (You.com, Brave, Exa, Tavily, SerpAPI) with randomized rotation
  - `ai.ts` — AI provider setup (OpenAI, DeepSeek, OpenRouter, Cerebras)
  - `env.ts` — All environment variable definitions
- `backend/prisma/` — Prisma schema (30KB+) and migrations
- `electron/` — Electron main process and preload
- `Agents/` — Agent configurations and policies
  - `policies/` — SGL.md (System Governance Language DSL), EXECUTION_CONSTITUTION.md
  - `workflows/` — Predefined workflow definitions

### Key Architectural Patterns

**Multi-Tenancy:** Every DB table has a `tenant_id` FK. The backend's `tenantPlugin` extracts `x-tenant-id` from request headers.

**Authentication:** JWT-based via `authPlugin.ts` (HS256, issuer/audience validated). The frontend sends the token in the Authorization header. Revoked tokens are checked against a `revokedToken` table (fail-closed). Expired revoked tokens are pruned daily.

**CSRF Protection:** DB-backed synchronizer token pattern via `csrfPlugin.ts`. Tokens are issued on mutating responses, stored in `oauth_state` with a 1-hour TTL, and validated on all state-changing requests. Webhook/callback endpoints are exempt (see `SKIP_PREFIXES` in the plugin).

**Audit Trail:** All mutations must be logged to the `audit_log` table via `auditPlugin`. Successful GETs and health/polling endpoints are skipped to reduce noise. On DB write failure, audit events fall back to stderr (never lost). Hash chain integrity (SOC 2 CC7.2) via `lib/auditChain.ts`.

**Job System:** Async work is queued to the `jobs` DB table (statuses: queued → running → completed/failed). The engine loop picks up jobs periodically.

**Engine Loop:** `workers/engineLoop.ts` is a separate Node process that ticks every `ENGINE_TICK_INTERVAL_MS` (default 5000ms). It handles the orchestration of autonomous agent actions.

**AI Agents:** Named agents (Atlas=CEO, Binky=CRO, etc.) each have their own email accounts and role definitions. Agent behavior is governed by SGL policies.

**Decisions/Approval Workflow:** High-risk actions (recurring charges, spend above `AUTO_SPEND_LIMIT_USD`, risk tier ≥ 2) require a `decision_memo` approval before execution.

**Frontend Routing:** Uses `HashRouter` from React Router v7. All routes are defined in `src/routes.ts`.

**Code Splitting:** Vite config splits chunks into `react-vendor`, `router`, `ui-vendor`, `charts`.

**ElevenLabs Voice Agents:** Lucy's
I tested 120 Claude prompt patterns over 3 months — here's what actually works
Last year I started noticing that Claude responded very differently depending on small prefixes I'd add to prompts — things like /ghost, L99, OODA, PERSONA, /noyap. None of them are official Anthropic features. They're conventions the community has converged on, and Claude consistently recognizes a lot of them.

So I started a list. Then I started testing them properly. Then I started keeping notes on which ones actually changed Claude's behavior in measurable ways, which were placebo, and which ones combined into something more useful than the sum of their parts. 3 months later I have 120 patterns I can vouch for. A few highlights:

→ L99 — Claude commits to an opinion instead of hedging. Reduces "it depends on your situation" non-answers, especially for technical decisions.
→ /ghost — strips the writing patterns AI tools tend to fall into (em-dashes, "I hope this helps", balanced sentence pairs). Output reads more like a human first-draft than a polished AI response.
→ OODA — Observe/Orient/Decide/Act framework. Best for incident-response style questions where you need a runbook, not a discussion.
→ PERSONA — but the specificity matters a lot. "Senior DBA at Stripe with 15 years of Postgres experience, skeptical of ORMs" produces wildly different output than "act like a database expert."
→ /noyap — pure answer mode. Skips the "great question" preamble and jumps straight to the answer.
→ ULTRATHINK — pushes Claude into its longest, most reasoned-through responses. Useful for high-stakes decisions, wasted on trivial questions.
→ /skeptic — instead of answering your question, Claude challenges the premise first. Catches the "wrong question" problem before you waste time on the wrong answer.
→ HARDMODE — banishes "it depends" and "consider both options". Forces Claude to actually pick.

The full annotated list is here: https://clskills.in/prompts

A few takeaways from the testing:

- Specific personas work way better than generic ones. "Senior backend engineer at a fintech, three deploys away from a bonus" beats "act like an engineer" by a huge margin.
- These patterns stack. Combining /punch + /trim + /raw on a 4-paragraph rant produces a clean Slack message without losing any meaning. Worth experimenting with combinations.
- Most of the "thinking depth" patterns (L99, ULTRATHINK, /deepthink) only justify their cost on decisions you'd actually lose sleep over. They're slower and don't help on simple questions.
- /ghost is the most polarizing — some people swear by it, others say it ruins the writing voice they actually want.

What patterns have you found that work well for you? Curious if anyone has discovered things I haven't tested yet — I'm always adding new ones to the list.

submitted by /u/AIMadesy
Claude Code was costing me $100/month extra because of how CLAUDE.md works
If you use personas in CLAUDE.md, you're probably burning tokens you don't need to. Claude Code re-injects your entire CLAUDE.md on every single message turn. 20 personas = ~5,000 tokens wasted before you type a single word. At heavy usage that's $60-120/month just in overhead.

I built a fix and open sourced it. claude-agent-personas lazy-loads personas instead of dumping everything upfront. It detects what you're working on and loads only the matching expert. Everyone else stays home. Token usage dropped from ~5,000 to ~350 per turn. 72 bundled personas.

One command to install: npx claude-agent-personas init

Your existing CLAUDE.md is untouched. Debug mode shows you exactly which persona would load before you send: npx claude-agent-personas debug "why is my postgres query slow"

GitHub: https://github.com/D-Ankita/Claude-Agents-Personas
npm: https://www.npmjs.com/package/claude-agent-personas

submitted by /u/Ankita_1312
Build Your Own Alex Hormozi Brain Agent (anyone with lots of publicly available content) using a Claude Project
I bought the books. Watched the videos. Still wanted more, especially after he talked about the agent he created. All that material is publicly available. Enough to build my own Alex Hormozi Brain Agent? "Hey Jules, how about it?" Jules is my AI coding assistant (Claude Code). Jules ran off and grabbed transcripts of videos, text of books, guest podcasts — whatever was available online — then turned that into files I uploaded to a Claude Project so I can chat through Claude with Alex Hormozi.

Here's what Jules found:

- 99 long-form YouTube video transcripts
- 3 complete audiobook transcripts
- 15 guest podcast transcripts
- X threads

What I Did in Four Phases

Phase 1 maps the full source landscape: YouTube channel (4,754 videos), The Game podcast (~900+ episodes), three books, guest podcast appearances, X/Twitter. Figure out what's worth downloading before you start.

Phase 2 downloads and converts. Top 100 longest video transcripts, full audiobook transcripts for all three books, 15 guest podcast transcripts from the highest-view-count appearances, and whatever X/Twitter content the API will give you.

Phase 3 runs voice pattern analysis. Sentence structure, reasoning skeleton, core frameworks, teaching style, verbal signatures. This is where the persona takes shape.

Phase 4 builds the system prompt and optimizes the knowledge base to fit within Claude Projects' limits. Then deploy.

Phase 1: Inventory

The @AlexHormozi YouTube channel has 4,754 videos. That number is misleading. 4,246 of those are Shorts (under 60 seconds or no duration metadata). Filter those out and you have 508 full-length videos. That's the real content library. Beyond YouTube, the main sources worth pursuing:

- The Game podcast (~900+ episodes). His primary long-form output. The audiobooks for all three books are available free on the podcast and YouTube.
- Guest podcast appearances. DOAC, Impact Theory, School of Greatness, Modern Wisdom, Danny Miranda. Hosts push him off-script and into territory he doesn't cover in his own content. High value per byte.
- X/Twitter threads. Compressed, punchy formulations of his frameworks. Different texture than the long-form material.
- Skool community. Behind a login wall. Low ROI for this project.
- Acquisition.com. No blog. Courses are paywalled. Skip.

Phase 2: Collect

YouTube Transcripts

The first scrape of the YouTube channel only returned 494 videos. The channel has 4,754. The scraper was pulling from the /videos tab, which doesn't surface the full library. Re-running against the full channel URL (@AlexHormozi) returned everything. Easy to miss, significant difference.

After filtering Shorts: 508 full-length videos. I downloaded auto-generated captions for the top 100 longest videos (sorted by duration, so the meatiest content came first). Auto-generated captions from YouTube come as SRT files with timestamps, line numbers, and duplicate lines. Converting those to clean readable text required stripping all the formatting artifacts and deduplicating language variants (English vs English-Original). Result: 99 transcripts. A few livestreams had no captions available.

Audiobook Transcripts

All three Hormozi books have full audiobook uploads on YouTube:

- $100M Offers (~4.4 hours)
- $100M Leads (~7 hours)
- $100M Money Models (~4.3 hours)

Same process as the video transcripts. Download the auto-generated captions, convert to clean text. Three files, 855KB total. These are non-negotiable core material for the knowledge base.

Guest Podcast Transcripts

Searched YouTube for Hormozi guest appearances sorted by view count. The top hit was Diary of a CEO at 4.7M views. Grabbed the 15 highest-view-count appearances. The guest transcripts are 2.1MB total. Worth every byte. When a host like Steven Bartlett or Tom Bilyeu pushes back on a claim, Hormozi shifts into a different mode. He's more precise and sometimes reveals the edge cases he glosses over on his own channel. You can't get that from watching his channel alone.

X/Twitter Content

X's API rate limits capped the collection at 9 unique tweets. Not ideal, but enough to confirm the voice texture: "Aggressive with effort. Relaxed with outcome." His Twitter is his most compressed format. Each tweet is a framework distilled to a single line. 9 tweets is thin. For a more complete build, you'd want to manually curate 50-100 of his best threads. The API limitations made automated collection impractical.

Phase 3: Analyze

I ran voice analysis across the full corpus, looking at seven dimensions. Hormozi's sentences are short, punchy declarations. Fragments for emphasis. "And so" as his default transition. Short bursts, then a longer sentence that lands the point. Nearly every argument follows the same five-step skeleton: bold claim, personal story, framework, math, then a reductio ad absurdum that makes the alternative sound insane. Once you see it, you can't unsee it. The core frameworks are Grand Slam Offer, Value Equation, Supply an
I built an open source remote AI agent that controls your desktop from your phone
I built an open-source remote compute agent using Claude Code. You can operate your desktop from your phone, and yes, that includes running Claude Code on your desktop from your phone (see Use Case 3 in the demo). Chat with the built-in agent to do stuff for you, or switch to manual mode and control the desktop yourself. My desktop, my screen, my compute, just someone else's artificial brain. Bring your own API key from any provider.

GitHub: Link
Demo: Link
Download: Link (Alpha)

Why? Honestly, I made this so I could check my work VISUALLY while doing other stuff, instead of moving around with a laptop. Also, sitting on a chair for long hours is painful. There are some existing solutions, but they don't really let you see the output GUI, interact with it, or test the code right from your phone. With this app, the agent observes your screen, runs CLI commands, clicks buttons, and streams the progress back to you in real time. You can vibe code from anywhere :)

Use cases: Since the agent has CLI and GUI access, the possibilities are endless. Claude Code, Open Claw, Codex, Gemini CLI, all of them work. Each can have its own SKILL to direct the agent in the right direction.

Privacy: I get that sending desktop screenshots to model providers is a concern. There's a local-only mode that skips cloud vision completely: accessibility tree for native apps, headless browser for web pages. No screenshots leave your machine. If you still want vision, OmniParser runs the models locally, so your screen never hits a third-party API. Tbh I haven't noticed much difference in performance. Self-hosted model support is next on the list. Once that lands, you can keep everything on your machine end-to-end, both vision and text.

Built with Claude Code + BMAD: Planning, architecture, coding, debugging, docs, and releases with CC. For structure, I used the BMAD method, which basically walks you through PRD → architecture → epics → stories → dev with a different agent persona for each phase. Been working on this for about a month, so yes, before Dispatch dropped. Comparison with Dispatch is fair, but this is a lot more than just remote Claude Code. It operates your entire desktop. Any app, any CLI, any GUI. Claude Code is one of the many things you can run through it.

Looking for contributors: It's not perfect, but it's a start. Would love some help making it better.

A note on the iOS app: Not ready for public alpha yet. Android APK and desktop apps are good to go. Also, still figuring out how to distribute through the App Store and Play Store, so for now, you can download everything directly from the GitHub releases to try it out.

Documentation for devs: Link

Hope this is useful to some of you.

submitted by /u/SwaroopMeher
Made a free MCP server with 8 Claude personas — each has real frameworks instead of generic "act as" prompts
Got tired of "act as my CFO" giving shallow results. Built an MCP server where each persona (CEO, CFO, CTO, CMO, PM, Analyst, Support, Creative) has actual domain frameworks — Porter's Five Forces, DORA metrics, AARRR funnel, RICE scoring.

Best part: 3 workflows that chain personas. Strategy Review runs CEO → CFO → CTO where each one challenges the previous.

Connect via Claude Desktop — just paste the server URL and OAuth in. Free, no API keys needed. Built this myself at StudioMeyer. Happy to answer questions.

submitted by /u/studiomeyer_io
Built a conversational AI career tool in 5 days with no coding background — looking for honest feedback
I’m a paraprofessional with an education degree. Couldn’t find a job last week, so I built one instead.

Lune is a 10-question conversation that tries to surface what resumes miss. Not a resume builder, not a job board. It just asks what’s going on and tries to say something true back to you. It does passive constraint detection and gap analysis between what you say you want versus what you actually seem to need. The closing question is generated from the most specific thing you said in the whole conversation.

I stress tested it against 42 synthetic personas — undocumented workers, formerly incarcerated people, grieving widowers, minors raising siblings. No failures, but I also built the thing, so I’m probably missing stuff.

Stack if you care: Vercel, Claude Sonnet, Supabase, Resend, Stripe. Started as a single HTML file, now has a real backend. The conversation is free. I’m not trying to get paying users right now; I just want people who will actually try it and tell me what’s broken or what doesn’t land. Strictly looking for feedback!

submitted by /u/visaversa123
Building on Claude taught us that growth needs an execution engine, not just a smarter chat UI.
Vibecoding changed the front half of company building. A founder can now sit with Claude, Cursor, Replit, or Bolt, describe a product in plain English, iterate in natural language, and get to a working app in days instead of months. That shift is real, and it is why so many more products exist now than even a year ago.

But the moment the product works, the shape of the problem changes. Now the founder needs market research, positioning, lead generation, outreach, content, follow-up, and some way to keep all of it connected across time. That work does not happen inside one codebase. It happens across research workflows, browser actions, enrichment, CRM updates, email, publishing, and ongoing decision-making. That is where we felt the gap. Vibecoding has a clean execution loop. Growth does not.

That is why we built Ultron the way we did. We did not want another wrapper where a user types into a chat box, a model sees a giant prompt plus an oversized tool list, and then improvises one long response. That pattern can look impressive in demos, but it starts breaking as soon as the task becomes multi-step, cross-functional, or dependent on what happened earlier in the week. We wanted something closer to a runtime for company execution.

Under the hood, Ultron is structured as a five-layer system. The first layer is the interaction layer. That is the chat interface, real-time streaming, tool activity, and inline rendering of outputs. The second layer is orchestration. That is where session state, transcript persistence, permissions, cost tracking, and file history are handled. The third layer is the core execution loop. This is the part that matters most. The system compresses context when needed, calls the model, collects tool calls, executes what can run in parallel, feeds results back into the loop, and keeps going until the task is actually finished. The fourth layer is the tool layer. This is where the system gets its execution surface: built-in tools, MCP servers, external integrations, browser actions, CRM operations, enrichment, email, document generation. The fifth layer is model access and routing.

That architecture matters because growth work is not one thing. A founder does not actually want an answer to a prompt like "help me grow this product." What they really want is something much more operational. Research the category. Map the competitors. Find the right companies. Pull the right people. Enrich and verify contacts. Score them against the ICP. Draft outreach. Create follow-ups. Generate content from the same positioning. Keep track of the state so the work continues instead of resetting. That is not a chatbot interaction. That is execution.

So instead of one general assistant pretending to be good at everything, Ultron runs five specialists. Cortex handles research and intelligence. Specter handles lead generation. Striker handles sales execution. Pulse handles content and brand. Sentinel handles infrastructure, reliability, and self-improvement.

The important part is not just that they exist. It is how they work together. If Specter finds a strong-fit lead, it should not stop at surfacing a nice row in a table. It should enrich the lead, verify the contact, save the record, and create the next unit of work for Striker. Then Striker should pick that work up with the research context already attached, draft outreach that reflects the positioning, start the follow-up logic, and update the state when a reply comes in. That handoff model was a big part of the product design. We kept finding that most AI tools are still built around the assumption that one request should produce one answer. But growth work does not behave like that. It behaves more like a queue of connected operations where different kinds of intelligence need different tool access and different execution patterns.

Parallel execution became a huge part of this too. A lot of business tasks are only partially sequential. Some things do depend on previous steps, but a lot of work does not. If you are researching a category, scraping pages, pulling firmographic data, enriching leads, and checking external sources, there is no reason to force all of that into one slow serial chain. So we built Ultron so independent work can run concurrently. The product is designed to execute a large number of tasks in parallel, and within each task the relevant tool calls can run at the same time instead of waiting on each other unnecessarily. That alone changes the feel of the system. Instead of watching one model think linearly through everything, the user is effectively working with an execution environment where research, lead ops, sales actions, and content prep can all move at the pace they naturally should.

The other thing we cared about was skills. Not vague agent personas. Not magic prompts hidden behind branding. Actual reusable execution patterns. That mattered to us because a serious system should no
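The "independent tool calls run concurrently" pattern described above can be sketched with asyncio: gather the calls that do not depend on each other instead of awaiting them one by one. This is an illustrative sketch only; the tool names and stub implementations are invented, not Ultron's actual code.

```python
import asyncio

# Invented stand-ins for independent research tools; each would
# normally hit a scraper, enrichment API, or external source.
async def scrape_pages(query: str) -> str:
    await asyncio.sleep(0.01)
    return f"pages:{query}"

async def pull_firmographics(query: str) -> str:
    await asyncio.sleep(0.01)
    return f"firmo:{query}"

async def enrich_leads(query: str) -> str:
    await asyncio.sleep(0.01)
    return f"leads:{query}"

async def research(query: str) -> list[str]:
    # None of these depend on each other, so don't serialize them:
    # total latency is roughly max(call latencies), not their sum.
    return await asyncio.gather(
        scrape_pages(query),
        pull_firmographics(query),
        enrich_leads(query),
    )

results = asyncio.run(research("vertical SaaS CRMs"))
```

The dependent steps (e.g. scoring against the ICP after enrichment) would still be awaited sequentially after the gather.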
Context full? MCP server list unwieldy? I replaced at least 75 MCP servers and made some new ones like Deletion Interception. Looking for Beta Testers for my self-modding, network scanning, system auditing powerhouse MCP. So avant-garde it's sassy. The beta zip has all the optional dependencies.
I did a thing. This isn't a one-prompt-and-done vibe-coded disaster. I've been building and debugging it for weeks- hundreds of hours to sync tooling across sessions and systems. This is not a burner account- it's my newly created one for my LLC- Try it out, I don't think you'll go back to the old way. Stay Sassy Folks. Summary of the tool below- apologies for the sales monster in me-: I'll cut straight to it. The MCP ecosystem is a mess. You need file operations — install Filesystem. Terminal? Desktop Commander. GitHub? The official GitHub MCP server — which has critical SHA-handling bugs that silently corrupt your commits. Desktop automation? Windows-MCP. Android? mobile-mcp. Memory? Anthropic's memory server. SSH? Pick from 7 competing implementations. Screenshots? OCR? Clipboard? Network scanning? Each one is another server, another config block, another chunk of your context window gone. I've been building SassyMCP: a single Windows exe that consolidates all of this: 257 tools across 31 modules — filesystem, shell (PowerShell/CMD/WSL), desktop automation, GitHub (80 tools with correct SHA handling), Android device control via ADB, phone screen interaction with UI accessibility tree, network scanning (nmap), security auditing, Windows registry, process management, clipboard, Bluetooth, event logs, OCR (Tesseract), web inspection, SSH remote Linux, persistent memory, and self-modification with hot reload 34MB standalone exe — no Python install, no npm, no Docker. Download and run. Beta zip ships with ADB, nmap, plink, scrcpy, and Tesseract OCR bundled — nothing extra to install Smart loading — only loads the tool groups you actually use, so you're not burning 25K tokens of context on tool definitions you never touch Works with Claude Desktop, Grok Desktop, Cursor, Windsurf — stdio and HTTP transport A few things I think are worth highlighting that I haven't seen in other MCP servers: Phone pause/resume with sensitive context detection. 
The AI operates your Android phone, hits a login screen, and the interaction tools automatically refuse to execute. It reads the UI accessibility tree, detects auth/payment/2FA screens, and stops. You log in manually, tell it to resume, and it picks up where it left off — aware of everything it observed while paused. Safe delete interception. AI agents hallucinate destructive commands. Every delete-family command (rm, del, Remove-Item, rmdir, etc.) across all shells is intercepted. Instead of destroying your files, targets get moved to a _DELETE_/ staging folder in the same directory for you to review. Because "the AI deleted my project" shouldn't be a thing. The GitHub module actually works. The official GitHub MCP server has a well-documented bug where it miscalculates blob SHAs, leading to silent commit corruption. SassyMCP uses correct blob SHA lookups, proper path encoding, atomic multi-file commits via Git Data API, retry logic with exponential backoff, and rate-limit awareness. It also strips 40-70% of the URL metadata bloat from API responses so you're not wasting context on gravatar_url and followers_url fields. Here's what it replaces, specifically: Domain Servers replaced Notable alternative Filesystem / editing 11 Anthropic's Filesystem Shell / terminal 5 Desktop Commander (5.9k stars) Desktop automation 9 Windows-MCP (5k stars) GitHub / Git 5 GitHub MCP Server (28.6k stars) Android / phone 9 mobile-mcp (4.4k stars) Network + security 16 mcp-for-security (601 stars) SSH / remote Linux 7 ssh-mcp (365 stars) Memory / state 7 mcp-memory-service (1.6k stars) Windows system 13 Windows-MCP (5k stars) It's free, it's open source (MIT), and the beta is fully unlocked — all 257 tools, no gating. Download: github.com/sassyconsultingllc/SassyMCP/releases The zip package includes the exe + all external tools bundled. Unzip, run start-beta.bat, add the custom connector to the URL it creates. 
Full readme within. I'm looking for beta testers who are actually using MCP daily and are sick of the fragmentation. If something doesn't work, open an issue. I'm not going to pretend this is perfect — it's a beta. But it works, it's fast, and it's one config block instead of ten. Windows only for now. If there's enough interest I'll look at macOS/Linux. submitted by /u/CapableOrange6064 [link] [comments]
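The safe-delete interception the post describes can be sketched in a few lines. This is a hypothetical illustration of the staging idea (move, don't destroy), not SassyMCP's actual implementation:

```python
import shutil
from pathlib import Path

def safe_delete(target: str) -> Path:
    """Instead of deleting, move the target into a _DELETE_/ staging
    folder in the same directory so a human can review or restore it."""
    path = Path(target)
    staging = path.parent / "_DELETE_"
    staging.mkdir(exist_ok=True)
    dest = staging / path.name
    # Avoid clobbering an earlier staged file with the same name.
    counter = 1
    while dest.exists():
        dest = staging / f"{path.name}.{counter}"
        counter += 1
    shutil.move(str(path), str(dest))
    return dest

# An interceptor would rewrite e.g. `rm report.txt` into
# safe_delete("report.txt"), landing the file in _DELETE_/report.txt.
```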
Charging people
Hi folks, I've built a wholesaling agent that follows up on lead conversations, books viewings against a schedule table, tracks all the information, screens leads, and calculates offers, all wired into an n8n workflow. When a lead comes in, a viewing is booked, the screener runs, and so on, it sends you an email and a Slack notification, creates a lead in Zoho CRM, and adds a row in Google Sheets. It can handle both buyers and sellers. Some people have asked how much I charge, and that's when they walk away, so I don't know if I'm quoting prices that are too high. How much would you charge for this? submitted by /u/emprendedorjoven [link] [comments]
Upload Yourself Into an AI in 7 Steps
A step-by-step guide to creating a digital twin from your Reddit history

STEP 1: Request Your Data
Go to https://www.reddit.com/settings/data-request

STEP 2: Select Your Jurisdiction
Request your data as per your jurisdiction: GDPR for the EU, CCPA for California, or select "Other" and reference your local privacy law (e.g. PIPEDA for Canada).

STEP 3: Wait
Reddit will process your request. This can take anywhere from a few hours to a few days.

STEP 4: Extract Your Data
Receive your data and extract the .zip file. Identify and save your post and comment files (.csv).
Privacy note: Your export may include sensitive files (IP logs, DMs, email addresses). You only need the post and comment CSVs. Review the contents before uploading anything to an AI.

STEP 5: Start a Fresh Chat
Initiate a chat with your preferred AI (ChatGPT, Claude, Gemini, etc.).
FIRST PROMPT: For this session, I would like you to ignore in-built memory about me.

STEP 6: Upload and Analyze
Upload the post and comment files and provide the following prompt with your edits in the placeholders.
SECOND PROMPT: I want you to analyze my Reddit account and build a structured personality profile based on my full post and comment history. I've attached my Reddit data export. The files included are:
- posts.csv
- comments.csv
These were exported directly from Reddit's data request tool and represent my full account history. This analysis should not be surface-level. I want a step-by-step, evidence-based breakdown of my personality using patterns across my entire history. Assume that my account reflects my genuine thoughts and behavior. Organize the analysis into the following phases:
Phase 1 — Language & Tone: Analyze how I express myself. Look at tone (e.g., neutral, positive, cynical, sarcastic), emotional vs logical framing, directness, humor style, and how often I use certainty vs hedging. This should result in a clear communication style profile.
Phase 2 — Cognitive Style: Analyze how I think. Identify whether I lean more analytical or intuitive, abstract or concrete, and whether I tend to generalize, look for patterns, or focus on specifics. Also evaluate how open I am to changing my views. This should result in a thinking style model.
Phase 3 — Behavioral Patterns: Analyze how I behave over time. Look at posting frequency, consistency, whether I write long or short content, and whether I tend to post or comment more. This should result in a behavioral signature.
Phase 4 — Interests & Identity Signals: Analyze what I'm drawn to. Identify recurring topics, subreddit participation, and underlying values or themes. This should result in an interest and identity map.
Phase 5 — Social Interaction Style: Analyze how I interact with others. Look at whether I tend to debate, agree, challenge, teach, or avoid conflict. Evaluate how I respond to disagreement. This should result in a social behavior profile.
Phase 6 — Synthesis: Combine all previous phases into a cohesive personality profile. Approximate Big Five traits (openness, conscientiousness, extraversion, agreeableness, neuroticism), identify strengths and blind spots, and describe likely motivations. Also assess whether my online persona differs from my underlying personality.
Important guidelines:
- Base conclusions on repeated patterns, not isolated comments.
- Use specific examples from my history as evidence.
- Avoid overgeneralizing or making absolute claims.
- Present conclusions as probabilities, not certainties.
- Begin by reading the uploaded files and confirming what data is available before starting analysis.
The goal is to produce a thoughtful, accurate, and nuanced personality profile — not a generic summary. Let's proceed step-by-step through multiple responses. At the end, please provide the full analysis as a Markdown file.

STEP 7: Build Your AI Project
Create a custom GPT (ChatGPT), Project (Claude), or Gem (Gemini). Upload the following documents to the project knowledge source:
- posts.csv
- comments.csv
- [PersonalityProfile].md
Create custom instructions using the template below.

Custom Instructions Template
You are u/[YOUR USERNAME]. You have been active on Reddit since [MONTH YEAR]. You respond as this person would, drawing on the uploaded comment and post history as your memory, knowledge base, and voice reference.
CORE IDENTITY
[2-5 sentences. Who are you? Religion, career, location, diagnosis, political orientation, major life events. Pull this from the Phase 4 and Phase 6 sections of your personality profile. Be specific.]
VOICE & TONE
[Pull directly from Phase 1 of your profile. Convert observations into rules. If the profile says you use "lol" 10x more than "haha," write: "Uses 'lol' sincerely, rarely says 'haha'." Include specific punctuation habits, sentence structure patterns, and what NOT to do. Negative instructions are often more useful than positive ones.]
[Add your own signature tics here - ellipsis style, emoji usage, capitalization habits, swea
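Before handing the export to an AI, you can compute a rough Phase 3 behavioral signature locally. This sketch assumes the export's CSV has a "body" text column, which may differ in your actual Reddit export, so check the file's header row first:

```python
import csv
from statistics import mean

def behavioral_signature(csv_path: str, text_column: str = "body") -> dict:
    """Summarize one export file: how many items it holds and the
    average/maximum character length of the text column."""
    lengths = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            text = row.get(text_column) or ""
            lengths.append(len(text))
    return {
        "items": len(lengths),
        "avg_chars": round(mean(lengths), 1) if lengths else 0,
        "max_chars": max(lengths, default=0),
    }

# e.g. behavioral_signature("comments.csv") — run it on both
# comments.csv and posts.csv to see whether you comment or post more.
```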
The 6 Codex CLI workflows everyone's using right now (and what makes each one unique)
Compiled a comparison of the top community-driven development workflows for Codex CLI, ranked by GitHub stars. Full comparison is from codex-cli-best-practice. submitted by /u/shanraisshan [link] [comments]
In one place, you can now see where Persona verified your identity, request access to your data, or ask for it to be deleted. Your data is yours — we're just making it easier to control.
Every time you verify your identity online, you're placing trust in the companies handling your data. But trust shouldn't mean giving up visibility or control. That's why we built the Persona privacy portal. https://t.co/EZaoXS1djD https://t.co/KwKq4AWyMX
Based on user reviews and social mentions, the most common pain points are: ai agent, llm, claude, pricing.
Based on 48 social mentions analyzed, 0% of sentiment is positive, 100% neutral, and 0% negative.
Ollama
Project at Ollama
3 mentions