Run AI on your own terms. Connect any model, extend with code, protect what matters—without compromise.

Open WebUI is the platform for running AI on your own terms. Every model, every conversation, every tool—in one place. Connect to Ollama, OpenAI, Anthropic, or anything compatible—local or cloud. Run locally, in the cloud, or both—your data stays exactly where it belongs. Extend with Python. Share what you build with 367K others. 290 million downloads and growing.

Prompts, models, tools, functions, discussions, and reviews—created by the community, available to everyone. Browse, install, contribute. Everything at openwebui.com. Voice, vision, retrieval, generation, search—ready from day one.

From startups to global enterprises: SSO, RBAC, and audit logs built for regulated industries. One platform. Complete control. No compromises. Not rent it. Not depend on it. Own it. Self-sufficient, adaptable, and ready for wherever humanity goes—from a laptop to the stars. One command. 60 seconds. No account required. Run it anywhere.

This week's picks focused on speed and practicality: quick translation, faster tool execution, visible token usage, self-hosted PDF editing, AI email... Why the smartest AI setup runs both local and cloud. For managing partners, CIOs, and legal technology leaders evaluating AI solutions for their firm.
Mentions (30d): 0
Reviews: 0
Platforms: 2
Sentiment: 0% (0 positive)
Industry: information technology & services
Employees: 3
npm packages: 20
HuggingFace models: 31
Managed Agents launched today. I built a Slack relay, tested it end-to-end. Here's what I found.
Managed Agents dropped a few hours ago. I had been reading the docs ahead of time, so I built a full Slack relay right away - Socket Mode listener, session-per-channel management, SSE streaming, cost tracking via span events. Tested multi-turn conversations, tool usage, session persistence. Wanted to share what I found. The prompt caching is genuinely impressive. My second session cost $0.006 because the system prompt and tool definitions were served from cache automatically. API design is clean. The SDKs work. For simple task execution, it's solid infrastructure. The thing that surprised me most is that the containers have no inbound connectivity. There's no public URL. The agent can reach out (web search, fetch, bash), but nothing can reach in. It can't serve a web page, can't receive a webhook, can't host a dashboard, can't expose an API. It's essentially Claude Code running in Anthropic's cloud - same tools, same agent loop, just in a managed container instead of your terminal. The agent is something you invoke, not something that runs. Cold start is about 130 seconds per new session, so for anything interactive you need to keep sessions alive. Memory is in "research preview" (not shipped yet), so each new session starts fresh. Scheduling doesn't exist - the agent only responds when you message it. The agent definition is static, so it doesn't learn from corrections or adapt over time. If you used Cowork, you know agents benefit from having their own interface. Managed Agents solves the compute problem by moving to the cloud, but there's no UI layer at all. And unlike memory and multi-agent (both in research preview), inbound connectivity isn't on the roadmap. I should be transparent about my perspective. I maintain two open-source projects in this space - Phantom (ghostwright/phantom), an always-on agent with persistent memory and self-evolution, and Specter (ghostwright/specter), which deploys the VMs it runs on. 
Different philosophy from Managed Agents, so I came into this with opinions. But I was genuinely curious how they'd compare. For batch tasks and one-shot code generation, the infrastructure advantages are real. For anything where the agent needs to be a persistent presence - serving dashboards, learning over time, waking up on a schedule - the architecture doesn't support it. Curious what others are seeing. Has anyone deployed it for a real use case yet? How are you handling the lack of persistent memory? Is anyone running always-on agents on their own infrastructure? submitted by /u/Beneficial_Elk_9867
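The ~130 second cold start mentioned above means any interactive relay has to reuse sessions instead of creating one per message. Here is a minimal Python sketch of session-per-channel reuse; `SessionPool` and the `create_session` factory are hypothetical stand-ins for the real SDK calls, and the idle window is a guess:

```python
import time

# Hypothetical shim: the real Managed Agents SDK calls will differ.
class SessionPool:
    """Reuse one agent session per Slack channel to amortize the ~130s cold start."""

    def __init__(self, create_session, max_idle_s=15 * 60):
        self._create = create_session   # factory: channel_id -> session object
        self._sessions = {}             # channel_id -> (session, last_used)
        self._max_idle = max_idle_s

    def get(self, channel_id):
        now = time.monotonic()
        entry = self._sessions.get(channel_id)
        if entry and now - entry[1] < self._max_idle:
            session = entry[0]              # warm session: prompt cache still valid
        else:
            session = self._create(channel_id)  # cold start: expect a long wait
        self._sessions[channel_id] = (session, now)
        return session
```

Keeping the session warm is also what makes the prompt caching pay off: the second message in a channel hits the cached system prompt and tool definitions.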
I built a Chrome extension that sends any webpage element's context to Claude Code via MCP — in one click
Hey r/ClaudeAI, Built a small tool that's been saving me a lot of copy-paste time: Clasp-it. The problem it solves: when I'm fixing a UI bug, I used to open DevTools, copy the HTML, copy the computed CSS, paste it into Claude, describe the issue... It was tedious. Especially when the bug involved React props or console errors too. What Clasp-it does:

- Click the extension icon → click any element on any page
- It captures HTML, CSS selector, computed styles, React props, console logs, network requests, and a screenshot
- All of it gets sent to Claude Code via MCP automatically

Then I just tell Claude: "fix all recent picks using clasp-it" — and it reads the full context and edits my actual source files. Setup (2 minutes): install from the Chrome Web Store (link below), then run one command to add the MCP server:

claude mcp add --scope user --transport http clasp-it https://claspit.dev/mcp --header "Authorization: Bearer YOUR_API_KEY"

Free plan: 10 picks/day with DOM + CSS. Pro: unlimited + screenshot, console, network, React props ($2.99/mo). Chrome Web Store: https://chromewebstore.google.com/detail/clasp-it/inelkjifjfaepgpdndcgdkpmlopggnlk Website: https://claspit.dev Happy to answer any questions. Would love feedback from this community especially. submitted by /u/cyphermadhan
Running Claude Code TUI against local models via protocol translation — sharing my approach
I've been working on OwlCC, a protocol proxy that lets you run Claude Code's complete terminal UI — all 25+ tools (Bash, Read, Edit, Write, Glob, Grep, WebSearch...) and 40+ commands — against your own local models. How it works: Claude Code speaks the Anthropic Messages API. OwlCC sits in the middle, translates Anthropic protocol to OpenAI Chat Completions on the fly, and routes to whatever local backend you're running. Claude Code doesn't know the difference.

Your prompt → OwlCC proxy (:8019) → Anthropic-to-OpenAI translation → Your local backend → Local models

What you get that official Claude Code doesn't have:

Any model — Qwen, Llama, Mistral, DeepSeek, MiniMax, whatever you can serve
/model hot-swap — switch between models mid-conversation (see screenshot)
100% local — nothing leaves your machine, no API key, no account
Local web search — SearXNG replaces Anthropic's cloud search, fully self-hosted
Observability — Prometheus metrics, audit log, request tracing, error budgets
Multi-backend resilience — circuit breaker, fallback chains, health monitoring
Learned skills — auto-synthesizes reusable skills from your coding sessions (42 skills and counting)
Training data pipeline — auto-collect, quality scoring, PII sanitization, multi-format export

What you lose vs official:

No extended thinking (local models don't support it)
Model quality depends on what you run — a 7B model won't match Claude Opus
No official support

The setup: it requires the Claude Code TypeScript source tree (not the compiled npm package — you need to bring your own). OwlCC launches it via Node.js + tsx with ESM loader hooks that redirect 22 cloud-only modules to local stubs. The upstream source is pinned locally — Anthropic updates don't affect you.
Full tool use driving a Java build + local SearXNG web search. /model switching between 5 local models + /skills showing 42 learned skills.

git clone https://github.com/yeemio/owlcc-byoscc.git
cd owlcc-byoscc
# place your CC source at upstream/claude-code/
npm install && npm run build
npx owlcc init  # auto-detects your local backends
npx owlcc

Tech stack: TypeScript, 120+ source files, 1652 tests, Apache 2.0. GitHub: https://github.com/yeemio/owlcc-byoscc Happy to answer questions about the architecture (the ESM loader chain that makes this work is kind of interesting). submitted by /u/Single_Mushroom2043
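The proxy's core move, rewriting an Anthropic Messages request into an OpenAI Chat Completions request, can be sketched in a few lines. This is Python rather than OwlCC's TypeScript, and it handles text-only content; the real translation also has to map tool_use/tool_result blocks and streaming events:

```python
def anthropic_to_openai(body: dict) -> dict:
    """Translate an Anthropic Messages API request body into an OpenAI
    Chat Completions body. Text-only sketch."""
    messages = []
    # Anthropic puts the system prompt at the top level; OpenAI makes it a message.
    if body.get("system"):
        messages.append({"role": "system", "content": body["system"]})
    for msg in body.get("messages", []):
        content = msg["content"]
        if isinstance(content, list):  # Anthropic allows a list of content blocks
            content = "".join(
                b.get("text", "") for b in content if b.get("type") == "text"
            )
        messages.append({"role": msg["role"], "content": content})
    return {
        "model": body.get("model", "local"),
        "max_tokens": body.get("max_tokens", 1024),
        "messages": messages,
    }
```

The reverse direction (OpenAI response back into an Anthropic-shaped response, including SSE event framing) is where most of the work in a proxy like this actually lives.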
I built a visual multi-agent team designer - drag & drop 28 agents, run live simulation, generate prompts. Single HTML file, zero dependencies.
I kept running into the same problem: designing multi-agent Claude Code teams by hand. Writing orchestration prompts for 10+ agents, figuring out which model goes where, making sure the workflow makes sense - it was slow and error-prone. So I built a visual designer for it. What it does: you drag agents onto a canvas, connect them into workflows, assign models (Opus/Sonnet/Haiku), run a live simulation, and export a ready-to-use system prompt. One HTML file, zero dependencies, works offline. Live demo: https://thejacksoncode.github.io/Agent-Architecture/ Source: https://github.com/TheJacksonCode/Agent-Architecture Quick demo: to get the full experience, open the demo -> pick "Deep Five Minds Ultimate" from the preset sidebar -> click "Simulation" -> watch 27 agents talk to each other. What's inside:

28 agents across 6 phases (strategy, research, debate, build, QA, HITL)
29 presets from a 2-agent Solo setup to a 27-agent full orchestra
Five Minds Protocol - structured debate: 4 domain experts + Devil's Advocate argue in rounds, then a Synthesizer on Opus produces a "Gold Solution"
HITL Decision Gates - simulation pauses at 3 human checkpoints with a 120s countdown timer
Live Simulation - agents exchange speech bubbles and data packets along SVG connections
Mission Control - fullscreen dashboard with real-time metrics and communications log
Agent Encyclopedia - research-backed prompts, anti-patterns, and analogies for every agent
Dark/Light theme + full PL/EN bilingual UI

How Claude helped build it: this entire project was built with Claude Code. Every version (there are 31 of them) was pair-programmed with Claude. The agent prompts follow a structured format: ROLE / INPUT / OUTPUT / RESPONSIBILITIES / RULES / WHAT YOU DO NOT DO / REPORT FORMAT. Example prompt structure (Research Tech agent):

ROLE: You are Research Tech - a technical researcher specializing in finding current solutions, libraries, APIs, and implementation patterns.
INPUT: Research brief from Planner with specific technical questions.
OUTPUT: Structured report with findings, each labeled [CERTAIN], [PROBABLE], or [SPECULATION].
WHAT YOU DO NOT DO: You do not recommend solutions. You do not coordinate with other researchers (to prevent groupthink).

Tech stack: ~4600 lines of vanilla JS in a single HTML file. Canvas 2D for particles, inline SVG for connections, Web Animations API for agent animations, CSS variables for theming. No npm, no build step, no CDN. 31 versions, each saved as a separate file. I never overwrite previous versions. I'd love to hear what multi-agent workflows you're using with Claude Code, and what agents/presets would be useful to add. Happy to answer any questions about the architecture. submitted by /u/ConceptParticular565
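That fixed section order is easy to enforce mechanically. A small sketch (a hypothetical helper, not code from the project) that renders an agent spec into the ROLE / INPUT / OUTPUT prompt format:

```python
# Canonical section order for every agent prompt, as described in the post.
SECTIONS = ["ROLE", "INPUT", "OUTPUT", "RESPONSIBILITIES", "RULES",
            "WHAT YOU DO NOT DO", "REPORT FORMAT"]

def build_agent_prompt(spec: dict) -> str:
    """Render an agent spec into the structured prompt format, keeping the
    section order fixed so every agent reads the same way regardless of
    how the spec dict was assembled."""
    parts = []
    for section in SECTIONS:
        if section in spec:
            parts.append(f"{section}: {spec[section]}")
    return "\n\n".join(parts)
```

Generating prompts from a spec table like this is also what makes 29 presets maintainable: the preset only stores the per-agent fields, not 28 hand-written prompt strings.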
I built an MCP server that lets Claude Code act as an Orchestrator for DeepSeek, Gemini, and Kimi (with a TUI monitor!)
Hey r/claude community! I've been obsessed with the new Claude Code CLI lately, but I kept thinking: "Claude is a genius orchestrator, but wouldn't it be cool if it could delegate specific tasks to other specialized models?" So, I spent the last few days pair-programming with Claude to build mcp-multi-model — an open-source MCP server that turns Claude into a true "Boss Agent." What does it do? It allows Claude (via Claude Code or any MCP-compatible client) to call other LLMs as sub-agents to handle specific parts of a workflow:

DeepSeek: for heavy-duty coding logic or cost-efficient generations.
Gemini: for when you need that massive context window or research tasks.
Kimi: great for real-time information with web search capabilities.

Claude acts as the Orchestrator. It automatically decides which task to delegate based on the prompt. For example, it might ask Gemini to research a topic and then tell DeepSeek to implement the code based on that research. The "Cool" Part: Agent Monitor TUI. One thing that frustrates me with agents is the "black box" feeling. To fix this, I built a TUI (Terminal User Interface) Monitor that runs alongside it. You can see in real-time: which model is being called, the exact prompts being sent, and the raw responses coming back. It makes debugging (and just watching the "thinking" process) actually fun. Built with Claude, for Claude: this project was a 50/50 collaboration with Claude. We went back and forth on the MCP schema, streaming responses across different providers, and even the monitor UI layout. It's been a meta experience using Claude to build a tool that makes Claude even more powerful. Open Source & Feedback: I've just open-sourced the whole thing. I'd love for you guys to take it for a spin, break it, and tell me what you think. GitHub: https://github.com/K1vin1906/mcp-multi-model Agent Monitor: https://github.com/K1vin1906/agent-monitor It's still early days, so feedback and PRs are very welcome.
If you have ideas for other models or features you'd like to see added, let me know in the comments! submitted by /u/Narrow-Condition-961
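In the tool itself, Claude decides what to delegate; for illustration, here is the simplest possible router over the same three sub-agents. The keyword table is invented for this sketch, not taken from mcp-multi-model:

```python
# Illustrative trigger keywords per sub-agent; real routing is done by Claude.
ROUTES = {
    "deepseek": ("code", "implement", "refactor", "debug"),
    "gemini":   ("research", "summarize", "long document", "compare"),
    "kimi":     ("latest", "news", "today", "current"),
}

def pick_model(prompt: str, default: str = "deepseek") -> str:
    """Naive delegation: return the first model whose trigger keywords
    appear in the prompt, falling back to a default sub-agent."""
    lowered = prompt.lower()
    for model, keywords in ROUTES.items():
        if any(keyword in lowered for keyword in keywords):
            return model
    return default
```

A keyword router like this is what you would reach for if the orchestrator were dumb; the point of the MCP design is that the orchestrating model can make this call with full context instead.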
View originalContext full? MCP server list unwieldy? I replaced at least 75 MCP servers and made some new ones like Deletion Interception. Looking for Beta Testers for my self-modding, network scanning, system auditing powerhouse MCP- So avant-guard it's sassy. The beta zip has all the optional dependencies.
I did a thing. This isn't a one-prompt-and-done vibe-coded disaster. I've been building and debugging it for weeks — hundreds of hours to sync tooling across sessions and systems. This is not a burner account — it's the newly created one for my LLC. Try it out; I don't think you'll go back to the old way. Stay Sassy, folks. Summary of the tool below — apologies for the sales monster in me: I'll cut straight to it. The MCP ecosystem is a mess. You need file operations — install Filesystem. Terminal? Desktop Commander. GitHub? The official GitHub MCP server — which has critical SHA-handling bugs that silently corrupt your commits. Desktop automation? Windows-MCP. Android? mobile-mcp. Memory? Anthropic's memory server. SSH? Pick from 7 competing implementations. Screenshots? OCR? Clipboard? Network scanning? Each one is another server, another config block, another chunk of your context window gone. I've been building SassyMCP: a single Windows exe that consolidates all of this:

257 tools across 31 modules — filesystem, shell (PowerShell/CMD/WSL), desktop automation, GitHub (80 tools with correct SHA handling), Android device control via ADB, phone screen interaction with UI accessibility tree, network scanning (nmap), security auditing, Windows registry, process management, clipboard, Bluetooth, event logs, OCR (Tesseract), web inspection, SSH remote Linux, persistent memory, and self-modification with hot reload
34MB standalone exe — no Python install, no npm, no Docker. Download and run. Beta zip ships with ADB, nmap, plink, scrcpy, and Tesseract OCR bundled — nothing extra to install
Smart loading — only loads the tool groups you actually use, so you're not burning 25K tokens of context on tool definitions you never touch
Works with Claude Desktop, Grok Desktop, Cursor, Windsurf — stdio and HTTP transport

A few things I think are worth highlighting that I haven't seen in other MCP servers: Phone pause/resume with sensitive context detection.
The AI operates your Android phone, hits a login screen, and the interaction tools automatically refuse to execute. It reads the UI accessibility tree, detects auth/payment/2FA screens, and stops. You log in manually, tell it to resume, and it picks up where it left off — aware of everything it observed while paused.

Safe delete interception. AI agents hallucinate destructive commands. Every delete-family command (rm, del, Remove-Item, rmdir, etc.) across all shells is intercepted. Instead of destroying your files, targets get moved to a _DELETE_/ staging folder in the same directory for you to review. Because "the AI deleted my project" shouldn't be a thing.

The GitHub module actually works. The official GitHub MCP server has a well-documented bug where it miscalculates blob SHAs, leading to silent commit corruption. SassyMCP uses correct blob SHA lookups, proper path encoding, atomic multi-file commits via Git Data API, retry logic with exponential backoff, and rate-limit awareness. It also strips 40-70% of the URL metadata bloat from API responses so you're not wasting context on gravatar_url and followers_url fields.

Here's what it replaces, specifically (domain — servers replaced — notable alternative):

Filesystem / editing — 11 — Anthropic's Filesystem
Shell / terminal — 5 — Desktop Commander (5.9k stars)
Desktop automation — 9 — Windows-MCP (5k stars)
GitHub / Git — 5 — GitHub MCP Server (28.6k stars)
Android / phone — 9 — mobile-mcp (4.4k stars)
Network + security — 16 — mcp-for-security (601 stars)
SSH / remote Linux — 7 — ssh-mcp (365 stars)
Memory / state — 7 — mcp-memory-service (1.6k stars)
Windows system — 13 — Windows-MCP (5k stars)

It's free, it's open source (MIT), and the beta is fully unlocked — all 257 tools, no gating. Download: github.com/sassyconsultingllc/SassyMCP/releases The zip package includes the exe + all external tools bundled. Unzip, run start-beta.bat, add the custom connector to the URL it creates.
Full readme within. I'm looking for beta testers who are actually using MCP daily and are sick of the fragmentation. If something doesn't work, open an issue. I'm not going to pretend this is perfect — it's a beta. But it works, it's fast, and it's one config block instead of ten. Windows only for now. If there's enough interest I'll look at macOS/Linux. submitted by /u/CapableOrange6064
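The staging behavior described above generalizes beyond Windows shells. A minimal Python sketch of the idea (my own illustration, not SassyMCP's code):

```python
import shutil
from pathlib import Path

def intercept_delete(targets):
    """Instead of deleting, move each target into a _DELETE_/ staging folder
    in its own directory so the user can review or restore it later."""
    staged = []
    for target in map(Path, targets):
        if not target.exists():
            continue
        staging = target.parent / "_DELETE_"
        staging.mkdir(exist_ok=True)
        dest = staging / target.name
        # Avoid clobbering an earlier staged file with the same name.
        n = 1
        while dest.exists():
            dest = staging / f"{target.name}.{n}"
            n += 1
        shutil.move(str(target), str(dest))
        staged.append(dest)
    return staged
```

In an MCP server this function would sit between the shell tool and the real command: parse out the delete-family command's arguments, stage them, and report what was intercepted instead of executing the destructive call.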
Claude Code can now submit your app to App Store Connect and help you pass review
I built a native macOS app called Blitz that gives Claude Code (or any MCP client) full control over App Store Connect. Built most of it with Claude Code. The problem was simple: every time I needed to submit to ASC, the entire agentic workflow broke. Metadata, screenshots, builds, localization, review notes... all meant leaving the terminal and fighting Apple's web UI. So I built MCP servers that let Claude Code handle the whole thing. What Claude Code can do through Blitz:

Create and edit app metadata across every locale
Select builds and submit them for review
Manage TestFlight builds, groups, and testers
Upload and organize screenshots
Write and refine review notes so you actually pass review
Manage simulators and connected iPhones for testing

The app also has a built-in terminal with Claude Code support, so agents can build, test, and ship all from one place. There's a demo on the repo of an agent submitting an app to ASC for review end to end. Everything runs locally, MCP server is localhost only. BYOK. Open source (Apache 2.0): https://github.com/blitzdotdev/blitz-mac Website: https://blitz.dev Curious if anyone else has been using MCP tooling to automate parts of the App Store workflow. This feels like the kind of thing Claude Code was made for. submitted by /u/invocation02
I built an open-source system to manage work context across Claude Code sessions — so agents don't forget what they were doing
GitHub: https://github.com/chainofdive/ravenclaw I've been using Claude Code heavily across multiple side projects, and the biggest pain point was context loss between sessions. Every time I started a new session, I had to re-explain: "Here's the project, here's what we did last time, here's what to do next." With 3-4 projects running in parallel, I was spending more time context-switching than actually building. So I built Ravenclaw — a self-hosted work context manager designed specifically for Claude Code (and other AI coding agents). It's free and open-source (Apache 2.0). Built with Claude Code This entire project was built using Claude Code as the primary development tool. Claude Code wrote the API, the web dashboard, the MCP server, and even the Playwright tests — all orchestrated through Ravenclaw itself. It's been a dogfooding loop: build the tool with Claude Code, then use the tool to manage Claude Code sessions better. How it works Project → Epic → Issue hierarchy to break down work Claude Code loads previous context via MCP tools (40+) at session start Claude Code saves progress snapshots when done — next session picks up where it left off Web UI to instruct Claude Code, watch responses stream in real-time, and see project status at a glance Uses claude --resume under the hood for conversation continuity The key insight The issue tracker and wiki aren't for humans to manage manually — they're for agents to maintain. Claude Code creates issues, updates status, writes wiki pages through MCP. Humans use the web UI to observe, add context via comments, and answer questions when Claude Code needs human judgment (Human Input Requests). Why this matters for Claude Code users Session continuity: Context is stored in a DB, not in Claude's conversation history.
New session = full context loaded automatically via MCP. Multi-agent flexibility: works with Claude Code, Gemini CLI, and Codex. Context lives in Ravenclaw, so you can switch agents without losing project state. Visual overview: instead of scrolling through terminal history, see your project structure in a graph view — click any node for details. Headless execution: permission mode control (auto-approve, bypass) so Claude Code doesn't get stuck on permission prompts.

Web UI: chat with Claude Code directly from the browser with real-time streaming; graph view showing epic/issue structure and progress; click any node to view/edit details inline; tool activity indicator ("Running: Bash", "Running: Edit").

Try it:

git clone https://github.com/chainofdive/ravenclaw.git
cd ravenclaw && pnpm install && pnpm build
docker-compose up -d && pnpm db:push

Just needs Node.js and PostgreSQL. Self-hosted, no account required, completely free. GitHub: https://github.com/chainofdive/ravenclaw I've been using this daily for my own projects. Happy to answer any questions or hear what features would be useful. submitted by /u/Far-Investment-7618
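The snapshot-and-resume loop at the heart of this can be sketched with JSON files standing in for Ravenclaw's PostgreSQL-backed store. The function names here are illustrative, not the project's API:

```python
import json
from pathlib import Path

def save_snapshot(store: Path, project: str, snapshot: dict) -> None:
    """Persist a progress snapshot so the next session can resume from it."""
    store.mkdir(parents=True, exist_ok=True)
    (store / f"{project}.json").write_text(json.dumps(snapshot, indent=2))

def load_snapshot(store: Path, project: str) -> dict:
    """Return the last snapshot, or an empty starting context for a new project."""
    path = store / f"{project}.json"
    return json.loads(path.read_text()) if path.exists() else {"done": [], "next": []}
```

The agent calls the save step at the end of a session and the load step at the start of the next one; because the state lives outside the conversation, the same snapshot also works when you swap Claude Code for another agent.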
I built a GUI for managing and syncing Claude Code skills, no terminal needed
I kept running into the same problem: my Claude Code skills were scattered across ~/.claude/skills/, buried in marketplace plugin directories, and impossible to browse without digging through the filesystem. So I built Quiver. What it does:

Shows all your local skills and marketplace plugins in one searchable web UI
Browse and install skills from community marketplaces with one click
Edit SKILL.md files directly in the browser
Sync skills across machines with Git
Drag and drop .skill.zip import
Standalone macOS app (~1.8MB) or install via npm

It's free, open source (MIT), and runs locally. No account, no cloud. The most meta part? There's a skill that installs the skill manager. Download the SKILL.md from the site, upload it to Claude via Customize → Skills → Upload plugin, then just say "Launch Quiver". Claude installs it and opens the UI for you. You never touch a terminal. A skill to manage your skills. We've come full circle. GitHub: github.com/scribblesvurt-crypto/quiver submitted by /u/Viberpsychosis
I love Claude Code but hate Claude Desktop - so I built my own
Hi everyone, if you don't like Claude Desktop you're not alone. Here are some of the reasons why I don't like it:

Design: the chat interface sucks, with Anthropic's cartoonish fonts, color palette, and silly animations. I won't go deep into that, but I personally like the ChatGPT interface better (at the end of the day it's my personal choice, take it with a grain of salt).

Lack of control: you can't control the web search (depth, breadth, number of sources, image search, video search providers — yes, I like to search videos on YouTube and embed them into a canvas). You can't control how many tokens you're willing to burn on a specific prompt, or the number of agentic loops; all you get is an "Extended Thinking" toggle. Local MCP servers are a pain to set up — Anthropic pushes you toward Connectors or messing with local .json configs.

Privacy: there's no opt-out from keeping your conversation history on their servers, which means you're the product. And you'll never switch to a competitor or any open-source model in their app, because they try to lock you in.

Missing native integrations: I want to use my own tools, e.g. Apple Maps, Calendar, TradingView charts.

UX/Productivity: you can't fork a conversation or start a thread on a particular response, mentioning or tagging another model. Everything is getting bloated, with 10 new features shipping every week — code, cowork, artifacts, dispatch, etc. — all crammed into a single app and shoved down your throat. The feature creep is real, and it reminds me of the time messenger apps started adding games directly into the chat canvas.

Ok, enough rant and unproductive complaints. After experiencing all those pain points I decided to build my own app for BYOK users like myself, where I addressed most of those shortcomings. It's built entirely with Claude Code over ~3 months, some caffeine, endless UI iterations, and debugging SwiftUI issues. Here is what I shipped in the end: https://elvean.app (it's free to try with some basic features).
Although it's not the end — it's just the beginning. I'd love to hear everyone's perspective on where things are going with desktop AI apps, what features are missing, and which ones you'd like to see. submitted by /u/Conscious-Track5313
4o voice-to-voice alternative?
Does 4o via the API allow voice-to-voice talks? Real voice-to-voice, not TTS. I'm thinking of a local Open WebUI app with all of my memories connected there, plus the OpenAI API — possible? Or is it better to switch to Qwen Omni, for example? I don't know if Claude or Gemini have omni capabilities, but I've heard they're less like 4o, and Western models are more restricted than Chinese ones. Main use case: voice-to-voice-only talks on evening walks :) Myself, family, relationships, job, gigs, etc. You know, all of what 4o was capable of and 5.2-5.4 is not :/ submitted by /u/DentoNeh
[P] I built a simple GPU-aware single-node job scheduler for researchers / students
(Reposting on my main account because my anonymous account cannot post here.) Hi everyone! I'm a research engineer from a small lab in Asia, and I wanted to share a small project I've been using daily for the past few months. During paper prep and model development, I often end up running dozens (sometimes hundreds) of experiments. I found myself constantly checking whether GPUs were free, and even waking up at random hours just to launch the next job so my server wouldn't sit idle. I got tired of that pretty quickly (and honestly, I was too lazy to keep writing one-off scripts for each setup), so I built a simple scheduling tool for myself. It's basically a lightweight scheduling engine for researchers:

Uses conda environments by default
Open a web UI, paste your command (same as terminal), choose how many GPUs you want, and hit submit
Supports batch queueing, so you can stack experiments and forget about them
Has live monitoring + built-in logging (view in browser or download)

Nothing fancy, just something that made my life way easier. Figured it might help others here too. If you run a lot of experiments, I'd love for you to give it a try (and any feedback would be super helpful). Github Link: https://github.com/gjamesgoenawan/ant-scheduler submitted by /u/Zerokidcraft
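The core scheduling decision (are enough GPUs free to launch the next queued job?) can be sketched like this. The memory threshold and the nvidia-smi query mentioned in the comments are assumptions for illustration, not ant-scheduler's actual logic:

```python
def pick_free_gpus(mem_used_mb, n, threshold_mb=200):
    """Return indices of the first n GPUs whose memory usage is below the
    threshold, or None if not enough are free (the job stays queued).
    mem_used_mb: per-GPU memory usage in MB, e.g. parsed from
    `nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits`."""
    free = [i for i, used in enumerate(mem_used_mb) if used < threshold_mb]
    return free[:n] if len(free) >= n else None

def job_env(gpus):
    """Environment for the launched job: pin it to its assigned GPUs so
    queued experiments don't trample each other."""
    return {"CUDA_VISIBLE_DEVICES": ",".join(map(str, gpus))}
```

A scheduler loop then just polls: if `pick_free_gpus` returns an allocation, pop the next queued command and launch it with `job_env` merged into its environment; otherwise sleep and retry.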
Open source CLI that generates DALL-E images from terminal — wraps ChatGPT web UI, no API costs
Built a CLI + Claude Code skill that wraps ChatGPT's web interface. Main use case: generate and download DALL-E images without the browser or paid API.

cli-web-chatgpt chat image "Product mockup for a fitness app" -o mockup.png
cli-web-chatgpt chat image "Watercolor painting of a forest" -o forest.png
cli-web-chatgpt images list --json
cli-web-chatgpt images download -o saved.png

Also supports regular chat, conversation history, and model listing:

cli-web-chatgpt chat ask "Summarize the latest AI news" --json
cli-web-chatgpt conversations list
cli-web-chatgpt models

Because it's a Claude Code skill, Claude can use ChatGPT as a tool — ask Claude to "generate a DALL-E image of X" and it runs the commands automatically. How it works: one-time browser login via Camoufox (stealth Firefox), then everything runs headlessly. Uses your existing ChatGPT Plus subscription — zero extra API costs. Part of CLI-Anything-Web — an open source Claude Code plugin that generates CLIs + skills for any website. 14 sites covered: https://github.com/ItamarZand88/CLI-Anything-WEB submitted by /u/zanditamar
I built an open-source context framework for Codex CLI (and 8 other AI agents)
Codex is incredible for bulk edits and parallel code generation. But every session starts from zero — no memory of your project architecture, your coding conventions, your decisions from yesterday. What if Codex had persistent context? And what if it could automatically delegate research to Gemini and strategy to Claude when the task called for it? I built Contextium — an open-source framework that gives AI agents persistent, structured context that compounds across sessions. I'm releasing it today. What it does for Codex specifically: Codex reads an AGENTS.md file. Contextium turns that file into a context router — a dynamic dispatch table that lazy-loads only the knowledge relevant to what you're working on. Instead of a static prompt, your Codex sessions get:

Your project's architecture decisions and past context
Integration docs for the APIs you're calling
Behavioral rules that are actually enforced (coding standards, commit conventions, deploy procedures)
Knowledge about your specific stack, organized and searchable

The context router means your repo can grow to hundreds of files without bloating the context window. Codex loads only what it needs per session. Multi-agent delegation is the real unlock. This is where it gets interesting. Contextium includes a delegation architecture:

Codex for bulk edits and parallel code generation (fast, cheap)
Claude for strategy, architecture, and complex reasoning (precise, expensive)
Gemini for research, web lookups, and task management (web-connected, cheap)

The system routes work to the right model automatically based on the task. You get more leverage and spend less. One framework, multiple agents, each doing what they're best at.
What's inside:

Context router with lazy loading — triggers load relevant files on demand
27 integration connectors — Google Workspace, Todoist, QuickBooks, Home Assistant, and more
6 app patterns — briefings, health tracking, infrastructure remediation, data sync, goals, shared utilities
Project lifecycle management — track work across sessions with decisions logged and searchable via git
Behavioral rules — not just documented, actually enforced through the instruction file

Works with 9 AI agents: Claude Code, Gemini CLI, Codex, Cursor, Windsurf, Cline, Aider, Continue, GitHub Copilot.

Battle-tested: I've used this framework daily for months: 100+ completed projects, 600+ journal entries, 35 app protocols running in production. The patterns shipped in the template are the ones that survived sustained real-world use. Plain markdown. Git-versioned. No vendor lock-in. Apache 2.0.

Get started:

curl -sSL contextium.ai/install | bash

Interactive installer with a gum terminal UI — picks your agent, selects your integrations, optionally creates a GitHub repo, then launches your agent ready to go. GitHub: https://github.com/Ashkaan/contextium Website: https://contextium.ai Happy to answer questions about the Codex integration or the delegation architecture. submitted by /u/Ashkaan4
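The context-router idea, triggers that lazy-load files on demand, is simple to sketch. The route table and file paths below are invented for illustration; Contextium's real routes live in AGENTS.md:

```python
from pathlib import Path

# Hypothetical trigger table: (keywords) -> context file to load on demand.
ROUTES = {
    ("deploy", "release"):     "context/deploy-procedures.md",
    ("todoist", "task"):       "context/integrations/todoist.md",
    ("invoice", "quickbooks"): "context/integrations/quickbooks.md",
}

def load_context(task: str, root: Path) -> list[str]:
    """Lazy-load only the context files whose triggers match the task,
    so a large knowledge repo doesn't flood the context window."""
    lowered = task.lower()
    loaded = []
    for triggers, rel_path in ROUTES.items():
        if any(trigger in lowered for trigger in triggers):
            path = root / rel_path
            if path.exists():
                loaded.append(path.read_text())
    return loaded
```

The dispatch table is the whole trick: the instruction file stays small and static, while the knowledge it points at can grow to hundreds of files that are only ever read when a matching task comes up.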
For those missing chats: pinned chats are failing in the web UI. Here's the workaround.
If your chats look missing on ChatGPT Web, they may not actually be gone. In at least some cases, pinned chats are failing to load in the web UI. Workaround using the Requestly browser extension:

Install Requestly
Click New rule
Choose Query Param
Under If request, set: URL Contains /backend-api/pins
In the action section below, leave it on ADD
Set: Param Name = limit, Param Value = 20
Save the rule and refresh ChatGPT

That restored the missing pinned chats for me. Very short bug description: the ChatGPT web UI appears to be failing on the pinned chats request, so pinned chats do not render properly in the sidebar. If you want to report it to OpenAI, go to Profile picture → Help → Report a bug and paste this:

Title: Pinned chats not rendering on ChatGPT Web
Pinned chats are failing to render on ChatGPT Web, which can make chats appear missing in the sidebar. The issue appears to be in the web UI path for the pinned chats request. Expected behavior: Pinned chats should render normally on web.

submitted by /u/__nickerbocker__
Repository Audit Available
Deep analysis of open-webui/open-webui — architecture, costs, security, dependencies & more
Open WebUI uses a tiered pricing model. Visit their website for current pricing details.
Key messaging scraped from the site includes: "A home for AI", "367,151 members sharing what they've built", "Everything AI offers. Available now.", "AI for every organization", and "Open WebUI is being built so everyone can run AI for themselves", alongside the Product, Community, and Company sections.
Based on user reviews and social mentions, the most common pain point reported is cost tracking.
Based on 20 social mentions analyzed, 0% of sentiment is positive, 100% neutral, and 0% negative.