Make employees, applications and networks faster and more secure everywhere, while reducing complexity and cost.
Based on the social mentions provided, users view Cloudflare primarily as a reliable infrastructure platform for hosting AI and development projects. Developers frequently mention using Cloudflare's services (R2 storage, D1 database, Workers, KV cache) alongside other platforms like Vercel and Supabase for deploying AI-powered applications and websites. Users appreciate Cloudflare as a cost-effective hosting alternative, with one developer specifically noting it as a free option compared to expensive services like Squarespace. The platform appears to have strong developer mindshare in the AI/ML community, being consistently chosen for backend infrastructure in various coding projects and experiments.
Mentions (30d): 10 (8 this week)
Reviews: 0
Platforms: 2
Sentiment: 0% (0 positive)
Features / Use Cases
npm packages: 20
HuggingFace models: 23
Pricing found: $5, $5, $10, $3, $5
This sub made my app viral & got me an invite to apply at the Claude Dev Conference in SF. So, I built caffeine half life & sleep health tooling for everyone.
Hey r/ClaudeAI! A little while back I shared my Caffeine Curfew app on here and it completely blew up. Because of that amazing viral response, I actually got invited to apply for the Claude developer conference. I am so incredibly grateful to this community, and I really wanted to find a way to give back and share the core tooling with you all for completely free.

I built an MCP server for Claude Code and the Claude mobile app that tracks your caffeine intake over time and tells you exactly when it is safe to sleep. Have you ever had a late afternoon coffee and then wondered at midnight why you are staring at the ceiling? This solves that problem using standard pharmacological decay modeling. Every time you log a drink, the server stores it and runs a decay formula. It adds up your whole history to give you a real-time caffeine level in mg. Then it looks forward in time to find the exact minute your caffeine drops below your sleep interference threshold. The default half life is five hours and the sleep threshold defaults to 25mg, but both are adjustable since everyone is different!

The tooling is ridiculously easy to use. There are zero complicated parameters to memorize. Once connected, it remembers your history automatically and you just talk to Claude naturally:

• "Log 150mg of coffee, I just had it"
• "When can I safely go to bed tonight?"
• "If I have another espresso right now how late would I have to stay up?"
• "Show me my caffeine habits for the last thirty days"

Under the hood, there are eight simple tools powering this:

• log_entry: Log a drink by name and mg
• list_entries: See your history
• delete_entry: Remove a mistaken entry
• get_caffeine_level: Current mg in your system right now
• get_safe_bedtime: Earliest time you can safely sleep
• simulate_drink: See how another coffee shifts your bedtime before you even drink it
• get_status_summary: Full picture with a target bedtime check
• get_insights: Seven or thirty day report with trend direction and peak days

I am hosting this server on my Mac Mini behind a Cloudflare Tunnel. It features strict database isolation, meaning every single person gets a unique URL and your data is totally separate from everyone else's. No login, no signup, no account. Want to try it out? Just leave a comment below and I will reply with your personal key! Once you get your key, you just paste the URL into your Claude desktop app under Settings then Connected Tools, or drop it into your Claude desktop config file.

For the tech people curious about the stack: Python, FastMCP, SQLite, SSE transport, Cloudflare Tunnel, and launchd for auto start. The user isolation uses an ASGI middleware that extracts your key from the SSE connection URL and stores it in a ContextVar, ensuring every tool call is automatically scoped to the right user without any extra steps.

If you would rather host it yourself, you can get it running in about five minutes. I have the full open source code on GitHub here: https://github.com/garrettmichae1/CaffeineCurfewMCPServer. The repo readme has all the exact terminal commands to get your own tunnel and server up and running.

Original App: https://apps.apple.com/us/app/caffeine-curfew-caffeine-log/id6757022559 (The MCP server does everything the app does, but better, aside from maybe the presentation of the data itself.)

Original Post: https://www.reddit.com/r/ClaudeCode/s/FsrPyl7g6r

submitted by /u/pythononrailz
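The decay math the post describes is simple enough to sketch. Assuming the quoted defaults (five-hour half life, 25mg sleep threshold), a minimal model could look like this; function names here are illustrative, not the server's actual tool implementations:

```python
from datetime import datetime, timedelta

HALF_LIFE_HOURS = 5.0      # default from the post; adjustable per user
SLEEP_THRESHOLD_MG = 25.0  # default sleep-interference threshold

def caffeine_level(entries, now):
    """Sum the decayed contribution of every logged drink.

    entries: list of (timestamp, mg) tuples; now: datetime.
    Each dose decays as mg * 0.5 ** (hours_elapsed / half_life).
    """
    total = 0.0
    for ts, mg in entries:
        hours = (now - ts).total_seconds() / 3600.0
        if hours >= 0:
            total += mg * 0.5 ** (hours / HALF_LIFE_HOURS)
    return total

def safe_bedtime(entries, now, step_minutes=1):
    """Scan forward minute by minute until the level drops to the threshold."""
    t = now
    while caffeine_level(entries, t) > SLEEP_THRESHOLD_MG:
        t += timedelta(minutes=step_minutes)
    return t
```

With one 200mg coffee at noon, the level halves every five hours, so the safe bedtime under these defaults lands fifteen hours later, at 3am.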
I had Claude Opus 4.6 write an air guitar you can play in your browser — ~2,900 lines of vanilla JS, no framework, no build step
I learned guitar on and off during childhood and still consider myself a beginner. I also took computer vision classes in grad school and have been an OpenCV hobbyist. I finally found an excuse to combine the two — and Claude wrote the entire thing.

Try it: https://air-instrument.pages.dev

It's an air guitar that runs in your browser. No app, no hardware — just your webcam and your hand. It plays chords, shows a strum pattern, you play along, and it scores your timing. ~2,900 lines of vanilla JS, all client-side, no framework, no build step. Claude Opus 4.6 wrote the code end to end.

What Claude built:

• Hand tracking with MediaPipe — raw tracking data is jittery enough to trigger false strums at 60fps. Claude implemented two layers of smoothing (5-frame moving average + exponential smoothing) to get it from twitchy to feeling like you're actually moving something physical across the strings.
• Karplus-Strong string synthesis — no audio files anywhere. Every guitar tone is generated mathematically: white noise through a tuned delay line that simulates a vibrating string. Three tone presets (Warm, Clean, Bright). Claude nailed this on the first pass — the algorithm is elegant and the result sounds surprisingly real.
• Velocity-sensitive strum cascading — hand speed maps to both loudness and string-to-string delay. Fast sweeps cascade tightly (~3ms between strings), slow sweeps spread out (~18ms). This was Claude's idea and it's what makes it feel like actual strumming rather than triggering a chord sample.
• Real-time scoring — judges timing (Perfect/Great/Good/Miss) with streak multipliers and a 65ms latency compensation offset to account for the smoothing pipeline.
• Serverless backend — Cloudflare Workers + KV caching for a Songsterr API proxy. Search any song, load its chords, play along.

The hardest unsolved problem (where I'd love community input): On a real guitar, your hand hits the strings going down and lifts away coming back up. That lift is depth — a webcam can't see it. So every hand movement was triggering sound in both directions. Claude's current fix: the guitar body has two zones. Left side only registers downstrokes. Right side registers both. Beginners stay left, move right when ready. It works surprisingly well, but I'd love a better solution. If anyone has experience extracting usable depth from monocular hand tracking, I'm all ears.

What surprised me about working with Claude: Most guitar apps teach what to play. Few teach how to strum — and it's the more tractable CV problem. I described that framing to Claude and it ran with it. The velocity-to-cascade mapping, the calibration UI, the strum pattern engine — I described what I wanted at a high level and Claude handled the implementation. The Karplus-Strong synthesis in particular was something I wouldn't have reached for on my own.

Strum patterns were the one thing Claude couldn't help with. Chord progressions are everywhere online, but strum patterns almost never exist in structured form. Most live as hand-drawn arrows in YouTube tutorials. I ended up transcribing them manually, listening to each song, mapping the down-up pattern beat by beat. Still a work in progress.

Building this has taught me more about guitar rhythm than years of picking one up occasionally ever did.

submitted by /u/Ex1stentialDr3ad
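Karplus-Strong is compact enough to show in a few lines. This is a generic textbook sketch of the algorithm named above (a noise burst fed through a tuned delay line with averaging), not the app's actual JS code:

```python
import random

def karplus_strong(frequency, sample_rate=44100, duration=1.0, decay=0.996):
    """Karplus-Strong plucked-string synthesis.

    A burst of white noise circulates through a delay line whose length
    sets the pitch; averaging adjacent samples acts as a low-pass filter,
    giving the bright attack and mellow decay of a plucked string.
    """
    n = int(sample_rate / frequency)  # delay-line length determines pitch
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for _ in range(int(sample_rate * duration)):
        first = buf.pop(0)
        # average two adjacent samples and damp slightly -> string-like decay
        new = decay * 0.5 * (first + buf[0])
        buf.append(new)
        out.append(first)
    return out
```

Writing the returned samples to a WAV file (e.g. with the stdlib `wave` module) produces a surprisingly guitar-like pluck; the `decay` constant controls how quickly the note dies away.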
Free MCP server I built: gives Claude access to 11M businesses with phone/email/hours, no Google Places API needed
Hi r/ClaudeAI 👋 I built and published a free MCP server for Claude Desktop / Claude Code that gives Claude access to a structured directory of 11M+ real businesses across 233 countries — phone numbers, opening hours, emails, addresses, websites, geo coordinates. It's called agentweb-mcp. Free signup, no credit card, runs on a single VPS I pay for personally.

What you can ask Claude after installing it:

• "Find me 3 vegan restaurants near 51.51, -0.13 within 2 km, with phones"
• "What time does that bakery in Copenhagen open on Sundays?"
• "Search for dentists in Berlin Mitte with verified opening hours"
• "I'm in Tokyo — find a 24/7 pharmacy near my coordinates"
• "List all hardware stores in Dublin with a website"

Plus write-back tools so Claude can also contribute:

• "Add this restaurant I just visited to AgentWeb" (auto-dedupes by name+coords+phone)
• "Report that the dentist on Hauptstrasse closed" (3+ closed reports auto-lower trust score)

Install (60 seconds): Get a free key at https://agentweb.live/#signup, then add to claude_desktop_config.json:

{
  "mcpServers": {
    "agentweb": {
      "command": "npx",
      "args": ["-y", "agentweb-mcp"],
      "env": { "AGENTWEB_API_KEY": "aw_live_..." }
    }
  }
}

Restart Claude Desktop. Done.

Why I built it: I needed business data in agent-native format, and Google Places costs ~$17 per 1k lookups, which is fine for human apps but instantly painful for any agent doing meaningful work. OpenStreetMap has the data but Overpass query syntax is rough for LLMs to generate. I wanted something Claude could just call as a tool with no friction.

How I built it (the part that might help anyone making their own MCP). A few things I learned along the way that I'd recommend to anyone building an MCP server:

**Make at least one tool work without an API key.** Most MCP servers gate everything behind auth. Mine has a "substrate read" — agentweb_get_short — that hits a public endpoint with no key required and returns the business in 700 bytes instead of 3-5KB. Single-letter JSON keys, schema documented at /v1/schema/short. ~80% token savings on bulk lookups. Lowering friction with zero auth on the most common path is the single biggest win for adoption.

**The MCP server itself is tiny.** ~400 lines of TypeScript. It's just a thin protocol adapter — search_businesses → /v1/search, get_business → /v1/r/{id}, etc. The real work is in the FastAPI backend behind it (Postgres + PostGIS for geo, Redis for hot caching, Cloudflare in front). If you're starting an MCP, build the REST API first and treat the MCP layer as the last 5% of work.

**Postgres is enough for "AI-native" infrastructure.** I almost migrated to ClickHouse for analytics performance, but the actual fix was just refreshing the visibility map (VACUUM) and adding composite indexes. Postgres + pgvector handles geo, full-text, JSONB, and vector search in one engine. The boring database is the right database.

**Per-field provenance + confidence scores matter for agents.** Every record returned has src (jsonld / osm / owner_claim) and t (trust score 0-1). Agents can filter on these. I think this is going to be table stakes for any agent-data API in 18 months.

**Owner-claimable in 30 seconds, no website required.** Most directories require businesses to verify via website or Google Business, so long-tail businesses (the bakery on the corner) get locked out. Mine lets the owner claim with email-at-domain verification; it takes 30 seconds, no website needed. This is the moat I'm betting on long-term.

Honest limitations:

• Phone coverage varies by country. Nordics + Western Europe are great (60-80% coverage). Parts of SE Asia and Africa are sparse.
• Some rows are stale; I have enrichment workers running continuously but it's not Google-perfect yet.
• The free tier has rate limits, but they're generous for personal use.

Free, MIT licensed.
Source: github.com/zerabic/agentweb-mcp
npm: https://www.npmjs.com/package/agentweb-mcp
Live demo + manifesto: https://agentweb.live

Happy to answer any technical questions, particularly about the token-efficient shorthand format, the substrate architecture, or the matview-based aggregate cache. Built solo over a few weeks.

submitted by /u/ZeroSubic
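To illustrate the single-letter-key idea: a client can expand compact records into readable field names with a small lookup table. The mapping below is hypothetical — the real schema lives at /v1/schema/short — but it shows why the format costs so few tokens:

```python
# Hypothetical short-key schema for illustration; the server documents
# its actual mapping at /v1/schema/short.
SHORT_KEYS = {
    "n": "name", "p": "phone", "a": "address", "w": "website",
    "g": "geo", "t": "trust", "s": "src",
}

def expand_short_record(short):
    """Expand a compact single-letter-key record into readable field names.

    Unknown keys pass through unchanged, so schema additions don't break
    old clients.
    """
    return {SHORT_KEYS.get(k, k): v for k, v in short.items()}
```

The agent pays for the terse form on the wire and only the client-side expansion (which costs no tokens) restores readability.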
I built a personal finance dashboard using Claude
It started as a simple Python script. Now it's a full-stack app that brings all your investments into one place — stocks, mutual funds, physical gold, fixed deposits, and more. It runs entirely on my spare PC and is served via Cloudflare Tunnel. https://metron.thecoducer.com/

Here's the part I care about the most. It doesn't just show what you own; it shows what you're actually exposed to, and it breaks that down, so before you buy a stock you can see if you're already overexposed to it through your funds. It can also parse your CAMS CAS statement and show you detailed transaction insights.

A few things worth knowing:

- Your data stays with you — everything is stored in your own Google Sheets on your Google Drive. No databases used.
- You can sync holdings via Zerodha or add them manually
- NSDL/CDSL CAS support is coming soon

This project is part of my personal learning journey to explore what it really means to build a full system with AI, not just a toy app. While AI was helpful, it still struggles with writing clean, modular code and designing scalable systems. Getting things right required a lot of iteration and careful prompting. That said, the process was genuinely fun and eye-opening.

If you try it out, I'd genuinely love your feedback, especially what feels missing or broken.

submitted by /u/tenantoftheweb
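The overexposure check described above boils down to look-through aggregation: add each fund's value-weighted holdings to your direct positions. A minimal sketch, with made-up tickers and weights:

```python
def lookthrough_exposure(direct, funds):
    """Combine direct stock holdings with fund look-through weights.

    direct: {ticker: invested_value}
    funds:  {fund_name: (fund_value, {ticker: portfolio_weight})}
    Returns the total value exposed to each ticker, which is what an
    overexposure check like the dashboard's would compare against.
    """
    exposure = dict(direct)
    for _, (fund_value, weights) in funds.items():
        for ticker, w in weights.items():
            exposure[ticker] = exposure.get(ticker, 0.0) + fund_value * w
    return exposure
```

So 10,000 held directly in a stock plus a 50,000 fund holding it at 8% means a true exposure of 14,000 — the kind of hidden overlap the dashboard surfaces before you buy more.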
Sharing AI agent outputs with each other was a nightmare, so we built md.page
My friend and I are both heavy Claude users. But we kept running into the same annoying problem: how do you actually share the stuff your agents produce?

The daily struggle:

• "Dude, Claude just wrote the perfect research!"
• Screenshots 10 parts of a markdown response
• "Can you copy-paste that into DM?"
• Formatting completely breaks
• "Ugh, nevermind..."

Sound familiar? 😤

Our solution: We built md.page out of pure frustration. Now when my agent writes something cool, I can instantly share it as a proper webpage. No sign-up, no payment; just ask the agent to use it.

How Claude Code helped us build the solution: We didn't just use Claude for snippets; we let Claude Code drive the entire ship. It's effectively a 100% AI-architected project:

• The foundation: Claude designed the entire Cloudflare Workers architecture and handled the complex markdown-it configurations for perfect rendering.
• DevOps & deployment: It scripted the full CI/CD pipeline and managed the deployment to production.
• Security hardening: It ran its own security audits, implemented rate limiting, and handled input validation to prevent XSS/injection.
• Quality control: It wrote the entire test suite and, coolest of all, now helps us review and approve community PRs.
• The CLI: It built the npx mdpage-cli from scratch so you can publish straight from your terminal.

What Claude Code can do with md.page:

• Sharing Claude's code explanations between team members
• Publishing Claude's generated documentation
• Getting Claude's outputs out of your terminal and into the world
• Actually readable formatting when you send links in Slack/Discord/Telegram/any other DM tool

Try it: https://md.page (free + open source)

submitted by /u/Educational-Cause-53
Context full? MCP server list unwieldy? I replaced at least 75 MCP servers and made some new ones, like Deletion Interception. Looking for beta testers for my self-modding, network-scanning, system-auditing powerhouse MCP — so avant-garde it's sassy. The beta zip has all the optional dependencies.
I did a thing. This isn't a one-prompt-and-done vibe-coded disaster. I've been building and debugging it for weeks; hundreds of hours to sync tooling across sessions and systems. This is not a burner account; it's the newly created one for my LLC. Try it out, I don't think you'll go back to the old way. Stay sassy, folks. Summary of the tool below (apologies for the sales monster in me):

I'll cut straight to it. The MCP ecosystem is a mess. You need file operations — install Filesystem. Terminal? Desktop Commander. GitHub? The official GitHub MCP server — which has critical SHA-handling bugs that silently corrupt your commits. Desktop automation? Windows-MCP. Android? mobile-mcp. Memory? Anthropic's memory server. SSH? Pick from 7 competing implementations. Screenshots? OCR? Clipboard? Network scanning? Each one is another server, another config block, another chunk of your context window gone.

I've been building SassyMCP: a single Windows exe that consolidates all of this:

• 257 tools across 31 modules — filesystem, shell (PowerShell/CMD/WSL), desktop automation, GitHub (80 tools with correct SHA handling), Android device control via ADB, phone screen interaction with UI accessibility tree, network scanning (nmap), security auditing, Windows registry, process management, clipboard, Bluetooth, event logs, OCR (Tesseract), web inspection, SSH remote Linux, persistent memory, and self-modification with hot reload
• 34MB standalone exe — no Python install, no npm, no Docker. Download and run.
• Beta zip ships with ADB, nmap, plink, scrcpy, and Tesseract OCR bundled — nothing extra to install
• Smart loading — only loads the tool groups you actually use, so you're not burning 25K tokens of context on tool definitions you never touch
• Works with Claude Desktop, Grok Desktop, Cursor, Windsurf — stdio and HTTP transport

A few things I think are worth highlighting that I haven't seen in other MCP servers:

Phone pause/resume with sensitive context detection. The AI operates your Android phone, hits a login screen, and the interaction tools automatically refuse to execute. It reads the UI accessibility tree, detects auth/payment/2FA screens, and stops. You log in manually, tell it to resume, and it picks up where it left off — aware of everything it observed while paused.

Safe delete interception. AI agents hallucinate destructive commands. Every delete-family command (rm, del, Remove-Item, rmdir, etc.) across all shells is intercepted. Instead of destroying your files, targets get moved to a _DELETE_/ staging folder in the same directory for you to review. Because "the AI deleted my project" shouldn't be a thing.

The GitHub module actually works. The official GitHub MCP server has a well-documented bug where it miscalculates blob SHAs, leading to silent commit corruption. SassyMCP uses correct blob SHA lookups, proper path encoding, atomic multi-file commits via the Git Data API, retry logic with exponential backoff, and rate-limit awareness. It also strips 40-70% of the URL metadata bloat from API responses so you're not wasting context on gravatar_url and followers_url fields.

Here's what it replaces, specifically:

| Domain | Servers replaced | Notable alternative |
|---|---|---|
| Filesystem / editing | 11 | Anthropic's Filesystem |
| Shell / terminal | 5 | Desktop Commander (5.9k stars) |
| Desktop automation | 9 | Windows-MCP (5k stars) |
| GitHub / Git | 5 | GitHub MCP Server (28.6k stars) |
| Android / phone | 9 | mobile-mcp (4.4k stars) |
| Network + security | 16 | mcp-for-security (601 stars) |
| SSH / remote Linux | 7 | ssh-mcp (365 stars) |
| Memory / state | 7 | mcp-memory-service (1.6k stars) |
| Windows system | 13 | Windows-MCP (5k stars) |

It's free, it's open source (MIT), and the beta is fully unlocked — all 257 tools, no gating.

Download: github.com/sassyconsultingllc/SassyMCP/releases. The zip package includes the exe plus all external tools bundled. Unzip, run start-beta.bat, add the custom connector to the URL it creates. Full readme within.

I'm looking for beta testers who are actually using MCP daily and are sick of the fragmentation. If something doesn't work, open an issue. I'm not going to pretend this is perfect — it's a beta. But it works, it's fast, and it's one config block instead of ten. Windows only for now. If there's enough interest I'll look at macOS/Linux.

submitted by /u/CapableOrange6064
Memora v0.2.25 — MCP memory server for Claude, 5× faster writes on D1
Memora is a lightweight MCP server that gives Claude persistent memory — semantic search, knowledge graph, cross-session recall. SQLite local or Cloudflare D1 / S3 / R2 remote. Just cut v0.2.25.

Headline: memory_create / memory_update on D1 drop from 10s+ → ~2s per call.

What was slow:

• ensure_schema() was firing 7–9 D1 round-trips on every tool call (~4–8s wasted each)
• Crossref scan was a two-step list + get_embeddings pattern (~10 round-trips on a 500-memory store)
• D1 session token was class-level and got stomped by background threads

What changed:

• Schema cached per backend instance, paid once at connect
• Crossref scan rewritten as a single paginated LEFT JOIN
• Session token moved per-instance with a backend-level keep-max bookmark mirror

Measured on live D1:

• memory_create: 10s+ → ~1.8s
• memory_update: 10s+ → ~1.1s
• connect(), 2nd call onward: ~4–8s → ~0ms (cache hit)

Plus: Durable Object request reduction (lower Cloudflare bill), XSS fix in graph UI, schema cache correctness fix for CloudSQLiteBackend file swaps. No schema migration, no API changes, 39/39 tests green.

Release: https://github.com/agentic-box/memora/releases/tag/v0.2.25

submitted by /u/spokv
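The schema-caching fix is a generally useful pattern for chatty remote backends. A toy before/after sketch, with round-trips simulated by a counter (illustrative, not Memora's actual classes):

```python
class D1Backend:
    """Pay the schema check once per backend instance instead of on
    every tool call, as the release notes above describe."""

    def __init__(self):
        self.round_trips = 0
        self._schema_ready = False

    def _run(self, sql):
        # stand-in for one D1 HTTP round-trip (the expensive part)
        self.round_trips += 1

    def ensure_schema(self):
        if self._schema_ready:  # cache hit: no network at all
            return
        for stmt in (
            "CREATE TABLE IF NOT EXISTS memories (id TEXT, body TEXT)",
            "CREATE INDEX IF NOT EXISTS idx_memories ON memories (id)",
        ):
            self._run(stmt)
        self._schema_ready = True

    def memory_create(self, text):
        self.ensure_schema()  # amortized: schema cost paid once, not per call
        self._run("INSERT INTO memories VALUES (?, ?)")
```

The first write pays the schema round-trips; every later write is a single insert, which is where a 10s+ → ~2s improvement on a high-latency backend comes from.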
Claude Code Game Development Log - Week 2 LUMINAL
Two weeks ago today, I started working on an experiment with three.js, building out a little line rider game agentically. It's a routine I've tried with a handful of models over the past few years, but none have made it this far. I have not written a line of code and have no experience with game dev (but I am a web developer by trade, so I can follow along). I'm quite confident that anyone with a vision and persistence/passion could spit out a quality game in a few months with a 5x or 20x sub. You may just have to learn a few painful lessons I got to skip over.

The biggest things that took me a while to stumble into, coming to game dev and agentic programming mostly blind:

• Git worktrees to avoid collisions. Still maintain a develop branch and work exclusively on feature branches (mistakes can happen at 4 am). Close sessions and start new ones once a feature is complete.
• TypeScript early, unit tests early. Include unit tests in every plan while the context is there, instead of a random pass later. Audit your tests regularly. Don't put off E2E too long unless you really enjoy QAing.
• Get a feel for when you're asking Claude to do the same thing multiple times, and confirm there's shared infrastructure for it. If not, build it before you have to QA the same thing 13 times.
• Superpowers plugin, for its lovely skill builder and brainstorming. Everything from deployment processes to recurring maintenance of architecture and roadmap mds.
• $20 Codex sub, used for code-reviewing Claude's work, building out spec for Claude, and making targeted UI tweaks (it's much better at receiving UI guidance than Claude via image + "fix this thing").
• WARP. Much better than any other terminal setup I've tried.
• SLIDERS. AI really seems to struggle with certain matters of taste: things like selecting unique color palettes, bloom levels, what a procedural engine should sound like, etc. Having Claude build out full sets of admin-only sliders and toggles (I'm talking hundreds) for everything from bloom to color maps to procedural sounds, plus a JSON export/import to feed them back to him, made all the difference.
• 21st.dev / CodePen / Sketchfab & community assets. Some things just aren't worth starting from scratch yet.

WORDS OF WISDOM

You can build faster than you can bug-fix. I am still dealing with the fallout of adding too much too fast and will likely be spending the next 2-3 weeks on polish and writing full browser tests. Don't get too ahead of yourself; you'll regret it later.

Spend all of your downtime planning. I personally use Workflowy and probably have a solid 20-30 plans/thoughts/uncategorized bug fixes that I'll flesh out fully with ChatGPT/Codex while waiting for Claude to do his thing or for usage limits to reset.

Have fun. I don't plan on monetizing this thing yet, but I can confidently say I've already learned a ton and have been directly applying it at my place of work.

Tech stack: TypeScript, Three.js, Vite. Firebase (Auth, RTDB, Firestore, Hosting, Cloud Functions). The game server is just Node.js running the ws websocket library. Vitest + ESLint. Netcode has been fun: deterministic lockstep simulation syncing only input deltas over a WebSocket relay (Cloudflare Workers). Seeded RNG + frame-hash desync detection for consistent state across clients. Seems to be holding up, but needs more work on reconnecting.

https://luminal.live/

Still a buggy mess, but I hope y'all have fun with it in its current state. Catch y'all next week :)

submitted by /u/Jaded-Comfortable179
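The netcode approach mentioned in the tech stack (seeded RNG plus frame hashing) can be sketched in a few lines. This toy reduces game state to one integer; the names and update rule are illustrative, not LUMINAL's code:

```python
import hashlib
import random

class LockstepSim:
    """Toy deterministic lockstep: same seed + same input deltas on
    every client produce identical state, and per-frame hashes let
    clients detect the moment they diverge."""

    def __init__(self, seed):
        self.rng = random.Random(seed)  # seeded RNG, identical stream on all clients
        self.state = 0

    def step(self, input_delta):
        # any deterministic update rule works; only inputs cross the network
        self.state = (self.state * 31 + input_delta + self.rng.randrange(100)) % 10**9
        return self.frame_hash()

    def frame_hash(self):
        # clients exchange these small hashes; a mismatch means desync
        return hashlib.sha256(str(self.state).encode()).hexdigest()
```

Two clients constructed with the same seed and fed the same input deltas emit identical hashes every frame; the first differing input makes the hashes diverge, which is exactly the desync signal the relay watches for.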
I open-sourced my AI-curated Reddit feed (Self-hosted on Cloudflare, Supabase, and Vercel)
A week ago I shared a tool I built that scans Reddit and surfaces the actually useful posts about vibecoding and AI-assisted development. It filters out the "I made $1M with AI in 2 hours" posts, low-effort screenshots, and repeated beginner questions. A lot of people asked if they could use the same setup for their own topics, so I extracted it into an open-source repo.

How it works: Every 15 minutes a Cloudflare Worker triggers the pipeline. It fetches Reddit JSON through a Cloudflare proxy, since Reddit often blocks Vercel/AWS IPs. A pre-filter removes low-signal posts before any AI runs. Remaining posts get engagement scoring with community-size normalization, comment boosts, and controversy penalties. Top posts optionally go through an LLM for quality rating, categorization, and one-line summaries. A diversity pass prevents one subreddit from dominating the feed.

The stack:

- Supabase for storage
- Cloudflare Workers for cron + Reddit proxy
- Vercel for the frontend
- AI scoring optional, about $1-2/month with Claude Haiku

What you get: dark-themed feed with AI summaries and category badges, daily archives, RSS, weekly digest via Resend, anonymous upvotes, and a feedback form.

Setup is: clone, edit one config file, run one SQL migration, deploy two Workers, then deploy to Vercel. The config looks like this:

const config = {
  name: "My ML Feed",
  subreddits: {
    core: [
      { name: "MachineLearning", minScore: 20, communitySize: 300_000 },
      { name: "LocalLLaMA", minScore: 15, communitySize: 300_000 },
    ],
  },
  keywords: ["LLM", "transformer model"],
  communityContext: `Value: papers with code, benchmarks, novel architectures.
    Penalize: hype, speculation, product launches without technical depth.`,
};

GitHub: github.com/solzange/reddit-signal

Built with Claude Code. Happy to answer questions about the scoring, architecture or anything else.

submitted by /u/solzange
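The scoring pass described above can be sketched as a single function. The weights and caps below are made up for illustration; the repo defines its own:

```python
import math

def engagement_score(upvotes, comments, upvote_ratio, community_size):
    """Toy engagement score with the three ingredients named above:
    community-size normalization, a comment boost, and a controversy
    penalty. All constants are illustrative."""
    # normalize so a 100-upvote post in a huge sub scores below the
    # same post in a small sub
    normalized = upvotes / math.log10(max(community_size, 10))
    # comments signal discussion; capped so mega-threads don't dominate
    comment_boost = 1.0 + min(comments / 50.0, 1.0)
    # a low upvote ratio (controversial post) shrinks the score sharply
    controversy_penalty = upvote_ratio ** 2
    return normalized * comment_boost * controversy_penalty
```

Under this shape, a controversial post (ratio 0.5) scores roughly a quarter of an uncontroversial one with the same raw numbers, and identical engagement counts for more in a smaller community.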
Claude coding an iOS flight tracker in 3 weeks - the good, the bad and the ugly
TLDR: once you go above the basic app level, things start to get messy.

What the app has:

• a backend engine deployed on Cloudflare premium: D1 storage, KV cache, crons and a heavy-processing worker, including a custom push notification server
• a data ingestion pipeline from the Airlabs API
• an iOS UI with Map view, Traffic Pulse score, and alerts set up

As you can see, it's not the easy-peasy app you vibe code in a weekend. It took me no less than 82 builds to reach the App Store submission level.

Challenges:

• Maintaining high-level features on a small API budget. I had to constantly go back and forth with Claude on what to include in any feature and how to get the data for that feature via the Airlabs API, while not breaking the bank with extra API calls. I had at least 4 or 5 major iterations here. I started with the aviastack API and eventually moved to Airlabs, finding the core set of features and sticking to it.
• Initially, the app had a chat window in which you could ask an LLM about the current state of the skies. Typical "nail looking for a hammer" situation. I decided to abstract that into the visual pulse and a little briefing based on user alerts. I'm using on-device Apple Intelligence for summarization, no extra calls. No more chat window.
• Keeping the codebase clean and maintainable. That required daily human code audits, and driving Claude around all kinds of pitfalls. I ended up adding a lot of coding "best practices" in Claude MD ("do not alter the anomaly engine and make sure traffic pulse works after each feature implementation").

A-ha moments: Seeing an actual flight disturbance forming, reaching a climax, and then slowly disappearing. I find this fascinating, and checking the "traffic pulse" has become a daily routine.

Curious about your feedback: https://apps.apple.com/us/app/flight-lens-traffic-pulse/id6759946030

submitted by /u/dragosroua
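The small-API-budget constraint described above is the kind of problem a thin caching guard helps with: serve a recent cached result instead of re-fetching, and refuse outright once the quota is spent. A generic sketch (not the app's code), with an explicit `now` parameter so the TTL logic is deterministic:

```python
class ApiBudget:
    """Cache-first guard for a metered upstream API (illustrative)."""

    def __init__(self, monthly_calls, ttl_seconds=300):
        self.remaining = monthly_calls
        self.ttl = ttl_seconds
        self.cache = {}  # key -> (fetched_at, value)

    def fetch(self, key, fetcher, now):
        cached = self.cache.get(key)
        if cached and now - cached[0] < self.ttl:
            return cached[1]  # fresh enough: costs nothing against the budget
        if self.remaining <= 0:
            raise RuntimeError("API budget exhausted")
        self.remaining -= 1
        value = fetcher(key)
        self.cache[key] = (now, value)
        return value
```

With a pattern like this, repeated lookups for the same flight or airport within the TTL window hit the cache, so the paid upstream (Airlabs, in the post's case) is only called when data is actually stale.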
Checklist for release.
Hi, I have been vibe coding an API plus a fully custom website connected to the API to handle keys, and also a CMS for myself, like a control center. It has been 6 months of non-stop coding. I started with 2 Gemini 2.5 accounts, then 3.0, and 3.1 couldn't handle it anymore and is completely useless now, so I switched to 2 Claude accounts, and now just 1 Claude Max is enough and the best.

I have been reading a lot of posts lately from real devs saying vibe coding a real app/project is impossible, and that there are so many edge cases and crazy database layers to handle that no vibe coder or Claude could ever fix/make them. I was wondering if any real full-stack coder could point me in the right direction to find basic checks, more advanced security checks/fixes, and guidance on handling high volumes. In the past 2 months I have fixed over 10,000 bugs, security vulnerabilities, edge cases and so on; now when I push Claude, it really doesn't find anything, not even low-level bugs. But of course I want to be sure.

To sum up, I want to be sure that:

- the custom Stripe subscription system is perfectly handled in all cases (CRON properly charges, monthly, yearly, retries, etc.)
- API usage is perfectly handled on all endpoints
- account management works
- database access is always possible

I also have an overage system like Claude's. Is there any info I need to know about this? Important stuff? Users can add balance, still use the API, turn it off, auto-fill, etc.

I use Cloudflare (Pro), Supabase (Pro), WordPress (custom plugin + custom PHP pages). Thank you in advance.

submitted by /u/francesco_puig
I used Claude Code to build a portable AI worker Desktop from scratch — the open-source community gave it 391 stars in 6 days
I want to share something I built with Claude Code over the past week because it shows what AI-assisted development can actually do when pointed at a genuinely hard problem: moving AI agents beyond one-off task execution. Most AI wrappers just send prompts to an API. Building a continuously operating AI worker requires queueing, harness integration, and MCP orchestration. I wanted a way to make AI worker environments fully portable. No widely adopted solution had cleanly solved the "how do we package the context, tools, and skills so anyone can run it locally" problem effectively. What Claude Code did: I pointed Claude (Opus 4.6 - high thinking) at the architecture design for Holaboss, an AI Worker Desktop. Claude helped me build a three-layer system separating the Electron desktop UI, the TypeScript-based runtime system, and the sandbox root. It understood how to implement the memory catalog metadata, helped me write the compaction boundary logic for session continuity, and worked through the MCP orchestration so workspace skills could be merged with embedded runtime skills seamlessly. The result is a fully portable runtime. Your AI workers, along with their context and tools, can be packaged and shared. It's free, open-source (MIT), and runs locally with Node.js (desktop + runtime bundle). It supports OpenAI, Anthropic, OpenRouter, Gemini, and Ollama out of the box. I open-sourced this a few days ago and the reaction has been unreal. The GitHub repo hit 391 stars in just 6 days. The community is already building on top of the 4 built-in worker templates (Social Operator, Gmail Assistant, Build in Public, and Starter Workspace). This was so far from the typical "I used AI to write a to-do app." This was Claude Code helping architect a real, local, three-tier desktop and runtime system for autonomous AI workers. And people are running it on their Macs right now (Windows & Linux in progress). I truly still can't believe it. 
The GitHub repo is public if you want to try it or build your own worker. GitHub ⭐️: https://github.com/holaboss-ai/holaboss-ai submitted by /u/Imaginary-Tax2075
I built a Claude Code plugin that intercepts vague prompts and executes an improved prompt. Tired of wasting credits on bad prompts. We just hit 4300 stars on GitHub ‼️
My prompt-master skill just crossed 4300 stars ⭐ on GitHub. Thank you guys! I got so much positive feedback and support, along with many suggestions. One BIG suggestion was to make a prompt improver plugin for Claude Code.

Vague Claude Code prompts can hallucinate features, produce wrong output, burn through credits on retries, and use the wrong frameworks and stacks. So I built prompt-mini. A Claude Code plugin that hooks prompts before Claude executes them. You type your idea, it asks you the questions, builds a structured prompt, then executes it immediately.

What it actually does:

• Detects your stack automatically from your project files or gives options to choose from - never asks what it can read itself.
• Intercepts every prompt before Claude Code runs a single line. Clear prompts pass through without any change.
• Asks you everything upfront -- stack, UI style, auth approach, which pages to build -- so Claude Code never has to guess.
• Builds a 6-block structured prompt with file paths, hard stop conditions, and MUST NOT rules locked in the first 30% where attention is highest.

35 credit-killing patterns caught and fixed: things like no scope, no stop conditions, no file path, ghost features, building the whole thing in one shot - all gone. Supports 40+ stack/framework-specific routing -- Next.js, Expo, Supabase, FastAPI, Chrome MV3, LangChain, Drizzle, Cloudflare Workers - each one has its own rules so the output is never generic.

Please do give it a try and comment some feedback! Repo: github.com/nidhinjs/prompt-mini ⭐ submitted by /u/CompetitionTrick2836
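The interception idea above can be sketched in a few lines. This is a hypothetical illustration, not the actual prompt-mini code: the vagueness patterns and the six-word cutoff are invented heuristics standing in for the plugin's 35 real ones. A clear prompt passes through unchanged; a vague one gets flagged so clarifying questions can be asked before execution.

```python
# Hypothetical sketch of a vague-prompt interceptor in the spirit of
# prompt-mini. Patterns and thresholds are illustrative assumptions only.
import re

VAGUE_PATTERNS = [
    (r"\b(build|make|create) (the whole|everything|an? app)\b", "no scope"),
    (r"^(fix|improve|refactor) it\b", "no file path or target"),
]

def review_prompt(prompt: str) -> dict:
    """Return whether a prompt may pass through, plus any detected issues."""
    issues = [label for pattern, label in VAGUE_PATTERNS
              if re.search(pattern, prompt, re.IGNORECASE)]
    if len(prompt.split()) < 6:
        issues.append("too short to carry scope or stop conditions")
    return {"pass_through": not issues, "issues": issues}

# A specific prompt with a file path passes; a vague one is intercepted.
print(review_prompt("Rename get_user to fetch_user in src/api/users.py"))
print(review_prompt("make an app"))
```

In a real plugin the flagged case would trigger the upfront questions (stack, UI style, auth approach) instead of just returning a verdict.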
GeterDone - for projects with a long game in mind
I have a little treat for you. It's something I spent the last few weeks building among all the other things I'm working on, and I thought it might help others, so here it is. For you, just 'cause you are great, it's free :) It's called Geterdone, somewhat fitting, right?! A project execution plugin for Claude's Cowork mode. Whether you have one project on the go or an entire universe of them running at once, it handles it all.

Here's what it does:

→ Describe a goal. It builds a day-by-day execution plan
→ Classifies each task: fully autonomous, semi-auto (you approve), or manual
→ Runs tasks on a schedule — without you opening anything
→ Surfaces proactive intel you didn't know to ask for
→ Tracks all your projects in a kanban dashboard
→ Revenue-weights your decisions so you work on what moves the needle first

The part that makes it different: it auto-configures to your installed plugins and tools. My version runs on Cloudflare, Supabase, Apollo, Ahrefs, and Notion. Yours runs on whatever you have connected. It also solves something that used to drive me crazy — no more retraining Claude each session on what's happening across your projects. Geterdone keeps the context alive, and Notion stays in sync as the single source of truth. They update each other.

https://github.com/jayrockliffe-defused/geterdone

If this saves you time, share it. Would love to see what you build with it. P.S. Next up: an auto-joke generator so that every time you check your tasks, you get a laugh, too. Why not? Stay tuned 😄 #buildinpublic #claudeai #geterdone #solofounder #automation #defusely submitted by /u/Additional_Win_4018
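The revenue-weighting idea above can be sketched simply. This is a hypothetical illustration, not Geterdone's actual code: the field names and the revenue-per-hour ranking are assumptions about how such a prioritizer might work.

```python
# Hypothetical sketch of revenue-weighted task ordering in the spirit of
# Geterdone. Task fields and the scoring rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    mode: str            # "autonomous", "semi-auto", or "manual"
    est_revenue: float   # estimated revenue impact
    effort_hours: float

def prioritize(tasks: list[Task]) -> list[Task]:
    # Highest estimated revenue per hour of effort first, so work that
    # "moves the needle" surfaces at the top of the kanban board.
    return sorted(tasks, key=lambda t: t.est_revenue / t.effort_hours,
                  reverse=True)

backlog = [
    Task("Write blog post", "manual", 50, 2),          # 25/hr
    Task("Fix checkout bug", "semi-auto", 500, 1),     # 500/hr
    Task("Refresh keyword report", "autonomous", 20, 0.5),  # 40/hr
]
print([t.name for t in prioritize(backlog)])
```

The same score could also be blended with the autonomy classification, e.g. surfacing fully autonomous tasks for immediate scheduling and semi-auto ones for approval.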
Claude Code's full source just leaked via npm source maps -- here's what 512K lines of TypeScript reveal
Security researcher Chaofan Shou discovered that Anthropic shipped source maps in the npm package @anthropic-ai/claude-code@2.1.88. The 57MB cli.js.map pointed to a Cloudflare R2 bucket with the full unobfuscated TypeScript source. I've been building SpecWeave (spec-driven development framework, 100+ skills) on top of Claude Code for 5+ months, so I spent today analyzing the architecture.

**Key findings:**

**BUDDY** -- Full AI pet system. 18 species, rarity tiers, gacha mechanics, stats (DEBUGGING, PATIENCE, CHAOS, WISDOM, SNARK). Teaser April 1-7, launch May 2026.

**Auto-Dream** -- Background memory consolidation that runs as a forked subagent. Fires after 24h + 5 sessions. Four phases: Orient, Gather, Consolidate, Prune. Your coding assistant literally dreams.

**Undercover Mode** -- Auto-activates on public repos to strip internal Anthropic info from commits. "There is NO force-OFF." Found via a leak.

**Advisor Tool** -- Can call a second, stronger model to review its work before acting. Embedded AI code review.

**4-Layer Context Compression** -- MicroCompact -> AutoCompact (triggers ~187K tokens) -> Session Memory -> Full Summarization. Only restores 5 files post-compact.

**Next models** -- opus-4-7 and sonnet-4-8 already referenced. "Capybara" model family. 22 secret internal Anthropic repos in undercover allowlist.

**KAIROS** -- Always-on persistent assistant mode. Background session management with daemon mode.

**Fast Mode costs 6x more** -- $30/$150 per MTok vs $5/$25 normal. Same Opus 4.6 model.

**Full architecture analysis:** https://verified-skill.com/insights/claude-code

**Source still live:** The R2 bucket hasn't been taken down yet. npm rolled back to 2.1.87 but the source is out there. Anthropic DMCA'd 438+ repos for a previous reverse-engineering effort in 2025, so mirrors may not last.
submitted by /u/OwenAnton84
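The AutoCompact layer described above is essentially a token-threshold trigger. Here is a minimal sketch of how such a trigger might work, assuming the ~187K-token threshold from the post; the 4-characters-per-token estimate, the keep-5-recent rule, and all function names are illustrative assumptions, not Claude Code's actual implementation.

```python
# Hypothetical sketch of a token-threshold auto-compaction trigger.
# Threshold from the post; the token estimate and summarization policy
# are illustrative assumptions.
AUTO_COMPACT_THRESHOLD = 187_000

def estimate_tokens(messages: list[str]) -> int:
    # Crude heuristic: roughly 4 characters per token of English text.
    return sum(len(m) for m in messages) // 4

def maybe_compact(messages: list[str], keep_recent: int = 5) -> list[str]:
    """Below the threshold, pass history through; above it, collapse
    everything but the most recent turns into a single summary entry."""
    if estimate_tokens(messages) < AUTO_COMPACT_THRESHOLD:
        return messages
    summary = f"[summary of {len(messages) - keep_recent} earlier messages]"
    return [summary] + messages[-keep_recent:]

history = ["x" * 80_000 for _ in range(12)]   # ~240K estimated tokens
print(len(maybe_compact(history)))            # compacted: 1 summary + 5 recent
```

A layered scheme like the one described would chain several such triggers at increasing aggressiveness, from trimming individual tool outputs up to full-session summarization.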
Yes, Cloudflare offers a free tier.
Key features include: connect your workforce, AI agents, apps, and infrastructure; protect and accelerate websites and AI-enabled apps; build and secure AI agents; connect users and apps securely.
Cloudflare is commonly used for building and securing AI agents.
Based on 22 social mentions analyzed, 0% of sentiment is positive, 100% neutral, and 0% negative.
Pieter Levels
Founder at PhotoAI / NomadList
2 mentions