Build and deploy AI agents and applications on the AI Cloud powered by Cloudflare
The same end-to-end AI stack behind Cloudflare's own products — battle-tested across billions of requests and millions of users daily. Build with the same primitives we use in production.
Build AI agents on Durable Objects with code execution, inference, and AI Gateway all built in. "Cloudflare provided everything from OAuth to out-of-the-box remote MCP support so we could quickly build, secure, and scale a fully operational setup."
Everything you need to build, deploy, and scale AI agents, from inference to orchestration, all on one global network:
• Access Llama 3, Gemma 3, Whisper, TTS, and LoRA-fine-tuned variants across 190+ locations.
• Build goal-driven agents that call models, APIs, and schedules from a single TypeScript API.
• Secure, OAuth-scoped endpoints that expose tools and data to agents without self-hosting.
• Complete RAG workflows with automatic indexing and fresh data. Ship AI search and chat with one instance in minutes.
• A globally replicated vector database that pairs with Workers AI for RAG in a few lines of code.
• Store terabytes of training data, checkpoints, and user uploads, and move to any cloud for $0 egress.
• Built-in caching, rate limiting, model fallback, and observability for every inference call.
• Deploy MCP servers between meetings with the Agents SDK.
Build intelligent, goal-driven agents that call models, APIs, and tasks from one unified SDK, designed to run fast, securely, and globally on Workers.
Trusted by the teams you trust, and thousands more. Build on the infrastructure powering 20% of the Internet. Join thousands of developers who've eliminated infrastructure complexity and deployed globally with Cloudflare. Start building for free — no credit card required.
Mentions (30d): 0
Reviews: 0
Platforms: 2
Sentiment: 0% (0 positive)
Industry: computer & network security
Employees: 4,400
Pricing found: $0
Open-sourced our internal AI coding agent — assign a Linear ticket, get a PR with a live preview
Repo: https://github.com/Deepank308/hermes-swe-agent
Deep Wiki: https://deepwiki.com/Deepank308/hermes-swe-agent
At my work, engineers were spending too much time on small features and bug fixes — the kind of work that's well-defined but tedious. PMs would file tickets, engineers would context-switch, and it'd eat into time for bigger projects. We're also remote-first, so PMs often had to wait for a developer in the right timezone to pick up a ticket. So I built Hermes — an AI agent that PMs can assign Linear tickets to directly. It:
• Spins up a full dev environment on EC2 (Docker, PostgreSQL, Redis, the whole stack)
• Reads the codebase, plans, writes code, runs tests
• Streams progress back to Linear in real time
• Creates a PR with a live preview URL so PMs can verify the changes themselves
This reduced review burden too — by the time an engineer looks at the PR, the code has been tested and there's a working preview to click through. And since it runs 24/7, timezone gaps stopped being a bottleneck.
Why we built our own instead of using existing solutions: we wanted to keep our codebase on infrastructure we control rather than running on a third-party agent platform. With Hermes, the dev environment, Docker stack, and all execution happen in our own VPC — code context is sent to Anthropic's API for inference (same as any Claude usage), but nothing is stored or executed on someone else's platform.
I used Claude extensively to build this — it's been a great learning experience and honestly a showcase of what's possible with Claude Code as a development tool. I also added Claude Code skills (like /setup) so fellow Claude users can onboard easily — just open the repo in Claude Code and it walks you through everything.
I open-sourced it by stripping away the company-specific parts (preview scripts, app configs, Docker setup). The core orchestration, agent lifecycle, firewall, session management, and Linear/Slack/GitHub integrations are all there — you can customize it for your own repos and stack.
Heads up: I'm still actively working on strengthening the security aspects (learning as I go) — outbound firewall, network isolation, and agent sandboxing are in place but evolving. PRs and feedback welcome.
Setup is a single script — fill in a .env.local and run bash scripts/setup.sh. It creates the AWS resources, launches the orchestrator, sets up a Cloudflare tunnel, and you're running.
Happy to answer questions about the architecture or how we use it.
submitted by /u/coldddeadRepeated
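The lifecycle described above (environment spin-up, plan, code, test, PR) can be sketched as a small state machine. This is a minimal illustration of the pattern, not Hermes' actual code; all names here are hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum

class Phase(Enum):
    QUEUED = "queued"
    PROVISIONING = "provisioning"  # EC2 + Docker stack coming up
    PLANNING = "planning"
    CODING = "coding"
    TESTING = "testing"
    PR_OPEN = "pr_open"
    FAILED = "failed"

# Legal transitions; failing tests loop back to coding.
ALLOWED = {
    Phase.QUEUED: {Phase.PROVISIONING},
    Phase.PROVISIONING: {Phase.PLANNING, Phase.FAILED},
    Phase.PLANNING: {Phase.CODING, Phase.FAILED},
    Phase.CODING: {Phase.TESTING, Phase.FAILED},
    Phase.TESTING: {Phase.CODING, Phase.PR_OPEN, Phase.FAILED},
    Phase.PR_OPEN: set(),
    Phase.FAILED: set(),
}

@dataclass
class TicketSession:
    ticket_id: str
    phase: Phase = Phase.QUEUED
    log: list = field(default_factory=list)

    def advance(self, nxt: Phase, note: str = "") -> None:
        if nxt not in ALLOWED[self.phase]:
            raise ValueError(f"illegal transition {self.phase.value} -> {nxt.value}")
        self.phase = nxt
        # Each transition is what would be streamed back to Linear as a progress update.
        self.log.append((nxt.value, note))
```

Guarding transitions this way keeps a 24/7 agent from, say, opening a PR before tests have run.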
This sub made my app viral & got me an invite to apply at the Claude Dev Conference in SF. So, I built caffeine half-life & sleep health tooling for everyone.
Hey r/ClaudeAI! A little while back I shared my Caffeine Curfew app on here and it completely blew up. Because of that amazing viral response, I actually got invited to apply for the Claude developer conference. I'm so incredibly grateful to this community, and I really wanted to find a way to give back and share the core tooling with you all for completely free.
I built an MCP server for Claude Code and the Claude mobile app that tracks your caffeine intake over time and tells you exactly when it's safe to sleep. Have you ever had a late-afternoon coffee and then wondered at midnight why you're staring at the ceiling? This solves that problem using standard pharmacological decay modeling. Every time you log a drink, the server stores it and runs a decay formula. It adds up your whole history to give you a real-time caffeine level in mg, then looks forward in time to find the exact minute your caffeine drops below your sleep-interference threshold. The default half-life is five hours and the sleep threshold defaults to 25 mg, but both are adjustable since everyone is different!
There are zero complicated parameters to memorize. Once connected, it remembers your history automatically and you just talk to Claude naturally:
• "Log 150mg of coffee, I just had it"
• "When can I safely go to bed tonight?"
• "If I have another espresso right now, how late would I have to stay up?"
• "Show me my caffeine habits for the last thirty days"
Under the hood, there are eight simple tools powering this:
• log_entry: Log a drink by name and mg
• list_entries: See your history
• delete_entry: Remove a mistaken entry
• get_caffeine_level: Current mg in your system right now
• get_safe_bedtime: Earliest time you can safely sleep
• simulate_drink: See how another coffee shifts your bedtime before you even drink it
• get_status_summary: Full picture with a target-bedtime check
• get_insights: Seven- or thirty-day report with trend direction and peak days
I'm hosting this server on my Mac Mini behind a Cloudflare Tunnel. It features strict database isolation: every single person gets a unique URL, and your data is totally separate from everyone else's. No login, no signup, no account. Want to try it out? Just leave a comment below and I'll reply with your personal key! Once you have your key, paste the URL into your Claude desktop app under Settings > Connected Tools, or drop it into your Claude desktop config file.
For the tech people curious about the stack: Python, FastMCP, SQLite, SSE transport, Cloudflare Tunnel, and launchd for auto-start. The user isolation uses an ASGI middleware that extracts your key from the SSE connection URL and stores it in a ContextVar, ensuring every tool call is automatically scoped to the right user without any extra steps.
If you'd rather host it yourself, you can get it running in about five minutes. The full open-source code is on GitHub: https://github.com/garrettmichae1/CaffeineCurfewMCPServer — the repo README has all the exact terminal commands to get your own tunnel and server up and running.
Original app: https://apps.apple.com/us/app/caffeine-curfew-caffeine-log/id6757022559 (the MCP server does everything the app does, but better, except maybe the presentation of the data itself.)
Original post: https://www.reddit.com/r/ClaudeCode/s/FsrPyl7g6r
submitted by /u/pythononrailz
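The decay model described in the post is standard first-order pharmacokinetics. A minimal sketch, using the post's stated defaults (5-hour half-life, 25 mg threshold); the function names are illustrative, not the server's actual code:

```python
import math
from datetime import datetime, timedelta

HALF_LIFE_HOURS = 5.0      # adjustable default from the post
SLEEP_THRESHOLD_MG = 25.0  # level below which sleep is considered safe

def caffeine_level(entries, now):
    """Sum exponential decay over every logged drink.

    entries: list of (timestamp, mg) pairs with timestamp <= now.
    """
    k = math.log(2) / HALF_LIFE_HOURS  # decay constant from half-life
    total = 0.0
    for ts, mg in entries:
        hours = (now - ts).total_seconds() / 3600
        total += mg * math.exp(-k * hours)
    return total

def safe_bedtime(entries, now, step_minutes=1):
    """Scan forward minute by minute until the level drops below threshold."""
    t = now
    while caffeine_level(entries, t) > SLEEP_THRESHOLD_MG:
        t += timedelta(minutes=step_minutes)
    return t
```

With a single 200 mg coffee, the level halves every five hours, so it takes three half-lives (about 15 hours) to fall to the 25 mg threshold.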
Free MCP server I built: gives Claude access to 11M businesses with phone/email/hours, no Google Places API needed
Hi r/ClaudeAI 👋 I built and published a free MCP server for Claude Desktop / Claude Code that gives Claude access to a structured directory of 11M+ real businesses across 233 countries — phone numbers, opening hours, emails, addresses, websites, geo coordinates. It's called agentweb-mcp. Free signup, no credit card, runs on a single VPS I pay for personally.
──────────────────────────────────
What you can ask Claude after installing it
──────────────────────────────────
• "Find me 3 vegan restaurants near 51.51, -0.13 within 2 km, with phones"
• "What time does that bakery in Copenhagen open on Sundays?"
• "Search for dentists in Berlin Mitte with verified opening hours"
• "I'm in Tokyo — find a 24/7 pharmacy near my coordinates"
• "List all hardware stores in Dublin with a website"
Plus write-back tools so Claude can also contribute:
• "Add this restaurant I just visited to AgentWeb" (auto-dedupes by name+coords+phone)
• "Report that the dentist on Hauptstrasse closed" (3+ closed reports auto-lower trust score)
──────────────────────────────────
Install (60 seconds)
──────────────────────────────────
Get a free key: https://agentweb.live/#signup
Add to claude_desktop_config.json:
{
  "mcpServers": {
    "agentweb": {
      "command": "npx",
      "args": ["-y", "agentweb-mcp"],
      "env": { "AGENTWEB_API_KEY": "aw_live_..." }
    }
  }
}
Restart Claude Desktop. Done.
──────────────────────────────────
Why I built it
──────────────────────────────────
I needed business data in agent-native format, and Google Places costs ~$17 per 1k lookups, which is fine for human apps but instantly painful for any agent doing meaningful work. OpenStreetMap has the data, but Overpass query syntax is rough for LLMs to generate. I wanted something Claude could just call as a tool with no friction.
────────────────────────────────── How I built it (the part that might help anyone making their own MCP) ────────────────────────────────── A few things I learned along the way that I'd recommend to anyone building an MCP server: **Make at least one tool work without an API key.** Most MCP servers gate everything behind auth. Mine has a "substrate read" — agentweb_get_short — that hits a public endpoint with no key required, returns the business in 700 bytes instead of 3-5KB. Single-letter JSON keys, schema documented at /v1/schema/short. ~80% token savings on bulk lookups. Lowering friction by zero-auth on the most common path is the single biggest win for adoption. **The MCP server itself is tiny.** ~400 lines of TypeScript. It's just a thin protocol adapter — search_businesses → /v1/search, get_business → /v1/r/{id}, etc. The real work is in the FastAPI backend behind it (Postgres + PostGIS for geo, Redis for hot caching, Cloudflare in front). If you're starting an MCP, build the REST API first and treat the MCP layer as the last 5% of work. **Postgres is enough for "AI-native" infrastructure.** I almost migrated to ClickHouse for analytics performance but the actual fix was just refreshing the visibility map (VACUUM) and adding composite indexes. Postgres + pgvector handles geo, full-text, JSONB, and vector search in one engine. The boring database is the right database. **Per-field provenance + confidence scores matter for agents.** Every record returned has src (jsonld / osm / owner_claim) and t (trust score 0-1). Agents can filter on these. I think this is going to be table stakes for any agent-data API in 18 months. **Owner-claimable in 30 seconds, no website required.** Most directories require businesses to verify via website or Google Business — long tail businesses (the bakery on the corner) get locked out. Mine lets the owner claim with email-at-domain verification, takes 30 seconds, no website needed. This is the moat I'm betting on long-term. 
──────────────────────────────────
Honest limitations
──────────────────────────────────
• Phone coverage varies by country. Nordics + Western Europe are great (60-80% coverage). Parts of SE Asia and Africa are sparse.
• Some rows are stale; I have enrichment workers running continuously but it's not Google-perfect yet.
• Free tier has rate limits, but they're generous for personal use.
Free, MIT licensed, source: github.com/zerabic/agentweb-mcp
npm: https://www.npmjs.com/package/agentweb-mcp
Live demo + manifesto: https://agentweb.live
Happy to answer any technical questions, particularly about the token-efficient shorthand format, the substrate architecture, or the matview-based aggregate cache. Built solo over a few weeks.
submitted by /u/ZeroSubic
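The single-letter-key "substrate read" idea is easy to picture. A sketch of how such a record might be rehydrated on the client side; the key map below is purely illustrative and is not the actual schema documented at /v1/schema/short:

```python
import json

# Hypothetical key map — illustrative only; the real mapping lives at /v1/schema/short.
SHORT_KEYS = {"n": "name", "p": "phone", "a": "address", "w": "website",
              "g": "geo", "s": "src", "t": "trust"}

def expand(short_record: dict) -> dict:
    """Rehydrate a compact substrate read into full field names."""
    return {SHORT_KEYS.get(k, k): v for k, v in short_record.items()}

short = {"n": "Corner Bakery", "p": "+45 12 34 56 78",
         "g": [55.676, 12.568], "s": "osm", "t": 0.87}
full = expand(short)

# Shorter keys mean fewer bytes (and tokens) per record on bulk lookups.
saving = 1 - len(json.dumps(short)) / len(json.dumps(full))
```

An agent can also filter directly on the provenance fields, e.g. keep only records with `trust >= 0.8` before phoning a business.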
I built a personal finance dashboard using Claude
It started as a simple Python script. Now it's a full-stack app that brings all your investments into one place — stocks, mutual funds, physical gold, fixed deposits, and more. It runs entirely on my spare PC and is served via Cloudflare Tunnel. https://metron.thecoducer.com/
Here's the part I care about the most. It doesn't just show what you own; it shows what you're actually exposed to. It breaks that down, so before you buy a stock, you can see if you're already overexposed to it through your funds. It can also parse your CAMS CAS statement and show you detailed transaction insights.
A few things worth knowing:
- Your data stays with you — everything is stored in your own Google Sheets on your Google Drive. No databases used.
- You can sync holdings via Zerodha or add them manually
- NSDL/CDSL CAS support is coming soon
This project is part of my personal learning journey to explore what it really means to build a full system with AI, not just a toy app. While AI was helpful, it still struggles with writing clean, modular code and designing scalable systems. Getting things right required a lot of iteration and careful prompting. That said, the process was genuinely fun and eye-opening.
If you try it out, I'd genuinely love your feedback, especially on what feels missing or broken.
submitted by /u/tenantoftheweb
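The look-through exposure idea (direct holdings plus each fund's weighted slice of the same stock) can be sketched in a few lines. The portfolio shapes below are hypothetical, not the dashboard's actual data model:

```python
# Hypothetical portfolio — amounts in any one currency.
direct = {"INFY": 50_000.0}  # value held directly, per stock symbol

funds = [
    # "holdings" maps each stock to its weight within the fund.
    {"value": 200_000.0, "holdings": {"INFY": 0.08, "TCS": 0.05}},
    {"value": 100_000.0, "holdings": {"INFY": 0.10}},
]

def effective_exposure(symbol: str) -> float:
    """Direct holding plus the slice of each fund attributable to the stock."""
    via_funds = sum(f["value"] * f["holdings"].get(symbol, 0.0) for f in funds)
    return direct.get(symbol, 0.0) + via_funds

# INFY: 50,000 direct + 16,000 + 10,000 via funds = 76,000 total exposure
```

This is what lets the dashboard warn that buying more of a stock would stack on top of exposure you already carry through mutual funds.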
Sharing AI agent outputs with each other was a nightmare - so we built md.page
My friend and I are both heavy Claude users, but we kept running into the same annoying problem: how do you actually share the stuff your agents produce? The daily struggle:
• "Dude, Claude just wrote the perfect research!"
• Screenshots 10 parts of a markdown response
• "Can you copy-paste that into DM?"
• Formatting completely breaks
• "Ugh, nevermind..."
Sound familiar? 😤
Our solution: we built md.page out of pure frustration. Now when my agent writes something cool, I can instantly share it as a proper webpage. No sign-up, no payment; just ask the agent to use it.
How Claude Code helped us build the solution: we didn't just use Claude for snippets; we let Claude Code drive the entire ship. It's effectively a 100% AI-architected project:
• The foundation: Claude designed the entire Cloudflare Workers architecture and handled the complex markdown-it configurations for perfect rendering.
• DevOps & deployment: it scripted the full CI/CD pipeline and managed the deployment to production.
• Security hardening: it ran its own security audits, implemented rate limiting, and handled input validation to prevent XSS/injection.
• Quality control: it wrote the entire test suite, and (the coolest part) now helps us review and approve community PRs.
• The CLI: it built the npx mdpage-cli from scratch so you can publish straight from your terminal.
What Claude Code can do with md.page:
• Share Claude's code explanations between team members
• Publish Claude's generated documentation
• Get Claude's outputs out of your terminal and into the world
• Actually readable formatting when you send links in Slack/Discord/Telegram/any other DM tool
Try it: https://md.page (FREE + OPEN SOURCE)
submitted by /u/Educational-Cause-53
Claude Code Game Development Log - Week 2 LUMINAL
Two weeks ago today, I started working on an experiment with three.js, building out a little line-rider game agentically. It's a routine I've tried with a handful of models over the past few years, but none have made it this far. I have not written a line of code and have no experience with game dev (but I am a web developer by trade, so I can follow along). I'm quite confident that anyone with a vision and persistence/passion could spit out a quality game in a few months with a 5x or 20x sub. You may just have to learn a few painful lessons I got to skip over. The biggest things that took me a while to stumble into, coming to game dev and agentic programming mostly blind:
• Git worktrees to avoid collisions. Still maintain a develop branch and work exclusively on feature branches (mistakes can happen at 4 am). Close sessions and start new ones once a feature is complete.
• TypeScript early, unit tests early. Include unit tests in every plan while the context is there, instead of a random pass later. Audit your tests regularly. Don't put off E2E too long unless you really enjoy QAing.
• Get a feel for when you're asking Claude to do the same thing multiple times, and confirm there's shared infrastructure for it; if not, build it before you have to QA the same thing 13 times.
• Superpowers plugin for its lovely skill builder and brainstorming. Everything from deployment processes to recurring maintenance of architecture and roadmap mds.
• $20 Codex sub, used for code-reviewing Claude's work, building out specs for Claude, and making targeted UI tweaks (it's much better at receiving UI guidance than Claude via image + "fix this thing").
• WARP. Much better than any other terminal setup I've tried.
• SLIDERS. AI really seems to struggle with certain matters of taste: things like selecting unique color palettes, bloom levels, what a procedural engine should sound like, etc. Having Claude build out full sets of admin-only sliders and toggles (I'm talking hundreds) for everything from bloom to color maps to procedural sounds, plus a JSON export/import to feed them back to him, made all the difference.
• 21st.dev / CodePen / Sketchfab & community assets. Some things just aren't worth starting from scratch yet.
WORDS OF WISDOM
You can build faster than you can bug-fix. I am still dealing with the fallout of adding too much too fast and will likely spend the next 2-3 weeks on polish and writing full browser tests. Don't get too ahead of yourself; you'll regret it later. Spend all of your downtime planning. I personally use Workflowy and probably have a solid 20-30 plans/thoughts/uncategorized bug fixes that I'll flesh out fully with ChatGPT/Codex while waiting for Claude to do his thing or for usage limits to reset. Have fun. I don't plan on monetizing this thing yet, but I can confidently say I've already learned a ton and have been directly applying it at my place of work.
Tech stack: TypeScript, Three.js, Vite. Firebase (Auth, RTDB, Firestore, Hosting, Cloud Functions). The game server is just Node.js running the ws WebSocket library. Vitest + ESLint. Netcode has been fun: deterministic lockstep simulation syncing only input deltas over a WebSocket relay (Cloudflare Workers), with seeded RNG + frame-hash desync detection for consistent state across clients. It seems to be holding up, but needs more work on reconnecting.
https://luminal.live/
Still a buggy mess, but I hope y'all have fun with it in its current state. Catch y'all next week :)
submitted by /u/Jaded-Comfortable179
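The netcode approach above (seeded RNG, input-delta lockstep, frame-hash desync detection) can be sketched in a few lines. This is a Python illustration of the general technique, not the game's actual TypeScript code:

```python
import hashlib
import random

class LockstepSim:
    """Deterministic lockstep: every client runs the same sim from the same
    seed, applying the same input deltas at the same frame numbers."""

    def __init__(self, seed: int):
        self.rng = random.Random(seed)  # seeded RNG -> identical rolls on every client
        self.state = {"frame": 0, "x": 0.0}

    def step(self, inputs) -> str:
        self.state["frame"] += 1
        self.state["x"] += sum(inputs) + self.rng.random() * 0.001
        return self.frame_hash()

    def frame_hash(self) -> str:
        # Hash a canonical rendering of the state; clients exchange these
        # small digests each frame to detect divergence early.
        blob = f'{self.state["frame"]}:{self.state["x"]:.9f}'.encode()
        return hashlib.sha1(blob).hexdigest()[:8]

# Two clients fed identical seeds + input deltas stay in sync:
a, b = LockstepSim(42), LockstepSim(42)
for frame_inputs in ([1], [0, 1], []):
    ha, hb = a.step(frame_inputs), b.step(frame_inputs)
    assert ha == hb  # a mismatch here would flag a desync
```

Because only input deltas cross the wire, the relay stays tiny; the hashes are the safety net that catches any nondeterminism (unseeded randomness, float drift) before it snowballs.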
I open-sourced my AI-curated Reddit feed (Self-hosted on Cloudflare, Supabase, and Vercel)
A week ago I shared a tool I built that scans Reddit and surfaces the actually useful posts about vibecoding and AI-assisted development. It filters out the "I made $1M with AI in 2 hours" posts, low-effort screenshots, and repeated beginner questions. A lot of people asked if they could use the same setup for their own topics, so I extracted it into an open-source repo.
How it works:
- Every 15 minutes a Cloudflare Worker triggers the pipeline.
- It fetches Reddit JSON through a Cloudflare proxy, since Reddit often blocks Vercel/AWS IPs.
- A pre-filter removes low-signal posts before any AI runs.
- Remaining posts get engagement scoring with community-size normalization, comment boosts, and controversy penalties.
- Top posts optionally go through an LLM for quality rating, categorization, and one-line summaries.
- A diversity pass prevents one subreddit from dominating the feed.
The stack:
- Supabase for storage
- Cloudflare Workers for cron + Reddit proxy
- Vercel for the frontend
- AI scoring optional, about $1-2/month with Claude Haiku
What you get: a dark-themed feed with AI summaries and category badges, daily archives, RSS, a weekly digest via Resend, anonymous upvotes, and a feedback form.
Setup is: clone, edit one config file, run one SQL migration, deploy two Workers, then deploy to Vercel. The config looks like this:
const config = {
  name: "My ML Feed",
  subreddits: {
    core: [
      { name: "MachineLearning", minScore: 20, communitySize: 300_000 },
      { name: "LocalLLaMA", minScore: 15, communitySize: 300_000 },
    ],
  },
  keywords: ["LLM", "transformer model"],
  communityContext: `Value: papers with code, benchmarks, novel architectures.
    Penalize: hype, speculation, product launches without technical depth.`,
};
GitHub: github.com/solzange/reddit-signal
Built with Claude Code. Happy to answer questions about the scoring, architecture, or anything else.
submitted by /u/solzange
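The scoring and diversity steps can be sketched roughly like this. The weights and caps here are made-up illustrations, not the repo's actual constants:

```python
import math

def engagement_score(upvotes, comments, upvote_ratio, community_size):
    """Score a post: normalize by community size, boost comments, penalize controversy."""
    normalized = upvotes / math.log10(max(community_size, 10))  # big subs need more votes
    comment_boost = 1 + min(comments / 50, 1.0)                 # boost capped at 2x
    controversy_penalty = upvote_ratio ** 2                     # low ratio drags the score down
    return normalized * comment_boost * controversy_penalty

def diversity_pass(posts, per_sub_cap=3):
    """Keep the best posts while capping how many any one subreddit contributes."""
    seen, out = {}, []
    for p in sorted(posts, key=lambda p: p["score"], reverse=True):
        if seen.get(p["sub"], 0) < per_sub_cap:
            out.append(p)
            seen[p["sub"]] = seen.get(p["sub"], 0) + 1
    return out
```

A 100-upvote post in a 100k-member sub with 50 comments and a perfect upvote ratio scores 100/5 * 2 * 1 = 40 under these toy weights; the same post in a 10M-member sub would score noticeably lower.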
I used Claude Code to build a portable AI worker Desktop from scratch — the open-source community gave it 391 stars in 6 days
I want to share something I built with Claude Code over the past week because it shows what AI-assisted development can actually do when pointed at a genuinely hard problem: moving AI agents beyond one-off task execution. Most AI wrappers just send prompts to an API. Building a continuously operating AI worker requires queueing, harness integration, and MCP orchestration. I wanted a way to make AI worker environments fully portable. No widely adopted solution had cleanly solved the "how do we package the context, tools, and skills so anyone can run it locally" problem effectively. What Claude Code did: I pointed Claude (Opus 4.6 - high thinking) at the architecture design for Holaboss, an AI Worker Desktop. Claude helped me build a three-layer system separating the Electron desktop UI, the TypeScript-based runtime system, and the sandbox root. It understood how to implement the memory catalog metadata, helped me write the compaction boundary logic for session continuity, and worked through the MCP orchestration so workspace skills could be merged with embedded runtime skills seamlessly. The result is a fully portable runtime. Your AI workers, along with their context and tools, can be packaged and shared. It's free, open-source (MIT), and runs locally with Node.js (desktop + runtime bundle). It supports OpenAI, Anthropic, OpenRouter, Gemini, and Ollama out of the box. I open-sourced this a few days ago and the reaction has been unreal. The GitHub repo hit 391 stars in just 6 days. The community is already building on top of the 4 built-in worker templates (Social Operator, Gmail Assistant, Build in Public, and Starter Workspace). This was so far from the typical "I used AI to write a to-do app." This was Claude Code helping architect a real, local, three-tier desktop and runtime system for autonomous AI workers. And people are running it on their Macs right now (Windows & Linux in progress). I truly still can't believe it. 
The GitHub repo is public if you want to try it or build your own worker. GitHub ⭐️: https://github.com/holaboss-ai/holaboss-ai
submitted by /u/Imaginary-Tax2075
Claude Code's full source just leaked via npm source maps -- here's what 512K lines of TypeScript reveal
Security researcher Chaofan Shou discovered that Anthropic shipped source maps in the npm package @anthropic-ai/claude-code@2.1.88. The 57MB cli.js.map pointed to a Cloudflare R2 bucket with the full unobfuscated TypeScript source. I've been building SpecWeave (spec-driven development framework, 100+ skills) on top of Claude Code for 5+ months, so I spent today analyzing the architecture.
**Key findings:**
**BUDDY** -- Full AI pet system. 18 species, rarity tiers, gacha mechanics, stats (DEBUGGING, PATIENCE, CHAOS, WISDOM, SNARK). Teaser April 1-7, launch May 2026.
**Auto-Dream** -- Background memory consolidation that runs as a forked subagent. Fires after 24h + 5 sessions. Four phases: Orient, Gather, Consolidate, Prune. Your coding assistant literally dreams.
**Undercover Mode** -- Auto-activates on public repos to strip internal Anthropic info from commits. "There is NO force-OFF." Found via a leak.
**Advisor Tool** -- Can call a second, stronger model to review its work before acting. Embedded AI code review.
**4-Layer Context Compression** -- MicroCompact -> AutoCompact (triggers ~187K tokens) -> Session Memory -> Full Summarization. Only restores 5 files post-compact.
**Next models** -- opus-4-7 and sonnet-4-8 already referenced. "Capybara" model family. 22 secret internal Anthropic repos in undercover allowlist.
**KAIROS** -- Always-on persistent assistant mode. Background session management with daemon mode.
**Fast Mode costs 6x more** -- $30/$150 per MTok vs $5/$25 normal. Same Opus 4.6 model.
**Full architecture analysis:** https://verified-skill.com/insights/claude-code
**Source still live:** The R2 bucket hasn't been taken down yet. npm rolled back to 2.1.87 but the source is out there. Anthropic DMCA'd 438+ repos for a previous reverse-engineering effort in 2025, so mirrors may not last.
submitted by /u/OwenAnton84
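The "6x more" claim is simple arithmetic on the quoted per-million-token prices. A quick sketch, treating the numbers as the post's claim rather than official pricing:

```python
# Prices per million tokens, as quoted in the post (input, output).
NORMAL = {"input": 5.00, "output": 25.00}
FAST = {"input": 30.00, "output": 150.00}

def session_cost(prices, input_tokens, output_tokens):
    """Dollar cost of a session at the given per-MTok rates."""
    return (input_tokens * prices["input"] + output_tokens * prices["output"]) / 1_000_000

# A hypothetical 2M-input / 200K-output session:
normal = session_cost(NORMAL, 2_000_000, 200_000)  # $15
fast = session_cost(FAST, 2_000_000, 200_000)      # $90
```

Since both the input and output rates scale by the same factor, the ratio is 6x regardless of the input/output mix.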
I made a free tool to make it easier for anyone to publish websites with GitHub Pages
It’s always irked me that Squarespace and others can get away with charging $20/mo just to host a simple site, when it’s easy to host for free elsewhere (GitHub, Cloudflare, GitLab, Vercel, many more). It feels like they profit off people’s ignorance. However, I know the website builder can be valuable for non-technical folks. These days, AI has made it easier than ever for anyone to create a website, even without needing a fancy “drag and drop” builder. You can just ask AI to “make me a website about XYZ”, or write something in Word and ask it to turn it into a blog post. But I still don’t think most people know there are so many ways to host a basic website for free. And even if they do find something, none of those platforms are designed for hosting a simple website. Instead, they’re aimed at professional software engineers, with tons of complicated features and solutions, so they can be confusing and intimidating for someone new. So I made weejur, which is basically a super simple UI front-end for GitHub Pages. You log in with OAuth, and then you can just paste HTML or upload files to publish a website. If you don't have a GitHub account, you can sign up right in the OAuth flow. It's completely free, and in fact the site itself is hosted on GitHub Pages too (so you can view the source here). Claude Code did most of the work on this, but there was quite a bit of manual polishing on the design and UI to get flows and layouts that make sense. Maybe the next frontier for Claude! Feel free to try it out and please share any questions/ideas/feedback. https://weejur.com
submitted by /u/elementninety3
Yes, Cloudflare AI offers a free tier. Pricing found: $0
Key features include building and deploying AI agents and applications on the AI Cloud.
Cloudflare AI is commonly used for building and deploying AI agents and applications on the AI Cloud.
Based on user reviews and social mentions, the most common pain point reported is spending too much.
Based on 15 social mentions analyzed, 0% of sentiment is positive, 100% neutral, and 0% negative.