Convex is the backend platform that keeps your app in sync. Everything you need to build your full-stack project.
Stack is the Convex developer portal and blog, sharing bright ideas and techniques for building with Convex. From database schemas to queries, from auth to APIs, express every part of your backend in pure TypeScript. Your backend code lives next to your app code, is typechecked and autocompleted, and is generated by AI with exceptional accuracy. Convex libraries guarantee that your app always reflects changes to your frontend code, backend code, and database state in real time. No need for state managers, cache invalidation policies, or websockets. Create cron jobs, kick off backend AI workflows, leverage built-in auth, and tap into a growing ecosystem of components that solve common backend needs with just an npm i. With Convex, everything is just TypeScript, which means your favorite AI tools are pre-equipped to generate high-quality code. Pricing: the Free plan has built-in resources; Starter is pay-as-you-go beyond the free tier limits; Professional includes higher built-in resources and lower per-unit rates.
Mentions (30d): 0
Reviews: 0
Platforms: 2
GitHub: 11,047 stars, 659 forks, 1,992 followers, 175 public repos
npm packages: 20
Industry: information technology & services
Employees: 34
Funding: Series A, $53.2M total
Pricing found: $0/month, $25, $2,500, $2.20, $2
I just scaled Convex's open-source database horizontally using Claude Code. I don't write Rust and I barely understand database internals.
So I've been using Convex for a while, and the one thing that bugged me is that the self-hosted backend is single-node only. Their docs literally have this line: "You'll have to modify the code to support horizontal scalability of the database, or swap in a different database technology." Nobody had actually done it. So I decided to try.

For context, Convex isn't like a normal database. It's a reactive database that combines things no distributed database has all together:
• Real-time WebSocket subscriptions (push updates to clients instantly)
• In-memory snapshot state machine (the whole live database sits in memory)
• Optimistic concurrency control with automatic retry
• TypeScript/JavaScript function execution (your backend logic runs inside the database)
• ACID transactions

CockroachDB doesn't have real-time subscriptions. TiDB doesn't have in-memory snapshots. Vitess doesn't have OCC. Spanner doesn't run your application code. Convex has all of them, but couldn't scale past one machine.

The problem is that the entire backend is written in Rust, and I don't write Rust. I also didn't know anything about distributed systems, Raft consensus, two-phase commit, or how databases like CockroachDB and TiDB actually work under the hood. So I used Claude Code (Anthropic's CLI tool) for the entire thing. I basically told it what I wanted, it researched how the big distributed databases solve each problem, and then implemented it. I pushed back when things looked too simple, asked it to explain decisions, and made it redo things when I didn't like the approach.

What we ended up building:
• Read scaling: multiple nodes serve queries via NATS JetStream delta replication
• Write scaling: tables partitioned across nodes (like Vitess), with two-phase commit for cross-partition writes
• Automatic failover: tikv/raft-rs consensus per partition, sub-second leader election. Kill any node and writes resume on the new leader
• Persistent Raft logs: TiKV's raft-engine (they moved away from RocksDB for this because of 30x write amplification)
• Global timestamp ordering: batch TSO from TiDB's PD pattern, zero network calls in the hot path
• 87 integration tests: patterns from Jepsen tests that found real bugs in CockroachDB, TiDB, and YugabyteDB

Every engineering pattern came from studying how CockroachDB, TiDB, Vitess, YugabyteDB, and Google Spanner solved the same problems. Nothing was invented; it was all researched from how the giants do it and then applied to Convex's unique architecture.

You can run the whole thing with one command:

docker compose --profile cluster up

6 nodes (2 partitions × 3 Raft nodes), automatic leader election, all nodes serve reads, and if you kill any node it recovers in ~1 second. Images are published to GitHub Container Registry, so no local build is needed.

Repo: https://github.com/MartinKalema/horizontal-scaling-convex

I'm not claiming this is a breakthrough; every individual technique already existed in production at these companies. But nobody had combined them for Convex before, and the challenge was keeping all the things that make Convex special (subscriptions, in-memory OCC, TypeScript execution) while adding horizontal scaling on top.

I genuinely could not have done this without AI. The entire codebase is Rust and I've never written a line of Rust in my life. Claude Code wrote every line of Rust, researched every distributed systems pattern, and debugged every failure. I directed the project, made the product decisions, and kept pushing for the proper engineering approach.

Curious what people think. Is AI-assisted systems engineering like this going to become normal? Would love feedback on the architecture from anyone who actually works on distributed databases.

submitted by /u/CourageCareless3219
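The batch-TSO idea borrowed from TiDB's Placement Driver is simple enough to sketch. This is an illustrative TypeScript model with hypothetical names (TimestampOracle, BatchTso), not code from the linked repo: the oracle hands each node a contiguous batch of timestamps in one round trip, and the node then serves timestamps locally until the batch is exhausted, so the hot path makes zero network calls.

```typescript
// Illustrative sketch of batch timestamp allocation (TiDB PD-style TSO).
// All names here are hypothetical, not from the horizontal-scaling-convex repo.

// The central oracle: one logical counter. A real system persists the
// high-water mark so a restart never hands out a timestamp twice.
class TimestampOracle {
  private next = 1;

  // One "network call": reserve the contiguous range [start, end).
  allocateBatch(size: number): { start: number; end: number } {
    const start = this.next;
    this.next += size;
    return { start, end: start + size };
  }
}

// Per-node client: serves timestamps from its local batch and only talks
// to the oracle again (the slow path) when the batch runs out.
class BatchTso {
  private cursor = 0;
  private limit = 0;

  constructor(private oracle: TimestampOracle, private batchSize: number) {}

  nextTimestamp(): number {
    if (this.cursor >= this.limit) {
      // Slow path: one allocation amortized over batchSize timestamps.
      const { start, end } = this.oracle.allocateBatch(this.batchSize);
      this.cursor = start;
      this.limit = end;
    }
    return this.cursor++;
  }
}
```

Two nodes sharing one oracle get disjoint, globally ordered ranges, which is what makes cluster-wide timestamp ordering cheap: the cost of coordination is paid once per batch instead of once per transaction.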
Immediately Revert Update - Stop with Emojis
Claude started writing with emojis, just like ChatGPT. Stop it. ChatGPT went downhill hard after they did that. This screams unprofessional, which is exactly what Claude stood out for not being. Instead of introducing dumb emojis, how about you fix markdown and KaTeX/LaTeX embedding? Claude keeps having rendering issues with markdown recently, and LaTeX in display blocks (i.e. $$...$$) only renders correctly about 70% of the time because Claude keeps putting newlines inside these blocks, which doesn't work.

submitted by /u/Bravo6GoingDark__
[Day 2/5] I built a SaaS using an AI coding assistant. Here is exactly how that works and where it breaks.
Yesterday I posted Day 1 of this series: the origin story and numbers from a 129-location franchise project. Got some solid feedback, including someone pointing out my mobile layout was broken and my site was crashing. They were right on both counts. Fixed it that night. Today: how the thing actually gets built, what works, and where it completely falls apart.

The stack:
• Next.js 16 (App Router): file-based routing, React ecosystem
• Convex: real-time database with WebSocket subscriptions. When a lead's intent score goes from WARM to HOT, every connected client sees it instantly. For speed-to-lead, real-time isn't optional
• Clerk for auth: org management, role-based access, webhook sync to Convex
• Railway for hosting: push to deploy

I picked each piece because it handles a complete domain. I describe features in plain English, and Claude Code writes the implementation. If I'm spending time debugging OAuth flows instead of product logic, I've picked the wrong tools.

What works: describing features and getting working code in minutes. "When a lead crosses the HOT threshold, send a push notification to the nearest sales rep with tap-to-call and a personalised call script." Schema changes, API endpoints, UI: done. The throughput on product-level code is 10-20x what hiring would give me at this stage.

Where it falls apart: deployment. Feb 26 was my worst day. 40 commits, most of them fixes. Railway needs standalone Next.js output for Docker. The build succeeded locally but failed in production because of a manifest file Railway couldn't resolve. I spent the entire day on output configs and middleware edge cases. The AI can't SSH into your container. It can't read runtime logs. When the deploy pipeline is the problem, you're on your own.

The site went down for 4 days. I didn't know. No monitoring, no alerts, and I was testing locally. Found out when I tried to demo to a prospect. The fix was one line. Four days of downtime for a one-line fix.
Auth was rewritten 4 times. Clerk handles auth, Convex handles the database, and they sync via webhook. Simple in theory.
• Iteration 1: worked in dev, broke in production. The JWT issuer domain differed between Clerk's dev and prod instances.
• Iteration 2: fixed the JWT. New problem: a race condition. A user signs up and redirects to onboarding, but the webhook hasn't arrived. The database says "who are you?" two seconds after account creation. First impression destroyed.
• Iteration 3: polling. Check for the user record every 500ms for 10 seconds. Worked but felt terrible.
• Iteration 4: restructured everything. Onboarding creates the user record from Clerk's session data, and the webhook becomes a sync mechanism, not the creation path. Finally solid.
Four iterations. Each took half a day. Each time I was sure it was done.

Someone in yesterday's comments asked about schema sprawl, which is a fair question. Started at 20 tables, now at 39. Here's what forced the growth:
• leadEvents: needed every interaction tracked (page views, clicks, form abandonment) to build an accurate intent score. One table became two
• shiftSchedules + centerHours: can't alert reps at 2 AM. Shift-aware routing wasn't optional
• achievements + leaderboardEntries: gamification was scope creep, but with 5 reps competing to respond fastest, a leaderboard is the cheapest motivation tool there is
• boostSites: AI scans a prospect's website and shows exactly what SignalSprint would add. Became the best sales tool in the stack
Every table exists because something broke without it. But yeah, 39 is a lot, and some of it could probably be consolidated.

What I'd tell anyone building with AI tools:
• Pick a stack where each piece owns a domain. Don't build your own auth or real-time layer
• Test everything. Click every button. Try to break it. The AI writes code that looks right and breaks in production
• Deployment is where AI help drops to near zero. Budget 3x the time
• One person flagging your mobile layout is worth more than a week of building features. Ship early, take the punches

Tomorrow: the rebrand, the Stripe bugs, and the emotional part nobody posts about.

TL;DR: Building with Claude Code. 391 commits, 39 tables. AI is 10-20x faster on product code, near useless for deployment. Auth was rewritten 4 times. The site was down 4 days and I didn't know. Someone told me my mobile layout was broken yesterday; they were right. Ship early, fix fast.

submitted by /u/powleads
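The iteration-4 restructure generalizes to a simple rule: the authenticated session creates the record at first touch, and the webhook only reconciles. A simplified in-memory sketch; ensureUser, syncFromWebhook, and the Map-backed store are hypothetical names, and in the real stack this would be a Convex mutation reading Clerk's session identity:

```typescript
// Sketch of "create on first touch, webhook as sync" (iteration 4).
// All names and the in-memory store are illustrative placeholders.
interface User {
  clerkId: string;
  email: string;
  verified: boolean; // has the server-side webhook confirmed this record?
}

const users = new Map<string, User>();

// Called from onboarding with the Clerk session's identity. The record
// exists immediately, so there is no race against the webhook delivery.
function ensureUser(clerkId: string, email: string): User {
  let user = users.get(clerkId);
  if (!user) {
    user = { clerkId, email, verified: false };
    users.set(clerkId, user);
  }
  return user;
}

// The webhook no longer creates anything. It reconciles fields whenever
// it eventually arrives, whether that is before or after onboarding.
function syncFromWebhook(clerkId: string, email: string): User {
  const user = ensureUser(clerkId, email);
  user.email = email;   // webhook data wins for profile fields
  user.verified = true; // mark the record as server-confirmed
  return user;
}
```

Because both paths funnel through the same idempotent ensureUser, delivery order stops mattering: onboarding-first and webhook-first both converge on one verified record, which is exactly what the polling hack in iteration 3 was papering over.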
I shipped a production SaaS with 39 database tables using Claude Code. I am not a developer. Here is what actually works and what breaks.
I'm not a developer. I've never written a line of code by hand. But I just shipped a production SaaS with 39 database tables, real-time WebSocket connections, Stripe billing, and a multi-portal architecture, all built with Claude Code. Here's the honest version of what that actually looks like, because the "vibe coding" narrative online skips the hard parts.

The backstory: I was running Facebook ads for a wellness franchise with 129 locations. I kept optimising everything: creatives, dynamic landers, personalised guides based on lead form input. Engagement numbers looked great; the bottom line barely moved. Then I pulled the response-time data. The locations were taking hours to call leads back. That was the actual bottleneck. Not the ads, not the landing page. Speed to lead. So I decided to build a system that fixes this: a single JavaScript snippet that adds dynamic widgets to any existing site, tracks lead behaviour in real time, assigns intent scores (COLD/WARM/HOT), and sends instant push notifications to the nearest sales rep with tap-to-call when a lead goes hot.

The stack (chosen specifically for AI-assisted building):
• Next.js 16 (App Router): file-based routing means less wiring to explain to the AI
• Convex: real-time database with WebSocket subscriptions out of the box. This was the critical choice. For a speed-to-lead product, real-time updates aren't optional
• Clerk: handles auth so I don't need to debug OAuth flows
• Railway: push to deploy
Each piece handles an entire domain. That matters when you're describing features in plain English and the AI is writing the implementation: you want it focused on your product logic, not infrastructure plumbing.

What actually works well: I can describe a feature like "when a lead's intent score crosses the HOT threshold, send a push notification to the assigned sales rep with their name, the lead's name, and a tap-to-call button" and get a working implementation in minutes. Schema changes, API endpoints, UI components. The throughput is genuinely wild compared to hiring. Building new features is fast. Iterating on UI is fast. Adding database tables and the associated CRUD operations: fast.

Where it falls apart: deployment. The site was down on Railway for 4 days at one point because a CI check was silently failing and I had no monitoring. The AI couldn't help; it can't SSH into your Railway container or read runtime logs in context.

Auth was rewritten 4 times: webhook race conditions between Clerk and Convex, JWT issuer mismatches between dev and production. Each iteration took half a day, and the AI kept confidently writing code that worked in isolation but broke in production.

Stripe had three bugs that each took hours: currency defaulting to USD instead of GBP, missing portal configuration, and webhook event ordering issues. The AI was useless for the event-ordering bug because it only happened 30% of the time.

The security problem nobody talks about: I ran a security audit and found 4 critical issues: unauthenticated database functions, missing webhook signature verification, no rate limiting on public endpoints, and exposed environment variables. These were introduced because the AI doesn't think about security by default. It writes code that works, not code that's safe.

The numbers: 391 git commits. 39 database tables. 60 backend files. Across 2,617 tracked leads at the franchise: a 56.7% engagement rate (industry average is 20-30%), and response time went from 2-4 hours to under 5 minutes. The product is live at signalsprint.io. Zero paying customers so far. Building is the easy part.

What I'd tell anyone starting this:
• Pick a stack where each piece handles a complete domain: auth, real-time data, hosting. Don't try to build your own
• Test EVERYTHING yourself. The AI will write code that looks right and passes the vibe check but breaks in production
• Run a security audit before you launch. The AI introduces vulnerabilities it doesn't mention
• Deployment is where AI-assisted development hits a wall. Budget 3x the time you think
• Version control every single change. 391 commits means I can bisect back to any breaking change

I'm documenting the full journey in a 5-day Reddit series if anyone's interested. Happy to answer questions about specific parts of the stack or workflow.

submitted by /u/powleads
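Of the four audit findings, missing webhook signature verification is the most mechanical to fix. Here is a generic HMAC-SHA256 sketch using node:crypto; the hex-digest-of-the-raw-body format is a common convention for illustration only, not Clerk's or Stripe's exact scheme (both document their own, and their official SDKs should be preferred in production):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Generic webhook signature check. signPayload/verifyWebhook are
// illustrative names; real providers add timestamps and versioned
// signature headers on top of this basic HMAC idea.
function signPayload(secret: string, rawBody: string): string {
  return createHmac("sha256", secret).update(rawBody).digest("hex");
}

function verifyWebhook(
  secret: string,
  rawBody: string,
  signature: string,
): boolean {
  // Recompute the signature over the exact raw bytes that were received.
  // Re-serializing a parsed JSON body is a classic way to break this.
  const expected = Buffer.from(signPayload(secret, rawBody), "hex");
  const given = Buffer.from(signature, "hex");
  // Length check first: timingSafeEqual throws on mismatched lengths.
  // The constant-time comparison avoids leaking the signature via timing.
  return expected.length === given.length && timingSafeEqual(expected, given);
}
```

Without this check, anyone who discovers the webhook URL can forge "user.created" or "payment.succeeded" events, which is why it showed up as a critical finding: the endpoint is public by design, so the signature is the only thing authenticating the sender.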