Build production-grade applications with a Postgres database, Authentication, instant APIs, Realtime, Functions, Storage and Vector embeddings.
Based on these social mentions, users view Supabase very positively as a developer-friendly backend solution. The main strength highlighted is its exceptional ease of use and rapid setup - users praise how quickly they can get databases and backend infrastructure running, with one beginner noting they connected Supabase to a website in just 20 minutes using Claude AI. Users frequently integrate Supabase into complex AI and web development projects, indicating strong compatibility with modern development stacks and workflows. The tool appears to have an excellent reputation among developers for enabling fast prototyping and production deployments, though specific pricing sentiment isn't evident from these mentions.
Mentions (30d)
11
10 this week
Reviews
0
Platforms
2
Sentiment
0%
0 positive
Features
Industry
information technology & services
Employees
300
Funding Stage
Series E
Total Funding
$696.1M
192,774
Twitter followers
20
npm packages
25
HuggingFace models
Pricing found: $1, $1, $1
I built a security scanner for Claude Code (and vibe coding in general) — here's what it found in my own projects
I built VibeLint using Claude Code. It runs as an MCP server inside your IDE and scans AI-generated code for security issues before it gets written to your files.

While building it, I started scanning my own projects with it. What I found was uncomfortable. In one file, it caught my OpenAI API key and my Supabase service role key — both hardcoded by the AI. The service role key bypasses RLS entirely, meaning anyone with it has unrestricted access to the database.

Across my last 5 projects, the most common issues were injection risks, missing or insecure auth, CORS misconfigurations, and hardcoded secrets. Claude Code is genuinely great at writing fast, functional code. But "functional" and "secure" are different things, and the AI optimizes for the first one.

VibeLint is free to try. The free version runs locally and catches the most common issues. Repo and install instructions at vibelint.dev. Happy to answer questions about how I built it or what the MCP integration looks like.

submitted by /u/vibelint_dev
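The standard fix for the hardcoded-key problem the post describes is to read credentials from the environment and keep them out of source entirely. A minimal sketch (mine, not from the post; the variable names are conventional Supabase ones, not a required API):

```python
import os

def load_supabase_config():
    # Read credentials from the environment instead of hardcoding them.
    # The service role key bypasses RLS, so it must only ever live in
    # server-side configuration, never in client code or a git repo.
    return {
        "url": os.environ["SUPABASE_URL"],
        "key": os.environ["SUPABASE_SERVICE_ROLE_KEY"],
    }

if __name__ == "__main__":
    # Demo values only; in production these come from your deploy platform.
    os.environ.setdefault("SUPABASE_URL", "https://example.supabase.co")
    os.environ.setdefault("SUPABASE_SERVICE_ROLE_KEY", "demo-key")
    print(load_supabase_config()["url"])
```

Raising `KeyError` when a variable is missing is deliberate: failing at startup beats silently running with an empty key.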
I built a full iOS app in 2 weeks with Claude Code. Here’s what it was great at, and where it broke.
I wanted to share an honest breakdown of what using Claude Code as my main dev tool actually felt like. This wasn’t a landing page or a toy project. I used it to build and ship a full React Native app to the App Store. The app has 225 lessons, 13 exercise types, a real-time duel system, Supabase backend/auth, subscriptions, and a bunch of gamification.

What Claude Code was great at

It was insanely fast at scaffolding. I could describe a feature and it would generate the project structure, screens, navigation, and boilerplate way faster than I would have done manually. It was also really strong for repetitive mechanical work. Once I had the pattern right, it helped me build out learning paths, exercise formats, and backend wiring much faster than normal. Supabase was also smoother than I expected. Auth, schemas, and edge functions were all very doable with the right prompts.

Where it broke

Big files were the biggest problem. Once I started feeding it large content files, it would lose the plot, repeat itself, or start hallucinating. Breaking content generation into much smaller lesson batches fixed most of that. It also had a tendency to overcorrect. Sometimes I wanted one small fix and it would try to rewrite an entire page. I got much better results once I started keeping prompts short, specific, and focused on one change at a time.

What workflow worked best

The best workflow for me was: short prompt → test visually → commit if good → move to the next chunk. Once I stopped treating it like magic and started treating it more like very fast pair programming, everything got easier. The more specific and pointed you can be with your prompts, the better. I also ended up using different models for different jobs. Opus was better for writing actual lesson content. Sonnet was better for mechanical edits and formatting.

What I’d tell anyone starting

Don’t try to make one giant prompt do everything. Break the app into small chunks. Keep prompts narrow. Verify visually. Commit constantly. If you do that, Claude Code becomes a lot more useful and a lot less chaotic.

The app is called Kiro. It’s basically Duolingo for AI skills, and I built the whole thing solo in about 2 weeks. Happy to answer questions if anyone here is building with Claude Code too.

submitted by /u/Kiro_ai
Built a tool to capture and search AI coding sessions across providers. Looking for feedback on the approach.
Core problem: AI sessions aren't searchable across providers. You solve something with Claude Code, need it again weeks later, can't find it. Start over.

What I built. Three capture methods:
- API proxy for OpenAI/Anthropic/Google endpoints (zero code changes)
- Native hooks for Claude Code and Gemini CLI (structured session data via stdin)
- Browser extension for ChatGPT/Claude.ai

Everything flows into a unified search: hybrid semantic (embeddings) + keyword (BM25), with RRF fusion for ranking. Sub-second results across all providers.

Hook-level DLP: when Claude Code reads .env files, actual secrets never reach the model. It intercepts file reads, replaces values with [REDACTED:API_KEY] placeholders, and passes the sanitized version to Claude. The model can reason about variables without seeing credentials.

Architecture:
- Python FastAPI backend
- Qdrant for vector search (OpenAI embeddings, 1536d)
- Supabase (PostgreSQL) for session storage
- Next.js frontend

Privacy: everything runs locally or in your account. Export/delete anytime. Nothing shared.

PyPI package: https://pypi.org/project/rclm (hooks + proxy)
Live beta: reclaimllm.com

Questions for this community:
- Claude Code users: would you actually use hook-level capture, or is the transcript file enough?
- DLP approach: is interception at file-read too aggressive, or is post-hoc flagging insufficient?
- Missing features: what would make this actually useful vs just interesting?
- Marketplace: given that sessions can be sanitized to a certain extent, would it make sense to build a marketplace where people can share/sell their chat sessions? I'm thinking primarily from an open-source perspective, since we're getting tied down to closed-source models.
- Enterprise: what enterprise uses can you think of for this service?

Honest feedback appreciated. If the approach is fundamentally wrong, I'd rather know now.

submitted by /u/Inevitable-Lack-8747
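The RRF fusion step the post mentions is a simple, well-known formula: each document scores the sum of 1/(k + rank) over every ranked list it appears in. A toy sketch (the doc ids and k=60 default are illustrative, not taken from the project):

```python
def rrf_fuse(rankings, k=60):
    """Combine ranked result lists with Reciprocal Rank Fusion.

    rankings: list of ranked lists of doc ids, best first.
    A doc's score is sum(1 / (k + rank)) across every list it appears in,
    so items ranked well by both semantic and keyword search rise to the top.
    """
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

semantic = ["s3", "s1", "s7"]  # e.g. embedding-search results
keyword = ["s1", "s9", "s3"]   # e.g. BM25 results
print(rrf_fuse([semantic, keyword]))  # → ['s1', 's3', 's9', 's7']
```

Note how `s1` wins despite never being ranked first by the semantic list: appearing near the top of both lists beats a single first-place finish, which is exactly why RRF is popular for hybrid search.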
I built a CLI that installs the right AI agent skills for your project in one command (npx skillsense)
Hey r/ClaudeAI, I got tired of spending 20-40 minutes manually setting up skills every time I started a new project. Find the right ones, download them, put them in the right folder, check for conflicts... pure friction. So I built skillsense.

npx skillsense

That's it. It reads your package.json / pyproject.toml / go.mod / Cargo.toml / Gemfile, detects your stack, and installs the correct SKILL.md files into .claude/skills/ (or .opencode/, .github/skills/, .vscode/ depending on your agent).

What it does:
• Detects 27 stacks: Next.js, React, Vue, Django, FastAPI, Rails, Go, Rust, Prisma, Supabase, Tailwind, Stripe, Docker...
• Applies combo rules (e.g. Next.js + Prisma + Supabase installs all three in the right order)
• Verifies SHA-256 integrity on every download
• Full rollback if anything fails
• Works with Claude Code, OpenCode, GitHub Copilot, and VS Code

Flags: --dry-run, --yes, --global, --agent

It's open source and the catalog is a YAML file in the repo — easy to contribute new skills.

GitHub: https://github.com/andresquirogadev/skillsense
npm: https://www.npmjs.com/package/skillsense

Happy to hear what stacks you'd want added!

submitted by /u/AndresQuirogaa
Built a conversational AI career tool in 5 days with no coding background — looking for honest feedback
I’m a paraprofessional with an education degree. Couldn’t find a job last week, so I built one instead.

Lune is a 10-question conversation that tries to surface what resumes miss. Not a resume builder, not a job board. It just asks what’s going on and tries to say something true back to you. It does passive constraint detection and gap analysis between what you say you want versus what you actually seem to need. The closing question is generated from the most specific thing you said in the whole conversation.

I stress-tested it against 42 synthetic personas — undocumented workers, formerly incarcerated people, grieving widowers, minors raising siblings. No failures, but I also built the thing so I’m probably missing stuff.

Stack if you care: Vercel, Claude Sonnet, Supabase, Resend, Stripe. Started as a single HTML file, now has a real backend.

The conversation is free. I’m not trying to get paying users right now; I just want people who will actually try it and tell me what’s broken or what doesn’t land. Strictly looking for feedback!

submitted by /u/visaversa123
Vibe coded a full SaaS, how do I actually make sure it’s secure before launching?
I’ve built a SaaS almost entirely with AI assistance (Claude) and I’m getting close to wanting real users on it. The stack is Next.js, Supabase, Stripe Connect, and Vercel. It’s got multiple user roles with different permissions, payments, email notifications, and a fair bit of data that really shouldn’t be visible across accounts.

I’m not a senior dev. I can sort of read and understand the code, but I didn’t write most of it from scratch. That’s what’s making me nervous. It looks fine but I don’t fully know what I don’t know.

∙ Anything Stripe Connect specific I should be auditing?
∙ Are there any tools that can scan for obvious vulnerabilities?

Has anyone gone through this process with a vibe coded app? What did your security checklist look like, and where did you find the gaps?

submitted by /u/becauseadele
Claude Code (CLI) vs. App: Is the terminal more token-efficient for Pro users?
Hey everyone, I'm about to pull the trigger on a Claude Pro subscription ($20 is a bit steep in my local currency, so I need to make it count).

I’ve noticed that using Claude in the browser seems to hit the usage limits very quickly. The desktop app felt a bit more stable, but I’m curious about Claude Code (the CLI tool). Is it the "meta" for power users who want to avoid the "You've reached your limit" message as long as possible? I'm mostly working on n8n automation and Supabase backends, so contexts can get messy pretty fast.

Would love to hear your experiences before I subscribe!

P.S. Used AI to help translate this post. I'm from Brazil.

submitted by /u/Objective_Office_409
View originalI accidentally built a 30-agent marketing system because I couldn't be bothered doing SEO manually
So I run a small web design studio for tradespeople — plumbers, electricians, builders. The kind of people who'd rather be fixing a boiler than thinking about their website. The problem was I had a product but absolutely no idea how to get it in front of people. I'm not a marketer. I'm a developer who keeps accidentally building tools instead of doing the actual work.

Anyway, I started building agents in Claude Code to handle my marketing. One for SEO keyword research. Then one for content strategy. Then one for writing the content. Then I thought "well, I should probably do Meta Ads too" so I built 8 more. Then social media. Then I built agents that improve the other agents (at this point I'm aware I have a problem).

I now have 30 agents across 3 channels plus infrastructure:
1) Meta Ads (8 agents): from competitor research all the way to campaign deployment
2) SEO (8 agents): query classification → content → outreach → learning
3) Social Media (8 agents): audience research → content → publishing → engagement
4) Infrastructure (6 agents): these ones scan for new tools and upgrade the others weekly. Yes, I built agents that improve agents. No, I don't know when to stop.

The bit I'm actually proud of: they all share a brain. It's a Supabase table called `marketing_knowledge`. When the Meta Ads agent discovers that pain-point hooks convert better than questions, the SEO content writer and social media agents pick that up automatically. Each cycle the whole thing gets a bit smarter.

It's all just markdown files. No executables, no binaries, nothing dodgy. You can read every line before installing.

```
git clone https://github.com/hothands123/marketing-agents.git
cd marketing-agents && bash install.sh
```

Then `/marketing-setup` to configure it for your business. I built it for myself but figured others might find it useful. Genuinely keen to hear what's missing — I've been staring at this for weeks and have lost all objectivity.
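The shared-brain pattern is easy to model: every agent writes insights to one table and reads everyone else's. A minimal sketch using SQLite in place of the post's Supabase table (the schema and function names are my assumptions, not the author's actual setup):

```python
import sqlite3

# In-memory stand-in for the shared `marketing_knowledge` Supabase table.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE marketing_knowledge (
        source_agent TEXT NOT NULL,
        insight TEXT NOT NULL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

def record_insight(agent, insight):
    # Any agent can publish what it learned this cycle.
    db.execute(
        "INSERT INTO marketing_knowledge (source_agent, insight) VALUES (?, ?)",
        (agent, insight),
    )

def insights_for(agent):
    # An agent reads everything the *other* agents discovered.
    return db.execute(
        "SELECT source_agent, insight FROM marketing_knowledge "
        "WHERE source_agent != ?",
        (agent,),
    ).fetchall()

record_insight("meta-ads", "pain-point hooks convert better than questions")
print(insights_for("seo-writer"))
```

The same two queries map directly onto insert/select calls against a real Supabase table; the point is only that the table, not any one agent, is the source of truth.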
submitted by /u/Humble_Ear_2012
Self-hosted monitoring for Claude Code & Codex
About a month after our team started using Claude Code, someone asked in Slack how much we were spending. Nobody knew. We looked around for a monitoring tool, didn't find one we liked, and ended up building our own.

Zeude is a self-hosted dashboard that tracks Claude Code and OpenAI Codex usage in one place. You get per-prompt token and cost breakdowns, a weekly leaderboard (with cohort grouping if your org is big enough to care), and a way to push skills, MCP servers, and hooks to your whole team from the dashboard instead of chasing people on Slack.

The big things in v1.0.0:
- Windows support. It was macOS/Linux only before. Now the whole team can use it regardless of OS.
- Codex integration. A lot of teams use both Claude Code and Codex, and tracking only one of them gives you half the picture on costs. Now both go through the same dashboard.
- Per-user skill opt-out. Team skill sync was already there, but it was all-or-nothing. Now individuals can turn off skills they don't want. Turns out not everyone wants every skill pushed to their machine.

Stack is Next.js + Supabase + ClickHouse + OTel Collector. All your data stays on your infra. We ran it internally for ~6 months before cleaning it up for open source. It's not perfect, but it solved a real problem for us and we figured others might be in the same spot.

https://github.com/zep-us/zeude

If you try it out, let me know what breaks.

submitted by /u/Lopsided_Yak9897
Built with Claude API: Give your agent SKILL.md and it handles the rest — Agenexus
I built Agenexus because I kept hitting the same wall: multi-agent systems require knowing your agents in advance. Every collaboration is hardcoded. There's no way for an agent to find a collaborator it wasn't pre-wired to work with.

Claude API is the core of how it works:
- Claude evaluates capability challenges to verify that agents are real and can do what they claim
- Claude powers the semantic matching between agents based on their SKILL.md profiles
- Each agent in a collaboration gets its own Claude-powered instance with its own conversation history

How I built it: Next.js frontend, Supabase for the database, Voyage AI for embeddings, Claude API for intelligence. The hardest part was designing the agent-native onboarding — no forms, no UI, just a markdown file the agent reads and follows autonomously.

Why agent-native: I wanted to build something where humans are optional. No human accounts exist on the platform. Agents register themselves, complete challenges, get matched, and collaborate. Humans just watch.

Free to try: give your agent agenexus.ai/skill.md and it handles the rest.

submitted by /u/Agenexus
I gave Claude Code a knowledge graph, spaced repetition, and semantic search over my Obsidian vault — it actually remembers things now
# I built a 25-tool AI Second Brain with Claude Code + Obsidian + Ollama — here's the full architecture

**TL;DR:** I spent a night building a self-improving knowledge system that runs 25 automated tools hourly. It indexes my vault with semantic search (bge-m3 on a 3080), builds a knowledge graph (375 nodes), detects contradictions, auto-prunes stale notes, tracks my frustration levels, does autonomous research, and generates Obsidian Canvas maps — all without me touching anything. Claude Code gets smarter every session because the vault feeds it optimized context automatically.

---

## The Problem

I run a solo dev agency (web design + social media automation for Serbian SMBs). I have 4 interconnected projects, 64K business leads, and hundreds of Claude Code sessions per week. My problem: **Claude Code starts every session with amnesia.** It doesn't remember what we did yesterday, what decisions we made, or what's blocked. The standard fix (CLAUDE.md + MEMORY.md) helped but wasn't enough.

I needed a system that:
- Gets smarter over time without manual work
- Survives context compaction (when Claude's memory gets cleared mid-session)
- Connects knowledge across projects
- Catches when old info contradicts new reality

## What I Built

### The Stack
- **Obsidian** vault (~350 notes) as the knowledge store
- **Claude Code** (Opus) as the AI that reads/writes the vault
- **Ollama** + **bge-m3** (1024-dim embeddings, RTX 3080) for local semantic search
- **SQLite** (better-sqlite3) for search index, graph DB, codebase index
- **Express** server for a React dashboard
- **2 MCP servers** giving Claude native vault + graph access
- **Windows Task Scheduler** running everything hourly

### 25 Tools (all Node.js ES modules, zero external dependencies beyond what's already in the repo)

#### Layer 1: Data Collection

| Tool | What it does |
|------|-------------|
| `vault-live-sync.mjs` | Watches Claude Code JSONL sessions in real-time, converts to Obsidian notes |
| `vault-sync.mjs` | Hourly sync: Supabase stats, AutoPost status, git activity, project context |
| `vault-voice.mjs` | Voice-to-vault: Whisper transcription + Sonnet summary of audio files |
| `vault-clip.mjs` | Web clipping: RSS feeds + Brave Search topic monitoring + AI summary |
| `vault-git-stats.mjs` | Git metrics: commit streaks, file hotspots, hourly distribution, per-project breakdown |

#### Layer 2: Processing & Intelligence

| Tool | What it does |
|------|-------------|
| `vault-digest.mjs` | Daily digest: aggregates all sessions into one readable page |
| `vault-reflect.mjs` | Uses Sonnet to extract key decisions from sessions, auto-promotes to MEMORY.md |
| `vault-autotag.mjs` | AI auto-tagging: Sonnet suggests tags + wikilink connections for changed notes |
| `vault-schema.mjs` | Frontmatter validator: 10 note types, compliance reporting, auto-fix mode |
| `vault-handoff.mjs` | Generates machine-readable `handoff.json` (survives compaction better than markdown) |
| `vault-session-start.mjs` | Assembles optimal context package for new Claude sessions |

#### Layer 3: Search & Retrieval

| Tool | What it does |
|------|-------------|
| `vault-search.mjs` | FTS5 + chunked semantic search (512-char chunks, bge-m3 1024-dim). Flags: `--semantic`, `--hybrid`, `--scope`, `--since`, `--between`, `--recent`. Retrieval logging + heat map. |
| `vault-codebase.mjs` | Indexes 2,011 source files: exports, routes, imports, JSDoc. "Where is the image upload logic?" actually works. |
| `vault-graph.mjs` | Knowledge graph: 375 nodes, 275 edges, betweenness centrality, community detection, link suggestions |
| `vault-graph-mcp.mjs` | Graph as MCP server: 6 tools (search, neighbors, paths, common, bridges, communities) Claude can use natively |

#### Layer 4: Self-Improvement

| Tool | What it does |
|------|-------------|
| `vault-patterns.mjs` | Weekly patterns: momentum score (1-10), project attention %, velocity trends, token burn ($), stuck detection, frustration/energy tracking, burnout risk |
| `vault-spaced.mjs` | Spaced repetition (FSRS): 348 notes tracked, priority-based review scheduling. Critical decisions resurface before you forget them. |
| `vault-prune.mjs` | Hot/warm/cold decay scoring. Auto-archives stale notes. Never-retrieved notes get flagged. |
| `vault-contradict.mjs` | Contradiction detection: rule-based (stale references, metric drift, date conflicts) + AI-powered (Sonnet compares related docs) |
| `vault-research.mjs` | Autonomous research: Brave Search + Sonnet, scheduled topic monitoring (competitors, grants, tech trends) |

#### Layer 5: Visualization & Monitoring

| Tool | What it does |
|------|-------------|
| `vault-canvas.mjs` | Auto-generates Obsidian Canvas files from knowledge graph (5 modes: full map, per-project, hub-centered, communities, daily) |
| `vault-heartbeat.mjs` | Proactive agent: gathers state from all services, Sonnet reasons about what needs attention, sends WhatsApp alerts |
| `vault-dashboard/` | React SPA dashboard (Expre |
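The 512-char chunking that `vault-search.mjs` uses before embedding can be sketched in a few lines. This is my illustration of the general technique, not the repo's code; the overlap parameter is an assumption (overlap keeps sentences that straddle a boundary retrievable from both sides):

```python
def chunk_text(text, size=512, overlap=64):
    """Split text into fixed-size character chunks for embedding.

    Each chunk is at most `size` chars; consecutive chunks share
    `overlap` chars so a match near a boundary isn't lost.
    """
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk:
            chunks.append(chunk)
        if start + size >= len(text):
            break
    return chunks

print(len(chunk_text("x" * 1200)))  # → 3
```

Each chunk would then be embedded separately (e.g. with bge-m3) and indexed alongside a pointer back to its source note.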
PACT v0.7.1 — What's changed since Compound Intelligence (and thank you!)
Hey everyone. Last week I posted about PACT when I added Compound Intelligence (the research knowledge base, capability baseline, and knowledge directory). A lot has happened since then and I wanted to share an update.

What's new since that post:

Subagents (v0.6.0): Three specialized agents that run in isolated contexts so your main session stays focused. pact-researcher verifies packages and APIs before you write code against them. pact-reviewer runs a governance checklist before commits. pact-tracer maps dependency chains before you edit shared code. They're Sonnet-based and dispatch automatically.

Live Dashboard (v0.5.0): Real-time visualization of agent activity. Session lanes, task tracking, per-type icons, activity timeline, and a task rating system. Rate Claude's work 1-5 after each task. Ratings feed into a scorecard that the agent reads at session start, so it adjusts based on your feedback. Runs locally.

Vector Memory (v0.7.0): Semantic search across all PACT knowledge using sqlite-vec and a local embedding model. No server, no API keys. Your bugs, solutions, research, and task feedback are all indexed and searchable by meaning, not just keywords. /pact-recall searches it on demand.

Embedded Agent Guide (v0.7.1): This is the big one for me personally. Besides my main Flutter+Dart project, I've been using PACT's principles in a production web app where Claude is embedded as an AI bookkeeper for a small business. The patterns translate cleanly: Supabase tables instead of YAML files, API middleware instead of shell hooks, system prompt redirections instead of CLAUDE.md. The guide (EMBEDDED.md) walks through the full translation with code examples and covers four common embedded agent patterns: bookkeeping/financial, customer support, operations/scheduling, and sales/CRM.

Project Context for Subagents (v0.7.1): A pact-context.yaml file that gives subagents project awareness. Before this, they launched blind and only knew what you told them in the prompt. Now they read your stack, conventions, critical paths, and external service gotchas before doing any work.

Bug fixes (v0.7.1): The feedback milestone prompts (Day 2 and Week 2) weren't triggering for anyone. The plugin's session-register script was a stripped-down copy that was missing the feedback, dashboard, and scorecard checks. Also fixed: the dashboard "ask" prompt never appeared on session start.

Oh, and THANK YOU! PACT has over 100 users now. I built this because I was frustrated with Claude forgetting everything between sessions and making the same mistakes I'd already corrected (or brand new mistakes that were equally mind-boggling). If you've tried PACT, I'd love to hear what worked and what didn't. What subsystems do you actually use? I personally get heavy mileage out of the cognitive redirections + hooks.

Side note... I just learned a few days ago that there is another project with the same acronym, and theirs is about a month older. Oops! I hope that doesn't get too confusing, but I worry I'm too deep into this to change project names. lol

Repo: https://github.com/jonathanmr22/pact

submitted by /u/jonathanmr22
I built a real-time PVP arena for Claude Code buddies — your companion's stats actually matter
After seeing the /buddy system ship, I spent a few days building Buddy Battle — a multiplayer terminal fighting game where your Claude Code companion is your fighter. It pulls your buddy from Claude Code and uses its stats to power your buddy's actions. Matches happen in real-time over Supabase. There's also a global ELO leaderboard. Would love to fight some of your buddies.

One command to jump in. In your terminal run:

npx -y buddy-battle@latest

Happy to answer questions about the architecture — the headless bot fallback and match token system were the most interesting parts to build.

submitted by /u/zenofase
I open-sourced my AI-curated Reddit feed (Self-hosted on Cloudflare, Supabase, and Vercel)
A week ago I shared a tool I built that scans Reddit and surfaces the actually useful posts about vibecoding and AI-assisted development. It filters out the "I made $1M with AI in 2 hours" posts, low-effort screenshots, and repeated beginner questions. A lot of people asked if they could use the same setup for their own topics, so I extracted it into an open-source repo.

How it works:
1. Every 15 minutes a Cloudflare Worker triggers the pipeline.
2. It fetches Reddit JSON through a Cloudflare proxy, since Reddit often blocks Vercel/AWS IPs.
3. A pre-filter removes low-signal posts before any AI runs.
4. Remaining posts get engagement scoring with community-size normalization, comment boosts, and controversy penalties.
5. Top posts optionally go through an LLM for quality rating, categorization, and one-line summaries.
6. A diversity pass prevents one subreddit from dominating the feed.

The stack:
- Supabase for storage
- Cloudflare Workers for cron + Reddit proxy
- Vercel for the frontend
- AI scoring optional, about $1-2/month with Claude Haiku

What you get: dark-themed feed with AI summaries and category badges, daily archives, RSS, weekly digest via Resend, anonymous upvotes, and a feedback form.

Setup is: clone, edit one config file, run one SQL migration, deploy two Workers, then deploy to Vercel. The config looks like this:

```js
const config = {
  name: "My ML Feed",
  subreddits: {
    core: [
      { name: "MachineLearning", minScore: 20, communitySize: 300_000 },
      { name: "LocalLLaMA", minScore: 15, communitySize: 300_000 },
    ],
  },
  keywords: ["LLM", "transformer model"],
  communityContext: `Value: papers with code, benchmarks, novel architectures.
Penalize: hype, speculation, product launches without technical depth.`,
};
```

GitHub: github.com/solzange/reddit-signal

Built with Claude Code. Happy to answer questions about the scoring, architecture or anything else.

submitted by /u/solzange
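Engagement scoring with community-size normalization (step 4 above) can be sketched like this. The weights and formula are my illustrative assumptions, not the repo's actual scoring code; the idea is just that raw upvotes are discounted in huge subreddits, comments boost, and a low upvote ratio penalizes:

```python
import math

def engagement_score(upvotes, comments, upvote_ratio, community_size):
    """Toy engagement score: normalize by community size, boost for
    discussion, penalize controversial posts. Weights are illustrative."""
    # A post with 40 upvotes in a 5k-member sub beats the same post in a 3M sub.
    normalized = upvotes / math.log10(community_size + 10)
    # Comments signal discussion; cap the boost at 2x.
    comment_boost = 1.0 + min(comments / 50.0, 1.0)
    # Heavily downvoted posts get scaled down by their upvote ratio.
    controversy_penalty = 1.0 if upvote_ratio >= 0.7 else upvote_ratio
    return normalized * comment_boost * controversy_penalty

small = engagement_score(40, 25, 0.95, community_size=5_000)
big = engagement_score(40, 25, 0.95, community_size=3_000_000)
print(small > big)  # → True: same raw numbers rank higher in a small community
```

A log-scale denominator is a common choice here: linear normalization would bury large subreddits entirely, while log keeps them competitive without letting sheer size dominate.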
Beginner in AI Coding
hello, I'm a newbie in AI coding. I'm using Claude and I'm amazed that it set up my Supabase and connected it to a web page it created, all in 20 minutes. So my question is: how can I increase my efficiency? I'm currently using Claude Code; is there any other program that's better? Any settings I need to change, or prompts I should know?

submitted by /u/JediQuinlanVos
View originalRepository Audit Available
Deep analysis of supabase/supabase — architecture, costs, security, dependencies & more
Key features include: AI Integrations, Analytics Buckets (with Iceberg), Auth Hooks, Authorization via Row Level Security, Auto-generated GraphQL API via pg_graphql, Auto-generated REST API via PostgREST, Automatic Embeddings, Branching.
Based on 26 social mentions analyzed, sentiment is 0% positive, 100% neutral, and 0% negative.
Sasha Rush
Professor at Cornell / Hugging Face
1 mention