Prisma is a next-generation Node.js and TypeScript ORM for PostgreSQL, MySQL, SQL Server, SQLite, MongoDB, and CockroachDB. It provides type-safety, automated migrations, and an intuitive data model.
Based on the provided social mentions, users view Prisma primarily as an experimental AI/ML tool for research and development purposes. The main strengths appear to be its interpretability features and architecture visualization capabilities, with developers appreciating its potential for understanding model internals and data flow. Key complaints center around it being described as a "crap prototype" by its own creators, suggesting it's still in early development stages with significant limitations. There's no clear pricing sentiment from these mentions, as discussions focus more on technical experimentation rather than commercial use. Overall, Prisma seems to have a niche reputation among AI researchers and developers as an interesting but unpolished tool for model interpretability work.
Mentions (30d): 13 (3 this week)
Reviews: 0
Platforms: 2
Sentiment: 0% (0 positive)
npm packages: 20
HuggingFace models: 40
Pricing found: $0 / month, $10 / month, $0.0080, $2.00, $49 / month
I built a CLI that installs the right AI agent skills for your project in one command (npx skillsense)
Hey r/ClaudeAI, I got tired of spending 20-40 minutes manually setting up skills every time I started a new project. Find the right ones, download them, put them in the right folder, check for conflicts... pure friction. So I built skillsense. npx skillsense That's it. It reads your package.json / pyproject.toml / go.mod / Cargo.toml / Gemfile, detects your stack, and installs the correct SKILL.md files into .claude/skills/ (or .opencode/, .github/skills/, .vscode/ depending on your agent). What it does: • Detects 27 stacks: Next.js, React, Vue, Django, FastAPI, Rails, Go, Rust, Prisma, Supabase, Tailwind, Stripe, Docker... • Applies combo rules (e.g. Next.js + Prisma + Supabase installs all three in the right order) • Verifies SHA-256 integrity on every download • Full rollback if anything fails • Works with Claude Code, OpenCode, GitHub Copilot, and VS Code Flags: --dry-run, --yes, --global, --agent It's open source and the catalog is a YAML file in the repo — easy to contribute new skills. GitHub: https://github.com/andresquirogadev/skillsense npm: https://www.npmjs.com/package/skillsense Happy to hear what stacks you'd want added! submitted by /u/AndresQuirogaa [link] [comments]
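The manifest-based detection the post describes can be sketched roughly like this. A hedged illustration only: the rule table and function names here are assumptions for the sake of the example, not skillsense's actual catalog or code.

```typescript
// Sketch of manifest-based stack detection: map package.json dependency
// names to skill identifiers. The SKILL_RULES table is illustrative.
type Manifest = {
  dependencies?: Record<string, string>;
  devDependencies?: Record<string, string>;
};

const SKILL_RULES: Record<string, string> = {
  next: "nextjs",
  react: "react",
  prisma: "prisma", // also matched via the @prisma/client scoped package below
  "@supabase/supabase-js": "supabase",
  tailwindcss: "tailwind",
};

function detectSkills(pkg: Manifest): string[] {
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  const found = new Set<string>();
  for (const name of Object.keys(deps)) {
    for (const [pattern, skill] of Object.entries(SKILL_RULES)) {
      if (name === pattern || name.startsWith(`${pattern}/`) || name === `@${pattern}/client`) {
        found.add(skill);
      }
    }
  }
  // Sorting gives a deterministic install order, the simplest form of the
  // "combo rules" idea (e.g. Next.js + Prisma installed together, in order).
  return [...found].sort();
}
```

The real tool layers combo rules, SHA-256 verification, and rollback on top of this kind of detection step.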
got sick of telling claude the same stuff every session so i built a thing
right so every time i start a new claude code session its the same conversation. "be concise." "dont use prisma." "conventional commits." "i write go not python." absolute groundhog day. so i built devid - one toml file with your identity in it, distributed to claude code, cursor, claude.ai, wherever. you tell it once and its done. the bit thats actually clever - theres a session-end hook that watches your claude code sessions for corrections and preferences. if you say "dont do it like that" or "i prefer X" it picks it up and queues it. if nothing interesting happened in the session it doesnt even make an api call. no tokens wasted. whole identity fits in about 290 tokens. fragments not sentences. been using it myself for a couple of days now and honestly the difference is night and day. claude just knows how i work from the first message. https://github.com/Naly-programming/devid dead easy to install: curl -fsSL https://raw.githubusercontent.com/Naly-programming/devid/main/install.sh | sh submitted by /u/Lazy-Explanation-467 [link] [comments]
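The session-end hook idea above — scanning a transcript for corrections and queuing them — can be sketched as a phrase matcher. The patterns and output format here are assumptions for illustration, not devid's implementation.

```typescript
// Sketch of correction/preference capture from a session transcript.
// A real hook would also dedupe against the existing identity file and
// skip the API call when nothing matched (as the post describes).
const CORRECTION_PATTERNS: RegExp[] = [
  /\bi prefer ([^.!?\n]+)/i,
  /\bdon'?t (use|do) ([^.!?\n]+)/i,
  /\balways ([^.!?\n]+)/i,
];

function extractPreferences(sessionText: string): string[] {
  const queued: string[] = [];
  for (const line of sessionText.split("\n")) {
    for (const pattern of CORRECTION_PATTERNS) {
      const m = line.match(pattern);
      if (m) queued.push(m[0].trim().toLowerCase()); // store as a short fragment
    }
  }
  return queued;
}
```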
Claude Code was making me re-explain my entire stack every session. Found a fix.
Every time I started a Claude Code session I was doing this ritual: "Ok so this project uses Next.js 14, PostgreSQL with Prisma, we auth with NextAuth, tokens expire after 24 hours, the refresh logic is in /lib/auth/refresh.ts, and by the way we already debugged a race condition in that file two weeks ago where..." You know the feeling. Claude is genuinely brilliant but it wakes up with complete amnesia every single time, and if your project has any real complexity you're spending the first 10-15 minutes just rebuilding context before you can do anything useful. Someone on HN actually measured this. Without memory, a baseline task took 10-11 minutes with Claude spinning up 3+ exploration agents just to orient itself. With memory context injected beforehand, the same task finished in 1-2 minutes with zero exploration agents needed. That gap felt insane to me when I read it, but honestly it matches what I was experiencing. This problem is actually a core foundation of Mem0 and why integrating it with Claude Code has been one of the most interesting things to see come together. It runs as an MCP server alongside Claude, automatically pulls facts out of your conversations, stores them in a vector database, and then injects the relevant ones back into future sessions without you lifting a finger. After a few sessions Claude just starts knowing things: your stack, your preferences, the bugs you've already chased down, how you like your code structured. It genuinely starts to feel personal in a way that's hard to describe until you experience it. Setup took me about 5 minutes: 1. Install the MCP server: pip3 install mem0-mcp-server which mem0-mcp-server # note this path for the next step 2. Grab a free API key at app.mem0.ai. The free tier gives you 10,000 memories and 1,000 retrieval calls per month, which is plenty for individual use. 3. 
Add this to your .mcp.json in your project root:

{
  "mcpServers": {
    "mem0": {
      "command": "/path/from/which/command",
      "args": [],
      "env": {
        "MEM0_API_KEY": "m0-your-key-here",
        "MEM0_DEFAULT_USER_ID": "default"
      }
    }
  }
}

4. Restart Claude Code and run /mcp and you should see mem0 listed as connected.

Here's what actually changes day to day: Without memory, debugging something like an auth flow across multiple sessions is maddening. Session 1 you explain everything and make progress. Session 2 you re-explain everything, Claude suggests checking token expiration (which you already know is 24 hours), and you burn 10 minutes just getting back to where you were. Session 3 the bug resurfaces in a different form and you've forgotten the specific edge case you uncovered in Session 1, so you're starting from scratch again.

With Mem0 running, Session 1 plays out the same way but Claude quietly stores things like "auth uses NextAuth with Google and email providers, tokens expire after 24 hours, refresh logic lives in /lib/auth/refresh.ts, discovered race condition where refresh fails when token expires during an active request." Session 2 you say "let's keep working on the auth fix" and Claude immediately asks "is this related to the race condition we found where refresh fails during active requests?" Session 3 it checks that pattern first before going anywhere else.

The same thing happens with code style preferences. You tell it once that you prefer arrow functions, explicit TypeScript return types, and 2-space indentation, and it just remembers. You stop having to correct the same defaults over and over.

A few practical things I learned: You can also just tell it things directly in natural language mid-conversation, something like "remember that this project uses PostgreSQL with Prisma" and it'll store it. You can query what it knows with "what do you know about our authentication setup?" which is surprisingly useful when you've forgotten what you've already taught it.
I've been using this alongside a lean CLAUDE.md for hard structural facts like file layout and build commands, and letting Mem0 handle the dynamic context that evolves as the project grows. They complement each other really well rather than overlapping. For what it's worth, mem0’s published benchmarks (the project has over 52K GitHub stars, so it's not some weekend experiment) show a 90% reduction in token usage compared to dumping full context every session, 91% faster responses, and +26% accuracy over OpenAI's memory implementation on the LOCOMO benchmark. The free tier is genuinely sufficient for solo dev work; graph memory, which tracks relationships between entities for more complex reasoning, is the only thing locked behind the paid plan, and I haven't needed it yet. Has anyone else been dealing with this? Curious how others are handling the session amnesia problem, because it was genuinely one of my bigger frustrations with the Claude Code workflow and I feel like it doesn't get talked about enough relative to how much time it actually costs. submitted by /u/singh_taranjeet [link] [comments]
I built Claude Code skills that scaffold full-stack projects so I never have to do boilerplate setup again
I’ve been building client projects for years, and the setup phase always slowed me down — same auth setup, same folder structure, same CI config every time. So I built Claude Code skills to handle this interactively: /create-frontend-project — React, Next.js, or React Native /create-node-api — Express or NestJS with DB + auth /create-monorepo — full Turborepo with shared packages /scaffold-app — full folder structure + components + extras It always pulls the latest versions (no outdated pinned deps), and I run nightly smoke tests to catch any upstream issues. Supports 50+ integrations like HeroUI v3, shadcn, Redux, Zustand, Prisma, Drizzle, TanStack, and more. MIT licensed: https://github.com/Global-Software-Consulting/project-scaffolding-skills Would love feedback if you’re using Claude Code 🙌 submitted by /u/BackgroundTimely5490 [link] [comments]
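For readers who haven't written one, a skill like the ones above lives in a SKILL.md file with YAML frontmatter that tells the agent when to invoke it. The fragment below is an illustrative sketch, not a file from that repo — the step wording and the `create-node-api` body are assumptions.

```markdown
---
name: create-node-api
description: Scaffold an Express or NestJS API with database and auth wiring.
---

# create-node-api

1. Ask which framework (Express or NestJS) and which ORM (Prisma or Drizzle).
2. Generate the folder structure and install the latest package versions.
3. Wire up the DB connection and auth middleware, then run the smoke tests.
```

The `description` field is what lets the agent match a user request to the skill; the numbered body is the procedure it follows once invoked.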
I built an AI bookkeeping app with Claude Code
I’ve been building AICountant with Claude Code and it’s finally at a point where it feels useful enough to share here. It’s an AI bookkeeping app for freelancers, self-employed people, and small businesses. What it does: Upload a receipt photo through the site, or send one through Telegram / Discord Extract vendor, date, total, tax, and line items automatically Convert foreign currency receipts using the historical exchange rate from the receipt date Organize everything into a clean searchable ledger Support English and French Give deduction guidance during review Claude Code helped with most of the actual implementation. I used it across the stack for Next.js App Router, Prisma + PostgreSQL, Vercel Blob storage, UI iteration, and the receipt-processing flow. A good example was the currency conversion feature. I asked for multi-currency support and Claude helped wire the full flow together: schema updates, exchange-rate fetching, caching, error handling, and UI updates. That would have taken me a lot longer solo. A big reason I built it this way was to reduce friction. I didn’t want receipt tracking to be something people only do later from a dashboard, so I wanted chat-based capture to be part of the workflow from the start. It’s free to try in beta right now. Link: https://ai-countant.vercel.app/ Beta code: HUJA-VJG5 Happy to answer questions about the stack, workflow, or what using Claude Code felt like on a real project. submitted by /u/Ok_Lavishness_7408 [link] [comments]
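The historical-rate conversion flow described above — fetch the rate for the receipt date, cache it, round to cents — can be sketched like this. The rate source is injected because the app's actual provider isn't named; all identifiers here are illustrative assumptions.

```typescript
// Sketch of per-date exchange-rate conversion with caching. One network
// call per (date, currency pair); repeat receipts hit the cache.
type RateFetcher = (date: string, from: string, to: string) => number;

function makeConverter(fetchRate: RateFetcher) {
  const cache = new Map<string, number>();
  return function convert(amount: number, from: string, to: string, receiptDate: string): number {
    if (from === to) return amount;
    const key = `${receiptDate}:${from}:${to}`;
    let rate = cache.get(key);
    if (rate === undefined) {
      rate = fetchRate(receiptDate, from, to); // e.g. a historical-rates API call
      cache.set(key, rate);
    }
    return Math.round(amount * rate * 100) / 100; // round to cents
  };
}
```

Caching by receipt date matters here because historical rates never change, so every repeat lookup is free.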
I built 9 free Claude Code skills for medical research — from lit search to manuscript revision
I'm a radiology researcher and I've been using Claude Code daily for about a year now. Over time I built a set of skills that cover most of the research workflow — from searching PubMed to preparing manuscripts for submission. I open-sourced them last week and wanted to share. What's included (9 skills): search-lit — Searches PubMed, Semantic Scholar, and bioRxiv. Every citation is verified against the actual API before being included (no hallucinated references). check-reporting — Audits your manuscript against reporting guidelines (STROBE, STARD, TRIPOD+AI, PRISMA, ARRIVE, and more). Gives you item-by-item PRESENT/PARTIAL/MISSING status. analyze-stats — Generates reproducible Python/R code for diagnostic accuracy, inter-rater agreement, survival analysis, meta-analysis, and demographics tables. make-figures — Publication-ready figures at 300 DPI: ROC curves, forest plots, flow diagrams (PRISMA/CONSORT/STARD), Bland-Altman plots, confusion matrices. design-study — Reviews your study design for data leakage, cohort logic issues, and reporting guideline fit before you start writing. write-paper — Full IMRAD manuscript pipeline (8 phases from outline to submission-ready draft). present-paper — Analyzes a paper, finds supporting references, and drafts speaker scripts for journal clubs or grand rounds. grant-builder — Structures grant proposals with significance, innovation, approach, and milestones. publish-skill — Meta-skill that helps you package your own Claude Code skills for open-source distribution (PII audit, license check). Key design decisions: Anti-hallucination citations — search-lit never generates references from memory. Every DOI/PMID is verified via API. Real checklists bundled — STROBE, STARD, TRIPOD+AI, PRISMA, and ARRIVE checklists are included (open-license ones). For copyrighted guidelines like CONSORT, the skill uses its knowledge but tells you to download the official checklist. 
Skills call each other — check-reporting can invoke make-figures to generate a missing flow diagram, or analyze-stats to fill in statistical gaps. Install: git clone https://github.com/aperivue/medical-research-skills.git cp -r medical-research-skills/skills/* ~/.claude/skills/ Restart Claude Code and you're good to go. Works with CLI, desktop app, and IDE extensions. GitHub: https://github.com/aperivue/medical-research-skills Happy to answer questions about the implementation or take feature requests. If you work in a different research domain, the same skill architecture could be adapted — publish-skill was built specifically for that. submitted by /u/Independent_Face210 [link] [comments]
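The anti-hallucination rule above — every citation verified against an API before inclusion — reduces to a simple filter step. A hedged sketch: the resolver is injected so it could be backed by a Crossref or PubMed lookup in practice; this mirrors the idea, not the skill's actual code.

```typescript
// Sketch of verify-before-cite: only keep citations the external record
// system can confirm. Nothing is ever emitted from model memory alone.
type Citation = { id: string; title: string };
type Resolver = (id: string) => Citation | null;

function verifiedBibliography(candidates: Citation[], resolve: Resolver): Citation[] {
  const kept: Citation[] = [];
  for (const c of candidates) {
    const record = resolve(c.id); // e.g. a DOI lookup against Crossref, or a PMID against PubMed
    // Drop anything the API can't confirm, including title mismatches.
    if (record && record.title === c.title) kept.push(c);
  }
  return kept;
}
```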
I convinced an OSS dev to ship an AI context engine that saves 30K–60K tokens per conversation in Claude Code sessions on my repo
I found Codesight on GitHub yesterday while searching for something to stop Claude Code from wasting tokens to understand my repo. It already had a structured map (routes, DB schema, components, env vars, hot files), an HTML report, and an MCP server, but it was still very rough. I emailed the dev with the exact pain points I was hitting in my own TS/Next.js project and asked for specific fixes: AST parsing for Next.js + Prisma, an eval suite with ground truth, token telemetry, and profiles for Claude Code / Cursor. He replied right away saying he’d work on it that night. Over the next few hours we went back and forth three times. I ran it on my repo, sent him concrete issues like “this Next.js route detection missed X” or “the Prisma schema parsing needs Y,” he shipped updates, I retested, and sent more feedback. By the end of that loop, it had gone from “rough script” to a real tool I’m going to use every day for my Claude Code workflow. It could be really useful to most Claude Code, Cursor, or other AI-coding users. Right now it has smart parsing for TypeScript, Python, and Go (it understands your actual routes, models, and components instead of just guessing), an eval suite, telemetry, and config + profiles for Claude Code / Cursor. Blast radius (this is the feature that sealed it for me): shows exactly which files, routes, and tests depend on a given file, so Claude can answer “if I change this, what breaks?” without guessing. Now it genuinely cuts the “wasted exploration” phase that used to eat 30K–60K tokens per deep session on my project. Repo link: https://github.com/Houseofmvps/codesight I’m curious: What’s the most ridiculous way Claude / Cursor / Codex has wasted tokens just trying to understand your repo? How do you actually deal with it, or is there any other tool I can try? submitted by /u/kingofmadras [link] [comments]
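The "blast radius" feature — which files break if this one changes — is a reverse-dependency traversal over the import graph. A minimal sketch of the general technique, with illustrative file names; this is not Codesight's code.

```typescript
// Given a forward import graph (file -> files it imports), find every file
// that transitively depends on a target file.
function blastRadius(imports: Record<string, string[]>, target: string): string[] {
  // Invert the graph: imported -> list of importers.
  const dependents = new Map<string, string[]>();
  for (const [file, deps] of Object.entries(imports)) {
    for (const dep of deps) {
      if (!dependents.has(dep)) dependents.set(dep, []);
      dependents.get(dep)!.push(file);
    }
  }
  // Walk upward from the target, collecting everything that reaches it.
  const affected = new Set<string>();
  const stack = [target];
  while (stack.length > 0) {
    const current = stack.pop()!;
    for (const file of dependents.get(current) ?? []) {
      if (!affected.has(file)) {
        affected.add(file);
        stack.push(file);
      }
    }
  }
  return [...affected].sort();
}
```

Handing an agent this precomputed answer is what replaces the open-ended "let me read a dozen files to see what depends on this" exploration.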
Came across this Claude Code workflow visual
I came across this Claude Code workflow visual while digging through some Claude-related resources. Thought it was worth sharing here. It does a good job summarizing how the different pieces fit together: CLAUDE.md, the memory hierarchy, skills, hooks, project structure, and the workflow loop. The part that clarified things for me was the memory layering. Claude loads context roughly like this:
~/.claude/CLAUDE.md -> global memory
/CLAUDE.md -> repo context
./subfolder/CLAUDE.md -> scoped context
Subfolders append context rather than replacing it, which explains why some sessions feel “overloaded” if those files get too big. The skills section is also interesting. Instead of repeating prompts, you define reusable patterns like:
.claude/skills/testing/SKILL.md
.claude/skills/code-review/SKILL.md
Claude auto-invokes them when the description matches. Another useful bit is the workflow loop they suggest: cd project && claude -> Plan mode -> Describe feature -> Auto accept -> /compact -> commit frequently. Nothing groundbreaking individually, but seeing it all in one place helps. Anyway, sharing the image in case it’s useful for others experimenting with Claude Code. Curious how people here are organizing CLAUDE.md, skills, and hooks. The ecosystem is still evolving, so workflows seem pretty personal right now. Visual credits: Brij Kishore Pandey [workflow diagram image] submitted by /u/SilverConsistent9222 [link] [comments]
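The append-style layering described above (global, then repo, then subfolder) can be made concrete in a few lines. A sketch under stated assumptions: the file paths are the ones the post names, but the merge logic itself is an illustration of the append behavior, not Claude Code's loader.

```typescript
// Sketch of append-style context layering: more specific layers are
// appended after more global ones, never replacing them. This is also why
// an oversized subfolder file can "overload" a session -- everything above
// it still loads too.
function loadContext(layers: { path: string; content: string }[]): string {
  return layers.map((l) => `# from ${l.path}\n${l.content}`).join("\n\n");
}
```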
I built a tool that saves ~50K tokens per Claude Code conversation by pre-indexing your codebase
Every Claude Code conversation starts the same way — it spends 10-20 tool calls exploring your codebase. Reading files, scanning directories, checking what functions exist. This happens every single conversation, and on a large project it burns 30-50K tokens before any real work begins. I built ai-codex to fix this. It's a single script that scans your project and generates 5 compact markdown files: routes.md — every API route with methods and auth tags pages.md — full page tree with client/server flags lib.md — all library exports with function signatures schema.md — database schema compressed to key fields only components.md — component index with props You run it once (npx ai-codex), add one line to your CLAUDE.md telling Claude to read these files first, and every future conversation skips the exploration phase entirely. Real example from my project (950+ API routes, 255 DB models): Without codex: ~15 Serena/Read calls to understand the finance module. With codex: 5 grep calls on the pre-built index, instant full picture — routes, pages, schema, lib exports, components. All in parallel, all under 2 seconds. The whole thing was designed and built by Claude Code itself in a single conversation session. npm: npx ai-codex GitHub: https://github.com/skibidiskib/ai-codex Works with Next.js (App Router & Pages Router) and generic TypeScript projects. Auto-detects Prisma schemas. MIT licensed. submitted by /u/After-Confection-592 [link] [comments]
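The pre-indexing step — walk the project once, emit compact markdown for the agent to grep later — can be sketched for the routes.md case. The input shape here is an assumption; the real tool derives routes from the filesystem, and the exact file format is its own.

```typescript
// Sketch of emitting a compact routes.md index: one greppable line per
// route, with method and an auth tag, instead of making the agent read
// every route file.
type Route = { path: string; method: string; auth: boolean };

function renderRoutesMd(routes: Route[]): string {
  const lines = routes.map(
    (r) => `- ${r.method} ${r.path}${r.auth ? " [auth]" : ""}`
  );
  return ["# routes.md", ...lines].join("\n");
}
```

The payoff is that a single grep over this file answers "what routes touch orders?" in one tool call instead of a directory crawl.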
1000 hours of vibe coding
If you want to actually ship real products instead of just playing around with AI, you need to change your approach. Here is a straight-to-the-point breakdown of what works and what doesn't: Stop treating AI like an architect: Treat it like a junior developer. Discuss what you want to build and let it find edge cases before any implementation starts. Level 1 Prompting (Noob): Asking the AI to build the entire app in one go (e.g., "Build me a competitor pricing tracker"). The AI makes all the design and tech stack decisions, resulting in completely unusable output. Level 2 Prompting (Intermediate): Providing features and capabilities, but leaving out the technical architecture. The AI has to guess the edge cases, resulting in output that is somewhat usable but not production-ready. Level 3 Prompting (Pro): Figuring out the entire Product Requirement Document (PRD) with the AI agent first. Define the core logic, user personas, step-by-step flows, and a rigid technical architecture (e.g., Supabase with Postgres and Prisma). Ask the AI to poke holes in the logic before it writes a single line of code. Phase the implementation: Never ask the AI to code the whole app at once. Ask it to create a phased plan with clear deadlines and deliverables for each step. Break down complex tasks: If the AI has too much to do, it will skip crucial decision-making steps and just guess (often incorrectly). You need to make the core product decisions, not the AI. Control your own design: Never let the AI decide your design language. Build out the user flows and wireframes yourself, otherwise, the AI will generate generic dashboards that don't fit your product. Use a strict instruction file: Create an agent.md (or cloud.md) file. Use this to define your product structure, coding style, error handling, and restricted commands (e.g., explicitly telling it never to run database migrations) so you don't have to repeat yourself in every prompt. submitted by /u/lazycodewiz [link] [comments]
Trying to build a text-based, AI-powered RPG game where your stats, world and condition actually matter over time (fixing AI amnesia)
Me and my friend always used to play a kind of RPG with Claude, where we made a prompt defining it as the game's engine, made up some cool scenario, and then acted as the player while it acted as the game/GM. This was cool, but after like 5 turns you would always get exactly what you wanted: you could be playing as a caveman and say "I go into a cave and build a nuke" and Claude would find some way to hallucinate that into reality. Standard AI chatbots suffer from severe amnesia. If you try to play a game with them, they forget your inventory and hallucinate plotlines after ten minutes. So my friend and I wanted to build an environment where actions made and developed always happen according to a timeline and are remembered, so that past decisions can influence the future. To fix the amnesia problem, we entirely separated the narrative from the game state. The Stack: We use Next.js, PostgreSQL and Prisma for the backend. The Engine: Your character sheet (skills, debt, faction standing, local rumors, as well as detailed game state and narrative) lives in a hard database. When you type a freeform move in natural language, a resolver AI adjudicates it against active world pressures that are determined by many custom and completely separate AI agents (like scarcity or unrest). The Output: Only after the database updates do the many agents responsible for each part of narrative and GMing generate the story text, inventory, changes to world and game state, etc. We put up a small alpha called https://altworld.io We are looking for feedback on the core loop and whether the UI effectively communicates the game loop, and whether you have any advice on how else to handle using AI in games without suffering from sycophancy? submitted by /u/Dace1187 [link] [comments]
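The state-first turn loop described above — adjudicate against hard state, mutate, only then narrate — can be sketched in miniature. All names here (the state shape, the crafting rule) are illustrative assumptions, not altworld.io's schema.

```typescript
// Sketch of a state-first resolver: the rule layer rejects impossible
// actions (a caveman can't craft a nuke), and narration is generated only
// from the committed result, so the story can never contradict the state.
type GameState = { inventory: string[]; knownTech: string[] };

function resolveTurn(
  state: GameState,
  move: { craft: string }
): { state: GameState; narration: string } {
  // Adjudication happens against hard data, not the narrator's imagination.
  if (!state.knownTech.includes(move.craft)) {
    return {
      state, // unchanged: the database-side rule rejected the move
      narration: "You fumble with materials you don't understand. Nothing happens.",
    };
  }
  const next = { ...state, inventory: [...state.inventory, move.craft] };
  // Narration runs only AFTER the state mutation is committed.
  return { state: next, narration: `You craft a ${move.craft}.` };
}
```

In the real system the narration step would be an LLM call fed the committed diff; the key property is the ordering, not the prose.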
Tested MiniMax M2.7 Against Claude Opus 4.6 - Here Are The Results
Full disclosure before the post: I work closely with the Kilo Code team, and we often test models against each other. I'm sharing results from our latest benchmark: MiniMax M2.7 vs Claude Opus 4.6 on three real coding tasks.

Test Design: Created three TypeScript codebases and ran both models in Code mode in Kilo Code for VS Code.
Test 1: Full-Stack Event Processing System (35 points) - Build a complete system from a spec, including async pipeline, WebSocket streaming, and rate limiting
Test 2: Bug Investigation from Symptoms (30 points) - Trace 6 bugs from production log output to root causes and fix them
Test 3: Security Audit (35 points) - Find and fix 10 planted security vulnerabilities across a team collaboration API

TL;DR: Both models found all 6 bugs and all 10 security vulnerabilities in our tests. Claude Opus 4.6 produced more thorough fixes and 2x more tests. MiniMax M2.7 delivered 90% of the quality for 7% of the cost ($0.27 total vs $3.67).

Test 1 Results: Both models got this prompt: The spec required 7 components: event ingestion API with API key auth, async processing pipeline with exponential backoff retry, event storage with processing history, query API with pagination and filtering, WebSocket endpoint for live streaming, per-key rate limiting, and health/metrics endpoints. [Test 1 score table image] Claude Opus 4.6 lost 2 points for not generating a README (the spec asked for one). MiniMax M2.7 generated a README but lost points on architecture and test coverage.

Test 2 Results: Built an order processing system with 4 interconnected modules (gateway, orders, inventory, notifications) and planted 6 bugs. We gave both models the codebase, a production log file showing symptoms, and a memory profile showing growth data. The prompt listed the 6 symptoms and asked both models to investigate, find root causes, and fix them.
[Test 2 score table image] Both models verified their fixes by running curl requests against the server. Claude Opus 4.6 explicitly referenced log entries when explaining each bug, while MiniMax M2.7 jumped more directly to the code.

Test 3 Results: We built a team collaboration API (Hono + Prisma + SQLite) with 10 planted security vulnerabilities. We asked both models to audit the codebase, categorize each vulnerability by OWASP, explain the attack vector, rate severity, and implement fixes. Both models found all 10 vulnerabilities with correct OWASP categorizations. The 4-point gap is entirely in fix quality. [Test 3 score table image]

Overall Results: [overall score table image] We’ve been testing MiniMax models since M2 last November. Earlier versions competed against other open-weight models like GLM 4.7 and GLM-5. With each release, the scores climbed and the cost stayed low. MiniMax M2.7 is the first version where we felt the right comparison was a frontier model rather than another open-weight one. It matched Claude Opus 4.6’s detection rate on every test in this benchmark, finding the same bugs and the same vulnerabilities. The fixes aren’t as thorough yet, but the diagnostic gap between open-weight and frontier models is shrinking with every release.

Takeaways: For building from scratch: Claude Opus 4.6 produced 41 integration tests and a modular architecture. MiniMax M2.7 built the same features with 20 unit tests and a flatter structure, at $0.13 vs $1.49. For debugging: Both models found all 6 root causes from log symptoms. MiniMax M2.7 even produced a better fix for the floating-point bug. Claude Opus 4.6 added rollback logic that MiniMax M2.7 missed.
For security work: Both models found all 10 vulnerabilities. Claude Opus 4.6’s fixes are closer to what you’d ship (proper key derivation, feature-preserving alternatives, defense-in-depth). MiniMax M2.7 closes the same vulnerabilities with simpler approaches and sometimes flags its own shortcuts. On cost: $3.67 total for Claude Opus 4.6 vs $0.27 for MiniMax M2.7. Detection was identical. The gap is in how thorough the fixes are. More details from the test -> https://blog.kilo.ai/p/we-tested-minimax-m27-against-claude submitted by /u/alokin_09 [link] [comments]
How I used Claude to build a persistent life-sim that completely solves "AI Amnesia" by separating the LLM from the database
If you've ever tried building an AI-driven game or agent, you know the biggest hurdle is the context window. It's fun for ten minutes, and then the model forgets your inventory, hallucinates new rules, and completely loses track of the world state. I spent the last few months using Claude to help me architect and code a solution to this. The project is called ALTWORLD. (Running on a self made engine called StoriDev) What I Built & What It Does: ALTWORLD is a stateful sim with AI-assisted generation and narration layered on top. Instead of using an LLM as a database, the canonical run state is stored in structured tables and JSON blobs in PostgreSQL. When a player inputs a move, turns mutate that state through explicit simulation phases first. The narrative text is generated after state changes, not before. This strict separation guarantees that actions made and developed always happen according to a timeline and are remembered so that past decisions can influence the future. The AI physically cannot hallucinate a sword into your inventory because the PostgreSQL database will reject the logic. How Claude Helped: I used Claude heavily for the underlying engineering rather than just the prose generation. The Architecture: Claude helped me structure the Next.js App Router, Prisma, and PostgreSQL stack to handle complex transactional run creation. The "World Forge": The game has an AI World Forge where you pitch a scenario, and it generates the factions, NPCs, and pressures. Claude was instrumental in writing the strict JSON schema validation and normalization pipelines that convert those generative drafts into hard database rows. The Simulation Loop: Claude helped write the lock-recovery and state-mutation logic for the turn advancement pipeline so that world systems and NPC decisions resolve before the narrative renderer is even called. 
Because the app can recover, restore, branch, and continue purely from hard data, it forces a materially constrained life-sim tone rather than a pure power fantasy. Free to Try: The project is completely free to try. I set up guest preview runs with a limited number of free moves before any account creation is required. I would love to hear feedback from other developers on this sub who are working on persistent AI agents or decoupled architectures! Link: altworld.io submitted by /u/Altworld-io [link] [comments]
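The World Forge step above — strict validation and normalization of generative drafts before they become database rows — can be sketched as a single normalizer. The field names (`name`, `hostility`) and the 0–100 clamp are illustrative assumptions, not ALTWORLD's schema.

```typescript
// Sketch of draft -> validated-row normalization: never trust model output
// to be well-typed. Malformed drafts are rejected; out-of-range values are
// clamped into the range the simulation expects.
type FactionDraft = { name?: unknown; hostility?: unknown };
type FactionRow = { name: string; hostility: number };

function normalizeFaction(draft: FactionDraft): FactionRow | null {
  if (typeof draft.name !== "string" || draft.name.trim() === "") return null;
  const hostility = Number(draft.hostility);
  if (!Number.isFinite(hostility)) return null;
  return {
    name: draft.name.trim(),
    hostility: Math.min(100, Math.max(0, hostility)), // clamp, don't trust
  };
}
```

Only rows that survive this gate ever reach PostgreSQL, which is what makes the "the database will reject the logic" guarantee enforceable.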
I built an MCP server that connects 18 e-commerce tools to Claude — and Claude built most of it
I run an e-commerce business and got tired of jumping between Shopify, Klaviyo, Google Analytics, Triple Whale, Gorgias, and Xero dashboards every morning. So I built a tool that connects all of them to Claude via MCP. Now instead of opening 6 tabs I just ask questions like: - "Which Klaviyo campaigns drove the most Shopify orders this month?" - "Compare my Google Ads ROAS to my Meta Ads ROAS" - "Show me outstanding Xero invoices over 60 days and my current cash position" - "What's my shipping margin - am I making or losing money on shipping via ShipStation?" - "Which products have the highest refund rate and worst reviews?" It cross-references data between sources in one query, which is the bit no single dashboard can do. Claude built most of this. The entire codebase was built with Claude Code (Opus). I'm talking full-stack - the React Router app, Prisma schema, OAuth flows for Google/Xero/Meta, API clients for all 18 data sources, the MCP server itself, Stripe billing, email verification, the marketing site, SEO, blog with MDX, even the Xero integration was ported from another project by Claude reading the source code and adapting it. I'd describe my role as product owner and QA... I decided what to build, tested it, reported bugs, and Claude fixed them. The back-and-forth was remarkably efficient. Things like "fly logs show this error" → Claude reads the logs → identifies the issue → fixes it in one go. Some stats from the build: - 18 data sources integrated - OAuth flows for Google, Xero, Meta, and Shopify - Full MCP server with 30+ tools - Marketing site with SEO, blog, live demo (also powered by Claude) - Stripe billing with seats, invoices, and subscription gating - Email verification, Google login, password reset - Referral program Built in days, not months. 
Currently supports: Shopify, Klaviyo, Google Analytics, Google Ads, Google Search Console, Triple Whale, Gorgias, Recharge, Xero, ShipStation, Meta Ads, Microsoft Clarity, YouTube, Judge.me, Yotpo, Reviews.io, Smile.io, and Swish. Works with Claude.ai via Connectors - just paste the MCP URL and you're connected. Also works with Claude Desktop and Claude Code. There's a live demo on the site where you can try it with simulated data - no signup needed: https://ask-ai-data-connector.co.uk/demo Happy to answer questions about the MCP implementation or the experience of building a full SaaS with Claude. submitted by /u/deepincode [link] [comments]
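The cross-source join the post calls out — attributing orders from one system to campaigns from another — is, at its core, a merge on a shared key. A hedged sketch: the shapes and field names (a UTM-style campaign id) are assumptions for illustration, not this product's data model or any vendor's API.

```typescript
// Sketch of a cross-source join: sum order revenue per campaign by matching
// a shared attribution key, the kind of question no single dashboard answers.
type Campaign = { id: string; name: string };
type Order = { total: number; utmCampaign: string | null };

function revenueByCampaign(campaigns: Campaign[], orders: Order[]): Record<string, number> {
  const revenue: Record<string, number> = {};
  for (const c of campaigns) revenue[c.name] = 0;
  for (const o of orders) {
    const c = campaigns.find((x) => x.id === o.utmCampaign);
    if (c) revenue[c.name] += o.total; // unattributed orders are simply skipped
  }
  return revenue;
}
```

An MCP tool wrapping this would fetch each side from its own API, then return the merged result as one answer to one question.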
Trying to build a text-based, AI-powered RPG game where your stats, world and condition actually matter over time (fixing AI amnesia)
Me and my friend always used to play a kind of RPG with Gemini, where we made a prompt defining it as the game's engine, made up some cool scenario, and then acted as the player while it acted as the game/GM. This was cool, but after like 5 turns you would always get exactly what you wanted: you could be playing as a caveman and say "I go into a cave and build a nuke" and Gemini would find some way to hallucinate that into reality. Standard AI chatbots suffer from severe amnesia. If you try to play a game with them, they forget your inventory and hallucinate plotlines after ten minutes. So my friend and I wanted to build an environment where actions made and developed always happen according to a timeline and are remembered, so that past decisions can influence the future. To fix the amnesia problem, we entirely separated the narrative from the game state. The Stack: We use Next.js, PostgreSQL and Prisma for the backend. The Engine: Your character sheet (skills, debt, faction standing, local rumors, as well as detailed game state and narrative) lives in a hard database. When you type a freeform move in natural language, a resolver AI adjudicates it against active world pressures that are determined by many custom and completely separate AI agents (like scarcity or unrest). The Output: Only after the database updates do the many AI agents responsible for each part of narrative and GMing generate the story text, inventory, changes to world and game state, etc. We put up a small alpha called altworld.io We are looking for feedback on the core loop and whether the UI effectively communicates the game loop, and whether you have any advice on how else to handle using AI in games without suffering from sycophancy? submitted by /u/Lukinator6446 [link] [comments]
Repository Audit Available
Deep analysis of prisma/prisma — architecture, costs, security, dependencies & more
Yes, Prisma offers a free tier.
Based on user reviews and social mentions, the most common pain point is token usage.
Based on 25 social mentions analyzed, 0% of sentiment is positive, 100% neutral, and 0% negative.