PlanetScale offers the world’s fastest and most scalable cloud hosting for Vitess and Postgres.
PlanetScale was founded in 2018 by the co-creators and maintainers of the Vitess open source project. We are a globally distributed company that values high performance, accountability, and integrity: a small team of database experts with a combined experience of over 600 years scaling some of the largest systems on the planet.

Time is the only thing we cannot buy more of. Nobody owes us tomorrow. You can only build a future you believe in. We make bets, we take risks, we are optimists.

We all drive the company forward and we all have key contributions to make. Nobody gets to sit back. Nobody takes a backseat to our success and comes along for the ride.

We do not tolerate leaders who are not experts in their craft. This is in service to our people: nobody wants to work for someone they do not have professional respect for.

Raw performance says more than a million features ever could. Speed and consistency conquer all.
Industry: information technology & services
Employees: 99
Funding stage: Series C
Total funding: $105.0M
Pricing found: $15, $0.06/GB, $5/month, $50/month
Here's why Sora is being taken down
The $5.4 Billion Mirage: The Brutal Economics Behind the Sora Shutdown

We often ask "Why?" when a platform as revolutionary as Sora begins to aggressively scale back or restrict its features. But to find the answer, we must stop looking at the technology and start looking at the balance sheet. OpenAI is no longer just a research laboratory; it is a massive corporate machine navigating an unprecedented cash burn. Behind every "innovation" and every "downgrade" stands a team of financial experts and risk assessors whose job is to determine whether a project is viable. The reality of Sora is not a technical failure; it is a brutal collision between bleeding-edge ambition and the cold, hard laws of unit economics. Here is the factual breakdown of why Sora hit a wall.

The Staggering Cost of Compute: A $15 Million Daily Burn

People wonder why ChatGPT is so expensive to run compared to other platforms, but AI video generation is on an entirely different spectrum of cost. The compute required to generate high-fidelity video is an absolute resource sinkhole.

* The per-video cost: Analysts at financial firm Cantor Fitzgerald estimated that generating a single 10-second Sora clip costs OpenAI approximately $1.30 in pure computing power (requiring roughly 40 minutes of total GPU time).
* The annual deficit: Extrapolating this to millions of users, Forbes estimated that operating the Sora infrastructure was burning through roughly $15 million every single day. That translates to an annualized cost of over $5.4 billion for a single product.
* The subscription flaw: Even hidden behind a $200/month "Pro" paywall, the math fails. If a power user generates just 20 videos a day, they cost the company over $700 a month in server compute. No consumer subscription model currently makes this viable without OpenAI losing money on every generation.
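A quick sanity check of the arithmetic above. All input figures are the post's cited estimates (Cantor Fitzgerald, Forbes), not my own; the script only verifies that they are internally consistent.

```python
# Inputs taken from the estimates quoted in the post.
COST_PER_CLIP = 1.30        # USD per 10-second clip (estimate)
DAILY_BURN = 15_000_000     # USD per day across all users (estimate)
PRO_PRICE = 200             # USD per month for the "Pro" tier
CLIPS_PER_DAY = 20          # the post's definition of a "power user"

annualized_burn = DAILY_BURN * 365                       # daily burn -> yearly
power_user_monthly_cost = COST_PER_CLIP * CLIPS_PER_DAY * 30  # 30-day month

print(f"Annualized burn: ${annualized_burn:,}")          # $5,475,000,000
print(f"Power-user compute: ${power_user_monthly_cost:.0f}/month "
      f"vs a ${PRO_PRICE} subscription")
```

The numbers line up: $15M/day annualizes to about $5.5B, and a 20-clip-per-day user costs roughly $780/month in compute against a $200 subscription.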
The "30 to 10" Sacrifice: A Move for the Fans That Backfired

The decision to heavily restrict daily generation limits and reduce video duration to 10 seconds wasn't a creative glitch; it was a tactical sacrifice made for the community. Faced with "completely unsustainable" economics, OpenAI tried to stretch its server capacity so the general fan base could still experience the platform.

However, the strategy was immediately exploited. The moment access was granted, the number of "alt" accounts (secondary accounts used to bypass limits) exploded. Users essentially siphoned compute faster than the servers could process it. OpenAI's financial team had to step in: the choice was either to shut it down or watch the company bleed billions.

The Macro Financial Crisis of AI

To understand Sora's fate, you have to look at OpenAI's broader financial picture. Despite generating massive revenue, the company is operating at a historic deficit.

* In 2024, reports indicated OpenAI lost roughly $5 billion.
* By the first half of 2025, despite revenues soaring past $4.3 billion, its net loss widened to a jaw-dropping $13.5 billion, largely driven by the colossal cost of training and running these advanced models.

Sora, as incredible as it is, was the most expensive drain on an already bleeding balance sheet.

The Legal and Ethical Minefield

Beyond the catastrophic server costs, there is the immediate threat of litigation. The rumors involving deepfakes and the unauthorized use of notorious or deceased individuals' likenesses have created a liability nightmare. OpenAI's legal and financial experts know the score: "Take this down now, or we face copyright and defamation lawsuits with zero chance of winning." In a world of strict intellectual property laws, a platform heavily used for "meme culture" is a legal ticking time bomb.

The Industry Proof: Look at Google Veo

If you doubt the economic severity of this issue, look at the rest of the market.
Google possesses one of the largest and most advanced server infrastructures on the planet. Yet even Google heavily restricts its state-of-the-art video model, Veo 3. If you pay for the Google AI Pro tier, you are limited to a mere 3 generations per 24 hours in the Gemini app. These are short clips with virtually no advanced editing features. Why? Because even a multi-trillion-dollar giant like Google cannot absorb the energy and compute costs of unlimited AI video generation.

Conclusion: A Masterpiece Ahead of Its Economy

OpenAI likely intended Sora to be a high-end professional tool for enterprise advertising and marketing companies. Instead, the promotional rollout turned it into a consumer meme platform. When you combine $1.30 per-video generation costs, billions in annual burn, and the constant threat of lawsuits, the corporate mandate becomes obvious. The technology is God-tier, but our current hardware and economic models simply cannot support it. Welcome to the real world.

submitted by /u/Either-Ad-5185
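The per-day caps described above are, mechanically, fixed-window rate limits. A minimal sketch of that mechanism under stated assumptions (this is a hypothetical illustration, not Google's or OpenAI's implementation; real services track quotas per user in a datastore):

```python
import time

class DailyQuota:
    """Fixed-window limiter: at most `limit` generations per 24-hour window."""

    WINDOW = 24 * 60 * 60  # window length in seconds

    def __init__(self, limit=3, clock=time.time):
        self.limit = limit
        self.clock = clock          # injectable clock, handy for testing
        self.window_start = None    # start time of the current window
        self.used = 0               # generations consumed in this window

    def try_generate(self):
        """Return True and consume quota, or False if the cap is hit."""
        now = self.clock()
        if self.window_start is None or now - self.window_start >= self.WINDOW:
            self.window_start, self.used = now, 0   # open a fresh 24h window
        if self.used >= self.limit:
            return False                            # quota exhausted
        self.used += 1
        return True
```

With `limit=3`, the fourth call inside the same window returns `False`; once 24 hours elapse, the window resets and generations are allowed again.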
We're building features faster than ever. Not sure that's a good thing.
Before you come at me: I like Claude Code. My engineers like Claude Code. It's genuinely impressive for what it does. But here's what's actually happening on my team since we went all-in on it.

Engineers are now shipping features in hours that used to take days. Sounds great, right? Except nobody's spending more time on the UX just because the code takes less time. They're spending less. The AI generates functional code fast, but it has zero understanding of our product. It doesn't know our users. It doesn't know our design patterns. It doesn't know that we spent 3 months learning that our users hate modal-based flows.

So now instead of slow, considered features we get fast, generic ones, and the design review bottleneck hasn't improved. It's gotten worse, because now there's 3x the volume of stuff to review and half of it looks like it was designed by someone who's never seen our product before.

The fundamental problem is that these tools understand code but not product context. They can build a settings page in 20 minutes, but it'll look and feel like every other SaaS settings page on the planet. No awareness of your information architecture, your component library, your specific user mental models. Nothing. And I get it, that's not what Claude Code is for. It's a coding tool. But the downstream effect on product quality is something nobody's talking about. We've basically given the team a faster engine with no steering wheel.

What I've started doing is forcing a "context step" before any AI-assisted feature work: basically a doc that captures the product context, relevant design precedents, and user behavior patterns for that surface area. It helps, but it's manual and it doesn't scale. Been exploring some tools that try to ingest your actual product context (design system, existing flows, docs) and generate UX from that instead of from a blank slate. Early days, but the direction feels right.
Curious if other product teams are feeling this, or am I just bad at process and blaming the tools? submitted by /u/PlentyMedia34
I asked Claude how it would visualize itself as an AI agent. Then I built exactly what it described.
A few days ago, I asked Claude a curious question: "If you could choose a visual representation of yourself as an AI agent, working, evaluating input, 'thinking' (computing), how would you represent yourself and your fellow AI agents collaborating?"

Claude answered: "The interaction between celestial bodies in a universe is an excellent way to showcase it. Each agent is a planet: a massive entity that consumes energy, emits output, and exerts gravitational influence on its surroundings. Tasks orbit agents like moons. Data flows between agents as spaceships traversing curved arcs through space. And at the center of it all, a black hole: the singularity where completed work collapses, pulling everything toward it. This metaphor works because it scales naturally. One agent is a lonely planet. Five agents become a solar system. The visual language (size, color, orbital speed, surface type) encodes real information without requiring labels or dashboards."

So I built it. Event Horizon is a VS Code extension that renders your Claude Code sessions as a gas giant planet in a living universe. You can watch it grow larger as load increases, sprout moons as subagents spin up, and send ships to other agents when collaborating. There's a black hole at the center of everything; completed work collapses into it.

The extension supports Claude Code with one-click hook installation. OpenCode is also supported. Copilot and Cursor are coming.

GitHub: https://github.com/HeytalePazguato/event-horizon

I thought this community would appreciate the origin story. Has anyone else gotten surprisingly poetic answers from Claude about its own nature?

Demo video: https://reddit.com/link/1rpkcr1/video/5qjit40gl4og1/player

submitted by /u/HeytalePazguato
Yes, PlanetScale AI offers a free tier.
Key features include:
* Pre-filtering and post-filtering
* Full SQL syntax, including JOIN, WHERE, and subqueries
* ACID compliance
* High availability: 1 primary and 2 replicas across three AZs, with automated failover
* Branch-per-environment: each branch runs on its own dedicated cluster, priced prorated to the millisecond
* Built for large-scale Postgres workloads
* Designed for predictable performance and fault tolerance at scale
* Online MySQL and Vitess version updates
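"Priced prorated to the millisecond" means billing a cluster's monthly rate only for the milliseconds it actually ran. A hypothetical sketch of that arithmetic (the $50/month figure comes from the pricing list on this page; the formula and the 30-day month are my assumptions, not PlanetScale's documented billing logic):

```python
MS_PER_MONTH = 30 * 24 * 60 * 60 * 1000  # assumed 30-day billing month, in ms

def prorated_cost(monthly_rate_usd, uptime_ms):
    """Charge the monthly rate proportionally to actual uptime."""
    return monthly_rate_usd * uptime_ms / MS_PER_MONTH

# A short-lived branch cluster that runs for 6 hours at $50/month:
six_hours_ms = 6 * 60 * 60 * 1000
cost = prorated_cost(50, six_hours_ms)   # about $0.42
```

The practical upshot is that short-lived per-branch clusters cost cents, not a full month's rate.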
PlanetScale AI is commonly used for workloads that need unlimited IOPS, high availability (1 primary and 2 replicas across three AZs with automated failover), and predictable performance and fault tolerance at scale.