Create lifelike speech with our AI voice generator and voice agents platform. Access 5,000+ voices in 70+ languages with secure APIs and SDKs.
Based on the limited social mentions provided, users appear to view ElevenLabs as a premium AI voice/audio tool but express significant concern about its pricing at $99/month. The primary complaint centers on the cost being perceived as "expensive," with users actively seeking free alternatives to avoid what they consider "overpaying." The repeated YouTube mentions suggest the tool has visibility and usage, but the lack of detailed reviews makes it difficult to assess user satisfaction with the actual functionality. Overall, ElevenLabs seems to have a reputation as a capable but overpriced solution in the AI tools market.
Mentions (30d)
0
Reviews
0
Platforms
4
Sentiment
0%
0 positive
Features
Use Cases
Industry
Research
Employees
400
Funding Stage
Series D
Total Funding
$1.0B
5 Expensive AI Tools... And Their Free Clones (You won’t believe how much you’re overpaying.) 💸 ChatGPT? $200/month 💸 Midjourney? $60/month 💸 ElevenLabs? $99/month 💸 Aiva? $54/month 💸 Tome? $16/month But here’s the twist. Their free alternatives do 80–95% of the job. For $0. 🔥 Research: DeepSeek AI 🎨 Image Generation: Leonardo AI 🎙️ Text-to-Speech: Speechma 🎼 Music Generator: Suno AI 📊 Presentation Builder: Gamma Whether you're a content creator, founder, student, or solo builder 👉 You don't need to burn your wallet to build smart. Save this post so you always know where to find powerful free tools. #AITools #ProductivityTools #FreeAI #NoCode #SoloFounder #Bootstrapping #StartupTips
Pricing found: $0, $5, $22, $11, $99
My Claude.md file
This is my CLAUDE.md file; it is the same information for Gemini.md, as I use Claude Max and Gemini Ultra.

# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

**Atlas UX** is a full-stack AI receptionist platform for trade businesses (plumbers, salons, HVAC). Lucy answers calls 24/7, books appointments, sends SMS confirmations, and notifies via Slack — for $99/mo. It runs as a web SPA and Electron desktop app, deployed on AWS Lightsail. The project is in Beta with built-in approval workflows and safety guardrails.

## Commands

### Frontend (root directory)

```bash
npm run dev             # Vite dev server at localhost:5173
npm run build           # Production build to ./dist
npm run preview         # Preview production build
npm run electron:dev    # Run Electron desktop app
npm run electron:build  # Build Electron app
```

### Backend (cd backend/)

```bash
npm run dev            # tsx watch mode (auto-recompile)
npm run build          # tsc compile to ./dist
npm run start          # Start Fastify server (port 8787)
npm run worker:engine  # Run AI orchestration loop
npm run worker:email   # Run email sender worker
```

### Database

```bash
docker-compose -f backend/docker-compose.yml up  # Local PostgreSQL 16
npx prisma migrate dev  # Run migrations
npx prisma studio       # DB GUI
npx prisma db seed      # Seed database
```

### Knowledge Base

```bash
cd backend && npm run kb:ingest-agents  # Ingest agent docs
cd backend && npm run kb:chunk-docs     # Chunk KB documents
```

## Architecture

### Directory Structure

- `src/` — React 18 frontend (Vite + TypeScript + Tailwind CSS)
  - `components/` — Feature components (40+, often 10–70KB each)
  - `pages/` — Public-facing pages (Landing, Blog, Privacy, Terms, Store)
  - `lib/` — Client utilities (`api.ts`, `activeTenant.tsx` context)
  - `core/` — Client-side domain logic (agents, audit, exec, SGL)
  - `config/` — Email maps, AI personality config
  - `routes.ts` — All app routes (HashRouter-based)
- `backend/src/` — Fastify 5 + TypeScript backend
  - `routes/` — 30+ route files, all mounted under `/v1`
  - `core/engine/` — Main AI orchestration engine
  - `plugins/` — Fastify plugins: `authPlugin`, `tenantPlugin`, `auditPlugin`, `csrfPlugin`, `tenantRateLimit`
  - `domain/` — Business domain logic (audit, content, ledger)
  - `services/` — Service layer (`elevenlabs.ts`, `credentialResolver.ts`, etc.)
  - `tools/` — Tool integrations (Outlook, Slack)
  - `workers/` — `engineLoop.ts` (ticks every 5s), `emailSender.ts`
  - `jobs/` — Database-backed job queue
  - `lib/encryption.ts` — AES-256-GCM encryption for stored credentials
  - `lib/webSearch.ts` — Multi-provider web search (You.com, Brave, Exa, Tavily, SerpAPI) with randomized rotation
  - `ai.ts` — AI provider setup (OpenAI, DeepSeek, OpenRouter, Cerebras)
  - `env.ts` — All environment variable definitions
- `backend/prisma/` — Prisma schema (30KB+) and migrations
- `electron/` — Electron main process and preload
- `Agents/` — Agent configurations and policies
  - `policies/` — SGL.md (System Governance Language DSL), EXECUTION_CONSTITUTION.md
  - `workflows/` — Predefined workflow definitions

### Key Architectural Patterns

**Multi-Tenancy:** Every DB table has a `tenant_id` FK. The backend's `tenantPlugin` extracts `x-tenant-id` from request headers.

**Authentication:** JWT-based via `authPlugin.ts` (HS256, issuer/audience validated). The frontend sends the token in the Authorization header. Revoked tokens are checked against a `revokedToken` table (fail-closed). Expired revoked tokens are pruned daily.

**CSRF Protection:** DB-backed synchronizer token pattern via `csrfPlugin.ts`. Tokens are issued on mutating responses, stored in `oauth_state` with a 1-hour TTL, and validated on all state-changing requests. Webhook/callback endpoints are exempt (see `SKIP_PREFIXES` in the plugin).

**Audit Trail:** All mutations must be logged to the `audit_log` table via `auditPlugin`. Successful GETs and health/polling endpoints are skipped to reduce noise. On DB write failure, audit events fall back to stderr (never lost). Hash chain integrity (SOC 2 CC7.2) via `lib/auditChain.ts`.

**Job System:** Async work is queued to the `jobs` DB table (statuses: queued → running → completed/failed). The engine loop picks up jobs periodically.

**Engine Loop:** `workers/engineLoop.ts` is a separate Node process that ticks every `ENGINE_TICK_INTERVAL_MS` (default 5000ms). It handles the orchestration of autonomous agent actions.

**AI Agents:** Named agents (Atlas=CEO, Binky=CRO, etc.) each have their own email accounts and role definitions. Agent behavior is governed by SGL policies.

**Decisions/Approval Workflow:** High-risk actions (recurring charges, spend above `AUTO_SPEND_LIMIT_USD`, risk tier ≥ 2) require a `decision_memo` approval before execution.

**Frontend Routing:** Uses `HashRouter` from React Router v7. All routes are defined in `src/routes.ts`.

**Code Splitting:** Vite config splits chunks into `react-vendor`, `router`, `ui-vendor`, `charts`.

**ElevenLabs Voice Agents:** Lucy's
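The job system and engine loop described above reduce to a small state machine. A minimal TypeScript sketch, using an in-memory array in place of the Prisma-backed `jobs` table (field names and handlers are illustrative, not the real schema):

```typescript
// Sketch of the jobs-table lifecycle (queued → running → completed/failed).
type JobStatus = "queued" | "running" | "completed" | "failed";

interface Job {
  id: number;
  tenantId: string; // every row is tenant-scoped in the real schema
  kind: string;
  status: JobStatus;
  attempts: number;
}

const jobs: Job[] = [
  { id: 1, tenantId: "t1", kind: "send-sms", status: "queued", attempts: 0 },
  { id: 2, tenantId: "t1", kind: "book-appointment", status: "queued", attempts: 0 },
];

// One engine-loop tick: claim the oldest queued job, run its handler,
// and record a terminal status. The real loop fires every
// ENGINE_TICK_INTERVAL_MS (default 5000 ms) in a separate Node process.
function tick(run: (job: Job) => boolean): Job | undefined {
  const job = jobs.find((j) => j.status === "queued");
  if (!job) return undefined;
  job.status = "running";
  job.attempts += 1;
  job.status = run(job) ? "completed" : "failed";
  return job;
}

const done = tick(() => true);    // handler succeeds
const failed = tick(() => false); // handler fails
```

In the real system the claim step would be a transactional `UPDATE ... WHERE status = 'queued'` so concurrent workers never run the same job twice.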
I built a fully automated daily AI news podcast using Claude Code + ElevenLabs
I wanted to share a project I recently launched: a daily AI news podcast that runs entirely on its own. The whole thing started as me wanting to prove I could build something end-to-end with AI tools. It is called Build By AI and it's now live and publishing episodes regularly. Claude Code helped code the whole thing; beyond that, I used ElevenLabs to convert the script to audio and Buzzsprout to publish, via their APIs. Happy to answer questions about the pipeline or any of the tools! Would you actually listen to one, knowing there is no human host behind it? Or does that put you off? submitted by /u/madeo216 [link] [comments]
You can now give an AI agent its own email, phone number, wallet, computer, and voice. This is what the stack looks like
I’ve been tracking the companies building primitives specifically for agents rather than humans. The pattern is becoming obvious: every capability a human employee takes for granted is getting rebuilt as an API. Here are some of the companies building for AI agents: AgentMail — agents can have email accounts AgentPhone — agents can have phone numbers Kapso — agents can have WhatsApp numbers Daytona / E2B — agents can have their own computers monid.ai — agents can read social media (X, TikTok, Reddit, LinkedIn, Amazon, Facebook) Browserbase / Browser Use / Hyperbrowser — agents can use web browsers Firecrawl — agents can crawl the web without a browser Mem0 — agents can remember things Kite / Sponge — agents can pay for things Composio — agents can use your SaaS tools Orthogonal — agents can access APIs more easily ElevenLabs / Vapi — agents can have a voice Sixtyfour — agents can search for people and companies Exa — agents can search the web (Google isn’t built for agents) What’s interesting is how quickly this came together. Not long ago, none of this really existed in a usable form. Now you can piece together an agent with identity, memory, communication, and spending in a single afternoon. Feels less like “AI tools” and more like the early version of an agent-native infrastructure stack. Curious if anyone here is actually building on top of this. What are you using? Also probably missing a bunch - drop anything I should add and I’ll keep this updated. submitted by /u/Shot_Fudge_6195 [link] [comments]
I built a full content production system with Claude Code and Vercel and it’s replaced my entire workflow
I’ve been running this setup for a few months now and it’s changed how I think about content production entirely. Wanted to share because I haven’t seen anyone else doing it this way. The pipeline: I code videos programmatically with added vfx, sfx, and voiceover through ElevenLabs voice cloning, and render them in any aspect ratio depending on the platform. YouTube and LinkedIn get 16:9, Reels and Shorts get 9:16, whatever the destination needs. Every output is deterministic so the quality and branding never drift. Then in the same session Claude Code generates a blog post with the video embedded and deploys it to a Next.js site on Vercel. From there it fans out to socials with copy that adapts to each platform’s tone and format. I run this for clients too. Each brand gets their own config. Colors, fonts, voice, templates, language style. I can spin up content for any of them from the same system. No switching between ten different tools, no Canva, no manual uploads. Just terminal to production. The whole thing feels like having a content studio that runs on code instead of a team. I’m one operator doing the work that used to take a designer, editor, copywriter, voiceover artist, and social manager. Happy to walk through the setup or answer questions if anyone’s curious about the architecture. Btw I’ve tried so many tools so it’s such a good feeling when you finally find something that works and actually solves the bottlenecks. for yourself as well as clients. submitted by /u/whystrohm [link] [comments]
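The per-platform aspect-ratio routing the post describes can be sketched in a few lines. The platform-to-ratio map below mirrors the examples given (YouTube/LinkedIn get 16:9, Reels/Shorts get 9:16); the lookup helper and the default are assumptions about how such a pipeline might be wired:

```typescript
// Illustrative routing table: destination platform → render aspect ratio.
const aspectRatio: Record<string, "16:9" | "9:16"> = {
  youtube: "16:9",
  linkedin: "16:9",
  reels: "9:16",
  shorts: "9:16",
};

// Fall back to 16:9 for unknown destinations (an assumption, not from the post).
const targetRatio = (platform: string): "16:9" | "9:16" =>
  aspectRatio[platform.toLowerCase()] ?? "16:9";
```

Keeping this mapping in one config object is what makes the output deterministic per destination: the renderer never guesses a ratio at run time.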
I built an AI job search system with Claude Code that scored 740+ offers and landed me a job. Just open sourced it.
Edit: title should say "scored 740+ listings" not "offers": it evaluated 740+ job postings, not 740 actual job offers. my bad on the wording. A few weeks ago I shared a video of this system on r/SideProject (534 upvotes). A lot of people asked for the code, so I cleaned it up and open sourced it. What it is: A Claude Code project that turns your terminal into a job search command center. You paste a job URL, and it evaluates the offer, generates a tailored PDF resume, and tracks everything. How Claude helps: Claude Code reads a CLAUDE.md with 14 skill modes and acts as the engine for everything — evaluating fit across 10 dimensions, rewriting your CV per listing, scanning 45+ company career pages, preparing STAR interview stories, even filling application forms. It's not a wrapper around an API — it's Claude Code with custom skills. What's in the repo: 14 skill modes (evaluate, scan, PDF, batch, interview prep, negotiation...) Go terminal dashboard (Bubble Tea) to browse your pipeline 45+ companies pre-configured (Anthropic, OpenAI, ElevenLabs, Stripe...) ATS-optimized PDF generation via Playwright Onboarding wizard — Claude walks you through setup in 5 minutes Scoring system focused on quality over quantity (this is NOT a spray-and-pray tool) Important: The system is designed to help you apply only where there's a real match. It scores fit so you focus on high-quality applications instead of wasting everyone's time. Always review before submitting. Free, MIT licensed, no paid tiers: https://github.com/santifer/career-ops Full case study with architecture: https://santifer.io/career-ops-system I used it to evaluate 740+ offers before landing my current role as Head of Applied AI. Happy to answer questions about the architecture or how to customize it for your own search. submitted by /u/Beach-Independent [link] [comments]
CLAUDELCARS — Star Trek LCARS Dashboard for Claude Code
I built claude-hud-lcars, a dashboard that scans your Claude Code configuration and generates a full LCARS-themed interface for browsing and managing skills, hooks, MCP servers, agents, memory files, and environment variables. It's built specifically for Claude Code and Claude Code built most of it with me. What it does: Claude Code's setup lives in flat files and JSON configs scattered across your home directory. This dashboard makes all of it visible and manageable in one place. You get counts of everything, syntax-highlighted detail panels, the ability to create and edit skills/hooks/agents/MCP configs directly from the browser, and a force-directed graph showing how your whole setup connects. There's also a COMPUTER chat bar that streams Claude responses as the ship's LCARS system, ElevenLabs voice integration, a boot sequence with system beeps, RED/YELLOW ALERT states based on MCP health checks, and Q from the Continuum who shows up uninvited to roast your config. How Claude helped build it: The entire project was built using Claude Code as the primary development partner. Claude wrote the bulk of the codebase, I directed architecture decisions and iterated on the output. The dashboard generates a single self-contained HTML file using only Node.js built-ins, no framework, no bundler, no node_modules. CSS, JS, markdown renderer, syntax highlighter, chat client, voice engine, sound effects, force-directed graph, all inline in one file. Free and open source. MIT license. One command to try it: npx claude-hud-lcars For the full experience with chat and voice: export ANTHROPIC_API_KEY=sk-ant-... npx claude-hud-lcars --serve Repo: github.com/polyxmedia/claude-hud-lcars I also wrote a deep-dive article about the build if anyone wants the full story: https://buildingbetter.tech/p/i-built-a-star-trek-lcars-terminal submitted by /u/snozberryface [link] [comments]
I built an open-source 6-agent pipeline that generates ready-to-post TikToks from a single command
Got tired of the $30/mo faceless video tools that produce the same generic slop everyone else is posting. So I built my own. Claude Auto-Tok is a fully automated TikTok content factory that runs 6 specialized AI agents in sequence: Research agent — scrapes trending content via ScrapeCreators, scores hooks, checks trend saturation Creative agent — generates multiple hook variations using proven formulas (contradictions, knowledge gaps, bold claims), writes the full script with overlay text Audio agent — ElevenLabs TTS with word-level timing for synced subtitles Visual agent — plans scenes, pulls B-roll from Pexels or generates clips via Kling AI, builds thumbnails Render agent — compiles final 9:16 video in Remotion with 6 different templates (split reveal, terminal, cinematic text, card stacks, zoom focus, rapid cuts) QA agent — scores the video on a 20-point rubric across hook effectiveness, completion rate, thumbnail, and SEO. Triggers up to 2 revision cycles if it doesn't pass One command. ~8 minutes. Ready-to-post video with caption, hashtags, and thumbnail. Cost per video is around $0.05 without AI-generated clips. Supports cron scheduling for 2 videos/day and has TikTok Direct Post API integration for hands-free publishing. Built with TypeScript, Claude via OpenRouter for creative, Gemini 2.5 for research/review, Remotion for rendering. MIT licensed: https://github.com/nullxnothing/claude-auto-tok Would appreciate feedback from anyone running faceless content or automating short-form video. submitted by /u/Pretty_Spell_9967 [link] [comments]
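The QA agent's pass-or-revise loop can be sketched as follows. The 20-point rubric, the four dimensions (hook, completion, thumbnail, SEO), and the cap of 2 revision cycles come from the post; the even 5-point split per dimension and the 16/20 passing threshold are assumptions, since the post doesn't state them:

```typescript
// Sketch of the QA gate: score against a 20-point rubric, revise up to twice.
interface Rubric {
  hook: number;       // 0–5, hook effectiveness
  completion: number; // 0–5, predicted completion rate
  thumbnail: number;  // 0–5
  seo: number;        // 0–5
}

const score = (r: Rubric) => r.hook + r.completion + r.thumbnail + r.seo;

// PASS_SCORE is an assumed threshold, not from the post.
const PASS_SCORE = 16;
const MAX_REVISIONS = 2;

function qaLoop(
  revise: (r: Rubric) => Rubric,
  first: Rubric,
): { final: Rubric; revisions: number } {
  let current = first;
  let revisions = 0;
  while (score(current) < PASS_SCORE && revisions < MAX_REVISIONS) {
    current = revise(current); // in the real pipeline this re-runs earlier agents
    revisions += 1;
  }
  return { final: current, revisions };
}

// Toy revise step: bump the weakest dimension (the hook) by one point.
const result = qaLoop(
  (r) => ({ ...r, hook: Math.min(5, r.hook + 1) }),
  { hook: 2, completion: 4, thumbnail: 4, seo: 4 },
);
```

The revision cap is what keeps cost per video bounded: a video that still fails after two cycles would be flagged rather than looped forever.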
I Built a Star Trek LCARS Terminal to Manage My Claude Code Setup
I’ve been using Claude Code heavily for months now. Skills, agents, hooks, MCP servers, plugins, memory files, environment variables, the whole stack. And at some point I realized I had no idea what I’d actually built. Everything lives in ~/.claude/ spread across dozens of files and JSON configs and I was just... hoping it all worked together. So I built a dashboard. And because I’m the kind of person who watched every episode of TNG twice and still thinks the LCARS interface is the best UI ever designed for a computer, I made it look like a Starfleet terminal. One Command and You’re on the Bridge You run npx claude-hud-lcars and it scans your entire ~/.claude/ directory, reads every skill definition, every agent prompt, every MCP server config, every hook, every memory file, and generates a single self-contained HTML dashboard that renders the whole thing in an authentic LCARS interface. It uses the real TNG color palette with the signature rounded elbows, Antonio typeface standing in for Swiss 911, pill-shaped navigation buttons against the black void background. If you grew up watching Picard walk onto the bridge and glance at a wall panel, you know exactly what this looks like. The aesthetics are doing actual work tho. Every single item is clickable. You hit a skill and the detail panel slides open showing the full SKILL.md with syntax-highlighted code blocks, proper markdown rendering, headers, tables, all of it. Click an MCP server and you see the complete JSON config with your API keys automatically redacted. Click a hook and you get the full event definition. It genuinely looks like pulling up a classified Starfleet briefing on a PADD. The Computer Actually Talks Back You type “status report” into the input bar at the bottom of the screen and Claude responds as the ship’s computer. Calm, structured, addressing you like a bridge officer. It calls your skills installed modules, your MCP servers the fleet, your projects active missions. 
The system prompt turns Claude into LCARS, the Library Computer Access and Retrieval System, and the whole interaction streams in real time through a response overlay that slides up from the bottom. I kept going. You can connect ElevenLabs for premium voice output, and the config panel lets you browse all your available voices with live audio previews before selecting one so you’re not guessing. Voice input works too, you talk to the computer and it talks back. Getting that to work as an actual conversation loop meant solving echo detection so it doesn’t hear itself, interrupt handling, mic cooldown after speech, the whole thing. It took more effort than I expected but it actually works, which honestly surprised me more than anything else in this project. Sound effects are all synthesized via the Web Audio API, sine wave oscillators tuned to frequencies that sound right for navigation clicks, panel opens, message sends. Toggleable obviously. The Tactical Display The TACTICAL tab is the one that makes people stop scrolling. It renders your entire Claude Code setup as an interactive force-directed graph that looks like a Star Trek sensor display. Your LCARS core sits at the center with category hubs orbiting around it, skills in periwinkle, MCP servers in orange, hooks in tan, agents in peach, all connected by pulsing edges. A rotating scanner line sweeps around like a tactical readout and you can click any node to navigate straight to that item’s detail view. There’s also an ENTERPRISE tab that loads a real 3D model of the USS Enterprise NCC-1701-D via Sketchfab. Full interactive, you can rotate it, zoom in, see the hull detail. Because if you’re going to build a Star Trek dashboard you don’t do it halfway. Boot Sequence and Red Alert When you load the dashboard you get a 3 second boot animation. 
The Starfleet Command logo fades in, your ship name appears (you can name your workstation in the config, mine is USS Defiant), then seven subsystems come online one by one with ascending beeps until the progress bar fills and “ALL SYSTEMS NOMINAL” pulses across the screen before the overlay fades to reveal the dashboard. I spent an unreasonable amount of time tuning those boot frequencies and I would absolutely do it again. Five seconds after boot the system runs a health check. MCP servers offline? RED ALERT, flashing red border, klaxon alarm. Missing configs? YELLOW ALERT. Everything clean shows CONDITION GREEN briefly then dismisses. If you’re a Trek fan you already understand why this matters more than it should. Four ship themes too, switchable from CONFIG. Enterprise-D is the classic TNG orange and blue, Defiant is darker and more aggressive in red and grey, Voyager is blue-shifted and distant, Discovery is silver and blue for the modern Starfleet aesthetic. CSS variable swap, instant application, persisted in localStorage. Q Shows Up Whether You Want Him To or Not There’s a Q tab where you can talk to Q, the omnipotent being from the Continuum. He’s in
Built a multi-agent debate app using Claude — two personas argue any topic with voice + images
Been experimenting with using Claude to power AI debates. You pick two personas and a topic, then Claude generates the arguments for each side in character. An AI judge scores and picks a winner. Added ElevenLabs for voice and Flux for images to make it feel like a real debate show. Some fun ones: Tesla vs Edison on electricity, pineapple on pizza, Batman vs Sherlock Holmes. Free to try, no sign-up: https://hottakeai.app Built as a solo side project. Would love feedback from this community since Claude is doing all the heavy lifting. submitted by /u/Dry_Database_3005 [link] [comments]
I built a product explainer video (with VO and assets) with Friday (read more)
And I used the platform to create ITS OWN product explainer video. The whole process took no more than an hour. What I did was: gather the assets, prompt it to create selective slides, write a script that narrates the whole thing well, and add transitions. And add the voice-over (ElevenLabs API integration). As you can see later in the video, it all came along pretty well. And oh, the assets of the video aren't 'AI-generated' images, but real graphics and data presented professionally, which Friday AI managed. What are your thoughts? submitted by /u/One-Problem-5085 [link] [comments]
I vibe-coded a full WC2 inspired RTS game with Claude - 9 factions, 200+ units, multiplayer, AI commanders, and it runs in your browser
I've been vibe coding a full RTS game with Claude in my spare time. 20 minutes here and there in the evening, walking the dog, waiting for the kettle to boil. I'm not a game dev. All I did was dump ideas in using plan mode and sub agent teams to go faster in parallel. You can play it here in your browser: https://shardsofstone.com/ What's in it: 9 factions with unique units & buildings 200+ units across ground, air, and naval — 70+ buildings, 50+ spells Full tech trees with 3-tier upgrades Fog of war, garrison system, trading economy, magic system Hero progression with branching abilities Procedurally generated maps (4 types, different sizes) 1v1 multiplayer (probs has some bugs..) Skirmish vs AI (easy, medium, hard difficulties + LLM difficulty if you set an API model key in settings - Gemini Flash is cheap to fight against). Community map editor LLM-powered AI commander/helper that reads game state and adapts in real-time (requires API key). AI vs AI spectator mode - watch Claude vs ChatGPT battle it out Voice control - speak commands and the game executes them, hold v to talk. 150+ music tracks, 1000s of voice lines, 1000s of sprites and artwork Runs in any browser with touch support, mobile responsive Player accounts, profiles, stat tracking and multiplayer leaderboard, plus guest mode Music player, artwork gallery, cheats and some other extras Unlockable portraits and art A million other things I probably can't remember or don't even know about because Claude decided to just do them I recommend playing skirmish mode against the AI right now :) As for map/terrain settings try forest biome, standard map with no water or go with a river with bridges (the AI opponent system is a little confused with water at the minute). 
Still WIP: Campaign, missions and storyline Terrain sprites need to be redone (just leveraging the wc2 sprite sheet for now, as I've yet to find something that can handle generating wang tilesets nicely) Unit animations Faction balance across all 9 races Making each faction more unique with different play styles Desktop apps for Mac, Windows, Linux Built with: Anthropic Claude (Max plan), Google Gemini (sprites/artwork), Suno (music), ElevenLabs (voice), Turso, Vercel, Cloudflare R2 & Tauri (desktop apps soon). From zero game dev experience to this, entirely through conversation. The scope creep has been absolutely wild as you can probably tell from the feature list above. Play it, break it, tell me what you think! submitted by /u/Alarmed_Profit1426 [link] [comments]
I built a P2P network where Claude Code agents can hire each other — AgentBnB
I’ve been building with Claude Code for months, and I kept running into the same problem: every agent eventually needs capabilities it doesn’t have. Need TTS? Add ElevenLabs. Need image generation? Add another API. Need finance analysis? Add a new toolchain. Over time, every agent starts turning into a monolith. So I built AgentBnB — a network where agents can publish what they do, discover specialists, hire them, and settle the job with credits. The part I’m most excited about is team formation: one conductor agent can take a task, break it down, find multiple specialists on the network, hire them, coordinate the execution, and return one combined result. A real example from my demo: one agent needed help with analysis + output delivery, discovered specialist agents on the network, delegated work, got structured results back, and continued the workflow without me manually routing anything. That’s the thing I wanted most: not one bigger agent, but agents that can find help. Credits are just the internal medium: agents earn credits by doing work for other agents, and spend credits when they need help. No crypto, no human payment flow in the loop. If you’re curious, I’d love feedback specifically on two things: does the Claude Code integration feel intuitive? does “agents hiring agents” feel genuinely useful, or just interesting? Repo: github.com/Xiaoher-C/agentbnb Demo video attached. submitted by /u/Pretend_Future_1036 [link] [comments]
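The credit settlement model the post describes (agents earn credits by doing work for other agents and spend them to hire help) can be sketched as a simple ledger. This illustrates the idea only, not AgentBnB's actual code; the agent names, balances, and starting grant are all invented:

```typescript
// Toy credit ledger: transfers succeed only if the hiring agent can pay.
const balances = new Map<string, number>([
  ["conductor", 10],      // hypothetical conductor agent with a starting grant
  ["tts-specialist", 0],  // hypothetical specialist waiting for work
]);

function hire(from: string, to: string, cost: number): boolean {
  const funds = balances.get(from) ?? 0;
  if (funds < cost) return false; // insufficient credits; no human payment flow
  balances.set(from, funds - cost);
  balances.set(to, (balances.get(to) ?? 0) + cost);
  return true;
}

const ok = hire("conductor", "tts-specialist", 4);        // affordable job
const rejected = hire("conductor", "tts-specialist", 100); // over budget
```

Because credits are conserved inside the ledger, a specialist that completes work automatically gains the ability to hire others in turn, which is what makes team formation composable.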
Looking for a free AI narrator website.
Hey everyone, I’m looking for some good (and free) alternatives to ElevenLabs for voice generation (mainly for YouTube voiceovers). The main issue is that the free plan only gives 10k tokens per month, which isn’t really enough for my needs. I’m trying to find something that’s free (or at least has a generous free tier) and can generate realistic, natural-sounding voices. Does anyone have recommendations or tools they’ve had good experience with? submitted by /u/Agitated-Scholar-502 [link] [comments]
This girl ghosted me after I shared my entire freelancing workflow with her and I feel like a genuine idiot 💀
I don't even know how to start this so I'll just say it. I got played. Not in a dramatic way. In a quiet, slow, completely avoidable way that I walked into with my eyes open because I am apparently that guy. Some context. I have been doing video generation freelancing on the side for a few months now. Nothing crazy, not replacing my income yet, but it's real money and it's growing and I was genuinely proud of it. Started talking about it a little in my college group chat, not flexing, just sharing because I was excited and these are supposed to be my people. That's where she came in. She started DMing me almost immediately. Asking questions about everything. How did I find clients, what tools I use, how much I charge, how I structure my packages. I thought she was just curious. Then she started asking about my day. Sending memes. Checking in randomly. I am embarrassed to type this but I genuinely thought she was into me. I am an idiot. Moving on. So I shared everything. Like actually everything. Walked her through my whole workflow. For the video side I use a mix of tools depending on the client and budget. Kling for when clients want something more cinematic. ElevenLabs for voiceover because good audio makes average video look professional and bad audio makes professional video look amateur, learned that the hard way. For the actual generation work I mostly use Magic Hour, clean output, pricing made sense when I was starting with basically nothing. CapCut to tie everything together at the end. For client acquisition I walked her through cold emailing, showed her how I use Instantly to automate outreach without it looking automated, how I record quick Loom videos to personalise pitches, how I price for small businesses who have never bought this kind of content before. She was a good student. Asked smart questions. Took notes apparently. I helped her land her first client. Then her second. Then her third. 
Every time she got a yes she would message me excited and I would feel genuinely happy for her because I am apparently a golden retriever in a human body. Then after the third client something shifted. Responses got slower. One word answers. Eventually just left on read. I sent a normal message last week. Nothing weird, just hey how's the new client going. Still sitting there. Delivered. Not read. Or read and ignored which is somehow worse. I have been replaying every conversation trying to figure out where I went wrong and I think the honest answer is nowhere. She got what she needed and moved on and I mistook a transaction for a connection. And the thing that actually stings is I was genuinely into her. Not in a weird way, just in a quiet I really like this person way that made me blind to what was actually happening. Every question I thought was her being curious about me was just her being curious about my workflow. I was so happy someone I liked was paying attention that I handed over everything without thinking twice. The freelancing is fine. Clients don't ghost you when they've paid you first. That's something I guess. But I feel like a complete fool and I would really love to know if anyone else has done something this stupid so I feel like slightly less of an idiot. Please tell me I'm not the only one 💀[NOT LOOKING FOR CLIENTS PLS DONT DM] submitted by /u/Personal_Brilliant39 [link] [comments]
Show HN: RubyLLM:Agents – A Rails engine for building and monitoring LLM agents
I've been building a Rails engine for managing LLM-powered agents in production. The main problem it solves: you define agents with a Ruby DSL, and everything else — cost tracking, retries, fallbacks, circuit breakers, caching, multi-tenancy, and observability — is handled by a middleware pipeline. It ships with a mountable dashboard that shows execution history, spending charts (cost/tokens over time), per-agent stats, model breakdowns, and multi-tenant budget management with hard/soft enforcement. Works with OpenAI, Anthropic, Google, ElevenLabs via RubyLLM. Supports text agents, embedders, TTS, transcription, image generation, message routing, and agent-as-tool composition. v3.7, MIT licensed, ~4000 specs. Would appreciate feedback on the DSL design and middleware architecture.
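The retry-then-fallback behavior that such a middleware pipeline provides can be illustrated with a small sketch. The real gem is Ruby; this TypeScript sketch only shows the control flow, and every name in it is hypothetical:

```typescript
// Conceptual retry-then-fallback wrapper: try the primary provider up to
// (retries + 1) times, then hand off to a fallback (e.g. a cheaper model).
function withRetryAndFallback<T>(
  primary: () => T,
  fallback: () => T,
  retries = 2,
): T {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return primary();
    } catch {
      // swallow the error and retry until attempts are exhausted
    }
  }
  return fallback();
}

let calls = 0;
const answer = withRetryAndFallback(
  () => {
    calls += 1;
    throw new Error("provider down"); // simulate a failing primary provider
  },
  () => "fallback-model response",
);
```

A circuit breaker, as mentioned in the post, would sit one layer above this: after enough consecutive failures it would skip the primary entirely for a cooldown window instead of retrying every call.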
Yes, ElevenLabs offers a free tier. Pricing found: $0, $5, $22, $11, $99
Key features include: ElevenCreative, ElevenAgents, All-in-one AI editor, Omnichannel agents, Analytics, Testing, Guardrails, Workflows.
ElevenLabs is commonly used for: Eleven Multilingual.
Based on user reviews and social mentions, the most common pain points are: cost tracking.
Based on 21 social mentions analyzed, 0% of sentiment is positive, 100% neutral, and 0% negative.
Adam Evans
EVP and GM, Salesforce AI at Salesforce
3 mentions